Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
active-directory | Application Proxy Release Version History | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-release-version-history.md | Here is a list of related resources: | Understand Azure AD Application Proxy connectors | Find out more about [connector management](application-proxy-connectors.md) and how connectors [auto-upgrade](application-proxy-connectors.md#automatic-updates). | | Azure AD Application Proxy Connector Download | [Download the latest connector](https://download.msappproxy.net/subscription/d3c8b69d-6bf7-42be-a529-3fe9c2e70c90/connector/download). | +## 1.5.3437.0 ++### Release status ++June 20, 2023: Released for download. This version is available for installation only via the download page. ++### New features and improvements ++- Support for Microsoft Entra Private Access. +- Updated "Third-Party Notices". ++### Fixed issues +- Silent registration of the connector with credentials. See [Create an unattended installation script for the Azure Active Directory Application Proxy connector](application-proxy-register-connector-powershell.md) for more details. +- Fixed dropping of the "Secure" and "HttpOnly" attributes on cookies passed by backend servers when these attributes contain trailing spaces. +- Fixed a service crash that occurs when the back-end server of an application sets a "Set-Cookie" header with an empty value. + ## 1.5.2846.0 ### Release status |
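The silent-registration item above refers to the unattended installation script covered in the linked article. A minimal sketch of that flow, assuming the default connector install location; the account name is a placeholder:

```powershell
# Build a credential object for the registration account (placeholder UPN).
$User = "connectoradmin@contoso.com"
$SecurePassword = Read-Host -AsSecureString -Prompt "Password for $User"
$Credential = New-Object System.Management.Automation.PSCredential($User, $SecurePassword)

# RegisterConnector.ps1 ships with the connector; default install path assumed.
& "C:\Program Files\Microsoft AAD App Proxy Connector\RegisterConnector.ps1" `
    -modulePath "C:\Program Files\Microsoft AAD App Proxy Connector\Modules\" `
    -moduleName "AppProxyPSModule" `
    -Authenticationmode Credentials `
    -Usercredentials $Credential `
    -Feature ApplicationProxy
```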
active-directory | Fido2 Compatibility | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/fido2-compatibility.md | |
active-directory | Sample V2 Code | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/sample-v2-code.md | The following samples illustrate web applications that sign in users. Some sampl > | Java </p> Spring |Azure AD Spring Boot Starter Series <br/> • [Sign in users](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/4.%20Spring%20Framework%20Web%20App%20Tutorial/1-Authentication/sign-in) <br/> • [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/4.%20Spring%20Framework%20Web%20App%20Tutorial/1-Authentication/sign-in-b2c) <br/> • [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/4.%20Spring%20Framework%20Web%20App%20Tutorial/2-Authorization-I/call-graph) <br/> • [Use App Roles for access control](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/4.%20Spring%20Framework%20Web%20App%20Tutorial/3-Authorization-II/roles) <br/> • [Use Groups for access control](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/4.%20Spring%20Framework%20Web%20App%20Tutorial/3-Authorization-II/groups) <br/> • [Deploy to Azure App Service](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/4.%20Spring%20Framework%20Web%20App%20Tutorial/4-Deployment/deploy-to-azure-app-service) <br/> • [Protect a web API](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/4.%20Spring%20Framework%20Web%20App%20Tutorial/3-Authorization-II/protect-web-api) | • [MSAL Java](/java/api/com.microsoft.aad.msal4j) <br/> • Azure AD Boot Starter | Authorization code | > | Java </p> Servlets | Spring-less Servlet Series <br/> • [Sign in users](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/3.%20Java%20Servlet%20Web%20App%20Tutorial/1-Authentication/sign-in) <br/> • [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/3.%20Java%20Servlet%20Web%20App%20Tutorial/1-Authentication/sign-in-b2c) <br/> • [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/3.%20Java%20Servlet%20Web%20App%20Tutorial/2-Authorization-I/call-graph) <br/> • [Use App Roles for access control](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/3.%20Java%20Servlet%20Web%20App%20Tutorial/3-Authorization-II/roles) <br/> • [Use Security Groups for access control](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/3.%20Java%20Servlet%20Web%20App%20Tutorial/3-Authorization-II/groups) <br/> • [Deploy to Azure App Service](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/3.%20Java%20Servlet%20Web%20App%20Tutorial/4-Deployment/deploy-to-azure-app-service) | [MSAL Java](/java/api/com.microsoft.aad.msal4j) | Authorization code | > | Node.js </p> Express | Express web app series <br/> • [Quickstart: sign in users](https://github.com/Azure-Samples/ms-identity-node/blob/main/README.md)<br/> • [Sign in users](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/1-Authentication/1-sign-in/README.md)<br/> • [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/1-Authentication/2-sign-in-b2c/README.md)<br/> • [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/2-Authorization/1-call-graph/README.md) <br/> • [Call Microsoft Graph via BFF 
proxy](https://github.com/Azure-Samples/ms-identity-node) <br/> • [Deploy to Azure App Service](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/3-Deployment/README.md)<br/> • [Use App Roles for access control](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/4-AccessControl/1-app-roles/README.md)<br/> • [Use Security Groups for access control](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/4-AccessControl/2-security-groups/README.md) | [MSAL Node](/javascript/api/@azure/msal-node) | • Authorization code <br/>• Backend-for-Frontend (BFF) proxy |-> | Python </p> Flask | Flask Series <br/> • [Sign in users](https://github.com/Azure-Samples/ms-identity-python-flask-tutorial) <br/> • [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-python-flask-tutorial) <br/>• [A template to sign in AAD or B2C users, and optionally call a downstream API (Microsoft Graph)](https://github.com/Azure-Samples/ms-identity-python-webapp) <br/> • [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-python-flask-tutorial) <br/> • [Deploy to Azure App Service](https://github.com/Azure-Samples/ms-identity-python-flask-tutorial) | [MSAL Python](/python/api/msal/overview-msal) | Authorization code | -> | Python </p> Django | Django Series <br/> • [Sign in users](https://github.com/Azure-Samples/ms-identity-python-django-tutorial/tree/main/1-Authentication/sign-in) <br/> • [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-python-django-tutorial/tree/main/1-Authentication/sign-in-b2c) <br/> • [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-python-django-tutorial/tree/main/2-Authorization-I/call-graph) <br/> • [Deploy to Azure App Service](https://github.com/Azure-Samples/ms-identity-python-django-tutorial/tree/main/3-Deployment/deploy-to-azure-app-service)| [MSAL Python](/python/api/msal/overview-msal) | Authorization code | +> | Python </p> Flask | Flask Series <br/> • [Sign in users](https://github.com/Azure-Samples/ms-identity-python-flask-tutorial) <br/> • [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-python-flask-tutorial) <br/>• [A template to sign in AAD or B2C users, and optionally call a downstream API (Microsoft Graph)](https://github.com/Azure-Samples/ms-identity-python-webapp) <br/> • [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-python-flask-tutorial) <br/> • [Deploy to Azure App Service](https://github.com/Azure-Samples/ms-identity-python-flask-tutorial) | [MSAL Python](/entra/msal/python) | Authorization code | +> | Python </p> Django | Django Series <br/> • [Sign in users](https://github.com/Azure-Samples/ms-identity-python-django-tutorial/tree/main/1-Authentication/sign-in) <br/> • [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-python-django-tutorial/tree/main/1-Authentication/sign-in-b2c) <br/> • [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-python-django-tutorial/tree/main/2-Authorization-I/call-graph) <br/> • [Deploy to Azure App Service](https://github.com/Azure-Samples/ms-identity-python-django-tutorial/tree/main/3-Deployment/deploy-to-azure-app-service)| [MSAL Python](/entra/msal/python) | Authorization code | > | Ruby | Graph Training <br/> • [Sign in users and call Microsoft Graph](https://github.com/microsoftgraph/msgraph-training-rubyrailsapp) | OmniAuth OAuth2 | Authorization code | ### Web API The following samples show public client desktop 
applications that access the Mi > | Java | [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/2.%20Client-Side%20Scenarios/Integrated-Windows-Auth-Flow) | [MSAL Java](/java/api/com.microsoft.aad.msal4j) | Integrated Windows authentication | > | Node.js | [Sign in users](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-desktop) | [MSAL Node](/javascript/api/@azure/msal-node) | Authorization code with PKCE | > | .NET Core | [Call Microsoft Graph by signing in users using username/password](https://github.com/azure-samples/active-directory-dotnetcore-console-up-v2) | [MSAL.NET](/entra/msal/dotnet) | Resource owner password credentials |-> | Python | [Sign in users](https://github.com/Azure-Samples/ms-identity-python-desktop) | [MSAL Python](/python/api/msal/overview-msal) | Resource owner password credentials | +> | Python | [Sign in users](https://github.com/Azure-Samples/ms-identity-python-desktop) | [MSAL Python](/entra/msal/python) | Resource owner password credentials | > | Universal Windows Platform (UWP) | [Call Microsoft Graph](https://github.com/Azure-Samples/active-directory-xamarin-native-v2/tree/main/2-With-broker) | [MSAL.NET](/entra/msal/dotnet) | Web account manager | > | Windows Presentation Foundation (WPF) | [Sign in users and call Microsoft Graph](https://github.com/Azure-Samples/active-directory-dotnet-native-aspnetcore-v2/tree/master/2.%20Web%20API%20now%20calls%20Microsoft%20Graph) | [MSAL.NET](/entra/msal/dotnet) | Authorization code with PKCE | > | Windows Presentation Foundation (WPF) | • [Sign in users and call ASP.NET Core web API](https://github.com/Azure-Samples/active-directory-dotnet-native-aspnetcore-v2/tree/master/1.%20Desktop%20app%20calls%20Web%20API) <br/> • [Sign in users and call Microsoft Graph](https://github.com/azure-samples/active-directory-dotnet-desktop-msgraph-v2) | [MSAL.NET](/entra/msal/dotnet) | Authorization code with PKCE | The following samples show an application that accesses the Microsoft Graph API > | ASP.NET |[Multi-tenant with Microsoft identity platform endpoint](https://github.com/Azure-Samples/ms-identity-aspnet-daemon-webapp) | [MSAL.NET](/entra/msal/dotnet) | Client credentials grant| > | Java | • [Call Microsoft Graph with Secret](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/1.%20Server-Side%20Scenarios/msal-client-credential-secret) <br/> • [Call Microsoft Graph with Certificate](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/1.%20Server-Side%20Scenarios/msal-client-credential-certificate)| [MSAL Java](/java/api/com.microsoft.aad.msal4j) | Client credentials grant| > | Node.js | [Call Microsoft Graph with secret](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-console) | [MSAL Node](/javascript/api/@azure/msal-node) | Client credentials grant |-> | Python | • [Call Microsoft Graph with secret](https://github.com/Azure-Samples/ms-identity-python-daemon/tree/master/1-Call-MsGraph-WithSecret) <br/> • [Call Microsoft Graph with certificate](https://github.com/Azure-Samples/ms-identity-python-daemon/tree/master/2-Call-MsGraph-WithCertificate) | [MSAL Python](/python/api/msal/overview-msal)| Client credentials grant| +> | Python | • [Call Microsoft Graph with secret](https://github.com/Azure-Samples/ms-identity-python-daemon/tree/master/1-Call-MsGraph-WithSecret) <br/> • [Call Microsoft Graph with
certificate](https://github.com/Azure-Samples/ms-identity-python-daemon/tree/master/2-Call-MsGraph-WithCertificate) | [MSAL Python](/entra/msal/python)| Client credentials grant| ### Azure Functions as web APIs The following samples show how to protect an Azure Function using HttpTrigger an > | Language/<br/>Platform | Code sample(s) <br/> on GitHub |Auth<br/> libraries |Auth flow | > | -- | -- |-- |-- | > | .NET | [.NET Azure function web API secured by Azure AD](https://github.com/Azure-Samples/ms-identity-dotnet-webapi-azurefunctions) | [MSAL.NET](/entra/msal/dotnet) | Authorization code |-> | Python | [Python Azure function web API secured by Azure AD](https://github.com/Azure-Samples/ms-identity-python-webapi-azurefunctions) | [MSAL Python](/python/api/msal/overview-msal) | Authorization code | +> | Python | [Python Azure function web API secured by Azure AD](https://github.com/Azure-Samples/ms-identity-python-webapi-azurefunctions) | [MSAL Python](/entra/msal/python) | Authorization code | ### Browserless (Headless) The following sample shows a public client application running on a device witho > | -- | -- |-- |-- | > | .NET Core | [Invoke protected API from text-only device](https://github.com/azure-samples/active-directory-dotnetcore-devicecodeflow-v2) | [MSAL.NET](/entra/msal/dotnet) | Device code| > | Java | [Sign in users and invoke protected API from text-only device](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/2.%20Client-Side%20Scenarios/Device-Code-Flow) | [MSAL Java](/java/api/com.microsoft.aad.msal4j) | Device code |-> | Python | [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-python-devicecodeflow) | [MSAL Python](/python/api/msal/overview-msal) | Device code | +> | Python | [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-python-devicecodeflow) | [MSAL Python](/entra/msal/python) | Device code | ### Microsoft Teams applications The following samples show how to build applications for the Python language and > [!div class="mx-tdCol2BreakAll"] > | App type | Code sample(s) <br/> on GitHub |Auth<br/> libraries |Auth flow | > | -- | -- |-- |-- |-> | Azure Functions as web APIs | [Python Azure function web API secured by Azure AD](https://github.com/Azure-Samples/ms-identity-python-webapi-azurefunctions) | [MSAL Python](/python/api/msal/overview-msal) | Authorization code | -> | Desktop | [Sign in users](https://github.com/Azure-Samples/ms-identity-python-desktop) | [MSAL Python](/python/api/msal/overview-msal) | Resource owner password credentials | -> | Headless | [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-python-devicecodeflow) | [MSAL Python](/python/api/msal/overview-msal) | Device code | -> | Daemon | • [Call Microsoft Graph with secret](https://github.com/Azure-Samples/ms-identity-python-daemon/tree/master/1-Call-MsGraph-WithSecret) <br/> • [Call Microsoft Graph with certificate](https://github.com/Azure-Samples/ms-identity-python-daemon/tree/master/2-Call-MsGraph-WithCertificate) | [MSAL Python](/python/api/msal/overview-msal)| Client credentials grant| +> | Azure Functions as web APIs | [Python Azure function web API secured by Azure AD](https://github.com/Azure-Samples/ms-identity-python-webapi-azurefunctions) | [MSAL Python](/entra/msal/python) | Authorization code | +> | Desktop | [Sign in users](https://github.com/Azure-Samples/ms-identity-python-desktop) | [MSAL Python](/entra/msal/python) | Resource owner password credentials | +> | Headless | [Call Microsoft 
Graph](https://github.com/Azure-Samples/ms-identity-python-devicecodeflow) | [MSAL Python](/entra/msal/python) | Device code | +> | Daemon | • [Call Microsoft Graph with secret](https://github.com/Azure-Samples/ms-identity-python-daemon/tree/master/1-Call-MsGraph-WithSecret) <br/> • [Call Microsoft Graph with certificate](https://github.com/Azure-Samples/ms-identity-python-daemon/tree/master/2-Call-MsGraph-WithCertificate) | [MSAL Python](/entra/msal/python)| Client credentials grant| #### Flask > [!div class="mx-tdCol2BreakAll"] > | App type | Code sample(s) <br/> on GitHub |Auth<br/> libraries |Auth flow | > | -- | -- |-- |-- |-> | Web application | • [Sign in users](https://github.com/Azure-Samples/ms-identity-python-flask-tutorial) <br/> • [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-python-flask-tutorial) <br/>• [A template to sign in AAD or B2C users, and optionally call a downstream API (Microsoft Graph)](https://github.com/Azure-Samples/ms-identity-python-webapp) <br/> • [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-python-flask-tutorial) <br/> • [Deploy to Azure App Service](https://github.com/Azure-Samples/ms-identity-python-flask-tutorial) | [MSAL Python](/python/api/msal/overview-msal) | Authorization code | +> | Web application | • [Sign in users](https://github.com/Azure-Samples/ms-identity-python-flask-tutorial) <br/> • [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-python-flask-tutorial) <br/>• [A template to sign in AAD or B2C users, and optionally call a downstream API (Microsoft Graph)](https://github.com/Azure-Samples/ms-identity-python-webapp) <br/> • [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-python-flask-tutorial) <br/> • [Deploy to Azure App Service](https://github.com/Azure-Samples/ms-identity-python-flask-tutorial) | [MSAL Python](/entra/msal/python) | Authorization code | #### Django > [!div class="mx-tdCol2BreakAll"] > | App type | Code sample(s) <br/> on GitHub |Auth<br/> libraries |Auth flow | > | -- | -- |-- |-- |-> | Web application | • [Sign in users](https://github.com/Azure-Samples/ms-identity-python-django-tutorial/tree/main/1-Authentication/sign-in) <br/> • [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-python-django-tutorial/tree/main/1-Authentication/sign-in-b2c) <br/> • [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-python-django-tutorial/tree/main/2-Authorization-I/call-graph) <br/> • [Deploy to Azure App Service](https://github.com/Azure-Samples/ms-identity-python-django-tutorial/tree/main/3-Deployment/deploy-to-azure-app-service)| [MSAL Python](/python/api/msal/overview-msal) | Authorization code | +> | Web application | • [Sign in users](https://github.com/Azure-Samples/ms-identity-python-django-tutorial/tree/main/1-Authentication/sign-in) <br/> • [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-python-django-tutorial/tree/main/1-Authentication/sign-in-b2c) <br/> • [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-python-django-tutorial/tree/main/2-Authorization-I/call-graph) <br/> • [Deploy to Azure App Service](https://github.com/Azure-Samples/ms-identity-python-django-tutorial/tree/main/3-Deployment/deploy-to-azure-app-service)| [MSAL Python](/entra/msal/python) | Authorization code | ### Kotlin |
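Several of the daemon samples above use the client credentials grant. A minimal sketch of that flow against the Microsoft identity platform v2.0 token endpoint; the tenant ID, client ID, and secret are placeholders:

```powershell
# Request an app-only token with the client credentials grant (v2.0 endpoint).
$tenantId = "<tenant-id>"
$body = @{
    client_id     = "<application-client-id>"
    client_secret = "<client-secret>"
    scope         = "https://graph.microsoft.com/.default"
    grant_type    = "client_credentials"
}
$token = Invoke-RestMethod -Method Post `
    -Uri "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/token" `
    -Body $body

# Use the access token to call Microsoft Graph.
Invoke-RestMethod -Uri "https://graph.microsoft.com/v1.0/users" `
    -Headers @{ Authorization = "Bearer $($token.access_token)" }
```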
active-directory | Hybrid Azuread Join Plan | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/hybrid-azuread-join-plan.md | If your Windows 10 or newer domain joined devices are [Azure AD registered](conc ### Hybrid Azure AD join for single forest, multiple Azure AD tenants -To register devices as hybrid Azure AD join to respective tenants, organizations need to ensure that the SCP configuration is done on the devices and not in AD. More details on how to accomplish this task can be found in the article [Hybrid Azure AD join targeted deployment](hybrid-azuread-join-control.md). It's important for organizations to understand that certain Azure AD capabilities won't work in a single forest, multiple Azure AD tenants configurations. +To register devices as hybrid Azure AD join to respective tenants, organizations need to ensure that the service connection point (SCP) configuration is done on the devices and not in AD. More details on how to accomplish this task can be found in the article [Hybrid Azure AD join targeted deployment](hybrid-azuread-join-control.md). It's important for organizations to understand that certain Azure AD capabilities won't work in single-forest, multiple Azure AD tenant configurations. - [Device writeback](../hybrid/how-to-connect-device-writeback.md) won't work. This configuration affects [Device based Conditional Access for on-premises apps that are federated using ADFS](/windows-server/identity/ad-fs/operations/configure-device-based-conditional-access-on-premises). This configuration also affects [Windows Hello for Business deployment when using the Hybrid Cert Trust model](/windows/security/identity-protection/hello-for-business/hello-hybrid-cert-trust). - [Groups writeback](../hybrid/how-to-connect-group-writeback.md) won't work. This configuration affects writeback of Office 365 Groups to a forest with Exchange installed. |
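The per-device SCP configuration mentioned in the updated paragraph is done through a client-side registry entry rather than the forest-wide SCP object. A minimal sketch of that configuration, with the registry path as described in the linked targeted-deployment article and placeholder tenant values:

```powershell
# Client-side SCP override; the device reads the tenant from this key
# instead of the AD service connection point. Values are placeholders.
$regPath = "HKLM:\SOFTWARE\Microsoft\CurrentVersion\CDJ\AAD"
New-Item -Path $regPath -Force | Out-Null
Set-ItemProperty -Path $regPath -Name TenantId   -Value "00000000-0000-0000-0000-000000000000"
Set-ItemProperty -Path $regPath -Name TenantName -Value "contoso.onmicrosoft.com"
```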
active-directory | Licensing Service Plan Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-service-plan-reference.md | When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic | Microsoft Dynamics AX7 User Trial | AX7_USER_TRIAL | fcecd1f9-a91e-488d-a918-a96cdb6ce2b0 | ERP_TRIAL_INSTANCE (e2f705fd-2468-4090-8c58-fad6e6b1e724)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | Dynamics 365 Operations Trial Environment (e2f705fd-2468-4090-8c58-fad6e6b1e724)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318) | | Microsoft Azure Multi-Factor Authentication | MFA_STANDALONE | cb2020b1-d8f6-41c0-9acd-8ff3d6d7831b | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0) | | Microsoft Defender for Office 365 (Plan 2) | THREAT_INTELLIGENCE | 3dd6cf57-d688-4eed-ba52-9e40b5468c3e | MTP (bf28f719-7844-4079-9c78-c1307898e192)<br/>ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70) | Microsoft 365 Defender (bf28f719-7844-4079-9c78-c1307898e192)<br/>Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>Microsoft Defender for Office 365 (Plan 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70) |-| Microsoft Defender Vulnerability Management Add-on | TVM_Premium_Add_on | ad7a56e0-6903-4d13-94f3-5ad491e78960 | TVM_PREMIUM_1 (36810a13-b903-490a-aa45-afbeb7540832) | Microsoft Defender Vulnerability Management (36810a13-b903-490a-aa45-afbeb7540832) | -| Microsoft Intune Suite | Microsoft_Intune_Suite | a929cd4d-8672-47c9-8664-159c1f322ba8 | Intune-MAMTunnel (a6e407da-7411-4397-8a2e-d9b52780849e)<br/>INTUNE_P2 (d9923fe3-a2de-4d29-a5be-e3e83bb786be)<br/>Intune-EPM (bb73f429-78ef-4ff2-83c8-722b04c3e7d1)<br/>REMOTE_HELP (a4c6cf29-1168-4076-ba5c-e8fe0e62b17e)<br/>Intune_AdvancedEA (2a4baa0e-5e99-4c38-b1f2-6864960f1bd1) | Microsoft Tunnel for Mobile Application Management (a6e407da-7411-4397-8a2e-d9b52780849e)<br/>Intune Plan 2 (d9923fe3-a2de-4d29-a5be-e3e83bb786be)<br/>Intune Endpoint Privilege Management (bb73f429-78ef-4ff2-83c8-722b04c3e7d1)<br/>Remote Help (a4c6cf29-1168-4076-ba5c-e8fe0e62b17e)<br/>Intune Advanced endpoint analytics (2a4baa0e-5e99-4c38-b1f2-6864960f1bd1) | | Microsoft 365 A1 | M365EDU_A1 | b17653a4-2443-4e8c-a550-18249dda78bb | AAD_EDU (3a3976ce-de18-4a87-a78e-5e9245e252df)<br/>INTUNE_EDU (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>WINDOWS_STORE (a420f25f-a7b3-4ff5-a9d0-5d58f73b537d) | Azure Active Directory for Education (3a3976ce-de18-4a87-a78e-5e9245e252df)<br/>Intune for Education (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Windows Store Service (a420f25f-a7b3-4ff5-a9d0-5d58f73b537d) | | Microsoft 365 A3 for faculty | M365EDU_A3_FACULTY | 4b590615-0888-425a-a965-b3bf7789848d | AAD_BASIC_EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>CDS_O365_P2 (95b76021-6a53-4741-ab8b-1d1f3d66a95a)<br/>EducationAnalyticsP1 (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>ContentExplorer_Standard 
(2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>MYANALYTICS_P2 (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>MDE_LITE (292cc034-7b7c-4950-aaf5-943befd3f1d4)<br/>OFFICE_FORMS_PLAN_2 (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>KAIZALA_O365_P3 (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MINECRAFT_EDUCATION_EDITION (4c246bbc-f513-4311-beff-eba54c353256)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>PROJECT_O365_P2 (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>SCHOOL_DATA_SYNC_P2 (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SHAREPOINTENTERPRISE_EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_2 (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>VIVA_LEARNING_SEEDED (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>WHITEBOARD_PLAN2 (94a54592-cd8b-425e-87c6-97868b000b91)<br/>YAMMER_EDU (2078e8df-cff6-4290-98cb-5408261a760a)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>DYN365_CDS_O365_P2 (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>INTUNE_EDU (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>POWERAPPS_O365_P2 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>FLOW_O365_P2 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>POWER_VIRTUAL_AGENTS_O365_P2 (041fe683-03e4-45b6-b1af-c0cdc516daee) | Azure Active Directory Basic for Education (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>Azure Rights Management (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Common Data Service for Teams (95b76021-6a53-4741-ab8b-1d1f3d66a95a)<br/>Education Analytics (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Information Protection and Governance Analytics – Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>Information Protection for Office 365 - Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Insights by MyAnalytics (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>Microsoft 365 Apps for enterprise (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Defender for Endpoint Plan 1 (292cc034-7b7c-4950-aaf5-943befd3f1d4)<br/>Microsoft Forms (Plan 2) (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>Microsoft Kaizala Pro (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft
StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for Office 365 E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Minecraft Education Edition (4c246bbc-f513-4311-beff-eba54c353256)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>Office 365 Cloud App Security (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Office for the Web for Education (e03c7e47-402c-463c-ab25-949079bedb21)<br/>Project for Office (Plan E3) (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>School Data Sync (Plan 2) (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SharePoint (Plan 2) for Education (63038b2c-28d0-45f6-bc36-33062963b498)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 2) (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>Viva Learning Seeded (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>Whiteboard (Plan 2) (94a54592-cd8b-425e-87c6-97868b000b91)<br/>Yammer for Academic (2078e8df-cff6-4290-98cb-5408261a760a)<br/>Universal Print (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Windows 10/11 Enterprise (e7c91390-7625-45be-94e0-e16907e03118)<br/>Windows Update for Business Deployment Service (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Common Data Service (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Defender for Cloud Apps Discovery (932ad362-64a8-4783-9106-97849a1a30b9)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Intune for Education (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>Power Apps for Office 365 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>Power Automate for Office 365 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>Power Virtual Agents for Office 365 (041fe683-03e4-45b6-b1af-c0cdc516daee) | | Microsoft 365 A3 for students | M365EDU_A3_STUDENT | 7cfd9a2b-e110-4c39-bf20-c6a3f36a3121 | AAD_BASIC_EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>CDS_O365_P2 (95b76021-6a53-4741-ab8b-1d1f3d66a95a)<br/>EducationAnalyticsP1 (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>ContentExplorer_Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>MYANALYTICS_P2 (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>MDE_LITE (292cc034-7b7c-4950-aaf5-943befd3f1d4)<br/>OFFICE_FORMS_PLAN_2 (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>KAIZALA_O365_P3 (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MINECRAFT_EDUCATION_EDITION (4c246bbc-f513-4311-beff-eba54c353256)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>SHAREPOINTWAC_EDU 
(e03c7e47-402c-463c-ab25-949079bedb21)<br/>PROJECT_O365_P2 (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>SCHOOL_DATA_SYNC_P2 (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SHAREPOINTENTERPRISE_EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_2 (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>WHITEBOARD_PLAN2 (94a54592-cd8b-425e-87c6-97868b000b91)<br/>YAMMER_EDU (2078e8df-cff6-4290-98cb-5408261a760a)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>DYN365_CDS_O365_P2 (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>INTUNE_EDU (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>POWERAPPS_O365_P2 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>FLOW_O365_P2 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>POWER_VIRTUAL_AGENTS_O365_P2 (041fe683-03e4-45b6-b1af-c0cdc516daee) | Azure Active Directory Basic for Education (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>Azure Rights Management (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Common Data Service for Teams (95b76021-6a53-4741-ab8b-1d1f3d66a95a)<br/>Education Analytics (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Information Protection and Governance Analytics – Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>Information Protection for Office 365 - Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Insights by MyAnalytics (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>Microsoft 365 Apps for enterprise (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Defender for Endpoint Plan 1 (292cc034-7b7c-4950-aaf5-943befd3f1d4)<br/>Microsoft Forms (Plan 2) (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>Microsoft Kaizala Pro (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for Office 365 E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Minecraft Education Edition (4c246bbc-f513-4311-beff-eba54c353256)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office 365 Cloud App Security (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Office for the Web for Education (e03c7e47-402c-463c-ab25-949079bedb21)<br/>Project for Office (Plan E3) (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>School Data Sync (Plan 2) (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SharePoint (Plan 2) for Education (63038b2c-28d0-45f6-bc36-33062963b498)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 2) (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>Whiteboard (Plan 2) (94a54592-cd8b-425e-87c6-97868b000b91)<br/>Yammer for Academic (2078e8df-cff6-4290-98cb-5408261a760a)<br/>Universal Print
(795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Windows 10/11 Enterprise (e7c91390-7625-45be-94e0-e16907e03118)<br/>Windows Update for Business Deployment Service (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Common Data Service (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Defender for Cloud Apps Discovery (932ad362-64a8-4783-9106-97849a1a30b9)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Intune for Education (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>Power Apps for Office 365 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>Power Automate for Office 365 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>Power Virtual Agents for Office 365 (041fe683-03e4-45b6-b1af-c0cdc516daee) | |
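To check which of the SKU and service plan GUIDs listed above are present in a tenant, a short sketch with the Microsoft Graph PowerShell SDK (assuming the module is installed and the signed-in account can read organization data):

```powershell
# List each subscribed SKU with its service plans, matching the GUIDs in the table.
Connect-MgGraph -Scopes "Organization.Read.All"
Get-MgSubscribedSku | ForEach-Object {
    "{0} ({1})" -f $_.SkuPartNumber, $_.SkuId
    $_.ServicePlans | ForEach-Object {
        "    {0} ({1})" -f $_.ServicePlanName, $_.ServicePlanId
    }
}
```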
active-directory | How To Connect Sync Feature Directory Extensions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-sync-feature-directory-extensions.md | During installation of Azure AD Connect, an application is registered where thes  +>[!NOTE] +> The **Tenant Schema Extension App** is a system-only application; it can't be deleted, and its attribute extension definitions can't be removed. + Make sure you select **All applications** to see this app. The attributes are prefixed with **extension \_{ApplicationId}\_**. ApplicationId has the same value for all attributes in your Azure AD tenant. You will need this value for all other scenarios in this topic. |
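To locate that app registration and inspect the **extension\_{ApplicationId}\_** attribute definitions the note refers to, a sketch with the Microsoft Graph PowerShell SDK (module installation and application read permissions assumed):

```powershell
# Find the registration created by Azure AD Connect and list its
# directory extension definitions.
Connect-MgGraph -Scopes "Application.Read.All"
$app = Get-MgApplication -Filter "displayName eq 'Tenant Schema Extension App'"
Get-MgApplicationExtensionProperty -ApplicationId $app.Id |
    Select-Object Name, DataType, TargetObjects
```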
active-directory | Manage Application Permissions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/manage-application-permissions.md | To review permissions granted to applications, you need: - One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator. - A service principal owner who isn't an administrator can invalidate refresh tokens. +## Restoring permissions ++See [Restore permissions granted to applications](restore-permissions.md) for information on how to restore permissions that have been revoked or deleted. + :::zone pivot="portal" -## Review permissions +## Review and revoke permissions ++You can access the Azure portal to view the permissions granted to an app. You can revoke permissions granted by admins for your entire organization, and you can get contextual PowerShell scripts to perform other actions. -You can access the Azure portal to get contextual PowerShell scripts to perform the actions. +To revoke application permissions granted for the entire organization: ++1. Sign in to the [Azure portal](https://portal.azure.com) using one of the roles listed in the prerequisites section. +1. Select **Azure Active Directory**, and then select **Enterprise applications**. +1. Select the application that you want to restrict access to. +1. Select **Permissions**. +1. The permissions listed in the **Admin consent** tab apply to your entire organization. Choose the permission you would like to remove, select the **...** control for that permission, and then choose **Revoke permission**. To review application permissions: Run the following queries to remove appRoleAssignments of users or groups to the - [Configure user consent setting](configure-user-consent.md) - [Configure admin consent workflow](configure-admin-consent-workflow.md)+- [Restore revoked permissions](restore-permissions.md) |
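The portal generates contextual PowerShell for these actions; as an illustrative sketch only (not the portal-generated script), revoking all admin-consented delegated grants for one app with the Microsoft Graph PowerShell SDK, using a placeholder display name:

```powershell
# Find the service principal and remove its delegated permission grants.
Connect-MgGraph -Scopes "Application.ReadWrite.All", "DelegatedPermissionGrant.ReadWrite.All"
$sp = Get-MgServicePrincipal -Filter "displayName eq 'Contoso Demo App'"
Get-MgOauth2PermissionGrant -Filter "clientId eq '$($sp.Id)'" |
    ForEach-Object { Remove-MgOauth2PermissionGrant -OAuth2PermissionGrantId $_.Id }
```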
active-directory | Albert Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/albert-provisioning-tutorial.md | This tutorial describes the steps you need to perform in both Albert and Azure A ## Supported capabilities > [!div class="checklist"]-> * Create users in Albert. +> * Update user status in Albert. > * Remove users in Albert when they do not require access anymore. > * Keep user attributes synchronized between Azure AD and Albert. > * [Single sign-on](../manage-apps/add-application-portal-setup-oidc-sso.md) to Albert (recommended). The scenario outlined in this tutorial assumes that you already have the followi 1. Determine what data to [map between Azure AD and Albert](../app-provisioning/customize-application-attributes.md). ## Step 2. Configure Albert to support provisioning with Azure AD-Contact Albert support to configure Albert to support provisioning with Azure AD. +Contact [Albert support](mailto:support@albertinvent.com) to configure Albert to support provisioning with Azure AD. ## Step 3. Add Albert from the Azure AD application gallery |
active-directory | Axiad Cloud Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/axiad-cloud-provisioning-tutorial.md | This tutorial describes the steps you need to perform in both Axiad Cloud and Az The scenario outlined in this tutorial assumes that you already have the following prerequisites: -* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md) +* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md). * A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).-* A user account in Axiad Cloud with Admin permissions. +* An Axiad Cloud tenant. ## Step 1. Plan your provisioning deployment 1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md). The scenario outlined in this tutorial assumes that you already have the followi 1. Determine what data to [map between Azure AD and Axiad Cloud](../app-provisioning/customize-application-attributes.md). ## Step 2. Configure Axiad Cloud to support provisioning with Azure AD-Contact Axiad Cloud support to configure Axiad Cloud to support provisioning with Azure AD. +Contact [Axiad Customer Success](mailto:customer.success@axiad.com) to request your Axiad Cloud tenant be configured for Azure AD SCIM provisioning. The Axiad Customer Success team will also provide the configuration information and SCIM API credentials for your Axiad Cloud tenant that are needed for the next steps. ## Step 3. Add Axiad Cloud from the Azure AD application gallery |
active-directory | Citrix Cloud Saml Sso Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/citrix-cloud-saml-sso-tutorial.md | To configure the integration of Citrix Cloud SAML SSO into Azure AD, you need to ## Configure and test Azure AD SSO for Citrix Cloud SAML SSO -Configure and test Azure AD SSO with Citrix Cloud SAML SSO using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Citrix Cloud SAML SSO.This user must also exist in your Active Directory that is synced with Azure AD Connect to your Azure AD subscription. +Configure and test Azure AD SSO with Citrix Cloud SAML SSO using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Citrix Cloud SAML SSO. This user must also exist in your Active Directory that is synced with Azure AD Connect to your Azure AD subscription. To configure and test Azure AD SSO with Citrix Cloud SAML SSO, perform the following steps: Follow these steps to enable Azure AD SSO in the Azure portal.  -1. In addition to above, Citrix Cloud SAML SSO application expects few more attributes to be passed back in SAML response which are shown below. These attributes are also pre-populated but you can review them as per your requirements.The values passed in the SAML response should map to the Active Directory attributes of the user. +1. In addition to the above, the Citrix Cloud SAML SSO application expects a few more attributes to be passed back in the SAML response, which are shown below. These attributes are also pre-populated, but you can review them as per your requirements. The values passed in the SAML response should map to the Active Directory attributes of the user. | Name | Source Attribute | | --|--| | cip_sid | user.onpremisesecurityidentifier | | cip_upn | user.userprincipalname |- | cip_oid | ObjectGUID (Extension Attribute ) | + | cip_oid | ObjectGUID (Extension Attribute) | | cip_email | user.mail | | displayName | user.displayname | In this section, you'll enable B.Simon to use Azure single sign-on by granting a ## Configure Citrix Cloud SAML SSO --- 1. In a different web browser window, sign in to your Citrix Cloud SAML SSO company site as an administrator. 1. Navigate to the Citrix Cloud menu and select **Identity and Access Management**. -  +  1. Under **Authentication**, locate **SAML 2.0** and select **Connect** from the ellipsis menu. -  +  1. In the **Configure SAML** page, perform the following steps. -  +  a. In the **Entity ID** textbox, paste the **Azure AD Identifier** value which you have copied from the Azure portal. - b. In the **Sign Authentication Request**, select **No**. + b. In the **Sign Authentication Request**, select **Yes** if you want to use `SAML Request signing`; otherwise, select **No**. c. In the **SSO Service URL** textbox, paste the **Login URL** value which you have copied from the Azure portal. - d. Select **Binding Mechanism** from the drop down, you can select either **HTTP-POST** or **HTTP-Redirect** binding. + d. Select **Binding Mechanism** from the drop-down; you can select either **HTTP-POST** or **HTTP-Redirect** binding. e. Under **SAML Response**, select **Sign Either Response or Assertion** from the dropdown. |
active-directory | Cleanmail Swiss Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cleanmail-swiss-provisioning-tutorial.md | + + Title: 'Tutorial: Configure Cleanmail Swiss for automatic user provisioning with Azure Active Directory' +description: Learn how to automatically provision and deprovision user accounts from Azure AD to Cleanmail Swiss. +++writer: twimmers ++ms.assetid: 1281f790-7f6d-4558-bb31-015f92ae579d ++++ Last updated : 07/10/2023++++# Tutorial: Configure Cleanmail Swiss for automatic user provisioning ++This tutorial describes the steps you need to perform in both Cleanmail Swiss and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and deprovisions users and groups to [Cleanmail](https://www.alinto.com/fr) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). +++## Capabilities supported +> [!div class="checklist"] +> * Create users in Cleanmail +> * Remove users in Cleanmail Swiss when they do not require access anymore +> * Keep user attributes synchronized between Azure AD and Cleanmail +> * [Single sign-on](../manage-apps/add-application-portal-setup-oidc-sso.md) to Cleanmail Swiss (recommended). ++## Prerequisites ++The scenario outlined in this tutorial assumes that you already have the following prerequisites: ++* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md) +* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator). +* A user account in Cleanmail Swiss with Admin permissions ++## Step 1. Plan your provisioning deployment +1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md). +1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). +1. Determine what data to [map between Azure AD and Cleanmail](../app-provisioning/customize-application-attributes.md). ++## Step 2. Configure Cleanmail Swiss to support provisioning with Azure AD ++Contact [Cleanmail Swiss Support](https://www.alinto.com/contact-email-provider/) to configure Cleanmail Swiss to support provisioning with Azure AD. ++## Step 3. Add Cleanmail Swiss from the Azure AD application gallery ++Add Cleanmail Swiss from the Azure AD application gallery to start managing provisioning to Cleanmail. If you have previously set up Cleanmail Swiss for SSO, you can use the same application. However, it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md). ++## Step 4. Define who will be in scope for provisioning ++The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user and group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application.
If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++* If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles. +++## Step 5. Configure automatic user provisioning to Cleanmail Swiss ++This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and groups in Cleanmail Swiss based on user and group assignments in Azure AD. ++### To configure automatic user provisioning for Cleanmail Swiss in Azure AD: ++1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**. ++  ++1. In the applications list, select **Cleanmail**. ++  ++1. Select the **Provisioning** tab. ++  ++1. Set the **Provisioning Mode** to **Automatic**. ++  ++1. In the **Admin Credentials** section, input your Cleanmail Swiss Tenant URL as `https://cloud.cleanmail.ch/api/v3/scim2` and the corresponding Secret Token obtained in Step 2. Click **Test Connection** to ensure Azure AD can connect to Cleanmail. If the connection fails, ensure your Cleanmail Swiss account has Admin permissions and try again. ++  + +1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box. ++  ++1. Select **Save**. ++1. In the **Mappings** section, select **Synchronize Azure Active Directory Users to Cleanmail**. ++1. Review the user attributes that are synchronized from Azure AD to Cleanmail Swiss in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Cleanmail Swiss for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you need to ensure that the Cleanmail Swiss API supports filtering users based on that attribute. Select the **Save** button to commit any changes. ++ |Attribute|Type|Supported for filtering|Required by Cleanmail| + |---|---|---|---| + |userName|String|✓|✓ + |active|Boolean||✓ + |name.givenName|String|| + |name.familyName|String|| + |externalId|String|| ++1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++1. To enable the Azure AD provisioning service for Cleanmail, change the **Provisioning Status** to **On** in the **Settings** section. ++  ++1. Define the users and groups that you would like to provision to Cleanmail Swiss by choosing the desired values in **Scope** in the **Settings** section. ++  ++1. When you're ready to provision, click **Save**. ++  ++This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section.
The initial cycle takes longer to complete than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running. ++## Step 6. Monitor your deployment +Once you've configured provisioning, use the following resources to monitor your deployment: ++* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully +* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion +* If the provisioning configuration seems to be in an unhealthy state, the application goes into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md). ++## More resources ++* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md) +* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) ++## Next steps ++* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md) |
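Before turning provisioning on, the tenant URL and secret token from the tutorial can be sanity-checked outside the portal. A sketch assuming the token is presented as a standard SCIM bearer token:

```powershell
# Query the SCIM endpoint for a single user to verify the URL and token.
$tenantUrl = "https://cloud.cleanmail.ch/api/v3/scim2"
$secretToken = "<secret-token-from-step-2>"
Invoke-RestMethod -Uri "$tenantUrl/Users?startIndex=1&count=1" `
    -Headers @{ Authorization = "Bearer $secretToken" }
```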
active-directory | Tanium Sso Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tanium-sso-provisioning-tutorial.md | + + Title: 'Tutorial: Configure Tanium SSO for automatic user provisioning with Azure Active Directory' +description: Learn how to automatically provision and de-provision user accounts from Azure AD to Tanium SSO. +++writer: twimmers ++ms.assetid: 967937de3-81c7-4c61-ae7e-7dad6c46411b ++++ Last updated : 07/10/2023++++# Tutorial: Configure Tanium SSO for automatic user provisioning ++This tutorial describes the steps you need to perform in both Tanium SSO and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Tanium SSO](https://www.tanium.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). +++## Supported capabilities +> [!div class="checklist"] +> * Create users in Tanium SSO. +> * Remove users in Tanium SSO when they do not require access anymore. +> * Keep user attributes synchronized between Azure AD and Tanium SSO. +> * Provision groups and group memberships in Tanium SSO. +> * [Single sign-on](tanium-cloud-sso-tutorial.md) to Tanium SSO (recommended). ++## Prerequisites ++The scenario outlined in this tutorial assumes that you already have the following prerequisites: ++* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md) +* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator). +* A user account in Tanium SSO with Admin permissions. ++## Step 1. Plan your provisioning deployment +1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md). +1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). +1. Determine what data to [map between Azure AD and Tanium SSO](../app-provisioning/customize-application-attributes.md). ++## Step 2. Configure Tanium SSO to support provisioning with Azure AD +Contact Tanium SSO support to configure Tanium SSO to support provisioning with Azure AD. ++## Step 3. Add Tanium SSO from the Azure AD application gallery ++Add Tanium SSO from the Azure AD application gallery to start managing provisioning to Tanium SSO. If you have previously set up Tanium SSO for SSO, you can use the same application. However, it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md). ++## Step 4. Define who will be in scope for provisioning ++The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application.
If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++* If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles. +++## Step 5. Configure automatic user provisioning to Tanium SSO ++This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Tanium SSO based on user and/or group assignments in Azure AD. ++### To configure automatic user provisioning for Tanium SSO in Azure AD: ++1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**. ++  ++1. In the applications list, select **Tanium SSO**. ++  ++1. Select the **Provisioning** tab. ++  ++1. Set the **Provisioning Mode** to **Automatic**. ++  ++1. Under the **Admin Credentials** section, input your Tanium SSO Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Tanium SSO. If the connection fails, ensure your Tanium SSO account has Admin permissions and try again. ++  ++1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box. ++  ++1. Select **Save**. ++1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Tanium SSO**. ++1. Review the user attributes that are synchronized from Azure AD to Tanium SSO in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Tanium SSO for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you'll need to ensure that the Tanium SSO API supports filtering users based on that attribute. Select the **Save** button to commit any changes. ++ |Attribute|Type|Supported for filtering|Required by Tanium SSO| + |---|---|---|---| + |userName|String|✓|✓ + |active|Boolean||✓ + |displayName|String||✓ + |externalId|String||✓ +++1. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Tanium SSO**. ++1. Review the group attributes that are synchronized from Azure AD to Tanium SSO in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Tanium SSO for update operations. Select the **Save** button to commit any changes. ++ |Attribute|Type|Supported for filtering|Required by Tanium SSO| + |---|---|---|---| + |displayName|String|✓|✓ + |externalId|String||✓ + |members|Reference|| + +1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++1. 
To enable the Azure AD provisioning service for Tanium SSO, change the **Provisioning Status** to **On** in the **Settings** section. ++  ++1. Define the users and/or groups that you would like to provision to Tanium SSO by choosing the desired values in **Scope** in the **Settings** section. ++  ++1. When you're ready to provision, click **Save**. ++  ++This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running. ++## Step 6. Monitor your deployment +Once you've configured provisioning, use the following resources to monitor your deployment: ++* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully. +* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion. +* If the provisioning configuration seems to be in an unhealthy state, the application goes into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md). ++## More resources ++* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md) +* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) ++## Next steps ++* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md) |
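As a hedged aside to the tutorial above, the provisioning cycle that these monitoring views report on can also be inspected through the Microsoft Graph synchronization API, assuming that API is available in your tenant; the app display name below is an assumption:

```azurecli-interactive
# Find the service principal for the provisioned app, then list its synchronization jobs.
# Each job's "status" property reports the state of the provisioning cycle.
spId=$(az ad sp list --display-name "Tanium SSO" --query "[0].id" -o tsv)
az rest --method GET \
    --url "https://graph.microsoft.com/v1.0/servicePrincipals/$spId/synchronization/jobs"
```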
active-directory | Uber Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/uber-tutorial.md | + + Title: Azure Active Directory SSO integration with Uber +description: Learn how to configure single sign-on between Azure Active Directory and Uber. ++++++++ Last updated : 07/07/2023+++++# Azure Active Directory SSO integration with Uber ++In this article, you'll learn how to integrate Uber with Azure Active Directory (Azure AD). This app helps you automatically provision and de-provision users to Uber for business using the Azure AD Provisioning service. When you integrate Uber with Azure AD, you can: ++* Control in Azure AD who has access to Uber. +* Enable your users to be automatically signed-in to Uber with their Azure AD accounts. +* Manage your accounts in one central location - the Azure portal. ++You'll configure and test Azure AD single sign-on for Uber in a test environment. Uber supports **IDP** initiated single sign-on and **Automated user provisioning**. ++## Prerequisites ++To integrate Azure Active Directory with Uber, you need: ++* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). +* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal. +* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). +* An Uber single sign-on (SSO) enabled subscription. ++## Add application and assign a test user ++Before you begin the process of configuring single sign-on, you need to add the Uber application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration. ++### Add Uber from the Azure AD gallery ++Add Uber from the Azure AD application gallery to configure single sign-on with Uber. For more information on how to add an application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md). ++### Create and assign Azure AD test user ++Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon. ++Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides). ++## Configure Azure AD SSO ++Complete the following steps to enable Azure AD single sign-on in the Azure portal. ++1. In the Azure portal, on the **Uber** application integration page, find the **Manage** section and select **single sign-on**. +1. On the **Select a single sign-on method** page, select **SAML**. +1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings. ++  ++1. In the **Basic SAML Configuration** section, you don't have to perform any steps because the app is already pre-integrated with Azure. ++1.
On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (PEM)** and select **Download** to download the certificate and save it on your computer. ++  ++1. On the **Set up Uber** section, copy the appropriate URL(s) based on your requirement. ++  ++## Configure Uber SSO ++To configure single sign-on on the **Uber** side, you need to send the downloaded **Certificate (PEM)** and appropriate copied URLs from the Azure portal to the [Uber support team](mailto:business-api-support@uber.com). They configure this setting so that the SAML SSO connection is set properly on both sides. ++### Create Uber test user ++In this section, you create a user called Britta Simon in Uber. Work with the [Uber support team](mailto:business-api-support@uber.com) to add the users in the Uber platform. Users must be created and activated before you use single sign-on. Uber also supports automatic user provisioning; you can find more details on how to configure it [here](uber-provisioning-tutorial.md). ++## Test SSO ++In this section, you test your Azure AD single sign-on configuration with the following options. ++* Click on **Test this application** in the Azure portal, and you should be automatically signed in to the Uber instance for which you set up SSO. ++* You can use Microsoft My Apps. When you click the Uber tile in My Apps, you should be automatically signed in to the Uber instance for which you set up SSO. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md). ++## Additional resources ++* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) +* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md). ++## Next steps ++Once you configure Uber, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad). |
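As an editorial aside to the tutorial above, the B.Simon test user can also be created from the command line; the UPN domain and password below are hypothetical placeholders:

```azurecli-interactive
# Create the Azure AD test user (replace the UPN domain and password with your own values).
az ad user create \
    --display-name "B.Simon" \
    --user-principal-name "b.simon@contoso.onmicrosoft.com" \
    --password "<strong-password>"
```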
aks | Azure Disk Customer Managed Keys | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-disk-customer-managed-keys.md | Title: Use a customer-managed key to encrypt Azure disks in Azure Kubernetes Ser description: Bring your own keys (BYOK) to encrypt AKS OS and Data disks. Previously updated : 05/10/2023 Last updated : 07/10/2023 # Bring your own keys (BYOK) with Azure disks in Azure Kubernetes Service (AKS) Azure Storage encrypts all data in a storage account at rest. By default, data i Learn more about customer-managed keys on [Linux][customer-managed-keys-linux] and [Windows][customer-managed-keys-windows]. +## Prerequisites ++* You must enable soft delete and purge protection for *Azure Key Vault* when using Key Vault to encrypt managed disks. +* You need the Azure CLI version 2.11.1 or later. +* Data disk encryption and customer-managed keys are supported on Kubernetes versions 1.24 and higher. +* If you choose to rotate (change) your keys periodically, see [Customer-managed keys and encryption of Azure managed disk](../virtual-machines/disk-encryption.md) for more information. + ## Limitations -* Data disk encryption support is limited to AKS clusters running Kubernetes version 1.17 and above. * Encryption of OS disk with customer-managed keys can only be enabled when creating an AKS cluster.+* When encrypting an ephemeral OS disk-enabled node pool with customer-managed keys, if you want to rotate the key in Azure Key Vault, you need to: -## Prerequisites + * Scale down the node pool count to 0. + * Rotate the key. + * Scale up the node pool to the original count. -* You must enable soft delete and purge protection for *Azure Key Vault* when using Key Vault to encrypt managed disks. -* You need the Azure CLI version 2.11.1 or later. -* Customer-managed keys are only supported in Kubernetes versions 1.17 and higher. -* If you choose to rotate (change) your keys periodically, for more information see [Customer-managed keys and encryption of Azure managed disk](../virtual-machines/disk-encryption.md). +## Register customer-managed key (preview) feature ++To enable the customer-managed key for ephemeral OS disk (preview) feature, you must register the *EnableBYOKOnEphemeralOSDiskPreview* feature flag on *Microsoft.ContainerService* in your subscription. To perform the registration, run the following commands. ++1. Install the *aks-preview* extension: ++ ```azurecli-interactive + az extension add --name aks-preview + ``` ++1. Update to the latest released version of the extension: ++ ```azurecli-interactive + az extension update --name aks-preview + ``` ++1. Register the *EnableBYOKOnEphemeralOSDiskPreview* feature flag: ++ ```azurecli-interactive + az feature register --namespace "Microsoft.ContainerService" --name "EnableBYOKOnEphemeralOSDiskPreview" + ``` ++ It takes a few minutes for the status to show *Registered*. ++1. Verify the registration status: ++ ```azurecli-interactive + az feature show --namespace "Microsoft.ContainerService" --name "EnableBYOKOnEphemeralOSDiskPreview" + ``` ++1. When the status shows *Registered*, refresh the `Microsoft.ContainerService` resource provider registration: ++ ```azurecli-interactive + az provider register --namespace Microsoft.ContainerService + ``` ## Create an Azure Key Vault instance az keyvault create -n myKeyVaultName -g myResourceGroup -l myAzureRegionName -- ## Create an instance of a DiskEncryptionSet -Replace *myKeyVaultName* with the name of your key vault.
You will also need a *key* stored in Azure Key Vault to complete the following steps. Either store your existing Key in the Key Vault you created on the previous steps, or [generate a new key][key-vault-generate] and replace *myKeyName* below with the name of your key. +Replace *myKeyVaultName* with the name of your key vault. You also need a *key* stored in Azure Key Vault to complete the following steps. Either store your existing key in the key vault you created in the previous steps, or [generate a new key][key-vault-generate] and replace *myKeyName* with the name of your key. ```azurecli-interactive # Retrieve the Key Vault Id and store it in a variable az keyvault set-policy -n myKeyVaultName -g myResourceGroup --object-id $desIden ## Create a new AKS cluster and encrypt the OS disk -Create a **new resource group** and AKS cluster, then use your key to encrypt the OS disk. +Either create a new resource group or select an existing resource group hosting other AKS clusters, then use your key to encrypt the OS disk, using either network-attached OS disks or an ephemeral OS disk. By default, a cluster uses an ephemeral OS disk when the VM size and OS disk size allow it. -> [!IMPORTANT] -> Ensure you create a new resource group for your AKS cluster +Run the following command to retrieve the DiskEncryptionSet value and set a variable: ```azurecli-interactive-# Retrieve the DiskEncryptionSet value and set a variable diskEncryptionSetId=$(az disk-encryption-set show -n mydiskEncryptionSetName -g myResourceGroup --query "[id]" -o tsv)+``` ++If you want to create a new resource group for the cluster, run the following command: -# Create a resource group for the AKS cluster +```azurecli-interactive az group create -n myResourceGroup -l myAzureRegionName+``` ++To create a cluster that uses network-attached OS disks encrypted with your key, specify the `--node-osdisk-type=Managed` argument. ++```azurecli-interactive +az aks create -n myAKSCluster -g myResourceGroup --node-osdisk-diskencryptionset-id $diskEncryptionSetId --generate-ssh-keys --node-osdisk-type Managed +``` ++To create a cluster with an ephemeral OS disk encrypted with your key, specify the `--node-osdisk-type=Ephemeral` argument. You also need to specify the `--node-vm-size` argument because the default VM size is too small and doesn't support an ephemeral OS disk. -# Create the AKS cluster -az aks create -n myAKSCluster -g myResourceGroup --node-osdisk-diskencryptionset-id $diskEncryptionSetId --kubernetes-version KUBERNETES_VERSION --generate-ssh-keys +```azurecli-interactive +az aks create -n myAKSCluster -g myResourceGroup --node-osdisk-diskencryptionset-id $diskEncryptionSetId --generate-ssh-keys --node-osdisk-type Ephemeral --node-vm-size Standard_DS3_v2 ``` -When new node pools are added to the cluster created above, the customer-managed key provided during the create process is used to encrypt the OS disk. +When new node pools are added to the cluster, the customer-managed key provided during the create process is used to encrypt the OS disk. The following example shows how to deploy a new node pool with an ephemeral OS disk.
++```azurecli-interactive +az aks nodepool add --cluster-name $CLUSTER_NAME -g $RG_NAME --name $NODEPOOL_NAME --node-osdisk-type Ephemeral +``` ## Encrypt your AKS cluster data disk az aks get-credentials --name myAksCluster --resource-group myResourceGroup --ou kubectl apply -f byok-azure-disk.yaml ``` -## Using Azure tags --For more information on using Azure tags, see [Use Azure tags in Azure Kubernetes Service (AKS)][use-tags]. - ## Next steps Review [best practices for AKS cluster security][best-practices-security] Review [best practices for AKS cluster security][best-practices-security] <!-- LINKS - external --> <!-- LINKS - internal -->-[az-extension-add]: /cli/azure/extension#az_extension_add -[az-extension-update]: /cli/azure/extension#az_extension_update [best-practices-security]: ./operator-best-practices-cluster-security.md [byok-azure-portal]: ../storage/common/customer-managed-keys-configure-key-vault.md [customer-managed-keys-windows]: ../virtual-machines/disk-encryption.md#customer-managed-keys [customer-managed-keys-linux]: ../virtual-machines/disk-encryption.md#customer-managed-keys-[key-vault-generate]: ../key-vault/general/manage-with-cli2.md -[supported-regions]: ../virtual-machines/disk-encryption.md#supported-regions -[use-tags]: use-tags.md +[key-vault-generate]: ../key-vault/general/manage-with-cli2.md |
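An editorial aside for the article above: the `byok-azure-disk.yaml` manifest it applies isn't reproduced in this excerpt. A minimal sketch, assuming the Azure Disk CSI driver's `diskEncryptionSetID` storage class parameter and using placeholder subscription and resource names, could look like:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: byok
provisioner: disk.csi.azure.com
parameters:
  skuname: StandardSSD_LRS
  # Point the provisioner at the DiskEncryptionSet created earlier.
  diskEncryptionSetID: "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/diskEncryptionSets/mydiskEncryptionSetName"
```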
aks | Concepts Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-storage.md | Title: Concepts - Storage in Azure Kubernetes Services (AKS) description: Learn about storage in Azure Kubernetes Service (AKS), including volumes, persistent volumes, storage classes, and claims. Previously updated : 06/20/2023 Last updated : 06/27/2023 By contrast, ephemeral OS disks are stored only on the host machine, just like a Size requirements and recommendations for ephemeral OS disks are available in the [Azure VM documentation][azure-vm-ephemeral-os-disks]. The following are some general sizing considerations: -* If you chose to use the AKS default VM size [Standard_DS2_v2](../virtual-machines/dv2-dsv2-series.md#dsv2-series) SKU with the default OS disk size of 100 GB, the default VM size supports ephemeral OS, but only has 86 GiB of cache size. This configuration would default to managed disks if you don't explicitly specify it. If you do request an ephemeral OS, you receive a validation error. +* If you chose to use the AKS default VM size [Standard_DS2_v2](../virtual-machines/dv2-dsv2-series.md#dsv2-series) SKU with the default OS disk size of 100 GiB, the default VM size supports ephemeral OS, but only has 86 GiB of cache size. This configuration would default to managed disks if you don't explicitly specify it. If you do request an ephemeral OS, you receive a validation error. -* If you request the same [Standard_DS2_v2](../virtual-machines/dv2-dsv2-series.md#dsv2-series) SKU with a 60 GiB OS disk, this configuration would default to ephemeral OS. The requested size of 60 GiB is smaller than the maximum cache size of 86 GiB. +* If you request the same [Standard_DS2_v2](../virtual-machines/dv2-dsv2-series.md#dsv2-series) SKU with a 60-GiB OS disk, this configuration would default to ephemeral OS. The requested size of 60 GiB is smaller than the maximum cache size of 86 GiB. -* If you select the [Standard_D8s_v3](../virtual-machines/dv3-dsv3-series.md#dsv3-series) SKU with 100 GB OS disk, this VM size supports ephemeral OS and has 200 GiB of cache space. If you don't specify the OS disk type, the node pool would receive ephemeral OS by default. +* If you select the [Standard_D8s_v3](../virtual-machines/dv3-dsv3-series.md#dsv3-series) SKU with 100-GB OS disk, this VM size supports ephemeral OS and has 200 GiB of cache space. If you don't specify the OS disk type, the node pool would receive ephemeral OS by default. The latest generation of VM series doesn't have a dedicated cache, but only temporary storage. For example, if you selected the [Standard_E2bds_v5](../virtual-machines/ebdsv5-ebsv5-series.md#ebdsv5-series) VM size with the default OS disk size of 100 GiB, it supports ephemeral OS disks, but only has 75 GB of temporary storage. This configuration would default to managed OS disks if you don't explicitly specify it. If you do request an ephemeral OS disk, you receive a validation error. -* If you request the same [Standard_E2bds_v5](../virtual-machines/ebdsv5-ebsv5-series.md#ebdsv5-series) VM size with a 60 GiB OS disk, this configuration defaults to ephemeral OS disks. The requested size of 60 GiB is smaller than the maximum temporary storage of 75 GiB. +* If you request the same [Standard_E2bds_v5](../virtual-machines/ebdsv5-ebsv5-series.md#ebdsv5-series) VM size with a 60-GiB OS disk, this configuration defaults to ephemeral OS disks. The requested size of 60 GiB is smaller than the maximum temporary storage of 75 GiB. 
-* If you select [Standard_E4bds_v5](../virtual-machines/ebdsv5-ebsv5-series.md#ebdsv5-series) SKU with 100 GiB OS disk, this VM size supports ephemeral OS +* If you select the [Standard_E4bds_v5](../virtual-machines/ebdsv5-ebsv5-series.md#ebdsv5-series) SKU with a 100-GiB OS disk, this VM size supports ephemeral OS and has 150 GiB of temporary storage. If you don't specify the OS disk type, by default Azure provisions an ephemeral OS disk to the node pool. -### Customer Managed key +### Customer-managed keys -You can manage encryption for your ephemeral OS disk with your own keys on an AKS cluster. For more information, see [Azure ephemeral OS disks Customer Managed key][azure-disk-customer-managed-key]. +You can manage encryption for your ephemeral OS disk with your own keys on an AKS cluster. For more information, see [Use Customer Managed key with Azure disk on AKS][azure-disk-customer-managed-key]. ## Volumes Kubernetes typically treats individual pods as ephemeral, disposable resources. Traditional volumes are created as Kubernetes resources backed by Azure Storage. You can manually create data volumes to be assigned to pods directly or have Kubernetes automatically create them. Data volumes can use: [Azure Disk][disks-types], [Azure Files][storage-files-planning], [Azure NetApp Files][azure-netapp-files-service-levels], or [Azure Blobs][storage-account-overview]. > [!NOTE]-> Depending on the VM SKU you're using, the Azure Disk CSI driver might have a per-node volume limit. For some high perfomance VMs (for example, 16 cores), the limit is 64 volumes per node. To identify the limit per VM SKU, review the **Max data disks** column for each VM SKU offered. For a list of VM SKUs offered and their corresponding detailed capacity limits, see [General purpose virtual machine sizes][general-purpose-machine-sizes]. +> Depending on the VM SKU you're using, the Azure Disk CSI driver might have a per-node volume limit. For some high performance VMs (for example, 16 cores), the limit is 64 volumes per node. To identify the limit per VM SKU, review the **Max data disks** column for each VM SKU offered. For a list of VM SKUs offered and their corresponding detailed capacity limits, see [General purpose virtual machine sizes][general-purpose-machine-sizes]. To help determine the best fit for your workload between Azure Files and Azure NetApp Files, review the information provided in the article [Azure Files and Azure NetApp Files comparison][azure-files-azure-netapp-comparison]. Use [Azure Disk][azure-disk-csi] to create a Kubernetes *DataDisk* resource. Dis > [!TIP] > For most production and development workloads, use Premium SSD. -Because Azure Disk are mounted as *ReadWriteOnce*, they're only available to a single node. For storage volumes that can be accessed by pods on multiple nodes simultaneously, use Azure Files. +Because Azure Disks are mounted as *ReadWriteOnce*, they're only available to a single node. For storage volumes accessible by pods on multiple nodes simultaneously, use Azure Files. ### Azure Files You can use *secret* volumes to inject sensitive data into pods, such as passwor 1. Define your pod or deployment and request a specific Secret. * Secrets are only provided to nodes with a scheduled pod that requires them. * The Secret is stored in *tmpfs*, not written to disk.-1. When you delete the last pod on a node requiring a Secret, the Secret is deleted from the node's tmpfs. - * Secrets are stored within a given namespace and can only be accessed by pods within the same namespace. +1.
When you delete the last pod on a node requiring a Secret, the Secret is deleted from the node's tmpfs. + * Secrets are stored within a given namespace and can be accessed only by pods within the same namespace. #### configMap Like using a secret: 1. Create a ConfigMap using the Kubernetes API. 1. Request the ConfigMap when you define a pod or deployment.- * ConfigMaps are stored within a given namespace and can only be accessed by pods within the same namespace. + * ConfigMaps are stored within a given namespace and can be accessed only by pods within the same namespace. ## Persistent volumes You can use [Azure Disk](azure-csi-disk-storage-provision.md) or [Azure Files](a  -A PersistentVolume can be *statically* created by a cluster administrator, or *dynamically* created by the Kubernetes API server. If a pod is scheduled and requests currently unavailable storage, Kubernetes can create the underlying Azure Disk or File storage and attach it to the pod. Dynamic provisioning uses a *StorageClass* to identify what type of Azure storage needs to be created. +A cluster administrator can create a PersistentVolume *statically*, or the Kubernetes API server can create one *dynamically*. If a pod is scheduled and requests currently unavailable storage, Kubernetes can create the underlying Azure Disk or File storage and attach it to the pod. Dynamic provisioning uses a *StorageClass* to identify what type of Azure storage needs to be created. > [!IMPORTANT] > Persistent volumes can't be shared by Windows and Linux pods due to differences in file system support between the two operating systems. For clusters using the [Container Storage Interface (CSI) drivers][csi-storage-d | Permission | Reason | |||-| `managed-csi` | Uses Azure StandardSSD locally redundant storage (LRS) to create a Managed Disk.
The reclaim policy ensures that the underlying Azure Disk is deleted when the persistent volume that used it is deleted. The storage class also configures the persistent volumes to be expandable; you just need to edit the persistent volume claim with the new size. | +| `managed-csi-premium` | Uses Azure Premium locally redundant storage (LRS) to create a Managed Disk. The reclaim policy again ensures that the underlying Azure Disk is deleted when the persistent volume that used it is deleted. Similarly, this storage class allows for persistent volumes to be expanded. | +| `azurefile-csi` | Uses Azure Standard storage to create an Azure file share. The reclaim policy ensures that the underlying Azure file share is deleted when the persistent volume that used it is deleted. | +| `azurefile-csi-premium` | Uses Azure Premium storage to create an Azure file share. The reclaim policy ensures that the underlying Azure file share is deleted when the persistent volume that used it is deleted.| +| `azureblob-nfs-premium` | Uses Azure Premium storage to create an Azure Blob storage container and connect using the NFS v3 protocol. The reclaim policy ensures that the underlying Azure Blob storage container is deleted when the persistent volume that used it is deleted. | +| `azureblob-fuse-premium` | Uses Azure Premium storage to create an Azure Blob storage container and connect using BlobFuse. The reclaim policy ensures that the underlying Azure Blob storage container is deleted when the persistent volume that used it is deleted. | -Unless you specify a StorageClass for a persistent volume, the default StorageClass will be used. Ensure volumes use the appropriate storage you need when requesting persistent volumes. +Unless you specify a StorageClass for a persistent volume, the default StorageClass is used. Ensure volumes use the appropriate storage you need when requesting persistent volumes. > [!IMPORTANT]-> Starting with Kubernetes version 1.21, AKS only uses CSI drivers by default and CSI migration is enabled. While existing in-tree persistent volumes continue to function, starting with version 1.26, AKS will no longer support volumes created using in-tree driver and storage provisioned for files and disk. +> Starting with Kubernetes version 1.21, AKS only uses CSI drivers by default and CSI migration is enabled. While existing in-tree persistent volumes continue to function, starting with version 1.26, AKS will no longer support volumes created using the in-tree driver and storage provisioned for files and disk. > > The `default` class will be the same as `managed-csi`. -You can create a StorageClass for additional needs using `kubectl`. The following example uses Premium Managed Disks and specifies that the underlying Azure Disk should be *retained* when you delete the pod: +You can create a StorageClass for other needs using `kubectl`.
The following example uses Premium Managed Disks and specifies that the underlying Azure Disk should be *retained* when you delete the pod: ```yaml apiVersion: storage.k8s.io/v1 For more information on core Kubernetes and AKS concepts, see the following arti [azure-disk-csi]: azure-disk-csi.md [azure-netapp-files]: azure-netapp-files.md [azure-files-csi]: azure-files-csi.md-[azure-files-volume]: azure-files-volume.md [aks-concepts-clusters-workloads]: concepts-clusters-workloads.md [aks-concepts-identity]: concepts-identity.md [aks-concepts-scale]: concepts-scale.md [aks-concepts-security]: concepts-security.md [aks-concepts-network]: concepts-network.md [operator-best-practices-storage]: operator-best-practices-storage.md-[csi-storage-drivers]: csi-storage-drivers.md [azure-blob-csi]: azure-blob-csi.md [general-purpose-machine-sizes]: ../virtual-machines/sizes-general.md [azure-files-azure-netapp-comparison]: ../storage/files/storage-files-netapp-comparison.md-[azure-disk-customer-managed-key]: ../virtual-machines/ephemeral-os-disks.md#customer-managed-key +[azure-disk-customer-managed-key]: azure-disk-customer-managed-keys.md |
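An editorial aside for the excerpt above: the Premium-with-*Retain* StorageClass example is truncated in this digest. A minimal sketch consistent with the surrounding text (the class name `managed-premium-retain` is an assumption) might be:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-premium-retain
provisioner: disk.csi.azure.com
parameters:
  skuName: Premium_LRS
# Keep the underlying Azure Disk when the persistent volume is deleted.
reclaimPolicy: Retain
allowVolumeExpansion: true
```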
aks | Use Multiple Node Pools | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-multiple-node-pools.md | Title: Use multiple node pools in Azure Kubernetes Service (AKS) description: Learn how to create and manage multiple node pools for a cluster in Azure Kubernetes Service (AKS) Previously updated : 03/11/2023 Last updated : 06/27/2023 # Create and manage multiple node pools for a cluster in Azure Kubernetes Service (AKS) -In Azure Kubernetes Service (AKS), nodes of the same configuration are grouped together into *node pools*. These node pools contain the underlying VMs that run your applications. The initial number of nodes and their size (SKU) is defined when you create an AKS cluster, which creates a [system node pool][use-system-pool]. To support applications that have different compute or storage demands, you can create additional *user node pools*. System node pools serve the primary purpose of hosting critical system pods such as CoreDNS and `konnectivity`. User node pools serve the primary purpose of hosting your application pods. However, application pods can be scheduled on system node pools if you wish to only have one pool in your AKS cluster. User node pools are where you place your application-specific pods. For example, use these additional user node pools to provide GPUs for compute-intensive applications, or access to high-performance SSD storage. +In Azure Kubernetes Service (AKS), nodes of the same configuration are grouped together into *node pools*. These node pools contain the underlying VMs that run your applications. The initial number of nodes and their size (SKU) is defined when you create an AKS cluster, which creates a [system node pool][use-system-pool]. To support applications that have different compute or storage demands, you can create more *user node pools*. System node pools serve the primary purpose of hosting critical system pods such as CoreDNS and `konnectivity`. User node pools serve the primary purpose of hosting your application pods. However, application pods can be scheduled on system node pools if you wish to only have one pool in your AKS cluster. User node pools are where you place your application-specific pods. For example, use more user node pools to provide GPUs for compute-intensive applications, or access to high-performance SSD storage. > [!NOTE] > This feature enables higher control over how to create and manage multiple node pools. As a result, separate commands are required for create/update/delete. Previously, cluster operations through `az aks create` or `az aks update` used the managedCluster API and were the only options to change your control plane and a single node pool. This feature exposes a separate operation set for agent pools through the agentPool API and requires use of the `az aks nodepool` command set to execute operations on an individual node pool. -This article shows you how to create and manage multiple node pools in an AKS cluster. +This article shows you how to create and manage one or more node pools in an AKS cluster. ## Before you begin -You need the Azure CLI version 2.2.0 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli]. +* You need the Azure CLI version 2.2.0 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
+* Review [Storage options for applications in Azure Kubernetes Service][aks-storage-concepts] to plan your storage configuration. ## Limitations The following example output shows that *mynodepool* has been successfully creat ### Add an ARM64 node pool -The ARM64 processor provides low power compute for your Kubernetes workloads. To create an ARM64 node pool, you will need to choose a [Dpsv5][arm-sku-vm1], [Dplsv5][arm-sku-vm2] or [Epsv5][arm-sku-vm3] series Virtual Machine. +The ARM64 processor provides low power compute for your Kubernetes workloads. To create an ARM64 node pool, you need to choose a [Dpsv5][arm-sku-vm1], [Dplsv5][arm-sku-vm2] or [Epsv5][arm-sku-vm3] series Virtual Machine. #### Limitations -* ARM64 node pools are not supported on Defender-enabled clusters -* FIPS-enabled node pools are not supported with ARM64 SKUs +* ARM64 node pools aren't supported on Defender-enabled clusters +* FIPS-enabled node pools aren't supported with ARM64 SKUs Use the `az aks nodepool add` command to add an ARM64 node pool. az aks nodepool add \ ### Add an Azure Linux node pool -The Azure Linux container host for AKS is an open-source Linux distribution available as an AKS container host. It provides high reliability, security, and consistency. It only includes the minimal set of packages needed for running container workloads, which improves boot times and overall performance. +The Azure Linux container host for AKS is an open-source Linux distribution available as an AKS container host. It provides high reliability, security, and consistency. It only includes the minimal set of packages needed for running container workloads, which improves boot times and overall performance. You can add an Azure Linux node pool into your existing cluster using the `az aks nodepool add` command and specifying `--os-sku AzureLinux`. az aks nodepool add \ ### Use the following instructions to migrate your Ubuntu nodes to Azure Linux nodes. -1. Add a Azure Linux node pool into your existing cluster using the `az aks nodepool add` command and specifying `--os-sku AzureLinux`. +1. Add an Azure Linux node pool into your existing cluster using the `az aks nodepool add` command and specifying `--os-sku AzureLinux`. > [!NOTE] > When adding a new Azure Linux node pool, you need to add at least one as `--mode System`. Otherwise, AKS won't allow you to delete your existing Ubuntu node pool. -2. [Cordon the existing Ubuntu nodes][cordon-and-drain]. +2. [Cordon the existing Ubuntu nodes][cordon-and-drain]. 3. [Drain the existing Ubuntu nodes][drain-nodes]. 4. Remove the existing Ubuntu nodes using the `az aks nodepool delete` command. A workload may require splitting a cluster's nodes into separate pools for logic * All subnets assigned to node pools must belong to the same virtual network. * System pods must have access to all nodes/pods in the cluster to provide critical functionality such as DNS resolution and tunneling kubectl logs/exec/port-forward proxy. * If you expand your VNET after creating the cluster you must update your cluster (perform any managed cluster operation but node pool operations don't count) before adding a subnet outside the original CIDR block. AKS will error-out on the agent pool add now though we originally allowed it. The `aks-preview` Azure CLI extension (version 0.5.66+) now supports running `az aks update -g <resourceGroup> -n <clusterName>` without any optional arguments. This command will perform an update operation without making any changes, which can recover a cluster stuck in a failed state.
+* If you expand your VNET after creating the cluster, you must update your cluster (perform any managed cluster operations, but node pool operations don't count) before adding a subnet outside the original CIDR block. While AKS errors out on the agent pool add, the `aks-preview` Azure CLI extension (version 0.5.66+) now supports running `az aks update -g <resourceGroup> -n <clusterName>` without any optional arguments. This command performs an update operation without making any changes, which can recover a cluster stuck in a failed state. * In clusters with Kubernetes version < 1.23.3, kube-proxy will SNAT traffic from new subnets, which can cause Azure Network Policy to drop the packets. * Windows nodes will SNAT traffic to the new subnets until the node pool is reimaged. * Internal load balancers default to one of the node pool subnets (usually the first subnet of the node pool at cluster creation). To override this behavior, you can [specify the load balancer's subnet explicitly using an annotation][internal-lb-different-subnet]. -To create a node pool with a dedicated subnet, pass the subnet resource ID as an additional parameter when creating a node pool. +To create a node pool with a dedicated subnet, pass the subnet resource ID as another parameter when creating a node pool. ```azurecli-interactive az aks nodepool add \ az aks nodepool add \ > [!NOTE] > Upgrade and scale operations on a cluster or node pool cannot occur simultaneously; if attempted, an error is returned. Instead, each operation type must complete on the target resource prior to the next request on that same resource. Read more about this on our [troubleshooting guide](./troubleshooting.md#im-receiving-errors-when-trying-to-upgrade-or-scale-that-state-my-cluster-is-being-upgraded-or-has-failed-upgrade). -The commands in this section explain how to upgrade a single specific node pool. The relationship between upgrading the Kubernetes version of the control plane and the node pool are explained in the [section below](#upgrade-a-cluster-control-plane-with-multiple-node-pools). +The commands in this section explain how to upgrade a single specific node pool. The relationship between upgrading the Kubernetes version of the control plane and the node pool is explained in the [Upgrade a cluster control plane with multiple node pools](#upgrade-a-cluster-control-plane-with-multiple-node-pools) section. > [!NOTE]-> The node pool OS image version is tied to the Kubernetes version of the cluster. You will only get OS image upgrades, following a cluster upgrade. +> The node pool OS image version is tied to the Kubernetes version of the cluster. You only get OS image upgrades following a cluster upgrade. Since there are two node pools in this example, we must use [`az aks nodepool upgrade`][az-aks-nodepool-upgrade] to upgrade a node pool. To see the available upgrades, use [`az aks get-upgrades`][az-aks-get-upgrades] As a best practice, you should upgrade all node pools in an AKS cluster to the s ## Upgrade a cluster control plane with multiple node pools > [!NOTE]-> Kubernetes uses the standard [Semantic Versioning](https://semver.org/) versioning scheme. The version number is expressed as *x.y.z*, where *x* is the major version, *y* is the minor version, and *z* is the patch version. For example, in version *1.12.6*, 1 is the major version, 12 is the minor version, and 6 is the patch version. The Kubernetes version of the control plane and the initial node pool are set during cluster creation.
All additional node pools have their Kubernetes version set when they are added to the cluster. The Kubernetes versions may differ between node pools as well as between a node pool and the control plane. +> Kubernetes uses the standard [Semantic Versioning](https://semver.org/) versioning scheme. The version number is expressed as *x.y.z*, where *x* is the major version, *y* is the minor version, and *z* is the patch version. For example, in version *1.12.6*, 1 is the major version, 12 is the minor version, and 6 is the patch version. The Kubernetes version of the control plane and the initial node pool are set during cluster creation. Other node pools have their Kubernetes version set when they are added to the cluster. The Kubernetes versions may differ between node pools as well as between a node pool and the control plane. An AKS cluster has two cluster resource objects with Kubernetes versions associated. Upgrading individual node pools requires using `az aks nodepool upgrade`. This c ### Validation rules for upgrades -The valid Kubernetes upgrades for a cluster's control plane and node pools are validated by the following sets of rules. +Kubernetes upgrades for a cluster's control plane and node pools are validated using the following sets of rules. * Rules for valid versions to upgrade node pools: * The node pool version must have the same *major* version as the control plane. It takes a few minutes to delete the nodes and the node pool. As your application workload demands, you may associate node pools with capacity reservation groups you've already created. This ensures guaranteed capacity is allocated for your node pools. -For more information on the capacity reservation groups, please refer to [Capacity Reservation Groups][capacity-reservation-groups]. +For more information on the capacity reservation groups, review [Capacity Reservation Groups][capacity-reservation-groups]. ### Register preview feature az provider register --namespace Microsoft.ContainerService ### Manage capacity reservations -Associating a node pool with an existing capacity reservation group can be done using [`az aks nodepool add`][az-aks-nodepool-add] command and specifying a capacity reservation group with the --capacityReservationGroup flag". The capacity reservation group should already exist, otherwise the node pool will be added to the cluster with a warning and no capacity reservation group gets associated. +Associating a node pool with an existing capacity reservation group can be done using the [`az aks nodepool add`][az-aks-nodepool-add] command and specifying a capacity reservation group with the `--capacityReservationGroup` flag. The capacity reservation group should already exist; otherwise, the node pool is added to the cluster with a warning and no capacity reservation group gets associated. ```azurecli-interactive az aks nodepool add -g MyRG --cluster-name MyMC -n myAP --capacityReservationGroup myCRG Associating a system node pool with an existing capacity reservation group can be done using the `az aks create` command and the `--capacityReservationGroup` flag: az aks create -g MyRG --cluster-name MyMC --capacityReservationGroup myCRG ``` -Deleting a node pool command will implicitly dissociate a node pool from any associated capacity reservation group, before that node pool is deleted. +Deleting a node pool implicitly dissociates that node pool from any associated capacity reservation group before it's deleted.
```azurecli-interactive az aks nodepool delete -g MyRG --cluster-name MyMC -n myAP Events: Normal Started 4m40s kubelet Started container ``` -Only pods that have this toleration applied can be scheduled on nodes in *taintnp*. Any other pod would be scheduled in the *nodepool1* node pool. If you create additional node pools, you can use additional taints and tolerations to limit what pods can be scheduled on those node resources. +Only pods that have this toleration applied can be scheduled on nodes in *taintnp*. Any other pod would be scheduled in the *nodepool1* node pool. If you create more node pools, you can use taints and tolerations to limit what pods can be scheduled on those node resources. ### Setting node pool labels To delete the cluster itself, use the [`az group delete`][az-group-delete] comma az group delete --name myResourceGroup --yes --no-wait ``` -You can also delete the additional cluster you created for the public IP for node pools scenario. +You can also delete the other cluster you created for the public IP for node pools scenario. ```azurecli-interactive az group delete --name myResourceGroup2 --yes --no-wait az group delete --name myResourceGroup2 --yes --no-wait * Use [instance-level public IP addresses](use-node-public-ips.md) to make your nodes able to serve traffic directly. <!-- EXTERNAL LINKS -->--[kubernetes-drain]: https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/ -[kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get -[kubectl-taint]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#taint [kubectl-describe]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#describe-[kubernetes-labels]: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/ -[kubernetes-label-syntax]: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set [capacity-reservation-groups]:/azure/virtual-machines/capacity-reservation-associate-virtual-machine-scale-set <!-- INTERNAL LINKS -->+[aks-storage-concepts]: concepts-storage.md [arm-sku-vm1]: ../virtual-machines/dpsv5-dpdsv5-series.md [arm-sku-vm2]: ../virtual-machines/dplsv5-dpldsv5-series.md [arm-sku-vm3]: ../virtual-machines/epsv5-epdsv5-series.md az group delete --name myResourceGroup2 --yes --no-wait [az-aks-get-upgrades]: /cli/azure/aks#az_aks_get_upgrades [az-aks-nodepool-add]: /cli/azure/aks/nodepool#az_aks_nodepool_add [az-aks-nodepool-list]: /cli/azure/aks/nodepool#az_aks_nodepool_list-[az-aks-nodepool-update]: /cli/azure/aks/nodepool#az_aks_nodepool_update [az-aks-nodepool-upgrade]: /cli/azure/aks/nodepool#az_aks_nodepool_upgrade [az-aks-nodepool-scale]: /cli/azure/aks/nodepool#az_aks_nodepool_scale [az-aks-nodepool-delete]: /cli/azure/aks/nodepool#az_aks_nodepool_delete-[az-aks-show]: /cli/azure/aks#az_aks_show -[az-extension-add]: /cli/azure/extension#az_extension_add -[az-extension-update]: /cli/azure/extension#az_extension_update [az-feature-register]: /cli/azure/feature#az_feature_register-[az-feature-list]: /cli/azure/feature#az_feature_list [az-provider-register]: /cli/azure/provider#az_provider_register [az-group-create]: /cli/azure/group#az_group_create [az-group-delete]: /cli/azure/group#az_group_delete [az-deployment-group-create]: /cli/azure/deployment/group#az_deployment_group_create-[az-aks-nodepool-add]: /cli/azure/aks#az_aks_nodepool_add [enable-fips-nodes]: enable-fips-nodes.md-[gpu-cluster]: gpu-cluster.md [install-azure-cli]: 
/cli/azure/install-azure-cli [operator-best-practices-advanced-scheduler]: operator-best-practices-advanced-scheduler.md [quotas-skus-regions]: quotas-skus-regions.md-[supported-versions]: supported-kubernetes-versions.md -[tag-limitation]: ../azure-resource-manager/management/tag-resources.md [taints-tolerations]: operator-best-practices-advanced-scheduler.md#provide-dedicated-nodes-using-taints-and-tolerations [vm-sizes]: ../virtual-machines/sizes.md [use-system-pool]: use-system-pools.md-[node-resource-group]: faq.md#why-are-two-resource-groups-created-with-aks -[vmss-commands]: ../virtual-machine-scale-sets/virtual-machine-scale-sets-networking.md#public-ipv4-per-virtual-machine -[az-list-ips]: /cli/azure/vmss#az_vmss_list_instance_public_ips [reduce-latency-ppg]: reduce-latency-ppg.md-[public-ip-prefix-benefits]: ../virtual-network/ip-services/public-ip-address-prefix.md -[az-public-ip-prefix-create]: /cli/azure/network/public-ip/prefix#az_network_public_ip_prefix_create -[node-image-upgrade]: node-image-upgrade.md -[use-tags]: use-tags.md +[use-tags]: use-tags.md [use-labels]: use-labels.md [cordon-and-drain]: resize-node-pool.md#cordon-the-existing-nodes [internal-lb-different-subnet]: internal-lb.md#specify-a-different-subnet |
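As an editorial sketch related to the taints and tolerations discussion in the article above (not part of the commit), a dedicated node pool could be created as follows; the pool name *taintnp*, the `sku=gpu:NoSchedule` taint, and the `dept=IT` label are illustrative values:

```azurecli-interactive
# Add a user node pool that only pods with a matching toleration can be scheduled on.
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name taintnp \
    --node-count 1 \
    --node-taints sku=gpu:NoSchedule \
    --labels dept=IT
```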
aks | Use Pod Sandboxing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-pod-sandboxing.md | When a pod uses the *kata-mshv-vm-isolation* runtimeClass, it creates a VM to se ## Deploy new cluster -Perform the following steps to deploy a Azure Linux AKS cluster using the Azure CLI. +Perform the following steps to deploy an Azure Linux AKS cluster using the Azure CLI. 1. Create an AKS cluster using the [az aks create][az-aks-create] command and specifying the following parameters: Perform the following steps to deploy a Azure Linux AKS cluster using the Azure ```azurecli-interactive az aks create --name myAKSCluster --resource-group myResourceGroup --os-sku AzureLinux --workload-runtime KataMshvVmIsolation --node-vm-size Standard_D4s_v3 --node-count 1 ``` 2. Run the following command to get access credentials for the Kubernetes cluster. Use the [az aks get-credentials][aks-get-credentials] command and replace the values for the cluster name and the resource group name. kubectl delete pod pod-name ## Next steps -* Learn more about [Azure Dedicated hosts][azure-dedicated-hosts] for nodes with your AKS cluster to use hardware isolation and control over Azure platform maintenance events. +Learn more about [Azure Dedicated hosts][azure-dedicated-hosts] for nodes with your AKS cluster to use hardware isolation and control over Azure platform maintenance events. <!-- EXTERNAL LINKS --> [kata-containers-overview]: https://katacontainers.io/ |
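To complement the excerpt above, a minimal pod manifest that opts into the *kata-mshv-vm-isolation* runtime class might look like the following sketch; the pod name and `nginx` image are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: untrusted-workload
spec:
  # Schedule this pod inside an isolated VM via the Kata runtime class.
  runtimeClassName: kata-mshv-vm-isolation
  containers:
  - name: app
    image: nginx
```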
app-service | Deploy Staging Slots | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-staging-slots.md | Title: Set up staging environments -description: Learn how to deploy apps to a non-production slot and autoswap into production. Increase the reliability and eliminate app downtime from deployments. +description: Learn how to deploy apps to a nonproduction slot and autoswap into production. Increase the reliability and eliminate app downtime from deployments. ms.assetid: e224fc4f-800d-469a-8d6a-72bcde612450 Previously updated : 04/30/2020 Last updated : 07/30/2023 -Deploying your application to a non-production slot has the following benefits: +Deploying your application to a nonproduction slot has the following benefits: * You can validate app changes in a staging deployment slot before swapping it with the production slot. * Deploying an app to a slot first and swapping it into production makes sure that all instances of the slot are warmed up before being swapped into production. This eliminates downtime when you deploy your app. The traffic redirection is seamless, and no requests are dropped because of swap operations. You can automate this entire workflow by configuring [auto swap](#Auto-Swap) when pre-swap validation isn't needed. * After a swap, the slot with previously staged app now has the previous production app. If the changes swapped into the production slot aren't as you expect, you can perform the same swap immediately to get your "last known good site" back. -Each App Service plan tier supports a different number of deployment slots. There's no additional charge for using deployment slots. To find out the number of slots your app's tier supports, see [App Service limits](../azure-resource-manager/management/azure-subscription-service-limits.md#app-service-limits). +Each App Service plan tier supports a different number of deployment slots. There's no extra charge for using deployment slots. To find out the number of slots your app's tier supports, see [App Service limits](../azure-resource-manager/management/azure-subscription-service-limits.md#app-service-limits). To scale your app to a different tier, make sure that the target tier supports the number of slots your app already uses. For example, if your app has more than five slots, you can't scale it down to the **Standard** tier, because the **Standard** tier supports only five deployment slots. +## Prerequisites ++For information on the permissions you need to perform the slot operation you want, see [Resource provider operations](../role-based-access-control/resource-provider-operations.md#microsoftweb) (search for *slot*, for example). + <a name="Add"></a> ## Add a slot The app must be running in the **Standard**, **Premium**, or **Isolated** tier in order for you to enable multiple deployment slots. +# [Azure portal](#tab/portal) -1. in the [Azure portal](https://portal.azure.com/), search for and select **App Services** and select your app. - -  - +1. In the [Azure portal](https://portal.azure.com), navigate to your app's management page. -2. In the left pane, select **Deployment slots** > **Add Slot**. +1. In the left pane, select **Deployment slots** > **Add Slot**. -  - - > [!NOTE] - > If the app isn't already in the **Standard**, **Premium**, or **Isolated** tier, you receive a message that indicates the supported tiers for enabling staged publishing. At this point, you have the option to select **Upgrade** and go to the **Scale** tab of your app before continuing. 
- > + > [!NOTE] + > If the app isn't already in the **Standard**, **Premium**, or **Isolated** tier, select **Upgrade** and go to the **Scale** tab of your app before continuing. 3. In the **Add a slot** dialog box, give the slot a name, and select whether to clone an app configuration from another deployment slot. Select **Add** to continue.- -  ++ :::image type="content" source="media/web-sites-staged-publishing/configure-new-slot.png" alt-text="A screenshot that shows how to configure a new deployment slot called 'staging' in the portal." lightbox="media/web-sites-staged-publishing/configure-new-slot.png"::: You can clone a configuration from any existing slot. Settings that can be cloned include app settings, connection strings, language framework versions, web sockets, HTTP version, and platform bitness. - > [!NOTE] - > Currently, a Private Endpoint isn't cloned across slots. - > -+ > [!NOTE] + > Currently, a private endpoint isn't cloned across slots. + 4. After the slot is added, select **Close** to close the dialog box. The new slot is now shown on the **Deployment slots** page. By default, **Traffic %** is set to 0 for the new slot, with all customer traffic routed to the production slot. 5. Select the new deployment slot to open that slot's resource page.- -  ++ :::image type="content" source="media/web-sites-staged-publishing/open-deployment-slot.png" alt-text="A screenshot that shows how to open deployment slot's management page in the portal." lightbox="media/web-sites-staged-publishing/open-deployment-slot.png"::: The staging slot has a management page just like any other App Service app. You can change the slot's configuration. To remind you that you're viewing the deployment slot, the app name is shown as **\<app-name>/\<slot-name>**, and the app type is **App Service (Slot)**. You can also see the slot as a separate app in your resource group, with the same designations. 6. Select the app URL on the slot's resource page. The deployment slot has its own host name and is also a live app. To limit public access to the deployment slot, see [Azure App Service IP restrictions](app-service-ip-restrictions.md). +# [Azure CLI](#tab/cli) ++Run the following command in a terminal: ++```azurecli-interactive +az webapp deployment slot create --name <app-name> --resource-group <group-name> --slot <slot-name> +``` ++For more information, see [az webapp deployment slot create](/cli/azure/webapp/deployment/slot#az-webapp-deployment-slot-create). ++# [Azure PowerShell](#tab/powershell) ++Run the following cmdlet in a PowerShell terminal: ++```azurepowershell-interactive +New-AzWebAppSlot -ResourceGroupName <group-name> -Name <app-name> -Slot <slot-name> -AppServicePlan <plan-name> +``` ++For more information, see [New-AzWebAppSlot](/powershell/module/az.websites/new-azwebappslot). ++-- + The new deployment slot has no content, even if you clone the settings from a different slot. For example, you can [publish to this slot with Git](./deploy-local-git.md). You can deploy to the slot from a different repository branch or a different repository. Get publish profile [from Azure App Service](/visualstudio/azure/how-to-get-publish-profile-from-azure-app-service) can provide required information to deploy to the slot. The profile can be imported by Visual Studio to deploy contents to the slot. -The slot's URL will be of the format `http://sitename-slotname.azurewebsites.net`. 
To keep the URL length within necessary DNS limits, the site name will be truncated at 40 characters, the slot name will be truncated at 19 characters, and an additional 4 random characters will be appended to ensure the resulting domain name is unique. +The slot's URL has the format `http://sitename-slotname.azurewebsites.net`. To keep the URL length within necessary DNS limits, the site name is truncated at 40 characters, the slot name is truncated at 19 characters, and 4 extra random characters are appended to ensure the resulting domain name is unique. <a name="AboutConfiguration"></a> At any point of the swap operation, all work of initializing the swapped apps ha [!INCLUDE [app-service-deployment-slots-settings](../../includes/app-service-deployment-slots-settings.md)] -To configure an app setting or connection string to stick to a specific slot (not swapped), go to the **Configuration** page for that slot. Add or edit a setting, and then select **deployment slot setting**. Selecting this check box tells App Service that the setting is not swappable. +To configure an app setting or connection string to stick to a specific slot (not swapped), go to the **Configuration** page for that slot. Add or edit a setting, and then select **deployment slot setting**. Selecting this check box tells App Service that the setting isn't swappable. - <a name="Swap"></a> You can swap deployment slots on your app's **Deployment slots** page and the ** > > +# [Azure portal](#tab/portal) + To swap deployment slots: 1. Go to your app's **Deployment slots** page and select **Swap**.- -  ++ :::image type="content" source="media/web-sites-staged-publishing/swap-initiate.png" alt-text="A screenshot that shows how to initiate a swap operation in the portal." lightbox="media/web-sites-staged-publishing/swap-initiate.png"::: The **Swap** dialog box shows settings in the selected source and target slots that will be changed. 2. Select the desired **Source** and **Target** slots. Usually, the target is the production slot. Also, select the **Source Changes** and **Target Changes** tabs and verify that the configuration changes are expected. When you're finished, you can swap the slots immediately by selecting **Swap**. -  + :::image type="content" source="media/web-sites-staged-publishing/swap-configure-source-target-slots.png" alt-text="A screenshot that shows how to configure and complete a swap in the portal." lightbox="media/web-sites-staged-publishing/swap-configure-source-target-slots.png"::: To see how your target slot would run with the new settings before the swap actually happens, don't select **Swap**, but follow the instructions in [Swap with preview](#Multi-Phase). 3. When you're finished, close the dialog box by selecting **Close**. +# [Azure CLI](#tab/cli) ++To swap a slot into production, run the following command in a terminal: ++```azurecli-interactive +az webapp deployment slot swap --resource-group <group-name> --name <app-name> --slot <source-slot-name> --target-slot production +``` ++For more information, see [az webapp deployment slot swap](/cli/azure/webapp/deployment/slot#az-webapp-deployment-slot-swap). 
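For example, with a hypothetical app `contoso-app` in resource group `contoso-rg` and a slot named `staging`, the call would look like this (the names are illustrative only):

```azurecli-interactive
# Swap the 'staging' slot of a hypothetical app into production
az webapp deployment slot swap --resource-group contoso-rg --name contoso-app --slot staging --target-slot production
```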
++# [Azure PowerShell](#tab/powershell) ++To swap a slot into production, run the following cmdlet in a PowerShell terminal: ++```azurepowershell-interactive +Switch-AzWebAppSlot -SourceSlotName "<source-slot-name>" -DestinationSlotName "production" -ResourceGroupName "<group-name>" -Name "<app-name>" +``` ++For more information, see [Switch-AzWebAppSlot](/powershell/module/az.websites/switch-azwebappslot). ++-- + If you have any problems, see [Troubleshoot swaps](#troubleshoot-swaps). <a name="Multi-Phase"></a> If you cancel the swap, App Service reapplies configuration elements to the sour > Swap with preview can't be used when one of the slots has site authentication enabled. > +# [Azure portal](#tab/portal) + To swap with preview: 1. Follow the steps in [Swap deployment slots](#Swap) but select **Perform swap with preview**. -  + :::image type="content" source="media/web-sites-staged-publishing/swap-with-preview.png" alt-text="A screenshot that shows how to configure a swap with preview in the portal." lightbox="media/web-sites-staged-publishing/swap-with-preview.png"::: The dialog box shows you how the configuration in the source slot changes in phase 1, and how the source and target slot change in phase 2. To swap with preview: 3. When you're ready to complete the pending swap, select **Complete Swap** in **Swap action** and select **Complete Swap**. - To cancel a pending swap, select **Cancel Swap** instead. + To cancel a pending swap, select **Cancel Swap** instead, and then select **Cancel Swap** at the bottom. 4. When you're finished, close the dialog box by selecting **Close**. -If you have any problems, see [Troubleshoot swaps](#troubleshoot-swaps). +# [Azure CLI](#tab/cli) -To automate a multi-phase swap, see [Automate with PowerShell](#automate-with-powershell). +To swap a slot into production with preview, run the following command in a terminal: ++```azurecli-interactive +az webapp deployment slot swap --resource-group <group-name> --name <app-name> --slot <source-slot-name> --target-slot production --action preview +``` ++To complete the swap: ++```azurecli-interactive +az webapp deployment slot swap --resource-group <group-name> --name <app-name> --slot <source-slot-name> --target-slot production --action swap +``` ++To cancel the swap: ++```azurecli-interactive +az webapp deployment slot swap --resource-group <group-name> --name <app-name> --slot <source-slot-name> --target-slot production --action reset +``` ++For more information, see [az webapp deployment slot swap](/cli/azure/webapp/deployment/slot#az-webapp-deployment-slot-swap). ++# [Azure PowerShell](#tab/powershell) ++To swap a slot into production with preview, run the following cmdlet in a PowerShell terminal: ++```azurepowershell-interactive +Switch-AzWebAppSlot -SourceSlotName "<source-slot-name>" -DestinationSlotName "production" -ResourceGroupName "<group-name>" -Name "<app-name>" -SwapWithPreviewAction ApplySlotConfig +``` ++To complete the swap: ++```azurepowershell-interactive +Switch-AzWebAppSlot -SourceSlotName "<source-slot-name>" -DestinationSlotName "production" -ResourceGroupName "<group-name>" -Name "<app-name>" -SwapWithPreviewAction CompleteSlotSwap +``` ++To cancel the swap: ++```azurepowershell-interactive +Switch-AzWebAppSlot -SourceSlotName "<source-slot-name>" -DestinationSlotName "production" -ResourceGroupName "<group-name>" -Name "<app-name>" -SwapWithPreviewAction ResetSlotSwap +``` ++For more information, see [Switch-AzWebAppSlot](/powershell/module/az.websites/switch-azwebappslot). 
++-- ++If you have any problems, see [Troubleshoot swaps](#troubleshoot-swaps). <a name="Rollback"></a> If any errors occur in the target slot (for example, the production slot) after Auto swap streamlines Azure DevOps scenarios where you want to deploy your app continuously with zero cold starts and zero downtime for customers of the app. When auto swap is enabled from a slot into production, every time you push your code changes to that slot, App Service automatically [swaps the app into production](#swap-operation-steps) after it's warmed up in the source slot. > [!NOTE]- > Before you configure auto swap for the production slot, consider testing auto swap on a non-production target slot. + > Before you configure auto swap for the production slot, consider testing auto swap on a nonproduction target slot. > +# [Azure portal](#tab/portal) + To configure auto swap: 1. Go to your app's resource page. Select **Deployment slots** > *\<desired source slot>* > **Configuration** > **General settings**. 2. For **Auto swap enabled**, select **On**. Then select the desired target slot for **Auto swap deployment slot**, and select **Save** on the command bar. - -  ++ :::image type="content" source="media/web-sites-staged-publishing/auto-swap.png" alt-text="A screenshot that shows how to configure auto swap into the production slot in the portal." lightbox="media/web-sites-staged-publishing/auto-swap.png"::: 3. Execute a code push to the source slot. Auto swap happens after a short time, and the update is reflected at your target slot's URL. +# [Azure CLI](#tab/cli) ++To configure auto swap into the production slot, run the following command in a terminal: ++```azurecli-interactive +az webapp deployment slot auto-swap --name <app-name> --resource-group <group-name> --slot <source-slot-name> +``` ++To disable auto swap: ++```azurecli-interactive +az webapp deployment slot auto-swap --name <app-name> --resource-group <group-name> --slot <source-slot-name> --disable +``` ++For more information, see [az webapp deployment slot auto-swap](/cli/azure/webapp/deployment/slot#az-webapp-deployment-slot-auto-swap). ++# [Azure PowerShell](#tab/powershell) ++```azurepowershell-interactive +Set-AzWebAppSlot -ResourceGroupName "<group-name>" -Name "<app-name>" -Slot "<source-slot-name>" -AutoSwapSlotName "production" +``` ++For more information, see [Set-AzWebAppSlot](/powershell/module/az.websites/set-azwebappslot). ++-- + If you have any problems, see [Troubleshoot swaps](#troubleshoot-swaps). <a name="Warm-up"></a> If you have any problems, see [Troubleshoot swaps](#troubleshoot-swaps). If the [swap operation](#AboutConfiguration) takes a long time to complete, you can get information on the swap operation in the [activity log](../azure-monitor/essentials/platform-logs-overview.md). +# [Azure portal](#tab/portal) + On your app's resource page in the portal, in the left pane, select **Activity log**. A swap operation appears in the log query as `Swap Web App Slots`. You can expand it and select one of the suboperations or errors to see the details. -## Route traffic +# [Azure CLI](#tab/cli) ++To monitor swap events in the activity log, run the following command: ++```azurecli-interactive +az monitor activity-log list --resource-group <group-name> --query "[?contains(operationName.value,'Microsoft.Web/sites/slots/slotsswap/action')]" +``` ++For more information, see [az monitor activity-log list +](/cli/azure/monitor/activity-log#az-monitor-activity-log-list). 
++# [Azure PowerShell](#tab/powershell) ++To monitor swap events in the activity log, run the following command: ++```azurepowershell-interactive +Get-AzLog -ResourceGroup <group-name> -StartTime 2023-07-07 | where{$_.OperationName -eq 'Swap Web App Slots'} +``` ++For more information, see [Get-AzLog](/powershell/module/az.monitor/get-azlog). ++-- ++## Route production traffic automatically By default, all client requests to the app's production URL (`http://<app_name>.azurewebsites.net`) are routed to the production slot. You can route a portion of the traffic to another slot. This feature is useful if you need user feedback for a new update, but you're not ready to release it to production. -### Route production traffic automatically +# [Azure portal](#tab/portal) To route production traffic automatically: To route production traffic automatically: 2. In the **Traffic %** column of the slot you want to route to, specify a percentage (between 0 and 100) to represent the amount of total traffic you want to route. Select **Save**. -  + :::image type="content" source="media/web-sites-staged-publishing/route-traffic-to-slot.png" alt-text="A screenshot that shows how to route a percentage of request traffic to a deployment slot, in the portal." lightbox="media/web-sites-staged-publishing/route-traffic-to-slot.png"::: -After the setting is saved, the specified percentage of clients is randomly routed to the non-production slot. +After the setting is saved, the specified percentage of clients is randomly routed to the nonproduction slot. -After a client is automatically routed to a specific slot, it's "pinned" to that slot for one hour or until the cookies are deleted. On the client browser, you can see which slot your session is pinned to by looking at the `x-ms-routing-name` cookie in your HTTP headers. A request that's routed to the "staging" slot has the cookie `x-ms-routing-name=staging`. A request that's routed to the production slot has the cookie `x-ms-routing-name=self`. +# [Azure CLI](#tab/cli) - > [!NOTE] - > You can also use the [`az webapp traffic-routing set`](/cli/azure/webapp/traffic-routing#az-webapp-traffic-routing-set) command in the Azure CLI to set the routing percentages from CI/CD tools like GitHub Actions, DevOps pipelines, or other automation systems. +To add a routing rule on a slot and transfer 15% of production traffic it, run the following command: ++```azurecli-interactive +az webapp traffic-routing set --resource-group <group-name> --name <app-name> --distribution <slot-name>=15 +``` ++For more information, see [az webapp traffic-routing set](/cli/azure/webapp/traffic-routing#az-webapp-traffic-routing-set). ++# [Azure PowerShell](#tab/powershell) ++To add a routing rule on a slot and transfer 15% of production traffic it, run the following command: ++```azurepowershell-interactive +Add-AzWebAppTrafficRouting -ResourceGroupName "<group-name>" -WebAppName "<app-name>" -RoutingRule @{ActionHostName='<app-name>-<slot-name>.azurewebsites.net';ReroutePercentage='15';Name='<slot-name>'} +``` ++For more information, see [Add-AzWebAppTrafficRouting](/powershell/module/az.websites/add-azwebapptrafficrouting). To update an existing rule, use [Update-AzWebAppTrafficRouting](/powershell/module/az.websites/update-azwebapptrafficrouting). ++-- -### Route production traffic manually +After a client is automatically routed to a specific slot, it's "pinned" to that slot for one hour or until the cookies are deleted. 
On the client browser, you can see which slot your session is pinned to by looking at the `x-ms-routing-name` cookie in your HTTP headers. A request that's routed to the "staging" slot has the cookie `x-ms-routing-name=staging`. A request that's routed to the production slot has the cookie `x-ms-routing-name=self`. ++## Route production traffic manually In addition to automatic traffic routing, App Service can route requests to a specific slot. This is useful when you want your users to be able to opt in to or opt out of your beta app. To route production traffic manually, you use the `x-ms-routing-name` query parameter. To let users opt out of your beta app, for example, you can put this link on you The string `x-ms-routing-name=self` specifies the production slot. After the client browser accesses the link, it's redirected to the production slot. Every subsequent request has the `x-ms-routing-name=self` cookie that pins the session to the production slot. -To let users opt in to your beta app, set the same query parameter to the name of the non-production slot. Here's an example: +To let users opt in to your beta app, set the same query parameter to the name of the nonproduction slot. Here's an example: ``` <webappname>.azurewebsites.net/?x-ms-routing-name=staging By default, new slots are given a routing rule of `0%`, shown in grey. When you ## Delete a slot -Search for and select your app. Select **Deployment slots** > *\<slot to delete>* > **Overview**. The app type is shown as **App Service (Slot)** to remind you that you're viewing a deployment slot. Before deleting a slot, make sure to stop the slot and set the traffic in the slot to zero. Select **Delete** on the command bar. -- +# [Azure portal](#tab/portal) -<!-- ======== AZURE POWERSHELL CMDLETS =========== --> --<a name="PowerShell"></a> --## Automate with PowerShell +Search for and select your app. Select **Deployment slots** > *\<slot to delete>* > **Overview**. The app type is shown as **App Service (Slot)** to remind you that you're viewing a deployment slot. Before deleting a slot, make sure to stop the slot and set the traffic in the slot to zero. Select **Delete** on the command bar. -Azure PowerShell is a module that provides cmdlets to manage Azure through Windows PowerShell, including support for managing deployment slots in Azure App Service. +# [Azure CLI](#tab/cli) -For information on installing and configuring Azure PowerShell, and on authenticating Azure PowerShell with your Azure subscription, see [How to install and configure Microsoft Azure PowerShell](/powershell/azure/). +Run the following command in a terminal: --### Create a web app -```powershell -New-AzWebApp -ResourceGroupName [resource group name] -Name [app name] -Location [location] -AppServicePlan [app service plan name] +```azurecli-interactive +az webapp deployment slot delete --name <app-name> --resource-group <group-name> --slot <slot-name> ``` --### Create a slot -```powershell -New-AzWebAppSlot -ResourceGroupName [resource group name] -Name [app name] -Slot [deployment slot name] -AppServicePlan [app service plan name] -``` +For more information, see [az webapp deployment slot delete](/cli/azure/webapp/deployment/slot#az-webapp-deployment-slot-delete). --### Initiate a swap with a preview (multi-phase swap), and apply destination slot configuration to the source slot -```powershell -$ParametersObject = @{targetSlot = "[slot name ΓÇô e.g. 
"production"]"} -Invoke-AzResourceAction -ResourceGroupName [resource group name] -ResourceType Microsoft.Web/sites/slots -ResourceName [app name]/[slot name] -Action applySlotConfig -Parameters $ParametersObject -ApiVersion 2015-07-01 -``` +# [Azure PowerShell](#tab/powershell) --### Cancel a pending swap (swap with review) and restore the source slot configuration -```powershell -Invoke-AzResourceAction -ResourceGroupName [resource group name] -ResourceType Microsoft.Web/sites/slots -ResourceName [app name]/[slot name] -Action resetSlotConfig -ApiVersion 2015-07-01 -``` +Run the following cmdlet in a PowerShell terminal: --### Swap deployment slots -```powershell -$ParametersObject = @{targetSlot = "[slot name ΓÇô e.g. "production"]"} -Invoke-AzResourceAction -ResourceGroupName [resource group name] -ResourceType Microsoft.Web/sites/slots -ResourceName [app name]/[slot name] -Action slotsswap -Parameters $ParametersObject -ApiVersion 2015-07-01 +```azurepowershell-interactive +Remove-AzWebAppSlot -ResourceGroupName "<group-name>" -Name "<app-name>" -Slot "<slot-name>" ``` -### Monitor swap events in the activity log -```powershell -Get-AzLog -ResourceGroup [resource group name] -StartTime 2018-03-07 -Caller SlotSwapJobProcessor -``` +For more information, see [Remove-AzWebAppSlot](/powershell/module/az.websites/remove-azwebappslot). --### Delete a slot -```powershell -Remove-AzResource -ResourceGroupName [resource group name] -ResourceType Microsoft.Web/sites/slots ΓÇôName [app name]/[slot name] -ApiVersion 2015-07-01 -``` --To perform a slot swap from the production slot, the identity needs (at minimum) permissions to perform the `Microsoft.Web/sites/slotsswap/Action` operation. For more information, see the [Resource provider operations](../role-based-access-control/resource-provider-operations.md#microsoftweb) +-- ## Automate with Resource Manager templates -[Azure Resource Manager templates](../azure-resource-manager/templates/overview.md) are declarative JSON files used to automate the deployment and configuration of Azure resources. To swap slots by using Resource Manager templates, you will set two properties on the *Microsoft.Web/sites/slots* and *Microsoft.Web/sites* resources: +[Azure Resource Manager templates](../azure-resource-manager/templates/overview.md) are declarative JSON files used to automate the deployment and configuration of Azure resources. To swap slots by using Resource Manager templates, you set two properties on the *Microsoft.Web/sites/slots* and *Microsoft.Web/sites* resources: -- `buildVersion`: this is a string property which represents the current version of the app deployed in the slot. For example: "v1", "1.0.0.1", or "2019-09-20T11:53:25.2887393-07:00".-- `targetBuildVersion`: this is a string property that specifies what `buildVersion` the slot should have. If the targetBuildVersion does not equal the current `buildVersion`, then this will trigger the swap operation by finding the slot which has the specified `buildVersion`.+- `buildVersion`: this is a string property that represents the current version of the app deployed in the slot. For example: "v1", "1.0.0.1", or "2019-09-20T11:53:25.2887393-07:00". +- `targetBuildVersion`: this is a string property that specifies what `buildVersion` the slot should have. If the `targetBuildVersion` doesn't equal the current `buildVersion`, it triggers the swap operation by finding the slot with the specified `buildVersion`. 
### Example Resource Manager template -The following Resource Manager template will update the `buildVersion` of the staging slot and set the `targetBuildVersion` on the production slot. This will swap the two slots. The template assumes you already have a webapp created with a slot named "staging". +The following Resource Manager template swaps two slots by updating the `buildVersion` of the `staging` slot and setting the `targetBuildVersion` on the production slot. It assumes you've created a slot called `staging`. ```json { The following Resource Manager template will update the `buildVersion` of the st } ``` -This Resource Manager template is idempotent, meaning that it can be executed repeatedly and produce the same state of the slots. After the first execution, `targetBuildVersion` will match the current `buildVersion`, so a swap will not be triggered. --<!-- ======== Azure CLI =========== --> --<a name="CLI"></a> --## Automate with the CLI --For [Azure CLI](https://github.com/Azure/azure-cli) commands for deployment slots, see [az webapp deployment slot](/cli/azure/webapp/deployment/slot). +This Resource Manager template is idempotent, meaning that it can be executed repeatedly and produce the same state of the slots. Without any change to the template, subsequent runs of the same template don't trigger any slot swap because the slots are already in the desired state. ## Troubleshoot swaps If any error occurs during a [slot swap](#AboutConfiguration), it's logged in *D Here are some common swap errors: -- An HTTP request to the application root is timed. The swap operation waits for 90 seconds for each HTTP request, and retries up to 5 times. If all retries are timed out, the swap operation is stopped.+- An HTTP request to the application root times out. The swap operation waits for 90 seconds for each HTTP request, and retries up to five times. If all retries time out, the swap operation is stopped. - Local cache initialization might fail when the app content exceeds the local disk quota specified for the local cache. For more information, see [Local cache overview](overview-local-cache.md). Here are some common swap errors: - After slot swaps, the app may experience unexpected restarts. This is because after a swap, the hostname binding configuration goes out of sync, which by itself doesn't cause restarts. However, certain underlying storage events (such as storage volume failovers) may detect these discrepancies and force all worker processes to restart. To minimize these types of restarts, set the [`WEBSITE_ADD_SITENAME_BINDINGS_IN_APPHOST_CONFIG=1` app setting](https://github.com/projectkudu/kudu/wiki/Configurable-settings#disable-the-generation-of-bindings-in-applicationhostconfig) on *all slots*. However, this app setting does *not* work with Windows Communication Foundation (WCF) apps. ## Next steps-[Block access to non-production slots](app-service-ip-restrictions.md) +[Block access to nonproduction slots](app-service-ip-restrictions.md) |
automanage | Automanage Hotpatch | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automanage/automanage-hotpatch.md | Hotpatching is a new way to install updates on supported _Windows Server Azure E ## How hotpatch works -Hotpatch works by first establishing a baseline with a Windows Update Latest Cumulative Update. Hotpatches are periodically released (for example, on the second Tuesday of the month) that builds on that baseline. Hotpatches will contain updates that don't require a reboot. Periodically (starting at every three months), the baseline is refreshed with a new Latest Cumulative Update. +Hotpatch works by first establishing a baseline with a Windows Update Latest Cumulative Update. Hotpatches that build on that baseline are periodically released (for example, on the second Tuesday of the month). Hotpatches update the VM without requiring a reboot. Periodically (starting at every three months), the baseline is refreshed with a new Latest Cumulative Update. :::image type="content" source="media\automanage-hotpatch\hotpatch-sample-schedule.png" alt-text="Hotpatch Sample Schedule."::: There are two types of baselines: **Planned baselines** and **Unplanned baselines**. * **Planned baselines** are released on a regular cadence, with hotpatch releases in between. Planned baselines include all the updates in a comparable _Latest Cumulative Update_ for that month, and require a reboot.- * The sample schedule above illustrates four planned baseline releases in a calendar year (five total in the diagram), and eight hotpatch releases. -* **Unplanned baselines** are released when an important update (such as a zero-day fix) is released, and that particular update can't be released as a hotpatch. When unplanned baselines are released, a hotpatch release will be replaced with an unplanned baseline in that month. Unplanned baselines also include all the updates in a comparable _Latest Cumulative Update_ for that month, and also require a reboot. - * The sample schedule above illustrates two unplanned baselines that would replace the hotpatch releases for those months (the actual number of unplanned baselines in a year isn't known in advance). + * The sample schedule illustrates four planned baseline releases in a calendar year (five total in the diagram), and eight hotpatch releases. +* **Unplanned baselines** are released when an important update (such as a zero-day fix) is released, and that particular update can't be released as a hotpatch. When unplanned baselines are released, they replace a hotpatch update in that month. Unplanned baselines also include all the updates in a comparable _Latest Cumulative Update_ for that month, and also require a reboot. + * The sample schedule illustrates two unplanned baselines that would replace the hotpatch releases for those months (the actual number of unplanned baselines in a year isn't known in advance). ## Regional availability Hotpatch is available in all global Azure regions. To start using hotpatch on a new VM, follow these steps: * You can preview onboarding Automanage machine best practices during VM creation in the Azure portal by visiting the [Azure Marketplace](https://aka.ms/AzureEdition). 1. Supply details during VM creation * Ensure that a supported _Windows Server Azure Edition_ image is selected in the Image dropdown. 
See [automanage windows server services](automanage-windows-server-services-overview.md#getting-started-with-windows-server-azure-edition) to determine which images are supported.- * On the Management tab under section 'Guest OS updates', the checkbox for 'Enable hotpatch' will be selected. Patch orchestration options are set to 'Azure-orchestrated'. + * On the Management tab under section 'Guest OS updates', the checkbox for 'Enable hotpatch' is selected. Patch orchestration options are set to 'Azure-orchestrated'. * If you create a VM by visiting the [Azure Marketplace](https://aka.ms/AzureEdition), on the Management tab under section 'Azure Automanage', select 'Dev/Test' or 'Production' for 'Azure Automanage environment' to evaluate Automanage machine best practices while in preview. 1. Create your new VM az provider register --namespace Microsoft.Compute [Automatic VM Guest Patching](../virtual-machines/automatic-vm-guest-patching.md) is enabled automatically for all VMs created with a supported _Windows Server Azure Edition_ image. With automatic VM guest patching enabled: * Patches classified as Critical or Security are automatically downloaded and applied on the VM. * Patches are applied during off-peak hours in the VM's time zone.-* Patch orchestration is managed by Azure and patches are applied following [availability-first principles](../virtual-machines/automatic-vm-guest-patching.md#availability-first-updates). +* Azure manages the patch orchestration and patches are applied following [availability-first principles](../virtual-machines/automatic-vm-guest-patching.md#availability-first-updates). * Virtual machine health, as determined through platform health signals, is monitored to detect patching failures. ## How does automatic VM guest patching work? When [Automatic VM Guest Patching](../virtual-machines/automatic-vm-guest-patching.md) is enabled on a VM, the available Critical and Security patches are downloaded and applied automatically. This process kicks off automatically every month when new patches are released. Patch assessment and installation are automatic, and the process includes rebooting the VM as required. -With hotpatch enabled on supported _Windows Server Azure Edition_ VMs, most monthly security updates are delivered as hotpatches that don't require reboots. Latest Cumulative Updates sent on planned or unplanned baseline months require VM reboots. Additional Critical or Security patches may also be available periodically, which may require VM reboots. +With hotpatch enabled on supported _Windows Server Azure Edition_ VMs, most monthly security updates are delivered as hotpatches that don't require reboots. Latest Cumulative Updates sent on planned or unplanned baseline months require VM reboots. Other Critical or Security patches may also be available periodically, which may require VM reboots. The VM is assessed automatically every few days and multiple times within any 30-day period to determine the applicable patches for that VM. This automatic assessment ensures that any missing patches are discovered at the earliest possible opportunity. Patches are installed within 30 days of the monthly patch releases, following [availability-first principles](../virtual-machines/automatic-vm-guest-patching.md#availability-first-updates). Patches are installed only during off-peak hours for the VM, depending on the time zone of the VM. The VM must be running during the off-peak hours for patches to be automatically installed. 
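Besides waiting for the periodic assessment, you can trigger one yourself from the command line; a minimal sketch, assuming hypothetical resource names:

```azurecli-interactive
# Trigger an on-demand patch assessment for a VM (hypothetical names)
az vm assess-patches --resource-group contoso-rg --name contoso-vm
```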
If a VM is powered off during a periodic assessment, the VM is assessed and applicable patches are installed automatically during the next periodic assessment when the VM is powered on. The next periodic assessment usually happens within a few days. -Definition updates and other patches not classified as Critical or Security won't be installed through automatic VM guest patching. +Definition updates and other patches not classified as Critical or Security won't be installed through Automatic VM Guest Patching. ## Understanding the patch status for your VM To view the patch status for your VM, navigate to the **Guest + host updates** section for your VM in the Azure portal. Under the **Guest OS updates** section, select 'Go to Hotpatch (Preview)' to view the latest patch status for your VM. -On this screen, you'll see the hotpatch status for your VM. You can also review if there any available patches for your VM that haven't been installed. As described in the 'Patch installation' section above, all security and critical updates are automatically installed on your VM using [Automatic VM Guest Patching](../virtual-machines/automatic-vm-guest-patching.md) and no extra actions are required. Patches with other update classifications aren't automatically installed. Instead, they're viewable in the list of available patches under the 'Update compliance' tab. You can also view the history of update deployments on your VM through the 'Update history'. Update history from the past 30 days is displayed, along with patch installation details. +The Hotpatch status associated with your VM is displayed on the page. You can also review whether there are any available patches for your VM that haven't been installed. As described in the 'Patch installation' section, all security and critical updates are automatically installed on your VM using [Automatic VM Guest Patching](../virtual-machines/automatic-vm-guest-patching.md) and no extra actions are required. Patches with other update classifications aren't automatically installed. Instead, they're viewable in the list of available patches under the 'Update compliance' tab. You can also view the history of update deployments on your VM through the 'Update history'. Update history from the past 30 days is displayed, along with patch installation details. :::image type="content" source="media\automanage-hotpatch\hotpatch-management-ui.png" alt-text="Hotpatch Management."::: -With automatic VM guest patching, your VM is periodically and automatically assessed for available updates. These periodic assessments ensure that available patches are detected. You can view the results of the assessment on the Updates screen above, including the time of the last assessment. You can also choose to trigger an on-demand patch assessment for your VM at any time using the 'Assess now' option and review the results after assessment completes. +With automatic VM guest patching, your VM is periodically and automatically assessed for available updates. These periodic assessments ensure that available patches are detected. You can view the results of the assessment on the Updates page, including the time of the last assessment. You can also choose to trigger an on-demand patch assessment for your VM at any time using the 'Assess now' option and review the results after assessment completes. Similar to on-demand assessment, you can also install patches on-demand for your VM using the 'Install updates now' option. 
Here you can choose to install all updates under specific patch classifications. You can also specify updates to include or exclude by providing a list of individual knowledge base articles. Patches installed on-demand aren't installed using availability-first principles and may require more reboots and VM downtime for update installation. Similar to on-demand assessment, you can also install patches on-demand for your Hotpatch covers Windows Security updates and maintains parity with the content of security updates issued in the regular (non-hotpatch) Windows update channel. -There are some important considerations to running a supported _Windows Server Azure Edition_ VM with hotpatch enabled. Reboots are still required to install updates that aren't included in the hotpatch program. Reboots are also required periodically after a new baseline has been installed. These reboots keep the VM in sync with non-security patches included in the latest cumulative update. -* Patches that are currently not included in the hotpatch program include non-security updates released for Windows, and non-Windows updates (such as .NET patches). These types of patches need to be installed during a baseline month, and will require a reboot. +There are some important considerations to running a supported _Windows Server Azure Edition_ VM with hotpatch enabled. Reboots are still required to install updates that aren't included in the hotpatch program. Reboots are also required periodically after a new baseline has been installed. The reboots keep the VM in sync with non-security patches included in the latest cumulative update. +* Patches that are currently not included in the hotpatch program include non-security updates released for Windows, and non-Windows updates (such as .NET patches). These types of patches need to be installed during a baseline month, and require a reboot. ## Frequently asked questions There are some important considerations to running a supported _Windows Server A ### When will I receive the first hotpatch update? -* Hotpatch updates are typically released on the second Tuesday of each month. For more information, see below. +* Hotpatch updates are typically released on the second Tuesday of each month. ### What will the hotpatch schedule look like? -* Hotpatching works by establishing a baseline with a Windows Update Latest Cumulative Update, then builds upon that baseline with hotpatch updates released monthly. Baselines will be released starting out every three months. See the image below for an example of an annual three-month schedule (including example unplanned baselines due to zero-day fixes). +* Hotpatching works by establishing a baseline with a Windows Update Latest Cumulative Update, then builds upon that baseline with hotpatch updates released monthly. A typical baseline update is released every three months. See the image below for an example of an annual three-month schedule (including example unplanned baselines due to zero-day fixes). :::image type="content" source="media\automanage-hotpatch\hotpatch-sample-schedule.png" alt-text="Hotpatch Sample Schedule."::: -### Are reboots still needed for a VM enrolled in hotpatch? +### Do VMs need a reboot after enrolling in hotpatch? * Reboots are still required to install updates not included in the hotpatch program, and are required periodically after a baseline (Windows Update Latest Cumulative Update) has been installed. This reboot will keep your VM in sync with all the patches included in the cumulative update. 
Baselines (which require a reboot) start on a three-month cadence, with the interval expected to increase over time. There are some important considerations to running a supported _Windows Server A * You can file a [technical support case ticket](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest). For the Service option, search for and select **Virtual Machine running Windows** under Compute. Select **Azure Features** for the problem type and **Automatic VM Guest Patching** for the problem subtype. +### Is Azure Virtual Machine Scale Sets Uniform Orchestration supported on Azure Edition images? ++* The [Windows Server 2022 Azure Edition Images](https://azuremarketplace.microsoft.com/marketplace/apps/microsoftwindowsserver.windowsserver?tab=PlansAndPrice) provide a best-in-class operating system that includes the innovation built into Windows Server 2022 plus additional features. Since Azure Edition images support Hotpatching, VM scale sets (VMSS) with Uniform Orchestration can't be created on these images. The block on using VMSS Uniform Orchestration on these images will be lifted once [Auto Guest Patching](https://learn.microsoft.com/azure/virtual-machines/automatic-vm-guest-patching?toc=https%3A%2F%2Flearn.microsoft.com%2F%2Fazure%2Fvirtual-machine-scale-sets%2Ftoc.json&bc=https%3A%2F%2Flearn.microsoft.com%2F%2Fazure%2Fbread%2Ftoc.json) and Hotpatching are supported. + ## Next steps * Learn about [Azure Update Management](../automation/update-management/overview.md) |
azure-arc | Validation Program | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/validation-program.md | To see how all Azure Arc-enabled components are validated, see [Validation progr |Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL server version |--|--|--|--|--|-|DataON AZS-6224|1.23.8|1.12.0_2022-10-11|16.0.537.5223| 12.3 (Ubuntu 12.3-1) | +|[DataON AZS-6224](https://www.dataonstorage.com/products-solutions/integrated-systems-for-azure-stack-hci/dataon-integrated-system-azs-6224-for-azure-stack-hci/)|1.24.11| 1.20.0_2023-06-13|16.0.5100.7242|14.5 (Ubuntu 20.04)| ### Dell |
azure-arc | Deploy Ama Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/deploy-ama-policy.md | In order for Azure Monitor to work on a machine, it needs to be associated with ## Select a Data Collection Rule -Data Collection Rules (DCRs) define specify what data should be collected, how to transform that data, and where to send that data. You need to select (or create) a DCR and specify it within the ARM template used for deploying AMA. - Data Collection Rules define the data collection process in Azure Monitor. They specify what data should be collected and where that data should be sent. You'll need to select or create a DCR to be associated with your Policy definition. 1. From your browser, go to the [Azure portal](https://portal.azure.com). |
azure-arc | Ssh Arc Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/ssh-arc-overview.md | Title: (Preview) SSH access to Azure Arc-enabled servers -description: Leverage SSH remoting to access and manage Azure Arc-enabled servers. Previously updated : 04/12/2023+ Title: SSH access to Azure Arc-enabled servers +description: Use SSH remoting to access and manage Azure Arc-enabled servers. Last updated : 07/01/2023 SSH for Arc-enabled servers enables SSH based connections to Arc-enabled servers This functionality can be used interactively, automated, or with existing SSH based tooling, allowing existing management tools to have a greater impact on Azure Arc-enabled servers. -> [!IMPORTANT] -> SSH for Arc-enabled servers is currently in PREVIEW. -> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. - ## Key benefits SSH access to Arc-enabled servers provides the following key benefits: - No public IP address or open SSH ports required SSH access to Arc-enabled servers provides the following key benefits: - Support for other OpenSSH based tooling with config file support ## Prerequisites-To leverage this functionality, please ensure the following: +To enable this functionality, ensure the following: + - Ensure the Arc-enabled server has a hybrid agent version of "1.31.xxxx" or higher. Run: ```azcmagent show``` on your Arc-enabled server. + - Ensure the Arc-enabled server has the "sshd" service enabled. For Linux machines `openssh-server` can be installed via a package manager and needs to be enabled. SSHD needs to be [enabled on Windows](/windows-server/administration/openssh/openssh_install_firstuse). + - Ensure you have the Owner or Contributor role assigned. Authenticating with Azure AD credentials has additional requirements:+ - `aadsshlogin` and `aadsshlogin-selinux` (as appropriate) must be installed on the Arc-enabled server. These packages are installed with the `Azure AD based SSH Login – Azure Arc` VM extension. - Configure role assignments for the VM. Two Azure roles are used to authorize VM login: - **Virtual Machine Administrator Login**: Users who have this role assigned can log in to an Azure virtual machine with administrator privileges. - **Virtual Machine User Login**: Users who have this role assigned can log in to an Azure virtual machine with regular user privileges. SSH access to Arc-enabled servers is currently supported in all regions supporte ## Getting started -### Install local command line tool -This functionality is currently packaged in an Azure CLI extension and an Azure PowerShell module. -#### [Install Azure CLI extension](#tab/azure-cli) +### Register the HybridConnectivity resource provider +> [!NOTE] +> This is a one-time operation that needs to be performed on each subscription. -```az extension add --name ssh``` +Check if the HybridConnectivity resource provider (RP) has been registered: -> [!NOTE] -> The Azure CLI extension version must be greater than 1.1.0. 
+```az provider show -n Microsoft.HybridConnectivity``` -#### [Install Azure PowerShell module](#tab/azure-powershell) +If the RP hasn't been registered, run the following: -```Install-Module -Name AzPreview -Scope CurrentUser -Repository PSGallery -Force``` +```az provider register -n Microsoft.HybridConnectivity``` -+This operation can take 2-5 minutes to complete. Before moving on, check that the RP has been registered. -### Enable functionality on your Arc-enabled server -In order to use the SSH connect feature, you must enable connections on the hybrid agent. +### Create default connectivity endpoint +> [!NOTE] +> The following step will not need to be run for most users as it should complete automatically at first connection. +> This step must be completed for each Arc-enabled server. +#### [Create the default endpoint with Azure CLI:](#tab/azure-cli) +```bash +az rest --method put --uri https://management.azure.com/subscriptions/<subscription>/resourceGroups/<resourcegroup>/providers/Microsoft.HybridCompute/machines/<arc enabled server name>/providers/Microsoft.HybridConnectivity/endpoints/default?api-version=2023-03-15 --body '{"properties": {"type": "default"}}' +``` > [!NOTE]-> The following actions must be completed in an elevated terminal session. +> If using Azure CLI from PowerShell, the following should be used. +```powershell +az rest --method put --uri https://management.azure.com/subscriptions/<subscription>/resourceGroups/<resourcegroup>/providers/Microsoft.HybridCompute/machines/<arc enabled server name>/providers/Microsoft.HybridConnectivity/endpoints/default?api-version=2023-03-15 --body '{\"properties\":{\"type\":\"default\"}}' +``` -View your current incoming connections: +Validate endpoint creation: + ```bash +az rest --method get --uri https://management.azure.com/subscriptions/<subscription>/resourceGroups/<resourcegroup>/providers/Microsoft.HybridCompute/machines/<arc enabled server name>/providers/Microsoft.HybridConnectivity/endpoints/default?api-version=2023-03-15 + ``` + +#### [Create the default endpoint with Azure PowerShell:](#tab/azure-powershell) + ```powershell +Invoke-AzRestMethod -Method put -Path /subscriptions/<subscription>/resourceGroups/<resourcegroup>/providers/Microsoft.HybridCompute/machines/<arc enabled server name>/providers/Microsoft.HybridConnectivity/endpoints/default?api-version=2023-03-15 -Payload '{"properties": {"type": "default"}}' +``` -```azcmagent config list``` +Validate endpoint creation: + ```powershell + Invoke-AzRestMethod -Method get -Path /subscriptions/<subscription>/resourceGroups/<resourcegroup>/providers/Microsoft.HybridCompute/machines/<arc enabled server name>/providers/Microsoft.HybridConnectivity/endpoints/default?api-version=2023-03-15 + ``` + + + ### Install local command line tool +This functionality is currently packaged in an Azure CLI extension and an Azure PowerShell module. +#### [Install Azure CLI extension](#tab/azure-cli) -If you have existing ports, you'll need to include them in the following command. +```az extension add --name ssh``` -To add access to SSH connections, run the following: +> [!NOTE] +> The Azure CLI extension version must be greater than 2.0.0. -```azcmagent config set incomingconnections.ports 22<,other open ports,...>``` +#### [Install Azure PowerShell module](#tab/azure-powershell) -If you're using a non-default port for your SSH connection, replace port 22 with your desired port in the previous command. 
+```powershell +Install-Module -Name Az.Ssh -Scope CurrentUser -Repository PSGallery +Install-Module -Name Az.Ssh.ArcProxy -Scope CurrentUser -Repository PSGallery +``` -> [!NOTE] -> The following steps will not need to be run for most users. +++### Enable functionality on your Arc-enabled server +In order to use the SSH connect feature, you must update the Service Configuration in the Connectivity Endpoint on the Arc-enabled server to allow SSH connection to a specific port. You may only allow connection to a single port. The CLI tools attempt to update the allowed port at runtime, but the port can be manually configured with the following: -### Register the HybridConnectivity resource provider > [!NOTE]-> This is a one-time operation that needs to be performed on each subscription. +> There may be a delay after updating the Service Configuration until you are able to connect. -Check if the HybridConnectivity resource provider (RP) has been registered: #### [Azure CLI](#tab/azure-cli) -```az provider show -n Microsoft.HybridConnectivity``` +```az rest --method put --uri https://management.azure.com/subscriptions/<subscription>/resourceGroups/<resourcegroup>/providers/Microsoft.HybridCompute/machines/<arc enabled server name>/providers/Microsoft.HybridConnectivity/endpoints/default/serviceconfigurations/SSH?api-version=2023-03-15 --body '{\"properties\": {\"serviceName\": \"SSH\", \"port\": \"22\"}}'``` -If the RP hasn't been registered, run the following: #### [Azure PowerShell](#tab/azure-powershell) -```az provider register -n Microsoft.HybridConnectivity``` +```Invoke-AzRestMethod -Method put -Path /subscriptions/<subscription>/resourceGroups/<resourcegroup>/providers/Microsoft.HybridCompute/machines/<arc enabled server name>/providers/Microsoft.HybridConnectivity/endpoints/default/serviceconfigurations/SSH?api-version=2023-03-15 -Payload '{"properties": {"serviceName": "SSH", "port": 22}}'``` + -### Create default connectivity endpoint -> [!NOTE] -> The following actions must be completed for each Arc-enabled server. +If you're using a nondefault port for your SSH connection, replace port 22 with your desired port in the previous command. ++### Optional: Install Azure AD login extension +The `Azure AD based SSH Login – Azure Arc` VM extension can be added from the extensions menu of the Arc server. The Azure AD login extension can also be installed locally via a package manager (`apt-get install aadsshlogin`) or with the following command. 
++```az connectedmachine extension create --machine-name <arc enabled server name> --resource-group <resourcegroup> --publisher Microsoft.Azure.ActiveDirectory --name AADSSHLogin --type AADSSHLoginForLinux --location <location>``` -Create the default endpoint in PowerShell: - ```powershell - az rest --method put --uri https://management.azure.com/subscriptions/<subscription>/resourceGroups/<resourcegroup>/providers/Microsoft.HybridCompute/machines/<arc enabled server name>/providers/Microsoft.HybridConnectivity/endpoints/default?api-version=2021-10-06-preview --body '{"properties": {"type": "default"}}' - ``` -Create the default endpoint in Bash: -```bash -az rest --method put --uri https://management.azure.com/subscriptions/<subscription>/resourceGroups/<resourcegroup>/providers/Microsoft.HybridCompute/machines/<arc enabled server name>/providers/Microsoft.HybridConnectivity/endpoints/default?api-version=2021-10-06-preview --body '{"properties": {"type": "default"}}' -``` -Validate endpoint creation: - ``` - az rest --method get --uri https://management.azure.com/subscriptions/<subscription>/resourceGroups/<resourcegroup>/providers/Microsoft.HybridCompute/machines/<arc enabled server name>/providers/Microsoft.HybridConnectivity/endpoints/default?api-version=2021-10-06-preview - ``` ## Examples To view examples, view the Az CLI documentation page for [az ssh](/cli/azure/ssh) or the Azure PowerShell documentation page for [Az.Ssh](/powershell/module/az.ssh).++## Next steps ++- Learn about [OpenSSH for Windows](/windows-server/administration/openssh/openssh_overview) +- Learn about troubleshooting [SSH access to Azure Arc-enabled servers](ssh-arc-troubleshoot.md). +- Learn about troubleshooting [agent connection issues](troubleshoot-agent-onboard.md). |
azure-arc | Ssh Arc Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/ssh-arc-troubleshoot.md | Title: Troubleshoot SSH access to Azure Arc-enabled servers issues + Title: Troubleshoot SSH access to Azure Arc-enabled servers description: Learn how to troubleshoot and resolve issues with SSH access to Arc-enabled servers. Previously updated : 05/04/2023 Last updated : 07/01/2023 -> [!IMPORTANT] -> SSH for Arc-enabled servers is currently in PREVIEW. -> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. - ## Client-side issues These issues are due to errors that occur on the machine that the user is connecting from. -### Incorrect Azure subscription --This problem occurs when the active subscription for Azure CLI isn't the same as the server that is being connected to. Possible errors: --- `Unable to determine the target machine type as Azure VM or Arc Server`-- `Unable to determine that the target machine is an Arc Server`-- `Unable to determine that the target machine is an Azure VM`-- `The resource \<name\> in the resource group \<resource group\> was not found`--Resolution: --- Run ```az account set -s <AzureSubscriptionId>``` where `AzureSubscriptionId` corresponds to the subscription that contains the target resource.- ### Unable to locate client binaries This issue occurs when the client side SSH binaries required to connect aren't found. Possible errors: This issue occurs when the client side SSH binaries required to connect aren't f Resolution: - Provide the path to the folder that contains the SSH client executables by using the ```--ssh-client-folder``` parameter.+- Ensure that the folder is in the PATH environment variable for Azure PowerShell ++### Azure PowerShell module version mismatch +This issue occurs when the installed Azure PowerShell submodule, Az.Ssh.ArcProxy, isn't supported by the installed version of Az.Ssh. Error: ++- `This version of Az.Ssh only supports version 1.x.x of the Az.Ssh.ArcProxy PowerShell Module. The Az.Ssh.ArcProxy module {ModulePath} version is {ModuleVersion}, and it is not supported by this version of the Az.Ssh module. Check that this version of Az.Ssh is the latest available.` ++Resolution: ++- Update the Az.Ssh and Az.Ssh.ArcProxy modules ++### Az.Ssh.ArcProxy not installed +This issue occurs when the proxy module isn't found on the client machine. Error: ++- `Failed to find the PowerShell module Az.Ssh.ArcProxy installed in this machine. You must have the Az.Ssh.Proxy PowerShell module installed in the client machine in order to connect to Azure Arc resources. You can find the module in the PowerShell Gallery (see: https://aka.ms/PowerShellGallery-Az.Ssh.ArcProxy).` ++Resolution: ++- Install the module from the [PowerShell Gallery](https://www.powershellgallery.com/packages/Az.Ssh.ArcProxy): `Install-Module -Name Az.Ssh.ArcProxy` ++### User doesn't have permissions to execute proxy +This issue happens when the user doesn't have permissions to execute the SSH proxy that is used to connect. 
Errors: ++- `/bin/bash: line 1: exec: /usr/local/share/powershell/Modules/Az.Ssh.ArcProxy/1.0.0/sshProxy_linux_amd64_1.3.022941: cannot execute: Permission denied` +- `CreateProcessW failed error:5 posix_spawnp: Input/output error` ++Resolution: ++- Ensure that the user has permissions to execute the proxy file. ## Server-side issues -### SSH traffic not allowed on the server +### Unable to connect after the public preview +If the user had participated in the public preview and has updated their Arc agent and the Azure CLI/PowerShell to the general availability releases, then the connectivity may fail. ++Resolution: ++- Re-enable the functionality on the [Azure Arc-enabled servers](ssh-arc-overview.md). +### SSH traffic not allowed on the server This issue occurs when SSHD isn't running on the server, or SSH traffic isn't allowed on the server. Error: - `{"level":"fatal","msg":"sshproxy: error copying information from the connection: read tcp 192.168.1.180:60887-\u003e40.122.115.96:443: wsarecv: An existing connection was forcibly closed by the remote host.","time":"2022-02-24T13:50:40-05:00"}`+- `{"level":"fatal","msg":"sshproxy: error connecting to the address: 503 connection to localhost:22 failed: dial tcp [::1]:22: connectex: No connection could be made because the target machine actively refused it.. websocket: bad handshake","proxyVersion":"1.3.022941"}` +- `SSH connection is not enabled in the target port {Port}. ` Resolution:+ - Ensure that the SSHD service is running on the Arc-enabled server. + - Ensure that the functionality is enabled on your Arc-enabled server on port 22 (or other nondefault port) -- Ensure that the SSHD service is running on the Arc-enabled server.-- Ensure that port 22 (or other nondefault port) is listed in allowed incoming connections. Run `azcmagent config list` on the Arc-enabled server in an elevated session. The ssh port (22) isn't set by default, so you must add it. This setting is used by other services, like admin center, so just add port 22 without deleting previously added ports.+#### [Azure CLI](#tab/azure-cli) - ```powershell - # Set 22 port: - azcmagent config list - azcmagent config get incomingconnections.ports - azcmagent config set incomingconnections.ports 22 - azcmagent config - - # Add multiple ports: - azcmagent config set incomingconnections.ports 22,6516 - ``` +```az rest --method put --uri https://management.azure.com/subscriptions/<subscription>/resourceGroups/<resourcegroup>/providers/Microsoft.HybridCompute/machines/<arc enabled server name>/providers/Microsoft.HybridConnectivity/endpoints/default/serviceconfigurations/SSH?api-version=2023-03-15 --body '{\"properties\": {\"serviceName\": \"SSH\", \"port\": \"22\"}}'``` ++#### [Azure PowerShell](#tab/azure-powershell) +```Invoke-AzRestMethod -Method put -Path /subscriptions/<subscription>/resourceGroups/<resourcegroup>/providers/Microsoft.HybridCompute/machines/<arc enabled server name>/providers/Microsoft.HybridConnectivity/endpoints/default/serviceconfigurations/SSH?api-version=2023-03-15 -Payload '{"properties": {"serviceName": "SSH", "port": "22"}}'``` +++ ## Azure permissions issues+### Incorrect role assignments to enable SSH connectivity +This issue occurs when the current user doesn't have the proper role assignment to make contributions to the target resource. Error: ++- `Client is not authorized to create a Default connectivity endpoint for {Name} in the Resource Group {ResourceGroupName}. 
This is a one-time operation that must be performed by an account with Owner or Contributor role to allow connections to target resource` -### Incorrect role assignments +Resolution: +- Ensure that you have the Owner or Contributor role on the resource or contact the owner/contributor of the resource to set up SSH connectivity. +### Incorrect role assignments to connect This issue occurs when the current user doesn't have the proper role assignment on the target resource, specifically a lack of `read` permissions. Possible errors: - `Unable to determine the target machine type as Azure VM or Arc Server` This issue occurs when the current user doesn't have the proper role assignment - `Request for Azure Relay Information Failed: (AuthorizationFailed) The client '\<user name\>' with object id '\<ID\>' does not have authorization to perform action 'Microsoft.HybridConnectivity/endpoints/listCredentials/action' over scope '/subscriptions/\<Subscription ID\>/resourceGroups/\<Resource Group\>/providers/Microsoft.HybridCompute/machines/\<Machine Name\>/providers/Microsoft.HybridConnectivity/endpoints/default' or the scope is invalid. If access was recently granted, please refresh your credentials.` Resolution:--- Ensure that you have Contributor or Owner permissions on the resource you're connecting to.-- If using Azure AD login, ensure you have the Virtual Machine User Login or the Virtual Machine Administrator Login roles+- Ensure that you have the Virtual Machine Local User Login role on the resource you're connecting to. If using Azure AD login, ensure you have the Virtual Machine User Login or the Virtual Machine Administrator Login roles and that the Azure AD SSH Login extension is installed on the Arc-enabled server. ### HybridConnectivity RP not registered Resolution: - Confirm success by running ```az provider show -n Microsoft.HybridConnectivity```, verify that `registrationState` is set to `Registered` - Restart the hybrid agent on the Arc-enabled server -## Disable SSH to Arc-enabled servers -To disable the functionality, complete the following actions: + ## Disable SSH to Arc-enabled servers + + This functionality can be disabled by completing the following actions: ++ #### [Azure CLI](#tab/azure-cli) + + - Remove the SSH port and functionality from the Arc-enabled server: ```az rest --method delete --uri https://management.azure.com/subscriptions/<subscription>/resourceGroups/<resourcegroup>/providers/Microsoft.HybridCompute/machines/<arc enabled server name>/providers/Microsoft.HybridConnectivity/endpoints/default/serviceconfigurations/SSH?api-version=2023-03-15 --body '{\"properties\": {\"serviceName\": \"SSH\", \"port\": \"22\"}}'``` ++ - Delete the default connectivity endpoint: ```az rest --method delete --uri https://management.azure.com/subscriptions/<subscription>/resourceGroups/<resourcegroup>/providers/Microsoft.HybridCompute/machines/<arc enabled server name>/providers/Microsoft.HybridConnectivity/endpoints/default?api-version=2023-03-15``` -- Remove the SSH port from the allowed incoming ports: ```azcmagent config set incomingconnections.ports <other open ports,...>```-- Delete the default connectivity endpoint: ```az rest --method delete --uri https://management.azure.com/subscriptions/<subscription>/resourceGroups/<resourcegroup>/providers/Microsoft.HybridCompute/machines/<Arc-enabled server name>/providers/Microsoft.HybridConnectivity/endpoints/default?api-version=2021-10-06-preview```+#### [Azure PowerShell](#tab/azure-powershell) ++ - Remove the SSH port and 
functionality from the Arc-enabled server: ```Invoke-AzRestMethod -Method delete -Path /subscriptions/<subscription>/resourceGroups/<resourcegroup>/providers/Microsoft.HybridCompute/machines/<arc enabled server name>/providers/Microsoft.HybridConnectivity/endpoints/default/serviceconfigurations/SSH?api-version=2023-03-15 -Payload '{"properties": {"serviceName": "SSH", "port": "22"}}'``` ++ - Delete the default connectivity endpoint: ```Invoke-AzRestMethod -Method delete -Path /subscriptions/<subscription>/resourceGroups/<resourcegroup>/providers/Microsoft.HybridCompute/machines/<arc enabled server name>/providers/Microsoft.HybridConnectivity/endpoints/default?api-version=2023-03-15``` ++ ## Next steps - Learn about SSH access to [Azure Arc-enabled servers](ssh-arc-overview.md). - Learn about troubleshooting [agent connection issues](troubleshoot-agent-onboard.md).+ |
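As a quick verification step related to the enable/disable commands above: before changing the SSH service configuration, it can help to read back what's currently set. This sketch assumes standard ARM read semantics on the same `serviceconfigurations/SSH` endpoint used by the PUT and DELETE calls above; the GET call itself isn't shown in the source article, and the placeholders in angle brackets are yours to fill in.

```azurecli
# Sketch (assumed GET support on the same ARM resource used above):
# read back the current SSH service configuration for an Arc-enabled server.
az rest --method get --uri "https://management.azure.com/subscriptions/<subscription>/resourceGroups/<resourcegroup>/providers/Microsoft.HybridCompute/machines/<arc enabled server name>/providers/Microsoft.HybridConnectivity/endpoints/default/serviceconfigurations/SSH?api-version=2023-03-15"
```

If the call returns a `serviceName` of `SSH` and the expected `port`, the functionality is enabled on that port.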
azure-cache-for-redis | Cache How To Monitor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-monitor.md | In contrast, for clustered caches, we recommend using the metrics with the suffi - The amount of data written to the cache in Megabytes per second (MB/s) during the specified reporting interval. This value is derived from the network interface cards that support the virtual machine that hosts the cache and isn't Redis specific. This value corresponds to the network bandwidth of data sent to the cache from the client. - Connected Clients - The number of client connections to the cache during the specified reporting interval. This number maps to `connected_clients` from the Redis INFO command. Once the [connection limit](cache-configure.md#default-redis-server-configuration) is reached, later attempts to connect to the cache fail. Even if there are no active client applications, there may still be a few instances of connected clients because of internal processes and connections.+- Connected Clients Using AAD Token (preview) + - The number of client connections to the cache authenticated by using an Azure AD token during the specified reporting interval. - Connections Created Per Second - The number of instantaneous connections created per second on the cache via port 6379 or 6380 (SSL). This metric can help identify whether clients are frequently disconnecting and reconnecting, which can cause higher CPU usage and Redis Server Load. This metric isn't available in Enterprise or Enterprise Flash tier caches. - Connections Closed Per Second In contrast, for clustered caches, we recommend using the metrics with the suffi - **RDB** – when there's an issue related to RDB persistence - **Import** – when there's an issue related to Import RDB - **Export** – when there's an issue related to Export RDB+ - **AADAuthenticationFailure** (preview) - when there's an authentication failure using an Azure AD access token + - **AADTokenExpired** (preview) - when an Azure AD access token used for authentication isn't renewed and expires. - Evicted Keys - The number of items evicted from the cache during the specified reporting interval because of the `maxmemory` limit. - This number maps to `evicted_keys` from the Redis INFO command. |
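To make the metrics above easier to work with outside the portal, here's a minimal sketch of pulling one of them from the CLI. The metric name `connectedclients` and the placeholder resource ID are assumptions; list the metric definitions first if the name doesn't match what your cache's tier exposes.

```azurecli
# Sketch: enumerate available metric names for the cache, then chart
# Connected Clients at one-minute resolution (placeholders are yours to fill).
az monitor metrics list-definitions --resource <cache-resource-id>

az monitor metrics list --resource <cache-resource-id> \
  --metric "connectedclients" \
  --interval PT1M \
  --aggregation Maximum
```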
azure-functions | Functions Host Json | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-host-json.md | Title: host.json reference for Azure Functions 2.x description: Reference documentation for the Azure Functions host.json file with the v2 runtime. Previously updated : 11/16/2022 Last updated : 07/10/2023 # host.json reference for Azure Functions 2.x and later Controls the logging behaviors of the function app, including Application Insigh |Property |Default | Description | ||||-|fileLoggingMode|debugOnly|Defines what level of file logging is enabled. Options are `never`, `always`, `debugOnly`. | +|fileLoggingMode|debugOnly|Determines the file logging behavior when running in Azure. Options are `never`, `always`, and `debugOnly`. This setting isn't used when running locally. When possible, you should use Application Insights when debugging your functions in Azure. Using `always` negatively impacts your app's cold start behavior and data throughput. The default `debugOnly` setting generates log files when you are debugging using the Azure portal. | |logLevel|n/a|Object that defines the log category filtering for functions in the app. This setting lets you filter logging for specific functions. For more information, see [Configure log levels](configure-monitoring.md#configure-log-levels). | |console|n/a| The [console](#console) logging setting. | |applicationInsights|n/a| The [applicationInsights](#applicationinsights) setting. | |
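For context, `fileLoggingMode` sits under the `logging` section of host.json. A minimal sketch, with the default values discussed above:

```json
{
  "version": "2.0",
  "logging": {
    "fileLoggingMode": "debugOnly",
    "logLevel": {
      "default": "Information"
    }
  }
}
```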
azure-monitor | Action Groups | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/action-groups.md | If your primary email doesn't receive notifications, configure the email address You may have a limited number of email actions per action group. To check which limits apply to your situation, see [Azure Monitor service limits](../service-limits.md). +> [!NOTE] +> +> Action Groups uses two email providers to ensure email notification delivery. The primary provider is resilient and fast but occasionally suffers outages; when that happens, the secondary provider handles email requests. The secondary provider is only a fallback, and because the email templates differ between the two providers, an email sent through the secondary provider may have slightly different formatting and content. Maintaining parity across the two systems isn't feasible. + When you set up the Resource Manager role: 1. Assign an entity of type **User** to the role. |
azure-monitor | Alerts Create New Alert Rule | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-create-new-alert-rule.md | -# Create a new alert rule +# Create or edit an alert rule -This article shows you how to create an alert rule. To learn more about alerts, see the [alerts overview](alerts-overview.md). +This article shows you how to create a new alert rule or edit an existing alert rule. To learn more about alerts, see the [alerts overview](alerts-overview.md). You create an alert rule by combining: - The resources to be monitored. - The signal or data from the resource. - Conditions. -You then define these elements for the resulting alert actions by using: +You then define these elements for the alert actions: - [Action groups](./action-groups.md)+ - [Alert processing rules](alerts-action-rules.md) Alerts triggered by these alert rules contain a payload that uses the [common alert schema](alerts-common-schema.md).-## Create a new alert rule in the Azure portal +## Create or edit an alert rule in the Azure portal ++There are several ways that you can create a new alert rule. ++To create a new alert rule from the portal home page: 1. In the [portal](https://portal.azure.com/), select **Monitor** > **Alerts**.-1. Open the **+ Create** menu and select **Alert rule**. +1. Open the **+ Create** menu, and select **Alert rule**. - :::image type="content" source="media/alerts-create-new-alert-rule/alerts-create-new-alert-rule.png" alt-text="Screenshot that shows steps to create a new alert rule."::: + :::image type="content" source="media/alerts-create-new-alert-rule/alerts-create-new-alert-rule.png" alt-text="Screenshot that shows steps to create a new alert rule."::: -### Select a scope for the alert rule +To create a new alert rule from a specific resource: -1. On the **Select a resource** pane, set the scope for your alert rule. You can filter by **subscription**, **resource type**, or **resource location**. +1. In the [portal](https://portal.azure.com/), navigate to the resource. +1. Select **Alerts** from the left pane, and then select **+ Create** > **Alert rule**. - > [!NOTE] - > If you select a Log analytics workspace resource, keep in mind that if the workspace receives telemetry from resources in more than one subscription, alerts are sent about those resources from different subscriptions. + :::image type="content" source="media/alerts-create-new-alert-rule/alerts-create-new-alert-rule-2.png" alt-text="Screenshot that shows steps to create a new alert rule from a selected resource."::: - :::image type="content" source="media/alerts-create-new-alert-rule/alerts-select-resource.png" alt-text="Screenshot that shows the select resource pane for creating a new alert rule."::: +To edit an existing alert rule: +1. In the [portal](https://portal.azure.com/), either from the home page or from a specific resource, select **Alerts** from the left pane. +1. Select **Alert rules**. +1. Select the alert rule you want to edit, and then select **Edit**. ++ :::image type="content" source="media/alerts-create-new-alert-rule/alerts-edit-alert-rule.png" alt-text="Screenshot that shows steps to edit an existing alert rule."::: +1. Select any of the tabs for the alert rule to edit the settings. +### Set the alert rule scope ++1. On the **Select a resource** pane, set the scope for your alert rule. You can filter by **subscription**, **resource type**, or **resource location**. 1. Select **Apply**. 1. Select **Next: Condition** at the bottom of the page. 
-### Set the conditions for the alert rule + :::image type="content" source="media/alerts-create-new-alert-rule/alerts-select-resource.png" alt-text="Screenshot that shows the select resource pane for creating a new alert rule."::: +### Set the alert rule conditions -1. On the **Condition** tab, when you select the **Signal name** field, the most commonly used signals are displayed in the drop-down list. Select one of these popular signals, or select **See all signals** if you want to choose a different signal for the condition. +1. On the **Condition** tab, when you select the **Signal name** field, the most commonly used signals are displayed in the drop-down list. Select one of these popular signals, or select **See all signals** if you want to choose a different signal for the condition. :::image type="content" source="media/alerts-create-new-alert-rule/alerts-popular-signals.png" alt-text="Screenshot that shows popular signals when creating an alert rule."::: Alerts triggered by these alert rules contain a payload that uses the [common al |Aggregation type|Select the aggregation function to apply on the data points: Sum, Count, Average, Min, or Max.| |Threshold value|If you selected a **static** threshold, enter the threshold value for the condition logic.| |Unit|If the selected metric signal supports different units, such as bytes, KB, MB, and GB, and if you selected a **static** threshold, enter the unit for the condition logic.|- |Threshold sensitivity|If you selected a **dynamic** threshold, enter the sensitivity level. The sensitivity level affects the amount of deviation from the metric series pattern that's required to trigger an alert. <br> - **High**: Thresholds are tight and close to the metric series pattern. An alert rule is triggered on the smallest deviation, resulting in more alerts. <br> - **Medium**: Thresholds are less tight and more balanced. There will be fewer alerts than with high sensitivity (default). <br> - **Low**: Thresholds are loose, allowing greater deviation from the metric series pattern. Alert rules are only triggered on large deviations, resulting in fewer alerts.| + |Threshold sensitivity|If you selected a **dynamic** threshold, enter the sensitivity level. The sensitivity level affects the amount of deviation from the metric series pattern that's required to trigger an alert. <br> - **High**: Thresholds are tight and close to the metric series pattern. An alert rule is triggered on the smallest deviation, resulting in more alerts. <br> - **Medium**: Thresholds are less tight and more balanced. There are fewer alerts than with high sensitivity (default). <br> - **Low**: Thresholds are loose, allowing greater deviation from the metric series pattern. Alert rules are only triggered on large deviations, resulting in fewer alerts.| |Aggregation granularity| Select the interval that's used to group the data points by using the aggregation type function. Choose an **Aggregation granularity** (period) that's greater than the **Frequency of evaluation** to reduce the likelihood of missing the first evaluation period of an added time series.| |Frequency of evaluation|Select how often the alert rule is to be run. Select a frequency that's smaller than the aggregation granularity to generate a sliding window for the evaluation.| Alerts triggered by these alert rules contain a payload that uses the [common al 1. Select **Done**. 
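The portal fields above (signal, operator, threshold, aggregation type, granularity, frequency) map closely to the Azure CLI. A minimal sketch for a static-threshold metric alert, with placeholder resource names: Average aggregation over 5-minute windows, evaluated every minute.

```azurecli
# Sketch: static threshold, avg aggregation, 5-minute aggregation granularity,
# 1-minute evaluation frequency (names and IDs are placeholders).
az monitor metrics alert create \
  --name cpu-over-80 \
  --resource-group <resource-group> \
  --scopes <vm-resource-id> \
  --condition "avg Percentage CPU > 80" \
  --window-size 5m \
  --evaluation-frequency 1m
```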
#### [Log alert](#tab/log) - > [!NOTE] - > If you're creating a new log alert rule, note that the current alert rule wizard is different from the earlier experience. For more information, see [Changes to the log alert rule creation experience](#changes-to-the-log-alert-rule-creation-experience). - 1. On the **Logs** pane, write a query that returns the log events for which you want to create an alert. - To use one of the predefined alert rule queries, expand the **Schema and filter** pane on the left of the **Logs** pane. Then select the **Queries** tab, and select one of the queries. - :::image type="content" source="media/alerts-create-new-alert-rule/alerts-log-rule-query-pane.png" alt-text="Screenshot that shows the Query pane when creating a new log alert rule."::: + To use one of the predefined alert rule queries, expand the **Schema and filter** pane on the left of the **Logs** pane. Then select the **Queries** tab, and select one of the queries. + 1. (Optional) If you're querying an ADX cluster, Log Analytics can't automatically identify the column with the event timestamp, so we recommend that you add a time range filter to the query. For example: ```kusto adx(cluster).table | where MyTS >= ago(5m) and MyTS <= now() ``` ++ :::image type="content" source="media/alerts-create-new-alert-rule/alerts-logs-conditions-tab.png" alt-text="Screenshot that shows the Condition tab when creating a new log alert rule."::: + 1. Select **Run** to run the query. 1. The **Preview** section shows you the query results. When you're finished editing your query, select **Continue Editing Alert**. 1. The **Condition** tab opens populated with your log query. By default, the rule counts the number of results in the last five minutes. If the system detects summarized query results, the rule is automatically updated with that information. - :::image type="content" source="media/alerts-create-new-alert-rule/alerts-logs-conditions-tab.png" alt-text="Screenshot that shows the Condition tab when creating a new log alert rule."::: - 1. In the **Measurement** section, select values for these fields: + :::image type="content" source="media/alerts-create-new-alert-rule/alerts-log-measurements.png" alt-text="Screenshot that shows the Measurement tab when creating a new log alert rule."::: + |Field |Description | |||- |Measure|Log alerts can measure two different things, which can be used for different monitoring scenarios:<br> **Table rows**: The number of rows returned can be used to work with events such as Windows event logs, Syslog, and application exceptions. <br> **Calculation of a numeric column**: Calculations based on any numeric column can be used to include any number of resources. An example is CPU percentage. | + |Measure|Log alerts can measure two different things, which can be used for different monitoring scenarios:<br> **Table rows**: The number of rows returned can be used to work with events such as Windows event logs, Syslog, and application exceptions. <br>**Calculation of a numeric column**: Calculations based on any numeric column can be used to include any number of resources. An example is CPU percentage. | |Aggregation type| The calculation performed on multiple records to aggregate them to one numeric value by using the aggregation granularity. Examples are Total, Average, Minimum, or Maximum. 
| |Aggregation granularity| The interval for aggregating multiple records to one numeric value.| - :::image type="content" source="media/alerts-create-new-alert-rule/alerts-log-measurements.png" alt-text="Screenshot that shows the Measurement tab when creating a new log alert rule."::: -- 1. (Optional) In the **Split by dimensions** section, you can use dimensions to monitor the values of multiple instances of a resource with one rule. Splitting by dimensions allows you to create resource-centric alerts at scale for a subscription or resource group. When you split by dimensions, alerts are split into separate alerts by grouping combinations of numerical or string columns to monitor for the same condition on multiple Azure resources. For example, you can monitor CPU usage on multiple instances running your website or app. Each instance is monitored individually. Notifications are sent for each instance. + + 1. <a name="dimensions"></a>(Optional) In the **Split by dimensions** section, you can use dimensions to help provide context for the triggered alert. - Splitting on the **Azure Resource ID** column makes the specified resource the target of the alert. + :::image type="content" source="media/alerts-create-new-alert-rule/alerts-create-log-rule-dimensions.png" alt-text="Screenshot that shows the splitting by dimensions section of a new log alert rule."::: - If you select more than one dimension value, each time series that results from the combination triggers its own alert and is charged separately. The alert payload includes the combination that triggered the alert. + Dimensions are columns from your query results that contain additional data. When you use dimensions, the alert rule groups the query results by the dimension values and evaluates the results of each group separately. If the condition is met, the rule fires an alert for that group. The alert payload includes the combination that triggered the alert. - You can select up to six more splittings for any columns that contain text or numbers. - - > [!NOTE] - > Dimensions can **only** be number or string columns. If for example you want to use a dynamic column as a dimension, you need to convert it to a string first. + You can apply up to six dimensions per alert rule. Dimensions can only be string or numeric columns. If you want to use a column that isn't a number or string type as a dimension, you must convert it to a string or numeric value in your query. If you select more than one dimension value, each time series that results from the combination triggers its own alert and is charged separately. - You can also decide *not* to split when you want a condition applied to multiple resources in the scope. An example would be if you want to fire an alert if at least five machines in the resource group scope have CPU usage over 80 percent. + For example: + - You could use dimensions to monitor CPU usage on multiple instances running your website or app. Each instance is monitored individually, and notifications are sent for each instance where the CPU usage exceeds the configured value. + - You could decide not to split by dimensions when you want a condition applied to multiple resources in the scope. For example, you wouldn't use dimensions if you want to fire an alert if at least five machines in the resource group scope have CPU usage above the configured value. Select values for these fields: + - **Resource ID column**: In general, if your alert rule scope is a workspace, the alerts are fired on the workspace. 
If you want a separate alert for each affected Azure resource, you can: + - use the ARM **Azure Resource ID** column as a dimension + - specify it as a dimension in the Azure Resource ID property. This makes the resource returned by your query the target of the alert, so alerts fire on the returned resource, such as a virtual machine or a storage account, rather than on the workspace. When you use this option, if the workspace gets data from resources in more than one subscription, alerts can be triggered on resources from a subscription that is different from the alert rule subscription. + |Field |Description | |||- |Resource ID column|Splitting on the **Azure Resource ID** column makes the specified resource the target of the alert. If detected, the **ResourceID** column is selected automatically and changes the context of the fired alert to the record's resource. | |Dimension name|Dimensions can be either number or string columns. Dimensions are used to monitor specific time series and provide context to a fired alert.| |Operator|The operator used on the dimension name and value. | |Dimension values|The dimension values are based on data from the last 48 hours. Select **Add custom value** to add custom dimension values. | |Include all future values| Select this field to include any future values added to the selected dimension. | - :::image type="content" source="media/alerts-create-new-alert-rule/alerts-create-log-rule-dimensions.png" alt-text="Screenshot that shows the splitting by dimensions section of a new log alert rule."::: - 1. In the **Alert logic** section, select values for these fields: + :::image type="content" source="media/alerts-create-new-alert-rule/alerts-create-log-rule-logic.png" alt-text="Screenshot that shows the Alert logic section of a new log alert rule."::: + |Field |Description | ||| |Operator| The query results are transformed into a number. In this field, select the operator to use to compare the number against the threshold.| |Threshold value| A number value for the threshold. | |Frequency of evaluation|How often the query is run. Can be set from a minute to a day.| - :::image type="content" source="media/alerts-create-new-alert-rule/alerts-create-log-rule-logic.png" alt-text="Screenshot that shows the Alert logic section of a new log alert rule."::: 1. (Optional) In the **Advanced options** section, you can specify the number of failures and the alert evaluation period required to trigger an alert. For example, if you set **Aggregation granularity** to 5 minutes, you can specify that you only want to trigger an alert if there were three failures (15 minutes) in the last hour. Your application business policy determines this setting. + :::image type="content" source="media/alerts-create-new-alert-rule/alerts-rule-preview-advanced-options.png" alt-text="Screenshot that shows the Advanced options section of a new log alert rule."::: + Select values for these fields under **Number of violations to trigger the alert**: |Field |Description |
For example, even if the query text contains **ago(7d)**, the query only scans up to two days of data.<br> If the query requires more data than the alert evaluation, and there's no **ago** command in the query, you can change the time range manually.| - :::image type="content" source="media/alerts-create-new-alert-rule/alerts-rule-preview-advanced-options.png" alt-text="Screenshot that shows the Advanced options section of a new log alert rule."::: - > [!NOTE] > If you or your administrator assigned the Azure Policy **Azure Log Search Alerts over Log Analytics workspaces should use customer-managed keys**, you must select **Check workspace linked storage**. If you don't, the rule creation will fail because it won't meet the policy requirements. Alerts triggered by these alert rules contain a payload that uses the [common al From this point on, you can select the **Review + create** button at any time. -### Set the actions for the alert rule +### Set the alert rule actions 1. On the **Actions** tab, select or create the required [action groups](./action-groups.md). Alerts triggered by these alert rules contain a payload that uses the [common al The format for extracting a dynamic value from the alert payload is: `${<path to schema field>}`. For example: `${data.essentials.monitorCondition}`. - Use the [common alert schema](alerts-common-schema.md) format to specify the field in the payload, whether or not the action groups configured for the alert rule use the common schema. + Use the [common alert schema](alerts-common-schema.md) format to specify the field in the payload, whether or not the action groups configured for the alert rule use the common schema. ++ :::image type="content" source="media/alerts-create-new-alert-rule/alerts-rule-actions-tab.png" alt-text="Screenshot that shows the Actions tab when creating a new alert rule."::: In the following examples, values in the **custom properties** are used to incorporate data from a payload that uses the common alert schema: Alerts triggered by these alert rules contain a payload that uses the [common al - "Alert Resolved reason: Percentage CPU GreaterThan5 Resolved. The value is 3.585" - "Alert Fired reason": "Percentage CPU GreaterThan5 Fired. The value is 10.585" - :::image type="content" source="media/alerts-create-new-alert-rule/alerts-rule-actions-tab.png" alt-text="Screenshot that shows the Actions tab when creating a new alert rule."::: -- > [!NOTE] > The [common schema](alerts-common-schema.md) overwrites custom configurations. Therefore, you can't use both custom properties and the common schema for log alerts. -### Set the details for the alert rule +### Set the alert rule details 1. On the **Details** tab, define the **Project details**. - Select the **Subscription**. Alerts triggered by these alert rules contain a payload that uses the [common al #### [Metric alert](#tab/metric) + :::image type="content" source="media/alerts-create-new-alert-rule/alerts-metric-rule-details-tab.png" alt-text="Screenshot that shows the Details tab when creating a new alert rule."::: + 1. Select the **Severity**. 1. Enter values for the **Alert rule name** and the **Alert rule description**. 1. 
(Optional) If you're creating a metric alert rule that monitors a custom metric with the scope defined as one of the following regions and you want to make sure that the data processing for the alert rule takes place within that region, you can select to process the alert rule in one of these regions: Alerts triggered by these alert rules contain a payload that uses the [common al |Field |Description | ||| |Enable upon creation| Select for the alert rule to start running as soon as you're done creating it.|- |Automatically resolve alerts (preview) |Select to make the alert stateful. When an alert is stateful, the alert is resolved when the condition is no longer met.<br> If you don't select this checkbox, metric alerts are stateless. Stateless alerts fire each time the condition is met, even if alert already fired.<br> The frequency of notifications for stateless metric alerts differs based on the alert rule's configured frequency:<br>**Alert frequency of less than 5 minutes**: While the condition continues to be met, a notification is sent somewhere between one and six minutes.<br>**Alert frequency of more than 5 minutes**: While the condition continues to be met, a notification is sent between the configured frequency and double the frequency. For example, for an alert rule with a frequency of 15 minutes, a notification is sent somewhere between 15 to 30 minutes.| -- :::image type="content" source="media/alerts-create-new-alert-rule/alerts-metric-rule-details-tab.png" alt-text="Screenshot that shows the Details tab when creating a new alert rule."::: + |Automatically resolve alerts (preview) |Select to make the alert stateful. When an alert is stateful, the alert is resolved when the condition is no longer met.<br> If you don't select this checkbox, metric alerts are stateless. Stateless alerts fire each time the condition is met, even if alert already fired.<br> The frequency of notifications for stateless metric alerts differs based on the alert rule's configured frequency:<br>**Alert frequency of less than 5 minutes**: While the condition continues to be met, a notification is sent somewhere between one and six minutes.<br>**Alert frequency of more than 5 minutes**: While the condition continues to be met, a notification is sent between the configured frequency and double the value of the frequency. For example, for an alert rule with a frequency of 15 minutes, a notification is sent somewhere between 15 to 30 minutes.| #### [Log alert](#tab/log) + :::image type="content" source="media/alerts-create-new-alert-rule/alerts-log-rule-details-tab.png" alt-text="Screenshot that shows the Details tab when creating a new log alert rule."::: + 1. Select the **Severity**. 1. Enter values for the **Alert rule name** and the **Alert rule description**. 1. Select the **Region**. Alerts triggered by these alert rules contain a payload that uses the [common al |System assigned managed identity| Azure creates a new, dedicated identity for this alert rule. This identity has no permissions and is automatically deleted when the rule is deleted. After creating the rule, you must assign permissions to this identity to access the workspace and data sources needed for the query. For more information about assigning permissions, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md). 
| |User assigned managed identity|Before you create the alert rule, you [create an identity](../../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md#create-a-user-assigned-managed-identity) and assign it appropriate permissions for the log query. This is a regular Azure identity. You can use one identity in multiple alert rules. The identity isn't deleted when the rule is deleted. When you select this type of identity, a pane opens for you to select the associated identity for the rule. | - :::image type="content" source="media/alerts-create-new-alert-rule/alerts-log-rule-details-tab.png" alt-text="Screenshot that shows the Details tab when creating a new log alert rule."::: - 1. (Optional) In the **Advanced options** section, you can set several options: |Field |Description | Alerts triggered by these alert rules contain a payload that uses the [common al #### [Activity log alert](#tab/activity-log) + :::image type="content" source="media/alerts-create-new-alert-rule/alerts-activity-log-rule-details-tab.png" alt-text="Screenshot that shows the Actions tab when creating a new activity log alert rule."::: + 1. Enter values for the **Alert rule name** and the **Alert rule description**. 1. Select **Enable upon creation** for the alert rule to start running as soon as you're done creating it. - :::image type="content" source="media/alerts-create-new-alert-rule/alerts-activity-log-rule-details-tab.png" alt-text="Screenshot that shows the Actions tab when creating a new activity log alert rule."::: - #### [Resource Health alert](#tab/resource-health) + :::image type="content" source="media/alerts-create-new-alert-rule/alerts-activity-log-rule-details-tab.png" alt-text="Screenshot that shows the Actions tab when creating a new activity log alert rule."::: + 1. Enter values for the **Alert rule name** and the **Alert rule description**. 1. Select **Enable upon creation** for the alert rule to start running as soon as you're done creating it.+ #### [Service Health alert](#tab/service-health) :::image type="content" source="media/alerts-create-new-alert-rule/alerts-activity-log-rule-details-tab.png" alt-text="Screenshot that shows the Actions tab when creating a new activity log alert rule.":::- #### [Service Health alert](#tab/service-health) 1. Enter values for the **Alert rule name** and the **Alert rule description**. 1. Select **Enable upon creation** for the alert rule to start running as soon as you're done creating it.-- :::image type="content" source="media/alerts-create-new-alert-rule/alerts-activity-log-rule-details-tab.png" alt-text="Screenshot that shows the Actions tab when creating a new activity log alert rule."::: ### Finish creating the alert rule You can create a new alert rule using the [Azure CLI](/cli/azure/get-started-wit ``` #### [Log alert](#tab/log) + Azure CLI support is only available for the `scheduledQueryRules` API version `2021-08-01` and later. Previous API versions can use the Azure Resource Manager CLI with templates. If you use the legacy [Log Analytics Alert API](./api-alerts.md), switch to use the CLI. [Learn more about switching](./alerts-log-api-switch.md). 
+ To create a log alert rule that monitors the count of system event errors: ```azurecli az monitor scheduled-query create -g {ResourceGroup} -n {nameofthealert} --scopes {vm_id} --condition "count \'union Event, Syslog | where TimeGenerated > ago(1h) | where EventLevelName == \"Error\" or SeverityLevel == \"err\"\' > 2" --description {descriptionofthealert} ``` - > [!NOTE] - > Azure CLI support is only available for the `scheduledQueryRules` API version `2021-08-01` and later. Previous API versions can use the Azure Resource Manager CLI with templates as described in the following sections. If you use the legacy [Log Analytics Alert API](./api-alerts.md), you must switch to use the CLI. [Learn more about switching](./alerts-log-api-switch.md). - #### [Activity log alert](#tab/activity-log) To create a new activity log alert rule, use the following commands: You can also create an activity log alert on future events similar to an activit If you want to make the alert rule more general, modify the scope and condition accordingly. See steps 3-9 in the section "Create or edit an alert rule in the Azure portal." -1. Follow the rest of the steps from [Create a new alert rule in the Azure portal](#create-a-new-alert-rule-in-the-azure-portal). --## Changes to the log alert rule creation experience --The current alert rule wizard is different from the earlier experience: --- Previously, search results were included in the payload of the triggered alert and its associated notifications. The email included only 10 rows from the unfiltered results while the webhook payload contained 1,000 unfiltered results. To get detailed context information about the alert so that you can decide on the appropriate action: - - We recommend using [Dimensions](alerts-types.md#narrow-the-target-using-dimensions). Dimensions provide the column value that fired the alert, which gives you context for why the alert fired and how to fix the issue. - - When you need to investigate in the logs, use the link in the alert to the search results in logs. - - If you need the raw search results or for any other advanced customizations, use Azure Logic Apps. -- The new alert rule wizard doesn't support customization of the JSON payload.- - Use custom properties in the [new API](/rest/api/monitor/scheduledqueryrule-2021-08-01/scheduled-query-rules/create-or-update#actions) to add static parameters and associated values to the webhook actions triggered by the alert. - - For more advanced customizations, use Logic Apps. -- The new alert rule wizard doesn't support customization of the email subject.- - Customers often use the custom email subject to indicate the resource on which the alert fired, instead of using the Log Analytics workspace. Use the [new API](alerts-unified-log.md#split-by-alert-dimensions) to trigger an alert of the desired resource by using the resource ID column. - - For more advanced customizations, use Logic Apps. +1. Follow the rest of the steps from [Create a new alert rule in the Azure portal](#create-or-edit-an-alert-rule-in-the-azure-portal). ## Next steps [View and manage your alert instances](alerts-manage-alert-instances.md) |
azure-monitor | Alerts Dynamic Thresholds | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-dynamic-thresholds.md | We recommend configuring alert rules with dynamic thresholds on these metrics: - Virtual machine CPU percentage - Application Insights HTTP request execution time -When configuring alert rules in the [Azure portal](https://portal.azure.com), follow the procedure to [Create a new alert rule in the Azure portal](alerts-create-new-alert-rule.md#create-a-new-alert-rule-in-the-azure-portal), with these settings +When configuring alert rules in the [Azure portal](https://portal.azure.com), follow the procedure to [Create a new alert rule in the Azure portal](alerts-create-new-alert-rule.md#create-or-edit-an-alert-rule-in-the-azure-portal), with these settings: 1. In the **Conditions** tab: 1. In the **Thresholds** field, select **Dynamic**. 1. In the **Aggregation type**, we recommend that you don't select **Maximum**. |
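For reference, the same dynamic-threshold recommendation can be expressed through the Azure CLI. In the condition syntax, `dynamic medium 2 of 4` means Medium sensitivity, firing when 2 of the last 4 evaluated windows violate the dynamic threshold. A sketch with placeholder names:

```azurecli
# Sketch: dynamic threshold, Medium sensitivity, fire when 2 of the last
# 4 evaluation windows violate the threshold (names are placeholders).
# avg aggregation is used here, per the recommendation not to use Maximum.
az monitor metrics alert create \
  --name cpu-dynamic-threshold \
  --resource-group <resource-group> \
  --scopes <vm-resource-id> \
  --condition "avg Percentage CPU > dynamic medium 2 of 4" \
  --window-size 5m \
  --evaluation-frequency 5m
```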
azure-monitor | Alerts Log Api Switch | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-log-api-switch.md | Title: Upgrade legacy rules management to the current Azure Monitor Log Alerts API description: Learn how to switch log alerts management to the ScheduledQueryRules API Previously updated : 2/23/2022 Last updated : 07/09/2023 # Upgrade to the Log Alerts API from the legacy Log Analytics alerts API In the past, users used the [legacy Log Analytics Alert API](/azure/azure-monito - Ability to create a [cross workspace log alert](/azure/azure-monitor/logs/cross-workspace-query) that spans several external resources like Log Analytics workspaces or Application Insights resources for switched rules. - Users can specify dimensions to split the alerts for switched rules. - Log alerts have an extended period of up to two days of data (previously limited to one day) for switched rules. ++## Changes to the log alert rule creation experience ++The current alert rule wizard is different from the earlier experience: ++- Previously, search results were included in the payload of the triggered alert and its associated notifications. The email included only 10 rows from the unfiltered results while the webhook payload contained 1,000 unfiltered results. To get detailed context information about the alert so that you can decide on the appropriate action: + - We recommend using [dimensions](alerts-types.md#narrow-the-target-using-dimensions). Dimensions provide the column value that fired the alert, which gives you context for why the alert fired and how to fix the issue. + - When you need to investigate in the logs, use the link in the alert to the search results in logs. + - If you need the raw search results or for any other advanced customizations, [use Azure Logic Apps](alerts-logic-apps.md). +- The new alert rule wizard doesn't support customization of the JSON payload. + - Use custom properties in the [new API](/rest/api/monitor/scheduledqueryrule-2021-08-01/scheduled-query-rules/create-or-update#actions) to add static parameters and associated values to the webhook actions triggered by the alert. + - For more advanced customizations, [use Azure Logic Apps](alerts-logic-apps.md). +- The new alert rule wizard doesn't support customization of the email subject. + - Customers often use the custom email subject to indicate the resource on which the alert fired, instead of using the Log Analytics workspace. Use the [new API](/rest/api/monitor/scheduledqueryrule-2021-08-01/scheduled-query-rules/create-or-update#actions) to trigger an alert of the desired resource by using the resource ID column. + - For more advanced customizations, [use Azure Logic Apps](alerts-logic-apps.md). + ## Impact If the Log Analytics workspace wasn't switched, the response is: ## Next steps -- Learn about the [Azure Monitor - Log Alerts](/azure/azure-monitor/alerts/alerts-unified-log).+- Learn about the [Azure Monitor - Log Alerts](/azure/azure-monitor/alerts/alerts-types). - Learn how to [manage your log alerts using the API](/azure/azure-monitor/alerts/alerts-log-create-templates). - Learn how to [manage log alerts using PowerShell](/azure/azure-monitor/alerts/alerts-manage-alerts-previous-version#manage-log-alerts-by-using-powershell). - Learn more about the [Azure Alerts experience](/azure/azure-monitor/alerts/alerts-overview). |
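As a quick check of whether a workspace was already switched, the article's switch-status API can be called with `az rest`. The `alertsversion` endpoint path and api-version below are assumptions based on this article's API and should be verified against the full article; a switched workspace is expected to report version 2.

```azurecli
# Sketch (endpoint and api-version are assumptions; verify in the article):
# query the switch status of a Log Analytics workspace.
az rest --method get --uri "https://management.azure.com/subscriptions/<subscription>/resourceGroups/<resourcegroup>/providers/Microsoft.OperationalInsights/workspaces/<workspace>/alertsversion?api-version=2017-04-26-preview"
```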
azure-monitor | Alerts Metric Multiple Time Series Single Rule | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-metric-multiple-time-series-single-rule.md | Title: Monitor multiple time series in a single metric alert rule description: Alert at scale by using a single alert rule for multiple time series. Previously updated : 2/23/2022 Last updated : 07/09/2023 For example: - **Target resource**: *VM-a* - Condition1- - **Signa**: *Percentage CPU* + - **Signal**: *Percentage CPU* - **Operator**: *Greater Than* - **Threshold**: *80* - Condition2 |
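To illustrate the single-rule, multiple-conditions pattern above from the CLI: `az monitor metrics alert create` accepts repeated `--condition` arguments, and all conditions must be met for the rule to fire. A minimal sketch with placeholder names; the second condition below is illustrative only, since the article's Condition2 isn't shown in full here.

```azurecli
# Sketch: one alert rule, two conditions evaluated together on the same VM
# (names are placeholders; the Network In Total condition is illustrative).
az monitor metrics alert create \
  --name vm-a-multi-condition \
  --resource-group <resource-group> \
  --scopes <vm-a-resource-id> \
  --condition "avg Percentage CPU > 80" \
  --condition "total Network In Total > 500000000" \
  --window-size 5m \
  --evaluation-frequency 1m
```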
azure-monitor | Itsmc Dashboard | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-dashboard.md | Title: Investigate errors by using the ITSMC dashboard description: Learn how to use the IT Service Management Connector dashboard to investigate errors. Previously updated : 06/19/2022 Last updated : 07/09/2023 |
azure-monitor | App Map | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-map.md | Title: Application Map in Azure Application Insights | Microsoft Docs description: Monitor complex application topologies with Application Map and Intelligent view. Previously updated : 11/15/2022 Last updated : 07/10/2023 ms.devlang: csharp, java, javascript, python |
azure-monitor | Codeless Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/codeless-overview.md | Title: Autoinstrumentation for Azure Monitor Application Insights description: Overview of autoinstrumentation for Azure Monitor Application Insights codeless application performance management. Previously updated : 05/12/2023 Last updated : 07/10/2023 |
azure-monitor | Java Get Started Supplemental | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-get-started-supplemental.md | Title: Application Insights with containers description: This article shows you how to set-up Application Insights Previously updated : 06/19/2023 Last updated : 07/10/2023 ms.devlang: java |
azure-monitor | Java Standalone Config | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-config.md | Title: Configuration options - Azure Monitor Application Insights for Java description: This article shows you how to configure Azure Monitor Application Insights for Java. Previously updated : 06/19/2023 Last updated : 07/10/2023 ms.devlang: java |
azure-monitor | Monitor Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/monitor-functions.md | Title: Monitor applications running on Azure Functions with Application Insights description: Azure Monitor integrates with your Azure Functions application, allowing performance monitoring and quickly identifying problems. Previously updated : 06/23/2023 Last updated : 07/10/2023 |
azure-monitor | Opentelemetry Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-configuration.md | Title: Configure Azure Monitor OpenTelemetry for .NET, Java, Node.js, and Python applications description: This article provides configuration guidance for .NET, Java, Node.js, and Python applications. Previously updated : 06/23/2023 Last updated : 07/10/2023 ms.devlang: csharp, javascript, typescript, python |
azure-monitor | Opentelemetry Enable | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-enable.md | Title: Enable Azure Monitor OpenTelemetry for .NET, Java, Node.js, and Python applications description: This article provides guidance on how to enable Azure Monitor on applications by using OpenTelemetry. Previously updated : 06/22/2023 Last updated : 07/10/2023 ms.devlang: csharp, javascript, typescript, python |
azure-monitor | Sampling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sampling.md | Title: Telemetry sampling in Azure Application Insights | Microsoft Docs description: How to keep the volume of telemetry under control. Previously updated : 06/23/2023 Last updated : 07/10/2023 |
azure-monitor | Sdk Connection String | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sdk-connection-string.md | Title: Connection strings in Application Insights | Microsoft Docs description: This article shows how to use connection strings. Previously updated : 06/23/2023 Last updated : 07/10/2023 |
azure-monitor | Usage Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-overview.md | Title: Usage analysis with Application Insights | Azure Monitor description: Understand your users and what they do with your app. Previously updated : 06/23/2023 Last updated : 07/10/2023 |
azure-monitor | Log Analytics Workspace Health | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/log-analytics-workspace-health.md | To view your Log Analytics workspace health and set up health status alerts: :::image type="content" source="media/data-ingestion-time/log-analytics-workspace-latency-alert-rule.png" lightbox="media/data-ingestion-time/log-analytics-workspace-latency-alert-rule.png" alt-text="Screenshot that shows the Create alert rule wizard for Log Analytics workspace latency issues."::: - 1. Follow the rest of the steps in [Create a new alert rule in the Azure portal](../alerts/alerts-create-new-alert-rule.md#create-a-new-alert-rule-in-the-azure-portal). + 1. Follow the rest of the steps in [Create a new alert rule in the Azure portal](../alerts/alerts-create-new-alert-rule.md#create-or-edit-an-alert-rule-in-the-azure-portal). ## View Log Analytics workspace health metrics |
azure-monitor | Query Audit | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/query-audit.md | An audit record is created each time a query is run. If you send the data to a L |AzureAutomation|[Azure Automation.](../../automation/overview.md)| |AzureMonitorLogsConnector|[Azure Monitor Logs Connector](../../connectors/connectors-azure-monitor-logs.md).| |csharpsdk|[Log Analytics Query API.](../logs/api/overview.md)|-|Draft-Monitor|[Log alert creation in the Azure portal.](../alerts/alerts-create-new-alert-rule.md?tabs=metric#create-a-new-alert-rule-in-the-azure-portal)| +|Draft-Monitor|[Log alert creation in the Azure portal.](../alerts/alerts-create-new-alert-rule.md?tabs=log)| |Grafana|[Grafana connector.](../visualize/grafana-plugin.md)| |IbizaExtension|Experiences of Log Analytics in the Azure portal.| |infraInsights/container|[Container insights.](../containers/container-insights-overview.md)| |
azure-monitor | Profiler Troubleshooting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-troubleshooting.md | Title: Troubleshoot Application Insights Profiler description: Walk through troubleshooting steps and information to enable and use Application Insights Profiler. Previously updated : 05/11/2023 Last updated : 07/10/2023 If the data you're trying to view is older than two weeks, try limiting your tim Check that a firewall or proxies aren't blocking your access to [this webpage](https://gateway.azureserviceprofiler.net). -## Is Profiler running? +## Are you seeing timeouts, or do you need to check whether Profiler is running? Profiling data is uploaded only when it can be attached to a request that happened while Profiler was running. Profiler collects data for two minutes each hour. You can also trigger Profiler by [starting a profiling session](./profiler-settings.md#profile-now). Search for trace messages and custom events sent by Profiler to your Application - Profiler started and sent custom events when it detected requests that happened while Profiler was running. If the `ServiceProfilerSample` custom event is displayed, it means that a profile was captured and is available in the **Application Insights Performance** pane. - If no records are displayed, Profiler isn't running. Make sure you've [enabled Profiler on your Azure service](./profiler.md). + If no records are displayed, Profiler isn't running or has timed out. Make sure you've [enabled Profiler on your Azure service](./profiler.md). ## Double counting in parallel threads |
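One way to run the trace and custom-event search described above from the command line is through the `application-insights` CLI extension. A sketch, assuming a placeholder app ID; the exact query shape is an assumption, not taken from the article.

```azurecli
# Sketch: look for Profiler trace messages and the ServiceProfilerSample
# custom event over the last day.
# Requires the extension: az extension add -n application-insights
az monitor app-insights query \
  --app <application-insights-app-id> \
  --analytics-query "union traces, customEvents | where timestamp > ago(1d) | where message contains 'ServiceProfiler' or name == 'ServiceProfilerSample' | order by timestamp desc"
```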
azure-monitor | Snapshot Debugger Upgrade | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-debugger-upgrade.md | -To provide the best possible security for your data, Microsoft is moving away from TLS 1.0 and TLS 1.1 because these protocols are vulnerable to determined attackers. If you're using an older version of the site extension, it requires an upgrade to continue working. This article outlines the steps needed to upgrade your instance of Snapshot Debugger to the latest version. +> [!IMPORTANT] +> [Microsoft is moving away from TLS 1.0 and TLS 1.1](/lifecycle/announcements/transport-layer-security-1x-disablement) due to vulnerabilities. If you're using an older version of the site extension, you need to upgrade your instance of Snapshot Debugger to the latest version. Depending on how you enabled the Snapshot Debugger, you can follow two primary upgrade paths: -* Via site extension -* Via an SDK/NuGet added to your application --This article discusses both upgrade paths. +- Via site extension +- Via an SDK/NuGet added to your application -## Upgrade the site extension +# [Site extension](#tab/site-ext) > [!IMPORTANT]-> Older versions of Application Insights used a private site extension called *Application Insights extension for Azure App Service*. The current Application Insights experience is enabled by setting App Settings to light up a preinstalled site extension. +> Older versions of Application Insights used a private site extension called *Application Insights extension for Azure App Service*. +> The current Application Insights experience is enabled by setting App Settings to light up a preinstalled site extension. > To avoid conflicts, which might cause your site to stop working, delete the private site extension first. See step 4 in the following procedure. If you enabled the Snapshot Debugger by using the site extension, you can upgrade by following these steps: If you enabled the Snapshot Debugger by using the site extension, you can upgrad :::image type="content" source="./media/snapshot-debugger-upgrade/app-service-resource.png" alt-text="Screenshot that shows an individual App Service resource named DiagService01."::: -1. After you've moved to your resource, select the **Extensions** pane. Wait for the list of extensions to populate. +1. Select the **Extensions** pane. Wait for the list of extensions to populate. :::image type="content" source="./media/snapshot-debugger-upgrade/application-insights-site-extension-to-be-deleted.png" alt-text="Screenshot that shows App Service Extensions showing the Application Insights extension for Azure App Service installed."::: If you enabled the Snapshot Debugger by using the site extension, you can upgrad The site is now upgraded and is ready to use. -## Upgrade Snapshot Debugger by using SDK/NuGet -If the application is using a version of `Microsoft.ApplicationInsights.SnapshotCollector` earlier than version 1.3.1, you must upgrade it to a [newer version](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector) to continue working. +# [SDK/NuGet](#tab/sdk-nuget) ++If your application is using a version of `Microsoft.ApplicationInsights.SnapshotCollector` earlier than version 1.3.1, upgrade it to a [newer version](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector) to continue working. 
++++## Next steps ++- [Learn how to view snapshots](./snapshot-debugger-data.md) +- [Troubleshoot issues you encounter in Snapshot Debugger](./snapshot-debugger-troubleshoot.md) |
azure-monitor | Snapshot Debugger | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-debugger.md | Title: Application Insights Snapshot Debugger for .NET apps -description: Debug snapshots are automatically collected when exceptions are thrown in production .NET apps. + Title: Debug exceptions in .NET applications using Snapshot Debugger +description: Use Snapshot Debugger to automatically collect snapshots and debug exceptions in .NET apps. reviewer: cweining - Previously updated : 04/14/2023+ Last updated : 07/10/2023 -# Debug snapshots on exceptions in .NET apps +# Debug exceptions in .NET applications using Snapshot Debugger -When an exception occurs, you can automatically collect a debug snapshot from your live web application. The debug snapshot shows the state of source code and variables at the moment the exception was thrown. +With Snapshot Debugger, you can automatically collect a debug snapshot when an exception occurs in your live .NET application. The debug snapshot shows the state of source code and variables at the moment the exception was thrown. The Snapshot Debugger in [Application Insights](../app/app-insights-overview.md): The Snapshot Debugger in [Application Insights](../app/app-insights-overview.md) - Collects snapshots on your top-throwing exceptions. - Provides information you need to diagnose issues in production. -To use the Snapshot Debugger, you: +## How Snapshot Debugger works -- Include the [Snapshot Collector NuGet package](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector) in your application.-- Configure collection parameters in [`ApplicationInsights.config`](../app/configuration-with-applicationinsights-config.md).--## How snapshots work --The Snapshot Debugger is implemented as an [Application Insights telemetry processor](../app/configuration-with-applicationinsights-config.md#telemetry-processors-aspnet). When your application runs, the Snapshot Debugger telemetry processor is added to your application's system-generated logs pipeline. +The Snapshot Debugger is implemented as an [Application Insights telemetry processor](../app/configuration-with-applicationinsights-config.md#telemetry-processors-aspnet). When your application runs, the Snapshot Debugger telemetry processor is added to your application's system-generated logs pipeline. The Snapshot Debugger process is as follows: -Each time your application calls [TrackException](../app/asp-net-exceptions.md#exceptions), the Snapshot Debugger computes a problem ID from the type of exception being thrown and the throwing method. -Each time your application calls `TrackException`, a counter is incremented for the appropriate problem ID. When the counter reaches the `ThresholdForSnapshotting` value, the problem ID is added to a collection plan. --The Snapshot Debugger also monitors exceptions as they're thrown by subscribing to the [AppDomain.CurrentDomain.FirstChanceException](/dotnet/api/system.appdomain.firstchanceexception) event. When that event fires, the problem ID of the exception is computed and compared against the problem IDs in the collection plan. --If there's a match, a snapshot of the running process is created. The snapshot is assigned a unique identifier and the exception is stamped with that identifier. After the `FirstChanceException` handler returns, the thrown exception is processed as normal. Eventually, the exception reaches the `TrackException` method again. 
It's reported to Application Insights, along with the snapshot identifier. +1. Each time your application calls [`TrackException`](../app/asp-net-exceptions.md#exceptions): + 1. The Snapshot Debugger computes a problem ID from the type of exception being thrown and the throwing method. + 1. A counter is incremented for the appropriate problem ID. + 1. When the counter reaches the `ThresholdForSnapshotting` value, the problem ID is added to a collection plan. +1. The Snapshot Debugger also monitors exceptions as they're thrown by subscribing to the [`AppDomain.CurrentDomain.FirstChanceException`](/dotnet/api/system.appdomain.firstchanceexception) event. + 1. When this event fires, the problem ID of the exception is computed and compared against the problem IDs in the collection plan. +1. If there's a match between problem IDs, a snapshot of the running process is created. +1. The snapshot is assigned a unique identifier and the exception is stamped with that identifier. +1. After the `FirstChanceException` handler returns, the thrown exception is processed as normal. +1. Eventually, the exception reaches the `TrackException` method again. It's reported to Application Insights, along with the snapshot identifier. The main process continues to run and serve traffic to users with little interruption. Meanwhile, the snapshot is handed off to the Snapshot Uploader process. The Snapshot Uploader creates a minidump and uploads it to Application Insights along with any relevant symbol (*.pdb*) files. -Snapshot creation tips: - * A process snapshot is a suspended clone of the running process. - * Creating the snapshot takes about 10 milliseconds to 20 milliseconds. - * The default value for `ThresholdForSnapshotting` is 1. This value is also the minimum. Your app has to trigger the same exception *twice* before a snapshot is created. - * Set `IsEnabledInDeveloperMode` to `true` if you want to generate snapshots while you debug in Visual Studio. - * The snapshot creation rate is limited by the `SnapshotsPerTenMinutesLimit` setting. By default, the limit is one snapshot every 10 minutes. - * No more than 50 snapshots per day can be uploaded. +> [!TIP] +> Snapshot creation tips: +> - A process snapshot is a suspended clone of the running process. +> - Creating the snapshot takes about 10 milliseconds to 20 milliseconds. +> - The default value for `ThresholdForSnapshotting` is 1. This value is also the minimum. Your app has to trigger the same exception *twice* before a snapshot is created. +> - Set `IsEnabledInDeveloperMode` to `true` if you want to generate snapshots while you debug in Visual Studio. +> - The snapshot creation rate is limited by the `SnapshotsPerTenMinutesLimit` setting. By default, the limit is one snapshot every 10 minutes. +> - No more than 50 snapshots per day can be uploaded. ## Supported applications and environments The following environments are supported: If you enabled the Snapshot Debugger but you aren't seeing snapshots, see the [Troubleshooting guide](snapshot-debugger-troubleshoot.md). -## Required permissions +## Requirements -Access to snapshots is protected by Azure role-based access control. To inspect a snapshot, you must first be added to the [Application Insights Snapshot Debugger](../../role-based-access-control/role-assignments-portal.md) role. Subscription owners can assign this role to individual users or groups for the target **Application Insights Snapshot**. 
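As an illustration, this role assignment can be made with the Azure CLI. The following is a minimal sketch: the assignee and resource IDs are placeholders, and the scope assumes the `microsoft.insights/components` resource type for the target Application Insights resource.

```azurecli
az role assignment create \
    --assignee "user@contoso.com" \
    --role "Application Insights Snapshot Debugger" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/microsoft.insights/components/<application-insights-resource>"
```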
+### Packages and configurations ++- Include the [Snapshot Collector NuGet package](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector) in your application. +- Configure collection parameters in [`ApplicationInsights.config`](../app/configuration-with-applicationinsights-config.md). ++### Permissions ++Since access to snapshots is protected by Azure role-based access control, you must be added to the [Application Insights Snapshot Debugger](../../role-based-access-control/role-assignments-portal.md) role. Subscription owners can assign this role to individual users or groups for the target **Application Insights Snapshot**. For more information, see [Assign Azure roles by using the Azure portal](../../role-based-access-control/role-assignments-portal.md). Debug snapshots are stored for 15 days. The default data retention policy is set ### Publish symbols -The Snapshot Debugger requires symbol files on the production server to decode variables and to provide a debugging experience in Visual Studio. +The Snapshot Debugger requires symbol files on the production server to: +- Decode variables +- Provide a debugging experience in Visual Studio ++By default, Visual Studio 2017 versions 15.2+ publish symbols for release builds when publishing to App Service. -Version 15.2 (or above) of Visual Studio 2017 publishes symbols for release builds by default when it publishes to App Service. In prior versions, you must add the following line to your publish profile `.pubxml` file so that symbols are published in release mode: +In prior versions, you must add the following line to your publish profile `.pubxml` file so that symbols are published in release mode: ```xml <ExcludeGeneratedDebugSymbol>False</ExcludeGeneratedDebugSymbol> ``` -For Azure Compute and other types, make sure that the symbol files are in the same folder of the main application .dll (typically, `wwwroot/bin`). Or they must be available on the current path. +For Azure Compute and other types, make sure that the symbol files are either: +- In the same folder as the main application `.dll` (typically, `wwwroot/bin`), or +- Available on the current path. For more information on the different symbol options that are available, see the [Visual Studio documentation](/visualstudio/ide/reference/advanced-build-settings-dialog-box-csharp). For best results, we recommend that you use *Full*, *Portable*, or *Embedded*. ### Optimized builds -In some cases, local variables can't be viewed in release builds because of optimizations that are applied by the JIT compiler. +In some cases, local variables can't be viewed in release builds because of optimizations applied by the JIT compiler. -However, in App Service, the Snapshot Collector can deoptimize throwing methods that are part of its collection plan. +However, in App Service, the Snapshot Debugger can deoptimize throwing methods that are part of its collection plan. > [!TIP] > Install the Application Insights Site extension in your instance of App Service to get deoptimization support. ## Release notes for Microsoft.ApplicationInsights.SnapshotCollector -This article contains the release notes for the `Microsoft.ApplicationInsights.SnapshotCollector` NuGet package for .NET applications, which is used by the Application Insights Snapshot Debugger. +This section contains the release notes for the `Microsoft.ApplicationInsights.SnapshotCollector` NuGet package for .NET applications, which is used by the Application Insights Snapshot Debugger. 
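For context while reading these release notes, the following is a minimal sketch of how collection parameters such as `ThresholdForSnapshotting`, `IsEnabledInDeveloperMode`, and `SnapshotsPerTenMinutesLimit` might appear in `ApplicationInsights.config`. The values shown are illustrative defaults, not recommendations:

```xml
<TelemetryProcessors>
  <!-- Registers the Snapshot Collector telemetry processor (illustrative values). -->
  <Add Type="Microsoft.ApplicationInsights.SnapshotCollector.SnapshotCollectorTelemetryProcessor, Microsoft.ApplicationInsights.SnapshotCollector">
    <!-- Set to true to generate snapshots while debugging in Visual Studio. -->
    <IsEnabledInDeveloperMode>false</IsEnabledInDeveloperMode>
    <!-- Number of times the same exception must be thrown before a snapshot is taken. -->
    <ThresholdForSnapshotting>1</ThresholdForSnapshotting>
    <!-- Rate limit: at most one snapshot per ten minutes by default. -->
    <SnapshotsPerTenMinutesLimit>1</SnapshotsPerTenMinutesLimit>
  </Add>
</TelemetryProcessors>
```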
[Learn](./snapshot-debugger.md) more about the Application Insights Snapshot Debugger for .NET applications. Addressed multiple improvements and added support for Azure Active Directory (Az - Added back `MinidumpWithThreadInfo` when writing dumps. - Added `CompatibilityVersion` to improve synchronization between the Snapshot Collector agent and the Snapshot Uploader on breaking changes. - Changed `SnapshotUploader` LogFile naming algorithm to avoid excessive file I/O in App Service.-- Added pid, role name, and process start time to uploaded blob metadata.-- Used `System.Diagnostics.Process` where possible in Snapshot Collector and Snapshot Uploader.+- Added `pid`, `role name`, and `process start time` to uploaded blob metadata. +- Used `System.Diagnostics.Process` in Snapshot Collector and Snapshot Uploader. #### New features Added Azure AD authentication to `SnapshotCollector`. To learn more about Azure AD authentication in Application Insights, see [Azure AD authentication for Application Insights](../app/azure-ad-authentication.md). Switched to using `HttpClient` for all targets except `net45` because `WebReques - Deoptimization support (via ReJIT on attach) for .NET Core 3.0 applications. - Added symbols to NuGet package. - Set more metadata when you upload minidumps.-- Added an `Initialized` property to `SnapshotCollectorTelemetryProcessor`. It's a `CancellationToken`, which is canceled when the Snapshot Collector is completely initialized and connected to the service endpoint.+- Added an `Initialized` property to `SnapshotCollectorTelemetryProcessor`. It's a `CancellationToken`, which is canceled when the Snapshot Collector is initialized and connected to the service endpoint. - Snapshots can now be captured for exceptions in dynamically generated methods. An example is the compiled expression trees generated by Entity Framework queries. #### Bug fixes Switched to using `HttpClient` for all targets except `net45` because `WebReques - Handle `InvalidOperationException` when you're deoptimizing dynamic methods (for example, Entity Framework). ### [1.3.5](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.3.5)-- Added support for sovereign clouds (older versions won't work in sovereign clouds).+- Added support for sovereign clouds (older versions don't work in sovereign clouds). - Adding Snapshot Collector made easier by using `AddSnapshotCollector()`. For more information, see [Enable Snapshot Debugger for .NET apps in Azure App Service](./snapshot-debugger-app-service.md). - Use the FISMA MD5 setting for verifying blob blocks. This setting avoids the default .NET MD5 crypto algorithm, which is unavailable when the OS is set to FIPS-compliant mode.-- Ignore .NET Framework frames when deoptimizing function calls. This behavior can be controlled by the `DeoptimizeIgnoredModules` configuration setting.+- Ignore .NET Framework frames when deoptimizing function calls. Control this behavior with the `DeoptimizeIgnoredModules` configuration setting. - Added the `DeoptimizeMethodCount` configuration setting that allows deoptimization of more than one function call. 
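As a rough illustration of the `AddSnapshotCollector()` registration mentioned in the 1.3.5 notes, the following ASP.NET Core sketch shows one common pattern. The configuration section name and the `AddApplicationInsightsTelemetry()` call (from the `Microsoft.ApplicationInsights.AspNetCore` package) are assumptions for this example:

```csharp
using Microsoft.ApplicationInsights.SnapshotCollector;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    private readonly IConfiguration _configuration;

    public Startup(IConfiguration configuration) => _configuration = configuration;

    public void ConfigureServices(IServiceCollection services)
    {
        // Standard Application Insights registration for ASP.NET Core apps.
        services.AddApplicationInsightsTelemetry();

        // Register the Snapshot Collector and bind its settings from configuration.
        services.AddSnapshotCollector(config =>
            _configuration.Bind(nameof(SnapshotCollectorConfiguration), config));
    }
}
```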
### [1.3.4](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.3.4) Fixed bug that was causing *SnapshotUploader.exe* to stop responding and not upl ### [1.3.0](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.3.0) #### Changes-- For applications that target .NET Framework, Snapshot Collector now depends on Microsoft.ApplicationInsights version 2.3.0 or above.-It used to be 2.2.0 or above. +- For applications that target .NET Framework, Snapshot Collector now depends on Microsoft.ApplicationInsights version 2.3.0 or later. +It used to be 2.2.0 or later. We believe this change won't be an issue for most applications. Let us know if this change prevents you from using the latest Snapshot Collector. - Use exponential back-off delays in the Snapshot Uploader when retrying failed uploads. - Use `ServerTelemetryChannel` (if available) for more reliable reporting of telemetry.-- Use `SdkInternalOperationsMonitor` on the initial connection to the Snapshot Debugger service so that it's ignored by dependency tracking.+- Use `SdkInternalOperationsMonitor` on the initial connection to the Snapshot Debugger service so that dependency tracking ignores it. - Improved telemetry around initial connection to Snapshot Debugger. - Report more telemetry for the: - App Service version. |
azure-monitor | Usage Estimated Costs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/usage-estimated-costs.md | Several other features don't have a direct cost, but instead you pay for the ing | Logs | Ingestion, retention, and export of data in Log Analytics workspaces and legacy Application Insights resources. For most customers, this category typically incurs the bulk of Azure Monitor charges. There's no charge for querying this data except in the case of [Basic Logs](logs/basic-logs-configure.md) or [Archived Logs](logs/data-retention-archive.md).<br><br>Charges for logs can vary significantly depending on the configuration that you choose. For information on how charges for logs data are calculated and the different pricing tiers available, see [Azure Monitor logs pricing details](logs/cost-logs.md). | | Platform logs | Processing of [diagnostic and auditing information](essentials/resource-logs.md) is charged for [certain services](essentials/resource-logs-categories.md#costs) when sent to destinations other than a Log Analytics workspace. There's no direct charge when this data is sent to a Log Analytics workspace, but there's a charge for the workspace data ingestion and collection. | | Metrics | There's no charge for [standard metrics](essentials/metrics-supported.md) collected from Azure resources. There's a cost for collecting [custom metrics](essentials/metrics-custom-overview.md) and for retrieving metrics from the [REST API](essentials/rest-api-walkthrough.md#retrieve-metric-values). |-| Prometheus Metrics | The service is currently free to use, with billing set to begin on 8/1/2023. Pricing for Azure Monitor managed service for Prometheus consists of data ingestion priced at $0.16/10 million samples ingested and metric queries priced at $0.001/10 million samples processed. Data is retained for 18 months at no extra charge. | +| Prometheus Metrics | The service is currently free to use, with billing set to begin on 8/1/2023. Pricing for [Azure Monitor managed service for Prometheus](essentials/prometheus-metrics-overview.md) is based on [data samples ingested](essentials/prometheus-metrics-enable.md) and [query samples processed](essentials/azure-monitor-workspace-manage.md#link-a-grafana-workspace). Data is retained for 18 months at no extra charge. | | Alerts | Charges are based on the type and number of signals used by the alert rule, its frequency, and the type of [notification](alerts/action-groups.md) used in response. For [Log alerts](alerts/alerts-types.md#log-alerts) configured for [at-scale monitoring](alerts/alerts-types.md#splitting-by-dimensions-in-log-alert-rules), the cost also depends on the number of time series created by the dimensions resulting from your query. | | Web tests | There's a cost for [standard web tests](app/availability-standard-tests.md) and [multistep web tests](/previous-versions/azure/azure-monitor/app/availability-multistep) in Application Insights. Multistep web tests have been deprecated. |
azure-netapp-files | Backup Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-introduction.md | Last updated 06/01/2023 + # Understand Azure NetApp Files backup Azure NetApp Files backup expands the data protection capabilities of Azure NetApp Files by providing a fully managed backup solution for long-term recovery, archive, and compliance. Backups created by the service are stored in Azure storage, independent of volume snapshots that are available for near-term recovery or cloning. Backups taken by the service can be restored to new Azure NetApp Files volumes within the region. Azure NetApp Files backup supports both policy-based (scheduled) backups and manual (on-demand) backups. For more information, see [How Azure NetApp Files snapshots work](snapshots-introduction.md). If you choose to restore a backup of, for example, 600 GiB to a new volume, you' ## Next steps -* [Requirements and considerations for Azure NetApp Files](backup-requirements-considerations.md) +* [Requirements and considerations for Azure NetApp Files backup](backup-requirements-considerations.md) * [Resource limits for Azure NetApp Files](azure-netapp-files-resource-limits.md) * [Configure policy-based backups](backup-configure-policy-based.md) * [Configure manual backups](backup-configure-manual.md) If you choose to restore a backup of, for example, 600 GiB to a new volume, you' * [Volume backup metrics](azure-netapp-files-metrics.md#volume-backup-metrics) * [Azure NetApp Files backup FAQs](faq-backup.md) * [How Azure NetApp Files snapshots work](snapshots-introduction.md)++ |
azure-netapp-files | Backup Restore New Volume | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-restore-new-volume.md | Restoring a backup creates a new volume with the same protocol type. This articl * Restoring a backup to a new volume is not dependent on the networking type used by the source volume. You can restore the backup of a volume configured with Basic networking to a volume configured with Standard networking and vice versa. -> [!CAUTION] +* See [Restoring volume backups from vaulted snapshots](snapshots-introduction.md#restoring-volume-backups-from-vaulted-snapshots) for more information. +++> [!IMPORTANT] > Running multiple concurrent volume restores using Azure NetApp Files backup may increase the time it takes for each individual, in-progress restore to complete. As such, if time is a factor for you, you should prioritize and sequence the most important volume restores and wait until those restores are complete before starting other, lower-priority volume restores. See [Requirements and considerations for Azure NetApp Files backup](backup-requirements-considerations.md) for more considerations about using Azure NetApp Files backup. See [Requirements and considerations for Azure NetApp Files backup](backup-requi * [Delete backups of a volume](backup-delete.md) * [Volume backup metrics](azure-netapp-files-metrics.md#volume-backup-metrics) * [Azure NetApp Files backup FAQs](faq-backup.md)+* [How Azure NetApp Files snapshots work](snapshots-introduction.md) |
azure-netapp-files | Double Encryption At Rest | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/double-encryption-at-rest.md | Azure NetApp Files double encryption at rest is supported for the following regi * For the cost of using Azure NetApp Files double encryption at rest, see the [Azure NetApp Files pricing](https://azure.microsoft.com/pricing/details/netapp/) page. * You can't convert volumes in a single-encryption capacity pool to use double encryption at rest. However, you can copy data in a single-encryption volume to a volume created in a capacity pool that is configured with double encryption. * For capacity pools created with double encryption at rest, volume names in the capacity pool are visible only to volume owners for maximum security.-* Using double encryption at rest might have performance impacts based on the workload type and frequency. The performance impact can range from a minimal 1-2% to possibly 15% or higher, depending on the workload profile. +* Using double encryption at rest might have performance impacts based on the workload type and frequency. The performance impact can range from a minimal 1-2% to a higher percentage, depending on the workload profile. ## Next steps |
azure-portal | Azure Portal Dashboard Share Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/azure-portal-dashboard-share-access.md | Title: Share Azure portal dashboards by using Azure role-based access control description: This article explains how to share a dashboard in the Azure portal by using Azure role-based access control. Previously updated : 03/16/2023 Last updated : 07/10/2023 # Share Azure dashboards by using Azure role-based access control For example, any users who have the [Owner](/azure/role-based-access-control/bui Users with the [Reader](/azure/role-based-access-control/built-in-roles#reader) role for the subscription (or a custom role with `Microsoft.Portal/Dashboards/Read` permission) can list and view dashboards within that subscription, but they can't modify or delete them. These users are able to make private copies of dashboards for themselves. They can also make local edits to a published dashboard for their own use, such as when troubleshooting an issue, but they can't publish those changes back to the server. -To expand access to a dashboard beyond the access granted at the subscription level, assign permissions to an individual dashboard, or to a resource group that contains several dashboards. For example, if a user should have limited permissions across the subscription, but needs to be able to edit one particular dashboard, you can assign a different role with more permissions (such as [Contributor](/azure/role-based-access-control/built-in-roles#contributor)) for that dashboard only. +To expand access to a dashboard beyond the access granted at the subscription level, you can assign permissions to an individual dashboard, or to a resource group that contains several dashboards. For example, if a user should have limited permissions across the subscription, but needs to be able to edit one particular dashboard, you can assign a different role with more permissions (such as [Contributor](/azure/role-based-access-control/built-in-roles#contributor)) for that dashboard only. ++> [!IMPORTANT] +> Since individual tiles within a dashboard can enforce their own access control requirements, some users with access to view or edit a dashboard may not be able to see information within specific tiles. To ensure that users can see data within a certain tile, be sure that they have the appropriate permissions for the underlying resources accessed by that tile. ## Publish a dashboard For each dashboard that you have published, you can assign Azure RBAC built-in r 1. Select **Review + assign** to complete the assignment. +> [!TIP] +> As noted above, individual tiles within a dashboard can enforce their own access control requirements based on the resources that the tile displays. If users need to see data for a specific tile, be sure that they have the appropriate permissions for the underlying resources accessed by that tile. + ## Next steps * View the list of [Azure built-in roles](../role-based-access-control/built-in-roles.md). |
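To make the per-dashboard role assignment described above concrete, the following is a hedged Azure CLI sketch. All names are placeholders; the scope uses the `Microsoft.Portal/dashboards` resource type that backs published dashboards:

```azurecli
az role assignment create \
    --assignee "user@contoso.com" \
    --role "Contributor" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Portal/dashboards/<dashboard-name>"
```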
azure-vmware | Configure Port Mirroring Azure Vmware Solution | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-port-mirroring-azure-vmware-solution.md | After deploying Azure VMware Solution, you can configure port mirroring from the In this how-to, you'll configure port mirroring to monitor network traffic, which involves forwarding a copy of each packet from one network switch port to another. >[!IMPORTANT]- >Port Mirroring is intended to be used as a temporary investigative tool and not a permanent network data collection feature. This is because NSX-T Data Center does not have the resoures to port mirror all traffic continuously. The IPFIX feature should be used if a continuous meta-data network flow logging solution is required. + >Port Mirroring is intended to be used as a temporary investigative tool and not a permanent network data collection feature. This is because NSX-T Data Center does not have the resources to port mirror all traffic continuously. The IPFIX feature should be used if a continuous meta-data network flow logging solution is required. ## Prerequisites |
batch | Simplified Node Communication Pool No Public Ip | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/simplified-node-communication-pool-no-public-ip.md | client-request-id: 00000000-0000-0000-0000-000000000000 "sku": "22_04-lts" }, "nodeAgentSKUId": "batch.node.ubuntu 22.04"- } + }, "networkConfiguration": { "subnetId": "/subscriptions/<your_subscription_id>/resourceGroups/<your_resource_group>/providers/Microsoft.Network/virtualNetworks/<your_vnet_name>/subnets/<your_subnet_name>", "publicIPAddressConfiguration": { |
chaos-studio | Chaos Studio Permissions Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-permissions-security.md | Chaos Studio has three levels of security to help you control how and when fault * First, a chaos experiment is an Azure resource that's deployed to a region, resource group, and subscription. Users must have appropriate Azure Resource Manager permissions to create, update, start, cancel, delete, or view an experiment. - Each permission is a Resource Manager operation that can be granularly assigned to an identity or assigned as part of a role with wildcard permissions. For example, the Contributor role in Azure has */write permission at the assigned scope, which includes `Microsoft.Chaos/experiments/write permission`. + Each permission is a Resource Manager operation that can be granularly assigned to an identity or assigned as part of a role with wildcard permissions. For example, the Contributor role in Azure has `*/write` permission at the assigned scope, which includes `Microsoft.Chaos/experiments/write` permission. When you attempt to control the ability to inject faults against a resource, the most important operation to restrict is `Microsoft.Chaos/experiments/start/action`. This operation starts a chaos experiment that injects faults. To assign these permissions granularly, you can [create a custom role](../role-b All user interactions with Chaos Studio happen through Azure Resource Manager. If a user starts an experiment, the experiment might interact with endpoints other than Resource Manager, depending on the fault: -* **Service-direct faults**: Most service-direct faults are executed through Resource Manager. Target resources don't require any allowlisted network endpoints. -* **Service-direct AKS Chaos Mesh faults**: Service-direct faults for Azure Kubernetes Service (AKS) that use Chaos Mesh require access that the AKS cluster have a publicly exposed Kubernetes API server. To learn how to limit AKS network access to a set of IP ranges, see [Secure access to the API server using authorized IP address ranges in AKS](../aks/api-server-authorized-ip-ranges.md). -* **Agent-based faults**: Agent-based faults require agent access to the Chaos Studio agent service. A VM or virtual machine scale set must have outbound access to the agent service endpoint for the agent to connect successfully. The agent service endpoint is `https://acs-prod-<region>.chaosagent.trafficmanager.net`. You must replace the `<region>` placeholder with the region where your VM is deployed. An example is `https://acs-prod-eastus.chaosagent.trafficmanager.net` for a VM in East US. +* **Service-direct faults**: Most service-direct faults are executed through Azure Resource Manager and don't require any allowlisted network endpoints. +* **Service-direct AKS Chaos Mesh faults:** Service-direct faults for Azure Kubernetes Service that use Chaos Mesh require access to the AKS cluster's Kubernetes API server. + * [Learn how to limit AKS network access to a set of IP ranges here](../aks/api-server-authorized-ip-ranges.md). You can obtain Chaos Studio's IP ranges by querying the `ChaosStudio` [service tag with the Service Tag Discovery API or downloadable JSON files](../virtual-network/service-tags-overview.md). + * Currently, Chaos Studio can't execute Chaos Mesh faults if the AKS cluster has [local accounts disabled](../aks/manage-local-accounts-managed-azure-ad.md). 
+* **Agent-based faults**: To use agent-based faults, the agent needs access to the Chaos Studio agent service. A VM or virtual machine scale set must have outbound access to the agent service endpoint for the agent to connect successfully. The agent service endpoint is `https://acs-prod-<region>.chaosagent.trafficmanager.net`. You must replace the `<region>` placeholder with the region where your VM is deployed. An example is `https://acs-prod-eastus.chaosagent.trafficmanager.net` for a VM in East US. Chaos Studio doesn't support Azure Private Link for agent-based scenarios. ## Service tags A [service tag](../virtual-network/service-tags-overview.md) is a group of IP address prefixes that can be assigned to inbound and outbound rules for network security groups. It automatically handles updates to the group of IP address prefixes without any intervention. -You can use service tags to explicitly allow inbound traffic from Chaos Studio without the need to know the IP addresses of the platform. Currently, you can enable service tags via PowerShell. Support will soon be added to the Chaos Studio user interface. +You can use service tags to explicitly allow inbound traffic from Chaos Studio without the need to know the IP addresses of the platform. Chaos Studio's service tag is `ChaosStudio`. A limitation of service tags is that they can only be used with applications that have a public IP address. If a resource only has a private IP address, service tags can't route traffic to it. |
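As an example of allowing inbound traffic with the `ChaosStudio` service tag, the following Azure CLI sketch creates an NSG rule. The names and priority are placeholders, and the wide-open protocol and destination values are illustrative; scope them to what your scenario actually needs:

```azurecli
az network nsg rule create \
    --resource-group <resource-group> \
    --nsg-name <nsg-name> \
    --name AllowChaosStudioInbound \
    --priority 300 \
    --direction Inbound \
    --access Allow \
    --protocol '*' \
    --source-address-prefixes ChaosStudio \
    --destination-address-prefixes '*' \
    --destination-port-ranges '*'
```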
chaos-studio | Chaos Studio Tutorial Aks Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-tutorial-aks-cli.md | Chaos Studio uses [Chaos Mesh](https://chaos-mesh.org/), a free, open-source cha - An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] - An AKS cluster with Linux node pools. If you don't have an AKS cluster, see the AKS quickstart that uses the [Azure CLI](../aks/learn/quick-kubernetes-deploy-cli.md), [Azure PowerShell](../aks/learn/quick-kubernetes-deploy-powershell.md), or the [Azure portal](../aks/learn/quick-kubernetes-deploy-portal.md). -> [!WARNING] -> AKS Chaos Mesh faults are only supported on Linux node pools. +## Limitations ++* You can use Chaos Mesh faults with private clusters by configuring [VNet Injection in Chaos Studio](chaos-studio-private-networking.md). Any commands issued to the private cluster, including the steps in this article to set up Chaos Mesh, need to follow the [private cluster guidance](../aks/private-clusters.md). Recommended methods include connecting from a VM in the same virtual network or using the [AKS command invoke](../aks/command-invoke.md) feature. +* AKS Chaos Mesh faults are only supported on Linux node pools. +* Currently, Chaos Mesh faults don't work if the AKS cluster has [local accounts disabled](../aks/manage-local-accounts-managed-azure-ad.md). +* If your AKS cluster is configured to only allow authorized IP ranges, you need to allow Chaos Studio's IP ranges. You can find them by querying the `ChaosStudio` [service tag with the Service Tag Discovery API or downloadable JSON files](../virtual-network/service-tags-overview.md). ## Open Azure Cloud Shell Now you can create your experiment. A chaos experiment defines the actions you w namespaces: - default ```- 1. Remove any YAML outside of the `spec`, including the spec property name. Remove the indentation of the spec details. + 1. Remove any YAML outside of the `spec`, including the spec property name. Remove the indentation of the spec details. The `duration` parameter isn't necessary, but is used if provided. In this case, remove it. ```yaml action: pod-failure mode: all- duration: '600s' selector: namespaces: - default Now you can create your experiment. A chaos experiment defines the actions you w 1. Use a [YAML-to-JSON converter like this one](https://www.convertjson.com/yaml-to-json.htm) to convert the Chaos Mesh YAML to JSON and minimize it. ```json- {"action":"pod-failure","mode":"all","duration":"600s","selector":{"namespaces":["default"]}} + {"action":"pod-failure","mode":"all","selector":{"namespaces":["default"]}} ``` 1. Use a [JSON string escape tool like this one](https://www.freeformatter.com/json-escape.html) to escape the JSON spec. ```json- {\"action\":\"pod-failure\",\"mode\":\"all\",\"duration\":\"600s\",\"selector\":{\"namespaces\":[\"default\"]}} + {\"action\":\"pod-failure\",\"mode\":\"all\",\"selector\":{\"namespaces\":[\"default\"]}} ``` 1. Create your experiment JSON by starting with the following JSON sample. Modify the JSON to correspond to the experiment you want to run by using the [Create Experiment API](/rest/api/chaosstudio/experiments/create-or-update), the [fault library](chaos-studio-fault-library.md), and the `jsonSpec` created in the previous step. Now you can create your experiment. 
A chaos experiment defines the actions you w "parameters": [ { "key": "jsonSpec",- "value": "{\"action\":\"pod-failure\",\"mode\":\"all\",\"duration\":\"600s\",\"selector\":{\"namespaces\":[\"default\"]}}" + "value": "{\"action\":\"pod-failure\",\"mode\":\"all\",\"selector\":{\"namespaces\":[\"default\"]}}" } ], "name": "urn:csci:microsoft:azureKubernetesServiceChaosMesh:podChaos/2.1" Now you can create your experiment. A chaos experiment defines the actions you w "targets": [ { "type": "ChaosTarget",- "id": "/subscriptions/b65f2fec-d6b2-4edd-817e-9339d8c01dc4/resourceGroups/myRG/providers/Microsoft.ContainerService/managedClusters/myCluster/providers/Microsoft.Chaos/targets/Microsoft-AzureKubernetesServiceChaosMesh" + "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myRG/providers/Microsoft.ContainerService/managedClusters/myCluster/providers/Microsoft.Chaos/targets/Microsoft-AzureKubernetesServiceChaosMesh" } ] } |
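Once the experiment JSON is assembled, one way to submit it is with `az rest` against the Create Experiment API, as in this sketch. The IDs are placeholders, and `<api-version>` must be replaced with a current `Microsoft.Chaos` API version:

```azurecli
az rest --method put \
    --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Chaos/experiments/<experiment-name>?api-version=<api-version>" \
    --body @experiment.json
```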
chaos-studio | Chaos Studio Tutorial Aks Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-tutorial-aks-portal.md | Chaos Studio uses [Chaos Mesh](https://chaos-mesh.org/), a free, open-source cha - An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] - An AKS cluster with a Linux node pool. If you don't have an AKS cluster, see the AKS quickstart that uses the [Azure CLI](../aks/learn/quick-kubernetes-deploy-cli.md), [Azure PowerShell](../aks/learn/quick-kubernetes-deploy-powershell.md), or the [Azure portal](../aks/learn/quick-kubernetes-deploy-portal.md). -> [!WARNING] -> AKS Chaos Mesh faults are only supported on Linux node pools. - ## Limitations -Previously, Chaos Mesh faults didn't work with private clusters. You can now use Chaos Mesh faults with private clusters by configuring [virtual network injection in Chaos Studio](chaos-studio-private-networking.md). +* You can use Chaos Mesh faults with private clusters by configuring [VNet Injection in Chaos Studio](chaos-studio-private-networking.md). Any commands issued to the private cluster, including the steps in this article to set up Chaos Mesh, need to follow the [private cluster guidance](../aks/private-clusters.md). Recommended methods include connecting from a VM in the same virtual network or using the [AKS command invoke](../aks/command-invoke.md) feature. +* AKS Chaos Mesh faults are only supported on Linux node pools. +* Currently, Chaos Mesh faults don't work if the AKS cluster has [local accounts disabled](../aks/manage-local-accounts-managed-azure-ad.md). +* If your AKS cluster is configured to only allow authorized IP ranges, you need to allow Chaos Studio's IP ranges. You can find them by querying the `ChaosStudio` [service tag with the Service Tag Discovery API or downloadable JSON files](../virtual-network/service-tags-overview.md). ## Set up Chaos Mesh on your AKS cluster Now you can create your experiment. A chaos experiment defines the actions you w namespaces: - default ```- 1. Remove any YAML outside of the `spec` (including the spec property name) and remove the indentation of the spec details. + 1. Remove any YAML outside of the `spec` (including the spec property name) and remove the indentation of the spec details. The `duration` parameter isn't necessary, but is used if provided. In this case, remove it. ```yaml action: pod-failure mode: all- duration: '600s' selector: namespaces: - default Now you can create your experiment. A chaos experiment defines the actions you w 1. Use a [YAML-to-JSON converter like this one](https://www.convertjson.com/yaml-to-json.htm) to convert the Chaos Mesh YAML to JSON and minimize it. ```json- {"action":"pod-failure","mode":"all","duration":"600s","selector":{"namespaces":["default"]}} + {"action":"pod-failure","mode":"all","selector":{"namespaces":["default"]}} ``` 1. Paste the minimized JSON into the **jsonSpec** field in the portal. |
cognitive-services | Batch Transcription Get | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/batch-transcription-get.md | You should receive a response body in the following format: }, "properties": { "diarizationEnabled": false,- "wordLevelTimestampsEnabled": true, - "displayFormWordLevelTimestampsEnabled": false, + "wordLevelTimestampsEnabled": false, + "displayFormWordLevelTimestampsEnabled": true, "channels": [ 0, 1 You should receive a response body in the following format: }, "properties": { "diarizationEnabled": false,- "wordLevelTimestampsEnabled": true, - "displayFormWordLevelTimestampsEnabled": false, + "wordLevelTimestampsEnabled": false, + "displayFormWordLevelTimestampsEnabled": true, "channels": [ 0, 1 The contents of each transcription result file are formatted as JSON, as shown i ```json { "source": "...",- "timestamp": "2022-09-16T09:30:21Z", - "durationInTicks": 41200000, - "duration": "PT4.12S", + "timestamp": "2023-07-10T14:28:16Z", + "durationInTicks": 25800000, + "duration": "PT2.58S", "combinedRecognizedPhrases": [ { "channel": 0, The contents of each transcription result file are formatted as JSON, as shown i "recognizedPhrases": [ { "recognitionStatus": "Success",- "speaker": 1, "channel": 0,- "offset": "PT0.07S", - "duration": "PT1.59S", - "offsetInTicks": 700000.0, - "durationInTicks": 15900000.0, -+ "offset": "PT0.76S", + "duration": "PT1.32S", + "offsetInTicks": 7600000.0, + "durationInTicks": 13200000.0, "nBest": [ {- "confidence": 0.898652852, + "confidence": 0.5643338, "lexical": "hello world", "itn": "hello world", "maskedITN": "hello world", "display": "Hello world.",-- "words": [ + "displayWords": [ {- "word": "hello", - "offset": "PT0.09S", - "duration": "PT0.48S", - "offsetInTicks": 900000.0, - "durationInTicks": 4800000.0, - "confidence": 0.987572 + "displayText": "Hello", + "offset": "PT0.76S", + "duration": "PT0.76S", + "offsetInTicks": 7600000.0, + "durationInTicks": 7600000.0 }, {- "word": "world", - "offset": "PT0.59S", - "duration": "PT0.16S", - "offsetInTicks": 5900000.0, - "durationInTicks": 1600000.0, - "confidence": 0.906032 + "displayText": "world.", + "offset": "PT1.52S", + "duration": "PT0.56S", + "offsetInTicks": 15200000.0, + "durationInTicks": 5600000.0 } ]+ }, + { + "confidence": 0.1769063, + "lexical": "helloworld", + "itn": "helloworld", + "maskedITN": "helloworld", + "display": "helloworld" + }, + { + "confidence": 0.49964225, + "lexical": "hello worlds", + "itn": "hello worlds", + "maskedITN": "hello worlds", + "display": "hello worlds" + }, + { + "confidence": 0.4995761, + "lexical": "hello worm", + "itn": "hello worm", + "maskedITN": "hello worm", + "display": "hello worm" + }, + { + "confidence": 0.49418187, + "lexical": "hello word", + "itn": "hello word", + "maskedITN": "hello word", + "display": "hello word" } ] } Depending in part on the request parameters set when you created the transcripti |`combinedRecognizedPhrases`|The concatenated results of all phrases for the channel.| |`confidence`|The confidence value for the recognition.| |`display`|The display form of the recognized text. Added punctuation and capitalization are included.|-|`displayPhraseElements`|A list of results with display text for each word of the phrase. 
The `displayFormWordLevelTimestampsEnabled` request property must be set to `true`, otherwise this property is not present.<br/><br/>**Note**: This property is only available with Speech to text REST API version 3.1.| +|`displayWords`|The timestamps for each word of the transcription. The `displayFormWordLevelTimestampsEnabled` request property must be set to `true`, otherwise this property is not present.<br/><br/>**Note**: This property is only available with Speech to text REST API version 3.1.| |`duration`|The audio duration. The value is an ISO 8601 encoded duration.| |`durationInTicks`|The audio duration in ticks (1 tick is 100 nanoseconds).| |`itn`|The inverse text normalized (ITN) form of the recognized text. Abbreviations such as "Doctor Smith" to "Dr Smith", phone numbers, and other transformations are applied.| |
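Because one tick is 100 nanoseconds, the same unit as .NET ticks, the `durationInTicks` and `offsetInTicks` fields convert directly to time spans. A small illustrative C# snippet using the sample value above:

```csharp
using System;

class TickConversion
{
    static void Main()
    {
        // durationInTicks from the sample transcription result.
        // One tick is 100 nanoseconds, which matches .NET's TimeSpan tick unit.
        TimeSpan duration = TimeSpan.FromTicks(25_800_000);

        Console.WriteLine(duration.TotalSeconds); // 2.58, i.e. the ISO 8601 duration "PT2.58S"
    }
}
```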
cognitive-services | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/overview.md | In our quickstart, you learn how to rapidly get started using Document Translati ## Supported document formats +The [Get supported document formats method](reference/get-supported-document-formats.md) returns a list of document formats supported by the Document Translation service. The list includes the common file extension and the content-type if you're using the upload API. + Document Translation supports the following document file types: | File type| File extension|Description| |
communication-services | Sdk Options | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/sdk-options.md | Publishing locations for individual SDK packages are detailed below. #### .NET -Except for Calling, Communication Services packages target .NET Standard 2.0, which supports the platforms listed below. --Support via .NET Framework 4.6.1 --- Windows 10, 8.1, 8 and 7-- Windows Server 2012 R2, 2012 and 2008 R2 SP1--Support via .NET Core 2.0: --- Windows 10 (1607+), 7 SP1+, 8.1-- Windows Server 2008 R2 SP1+-- Max OS X 10.12+-- Linux multiple versions/distributions-- UWP 10.0.16299 (RS3) September 2017-- Unity 2018.1-- Mono 5.4-- Xamarin iOS 10.14-- Xamarin Mac 3.8--The Calling package supports UWP apps build with .NET Native or C++/WinRT on: --- Windows 10 10.0.17763-- Windows Server 2019 10.0.17763+Calling supports the platforms listed below. ++- UWP with .NET Native or C++/WinRT + - Windows 10/11 10.0.17763 - 10.0.22621.0 + - Windows Server 2019/2022 10.0.17763 - 10.0.22621.0 +- WinUI3 with .NET 6 + - Windows 10/11 10.0.17763.0 - net6.0-windows10.0.22621.0 + - Windows Server 2019/2022 10.0.17763.0 - net6.0-windows10.0.22621.0 + +All other Communication Services packages target .NET Standard 2.0, which supports the platforms listed below. ++- Support via .NET Framework 4.6.1 + - Windows 10, 8.1, 8 and 7 + - Windows Server 2012 R2, 2012 and 2008 R2 SP1 +- Support via .NET Core 2.0: + - Windows 10 (1607+), 7 SP1+, 8.1 + - Windows Server 2008 R2 SP1+ + - Mac OS X 10.12+ + - Linux multiple versions/distributions + - UWP 10.0.16299 (RS3) September 2017 + - Unity 2018.1 + - Mono 5.4 + - Xamarin iOS 10.14 + - Xamarin Mac 3.8 ## REST APIs |
communication-services | Apply For Short Code | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/sms/apply-for-short-code.md | To begin provisioning a short code, go to your Communication Services resource o :::image type="content" source="./media/apply-for-short-code/manage-phone-azure-portal-start1.png"alt-text="Screenshot showing a Communication Services resource's main page."::: ## Apply for a short code-Navigate to the Short Codes blade in the resource menu and click on "Get" button to launch the short code program brief application wizard. For detailed guidance on how to fill out the program brief application check the [program brief filling guidelines](../../concepts/sms/program-brief-guidelines.md). +Navigate to the Short Codes blade in the resource menu and click the "Get" button to launch the short code program brief application wizard. For detailed guidance on how to fill out the program brief application, check the [program brief filling guidelines](../../concepts/sms/program-brief-guidelines.md). This quickstart covers the application process for a US short code. ++For a **UK short code**, fill out the [UK short code application form](https://forms.office.com/r/VtfKFenZLF). ++For a **CA short code**, download and fill out the [CA short code application form](https://www.txt.ca/wp-content/uploads/2022/10/DownloadShortcodeApplicationPdf_100522.pdf), then email the form to *acstns@microsoft.com*. Include "CA Short Code application" in the subject line, and include your subscription ID and Azure Communication Services resource ID in the body of the email. ## Pre-requisites The wizard on the short codes blade will walk you through a series of questions about the program as well as a description of content which will be shared with the carriers for them to review and approve your short code program brief. Review the pre-requisites tab for a list of the program content deliverables you'll need to attach with your application. Azure communication service offers an opt-out management service for short codes :::image type="content" source="./media/apply-for-short-code/templates-03.png" alt-text="Screenshot showing template 3 details section."::: ### Review -Once completed, review the short code request details, fees, SMS laws and industry standards and submit the completed application through the Azure Portal. +Once completed, review the short code request details, fees, SMS laws and industry standards and submit the completed application through the Azure portal. :::image type="content" source="./media/apply-for-short-code/review.png" alt-text="Screenshot showing template details section."::: This program brief will now be automatically sent to the Azure Communication Ser The following documents may be interesting to you: -- Familiarize yourself with the [SMS SDK](../../concepts/sms/sdk-features.md)+- Familiarize yourself with the [SMS SDK](../../concepts/sms/sdk-features.md) |
communications-gateway | Prepare For Live Traffic | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/prepare-for-live-traffic.md | If you don't have the API Bridge, you must provide your onboarding team with pro Your onboarding team must arrange synthetic testing of your deployment. This synthetic testing is a series of automated tests lasting at least seven days. It verifies the most important metrics for quality of service and availability. +After launch, synthetic traffic will be sent through your deployment using your test numbers. This traffic is used to continuously check the health of your deployment. + ## 11. Schedule launch Your launch date is the date that you'll appear to enterprises in the Teams Admin Center. Your onboarding team must arrange this date by making a request to Microsoft Teams. |
communications-gateway | Prepare To Deploy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/prepare-to-deploy.md | You must own globally routable numbers that you can use for testing, as follows. |Type of testing|Numbers required | |||-|Automated validation testing by Microsoft Teams test suites|Minimum: 3. Recommended: 6 (to run tests simultaneously).| +|Automated validation testing by Microsoft Teams test suites|Minimum: 6. Recommended: 9 (to run tests simultaneously).| |Manual test calls made by you and/or Microsoft staff during integration testing |Minimum: 1| +After deployment, the automated validation testing numbers use synthetic traffic to continuously check the health of your deployment. + We strongly recommend that you have a support plan that includes technical support, such as [Microsoft Unified Support](https://www.microsoft.com/en-us/unifiedsupport/overview) or [Premier Support](https://www.microsoft.com/en-us/unifiedsupport/premier). ## 1. Add the Project Synergy application to your Azure tenancy Collect all of the values in the following table for all the test lines that you |The purpose of the test line: **Manual** (for manual test calls by you and/or Microsoft staff during integration testing) or **Automated** (for automated validation with Microsoft Teams test suites).|**Testing purpose**| > [!IMPORTANT]-> You must configure at least three automated test lines. We recommend six automated test lines (to allow simultaneous tests). +> You must configure at least six automated test lines. We recommend nine automated test lines (to allow simultaneous tests). ## 7. Decide if you want tags |
container-apps | Dapr Component Connection | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/dapr-component-connection.md | -Using a combination of [Service Connector](../service-connector/overview.md) and [Dapr](https://docs.dapr.io/), you can author Dapr components via an improved component creation feature in the Azure Container Apps portal. +You can easily connect Dapr APIs to backing Azure services using a combination of [Service Connector](../service-connector/overview.md) and [Dapr](https://docs.dapr.io/). This feature creates Dapr components on your behalf with valid metadata and authenticated identity to access the Azure service. -With this new component creation feature, you no longer need to know or remember the Dapr open source metadata concepts. Instead, entering the component information in the easy component creation pane automatically maps your entries to the required component metadata. --By managing component creation for you, this feature: -- Simplifies the process for developers -- Reduces the likelihood for misconfiguration--This experience makes authentication easier. When using Managed Identity, Azure Container Apps, Dapr, and Service Connector ensure the selected identification is assigned to all containers apps in scope and target services. --This guide demonstrates creating a Dapr component by: -- Selecting pub/sub as component type -- Specifying Azure Service Bus as the component-- Providing required metadata to help the tool map to the right Azure Service Bus-- Providing optional metadata to customize the component+In this guide, you'll connect the Dapr pub/sub API to Azure Service Bus by: +> [!div class="checklist"] +> - Selecting pub/sub as the API +> - Specifying Azure Service Bus as the service, along with required properties like namespace, queue name, and identity +> - Using your Azure Service Bus pub/sub component ## Prerequisites - An Azure account with an active subscription. [Create a free Azure account](https://azure.microsoft.com/free). Once the component has been added to the Container Apps environment, the portal You can then check the YAML/Bicep artifact into a repo and recreate it outside of the portal experience. +> [!NOTE] +> When using Managed Identity, the selected identity is assigned to all container apps in scope and to the target services. + ## Manage Dapr components 1. In your Container Apps environment, go to **Settings** > **Dapr components**. You can then check the YAML/Bicep artifact into a repo and recreate it outside o :::image type="content" source="media/dapr-component-connection/manage-dapr-component.png" alt-text="Screenshot of the Azure platform showing existing Dapr Components."::: ++ ## Next steps Learn more about: |
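The YAML artifact that the portal generates follows the Container Apps Dapr component schema. As a hedged sketch only (the metadata keys and names below are illustrative, so check the generated artifact for the exact fields), a Service Bus pub/sub component using managed identity might look like this:

```yaml
# Illustrative Container Apps Dapr component for Azure Service Bus pub/sub.
componentType: pubsub.azure.servicebus.topics
version: v1
metadata:
  - name: namespaceName
    value: <your-namespace>.servicebus.windows.net
scopes:
  - <your-container-app-name>
```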
container-apps | Dapr Github Actions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/dapr-github-actions.md | Title: Tutorial - Deploy a Dapr application with GitHub Actions for Azure Container Apps description: Learn about multiple revision management by deploying a Dapr application with GitHub Actions and Azure Container Apps. --++ Previously updated : 09/02/2022- Last updated : 07/10/2023+ # Tutorial: Deploy a Dapr application with GitHub Actions for Azure Container Apps [GitHub Actions](https://docs.github.com/en/actions) gives you the flexibility to build an automated software development lifecycle workflow. In this tutorial, you'll see how revision-scope changes to a Container App using [Dapr](https://docs.dapr.io) can be deployed using a GitHub Actions workflow. -Dapr is an open source project that helps developers with the inherent challenges presented by distributed applications, such as state management and service invocation. Azure Container Apps provides a managed experience of the core Dapr APIs. +Dapr is an open source project that helps developers with the inherent challenges presented by distributed applications, such as state management and service invocation. [Azure Container Apps provides a managed experience of the core Dapr APIs.](./dapr-overview.md) -In this tutorial, you'll: +In this tutorial, you: > [!div class="checklist"]-> - Configure a GitHub Actions workflow for deploying the end-to-end solution to Azure Container Apps. +> - Configure a GitHub Actions workflow for deploying the end-to-end Dapr solution to Azure Container Apps. > - Modify the source code with a [revision-scope change](revisions.md#revision-scope-changes) to trigger the Build and Deploy GitHub workflow. > - Learn how revisions are created for container apps in multi-revision mode. -The [sample solution](https://github.com/Azure-Samples/container-apps-store-api-microservice) consists of three Dapr-enabled microservices and uses Dapr APIs for service-to-service communication and state management. +The [sample solution](https://github.com/Azure-Samples/container-apps-store-api-microservice): +- Consists of three Dapr-enabled microservices +- Uses Dapr APIs for service-to-service communication and state management :::image type="content" source="media/dapr-github-actions/arch.png" alt-text="Diagram demonstrating microservices app."::: The [sample solution](https://github.com/Azure-Samples/container-apps-store-api- ## Prerequisites -- An Azure account with an active subscription.- - [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). +- [An Azure account with an active subscription.](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - Contributor or Owner permissions on the Azure subscription.-- A GitHub account. - - If you don't have one, sign up for [free](https://github.com/join). +- [A GitHub account](https://github.com/join). - Install [Git](https://github.com/git-guides/install-git). - Install the [Azure CLI](/cli/azure/install-azure-cli). After the workflow successfully completes, verify the application is running in ## Modify the source code to trigger a new revision -Container Apps run in single-revision mode by default. In the Container Apps bicep module, we explicitly set the revision mode to multiple. This means that once the source code is changed and committed, the GitHub build/deploy workflow builds and pushes a new container image to GitHub Container Registry. 
Changing the container image is considered a [revision-scope](revisions.md#revision-scope-changes) change and results in a new container app revision. +Container Apps run in single-revision mode by default. In the Container Apps bicep module, the revision mode is explicitly set to "multiple". Multiple revision mode means that once the source code is changed and committed, the GitHub build/deploy workflow builds and pushes a new container image to GitHub Container Registry. Changing the container image is considered a [revision-scope](revisions.md#revision-scope-changes) change and results in a new container app revision. > [!NOTE] > [Application-scope](revisions.md#application-scope-changes) changes do not create a new revision. |
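For comparison, a revision-scope change can also be made directly from the Azure CLI. This sketch, in which all names are placeholders, updates the container image and so results in a new revision when the app runs in multiple-revision mode:

```azurecli
az containerapp update \
    --name <container-app-name> \
    --resource-group <resource-group> \
    --image ghcr.io/<owner>/<image>:<tag>
```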
container-instances | Container Instances Gpu | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-gpu.md | Support will be added for additional regions over time. To use GPUs in a container instance, specify a *GPU resource* with the following information: * **Count** - The number of GPUs: **1**, **2**, or **4**.-* **SKU** - The GPU SKU: **K80**, **P100**, or **V100**. Each SKU maps to the NVIDIA Tesla GPU in one of the following Azure GPU-enabled VM families: +* **SKU** - The GPU SKU: **V100**. Each SKU maps to the NVIDIA Tesla GPU in one of the following Azure GPU-enabled VM families: | SKU | VM family | | | |- | K80 | [NC](../virtual-machines/nc-series.md) | - | P100 | [NCv2](../virtual-machines/ncv2-series.md) | | V100 | [NCv3](../virtual-machines/ncv3-series.md) | [!INCLUDE [container-instances-gpu-limits](../../includes/container-instances-gpu-limits.md)] To use GPUs in a container instance, specify a *GPU resource* with the following When deploying GPU resources, set CPU and memory resources appropriate for the workload, up to the maximum values shown in the preceding table. These values are currently larger than the CPU and memory resources available in container groups without GPU resources. > [!IMPORTANT]-> Default [subscription limits](container-instances-quotas.md) (quotas) for GPU resources differ by SKU. The default CPU limits for the P100 and V100 SKUs are initially set to 0. To request an increase in an available region, please submit an [Azure support request][azure-support]. +> Default [subscription limits](container-instances-quotas.md) (quotas) for GPU resources differ by SKU. The default CPU limits for V100 SKUs are initially set to 0. To request an increase in an available region, please submit an [Azure support request][azure-support]. ### Things to know When deploying GPU resources, set CPU and memory resources appropriate for the w ## YAML example -One way to add GPU resources is to deploy a container group by using a [YAML file](container-instances-multi-container-yaml.md). Copy the following YAML into a new file named *gpu-deploy-aci.yaml*, then save the file. This YAML creates a container group named *gpucontainergroup* specifying a container instance with a K80 GPU. The instance runs a sample CUDA vector addition application. The resource requests are sufficient to run the workload. +One way to add GPU resources is to deploy a container group by using a [YAML file](container-instances-multi-container-yaml.md). Copy the following YAML into a new file named *gpu-deploy-aci.yaml*, then save the file. This YAML creates a container group named *gpucontainergroup* specifying a container instance with a V100 GPU. The instance runs a sample CUDA vector addition application. The resource requests are sufficient to run the workload. > [!NOTE] > The following example uses a public container image. To improve reliability, import and manage the image in a private Azure container registry, and update your YAML to use your privately managed base image. [Learn more about working with public images](../container-registry/buffer-gate-public-content.md). 
properties: memoryInGB: 1.5 gpu: count: 1- sku: K80 + sku: V100 osType: Linux restartPolicy: OnFailure ``` Output: ```output 2018-10-25 18:31:10.155010: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA 2018-10-25 18:31:10.305937: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Found device 0 with properties:-name: Tesla K80 major: 3 minor: 7 memoryClockRate(GHz): 0.8235 +name: Tesla V100 major: 7 minor: 0 memoryClockRate(GHz): 0.8235 pciBusID: ccb6:00:00.0 totalMemory: 11.92GiB freeMemory: 11.85GiB-2018-10-25 18:31:10.305981: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1120] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: Tesla K80, pci bus id: ccb6:00:00.0, compute capability: 3.7) +2018-10-25 18:31:10.305981: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1120] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: Tesla V100, pci bus id: ccb6:00:00.0, compute capability: 7.0) 2018-10-25 18:31:14.941723: I tensorflow/stream_executor/dso_loader.cc:139] successfully opened CUDA library libcupti.so.8.0 locally Successfully downloaded train-images-idx3-ubyte.gz 9912422 bytes. Extracting /tmp/tensorflow/input_data/train-images-idx3-ubyte.gz |
container-instances | Container Instances Resource And Quota Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-resource-and-quota-limits.md | For a general list of available regions for Azure Container Instances, see [avai The following regions and maximum resources are available to container groups with [supported and preview](./container-instances-faq.yml) Windows Server containers. +#### Windows Server 2022 LTSC ++| 3B Max CPU | 3B Max Memory (GB) | Storage (GB) | Availability Zone support | +| :-: | :--: | :-: | :-: | +| 4 | 16 | 20 | Y | + #### Windows Server 2019 LTSC > [!NOTE] The following maximum resources are available to a container group deployed with | GPU SKUs | GPU count | Max CPU | Max Memory (GB) | Storage (GB) | | | | | | | +| V100 | 1 | 6 | 112 | 50 | +| V100 | 2 | 12 | 224 | 50 | +| V100 | 4 | 24 | 448 | 50 | +<!-- | K80 | 1 | 6 | 56 | 50 | | K80 | 2 | 12 | 112 | 50 | | K80 | 4 | 24 | 224 | 50 | | P100, V100 | 1 | 6 | 112 | 50 | | P100, V100 | 2 | 12 | 224 | 50 | | P100, V100 | 4 | 24 | 448 | 50 | +--> ## Next steps |
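To make the V100 limits table above concrete, here's a minimal, hypothetical Python sketch — the helper name and structure are illustrative, not part of any Azure SDK — that encodes the table and validates a requested container group against it:

```python
# Hypothetical validator that encodes the V100 limits table above.
# The quota values are copied from the table and may change over time.
V100_LIMITS = {
    1: {"max_cpu": 6, "max_memory_gb": 112},
    2: {"max_cpu": 12, "max_memory_gb": 224},
    4: {"max_cpu": 24, "max_memory_gb": 448},
}

def validate_gpu_request(gpu_count: int, cpu: float, memory_gb: float) -> None:
    """Raise ValueError if a request exceeds the documented V100 maximums."""
    if gpu_count not in V100_LIMITS:
        raise ValueError(f"V100 GPU count must be one of {sorted(V100_LIMITS)}")
    limits = V100_LIMITS[gpu_count]
    if cpu > limits["max_cpu"] or memory_gb > limits["max_memory_gb"]:
        raise ValueError(
            f"{gpu_count}x V100 allows at most {limits['max_cpu']} CPUs "
            f"and {limits['max_memory_gb']} GB of memory"
        )

validate_gpu_request(1, cpu=6, memory_gb=112)  # OK: matches the 1-GPU row
```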
cosmos-db | Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/introduction.md | adobe-target: true Today's applications are required to be highly responsive and always online. To achieve low latency and high availability, instances of these applications need to be deployed in datacenters that are close to their users. Applications need to respond in real time to large changes in usage at peak hours, store ever-increasing volumes of data, and make this data available to users in milliseconds. -Azure Cosmos DB is a fully managed NoSQL and relational database for modern app development. Azure Cosmos DB offers single-digit millisecond response times, automatic and instant scalability, along with guarantee speed at any scale. Business continuity is assured with [SLA-backed](https://azure.microsoft.com/support/legal/sla/cosmos-db) availability and enterprise-grade security. +Azure Cosmos DB is a fully managed NoSQL and relational database for modern app development. Azure Cosmos DB offers single-digit millisecond response times, automatic and instant scalability, along with guaranteed speed at any scale. Business continuity is assured with [SLA-backed](https://azure.microsoft.com/support/legal/sla/cosmos-db) availability and enterprise-grade security. App development is faster and more productive thanks to: |
cost-management-billing | Ea Portal Agreements | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/ea-portal-agreements.md | Title: Azure EA agreements and amendments description: This article explains how Azure EA agreements and amendments affect your Azure EA portal use. Previously updated : 04/24/2023 Last updated : 07/10/2023 The article describes how Azure EA agreements and amendments might affect your a ## Enrollment provisioning status -The start date of a new Azure Prepayment (previously called monetary commitment) is defined by the date that the regional operations center processed it. Since Azure Prepayment orders via the Azure portal are processed in the UTC time zone, you may experience some delay if your Azure Prepayment purchase order was processed in a different region. The coverage start date on the purchase order shows the start of the Azure Prepayment. The coverage start date is when the Azure Prepayment appears in the Azure portal. +The date that the regional operations center processes the new Azure Prepayment (previously called monetary commitment) defines the start date of the new Prepayment. Since Azure Prepayment orders via the Azure portal are processed in the UTC time zone, you may experience some delay if your Azure Prepayment purchase order was processed in a different region. The coverage start date on the purchase order shows the start of the Azure Prepayment. The coverage start date is when the Azure Prepayment appears in the Azure portal. ## Support for enterprise customers An enrollment has one of the following status values. Each value determines how EA credit expires when the EA enrollment ends for all programs except the EU program. -**Expired** - The EA enrollment expires when it reaches the enterprise agreement end date and is opted out of the extended term. Sign a new enrollment contract as soon as possible. Although your service won't be disabled immediately, there's a risk of it getting disabled. +**Expired** - The EA enrollment expires when it reaches the enterprise agreement end date and is opted out of the extended term. Sign a new enrollment contract as soon as possible. Although your service isn't disabled immediately, there's a risk of it getting disabled. As of August 1, 2019, new opt-out forms aren't accepted for Azure commercial customers. Instead, all enrollments go into indefinite extended term. If you want to stop using Azure services, close your subscription in the [Azure portal](https://portal.azure.com). Or, your partner can submit a termination request. There's no change for customers with government agreement types. As of August 1, 2019, new opt-out forms aren't accepted for Azure commercial cus In the Azure portal, Partner Price Markup helps to enable better cost reporting for customers. The Azure portal shows usage and prices configured by partners for their customers. -Markup allows partner administrators to add a percentage markup to their indirect enterprise agreements. Percentage markup applies to all Microsoft first party service information in the Azure portal such as: meter rates, Azure Prepayment, and orders. After the markup is published by the partner, the customer sees Azure costs in the Azure portal. For example, usage summary, price lists, and downloaded usage reports. +Markup allows partner administrators to add a percentage markup to their indirect enterprise agreements. 
Percentage markup applies to all Microsoft first party service information in the Azure portal such as: meter rates, Azure Prepayment, and orders. After the partner publishes the markup, the customer sees Azure costs in the Azure portal. For example, usage summary, price lists, and downloaded usage reports. Starting in September 2019, partners can apply markup anytime during a term. They don't need to wait until the term's next anniversary to apply markup. -Microsoft won't access or utilize the provided markup and associated prices for any purpose unless explicitly authorized by the channel partner. +Microsoft doesn't access or utilize the provided markup and associated prices for any purpose unless explicitly authorized by the channel partner. ### How the calculation works -The LSP provides a single percentage number in the Azure portal. All commercial information on the portal will be uplifted by the percentage provided by the LSP. Example: +The Licensing Solution Partner (LSP) provides a single percentage number in the Azure portal. All commercial information on the portal is uplifted by the percentage provided by the LSP. Example: - Customer signs an EA with Azure Prepayment of USD 100,000. - The meter rate for Service A is USD 10 / Hour. - LSP sets markup percentage of 10% on the EA Portal.-- The example below is how the customer will see the commercial information:+- The following example shows how the customer sees the commercial information: - Monetary Balance: USD 110,000. - Meter rate for Service A: USD 11 / Hour. - Usage/hosting information for service A when used for 100 hours: USD 1,100. Don't use the markup feature if: - You use different rates between Azure Prepayment and meter rates. - You use different rates for different meters. -If you're using different rates for different meters, we recommend developing a custom solution based on the API key. The API key can be provided by the customer to pull consumption data and provide reports. +If you're using different rates for different meters, we recommend developing a custom solution based on the API key. The customer can provide the API key to pull consumption data and provide reports. ### Other important information This feature is meant to provide an estimation of the Azure cost to the end cust Make sure to review the commercial information - monetary balance information, price list, etc. before publishing the marked-up prices to the end customer. +#### Partner markup view limitations ++- If the user has both the Partner admin and EA admin roles, the Partner admin role takes precedence and prices are displayed without markup. +- If partners want to see the cost with markup and download reports with markup in the Azure portal, they should only have the EA admin role. +- The Partner admin sees prices without markup in the downloaded usage files. However, the Partner admin can download the Charges by service file that includes markup details. + #### Azure savings plan purchases For [Azure Savings plan](../savings-plan/savings-plan-compute-overview.md) purchases, in some situations, indirect EA end customers could see minor variances in their utilization percentage when they view their [cost reports](../savings-plan/utilization-cost-reports.md) in Cost Management. Actual purchase and usage charges are always computed in partner prices and not in customer prices (for example, with markup). Subsequent markdown and uplift could result in floating point numbers exceeding eight decimal point precision. 
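As a hedged illustration of that precision issue — plain Python, not the actual billing pipeline — the following sketch reproduces the markdown/uplift round trip with the 3.33/hour commitment and 13% markup used in the example below:

```python
# Illustrative only; the real billing computation is internal to Azure.
customer_rate = 3.33   # savings plan commitment entered by the customer, per hour
markup = 0.13          # 13% partner markup

# Markdown to the partner price, rounded to eight decimal places...
partner_rate = round(customer_rate / (1 + markup), 8)    # 2.94690265

# ...then the uplift applied again in the cost and usage reports.
reported_rate = round(partner_rate * (1 + markup), 8)    # 3.32999999

print(reported_rate == customer_rate)  # False: a minor rounding variance remains
```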
Azure rounds calculations to eight decimal precision, which can cause minor variances in the utilization numbers for end customers. -Let's look at an example. For an Azure Savings Plan commitment amount of 3.33/hour entered by the customer, if the markup is 13%, after the markdown to arrive at partner price and the subsequent markup in the cost and usage reports, there's minor variance in numbers: +Let's look at an example. Assume that a customer enters an Azure Savings Plan commitment amount of 3.33/hour. If the markup is 13%, after the markdown to arrive at the partner price and the subsequent markup in the cost and usage reports, there's minor variance in numbers: - Customer entered value: 3.33/hour - Markup: 13% Let's look at an example. For an Azure Savings Plan commitment amount of 3.33/ho ### How to add a price markup -**You can add price markup on Azure portal with the following steps:** +You can add price markup in the Azure portal with the following steps: -**In the Azure portal,** --- Sign in as a partner administrator. -- Search for Cost Management + Billing and select it. -- In the left navigation menu, select Billing scopes and then select the billing account that you want to work with. -- In the left navigation menu, select Billing Profile and then select the billing profile that you want to work with. -- In the left navigation menu, select Markup.-- To add markup, click on “set markup”.-- Enter the markup percentage and select Preview. -- Review the credit and usage charges before and after markup update. -- Accept the disclaimer and click on Publish to published the markup.-- End customer should be able to view credits and charges details. --**You can add price markup on Azure Enterprise portal with the following steps:** --**Step One: Add price markup** --1. From the Enterprise Portal, select **Reports** on the left navigation. -1. Under _Usage Summary_, select the blue **Markup** wording. +1. In the Azure portal, sign in as a partner administrator. +1. Search for **Cost Management + Billing** and select it. +1. In the left navigation menu, select **Billing scopes** and then select the billing account that you want to work with. +1. In the left navigation menu, select **Billing Profile** and then select the billing profile that you want to work with. +1. In the left navigation menu, select **Markup**. +1. To add markup, select **Set markup**. +1. Enter the markup percentage and select **Preview**. +1. Review the credit and usage charges before and after markup update. +1. Accept the disclaimer and select **Publish** to publish the markup. +1. The customer can now view credit and charge details. ++You can add price markup in the Azure Enterprise portal with the following steps: ++#### First step - Add price markup ++1. In the Enterprise Portal, select **Reports** in the left navigation menu. +1. Under _Usage Summary_, select the blue **Markup** link. 1. Enter the markup percentage (between 0 and 100) and select **Preview**. +#### Second step - Review and validate -**Step Two: Review and validate** --Review the markup price in the _Usage Summary_ for the Prepayment term in the customer view. The Microsoft price will still be available in the partner view. The views can be toggled using the partner markup "people" toggle at the top right. 1. 
Review the prices in the price sheet. 1. Changes can be made before publishing by selecting **Edit** on _View Usage Summary > Customer View_ tab.-   -Both the service prices and the Prepayment balances will be marked up by the same percentages. If you have different percentages for monetary balance and meter rates, or different percentages for different services, then don't use this feature. -**Step Three: Publish** +Both the service prices and the Prepayment balances get marked up by the same percentages. If you have different percentages for monetary balance and meter rates, or different percentages for different services, then don't use this feature. ++#### Third step - Publish After pricing is reviewed and validated, select **Publish**.-   -Pricing with markup will be available to enterprise administrators immediately after selecting publish. Edits can't be made to markup. You must disable markup and begin from the first step. ++Pricing with markup is available to enterprise administrators immediately after selecting publish. Edits can't be made to markup. You must disable markup and begin from the first step. ### Which enrollments have a markup enabled? -To check if an enrollment has a markup published, select **Manage** on the left navigation, and select the **Enrollment** tab. Select the enrollment box to check, and view the markup status under _Enrollment Detail_. It will display the current status of the markup feature for that EA as Disabled, Preview, or Published. +To check if an enrollment has a markup published, select **Manage** in the left navigation menu, then select the **Enrollment** tab. Select the enrollment box to check, and view the markup status under _Enrollment Detail_. It displays the current status of the markup feature for that EA as Disabled, Preview, or Published. -**To check markup status of an enrollment on Azure portal, follow the below steps:** +To check the markup status of an enrollment in the Azure portal, follow these steps: -- In the Azure portal, sign in as a partner administrator. -- Search for Cost Management + Billing and select it. -- Select Billing scopes and then select the billing account that you want to work with.-- In the left navigation menu, select Billing scopes and then select the billing account that you want to work with. -- In the left navigation menu, select Billing Profile -- You can view the markup status of an enrollment +1. In the Azure portal, sign in as a partner administrator. +1. Search for **Cost Management + Billing** and select it. +1. In the left navigation menu, select **Billing scopes** and then select the billing account that you want to work with. +1. In the left navigation menu, select **Billing Profile**. +1. You can view the markup status of an enrollment. ### How can the customer download usage estimates? -Once partner markup is published, the indirect customer will have access to balance and charge .csv monthly files and usage detail .csv files. The usage detail files will include resource rate and extended cost. +Once partner markup is published, the indirect customer has access to the monthly balance and charge CSV files and usage detail files. The usage detail files include the resource rate and extended cost. 
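The usage detail file lends itself to simple scripting. In the hypothetical pandas sketch below, every column name is an assumption about the downloaded file's layout; the check merely confirms that the extended cost lines up with quantity times the (marked-up) resource rate:

```python
import pandas as pd

usage = pd.read_csv("usage-detail.csv")  # hypothetical file name

# All column names below are assumptions, not documented identifiers.
expected = usage["Consumed Quantity"] * usage["Resource Rate"]
mismatches = usage[(expected - usage["Extended Cost"]).abs() > 0.01]

print(f"{len(mismatches)} rows where extended cost deviates from quantity x rate")
```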
### How can I, as a partner, apply markup to existing EA customers that were previously with another partner?-Partners can use the markup feature (on Azure EA portal or Azure portal) after a Change of Channel Partner is processed; no need to wait for the next anniversary term. -+Partners can use the markup feature (on Azure EA portal or Azure portal) after a Change of Channel Partner is processed; there's no need to wait for the next anniversary term. ## Resource Prepayment and requesting quota increases Partners can use the markup feature (on Azure EA portal or Azure portal) after a | Microsoft Azure Storage | Five storage accounts, each with a maximum size of 100 TB. | You can increase the number of storage accounts to up to 20 per subscription. If you require more storage accounts, add more subscriptions. | | SQL Azure | 149 databases of either type (for example, Web Edition or Business Edition). | | | Access Control | 50 Namespaces per account. 100 million Access Control transactions per month | |-| Service Bus | 50 Namespaces per account. 40 Service Bus connections | Customers purchasing Service Bus connections through connection packs will have quotas equal to the midpoint between the connection pack they purchased and the next highest connection pack amount. Customers choosing a 500 Pack will have a quota of 750. | +| Service Bus | 50 Namespaces per account. 40 Service Bus connections | Customers purchasing Service Bus connections through connection packs have quotas equal to the midpoint between the connection pack they purchased and the next highest connection pack amount. Customers choosing a 500 Pack have a quota of 750. | ## Resource Prepayment -Microsoft will provide services to you up to at least the level of the associated usage included in the monthly Prepayment that you purchased (the Service Prepayment). However, all other increases in usage levels of service resources are subject to the availability of these service resources. For example, adding to the number of compute instances running, or increasing the amount of storage in use. +Microsoft provides services to you at least up to the level of the associated usage included in the monthly Prepayment that you purchased (the Service Prepayment). However, all other increases in usage levels of service resources are subject to the availability of these service resources. For example, adding to the number of compute instances running, or increasing the amount of storage in use. -Quotas described above aren't Service Prepayment. You can determine the number of simultaneous small compute instances, or their equivalent, that Microsoft provides as part of a Service Prepayment. Divide the number of committed small compute instance hours purchased for a month by the number of hours in the shortest month of the year. For example, February – 672 hours. +Quotas described previously aren't Service Prepayment. You can determine the number of simultaneous small compute instances, or their equivalent, that Microsoft provides as part of a Service Prepayment. Divide the number of committed small compute instance hours purchased for a month by the number of hours in the shortest month of the year. For example, February – 672 hours. ## Requesting a quota increase EA customer can view price sheet in Azure portal. See [view price sheet in Azure 1. Select **+Add Subscription**. 1. Select **Purchase**. -The first time you add a subscription to an account, you'll need to provide your contact information. 
When you add more subscriptions later, your contact information will be populated for you. +The first time you add a subscription to an account, you need to provide your contact information. When you add more subscriptions later, your contact information is populated for you. -The first time you add a subscription to your account, you'll be asked to accept the MOSA agreement and a Rate Plan. These sections aren't Applicable to Enterprise Agreement Customers, but are currently necessary to create your subscription. Your Microsoft Azure Enterprise Agreement Enrollment Amendment supersedes the above items and your contractual relationship won't change. Select the box indicating you accept the terms. +The first time you add a subscription to your account, you're asked to accept the MOSA agreement and a Rate Plan. These sections aren't applicable to Enterprise Agreement customers, but are currently necessary to create your subscription. Your Microsoft Azure Enterprise Agreement Enrollment Amendment supersedes the above items and your contractual relationship doesn't change. Select the box indicating you accept the terms. **Step Two: Update subscription name** -All new subscriptions will be added with the default *Microsoft Azure Enterprise* subscription name. It's important to update the subscription name to differentiate it from the other subscriptions within your Enterprise Enrollment and ensure that it's recognizable on reports at the enterprise level. +All new subscriptions are added with the default *Microsoft Azure Enterprise* subscription name. It's important to update the subscription name to differentiate it from the other subscriptions within your Enterprise Enrollment and ensure that it's recognizable on reports at the enterprise level. Select **Subscriptions**, select the subscription you created, and then select **Edit Subscription Details.** -Update the subscription name and service administrator and select the checkmark. The subscription name will appear on reports and it will also be the name of the project associated with the subscription on the development portal. +Update the subscription name and service administrator and select the checkmark. The subscription name appears on reports and it's also the name of the project associated with the subscription on the development portal. + New subscriptions may take up to 24 hours to propagate in the subscriptions list. Only account owners can view and manage subscriptions. Direct customers can create and edit subscriptions in the Azure portal. See [manage su **Account owner showing in pending status** -When new Account Owners (AO) are added to the enrollment for the first time, they'll always show as `pending` under status. When you receive the activation welcome email, the AO can sign in to activate their account. This activation will update their account status from `pending` to `active`. +When new Account Owners (AO) are added to the enrollment for the first time, their status always shows as `pending`. When the AO receives the activation welcome email, they can sign in to activate the account. This activation updates the account status from `pending` to `active`. **Usages being charged after Plan SKUs are purchased** This scenario occurs when the customer has deployed services under the wrong enrollment number or selected the wrong services. -To validate if you're deploying under the right enrollment, you can check your included units information via the price sheet. 
Sign in as an Enterprise Administrator and select **Reports** on the left navigation and select **Price Sheet** tab. Select the Download symbol in the top-right corner and find the corresponding Plan SKU part numbers with filter on column "Included Quantity" and select values greater than "0". +To validate if you're deploying under the right enrollment, you can check your included units information via the price sheet. Sign in as an Enterprise Administrator, select **Reports** in the left navigation menu, and then select the **Price Sheet** tab. Select the Download symbol in the top-right corner, and find the corresponding Plan SKU part numbers by filtering the **Included Quantity** column for values greater than 0. Ensure that your OMS plan is showing on the price sheet under included units. If there are no included units for the OMS plan on your enrollment, your OMS plan may be under another enrollment. Contact Azure Enterprise Portal Support at [https://aka.ms/AzureEntSupport](https://aka.ms/AzureEntSupport). |
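If you work with the downloaded price sheet as a CSV, the filter described above can be scripted. Here's a small, hypothetical pandas sketch — the file name and the "Part Number" column label are assumptions; only the **Included Quantity** column is named in the guidance above:

```python
import pandas as pd

# Hypothetical file name; use whatever name your price sheet download produced.
price_sheet = pd.read_csv("price-sheet.csv")

# Keep only rows with included units, mirroring the "greater than 0" filter above.
quantities = pd.to_numeric(price_sheet["Included Quantity"], errors="coerce")
included = price_sheet[quantities > 0]

print(included[["Part Number", "Included Quantity"]])  # "Part Number" is assumed
```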
data-factory | Create Self Hosted Integration Runtime | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/create-self-hosted-integration-runtime.md | Installation of the self-hosted integration runtime on a domain controller isn't - Copy-activity runs happen with a specific frequency. Processor and RAM usage on the machine follows the same pattern with peak and idle times. Resource usage also depends heavily on the amount of data that is moved. When multiple copy jobs are in progress, you see resource usage go up during peak times. - Tasks might fail during extraction of data in Parquet, ORC, or Avro formats. For more on Parquet, see [Parquet format in Azure Data Factory](./format-parquet.md#using-self-hosted-integration-runtime). File creation runs on the self-hosted integration machine. To work as expected, file creation requires the following prerequisites: - [Visual C++ 2010 Redistributable](https://download.microsoft.com/download/3/2/2/3224B87F-CFA0-4E70-BDA3-3DE650EFEBA5/vcredist_x64.exe) Package (x64)- - Java Runtime (JRE) version 11 from a JRE provider such as [Eclipse Temurin](https://adoptium.net/temurin/releases/?version=11). Ensure that the JAVA_HOME environment variable is set to the JDK folder (and not just the JRE folder) you may also need to add the bin folder to your system's PATH environment variable. + - Java Runtime (JRE) version 11 from a JRE provider such as [Microsoft OpenJDK 11](https://aka.ms/download-jdk/microsoft-jdk-11.0.19-windows-x64.msi) or [Eclipse Temurin 11](https://adoptium.net/temurin/releases/?version=11). Ensure that the *JAVA_HOME* system environment variable is set to the JDK folder (not just the JRE folder). You may also need to add the bin folder to your system's PATH environment variable. >[!NOTE] >It might be necessary to adjust the Java settings if memory errors occur, as described in the [Parquet format](./format-parquet.md#using-self-hosted-integration-runtime) documentation. |
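As a quick sanity check for the Java prerequisite above, a short script can confirm that *JAVA_HOME* points at a JDK rather than a bare JRE (a JDK ships `javac`; a JRE doesn't) and that the bin folder is on PATH. This is an illustrative helper, not part of the integration runtime:

```python
import os
from pathlib import Path

java_home = os.environ.get("JAVA_HOME")
if not java_home:
    raise SystemExit("JAVA_HOME is not set")

bin_dir = Path(java_home) / "bin"
# javac.exe ships with a JDK but not with a plain JRE (Windows paths,
# since the self-hosted integration runtime runs on Windows).
if not (bin_dir / "javac.exe").exists():
    raise SystemExit(f"{java_home} looks like a JRE, not a JDK")

if str(bin_dir).lower() not in os.environ.get("PATH", "").lower():
    print(f"Consider adding {bin_dir} to the PATH environment variable")
else:
    print("JAVA_HOME and PATH look correct")
```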
defender-for-cloud | Connect Azure Subscription | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/connect-azure-subscription.md | + + Title: Enable Microsoft Defender for Cloud on your Azure subscription +description: Learn how to enable Microsoft Defender for Cloud's enhanced security features. + Last updated : 07/10/2023++++# Enable Microsoft Defender for Cloud ++In this guide, you'll learn how to enable Microsoft Defender for Cloud on your Azure subscription. ++Microsoft Defender for Cloud is a cloud-native application protection platform (CNAPP) with a set of security measures and practices designed to protect your cloud-based applications end-to-end by combining the following capabilities: ++- A development security operations (DevSecOps) solution that unifies security management at the code level across multicloud and multiple-pipeline environments +- A cloud security posture management (CSPM) solution that surfaces actions that you can take to prevent breaches +- A cloud workload protection platform (CWPP) with specific protections for servers, containers, storage, databases, and other workloads ++Defender for Cloud includes Foundational CSPM capabilities for free, complemented by additional paid plans required to secure all aspects of your cloud resources. To learn more about these plans and their costs, see the Defender for Cloud [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/). ++Defender for Cloud helps you find and fix security vulnerabilities. Defender for Cloud also applies access and application controls to block malicious activity, detect threats using analytics and intelligence, and respond quickly when under attack. ++## Prerequisites ++- To view information related to a resource in Defender for Cloud, you must be assigned the Owner, Contributor, or Reader role for the subscription or for the resource group that the resource is located in. ++## Enable Defender for Cloud on your Azure subscription ++> [!TIP] +> To enable Defender for Cloud on all subscriptions within a management group, see [Enable Defender for Cloud on multiple Azure subscriptions](onboard-management-group.md). ++1. Sign in to the [Azure portal](https://azure.microsoft.com/features/azure-portal/). ++1. Search for and select **Microsoft Defender for Cloud**. ++ :::image type="content" source="media/get-started/defender-for-cloud-search.png" alt-text="Screenshot of the Azure portal with Microsoft Defender for Cloud entered in the search bar and highlighted in the drop down menu." lightbox="media/get-started/defender-for-cloud-search.png"::: ++ The Defender for Cloud overview page opens. ++ :::image type="content" source="./media/get-started/overview.png" alt-text="Screenshot of Defender for Cloud's overview dashboard." lightbox="./media/get-started/overview.png"::: ++Defender for Cloud is now enabled on your subscription and you have access to the basic features provided by Defender for Cloud. These features include: ++- The [Foundational Cloud Security Posture Management (CSPM)](concept-cloud-security-posture-management.md) plan. +- [Recommendations](security-policy-concept.md#what-is-a-security-recommendation). +- Access to the [Asset inventory](asset-inventory.md). +- [Workbooks](custom-dashboards-azure-workbooks.md). +- [Secure score](secure-score-security-controls.md). +- [Regulatory compliance](update-regulatory-compliance-packages.md) with the [Microsoft cloud security benchmark](concept-regulatory-compliance.md). 
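If you'd rather confirm the result from code than from the portal, a minimal sketch against the `Microsoft.Security/pricings` REST endpoint can list each plan's tier. Treat the api-version as an assumption and check the current REST reference:

```python
import requests
from azure.identity import DefaultAzureCredential

subscription_id = "<your-subscription-id>"  # placeholder
token = DefaultAzureCredential().get_token("https://management.azure.com/.default")

# List the Defender plan pricing tiers for the subscription.
response = requests.get(
    f"https://management.azure.com/subscriptions/{subscription_id}"
    "/providers/Microsoft.Security/pricings",
    params={"api-version": "2023-01-01"},  # assumed api-version; verify it
    headers={"Authorization": f"Bearer {token.token}"},
)
response.raise_for_status()
for plan in response.json()["value"]:
    print(plan["name"], plan["properties"]["pricingTier"])  # "Free" or "Standard"
```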
++The Defender for Cloud overview page provides a unified view into the security posture of your hybrid cloud workloads, helping you discover and assess the security of your workloads and to identify and mitigate risks. Learn more in [Microsoft Defender for Cloud's overview page](overview-page.md). ++You can view and filter your list of subscriptions from the subscriptions menu to have Defender for Cloud adjust the overview page display to reflect the security posture of the selected subscriptions. ++Within minutes of launching Defender for Cloud for the first time, you might see: ++- **Recommendations** for ways to improve the security of your connected resources. +- An inventory of your resources that Defender for Cloud assesses along with the security posture of each. ++## Enable all paid plans on your subscription ++To enable all of Defender for Cloud's protections, you need to enable the other paid plans for each of the workloads that you want to protect. ++> [!NOTE] +> +> - You can enable **Microsoft Defender for Storage accounts** at either the subscription level or resource level. +> - You can enable **Microsoft Defender for SQL** at either the subscription level or resource level. +> - You can enable **Microsoft Defender for open-source relational databases** at the resource level only. +> - The Microsoft Defender plans available at the workspace level are: **Microsoft Defender for Servers**, **Microsoft Defender for SQL servers on machines**. ++When you enable Defender plans on an entire Azure subscription, the protections are applied to all resources in the subscription. ++**To enable additional paid plans on a subscription**: ++1. Sign in to the [Azure portal](https://portal.azure.com). ++1. Search for and select **Microsoft Defender for Cloud**. ++1. In the Defender for Cloud menu, select **Environment settings**. ++ :::image type="content" source="media/get-started/environmental-settings.png" alt-text="Screenshot that shows where to navigate to, to select environmental settings from."::: ++1. Select the subscription or workspace that you want to protect. ++1. Select **Enable all** to enable all of the plans for Defender for Cloud. ++ :::image type="content" source="media/get-started/enable-all-plans.png" alt-text="Screenshot that shows where the enable button is located on the plans page." lightbox="media/get-started/enable-all-plans.png"::: ++1. Select **Save**. ++All of the plans are turned on and the monitoring components required by each plan are deployed to the protected resources. ++If you want to disable any of the plans, toggle the individual plan to **off**. The extensions used by the plan aren't uninstalled but, after a short time, the extensions stop collecting data. ++> [!TIP] +> To enable Defender for Cloud on all subscriptions within a management group, see [Enable Defender for Cloud on multiple Azure subscriptions](onboard-management-group.md). ++## Next steps ++In this guide, you enabled Defender for Cloud on your Azure subscription. The next step is to set up your hybrid and multicloud environments. 
++> [!div class="nextstepaction"] +> [Quickstart: Connect your non-Azure machines to Microsoft Defender for Cloud with Azure Arc](quickstart-onboard-machines.md) +> +> [Quickstart: Connect your AWS accounts to Microsoft Defender for Cloud](quickstart-onboard-aws.md) +> +> [Quickstart: Connect your GCP projects to Microsoft Defender for Cloud](quickstart-onboard-gcp.md) +> +> [Quickstart: Connect your non-Azure machines to Microsoft Defender for Cloud with Defender for Endpoint](onboard-machines-with-defender-for-endpoint.md) |
defender-for-cloud | Continuous Export | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/continuous-export.md | To view the event schemas of the exported data types, visit the [Log Analytics t ## Export data to an Azure Event Hubs or Log Analytics workspace in another tenant -You can **not** export data to an Azure Event Hubs or Log Analytics workspace in a different tenant, without using [Azure Lighthouse](../lighthouse/overview.md). When collecting data into a tenant, you can analyze the data from one central location. +You ***cannot*** configure data to be exported to a Log Analytics workspace in another tenant when using Azure Policy to assign the configuration. This process only works with the REST API, and the configuration is unsupported in the Azure portal (because it requires a multitenant context). Azure Lighthouse ***does not*** resolve this issue with Policy, although you can use Lighthouse as the authentication method. -To export data to an Azure Event Hubs or Log Analytics workspace in a different tenant **with Azure Lighthouse**: +When collecting data into a tenant, you can analyze the data from one central location. -1. In the tenant that has the Azure Event Hubs or Log Analytics workspace, [invite a user](../active-directory/external-identities/what-is-b2b.md#easily-invite-guest-users-from-the-azure-portal) from the tenant that hosts the continuous export configuration. -1. For a Log Analytics workspace: After the user accepts the invitation to join the tenant, assign the user in the workspace tenant one of these roles: Owner, Contributor, Log Analytics Contributor, Sentinel Contributor, Monitoring Contributor -1. Configure the continuous export configuration and select the event hub or Analytics workspace to send the data to. +To export data to an Azure Event Hubs or Log Analytics workspace in a different tenant: -You can also configure export to another tenant through the REST API. For more information, see the automations [REST API](/rest/api/defenderforcloud/automations/create-or-update?tabs=HTTP). +1. In the tenant that has the Azure Event Hubs or Log Analytics workspace, [invite a user](../active-directory/external-identities/what-is-b2b.md#easily-invite-guest-users-from-the-azure-portal) from the tenant that hosts the continuous export configuration, or alternatively configure Azure Lighthouse for the source and destination tenant. +1. If you're using Azure AD B2B guest access, ensure that the user accepts the invitation to access the tenant as a guest. +1. If you're using a Log Analytics workspace, assign the user in the workspace tenant one of these roles: Owner, Contributor, Log Analytics Contributor, Sentinel Contributor, or Monitoring Contributor. +1. Create and submit the request to the Azure REST API to configure the required resources. You'll need to manage the bearer tokens in the context of both the local (workspace) tenant and the remote (continuous export) tenant. ## Continuously export to an event hub behind a firewall |
defender-for-cloud | Enable All Plans | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/enable-all-plans.md | - Title: Enable all of the paid plan on your subscription - Microsoft Defender for Cloud- -description: Learn how to enable all of Microsoft Defender for Cloud's paid plans on your subscription. - Previously updated : 06/22/2023----# Protect all of your resources with Defender for Cloud --In this deployment guide, you learn how to enable all of Microsoft Defender for Cloud's paid plans to your environments. --## Enable all paid plans on your subscription --To enable all of the Defender for Cloud's protections, you need to enable the other paid plans for each of the workloads that you want to protect. --> [!NOTE] -> - You can enable **Microsoft Defender for Storage accounts** at either the subscription level or resource level. -> - You can enable **Microsoft Defender for SQL** at either the subscription level or resource level. -> - You can enable **Microsoft Defender for open-source relational databases** at the resource level only. -> - The Microsoft Defender plans available at the workspace level are: **Microsoft Defender for Servers**, **Microsoft Defender for SQL servers on machines**. --When you enabled Defender plans on an entire Azure subscription, the protections are applied to all other resources in the subscription. --**To enable additional paid plans on a subscription**: --1. Sign in to the [Azure portal](https://portal.azure.com). --1. Search for and select **Microsoft Defender for Cloud**. --1. In the Defender for Cloud menu, select **Environment settings**. -- :::image type="content" source="media/get-started/environmental-settings.png" alt-text="Screenshot that shows where to navigate to, to select environmental settings from."::: - -1. Select the subscription or workspace that you want to protect. --1. Select **Enable all** to enable all of the plans for Defender for Cloud. -- :::image type="content" source="media/get-started/enable-all-plans.png" alt-text="Screenshot that shows where the enable button is located on the plans page." lightbox="media/get-started/enable-all-plans.png"::: - -1. Select **Save**. --All of the plans are turned on and the monitoring components required by each plan are deployed to the protected resources. --If you want to disable any of the plans, toggle the individual plan to **off**. The extensions used by the plan aren't uninstalled but, after a short time, the extensions stop collecting data. --> [!TIP] -> To access Defender for Cloud on all subscriptions within a management group, see [Enable Defender for Cloud on multiple Azure subscriptions](onboard-management-group.md). --## Next steps --Learn more about [Microsoft Defender for Cloud's overview page](overview-page.md). |
defender-for-cloud | Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/get-started.md | - Title: Enable Microsoft Defender for Cloud on your Azure subscription -description: Learn how to enable Microsoft Defender for Cloud's enhanced security features. - Previously updated : 06/22/2023----# Enable Microsoft Defender for Cloud --In this quickstart guide, you learn how to enable Microsoft Defender for Cloud on your Azure subscription. --Defender for Cloud provides unified security management and threat protection across your hybrid and multicloud workloads. While the free features offer limited security for your Azure resources only, you can also enable other paid plans that add extra protection for your resources that exist on your on-premises and other clouds. To learn more about these plans and their costs, see the Defender for Cloud [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/). --Defender for Cloud helps you find and fix security vulnerabilities. Defender for Cloud also applies access and application controls to block malicious activity, detect threats using analytics and intelligence, and respond quickly when under attack. --## Prerequisites --- To get started with Defender for Cloud, you must have a subscription to Microsoft Azure. If you don't have a subscription, you can sign up for a [free account](https://azure.microsoft.com/pricing/free-trial/).---## Enable Defender for Cloud on your Azure subscription --> [!TIP] -> To enable Defender for Cloud on all subscriptions within a management group, see [Enable Defender for Cloud on multiple Azure subscriptions](onboard-management-group.md). --1. Sign into the [Azure portal](https://azure.microsoft.com/features/azure-portal/). --1. Search for and select **Microsoft Defender for Cloud**. -- :::image type="content" source="media/get-started/defender-for-cloud-search.png" alt-text="Screenshot of the Azure portal with Microsoft Defender for Cloud entered in the search bar and highlighted in the drop down menu." lightbox="media/get-started/defender-for-cloud-search.png"::: -- The Defender for Cloud's overview page opens. -- :::image type="content" source="./media/get-started/overview.png" alt-text="Defender for Cloud's overview dashboard" lightbox="./media/get-started/overview.png"::: --Defender for Cloud is now enabled on your subscription and you have access to the basic features provided by Defender for Cloud. These features include: --- The [Foundational Cloud Security Posture Management (CSPM)](concept-cloud-security-posture-management.md) plan.-- [Recommendations](security-policy-concept.md#what-is-a-security-recommendation).-- Access to the [Asset inventory](asset-inventory.md).-- [Workbooks](custom-dashboards-azure-workbooks.md).-- [Secure score](secure-score-security-controls.md).-- [Regulatory compliance](update-regulatory-compliance-packages.md) with the [Microsoft cloud security benchmark](concept-regulatory-compliance.md).--The Defender for Cloud overview page provides a unified view into the security posture of your hybrid cloud workloads, helping you discover and assess the security of your workloads and to identify and mitigate risks. Learn more in [Microsoft Defender for Cloud's overview page](overview-page.md). --You can view and filter your list of subscriptions from the subscriptions menu to have Defender for Cloud adjust the overview page display to reflect the security posture to the selected subscriptions. 
--Within minutes of launching Defender for Cloud for the first time, you might see: --- **Recommendations** for ways to improve the security of your connected resources.-- An inventory of your resources that Defender for Cloud assesses along with the security posture of each.--## Next steps --In this quickstart, you enabled Defender for Cloud on your Azure subscription. The next step is to set up your hybrid and multicloud environments. --> [!div class="nextstepaction"] -> [Quickstart: Connect your non-Azure machines to Microsoft Defender for Cloud with Azure Arc](quickstart-onboard-machines.md) -> -> [Quickstart: Connect your AWS accounts to Microsoft Defender for Cloud](quickstart-onboard-aws.md) -> -> [Quickstart: Connect your GCP projects to Microsoft Defender for Cloud](quickstart-onboard-gcp.md) -> -> [Quickstart: Connect your non-Azure machines to Microsoft Defender for Cloud with Defender for Endpoint](onboard-machines-with-defender-for-endpoint.md) |
defender-for-cloud | Tutorial Enable Container Aws | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/tutorial-enable-container-aws.md | Last updated 06/29/2023 Defender for Containers in Microsoft Defender for Cloud is the cloud-native solution that is used to secure your containers so you can improve, monitor, and maintain the security of your clusters, containers, and their applications. -Defender for Containers assists you with the three core aspects of container security: --- [**Environment hardening**](defender-for-containers-introduction.md#hardening) - Defender for Containers protects your Kubernetes clusters whether they're running on Azure Kubernetes Service, Kubernetes on-premises/IaaS, or Amazon EKS. Defender for Containers continuously assesses clusters to provide visibility into misconfigurations and guidelines to help mitigate identified threats.--- [**Vulnerability assessment**](defender-for-containers-introduction.md#vulnerability-assessment) - Vulnerability assessment and management tools for images stored in ACR registries and running in Azure Kubernetes Service.--- [**Run-time threat protection for nodes and clusters**](defender-for-containers-introduction.md#run-time-protection-for-kubernetes-nodes-and-clusters) - Threat protection for clusters and Linux nodes generates security alerts for suspicious activities.- Learn more about [Overview of Microsoft Defender for Containers](defender-for-containers-introduction.md). You can learn more about Defender for Containers' pricing on the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/). To protect your EKS clusters, you need to enable the Containers plan on the rele :::image type="content" source="media/tutorial-enable-containers-aws/aws-containers-enabled.png" alt-text="Screenshot of enabling Defender for Containers for an AWS connector." lightbox="media/tutorial-enable-containers-aws/aws-containers-enabled.png"::: -1. (Optional) To change the retention period for your audit logs, select **Settings**, enter the required timeframe, and select **Save**. +1. (Optional) To change the retention period for your audit logs, select **Settings**, enter the required time frame, and select **Save**. - :::image type="content" source="media/tutorial-enable-containers-aws/retention-period.png" alt-text="Screenshot of adjusting the retention period for EKS control pane logs." lightbox="media/tutorial-enable-containers-aws/retention-period.png"::: + :::image type="content" source="media/tutorial-enable-containers-aws/retention-period.png" alt-text="Screenshot of adjusting the retention period for EKS control plane logs." lightbox="media/tutorial-enable-containers-aws/retention-period.png"::: > [!Note] > If you disable this configuration, then the `Threat detection (control plane)` feature will be disabled. Learn more about [features availability](supported-machines-endpoint-solutions-clouds-containers.md). Azure Arc-enabled Kubernetes, the Defender extension, and the Azure Policy exten 1. Select **Fix**. -1. Defender for Cloud generates a script in the language of your choice: +1. Defender for Cloud generates a script in the language of your choice: - For Linux, select **Bash**. - For Windows, select **PowerShell**. |
defender-for-cloud | Tutorial Enable Container Gcp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/tutorial-enable-container-gcp.md | Last updated 06/29/2023 Defender for Containers in Microsoft Defender for Cloud is the cloud-native solution that is used to secure your containers so you can improve, monitor, and maintain the security of your clusters, containers, and their applications. -Defender for Containers assists you with the three core aspects of container security: --- [**Environment hardening**](defender-for-containers-introduction.md#hardening) - Defender for Containers protects your Kubernetes clusters whether they're running on Azure Kubernetes Service, Kubernetes on-premises/IaaS, Amazon EKS or GCP. Defender for Containers continuously assesses clusters to provide visibility into misconfigurations and guidelines to help mitigate identified threats.--- [**Vulnerability assessment**](defender-for-containers-introduction.md#vulnerability-assessment) - Vulnerability assessment and management tools for images stored in ACR registries and running in Azure Kubernetes Service.--- [**Run-time threat protection for nodes and clusters**](defender-for-containers-introduction.md#run-time-protection-for-kubernetes-nodes-and-clusters) - Threat protection for clusters and Linux nodes generates security alerts for suspicious activities.- Learn more about [Overview of Microsoft Defender for Containers](defender-for-containers-introduction.md). You can learn more about Defender for Containers' pricing on the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/). You can learn more about Defender for Containers' pricing on the [pricing page]( If you disabled any of the default auto provisioning configurations (set them to Off) during the [GCP connector onboarding process](quickstart-onboard-gcp.md#configure-the-containers-plan) or afterwards, you need to manually install Azure Arc-enabled Kubernetes, the Defender extension, and the Azure Policy extensions to each of your GKE clusters to get the full security value out of Defender for Containers. There are two dedicated Defender for Cloud recommendations you can use to install the extensions (and Arc if necessary):-- `GKE clusters should have Microsoft Defender's extension for Azure Arc installed`-- `GKE clusters should have the Azure Policy extension installed`++- `GKE clusters should have Microsoft Defender's extension for Azure Arc installed` +- `GKE clusters should have the Azure Policy extension installed` **To deploy the solution to specific clusters**: There are two dedicated Defender for Cloud recommendations you can use to instal :::image type="content" source="media/tutorial-enable-containers-gcp/fix-button.png" alt-text="Screenshot showing the location of the fix button."::: -1. Defender for Cloud generates a script in the language of your choice: +1. Defender for Cloud generates a script in the language of your choice: - For Linux, select **Bash**. - For Windows, select **PowerShell**. |
defender-for-cloud | Tutorial Enable Containers Arc | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/tutorial-enable-containers-arc.md | Title: Protect your on-premises device with the Defender for Containers - Microsoft Defender for Cloud + Title: Protect your on-premises Kubernetes clusters with Defender for Containers - Microsoft Defender for Cloud -description: Learn how to enable the Defender for Containers plan on your on-premises device for Microsoft Defender for Cloud. +description: Learn how to enable the Defender for Containers plan on your on-premises devices for Microsoft Defender for Cloud. Last updated 06/27/2023 -# Protect your on-premises device with the Defender for Containers +# Protect your on-premises Kubernetes clusters with Defender for Containers Defender for Containers in Microsoft Defender for Cloud is the cloud-native solution that is used to secure your containers so you can improve, monitor, and maintain the security of your clusters, containers, and their applications. -Defender for Containers assists you with the three core aspects of container security: --- [**Environment hardening**](defender-for-containers-introduction.md#hardening) - Defender for Containers protects your Kubernetes clusters whether they're running on Azure Kubernetes Service, Kubernetes on-premises/IaaS, or Amazon EKS. Defender for Containers continuously assesses clusters to provide visibility into misconfigurations and guidelines to help mitigate identified threats.--- [**Vulnerability assessment**](defender-for-containers-introduction.md#vulnerability-assessment) - Vulnerability assessment and management tools for images stored in ACR registries and running in Azure Kubernetes Service.--- [**Run-time threat protection for nodes and clusters**](defender-for-containers-introduction.md#run-time-protection-for-kubernetes-nodes-and-clusters) - Threat protection for clusters and Linux nodes generates security alerts for suspicious activities.- Learn more about [Overview of Microsoft Defender for Containers](defender-for-containers-introduction.md). You can learn more about Defender for Containers' pricing on the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/). You can learn more about Defender for Containers' pricing on the [pricing page]( - You must [enable Microsoft Defender for Cloud](get-started.md#enable-defender-for-cloud-on-your-azure-subscription) on your Azure subscription. -- Connect your [non-Azure machines](quickstart-onboard-machines.md).--- Ensure the following [Azure Arc-enabled Kubernetes network requirements](../azure-arc/kubernetes/quickstart-connect-cluster.md) are validated.+- Ensure the following [Azure Arc-enabled Kubernetes network requirements](../azure-arc/kubernetes/network-requirements.md) are validated and [connect the Kubernetes cluster to Azure Arc](../azure-arc/kubernetes/quickstart-connect-cluster.md). 
- Validate the following endpoints are configured for outbound access so that the Defender extension can connect to Microsoft Defender for Cloud to send security data and events: You can learn more about Defender for Containers' pricing on the [pricing page]( - [Connect the Kubernetes cluster to Azure Arc](../azure-arc/kubernetes/quickstart-connect-cluster.md) -- Complete the [prerequisites listed under the generic cluster extensions documentation](../azure-arc/kubernetes/extensions.md).- ## Enable the Defender for Containers plan By default, when enabling the plan through the Azure portal, Microsoft Defender for Containers is configured to automatically install required components to provide the protections offered by the plan, including the assignment of a default workspace. If you would prefer to [assign a custom workspace](defender-for-containers-enabl 1. Select **Save**. -## Deploy the Defender extension in Azure +## Deploy the Defender extension on Arc-enabled Kubernetes clusters that were onboarded to an Azure subscription You can enable the Defender for Containers plan and deploy all of the relevant components in different ways. We walk you through the steps to accomplish this using the Azure portal. Learn how to [deploy the Defender extension](/azure/defender-for-cloud/defender-for-containers-enable?pivots=defender-for-container-arc&tabs=aks-deploy-portal%2Ck8s-deploy-asc%2Ck8s-verify-asc%2Ck8s-remove-arc%2Caks-removeprofile-api#deploy-the-defender-extension) with the REST API, the Azure CLI, or a Resource Manager template. |
defender-for-cloud | Tutorial Enable Containers Azure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/tutorial-enable-containers-azure.md | Last updated 06/29/2023 Defender for Containers in Microsoft Defender for Cloud is the cloud-native solution that is used to secure your containers so you can improve, monitor, and maintain the security of your clusters, containers, and their applications. -Defender for Containers assists you with the three core aspects of container security: --- [**Environment hardening**](defender-for-containers-introduction.md#hardening) - Defender for Containers protects your Kubernetes clusters whether they're running on Azure Kubernetes Service, Kubernetes on-premises/IaaS, or Amazon EKS. Defender for Containers continuously assesses clusters to provide visibility into misconfigurations and guidelines to help mitigate identified threats.--- [**Vulnerability assessment**](defender-for-containers-introduction.md#vulnerability-assessment) - Vulnerability assessment and management tools for images stored in ACR registries and running in Azure Kubernetes Service.--- [**Run-time threat protection for nodes and clusters**](defender-for-containers-introduction.md#run-time-protection-for-kubernetes-nodes-and-clusters) - Threat protection for clusters and Linux nodes generates security alerts for suspicious activities.- Learn more about [Overview of Microsoft Defender for Containers](defender-for-containers-introduction.md). You can learn more about Defender for Containers' pricing on the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/). |
defender-for-cloud | Tutorial Enable Storage Plan | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/tutorial-enable-storage-plan.md | Defender for Storage in Microsoft Defender for Cloud is an Azure-native layer of Learn more about the [Defender for Storage plan](defender-for-storage-introduction.md). -You can learn more about Defender for Storage's pricing on [the pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/). +You can learn more about Defender for Storage's pricing on [the pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/). ## Prerequisites You can learn more about Defender for Storage's pricing on [the pricing page](ht ## Enable the Storage plan -Defender for Storage continually analyzes the telemetry stream generated by the [Azure Blob Storage](https://azure.microsoft.com/services/storage/blobs/) and Azure Files services. When potentially malicious activities are detected, security alerts are generated. These alerts are displayed in Microsoft Defender for Cloud, together with the details of the suspicious activity along with the relevant investigation steps, remediation actions, and security recommendations. +Defender for Storage continually analyzes the telemetry stream generated by the [Azure Blob Storage](https://azure.microsoft.com/services/storage/blobs/) and Azure Files services. When potentially malicious activities are detected, security alerts are generated. These alerts are displayed in Microsoft Defender for Cloud, together with the details of the suspicious activity along with the relevant investigation steps, remediation actions, and security recommendations. -**To enable Defender for App Service on your subscription**: +**To enable Defender for Storage on your subscription**: 1. Sign in to the [Azure portal](https://portal.azure.com). Defender for Storage continually analyzes the telemetry stream generated by the - [Overview of Microsoft Defender for Storage](defender-for-storage-introduction.md) - +- [Additional configurations for Defender for Storage](../storage/common/azure-defender-storage-configure.md?toc=/azure/defender-for-cloud/toc.json) |
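For teams that script subscription onboarding, the portal steps above have a CLI equivalent; this is a minimal sketch of enabling the plan at subscription scope (the per-storage-account configuration options described in the linked articles aren't shown here).

```azurecli-interactive
# Sketch: enable the Defender for Storage plan on the current subscription.
az security pricing create --name StorageAccounts --tier Standard
```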
defender-for-cloud | Upcoming Changes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md | If you're looking for the latest release notes, you can find them in the [What's | [DevOps Resource Deduplication for Defender for DevOps](#devops-resource-deduplication-for-defender-for-devops) | July 2023 | | [General availability release of agentless container posture in Defender CSPM](#general-availability-ga-release-of-agentless-container-posture-in-defender-cspm) | July 2023 | | [Business model and pricing updates for Defender for Cloud plans](#business-model-and-pricing-updates-for-defender-for-cloud-plans) | July 2023 |-| [Recommendation set to be released for GA: Running container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management)](#recommendation-set-to-be-released-for-ga-running-container-images-should-have-vulnerability-findings-resolved-powered-by-microsoft-defender-vulnerability-management) | July 2023 | | [Change to the Log Analytics daily cap](#change-to-the-log-analytics-daily-cap) | September 2023 | ### Replacing the "Key Vaults should have purge protection enabled" recommendation with combined recommendation "Key Vaults should have deletion protection enabled". Customers will have until July 31, 2023 to resolve this issue. After this date, The new Agentless Container Posture capabilities are set for General Availability (GA) as part of the Defender CSPM (Cloud Security Posture Management) plan. -With this release, the recommendation `Container registry images should have vulnerability findings resolved (powered by MDVM)` is set for General Availability (GA): --|Recommendation | Description | Assessment Key| -|--|--|--| -| Container registry images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management)| Container image vulnerability assessment scans your registry for commonly known vulnerabilities (CVEs) and provides a detailed vulnerability report for each image. This recommendation provides visibility to vulnerable images currently running in your Kubernetes clusters. Remediating vulnerabilities in container images that are currently running is key to  improving your security posture, significantly reducing the attack surface for your containerized workloads. |dbd0cb49-b563-45e7-9724-889e799fa648 <br> is replaced by c0b7cfc6-3172-465a-b378-53c7ff2cc0d5 --Customers with both Defender for Containers plan and Defender CSPM plan should [disable the Qualys recommendation](tutorial-security-policy.md#disable-a-security-recommendation), to avoid multiple reports for the same images with potential impact on secure score. If you're currently using the sub-assesment API or Azure Resource Graph or continuous export, you should also update your requests to the new schema used by the MDVM recommendation prior to disabling the Qualys recommendation and using MDVM results instead. --If you are also using our public preview offering for Windows containers vulnerability assessment powered by Qualys, and you would like to continue using it, you need to [disable Linux findings](defender-for-containers-vulnerability-assessment-azure.md#disable-specific-findings) using disable rules rather than disable the registry recommendation. - Learn more about [Agentless Containers Posture in Defender CSPM](concept-agentless-containers.md). 
### Business model and pricing updates for Defender for Cloud plans Existing customers of Defender for Key-Vault, Defender for Azure Resource Manage For more information on all of these plans, check out the [Defender for Cloud pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/?v=17.23h) -### Recommendation set to be released for GA: Running container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management)  --**Estimated date for change: July 2023** --The recommendation `Running container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management)` is set to be released as GA (General Availability): --|Recommendation | Description | Assessment Key| -|--|--|--| -| Running container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management) | Container image vulnerability assessment scans your registry for commonly known vulnerabilities (CVEs) and provides a detailed vulnerability report for each image. This recommendation provides visibility to vulnerable images currently running in your Kubernetes clusters. Remediating vulnerabilities in container images that are currently running is key to improving your security posture, significantly reducing the attack surface for your containerized workloads. | c609cf0f-71ab-41e9-a3c6-9a1f7fe1b8d5 -- Customers with both Defender for the Containers plan and Defender CSPM plan should [disable the Qualys running containers recommendation](tutorial-security-policy.md#disable-a-security-recommendation), to avoid multiple reports for the same images with potential impact on the secure score. --If you're currently using the sub-assesment API or Azure Resource Graph or continuous export, you should also update your requests to the new schema used by the MDVM recommendation prior to disabling the Qualys recommendation and use MDVM results instead. --If you are also using our public preview offering for Windows containers vulnerability assessment powered by Qualys, and you would like to continue using it, you need to [disable Linux findings](defender-for-containers-vulnerability-assessment-azure.md#disable-specific-findings) using disable rules rather than disable the runtime recommendation. --Learn more about [Agentless Containers Posture in Defender CSPM](concept-agentless-containers.md). - ### Change to the Log Analytics daily cap Azure monitor offers the capability to [set a daily cap](../azure-monitor/logs/daily-cap.md) on the data that is ingested on your Log analytics workspaces. However, Defender for Cloud security events are currently not supported in those exclusions. |
dev-box | Concept Common Components | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/concept-common-components.md | -# Components common to Microsoft Dev Box and Azure Deployment Environments +# Components common to Microsoft Dev Box and Azure Deployment Environments > [!TIP] > Welcome to the **Microsoft Dev Box** documentation. If you're looking for information about **Azure Deployment Environments**, follow this link: [*Components common to Azure Deployment Environments and Microsoft Dev Box*](../deployment-environments/concept-common-components.md). |
dev-box | Concept Dev Box Concepts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/concept-dev-box-concepts.md | -# Key concepts for Microsoft Dev Box Preview +# Key concepts for Microsoft Dev Box This article describes the key concepts and components of Microsoft Dev Box. As a dev box user, you have control over your own dev boxes. You can create more A dev center is a collection of projects that require similar settings. Dev centers enable dev infrastructure managers to: - Manage the images and SKUs available to the projects by using dev box definitions.-- Configure the networks that the development teams consume by using network connections. +- Configure the networks that the development teams consume by using network connections. ## Project |
dev-box | How To Configure Azure Compute Gallery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-configure-azure-compute-gallery.md | To learn more about Azure Compute Gallery and how to create galleries, see: - A compute gallery. Images stored in a compute gallery can be used in a dev box definition, provided they meet the requirements listed in the [Compute gallery image requirements](#compute-gallery-image-requirements) section. > [!NOTE]-> Microsoft Dev Box Preview doesn't support community galleries. +> Microsoft Dev Box doesn't support community galleries. ## Compute gallery image requirements The image version must meet the following requirements: ## Provide permissions for services to access a gallery -When you use an Azure Compute Gallery image to create a dev box definition, the Windows 365 service validates the image to ensure that it meets the requirements to be provisioned for a dev box. The Dev Box Preview service replicates the image to the regions specified in the attached network connections, so the images are present in the region that's required for dev box creation. +When you use an Azure Compute Gallery image to create a dev box definition, the Windows 365 service validates the image to ensure that it meets the requirements to be provisioned for a dev box. The Dev Box service replicates the image to the regions specified in the attached network connections, so the images are present in the region that's required for dev box creation. To allow the services to perform these actions, you must provide permissions to your gallery as follows. ### Add a user-assigned identity to the dev center -1. [Follow the steps to create a user-assigned managed identity](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md?pivots=identity-mi-methods-azp#create-a-user-assigned-managed-identity). +1. [Follow the steps to create a user-assigned managed identity](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md?pivots=identity-mi-methods-azp#create-a-user-assigned-managed-identity). 1. Sign in to the [Azure portal](https://portal.azure.com). 1. In the search box, enter **dev box**. In the list of results, select **Dev centers**. 1. Open your dev center. On the left menu, select **Identity**. The gallery is detached from the dev center. The gallery and its images aren't d ## Next steps -- Learn more about [key concepts in Microsoft Dev Box Preview](./concept-dev-box-concepts.md).+- Learn more about [key concepts in Microsoft Dev Box](./concept-dev-box-concepts.md). |
dev-box | How To Configure Dev Box Azure Diagnostic Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-configure-dev-box-azure-diagnostic-logs.md | Last updated 04/28/2023 # Configure Azure diagnostic logs for a dev center -With Azure diagnostic logs for DevCenter, you can view audit logs for dataplane operations in your dev center. These logs can be routed to any of the following destinations: +With Azure diagnostic logs for DevCenter, you can view audit logs for dataplane operations in your dev center. These logs can be routed to any of the following destinations: * Azure Storage account * Log Analytics workspace This feature is available on all dev centers. -Diagnostics logs allow you to export basic usage information from your dev center to different kinds sources so that you can consume them in a customized way. The dataplane audit logs expose information around CRUD operations for dev boxes within your dev center. Including, for example, start and stop commands executed on dev boxes. Some sample ways you can choose to export this data: +Diagnostic logs allow you to export basic usage information from your dev center to different kinds of sources so that you can consume them in a customized way. The dataplane audit logs expose information around CRUD operations for dev boxes within your dev center. This includes, for example, start and stop commands executed on dev boxes. Some sample ways you can choose to export this data: * Export data to blob storage, export to CSV. * Export data to Azure Monitor logs and view and query data in your own Log Analytics workspace DevCenter stores data in the following tables. | Table | Description | |:|:|-| DevCenterDiagnosticLogs | Table used to store dataplane request/response information on dev box or environments within the dev center. | +| DevCenterDiagnosticLogs | Table used to store dataplane request/response information on dev box or environments within the dev center. | ### Sample Kusto Queries |
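As a starting point for that heading, here's a minimal Kusto sketch; it assumes only the `DevCenterDiagnosticLogs` table named above and the standard `TimeGenerated` and `OperationName` columns, so verify column names against your own workspace before relying on it.

```kusto
// Sketch: count dev center data-plane operations logged in the last day.
// Column names other than TimeGenerated and OperationName vary; check your workspace schema.
DevCenterDiagnosticLogs
| where TimeGenerated > ago(1d)
| summarize OperationCount = count() by OperationName
| order by OperationCount desc
```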
dev-box | How To Configure Dev Box Hibernation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-configure-dev-box-hibernation.md | + + Title: Configure hibernation for Microsoft Dev Box ++description: Learn how to enable, disable, and troubleshoot hibernation for your dev boxes. ++++ Last updated : 07/05/2023++#Customer intent: As a platform engineer, I want dev box users to be able to hibernate their dev boxes as part of my cost management strategy and so that dev box users can resume their work where they left off. +++# How to configure Dev Box Hibernation (preview) ++Hibernating dev boxes at the end of the workday can help you save a substantial portion of your VM costs. It eliminates the need for developers to shut down their dev box and lose their open windows and applications. ++With the introduction of Dev Box Hibernation (Preview), you can enable this capability on new dev boxes and hibernate and resume them. This feature provides a convenient way to manage your dev boxes while maintaining your work environment. ++There are two steps to enable hibernation: you must enable hibernation on your dev box image, and then on your dev box definition. ++> [!IMPORTANT] +> Dev Box Hibernation is currently in PREVIEW. +> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. +++## Key concepts for hibernation-enabled images ++- The 8 vCPU and 16 vCPU SKUs support hibernation; 32 vCPU SKUs don't. ++- You can enable hibernation only on new dev boxes created with hibernation-enabled dev box definitions. You cannot enable hibernation on existing dev boxes. ++- You can hibernate a dev box only by using the developer portal, CLI, PowerShell, SDKs, and API. Hibernating from within the dev box in Windows is not supported. ++- If you use a marketplace image, we recommend using the Visual Studio dev box images. ++- The Windows 11 Enterprise CloudPC + OS Optimizations image contains optimized power settings that can't be used with hibernation. ++- Once enabled, you cannot disable hibernation on a dev box. However, you can disable hibernation support on the dev box definition so that future dev boxes do not have hibernation. ++- To enable hibernation, you need to enable nested virtualization in your Windows OS. If the "Virtual Machine Platform" feature is not enabled in your dev box image, Dev Box automatically enables nested virtualization for you when you enable hibernation. ++- Hibernation doesn't support hypervisor-protected code integrity (HVCI) and Memory Integrity features. Dev Box disables this feature automatically. ++- Auto-stop schedules still shut down the dev boxes. If you want to hibernate your dev box, you can do it through the developer portal or the CLI. ++### Settings not compatible with hibernation ++These settings are known to be incompatible with hibernation, and aren't supported for hibernation scenarios: ++- **Memory Integrity/Hypervisor Code Integrity.** + + To disable Memory Integrity/Hypervisor Code Integrity: + 1. In the start menu, search for *memory integrity* + 1. Select **Core Isolation** + 1. Under **Memory integrity**, ensure that memory integrity is set to Off. 
++- **Guest Virtual Secure Mode based features without Nested Virtualization enabled.** ++ To enable Nested Virtualization: + 1. In the start menu, search for *Turn Windows features on or off* + 1. In Turn Windows features on or off, select **Virtual Machine Platform**, and then select **OK** + ## Enable hibernation on your dev box image ++The Visual Studio and Microsoft 365 images that Dev Box provides in the Azure Marketplace are already configured to support hibernation. You don't need to enable hibernation on these images; they're ready to use. ++If you plan to use a custom image from an Azure Compute Gallery, you need to enable hibernation capabilities as you create the new image. To enable hibernation capabilities, set the IsHibernateSupported flag to true. You must set the IsHibernateSupported flag when you create the image; existing images can't be modified. ++To enable hibernation capabilities, set the `IsHibernateSupported` flag to true: ++```azurecli-interactive +az sig image-definition create \ + --resource-group <resourcegroupname> --gallery-name <galleryname> --gallery-image-definition <imageName> --location <location> \ + --publisher <publishername> --offer <offername> --sku <skuname> --os-type windows --os-state Generalized \ + --features "IsHibernateSupported=true SecurityType=TrustedLaunch" --hyper-v-generation V2 +``` ++For more information about creating a custom image, see [Configure a dev box by using Azure VM Image Builder](how-to-customize-devbox-azure-image-builder.md). ++## Enable hibernation on a dev box definition ++You can enable hibernation as you create a dev box definition, provided that the dev box definition uses a hibernation-enabled custom or marketplace image. You can also update an existing dev box definition that uses a hibernation-enabled custom or marketplace image. ++All new dev boxes created in dev box pools that use a dev box definition with hibernation enabled can hibernate in addition to shutting down. If a pool has dev boxes that were created before hibernation was enabled, they continue to only support shutdown. ++Dev Box validates your image for hibernate support. Your dev box definition may fail validation if hibernation couldn't be successfully enabled using your image. ++### Enable hibernation on an existing dev box definition by using the Azure portal ++1. Sign in to the [Azure portal](https://portal.azure.com). ++1. In the search box, enter **dev center**. In the list of results, select **Dev centers**. ++1. Open the dev center that contains the dev box definition that you want to update, and then select **Dev box definitions**. + + :::image type="content" source="./media/how-to-configure-dev-box-hibernation/select-dev-box-definitions.png" alt-text="Screenshot that shows the dev center overview page and the menu option for dev box definitions."::: + +1. Select the dev box definition that you want to update, and then select the edit button. ++ :::image type="content" source="./media/how-to-configure-dev-box-hibernation/update-dev-box-definition.png" alt-text="Screenshot of the list of existing dev box definitions and the edit button."::: ++1. On the Editing \<dev box definition\> page, select **Enable hibernation**. ++ :::image type="content" source="./media/how-to-configure-dev-box-hibernation/dev-box-pool-enable-hibernation.png" alt-text="Screenshot of the page for editing a dev box definition, with Enable hibernation selected."::: ++1. Select **Save**. 
++### Update an existing dev box definition by using the CLI + ```azurecli-interactive +az devcenter admin devbox-definition update --dev-box-definition-name <DevBoxDefinitionName> --dev-center-name <devcentername> --resource-group <resourcegroupname> --hibernate-support enabled +``` ++## Disable hibernation on a dev box definition ++ If you have issues provisioning new VMs after enabling hibernation on a pool or you want to revert to shutdown-only dev boxes, you can disable hibernation on the dev box definition. ++### Disable hibernation on an existing dev box definition by using the Azure portal ++1. Sign in to the [Azure portal](https://portal.azure.com). ++1. In the search box, enter **dev center**. In the list of results, select **Dev centers**. ++1. Open the dev center that contains the dev box definition that you want to update, and then select **Dev box definitions**. + +1. Select the dev box definition that you want to update, and then select the edit button. ++1. On the Editing \<dev box definition\> page, clear **Enable hibernation**. ++ :::image type="content" source="./media/how-to-configure-dev-box-hibernation/dev-box-pool-disable-hibernation.png" alt-text="Screenshot of the page for editing a dev box definition, with Enable hibernation not selected."::: ++1. Select **Save**. ++### Disable hibernation on an existing dev box definition by using the CLI + ```azurecli-interactive +az devcenter admin devbox-definition update --dev-box-definition-name <DevBoxDefinitionName> --dev-center-name <devcentername> --resource-group <resourcegroupname> --hibernate-support disabled +``` ++## Next steps ++- [Create a dev box pool](how-to-manage-dev-box-pools.md) +- [Configure a dev box by using Azure VM Image Builder](how-to-customize-devbox-azure-image-builder.md) +- [How to hibernate your dev box](how-to-hibernate-your-dev-box.md) +- [CLI Reference for az devcenter admin devbox-definition update](/cli/azure/devcenter/admin/devbox-definition?view=azure-cli-latest&preserve-view=true) |
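As a usage illustration of the update command above, with invented resource names (`ContosoDevCenter`, `rg-devbox`, and `WebDevBox` are hypothetical):

```azurecli-interactive
# Sketch with invented names: enable hibernation on an existing dev box definition.
az devcenter admin devbox-definition update \
    --dev-box-definition-name WebDevBox \
    --dev-center-name ContosoDevCenter \
    --resource-group rg-devbox \
    --hibernate-support enabled
```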
dev-box | How To Configure Network Connections | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-configure-network-connections.md | Title: Configure network connections -description: Learn how to create, delete, attach, and remove Microsoft Dev Box Preview network connections. +description: Learn how to create, delete, attach, and remove Microsoft Dev Box network connections. To create a network connection, you need an existing virtual network and subnet. An organization can control network ingress and egress by using a firewall, network security groups, and even Microsoft Defender. -If your organization routes egress traffic through a firewall, you need to open certain ports to allow the Microsoft Dev Box Preview service to function. For more information, see [Network requirements](/windows-365/enterprise/requirements-network). +If your organization routes egress traffic through a firewall, you need to open certain ports to allow the Microsoft Dev Box service to function. For more information, see [Network requirements](/windows-365/enterprise/requirements-network). ## Plan a network connection -The following sections show you how to create and configure a network connection in Microsoft Dev Box Preview. +The following sections show you how to create and configure a network connection in Microsoft Dev Box. ### Types of Active Directory join You need to attach a network connection to a dev center before you can use it in 1. Select the dev center that you created, and then select **Networking**. -1. Select **+ Add**. +1. Select **+ Add**. 1. On the **Add network connection** pane, select the network connection that you created earlier, and then select **Add**. |
dev-box | How To Configure Stop Schedule | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-configure-stop-schedule.md | -To save on costs, you can enable an Auto-stop schedule on a dev box pool. Microsoft Dev Box Preview attempts to shut down all dev boxes in that pool at the time specified in the schedule. You can configure one stop time in one timezone for each pool. +To save on costs, you can enable an Auto-stop schedule on a dev box pool. Microsoft Dev Box will attempt to shut down all dev boxes in that pool at the time specified in the schedule. You can configure one stop time in one timezone for each pool. ## Permissions To manage a dev box schedule, you need the following permissions: You can create an auto-stop schedule while creating a new dev box pool, or by mo :::image type="content" source="./media/how-to-manage-stop-schedule/dev-box-save-pool.png" alt-text="Screenshot of the edit dev box pool page showing the Auto-stop options."::: -1. Select **Save**. +1. Select **Save**. ### Add an Auto-stop schedule as you create a pool To delete an auto-stop schedule, first navigate to your pool: :::image type="content" source="./media/how-to-manage-stop-schedule/dev-box-disable-stop.png" alt-text="Screenshot of the edit dev box pool page showing Auto-stop disabled."::: -1. Select **Save**. Dev boxes in this pool won't automatically shut down. +1. Select **Save**. Dev boxes in this pool won't automatically shut down. ## Manage an auto-stop schedule at the CLI |
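The CLI heading above ends the excerpt, so as an illustrative sketch only: creating a daily stop schedule might look like the following, with placeholder names. The parameter set is an assumption based on the other `az devcenter admin` commands in this document, so confirm it with `az devcenter admin schedule create --help`.

```azurecli-interactive
# Sketch: stop all dev boxes in a pool at 19:00 in the given time zone each day.
# Placeholder names; verify parameters with: az devcenter admin schedule create --help
az devcenter admin schedule create \
    --pool-name <pool-name> \
    --project-name <project-name> \
    --resource-group <resource-group> \
    --schedule-type StopDevBox \
    --frequency Daily \
    --time 19:00 \
    --time-zone "America/Los_Angeles" \
    --state Enabled
```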
dev-box | How To Create Dev Boxes Developer Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-create-dev-boxes-developer-portal.md | Title: Create & configure a dev box by using the developer portal -description: Learn how to create, delete, and connect to Microsoft Dev Box Preview dev boxes by using the developer portal. +description: Learn how to create, delete, and connect to Microsoft Dev Box dev boxes by using the developer portal. -You can preconfigure a dev box to manage all of your tools, services, source code, and prebuilt binaries that are specific to your project. Microsoft Dev Box Preview provides an environment that's ready to build on, so you can run your app in minutes. +You can preconfigure a dev box to manage all of your tools, services, source code, and prebuilt binaries that are specific to your project. Microsoft Dev Box provides an environment that's ready to build on, so you can run your app in minutes. ## Permissions You can delete dev boxes after you finish your tasks. Say you finished fixing yo > [!NOTE] > Ensure that neither you nor your team members need the dev box before deleting. You can't retrieve dev boxes after deletion. ## Next steps |
dev-box | How To Customize Devbox Azure Image Builder | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-customize-devbox-azure-image-builder.md | To reduce the complexity of creating VM images, VM Image Builder: - Removes the need to use complex tooling, processes, and manual steps to create a VM image. VM Image Builder abstracts out all these details and hides Azure-specific requirements, such as the need to generalize the image (Sysprep). And it gives more advanced users the ability to override such requirements. -- Can be integrated with existing image build pipelines for a click-and-go experience. To do so, you can either call VM Image Builder from your pipeline or use an Azure VM Image Builder service DevOps task (preview).+- Can be integrated with existing image build pipelines for a click-and-go experience. To do so, you can either call VM Image Builder from your pipeline or use an Azure VM Image Builder service DevOps task. - Can fetch customization data from various sources, which removes the need to collect them all from one place. $replRegion2="eastus" # Create the gallery New-AzGallery -GalleryName $galleryName -ResourceGroupName $imageResourceGroup -Location $location -$SecurityType = @{Name='SecurityType';Value='TrustedLaunch'} +$SecurityType = @{Name='SecurityType';Value='TrustedLaunch'} $features = @($SecurityType) # Create the image definition-New-AzGalleryImageDefinition -GalleryName $galleryName -ResourceGroupName $imageResourceGroup -Location $location -Name $imageDefName -OsState generalized -OsType Windows -Publisher 'myCompany' -Offer 'vscodebox' -Sku '1-0-0' -Feature $features -HyperVGeneration "V2" +New-AzGalleryImageDefinition -GalleryName $galleryName -ResourceGroupName $imageResourceGroup -Location $location -Name $imageDefName -OsState generalized -OsType Windows -Publisher 'myCompany' -Offer 'vscodebox' -Sku '1-0-0' -Feature $features -HyperVGeneration "V2" ``` 1. Copy the following Azure Resource Manager template for VM Image Builder. This template indicates the source image and the customizations applied. This template installs Choco and VS Code. It also indicates where the image will be distributed. After your custom image has been provisioned in the gallery, you can configure t ## Set up the Dev Box service with a custom image -After the gallery images are available in the dev center, you can use the custom image with the Microsoft Dev Box Preview service. For more information, see [Quickstart: Configure Microsoft Dev Box Preview](./quickstart-configure-dev-box-service.md). +After the gallery images are available in the dev center, you can use the custom image with the Microsoft Dev Box service. For more information, see [Quickstart: Configure Microsoft Dev Box](./quickstart-configure-dev-box-service.md). ## Next steps |
dev-box | How To Dev Box User | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-dev-box-user.md | -Team members must have access to a specific Microsoft Dev Box Preview project before they can create dev boxes. By using the built-in DevCenter Dev Box User role, you can assign permissions to Active Directory users or groups at the project level. +Team members must have access to a specific Microsoft Dev Box project before they can create dev boxes. By using the built-in DevCenter Dev Box User role, you can assign permissions to Active Directory users or groups at the project level. [!INCLUDE [supported accounts note](./includes/note-supported-accounts.md)] |
dev-box | How To Hibernate Your Dev Box | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-hibernate-your-dev-box.md | + + Title: Hibernate your Microsoft Dev Box ++description: Learn how to hibernate your dev boxes. ++++ Last updated : 07/05/2023++#Customer intent: As a developer, I want to be able to hibernate my dev boxes so that I can resume work where I left off. +++# How to hibernate your dev box ++Hibernation is a power-saving state that saves your running applications to your hard disk and then shuts down the virtual machine (VM). When you resume the VM, all your previous work is restored. ++You can hibernate your dev box through the developer portal or the CLI. You can't hibernate your dev box from the dev box itself. ++> [!IMPORTANT] +> Dev Box Hibernation is currently in PREVIEW. +> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. ++## Hibernate your dev box using the developer portal ++Hibernate your dev box through the developer portal: ++1. Sign in to the [developer portal](https://aka.ms/devbox-portal). + +1. On the dev box you want to hibernate, on the more options menu, select **Hibernate**. + +Dev boxes that support hibernation will show the **Hibernate** option. Dev boxes that only support shutdown will show the **Shutdown** option. ++## Resume your dev box using the developer portal ++Resume your dev box through the developer portal: ++1. Sign in to the [developer portal](https://aka.ms/devbox-portal). + +1. On the dev box you want to resume, on the more options menu, select **Resume**. ++You can also double-click your dev box in the list of VMs in the Remote Desktop app. Your dev box automatically starts up and resumes from a hibernating state. ++## Hibernate your dev box using the CLI ++You can use the CLI to hibernate your dev box: ++```azurecli-interactive +az devcenter dev dev-box stop --name <YourDevBoxName> --dev-center-name <YourDevCenterName> --project-name <YourProjectName> --user-id "me" --hibernate true +``` ++To learn more about managing your dev box from the CLI, see: [devcenter reference](/cli/azure/devcenter/dev/dev-box?view=azure-cli-latest&preserve-view=true). ++## Troubleshooting ++**My dev box doesn't resume from a hibernated state. Attempts to connect to it fail and I receive an error from the RDP app.** ++If your machine is unresponsive, it may have stalled either while going into hibernation or resuming from hibernation. In that case, you can manually reboot your dev box. ++To shut down your dev box, use either of the following: ++- Developer portal - Go to the [developer portal](https://aka.ms/devbox-portal), select your dev box, and on the more options menu, select **Shut down**. +- CLI - `az devcenter dev dev-box stop --name <YourDevBoxName> --dev-center-name <YourDevCenterName> --project-name <YourProjectName> --user-id "me" --hibernate false` ++**When my dev box resumes from a hibernated state, all my open windows are gone.** ++Dev Box Hibernation is a preview feature, and you might run into reliability issues. Enable AutoSave on your applications to minimize the impact of session loss. ++**I changed some settings on one of my dev boxes and it no longer hibernates. My other dev boxes hibernate without issues. 
What could be the problem?** ++Some settings aren't compatible with hibernation and prevent your dev box from hibernating. To learn about these settings, see: [Settings not compatible with hibernation](how-to-configure-dev-box-hibernation.md#settings-not-compatible-with-hibernation). ++ ## Next steps ++- [Manage a dev box by using the developer portal](how-to-create-dev-boxes-developer-portal.md) +- [How to configure Dev Box Hibernation (preview)](how-to-configure-dev-box-hibernation.md) |
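For symmetry with the stop command above, resuming a hibernated dev box from the CLI is a start operation; this sketch simply mirrors the parameters the article already uses for `stop`.

```azurecli-interactive
# Sketch: resume (start) a hibernated dev box, mirroring the stop command's parameters.
az devcenter dev dev-box start --name <YourDevBoxName> --dev-center-name <YourDevCenterName> --project-name <YourProjectName> --user-id "me"
```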
dev-box | How To Install Dev Box Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-install-dev-box-cli.md | az extension list ### Update the Dev Box CLI extension You can update the Dev Box CLI extension if you already have it installed. -To update a version of the extension that's installed +To update the version of the extension that's already installed: ``` azurecli az extension update --name devcenter ``` |
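If the extension isn't installed yet, installing it follows the same pattern:

```azurecli-interactive
az extension add --name devcenter
```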
dev-box | How To Manage Dev Box Definitions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-manage-dev-box-definitions.md | -A dev box definition is a Microsoft Dev Box Preview resource that specifies a source image, compute size, and storage size. +A dev box definition is a Microsoft Dev Box resource that specifies a source image, compute size, and storage size. Depending on their task, development teams have different software, configuration, compute, and storage requirements. You can create a new dev box definition to fulfill each team's needs. There's no limit to the number of dev box definitions that you can create, and you can use dev box definitions across multiple projects in a dev center. |
dev-box | How To Manage Dev Box Pools | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-manage-dev-box-pools.md | To manage a dev box pool, you need the following permissions: ## Create a dev box pool -A dev box pool is a collection of dev boxes that you manage together. You must have a pool before users can create a dev box. +A dev box pool is a collection of dev boxes that you manage together. You must have a pool before users can create a dev box. The following steps show you how to create a dev box pool that's associated with a project. You'll use an existing dev box definition and network connection in the dev center to configure the pool. -If you don't have an available dev center with an existing dev box definition and network connection, follow the steps in [Quickstart: Configure Microsoft Dev Box Preview](quickstart-configure-dev-box-service.md) to create them. +If you don't have an available dev center with an existing dev box definition and network connection, follow the steps in [Quickstart: Configure Microsoft Dev Box](quickstart-configure-dev-box-service.md) to create them. 1. Sign in to the [Azure portal](https://portal.azure.com). |
dev-box | How To Manage Dev Box Projects | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-manage-dev-box-projects.md | -A project is the point of access to Microsoft Dev Box Preview for the development team members. A project contains dev box pools, which specify the dev box definitions and network connections used when dev boxes are created. Dev managers can configure the project with dev box pools that specify dev box definitions appropriate for their team's workloads. Dev box users create dev boxes from the dev box pools they have access to through their project memberships. +A project is the point of access to Microsoft Dev Box for the development team members. A project contains dev box pools, which specify the dev box definitions and network connections used when dev boxes are created. Dev managers can configure the project with dev box pools that specify dev box definitions appropriate for their team's workloads. Dev box users create dev boxes from the dev box pools they have access to through their project memberships. Each project is associated with a single dev center. When you associate a project with a dev center, all the settings at the dev center level will be applied to the project automatically. |
dev-box | How To Manage Dev Center | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-manage-dev-center.md | -#Customer intent: As a dev infrastructure manager, I want to be able to manage dev centers so that I can manage my Microsoft Dev Box Preview implementation. +#Customer intent: As a dev infrastructure manager, I want to be able to manage dev centers so that I can manage my Microsoft Dev Box implementation. # Manage a Microsoft Dev Box dev center To manage a dev center, you need the following permissions: Your development teams' requirements change over time. You can create a new dev center to support organizational changes like a new business requirement or a new regional center. You can create as many or as few dev centers as you need, depending on how you organize and manage your development teams. -To create a dev center: +To create a dev center: 1. Sign in to the [Azure portal](https://portal.azure.com). You can attach existing network connections to a dev center. You must attach a n 1. Select the dev center that you want to attach the network connection to, and then select **Networking**. -1. Select **+ Add**. +1. Select **+ Add**. 1. On the **Add network connection** pane, select the network connection that you created earlier, and then select **Add**. |
dev-box | How To Project Admin | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-project-admin.md | -You can create multiple Microsoft Dev Box Preview projects in the dev center to align with each team's specific requirements. By using the built-in DevCenter Project Admin role, you can delegate project administration to a member of a team. Project admins can use the network connections and dev box definitions configured at the dev center level to create and manage dev box pools within their project. +You can create multiple Microsoft Dev Box projects in the dev center to align with each team's specific requirements. By using the built-in DevCenter Project Admin role, you can delegate project administration to a member of a team. Project admins can use the network connections and dev box definitions configured at the dev center level to create and manage dev box pools within their project. A DevCenter Project Admin can manage a project by: The users can now manage the project and create dev box pools within it. ## Next steps -- [Quickstart: Configure Microsoft Dev Box Preview](quickstart-configure-dev-box-service.md)+- [Quickstart: Configure Microsoft Dev Box](quickstart-configure-dev-box-service.md) |
dev-box | Overview What Is Microsoft Dev Box | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/overview-what-is-microsoft-dev-box.md | Title: What is Microsoft Dev Box? -description: Learn how Microsoft Dev Box Preview gives self-service access to high-performance, preconfigured, and ready-to-code cloud-based workstations. +description: Learn how Microsoft Dev Box gives self-service access to high-performance, preconfigured, and ready-to-code cloud-based workstations. Last updated 04/25/2023 adobe-target: true -# What is Microsoft Dev Box Preview? +# What is Microsoft Dev Box? -Microsoft Dev Box Preview gives you self-service access to high-performance, preconfigured, and ready-to-code cloud-based workstations called dev boxes. You can set up dev boxes with tools, source code, and prebuilt binaries that are specific to a project, so developers can immediately start work. If you're a developer, you can use dev boxes in your day-to-day workflows. +Microsoft Dev Box gives you self-service access to high-performance, preconfigured, and ready-to-code cloud-based workstations called dev boxes. You can set up dev boxes with tools, source code, and prebuilt binaries that are specific to a project, so developers can immediately start work. If you're a developer, you can use dev boxes in your day-to-day workflows. The Dev Box service was designed with three organizational roles in mind: dev infrastructure (infra) admins, developer team leads, and developers. Dev infra admins and IT admins work together to provide developer infrastructure Developer team leads are experienced developers who have in-depth knowledge of their projects. They can be assigned the DevCenter Project Admin role and assist with creating and managing the developer experience. Project admins create and manage pools of dev boxes. -Members of a development team are assigned the DevCenter Dev Box User role. They can then self-serve one or more dev boxes on demand from the dev box pools that have been enabled for a project. Dev box users can work on multiple projects or tasks by creating multiple dev boxes. +Members of a development team are assigned the DevCenter Dev Box User role. They can then self-serve one or more dev boxes on demand from the dev box pools that have been enabled for a project. Dev box users can work on multiple projects or tasks by creating multiple dev boxes. Microsoft Dev Box bridges the gap between development teams and IT, by bringing control of project resources closer to the development team. ## Scenarios for Microsoft Dev Box -Organizations can use Microsoft Dev Box Preview in a range of scenarios. +Organizations can use Microsoft Dev Box in a range of scenarios. ### Dev infra scenarios Dev Box helps dev infra teams provide the appropriate dev boxes for each user's workload. Dev infra admins can: Organizations can even define dev boxes for various roles on a team. You might c ## How does Dev Box work? -This diagram shows the components of the Dev Box Preview service and the relationships between them. +This diagram shows the components of the Dev Box service and the relationships between them. 
:::image type="content" source="media/overview-what-is-microsoft-dev-box/dev-box-architecture.png" alt-text="Diagram that shows the Dev Box architecture."::: When the configuration of the service is complete, developers can create and man ## Next steps -Start using Microsoft Dev Box Preview: +Start using Microsoft Dev Box: -- [Quickstart: Configure Microsoft Dev Box Preview](./quickstart-configure-dev-box-service.md)+- [Quickstart: Configure Microsoft Dev Box](./quickstart-configure-dev-box-service.md) - [Quickstart: Create a dev box](./quickstart-create-dev-box.md) |
dev-box | Quickstart Configure Dev Box Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/quickstart-configure-dev-box-service.md | Title: 'Quickstart: Configure Microsoft Dev Box' -description: In this quickstart, you learn how to configure the Microsoft Dev Box Preview service to provide dev boxes for users. +description: In this quickstart, you learn how to configure the Microsoft Dev Box service to provide dev boxes for users. Last updated 04/25/2023 #Customer intent: As an enterprise admin, I want to understand how to create and configure dev box components so that I can provide dev box projects for my users. -# Quickstart: Configure Microsoft Dev Box Preview +# Quickstart: Configure Microsoft Dev Box -This quickstart describes how to set up Microsoft Dev Box Preview to enable development teams to self-serve their dev boxes. The setup process involves two distinct phases. In the first phase, dev infra admins configure the necessary Microsoft Dev Box resources through the Azure portal. After this phase is complete, users can proceed to the next phase, creating and managing their dev boxes through the developer portal. This quickstart shows you how to complete the first phase. +This quickstart describes how to set up Microsoft Dev Box to enable development teams to self-serve their dev boxes. The setup process involves two distinct phases. In the first phase, dev infra admins configure the necessary Microsoft Dev Box resources through the Azure portal. After this phase is complete, users can proceed to the next phase, creating and managing their dev boxes through the developer portal. This quickstart shows you how to complete the first phase. The following graphic shows the steps required to configure Microsoft Dev Box in the Azure portal. To complete this quickstart, you need: - An Azure account with an active subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. - Owner or Contributor role on an Azure subscription or resource group.-- User licenses. To use Dev Box Preview, each user must be licensed for Windows 11 Enterprise or Windows 10 Enterprise, Microsoft Intune, and Azure Active Directory (Azure AD) P1. These licenses are available independently and are included in the following subscriptions:+- User licenses. To use Dev Box, each user must be licensed for Windows 11 Enterprise or Windows 10 Enterprise, Microsoft Intune, and Azure Active Directory (Azure AD) P1. These licenses are available independently and are included in the following subscriptions: - Microsoft 365 F3 - Microsoft 365 E3, Microsoft 365 E5 - Microsoft 365 A3, Microsoft 365 A5 To complete this quickstart, you need: - If your organization routes egress traffic through a firewall, open the appropriate ports. For more information, see [Network requirements](/windows-365/enterprise/requirements-network). ## 1. Create a dev center -Use the following steps to create a dev center so that you can manage your dev box resources: +Use the following steps to create a dev center so that you can manage your dev box resources: 1. Sign in to the [Azure portal](https://portal.azure.com). Use the following steps to create a dev center so that you can manage your dev b ## 2. Configure a network connection -Network connections determine the region in which dev boxes are deployed. They also allow dev boxes to be connected to your existing virtual networks. 
The following steps show you how to create and configure a network connection in Microsoft Dev Box Preview. +Network connections determine the region in which dev boxes are deployed. They also allow dev boxes to be connected to your existing virtual networks. The following steps show you how to create and configure a network connection in Microsoft Dev Box. ### Create a virtual network and subnet To assign roles: ## Project Admins -Microsoft Dev Box Preview makes it possible for you to delegate administration of projects to a member of the project team. Project administrators can assist with the day-to-day management of projects for their teams, like creating and managing dev box pools. To give users permissions to manage projects, assign the DevCenter Project Admin role to them. +Microsoft Dev Box makes it possible for you to delegate administration of projects to a member of the project team. Project administrators can assist with the day-to-day management of projects for their teams, like creating and managing dev box pools. To give users permissions to manage projects, assign the DevCenter Project Admin role to them. You can assign the DevCenter Project Admin role by using the steps described earlier in [6. Provide access to a dev box project](#6-provide-access-to-a-dev-box-project) and select the Project Admin role instead of the Dev Box User role. For more information, see [Provide access to projects for project admins](how-to-project-admin.md). |
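For reference, step 1 (creating the dev center) can also be scripted; a minimal sketch with placeholder names, assuming defaults for the optional parameters:

```azurecli-interactive
# Sketch: create a dev center with placeholder names.
az devcenter admin devcenter create \
    --name <dev-center-name> \
    --resource-group <resource-group> \
    --location <location>
```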
dev-box | Quickstart Create Dev Box | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/quickstart-create-dev-box.md | Last updated 04/25/2023 # Quickstart: Create a dev box by using the developer portal -In this quickstart, you get started with Microsoft Dev Box Preview by creating a dev box through the developer portal. After you create the dev box, you can connect to it with a Remote Desktop session through a browser or through a Remote Desktop app. +In this quickstart, you get started with Microsoft Dev Box by creating a dev box through the developer portal. After you create the dev box, you can connect to it with a Remote Desktop session through a browser or through a Remote Desktop app. You can create and manage multiple dev boxes as a dev box user. Create a dev box for each task that you're working on, and create multiple dev boxes within a single project to help streamline your workflow. When you no longer need your dev box, you can delete it: 1. To confirm the deletion, select **Delete**. - :::image type="content" source="./media/quickstart-create-dev-box/dev-portal-delete-dev-box-confirm.png" alt-text="Screenshot of the Delete button in the confirmation message about deleting a dev box."::: + :::image type="content" source="./media/quickstart-create-dev-box/dev-portal-delete-dev-box-confirm.png" alt-text="Screenshot of the Delete button in the confirmation message about deleting a dev box."::: ## Next steps |
dev-box | Tutorial Connect To Dev Box With Remote Desktop App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/tutorial-connect-to-dev-box-with-remote-desktop-app.md | -After you configure the Microsoft Dev Box Preview service and create dev boxes, you can connect to them by using a browser or by using a Remote Desktop client. +After you configure the Microsoft Dev Box service and create dev boxes, you can connect to them by using a browser or by using a Remote Desktop client. Remote Desktop apps let you use and control a dev box from almost any device. For your desktop or laptop, you can choose to download the Remote Desktop client for Windows Desktop or Microsoft Remote Desktop for Mac. You can also download a Remote Desktop app for your mobile device: Microsoft Remote Desktop for iOS or Microsoft Remote Desktop for Android. In this tutorial, you learn how to: To complete this tutorial, you must first: -- [Configure Microsoft Dev Box Preview](./quickstart-configure-dev-box-service.md).+- [Configure Microsoft Dev Box](./quickstart-configure-dev-box-service.md).- [Create a dev box](./quickstart-create-dev-box.md#create-a-dev-box) on the [developer portal](https://aka.ms/devbox-portal). ## Download the client and connect to your dev box The dev box might take a few moments to stop. ## Next steps -To learn about managing Microsoft Dev Box Preview, see: +To learn about managing Microsoft Dev Box, see: - [Provide access to project admins](./how-to-project-admin.md) - [Provide access to dev box users](./how-to-dev-box-user.md) |
dev-box | Tutorial Dev Box Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/tutorial-dev-box-limits.md | + + Title: "Tutorial: Limit the number of dev boxes in a project to help control costs" +description: Each dev box incurs compute and storage costs. This tutorial shows you how to set a limit on the number of dev boxes developers can create in a project. ++++ Last updated : 06/30/2023++#CustomerIntent: As a project admin, I want to set a limit on the number of dev boxes a dev box user can create as part of my cost management strategy. +++# Tutorial: Control costs by setting dev box limits on a project ++You can set a limit on the number of dev boxes each developer can create within a project. You can use this functionality to help manage costs, use resources effectively, or prevent dev box creation for a given project. ++In the developer portal, you see the number of dev boxes that you've created in a project, and the total number of dev boxes you can create in the project. If you've used all your available dev boxes in a project, you can't create a new dev box. ++In this tutorial, you learn how to: ++> [!div class="checklist"] +> * Set a dev box limit for your project by using the Azure portal +> * View dev box limits in the developer portal + ## Prerequisites ++- A Dev Box project in your subscription +- Project Admin permission to that project ++## Set a dev box limit for your project ++The dev box limit is the number of dev boxes each developer can create in a project. For example, if you set the limit to 3, each developer in your team can create 3 dev boxes. ++1. Sign in to the [Azure portal](https://portal.azure.com/). +1. In the search box, enter *projects*. In the list of results, select **Projects**. +1. Select the project that you want to set a limit for. +1. On the left menu, select **Limits**. +1. On the **Limits** page, for **Enable dev box limit**, select **Yes**. + + :::image type="content" source="media/tutorial-dev-box-limits/enable-dev-box-limits.png" alt-text="Screenshot showing the dev box limits options for a project, with Yes highlighted."::: + +1. In **Dev boxes per developer**, enter a dev box limit and then select **Apply**. + + :::image type="content" source="media/tutorial-dev-box-limits/dev-box-limit-number.png" alt-text="Screenshot showing dev box limits for a project enabled, with dev boxes per developer highlighted."::: ++>[!TIP] +> To prevent developers from creating more dev boxes in a project, set the dev box limit to 0. This won't delete existing dev boxes, but it will prevent further creation of dev boxes in the project. ++## View dev box limits in the developer portal +In the developer portal, select a project to see the number of dev boxes you have already created and the total number of dev boxes you can create in that project. +++If you've used all your available dev boxes in a project, you see an error message and you can't create a new dev box: ++*Your project administrator has set a limit of 3 dev boxes per user in Contoso-software-dev. Please delete a dev box in this project, or contact your administrator to increase your limit.* +++## Clean up resources ++If you're not going to continue to use dev box limits, remove the limit with the following steps: ++1. 
In the search box, enter *projects*. In the list of results, select **Projects**. +1. Select the project that you want to remove the limit from. +1. On the left menu, select **Limits**. +1. On the **Limits** page, for **Enable dev box limit**, select **No**. ++## Next steps ++- [Use the CLI to configure dev box limits](/cli/azure/devcenter/admin/project) +- [Manage a dev box project](how-to-manage-dev-box-projects.md) +- [Microsoft Dev Box pricing](https://azure.microsoft.com/pricing/details/dev-box/) |
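The CLI link in the next steps can apply the same limit without the portal; a sketch with placeholder names, using the `--max-dev-boxes-per-user` parameter (verify with `az devcenter admin project update --help`):

```azurecli-interactive
# Sketch: limit each developer to 3 dev boxes in a project (placeholder names).
az devcenter admin project update \
    --name <project-name> \
    --resource-group <resource-group> \
    --max-dev-boxes-per-user 3
```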
event-grid | Communication Services Email Events | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/communication-services-email-events.md | This section contains an example of what that data would look like for each even "deliveryStatusDetails": { "statusMessage": "Status Message" },- "deliveryAttemptTimeStamp": "2020-09-18T00:22:20.2855749Z", + "deliveryAttemptTimeStamp": "2020-09-18T00:22:20.2855749+00:00", }, "eventType": "Microsoft.Communication.EmailDeliveryReportReceived", "dataVersion": "1.0", "metadataVersion": "1",- "eventTime": "2020-09-18T00:22:20Z" + "eventTime": "2020-09-18T00:22:20+00:00" }] ``` This section contains an example of what that data would look like for each even "eventType": "Microsoft.Communication.EmailEngagementTrackingReportReceived", "dataVersion": "1.0", "metadataVersion": "1",- "eventTime": "2022-09-06T22:34:52.1303612Z" + "eventTime": "2022-09-06T22:34:52.1303612+00:00" }] ``` |
firewall | Firewall Preview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/firewall-preview.md | The following Azure Firewall preview features are available publicly for you to As new features are released to preview, some of them will be behind a feature flag. To enable the functionality in your environment, you must enable the feature flag on your subscription. These features are applied at the subscription level for all firewalls (VNet firewalls and SecureHub firewalls). -This article will be updated to reflect the features that are currently in preview with instructions to enable them. When the features move to General Availability (GA), they'll be available to all customers without the need to enable a feature flag. +This article will be updated to reflect the features that are currently in preview with instructions to enable them. When the features move to General Availability (GA), they're available to all customers without the need to enable a feature flag. ## Preview features For more information, see [Azure Firewall Explicit proxy (preview)](explicit-pro ### Resource Health (preview) With the Azure Firewall Resource Health check, you can now diagnose and get support for service problems that affect your Azure Firewall resource. Resource Health allows IT teams to receive proactive notifications on potential health degradations, and recommended mitigation actions per each health event type. The resource health is also available in a dedicated page in the Azure portal resource page.-This preview is automatically enabled on all firewalls and no action is required to enable this functionality. +Starting in August 2023, this preview will be automatically enabled on all firewalls and no action will be required to enable this functionality. For more information, see [Resource Health overview](../service-health/resource-health-overview.md). ## Next steps |
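For features gated behind a subscription-level flag, the registration flow the article describes generally follows the standard `az feature` pattern. A hedged sketch, where `<PreviewFeatureName>` is a placeholder rather than a real flag name from the article:

```azurecli
# Register a preview feature flag on the current subscription.
# <PreviewFeatureName> is a placeholder; use the flag name given for the specific preview.
az feature register --namespace Microsoft.Network --name <PreviewFeatureName>

# Poll until the state reports "Registered".
az feature show --namespace Microsoft.Network --name <PreviewFeatureName> --query properties.state

# Re-register the resource provider so the change takes effect.
az provider register --namespace Microsoft.Network
```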
hdinsight | Hdinsight Go Sdk Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-go-sdk-overview.md | |
hdinsight | Hdinsight Hadoop Create Linux Clusters With Secure Transfer Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-create-linux-clusters-with-secure-transfer-storage.md | description: Learn how to create HDInsight clusters with secure transfer enabled Previously updated : 06/08/2022 Last updated : 07/10/2023 # Apache Hadoop clusters with secure transfer storage accounts in Azure HDInsight |
hdinsight | Hdinsight Management Ip Addresses | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-management-ip-addresses.md | description: Learn which IP addresses you must allow inbound traffic from, in or Previously updated : 06/22/2022 Last updated : 07/10/2023 # HDInsight management IP addresses |
hdinsight | Hdinsight Supported Node Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-supported-node-configuration.md | keywords: vm sizes, cluster sizes, cluster configuration Previously updated : 06/22/2022 Last updated : 07/10/2023 # What are the default and recommended node configurations for Azure HDInsight? |
hdinsight | Apache Spark Troubleshoot Application Stops | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-troubleshoot-application-stops.md | Title: Apache Spark Streaming application stops after 24 days in Azure HDInsight description: An Apache Spark Streaming application stops after executing for 24 days and there are no errors in the log files. Previously updated : 06/08/2022 Last updated : 07/10/2023 # Scenario: Apache Spark Streaming application stops after executing for 24 days in Azure HDInsight |
hdinsight | Spark Best Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/spark-best-practices.md | Title: Apache Spark guidelines on Azure HDInsight description: Learn guidelines for using Apache Spark in Azure HDInsight. Previously updated : 06/22/2022 Last updated : 07/10/2023 # Apache Spark guidelines |
hpc-cache | Hpc Cache Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hpc-cache/hpc-cache-overview.md | Click the image above to watch a [short overview of Azure HPC Cache](https://azu ## Use cases -Azure HPC Cache enhances productivity best for workflows like these: +Azure HPC Cache enhances productivity best for workflows such as: -* Read-heavy file access workflow -* Data stored in NFS-accessible storage, Azure Blob, or both -* Compute farms of up to 75,000 CPU cores +* Read-heavy file access workflow. +* Data stored in NFS-accessible storage, Azure Blob, or both. +* Compute farms of up to 75,000 CPU cores. -Azure HPC Cache can be added to a wide variety of workflows across many industries. Any system where a large number of machines need to access a set of files at scale and with low latency will benefit from this service. The sections below give specific examples. +Azure HPC Cache can be added to a wide variety of workflows across many industries. Any system where a large number of machines need to access a set of files at scale and with low latency can benefit from this service. The following sections give specific examples. ### Visual effects (VFX) rendering In media and entertainment, Azure HPC Cache can speed up data access for time-critical rendering projects. VFX rendering workflows often require last-minute processing by large numbers of compute nodes. Data for these workflows are typically located in an on-premises NAS environment. Azure HPC Cache can cache that file data in the cloud to reduce latency and enhance flexibility for on-demand rendering. -Learn more about [High-performance computing for rendering.](https://azure.microsoft.com/solutions/high-performance-computing/rendering/) +For more information, see [High-performance computing for rendering](https://azure.microsoft.com/solutions/high-performance-computing/rendering/). ### Life sciences A research institute that wants to port its genomic analysis workflows into Azur Azure HPC Cache also can be leveraged to improve efficiency in tasks like secondary analysis, pharmacological simulation, or AI-driven image analysis. -Learn more about [High-performance computing for health and life sciences.](https://azure.microsoft.com/solutions/high-performance-computing/health-and-life-sciences/) +For more information, see [High-performance computing for health and life sciences](https://azure.microsoft.com/solutions/high-performance-computing/health-and-life-sciences/). ### Silicon design verification -The silicon design industry’s design verification workloads, known as “electronic design automation (EDA) tools” are compute-intensive tools that can be run on large-scale virtual machine compute grids. +The silicon design industry's design verification workloads, known as *electronic design automation (EDA) tools* are compute-intensive tools that can be run on large-scale virtual machine compute grids. Azure HPC Cache can provide on-cloud caching of design data, libraries, binaries, and rule database files from on-premises storage systems. This provides local-like response times for directory listings, metadata, and data reads, and eliminates the need for complex data migration, syncing, and copying operations. Azure HPC Cache also can be set up to cache output files being written by the co HPC Cache allows chip designers to scale EDA verification jobs to tens of thousands of cores with ease, and pay minimal attention to storage performance. 
-Learn more about [High-performance computing for silicon.](https://azure.microsoft.com/solutions/high-performance-computing/silicon/) +For more information, see [High-performance computing for silicon](https://azure.microsoft.com/solutions/high-performance-computing/silicon/). ### Financial services analytics An Azure HPC Cache deployment can help speed up quantitative analysis calculations, risk analysis workloads, and Monte Carlo simulations to give financial services companies better insight to make strategic decisions. -Learn more about [High-performance computing for financial services.](https://azure.microsoft.com/solutions/high-performance-computing/financial-services/) +For more information, see [High-performance computing for financial services](https://azure.microsoft.com/solutions/high-performance-computing/financial-services/). ## Region availability |
iot-central | Tutorial Industrial End To End | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/tutorial-industrial-end-to-end.md | Title: Tutorial - Explore an Azure IoT Central industrial scenario description: This tutorial shows you how to deploy an end-to-end industrial IoT solution by using IoT Edge, IoT Central, and Azure Data Explorer. Previously updated : 09/15/2022 Last updated : 07/10/2023 In this tutorial, you learn how to: ## Prerequisites -- Azure subscription.-- Local machine to run the **IoT Central Solution Builder** tool. Pre-built binaries are available for Windows and macOS.-- If you need to build the **IoT Central Solution Builder** tool instead of using one of the pre-built binaries, you need a local Git installation.+- Azure subscription that you access using a [work or school account](https://techcommunity.microsoft.com/t5/itops-talk-blog/what-s-the-difference-between-a-personal-microsoft-account-and-a/ba-p/2241897). Currently, you can't use a Microsoft account to deploy the solution with the **IoT Central Solution Builder** tool. +- Local machine to run the **IoT Central Solution Builder** tool. Prebuilt binaries are available for Windows and macOS. +- If you need to build the **IoT Central Solution Builder** tool instead of using one of the prebuilt binaries, you need a local Git installation. - Text editor. If you want to edit the configuration file to customize your solution. In this tutorial, you use the Azure CLI to create an app registration in Azure Active Directory: You can also use the IoT Central UI or CLI to manage the devices and gateways in ### Data export configuration -The solution uses the IoT Central data export capability to export OPC-UA data. Data export continuously sends filtered telemetry received from the OPC-UA server to an Azure Data Explorer environment. The filter ensures that only data from the OPC-UA is exported. The data export uses a [transformation](howto-transform-data-internally.md) to map the raw telemetry into a tabular structure suitable for Azure Data Explorer to ingest. The following snippet shows the transformation query: +The solution uses the IoT Central data export capability to export OPC-UA data. IoT Central data export continuously sends filtered telemetry received from the OPC-UA server to an Azure Data Explorer environment. The filter ensures that only data from the OPC-UA is exported. The data export uses a [transformation](howto-transform-data-internally.md) to map the raw telemetry into a tabular structure suitable for Azure Data Explorer to ingest. The following snippet shows the transformation query: ```jq { |
iot-hub-device-update | Device Update Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-limits.md | The following table shows the enforced throttles for operations that are availab |CreateOrUpdateDeployment| 7/min | |DeleteDeployment| 7/min | |ProcessSubgroupDeployment | 7/min|-+|Delete Update | 510/min*| +|Get File| 510/min*| +|Get Operation Status| 510/min*| +|Get Update| 510/min*| +|Import Update| 510/min*| +|List Files| 510/min*| +|List Names| 510/min*| +|List Providers| 510/min*| +|List Updates| 510/min*| +|List Versions| 510/min*| +|List Operation Statuses| 50/min| +++\* = the number of calls per minute is shared across all the listed operations ++Additionally, the number of concurrent asynchronous import and/or delete operations is limited to 10 total operation jobs. ## Next steps |
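Because the starred operations share one 510 calls/min budget, bulk scripts that list or import updates should back off when a call is rejected. A minimal retry sketch, assuming the `azure-iot` CLI extension is installed and using hypothetical account and instance names:

```azurecli
# Retry an 'az iot du' call with a growing backoff when the shared
# per-minute call budget is exhausted and the request is rejected.
for attempt in 1 2 3 4 5; do
    if az iot du update list --account myDuAccount --instance myDuInstance; then
        break
    fi
    sleep $((10 * attempt))   # back off 10s, 20s, 30s, ...
done
```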
load-balancer | Cross Region Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/cross-region-overview.md | Title: Cross-region load balancer (preview) + Title: Cross-region load balancer description: Overview of cross region load balancer tier for Azure Load Balancer. -# Cross-region load balancer (Preview) +# Cross-region (Global) Load Balancer Azure Standard Load Balancer supports cross-region load balancing enabling geo-redundant High Availability scenarios such as: Azure Standard Load Balancer supports cross-region load balancing enabling geo-r * [Client IP preservation](#client-ip-preservation) * [Build on existing load balancer](#build-cross-region-solution-on-existing-azure-load-balancer) solution with no learning curve -> [!IMPORTANT] -> Cross-region load balancer is currently in preview. -> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. -> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). - The frontend IP configuration of your cross-region load balancer is static and advertised across [most Azure regions](#participating-regions). :::image type="content" source="./media/cross-region-overview/cross-region-load-balancer.png" alt-text="Diagram of cross-region load balancer." border="true"::: Cross-region load balancer routes the traffic to the appropriate regional load b * NAT64 translation isn't supported at this time. The frontend and backend IPs must be of the same type (v4 or v6). -* UDP traffic isn't supported on Cross-region Load Balancer. +* UDP traffic isn't supported on Cross-region Load Balancer for IPv6. * Outbound rules aren't supported on Cross-region Load Balancer. For outbound connections, utilize [outbound rules](./outbound-rules.md) on the regional load balancer or [NAT gateway](../nat-gateway/nat-overview.md). |
load-balancer | Quickstart Load Balancer Standard Public Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-public-portal.md | During the creation of the load balancer, you'll configure: | Protocol | Select **TCP**. | | Port | Enter **80**. | | Backend port | Enter **80**. |- | Health probe | Select **Create new**. </br> In **Name**, enter **myHealthProbe**. </br> Select **TCP** in **Protocol**. </br> Leave the rest of the defaults, and select **OK**. | + | Health probe | Select **Create new**. </br> In **Name**, enter **myHealthProbe**. </br> Select **TCP** in **Protocol**. </br> Leave the rest of the defaults, and select **Save**. | | Session persistence | Select **None**. | | Idle timeout (minutes) | Enter or select **15**. | | TCP reset | Select **Enabled**. | |
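The portal steps above have a CLI equivalent. A sketch of creating the same health probe and load-balancing rule, assuming a Standard load balancer named `myLoadBalancer` already exists in `myResourceGroup` (names are placeholders):

```azurecli
# Create the TCP health probe described in the table.
az network lb probe create \
    --resource-group myResourceGroup \
    --lb-name myLoadBalancer \
    --name myHealthProbe \
    --protocol tcp \
    --port 80

# Create the load-balancing rule that uses the probe,
# matching the table's idle timeout and TCP reset settings.
az network lb rule create \
    --resource-group myResourceGroup \
    --lb-name myLoadBalancer \
    --name myHTTPRule \
    --protocol tcp \
    --frontend-port 80 \
    --backend-port 80 \
    --probe-name myHealthProbe \
    --idle-timeout 15 \
    --enable-tcp-reset true
```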
load-balancer | Tutorial Cross Region Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/tutorial-cross-region-cli.md | In this tutorial, you learn how to: If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. -> [!IMPORTANT] -> Cross-region Azure Load Balancer is currently in public preview. -> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. -> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). - ## Prerequisites - An Azure subscription. |
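As a companion to this tutorial, the following is a minimal sketch of creating a global-tier load balancer with the CLI, assuming `az network lb create` accepts a `--tier Global` option as in the tutorial; resource names are placeholders, and the deployment must land in a participating home region:

```azurecli
# Create a resource group in a home region that supports global-tier load balancers.
az group create --name myResourceGroup --location westus

# Create the cross-region (global tier) load balancer with a frontend and backend pool.
# The backend pool later receives the frontends of the regional load balancers.
az network lb create \
    --resource-group myResourceGroup \
    --name myGlobalLoadBalancer \
    --sku Standard \
    --tier Global \
    --frontend-ip-name myGlobalFrontEnd \
    --backend-pool-name myGlobalBackEndPool
```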
load-balancer | Tutorial Cross Region Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/tutorial-cross-region-portal.md | In this tutorial, you learn how to: If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. -> [!IMPORTANT] -> Cross-region Azure Load Balancer is currently in public preview. -> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. -> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). - ## Prerequisites - An Azure subscription. |
load-balancer | Tutorial Cross Region Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/tutorial-cross-region-powershell.md | In this tutorial, you learn how to: If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. -> [!IMPORTANT] -> Cross-region Azure Load Balancer is currently in public preview. -> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. -> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). - ## Prerequisites - An Azure subscription. |
load-balancer | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/whats-new.md | You can also find the latest Azure Load Balancer updates and subscribe to the RS | Type |Name |Description |Date added | | ||||+| Feature | [Azure's cross-region Load Balancer is now generally available](https://azure.microsoft.com/updates/azure-s-crossregion-load-balancer-is-now-generally-available/) | Azure Load Balancer's Global tier is a cloud-native global network load balancing solution. With cross-region Load Balancer, you can distribute traffic across multiple Azure regions with ultra-low latency and high performance. Azure cross-region Load Balancer provides customers a static globally anycast IP address. Through this global IP address, you can easily add or remove regional deployments without interruption. Learn more about [cross-region load balancer](cross-region-overview.md). | July 2023 | | Feature | [Inbound ICMPv6 pings and traceroute are now supported on Azure Load Balancer (General Availability)](https://azure.microsoft.com/updates/general-availability-inbound-icmpv6-pings-and-traceroute-are-now-supported-on-azure-load-balancer/) | Azure Load Balancer now supports ICMPv6 pings to its frontend and inbound traceroute support to both IPv4 and IPv6 frontends. Learn more about [how to test reachability of your load balancer](load-balancer-test-frontend-reachability.md). | June 2023 | | Feature | [Inbound ICMPv4 pings are now supported on Azure Load Balancer (General Availability)](https://azure.microsoft.com/updates/general-availability-inbound-icmpv4-pings-are-now-supported-on-azure-load-balancer/) | Azure Load Balancer now supports ICMPv4 pings to its frontend, enabling the ability to test reachability of your load balancer. Learn more about [how to test reachability of your load balancer](load-balancer-test-frontend-reachability.md). | May 2023 | | SKU | [Basic Load Balancer is retiring on September 30, 2025](https://azure.microsoft.com/updates/azure-basic-load-balancer-will-be-retired-on-30-september-2025-upgrade-to-standard-load-balancer/) | Basic Load Balancer will retire on 30 September 2025. Make sure to [migrate to Standard SKU](load-balancer-basic-upgrade-guidance.md) before this date. | September 2022 | |
machine-learning | How To Create Compute Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-compute-instance.md | Prior to a scheduled shutdown, users will see a notification alerting them that ```python from azure.ai.ml.entities import ComputeInstance, ComputeSchedules, ComputeStartStopSchedule, RecurrenceTrigger, RecurrencePattern-from azure.ai.ml import MLClient from azure.ai.ml.constants import TimeZone+from azure.ai.ml import MLClient from azure.identity import DefaultAzureCredential -subscription_id = "sub-id" -resource_group = "rg-name" -workspace = "ws-name" -# get a handle to the workspace +# authenticate +credential = DefaultAzureCredential() ++# Get a handle to the workspace ml_client = MLClient(- DefaultAzureCredential(), subscription_id, resource_group, workspace + credential=credential, + subscription_id="<SUBSCRIPTION_ID>", + resource_group_name="<RESOURCE_GROUP>", + workspace_name="<AML_WORKSPACE_NAME>", ) ci_minimal_name = "ci-name"+ci_start_time = "2023-06-21T11:47:00" #specify your start time in the format yyyy-mm-ddThh:mm:ss -rec_trigger = RecurrenceTrigger(start_time="yyyy-mm-ddThh:mm:ss", time_zone=TimeZone.INDIA_STANDARD_TIME, frequency="week", interval=1, schedule=RecurrencePattern(week_days=["Friday"], hours=15, minutes=[30])) +rec_trigger = RecurrenceTrigger(start_time=ci_start_time, time_zone=TimeZone.INDIA_STANDARD_TIME, frequency="week", interval=1, schedule=RecurrencePattern(week_days=["Friday"], hours=15, minutes=[30])) myschedule = ComputeStartStopSchedule(trigger=rec_trigger, action="start") com_sch = ComputeSchedules(compute_start_stop=[myschedule]) |
machine-learning | How To Deploy Models From Huggingface | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-models-from-huggingface.md | cat <<EOF > $scoring_file } EOF az ml online-endpoint invoke --name $endpoint_name --request-file $scoring_file-``` +``` ++## Hugging Face model example code ++See [Hugging Face model example code](https://github.com/Azure/azureml-examples/tree/main/sdk/python/foundation-models/huggingface/inference) for various scenarios including token classification, translation, question answering, and zero-shot classification. ## Troubleshooting: Deployment errors and unsupported models HuggingFace hub has thousands of models with hundreds being updated each day. On [Gated models](https://huggingface.co/docs/hub/models-gated) require users to agree to share their contact information and accept the model owners' terms and conditions in order to access the model. Attempting to deploy such models will fail with a `KeyError`. ### Models that need to run remote code-Models typically use code from the transformers SDK but some models run code from the model repo. Such models need to set the parameter `trust_remote_code` to `True`. Such models are not supported from keeping security in mind. Attempting to deploy such models will fail with the following error: `ValueError: Loading <model> requires you to execute the configuration file in that repo on your local machine. Make sure you have read the code there to avoid malicious use, then set the option trust_remote_code=True to remove this error.` +Models typically use code from the transformers SDK but some models run code from the model repo. Such models need to set the parameter `trust_remote_code` to `True`. Follow this link to learn more about using [remote code](https://huggingface.co/docs/transformers/custom_models#using-a-model-with-custom-code). Such models aren't supported, in order to maintain security. Attempting to deploy such models will fail with the following error: `ValueError: Loading <model> requires you to execute the configuration file in that repo on your local machine. Make sure you have read the code there to avoid malicious use, then set the option trust_remote_code=True to remove this error.` ### Models with incorrect tokenizers Incorrectly specified or missing tokenizer in the model package can result in `OSError: Can't load tokenizer for <model>` error. Since the model weights aren't stored in the `HuggingFace` registry, you cannot **What is a community registry?** Community registries are Azure Machine Learning registries created by trusted Azure Machine Learning partners and available to all Azure Machine Learning users. +**Where can users submit questions and concerns regarding Hugging Face within Azure Machine Learning?** +Submit your questions in the [Azure Machine Learning discussion forum](https://discuss.huggingface.co/t/about-the-azure-machine-learning-category/40677). + ## Learn more Learn [how to use foundation models in Azure Machine Learning](./how-to-use-foundation-models.md) for fine-tuning, evaluation and deployment using Azure Machine Learning studio UI or code based methods. |
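When a Hugging Face deployment fails with one of the errors above, the container logs usually show the underlying `KeyError`, `ValueError`, or `OSError`. A sketch of pulling recent logs with the CLI; the endpoint, deployment, and workspace names are placeholders:

```azurecli
# Fetch recent logs from the failing online deployment to find the underlying error.
az ml online-deployment get-logs \
    --name mydeployment \
    --endpoint-name myendpoint \
    --resource-group myResourceGroup \
    --workspace-name myWorkspace \
    --lines 100
```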
mysql | Concepts Read Replicas | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-read-replicas.md | If a source server has no existing replica servers, the source first restarts to When you start the create replica workflow, a blank Azure Database for MySQL server is created. The new server is filled with the data that was on the source server. The creation time depends on the amount of data on the source and the time since the last weekly full backup. The time can range from a few minutes to several hours. > [!NOTE] -> Read replicas are created with the same server configuration as the source. The replica server configuration can be changed after it has been created. The replica server is always created in the same resource group, same location and same subscription as the source server. If you want to create a replica server to a different resource group or different subscription, you can [move the replica server](../../azure-resource-manager/management/move-resource-group-and-subscription.md) after creation. It is recommended that the replica server's configuration should be kept at equal or greater values than the source to ensure the replica is able to keep up with the source. +> Read replicas are created with the same server configuration as the source. The replica server configuration can be changed after it has been created. The replica server is always created in the same resource group and same subscription as the source server. If you want to create a replica server in a different resource group or different subscription, you can [move the replica server](../../azure-resource-manager/management/move-resource-group-and-subscription.md) after creation. It's recommended that the replica server's configuration be kept at equal or greater values than the source to ensure the replica is able to keep up with the source. Learn how to [create a read replica in the Azure portal](how-to-read-replicas-portal.md). |
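For reference, creating a read replica of a flexible server is a single CLI call. A sketch with placeholder names; as the article notes, the replica is created in the source server's resource group and subscription:

```azurecli
# Create a read replica of an existing Azure Database for MySQL flexible server.
az mysql flexible-server replica create \
    --replica-name mydemoreplica \
    --source-server mydemoserver \
    --resource-group myResourceGroup
```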
mysql | Migrate Single Flexible In Place Auto Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/migrate/migrate-single-flexible-in-place-auto-migration.md | + + Title: "In-place automigration from Azure Database for MySQL – Single Server to Flexible Server" +description: This tutorial describes how to configure notifications, review migration details and FAQs for an Azure Database for MySQL Single Server instance scheduled for in-place automigration to Flexible Server. ++ Last updated : 07/10/2023+++++ - mvc + - devx-track-azurecli + - mode-api ++# In-place automigration from Azure Database for MySQL – Single Server to Flexible Server +++In-place automigration from Azure Database for MySQL – Single Server to Flexible Server is a service-initiated in-place migration during a planned maintenance window for Single Server database workloads with Basic or General Purpose SKU, data storage used < 10 GiB and no complex features enabled. The eligible servers are identified by the service and are sent an advance notification detailing steps to review migration details. ++The in-place migration provides a highly resilient and self-healing offline migration experience during a planned maintenance window, with less than 5 minutes of downtime. It uses backup and restore technology for faster migration time. This migration removes the overhead of manually migrating your server and ensures you can take advantage of the benefits of Flexible Server, including better price & performance, granular control over database configuration, and custom maintenance windows. The key phases of the migration are: ++* Target Flexible Server is deployed, inheriting all feature set and properties (including server parameters and firewall rules) from source Single Server. Source Single Server is set to read-only and backup from source Single Server is copied to the target Flexible Server. +* DNS switch and cutover are performed successfully within the planned maintenance window with minimal downtime, allowing maintenance of the same connection string post-migration. Client applications seamlessly connect to the target flexible server without any user-driven manual updates. In addition to both connection string formats (Single and Flexible Server) being supported on migrated Flexible Server, both username formats – username@server_name and username – are also supported on the migrated Flexible Server. +* The migrated Flexible Server is online and can now be managed via Azure portal/CLI. The stopped Single Server is deleted after the number of days set as its backup retention period. ++> [!NOTE] +> In-place migration is only for Single Server database workloads with Basic or GP SKU, data storage used < 10 GiB and no complex features enabled. All other Single Server workloads are recommended to use user-initiated migration tooling offered by Azure - Azure DMS, Azure MySQL Import to migrate. ++## Configure migration alerts and review migration schedule ++Servers eligible for in-place automigration are sent an advance notification by the service. ++You can check and configure automigration notifications in the following ways: ++* Subscription owners for Single Servers scheduled for automigration receive an email notification. +* Configure service health alerts to receive in-place migration schedule and progress notifications via email/SMS by following steps [here](../single-server/concepts-planned-maintenance-notification.md#to-receive-planned-maintenance-notification). 
+* Check the in-place migration notification on the Azure portal by following steps [here](../single-server/concepts-planned-maintenance-notification.md#check-planned-maintenance-notification-from-azure-portal). ++Once you receive the in-place automigration notification, you can review your migration schedule in the following ways: ++> [!NOTE] +> The migration schedule will be locked 7 days prior to the scheduled migration window, after which you'll be unable to reschedule. ++* The Single Server overview page for your instance displays a portal banner with information about your migration schedule. +* For Single Servers scheduled for automigration, a new **Migration** blade is available on the portal. You can review the migration schedule by navigating to the Migration blade of your Single Server instance. +* If you wish to defer the migration, you can defer by a month at a time by navigating to the Migration blade of your Single Server instance on the Azure portal and rescheduling the migration by selecting another migration window within a month. +* If your Single Server has General Purpose SKU, you also have the option to enable High Availability when reviewing the migration schedule. As High Availability can only be enabled during create time for a MySQL Flexible Server, it's highly recommended that you enable this feature when reviewing the migration schedule. ++## How is the target MySQL Flexible Server auto-provisioned? ++* The compute tier and SKU for the target Flexible Server are provisioned based on the source Single Server's pricing tier and vCores, as detailed in the following table. ++ | Single Server Pricing Tier | Single Server VCores | Flexible Server Tier | Flexible Server SKU Name | + | - | - |:-:|:-:| + | Basic | 1 | Burstable | Standard_B1s | + | Basic | 2 | Burstable | Standard_B2s | + | General Purpose | 4 | GeneralPurpose | Standard_D4ds_v4 | + | General Purpose | 8 | GeneralPurpose | Standard_D8ds_v4 | + | General Purpose | 16 | GeneralPurpose | Standard_D16ds_v4 | + | General Purpose | 32 | GeneralPurpose | Standard_D32ds_v4 | + | General Purpose | 64 | GeneralPurpose | Standard_D64ds_v4 | + | Memory Optimized | 4 | MemoryOptimized | Standard_E4ds_v4 | + | Memory Optimized | 8 | MemoryOptimized | Standard_E8ds_v4 | + | Memory Optimized | 16 | MemoryOptimized | Standard_E16ds_v4 | + | Memory Optimized | 32 | MemoryOptimized | Standard_E32ds_v4 | ++* The MySQL version, region, storage size\*, subscription, and resource group for the target Flexible Server are the same as those of the source Single Server. +\*For Single Servers with less than 20 GiB storage, the storage size is set to 20 GiB, as that is the minimum storage limit on Azure Database for MySQL - Flexible Server. +* Both username formats – username@server_name (Single Server) and username (Flexible Server) – are supported on the migrated Flexible Server. +* Both connection string formats – Single Server and Flexible Server – are supported on the migrated Flexible Server. ++## Post-migration steps ++After the in-place migration completes successfully, copy the following properties from the source Single Server to the target Flexible Server: ++* Monitoring page settings (Alerts, Metrics, and Diagnostic settings) +* Any Terraform/CLI scripts you host to manage your Single Server instance should be updated with Flexible Server references. ++## Frequently Asked Questions (FAQs) ++**Q. 
Why am I being auto-migrated?** ++**A.** Your Azure Database for MySQL - Single Server instance is eligible for in-place migration to our flagship offering Azure Database for MySQL - Flexible Server. This in-place migration removes the overhead of manually migrating your server and ensures you can take advantage of the benefits of Flexible Server, including better price & performance, granular control over database configuration, and custom maintenance windows. ++**Q. How does the automigration take place? What does it migrate?** ++**A.** The Flexible Server is provisioned to match the same vCores and storage as that of your Single Server. Next, the source Single Server is put into a stopped state, and a data file snapshot is taken and copied to the target Flexible Server. The DNS switch is performed to route all existing connections to the target, and the target Flexible Server is brought online. The automigration migrates the entire server's data files (including schema, data, logins) in addition to server parameters (all modified server parameters on source are copied to target; unmodified server parameters take up the default value defined by Flexible Server) and firewall rules. This is an offline migration where you see downtime of up to 5 minutes or less. ++**Q. How can I set up or view in-place migration alerts?** ++**A.** ++* Configure service health alerts to receive in-place migration schedule and progress notifications via email/SMS by following steps [here](../single-server/concepts-planned-maintenance-notification.md#to-receive-planned-maintenance-notification). +* Check the in-place migration notification on the Azure portal by following steps [here](../single-server/concepts-planned-maintenance-notification.md#check-planned-maintenance-notification-from-azure-portal). ++**Q. How can I defer the scheduled migration?** ++**A.** You can review the migration schedule by navigating to the Migration blade of your Single Server instance. If you wish to defer the migration, you can defer it by up to a month by navigating to the Migration blade of your Single Server instance on the Azure portal and rescheduling the migration by selecting another migration window within a month. Note that the migration details will be locked 7 days prior to the scheduled migration window, after which you're unable to reschedule. This in-place migration can be deferred monthly until 16 September 2024. ++**Q. What are some post-migration activities I need to perform?** ++**A.** ++* Monitoring page settings (Alerts, Metrics, and Diagnostic settings) +* Any Terraform/CLI scripts you host to manage your Single Server instance should be updated with Flexible Server references. ++**Q. What username and connection string would be supported for the migrated Flexible Server?** ++**A.** Both username formats - username@server_name (Single Server format) and username (Flexible Server format) - will be supported for the migrated Flexible Server, so you aren't required to update them to maintain application continuity post-migration. Additionally, both connection string formats (Single and Flexible Server format) will also be supported for the migrated Flexible Server. ++**Q. How do I enable HA (High Availability) for my auto-migrated server?** ++**A.** By default, automigration sets up migration to a non-HA instance. As HA can only be enabled at server-create time, you should enable HA before the scheduled automigration using the automigration schedule edit option on the portal. 
HA can only be enabled for General Purpose/Memory Optimized SKUs on the target Flexible Server, as Basic-to-Burstable SKU migration doesn't support HA configuration. ++**Q. I see a pricing difference on my potential move from MySQL Basic Single Server to MySQL Flexible Server?** ++**A.** A few servers may see a small price increase after migration (estimated costs can be seen by selecting the automigration schedule edit option on the portal), because the minimum storage limit on the two offerings differs (5 GiB on Single Server; 20 GiB on Flexible Server) and the storage cost ($0.10 on Single Server; $0.115 on Flexible Server) is slightly higher on Flexible Server. For impacted servers, this price increase in Flexible Server provides better throughput and performance compared to Single Server. ++## Next steps ++* [Manage an Azure Database for MySQL - Flexible Server using the Azure portal](../flexible-server/how-to-manage-server-portal.md) |
mysql | Migrate Single Flexible Mysql Import Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/migrate/migrate-single-flexible-mysql-import-cli.md | The [Azure Cloud Shell](../../cloud-shell/overview.md) is a free interactive she To open the Cloud Shell, select **Try it** from the upper right corner of a code block. You can also open Cloud Shell in a separate browser tab by going to [https://shell.azure.com/bash](https://shell.azure.com/bash). Select **Copy** to copy the blocks of code, paste it into the Cloud Shell, and select **Enter** to run it. -If you prefer to install and use the CLI locally, this tutorial requires Azure CLI version 2.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). +If you prefer to install and use the CLI locally, this tutorial requires Azure CLI version 2.50.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). ## Prerequisites |
mysql | Whats Happening To Mysql Single Server | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/whats-happening-to-mysql-single-server.md | Learn how to migrate from Azure Database for MySQL - Single Server to Azure Data For more information on migrating from Single Server to Flexible Server using other migration tools, visit [Select the right tools for migration to Azure Database for MySQL](../migrate/how-to-decide-on-right-migration-tools.md). +> [!NOTE] +> In-place auto-migration from Azure Database for MySQL ΓÇô Single Server to Flexible Server is a service-initiated in-place migration during planned maintenance window for Single Server database workloads with Basic or General Purpose SKU, data storage used < 10 GiB and no complex features enabled. The eligible servers are identified by the service and are sent an advance notification detailing steps to review migration details. All other Single Server workloads are recommended to use user-initiated migration tooling offered by Azure - Azure DMS, Azure MySQL Import to migrate. Learn more about in-place auto-migration [here](../migrate/migrate-single-flexible-in-place-auto-migration.md). + ## Migration Eligibility To upgrade to Azure Database for MySQL Flexible Server, it's important to know when you're eligible to migrate your single server. Find the migration eligibility criteria in the below table. | Single Server configuration not supported in Flexible Server | How and when to migrate? | ||--|-| Single servers with Private Link enabled | Private Link is on the road map for this year. You can also choose to migrate now and perform wNet injection via a point-in-time restore operation to move to private access network connectivity method. | -| Single servers with Cross-Region Read Replicas enabled | Cross-Region Read Replicas for flexible server (for paired region) is in private preview, and you can start migrating your single server. Cross-Region Read Replicas for flexible server (for any cross-region) is on the road map for later this year, post which you can migrate your single server. | +| Single servers with Private Link enabled | Private Link for flexible server is available now, and you can start migrating your single server. | +| Single servers with Cross-Region Read Replicas enabled | Cross-Region Read Replicas for flexible server (for paired region) is available now, and you can start migrating your single server. | | Single servers with Query Store enabled | You are eligible to migrate and you can configure slow query logs on the target flexible server by following steps [here](https://learn.microsoft.com/azure/mysql/flexible-server/tutorial-query-performance-insights#configure-slow-query-logs-by-using-the-azure-portal). You can then view query insights by using [workbooks template](https://learn.microsoft.com/azure/mysql/flexible-server/tutorial-query-performance-insights#view-query-insights-by-using-workbooks). | | Single server deployed in regions where flexible server isn't supported (Learn more about regions [here](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?regions=all&products=mysql)). | Azure Database Migration Service (classic) supports cross-region migration. Deploy your target flexible server in a suitable region and migrate using DMS (classic). | To upgrade to Azure Database for MySQL Flexible Server, it's important to know w **Q. 
I have private link configured for my single server, and this feature is not currently supported in Flexible Server. How do I migrate?** -**A.** Flexible Server support for private link is on our road map as our highest priority. Launch of the feature is planned in Q2 2023 and you have ample time to initiate your Single Server to Flexible Server migrations with private link configured. You can also choose to migrate now and perform VNet injection via a point-in-time restore operation to move to private access network connectivity method. +**A.** Private Link for flexible server is available now, and you can start migrating your single server. **Q. I have cross-region read replicas configured for my single server, and this feature is not currently supported in Flexible Server. How do I migrate?** -**A.** Flexible Server support for cross-region read replicas is on our roadmap as our highest priority. Cross-Region Read Replicas for flexible server (for paired region) is in private preview, and you can start migrating your single server. Cross-Region Read Replicas for flexible server (for any cross-region) is on the road map for later this year, post, which you can migrate your single server. +**A.** Cross-Region Read Replicas for flexible server (for paired region) is available now, and you can start migrating your single server. **Q. I have TLS v1.0/1.1 configured for my v8.0 single server, and this feature is not currently supported in Flexible Server. How do I migrate?** |
network-watcher | Diagnose Vm Network Traffic Filtering Problem | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/diagnose-vm-network-traffic-filtering-problem.md | Azure allows and denies network traffic to and from a virtual machine based on i In this quickstart, you deploy a virtual machine and use Network Watcher [IP flow verify](network-watcher-ip-flow-verify-overview.md) to test the connectivity to and from different IP addresses. Using the IP flow verify results, you determine the cause of a communication failure and learn how you can resolve it. ++If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. + ## Prerequisites -- An Azure account with an active subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.+- An Azure account with an active subscription ## Sign in to Azure Sign in to the [Azure portal](https://portal.azure.com) with your Azure account. | Public inbound ports | Select **None**. | > [!NOTE]- > Azure will create a default network security group for **myVm** virtual machine (because you selected **Basic** NIC network security group). You will use this default network security group to test network communication to and from the virtual machine in the next section. + > Azure will create a default network security group for **myVM** virtual machine (because you selected **Basic** NIC network security group). You will use this default network security group to test network communication to and from the virtual machine in the next section. 1. Select **Review + create**. In this section, you use the IP flow verify capability of Network Watcher to tes | Setting | Value | ||-| | **Target resource** | |- | Virtual machine | Select **myVm** virtual machine. | - | Network interface | Select the network interface of **myVm**. When you use the Azure portal to create a virtual machine, the portal names the network interface using the virtual machine's name and a random number (for example myvm36). | + | Virtual machine | Select **myVM** virtual machine. | + | Network interface | Select the network interface of **myVM**. When you use the Azure portal to create a virtual machine, the portal names the network interface using the virtual machine's name and a random number (for example myvm36). | | **Packet details** | | | Protocol | Select **TCP**. | | Direction | Select **Outbound**. | In this section, you use the IP flow verify capability of Network Watcher to tes :::image type="content" source="./media/diagnose-vm-network-traffic-filtering-problem/ip-flow-verify-first-test-results.png" alt-text="Screenshot shows the result of IP flow verify to IP address 13.107.21.200." lightbox="./media/diagnose-vm-network-traffic-filtering-problem/ip-flow-verify-first-test-results.png"::: -1. Change **Remote IP address** to **10.0.0.10** and repeat the test by selecting **Verify IP flow** button again. The result of the second test indicates that access is allowed to **10.0.0.10** because of the default security rule **AllowVnetOutBound**. +1. Change **Remote IP address** to **10.0.1.10**, which is a private IP address in **myVNet** address space. Then, repeat the test by selecting **Verify IP flow** button again. The result of the second test indicates that access is allowed to **10.0.1.10** because of the default security rule **AllowVnetOutBound**. 
- :::image type="content" source="./media/diagnose-vm-network-traffic-filtering-problem/ip-flow-verify-second-test-results.png" alt-text="Screenshot shows the result of IP flow verify to IP address 10.0.0.10." lightbox="./media/diagnose-vm-network-traffic-filtering-problem/ip-flow-verify-second-test-results.png"::: + :::image type="content" source="./media/diagnose-vm-network-traffic-filtering-problem/ip-flow-verify-second-test-results.png" alt-text="Screenshot shows the result of IP flow verify to IP address 10.0.1.10." lightbox="./media/diagnose-vm-network-traffic-filtering-problem/ip-flow-verify-second-test-results.png"::: 1. Change **Remote IP address** to **10.10.10.10** and repeat the test. The result of the third test indicates that access is denied to **10.10.10.10** because of the default security rule **DenyAllOutBound**. In this section, you use the IP flow verify capability of Network Watcher to tes ## View details of a security rule -To determine why the rules in the previous section allow or deny communication, review the effective security rules for the network interface in **myVm** virtual machine. +To determine why the rules in the previous section allow or deny communication, review the effective security rules for the network interface in **myVM** virtual machine. 1. Under **Network diagnostic tools** in **Network Watcher**, select **Effective security rules**. To determine why the rules in the previous section allow or deny communication, | Virtual machine | Select **myVM**. | > [!NOTE]- > **myVm** virtual machine has one network interface which will be selected once you select **myVm**. If your virtual machine has more than one network interface, choose the one that you want to see its effective security rules. + > **myVM** virtual machine has one network interface that will be selected once you select **myVM**. If your virtual machine has more than one network interface, choose the one that you want to see its effective security rules. :::image type="content" source="./media/diagnose-vm-network-traffic-filtering-problem/effective-security-rules.png" alt-text="Screenshot of Effective security rules in Network Watcher." lightbox="./media/diagnose-vm-network-traffic-filtering-problem/effective-security-rules.png" ::: When no longer needed, delete the resource group and all of the resources it con In this quickstart, you created a virtual machine and diagnosed inbound and outbound network traffic filters. You learned that network security group rules allow or deny traffic to and from a virtual machine. Learn more about [network security groups](../virtual-network/network-security-groups-overview.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json) and how to [create security rules](../virtual-network/manage-network-security-group.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json#create-a-security-rule). -Even with the proper network traffic filters in place, communication to a virtual machine can still fail, due to routing configuration. To learn how to diagnose virtual machine routing problems, see [Diagnose a virtual machine network routing problem](diagnose-vm-network-routing-problem.md). To diagnose outbound routing, latency, and traffic filtering problems with one tool, see [Troubleshoot connections with Azure Network Watcher](network-watcher-connectivity-portal.md). +Even with the proper network traffic filters in place, communication to a virtual machine can still fail due to routing configuration. 
To learn how to diagnose virtual machine routing problems, see [Diagnose a virtual machine network routing problem](diagnose-vm-network-routing-problem.md). To diagnose outbound routing, latency, and traffic filtering problems with one tool, see [Troubleshoot connections with Azure Network Watcher](network-watcher-connectivity-portal.md). |
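The portal checks in this quickstart map to the `az network watcher test-ip-flow` command. A sketch reproducing the first outbound test, with placeholder resource names and a placeholder local IP for the VM's NIC:

```azurecli
# Reproduce the first IP flow verify test: outbound TCP from the VM to 13.107.21.200:80.
az network watcher test-ip-flow \
    --vm myVM \
    --resource-group myResourceGroup \
    --direction Outbound \
    --protocol TCP \
    --local 10.0.0.4:60000 \
    --remote 13.107.21.200:80
```

The output names the security rule that allowed or denied the flow, which is the same information the portal's result pane shows.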
network-watcher | Network Watcher Nsg Flow Logging Azure Resource Manager | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-nsg-flow-logging-azure-resource-manager.md | -In this article, you learn how to manage NSG flow logs programmatically using an Azure Resource Manager template and Azure PowerShell. +In this article, you learn how to manage NSG flow logs programmatically using an Azure Resource Manager template and Azure PowerShell. You can learn how to manage an NSG flow log using the [Azure portal](nsg-flow-logging.md), [PowerShell](network-watcher-nsg-flow-logging-powershell.md), [Azure CLI](network-watcher-nsg-flow-logging-cli.md), or [REST API](network-watcher-nsg-flow-logging-rest.md). An [Azure Resource Manager template](../azure-resource-manager/templates/overview.md) is a JavaScript Object Notation (JSON) file that defines the infrastructure and configuration for your project using declarative syntax. |
network-watcher | Network Watcher Nsg Flow Logging Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-nsg-flow-logging-cli.md | -In this article, you learn how to create, change, disable, or delete an NSG flow log using the Azure CLI. +In this article, you learn how to create, change, disable, or delete an NSG flow log using the Azure CLI. You can learn how to manage an NSG flow log using the [Azure portal](nsg-flow-logging.md), [PowerShell](network-watcher-nsg-flow-logging-powershell.md), [REST API](network-watcher-nsg-flow-logging-rest.md), or [ARM template](network-watcher-nsg-flow-logging-azure-resource-manager.md). ## Prerequisites |
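For orientation, the core operation this article walks through looks like the following in the CLI. A minimal sketch with placeholder names, assuming the NSG, the storage account, and a Network Watcher instance in the same region already exist:

```azurecli
# Create an NSG flow log that writes to a storage account in the same region as the NSG.
az network watcher flow-log create \
    --location eastus \
    --name myFlowLog \
    --resource-group NetworkWatcherRG \
    --nsg myNSG \
    --storage-account myStorageAccount
```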
network-watcher | Network Watcher Nsg Flow Logging Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-nsg-flow-logging-powershell.md | -In this article, you learn how to create, change, disable, or delete an NSG flow log using Azure PowerShell. +In this article, you learn how to create, change, disable, or delete an NSG flow log using Azure PowerShell. You can learn how to manage an NSG flow log using the [Azure portal](nsg-flow-logging.md), [Azure CLI](network-watcher-nsg-flow-logging-cli.md), [REST API](network-watcher-nsg-flow-logging-rest.md), or [ARM template](network-watcher-nsg-flow-logging-azure-resource-manager.md). ## Prerequisites |
network-watcher | Network Watcher Nsg Flow Logging Rest | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-nsg-flow-logging-rest.md | -This article helps use the REST API to enable, disable, and query flow logs using. +This article shows you how to use the REST API to enable, disable, and query NSG flow logs. You can learn how to manage an NSG flow log using the [Azure portal](nsg-flow-logging.md), [PowerShell](network-watcher-nsg-flow-logging-powershell.md), [Azure CLI](network-watcher-nsg-flow-logging-cli.md), or [ARM template](network-watcher-nsg-flow-logging-azure-resource-manager.md). -You learn how to: +In this article, you learn how to: > [!div class="checklist"] > * Enable flow logs (Version 2) |
network-watcher | Nsg Flow Logging | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/nsg-flow-logging.md | -In this article, you learn how to create, change, disable, or delete an NSG flow log using the Azure portal. +In this article, you learn how to create, change, disable, or delete an NSG flow log using the Azure portal. You can learn how to manage an NSG flow log using [PowerShell](network-watcher-nsg-flow-logging-powershell.md), [Azure CLI](network-watcher-nsg-flow-logging-cli.md), [REST API](network-watcher-nsg-flow-logging-rest.md), or [ARM template](network-watcher-nsg-flow-logging-azure-resource-manager.md). ## Prerequisites |
networking | Networking Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/fundamentals/networking-overview.md | -The networking services in Azure provide a variety of networking capabilities that can be used together or separately. Select any of the following key capabilities to learn more about them: +The networking services in Azure provide various networking capabilities that can be used together or separately. Select any of the following key capabilities to learn more about them: - [**Connectivity services**](#connect): Connect Azure resources and on-premises resources using any or a combination of these networking services in Azure - Virtual Network (VNet), Virtual WAN, ExpressRoute, VPN Gateway, Virtual network NAT Gateway, Azure DNS, Peering service, Azure Virtual Network Manager, Route Server, and Azure Bastion. - [**Application protection services**](#protect): Protect your applications using any or a combination of these networking services in Azure - Load Balancer, Private Link, DDoS protection, Firewall, Network Security Groups, Web Application Firewall, and Virtual Network Endpoints. - [**Application delivery services**](#deliver): Deliver applications in the Azure network using any or a combination of these networking services in Azure - Content Delivery Network (CDN), Azure Front Door Service, Traffic Manager, Application Gateway, Internet Analyzer, and Load Balancer. Azure Virtual Network (VNet) is the fundamental building block for your private For more information, see [What is Azure Virtual Network?](../../virtual-network/virtual-networks-overview.md) +### <a name="avnm"></a>Azure Virtual Network Manager ++Azure Virtual Network Manager is a management service that enables you to group, configure, deploy, and manage virtual networks globally across subscriptions. With Virtual Network Manager, you can define [network groups](../../virtual-network-manager/concept-network-groups.md) to identify and logically segment your virtual networks. Then you can determine the [connectivity](../../virtual-network-manager/concept-connectivity-configuration.md) and [security configurations](../../virtual-network-manager/concept-security-admins.md) you want and apply them across all the selected virtual networks in network groups at once. For more information, see [What is Azure Virtual Network Manager?](../../virtual-network-manager/overview.md). ++ ### <a name="expressroute"></a>ExpressRoute ExpressRoute enables you to extend your on-premises networks into the Microsoft cloud over a private connection facilitated by a connectivity provider. This connection is private. Traffic doesn't go over the internet. With ExpressRoute, you can establish connections to Microsoft cloud services, such as Microsoft Azure, Microsoft 365, and Dynamics 365. For more information, see [What is ExpressRoute?](../../expressroute/expressroute-introduction.md) Azure DNS is a hosting service for DNS domains that provides name resolution by ### <a name="bastion"></a>Azure Bastion -Azure Bastion is a service that you can deploy to let you connect to a virtual machine using your browser and the Azure portal, or via the native SSH or RDP client already installed on your local computer. The Azure Bastion service is a fully platform-managed PaaS service that you provision inside your virtual network. It provides secure and seamless RDP/SSH connectivity to your virtual machines directly from the Azure portal over TLS. 
When you connect via Azure Bastion, your virtual machines don't need a public IP address, agent, or special client software. For more information, see [What is Azure Bastion?](../../bastion/bastion-overview.md) +Azure Bastion is a service that you can deploy to let you connect to a virtual machine using your browser and the Azure portal, or via the native SSH or RDP client already installed on your local computer. The Azure Bastion service is a fully platform-managed PaaS service that you deploy inside your virtual network. It provides secure and seamless RDP/SSH connectivity to your virtual machines directly from the Azure portal over TLS. When you connect via Azure Bastion, your virtual machines don't need a public IP address, agent, or special client software. For more information, see [What is Azure Bastion?](../../bastion/bastion-overview.md) :::image type="content" source="../../bastion/media/bastion-overview/architecture.png" alt-text="Diagram showing Azure Bastion architecture."::: For more information, see [What is virtual network NAT gateway?](../../virtual-n :::image type="content" source="./media/networking-overview/flow-map.png" alt-text="Virtual network NAT gateway"::: -### <a name="avnm"></a>Azure Virtual Network Manager --Azure Virtual Network Manager is a management service that enables you to group, configure, deploy, and manage virtual networks globally across subscriptions. With Virtual Network Manager, you can define network groups to identify and logically segment your virtual networks. Then you can determine the connectivity and security configurations you want and apply them across all the selected virtual networks in network groups at once. For more information, see [What is Azure Virtual Network Manager?](../../virtual-network-manager/overview.md). - ### <a name="routeserver"></a>Route Server Azure Route Server simplifies dynamic routing between your network virtual appliance (NVA) and your virtual network. It allows you to exchange routing information directly through Border Gateway Protocol (BGP) routing protocol between any NVA that supports the BGP routing protocol and the Azure Software Defined Network (SDN) in the Azure Virtual Network (VNet) without the need to manually configure or maintain route tables. For more information, see [What is Azure Route Server?](../../route-server/overview.md) For more information about Azure Firewall, see the [Azure Firewall documentation :::image type="content" source="./media/networking-overview/firewall-threat.png" alt-text="Firewall overview"::: ### <a name="waf"></a>Web Application Firewall-[Azure Web Application Firewall](../../web-application-firewall/overview.md) (WAF) provides protection to your web applications from common web exploits and vulnerabilities such as SQL injection, and cross site scripting. Azure WAF provides out of box protection from OWASP top 10 vulnerabilities via managed rules. Additionally customers can also configure custom rules, which are customer managed rules to provide additional protection based on source IP range, and request attributes such as headers, cookies, form data fields or query string parameters. +[Azure Web Application Firewall](../../web-application-firewall/overview.md) (WAF) provides protection to your web applications from common web exploits and vulnerabilities such as SQL injection and cross-site scripting. Azure WAF provides out-of-the-box protection from OWASP top 10 vulnerabilities via managed rules. 
Additionally, customers can configure custom rules, which are customer-managed rules that provide extra protection based on source IP range and request attributes such as headers, cookies, form data fields, or query string parameters. -Customers can choose to deploy [Azure WAF with Application Gateway](../../web-application-firewall/ag/ag-overview.md) which provides regional protection to entities in public and private address space. Customers can also choose to deploy [Azure WAF with Front Door](../../web-application-firewall/afds/afds-overview.md) which provides protection at the network edge to public endpoints. +Customers can choose to deploy [Azure WAF with Application Gateway](../../web-application-firewall/ag/ag-overview.md), which provides regional protection to entities in public and private address space. Customers can also choose to deploy [Azure WAF with Front Door](../../web-application-firewall/afds/afds-overview.md), which provides protection at the network edge to public endpoints. :::image type="content" source="./media/networking-overview/waf-overview.png" alt-text="Web Application Firewall"::: |
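As a rough illustration of the custom rules described above, the following hedged Azure CLI sketch adds an IP-range block rule to an existing Application Gateway WAF policy; the policy name, rule name, and CIDR range are hypothetical placeholders.

```azurecli
# Create a custom rule that blocks matching requests
az network application-gateway waf-policy custom-rule create \
  --policy-name myWafPolicy \
  --resource-group myResourceGroup \
  --name BlockSourceRange \
  --priority 10 \
  --rule-type MatchRule \
  --action Block

# Add the match condition: requests whose source IP falls in the range
az network application-gateway waf-policy custom-rule match-condition add \
  --policy-name myWafPolicy \
  --resource-group myResourceGroup \
  --name BlockSourceRange \
  --match-variables RemoteAddr \
  --operator IPMatch \
  --values "203.0.113.0/24"
```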
operator-nexus | Howto Baremetal Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-baremetal-functions.md | az networkcloud baremetalmachine reimage \ Use the `Replace BMM` command when a server has encountered hardware issues requiring a complete or partial hardware replacement. After replacement of components such as the motherboard or NIC, the MAC address of the BMM will change; however, the iDRAC IP address and hostname will remain the same. +> [!WARNING] +> Running more than one `baremetalmachine replace` command at the same time will leave servers in a +> nonworking state. Make sure one replace has fully completed before starting another one. In a future +> release, we plan to either add the ability to replace multiple servers at once or have the command +> return an error when attempting to do so. + ```azurecli az networkcloud baremetalmachine replace \ --name "bareMetalMachineName" \ |
operator-nexus | Howto Monitor Naks Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-monitor-naks-cluster.md | + + Title: "Azure Operator Nexus: Monitoring of Nexus Kubernetes cluster" +description: How-to guide for setting up monitoring of Nexus Kubernetes cluster on Operator Nexus. ++++ Last updated : 01/26/2023++++# Monitor Nexus Kubernetes cluster ++Each Nexus Kubernetes cluster consists of multiple layers: ++- Virtual Machines (VMs) +- Kubernetes layer +- Application pods ++Figure: Sample Nexus Kubernetes cluster ++On an instance, Nexus Kubernetes clusters are delivered with an _optional_ [Container Insights](../azure-monitor/containers/container-insights-overview.md) observability solution. +Container Insights captures the logs and metrics from Nexus Kubernetes clusters and workloads. +It's solely at your discretion whether to enable this tooling or deploy your own telemetry stack. ++A Nexus Kubernetes cluster with the Azure monitoring tools looks like this: ++Figure: Nexus Kubernetes cluster with Monitoring Tools ++## Extension onboarding with CLI using managed identity auth ++See the documentation for getting started with [Azure CLI](/cli/azure/get-started-with-azure-cli), installing it across [multiple operating systems](/cli/azure/install-azure-cli), and installing [CLI extensions](/cli/azure/azure-cli-extensions-overview). ++Install the latest version of the +[necessary CLI extensions](./howto-install-cli-extensions.md). ++## Monitor Nexus Kubernetes cluster – VM layer ++This how-to guide provides steps and utility scripts to [Arc connect](../azure-arc/servers/overview.md) the Nexus Kubernetes cluster Virtual Machines to Azure and enable monitoring agents for the collection of System logs from these VMs using [Azure Monitoring Agent](../azure-monitor/agents/agents-overview.md). +The instructions further capture details on how to set up log data collection into a Log Analytics workspace. ++The following resources provide you with support: ++- `arc-connect.env`: use this template file to create environment variables needed by included scripts +- `dcr.sh`: use this script to create a Data Collection Rule (DCR) to configure syslog collection +- `assign.sh`: use the script to create a policy to associate the DCR with all Arc-enabled servers in a resource group +- `install.sh`: Arc-enable Nexus Kubernetes cluster VMs and install Azure Monitoring Agent on each VM ++### Prerequisites-VM ++- Cluster administrator access to the Nexus Kubernetes cluster. See [documentation](/azure-stack/aks-hci/create-aks-hybrid-preview-cli#connect-to-the-nexus-kubernetes-cluster) on + connecting to the Nexus Kubernetes cluster. ++- To use Azure Arc-enabled servers, register the following Azure resource providers in your subscription: + - Microsoft.HybridCompute + - Microsoft.GuestConfiguration + - Microsoft.HybridConnectivity ++Register these resource providers, if not done previously: ++```azurecli +az account set --subscription "{the Subscription Name}" +az provider register --namespace 'Microsoft.HybridCompute' +az provider register --namespace 'Microsoft.GuestConfiguration' +az provider register --namespace 'Microsoft.HybridConnectivity' +``` ++- Assign an Azure service principal to the following Azure built-in roles, as needed. 
+Assign the service principal to the Azure resource group that has the machines to be connected: ++| Role | Needed to | +|-|-- | +| [Azure Connected Machine Resource Administrator](../role-based-access-control/built-in-roles.md#azure-connected-machine-resource-administrator) or [Contributor](../role-based-access-control/built-in-roles.md#contributor) | Connect Arc-enabled Nexus Kubernetes cluster VM servers in the resource group and install the Azure Monitoring Agent (AMA) | +| [Monitoring Contributor](../role-based-access-control/built-in-roles.md#monitoring-contributor) or [Contributor](../role-based-access-control/built-in-roles.md#contributor) | Create a [Data Collection Rule (DCR)](../azure-monitor/agents/data-collection-rule-azure-monitor-agent.md) in the resource group and associate Arc-enabled servers to it | +| [User Access Administrator](../role-based-access-control/built-in-roles.md#user-access-administrator), and [Resource Policy Contributor](../role-based-access-control/built-in-roles.md#resource-policy-contributor) or [Contributor](../role-based-access-control/built-in-roles.md#contributor) | Needed if you want to use Azure policy assignment(s) to ensure that a DCR is associated with [Arc-enabled machines](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd5c37ce1-5f52-4523-b949-f19bf945b73a) | +| [Kubernetes Extension Contributor](../role-based-access-control/built-in-roles.md#kubernetes-extension-contributor) | Needed to deploy the K8s extension for Container Insights | ++### Environment setup ++Copy and run the included scripts. You can run them from [Azure Cloud Shell](../cloud-shell/overview.md) in the Azure portal, or from a Linux command +prompt where the Kubernetes command-line tool (kubectl) and the Azure CLI are installed. ++Prior to running the included scripts, define the following environment variables: ++| Environment Variable | Description | +||| +| SUBSCRIPTION_ID | The ID of the Azure subscription that contains the resource group | +| RESOURCE_GROUP | The resource group name where Arc-enabled server and associated resources are created | +| LOCATION | The Azure Region where the Arc-enabled servers and associated resources are created | +| SERVICE_PRINCIPAL_ID | The appId of the Azure service principal with appropriate role assignment(s) | +| SERVICE_PRINCIPAL_SECRET | The authentication password for the Azure service principal | +| TENANT_ID | The ID of the tenant directory where the service principal exists | +| PROXY_URL | The proxy URL to use for connecting to Azure services | +| NAMESPACE | The namespace where the Kubernetes artifacts are created | ++For convenience, you can modify the template file, `arc-connect.env`, to set the environment variable values. ++```bash +# Source the file to apply the modified values to the current shell +. ./arc-connect.env +``` ++### Add a data collection rule (DCR) ++Associate the Arc-enabled servers with a DCR to enable the collection of log data into a Log Analytics workspace. +You can create the DCR via the Azure portal or CLI. +Information on creating a DCR to collect data from the VMs is available [here](../azure-monitor/agents/data-collection-rule-azure-monitor-agent.md). ++The included **`dcr.sh`** script creates a DCR in the specified resource group that configures log collection. ++1. Ensure proper [environment setup](#environment-setup) and role [prerequisites](#prerequisites-vm) for the service principal. 
The DCR is created in the specified resource group. ++2. Create or identify a Log Analytics workspace for log data ingestion as per the DCR. Set an environment variable, `LAW_RESOURCE_ID`, to its resource ID. Retrieve the resource ID for a known Log Analytics workspace name: ++ ```bash + export LAW_RESOURCE_ID=$(az monitor log-analytics workspace show -g "${RESOURCE_GROUP}" -n <law name> --query id -o tsv) + ``` ++3. Run the **`dcr.sh`** script. It creates a DCR in the specified resource group with the name `${RESOURCE_GROUP}-syslog-dcr`. ++```bash +./dcr.sh +``` ++View/manage the DCR from the Azure portal or [CLI](/cli/azure/monitor/data-collection/rule). +By default, the Linux Syslog log level is set to "INFO". You can change the log level as needed. ++> [!NOTE] +> Manually, or via a policy, associate servers created prior to the DCR's creation. +See [remediation task](../governance/policy/how-to/remediate-resources.md#create-a-remediation-task). ++### Associate Arc-enabled server resources to DCR ++Associate the Arc-enabled server resources to the created DCR for logs to flow to the Log Analytics workspace. +There are options for associating servers with DCRs. ++#### Use Azure portal or CLI to associate selected Arc-enabled servers to DCR ++In the Azure portal, add the Arc-enabled server resources to the DCR using its **Resources** section. ++Use this [link](/cli/azure/monitor/data-collection/rule/association#az-monitor-data-collection-rule-association-create) +for information about associating the resources via the Azure CLI. ++#### Use Azure policy to manage DCR associations ++Assign a policy to the resource group to enforce the association. +There's a built-in policy definition to associate [Linux Arc Machines with a DCR](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd5c37ce1-5f52-4523-b949-f19bf945b73a). Assign the policy to the resource group with the DCR as a parameter. +It ensures association of all Arc-enabled servers within the resource group with the same DCR. ++In the Azure portal, select the `Assign` button from the [policy definition](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd5c37ce1-5f52-4523-b949-f19bf945b73a) page. ++For convenience, the provided **`assign.sh`** script assigns the built-in policy to the specified resource group and DCR created with the **`dcr.sh`** script. ++1. Ensure proper [environment setup](#environment-setup) and role [prerequisites](#prerequisites-vm) for the service principal to do policy and role assignments. +2. Create the DCR, in the resource group, using the **`dcr.sh`** script as described in the [Add a data collection rule (DCR)](#add-a-data-collection-rule-dcr) section. +3. Run the **`assign.sh`** script. It creates the policy assignment and necessary role assignments. ++```bash +./assign.sh +``` ++#### Connect Arc-enabled servers and install Azure monitoring agent ++Use the included **`install.sh`** script to Arc-enroll all server VMs that represent the nodes of the Nexus Kubernetes cluster. +This script creates a Kubernetes daemonSet on the Nexus Kubernetes cluster. +It deploys a pod to each cluster node, connecting each VM to Arc-enabled servers and installing the Azure Monitoring Agent (AMA). +The `daemonSet` also includes a liveness probe that monitors the server connection and AMA processes. 
++1. Set the environment as specified in [Environment Setup](#environment-setup). Set the current `kubeconfig` context for the Nexus Kubernetes cluster VMs. +2. Permit `kubectl` access to the Nexus Kubernetes cluster. + [!INCLUDE [cluster-connect](./includes/kubernetes-cluster/cluster-connect.md)] +3. Run the **`install.sh`** script from the command prompt with kubectl access to the Nexus Kubernetes cluster. ++The script deploys the `daemonSet` to the cluster. Monitor the progress as follows: ++```bash +# Run the install script and observe results +./install.sh +kubectl get pod --selector='name=naks-vm-telemetry' +kubectl logs <podname> +``` ++On completion, the system logs the message "Server monitoring configured successfully". +At that point, the Arc-enabled servers appear as resources within the selected resource group. ++> [!NOTE] +> Associate these connected servers to the [DCR](#associate-arc-enabled-server-resources-to-dcr). +After you configure a policy, there may be a delay before logs appear in the Azure Log Analytics workspace. ++### Monitor Nexus Kubernetes cluster – K8s layer ++#### Prerequisites-Kubernetes ++The operator should ensure certain prerequisites are in place before configuring the monitoring tools on Nexus Kubernetes clusters. ++Container Insights stores its data in a [Log Analytics workspace](../azure-monitor/logs/log-analytics-workspace-overview.md). +Log data flows into the workspace whose Resource ID you provided during the initial scripts covered in the ["Add a data collection rule (DCR)"](#add-a-data-collection-rule-dcr) section. +Otherwise, data flows into a default workspace in the resource group associated with your subscription (based on the Azure location). ++An example for East US may look like the following: ++- Log Analytics workspace Name: DefaultWorkspace-\<GUID>-EUS +- Resource group name: DefaultResourceGroup-EUS ++Run the following command to get a pre-existing _Log Analytics workspace Resource ID_: ++```azurecli +az login ++az account set --subscription "<Subscription Name or ID the Log Analytics workspace is in>" ++az monitor log-analytics workspace show --workspace-name "<Log Analytics workspace Name>" \ + --resource-group "<Log Analytics workspace Resource Group>" \ + -o tsv --query id +``` ++Deploying Container Insights and viewing data in the applicable Log Analytics workspace requires certain role assignments in your account, for example, the "Contributor" role assignment. +See the instructions for [assigning required roles](../role-based-access-control/role-assignments-steps.md#step-5-assign-role): ++- [Log Analytics Contributor](../azure-monitor/logs/manage-access.md?tabs=portal#azure-rbac) role: necessary permissions to enable container monitoring on a CNF (provisioned) cluster. +- [Log Analytics Reader](../azure-monitor/logs/manage-access.md?tabs=portal#azure-rbac) role: grants non-members of the Log Analytics Contributor role permission to view data in the Log Analytics workspace once you enable container monitoring. 
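As a minimal sketch of one of the role assignments listed above, the following Azure CLI command grants the Log Analytics Contributor role on the workspace scope; the service principal appId is a placeholder, and the workspace resource ID is the one retrieved in the previous step.

```azurecli
# Grant Log Analytics Contributor on the workspace to the service principal
az role assignment create \
  --assignee "<service principal appId>" \
  --role "Log Analytics Contributor" \
  --scope "<Log Analytics workspace Resource ID>"
```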
++#### Install the cluster extension ++Sign in to the [Azure Cloud Shell](../cloud-shell/overview.md) to access the cluster: ++```azurecli +az login ++az account set --subscription "<Subscription Name or ID the Provisioned Cluster is in>" +``` ++Now, deploy the Container Insights extension on a provisioned Nexus Kubernetes cluster using either of the next two commands: ++#### With a customer pre-created Log Analytics workspace ++```azurecli +az k8s-extension create --name azuremonitor-containers \ + --cluster-name "<Nexus Kubernetes cluster Name>" \ + --resource-group "<Nexus Kubernetes cluster Resource Group>" \ + --cluster-type connectedClusters \ + --extension-type Microsoft.AzureMonitor.Containers \ + --release-train preview \ + --configuration-settings logAnalyticsWorkspaceResourceID="<Log Analytics workspace Resource ID>" \ + amalogsagent.useAADAuth=true +``` ++#### Use the default Log Analytics workspace ++```azurecli +az k8s-extension create --name azuremonitor-containers \ + --cluster-name "<Nexus Kubernetes cluster Name>" \ + --resource-group "<Nexus Kubernetes cluster Resource Group>" \ + --cluster-type connectedClusters \ + --extension-type Microsoft.AzureMonitor.Containers \ + --release-train preview \ + --configuration-settings amalogsagent.useAADAuth=true +``` ++#### Validate the cluster extension ++Validate the successful deployment of the monitoring agents on Nexus Kubernetes clusters using the following command: ++```azurecli +az k8s-extension show --name azuremonitor-containers \ + --cluster-name "<Nexus Kubernetes cluster Name>" \ + --resource-group "<Nexus Kubernetes cluster Resource Group>" \ + --cluster-type connectedClusters +``` ++Look for a provisioning state of "Succeeded" for the extension. The `az k8s-extension create` command may also have returned the status. ++#### Customize logs & metrics collection ++Container Insights provides functionality for end users to fine-tune the collection of logs and metrics from Nexus Kubernetes clusters. For more information, see [Configure Container insights agent data collection](../azure-monitor/containers/container-insights-agent-config.md). ++## Extra resources ++- Review the [workbooks documentation](../azure-monitor/visualize/workbooks-overview.md), and then use the Operator Nexus telemetry [sample Operator Nexus workbooks](https://github.com/microsoft/AzureMonitorCommunity/tree/master/Azure%20Services/Azure%20Operator%20Distributed%20Services). +- Review [Azure Monitor Alerts](../azure-monitor/alerts/alerts-overview.md), how to create [Azure Monitor Alert rules](../azure-monitor/alerts/alerts-create-new-alert-rule.md?tabs=metric), and use [sample Operator Nexus Alert templates](https://github.com/microsoft/AzureMonitorCommunity/tree/master/Azure%20Services/Azure%20Operator%20Distributed%20Services). |
partner-solutions | Palo Alto Create | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/palo-alto/palo-alto-create.md | Next, you must accept the Terms of Use for the new Palo Alto Networks resource. ## Next steps - [Manage the Palo Alto Networks resource](palo-alto-manage.md)++- Get Started with Cloud Next-Generation Firewall by Palo Alto Networks - an Azure Native ISV Service on ++ > [!div class="nextstepaction"] + > [Azure portal](https://portal.azure.com/#view/HubsExtension/BrowseResource/resourceType/PaloAltoNetworks.Cloudngfw%2Ffirewalls) ++ > [!div class="nextstepaction"] + > [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/paloaltonetworks.pan_swfw_cloud_ngfw?tab=Overview) |
partner-solutions | Palo Alto Manage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/palo-alto/palo-alto-manage.md | Title: Manage Cloud NGFW by Palo Alto Networks resource through the Azure portal -description: This article describes management functions for Cloud NGFW by Palo Alto Networks on the Azure portal. +description: This article describes management functions for Cloud NGFW (Next-Generation Firewall) by Palo Alto Networks on the Azure portal. Previously updated : 04/25/2023 Last updated : 07/10/2023 After the account is deleted, logs are no longer sent to Cloud NGFW by Palo Alto ## Next steps - For help with troubleshooting, see [Troubleshooting Palo Alto integration with Azure](palo-alto-troubleshoot.md).++- Get Started with Cloud Next-Generation Firewall by Palo Alto Networks - an Azure Native ISV Service on ++ > [!div class="nextstepaction"] + > [Azure portal](https://portal.azure.com/#view/HubsExtension/BrowseResource/resourceType/PaloAltoNetworks.Cloudngfw%2Ffirewalls) ++ > [!div class="nextstepaction"] + > [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/paloaltonetworks.pan_swfw_cloud_ngfw?tab=Overview) |
partner-solutions | Palo Alto Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/palo-alto/palo-alto-overview.md | Title: What is Cloud NGFW by Palo Alto Networks -description: Learn about using the Cloud NGFW by Palo Alto Networks from the Marketplace. +description: Learn about using Cloud NGFW (Next-Generation Firewall) by Palo Alto Networks from the Azure Marketplace. Previously updated : 04/26/2023 Last updated : 07/10/2023 # What is Cloud NGFW by Palo Alto Networks Preview? -In this article, you learn how to use the integration of the Palo Alto Networks NGFW (Next Generation Firewall) service with Azure. -With the integration of Cloud NGFW for Azure into the Azure ecosystem, we are delivering an integrated platform and empowering a growing ecosystem of developers and customers to help protect their organizations on Azure. +Azure Native ISV Services enable you to easily provision, manage, and tightly integrate independent software vendor (ISV) software and services on Azure. This Azure Native ISV Service is developed and managed by Microsoft and Palo Alto Networks. -The Palo Alto Networks offering in the Azure Marketplace allows you to manage the Cloud NGFW by Palo Alto Networks in the Azure portal as an integrated service. You can set up the Cloud NGFW by Palo Alto Networks resources through a resource provider named `PaloAltoNetworks.Cloudngfw`. +You can find Cloud Next-Generation Firewall by Palo Alto Networks - an Azure Native ISV Service in the [Azure portal](https://portal.azure.com/#view/HubsExtension/BrowseResource/resourceType/PaloAltoNetworks.Cloudngfw%2Ffirewalls) or get it on [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/paloaltonetworks.pan_swfw_cloud_ngfw?tab=Overview). -You can create and manage Palo Alto Networks resources through the Azure portal. Palo Alto Networks owns and runs the software as a service (SaaS) application including the accounts created. +Palo Alto Networks is a leading provider of cloud security, offering next-generation cybersecurity to thousands of customers globally, across all sectors. With the integration of Cloud Next-Generation Firewall by Palo Alto for Azure into the Azure ecosystem, we are delivering an integrated experience and empowering a growing ecosystem of developers and customers to help protect their organizations on Azure. ++The Palo Alto Networks offering in the Azure Marketplace allows you to manage the Cloud Next-Generation Firewall by Palo Alto Networks resources in the Azure portal as an integrated service. It enables you to easily utilize Palo Alto Networks best-in-class network security capabilities on Azure, and you can manage it using either the Palo Alto Networks Panorama policy management solution or the Azure portal directly. Cloud Next-Generation Firewall by Palo Alto - an Azure Native ISV Service combines the scalability and reliability of Microsoft Azure with Palo Alto Networks' deep expertise in network security. ++You can create and manage Palo Alto Networks resources through the Azure portal. You can set up the Cloud Next-Generation Firewall by Palo Alto Networks resources through a resource provider named `PaloAltoNetworks.Cloudngfw`. Palo Alto Networks owns and runs the software as a service (SaaS) application, including the accounts created. Here are the key capabilities provided by the Palo Alto integration: |
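Because the article calls out the `PaloAltoNetworks.Cloudngfw` resource provider, here's a short, hedged sketch of checking and registering that namespace in your subscription using the standard `az provider` commands; the namespace comes from the article, everything else is generic.

```azurecli
# Check whether the Cloud NGFW resource provider is registered
az provider show --namespace PaloAltoNetworks.Cloudngfw --query registrationState -o tsv

# Register it if the state is "NotRegistered"
az provider register --namespace PaloAltoNetworks.Cloudngfw
```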
partner-solutions | Palo Alto Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/palo-alto/palo-alto-troubleshoot.md | Title: Troubleshooting your Cloud NGFW by Palo Alto Networks -description: This article provides information about getting support and troubleshooting a Cloud NGFW by Palo Alto Networks. +description: This article provides information about getting support and troubleshooting a Cloud NGFW (Next-Generation Firewall) by Palo Alto Networks. Previously updated : 04/25/2023 Last updated : 07/10/2023 -# Troubleshooting Cloud NGFW by Palo Alto Networks +# Troubleshooting Cloud Next-Generation Firewall by Palo Alto Networks - an Azure Native ISV Service You can get support for your Palo Alto deployment through a **New Support request**. The procedure for creating the request is described in this article. In addition, we have included troubleshooting for problems you might experience in creating and using a Palo Alto deployment. Only users who have Owner access can set up a Palo Alto resource on the Azure sub ## Next steps - Learn about [managing your instance](palo-alto-manage.md) of Palo Alto.++- Get Started with Cloud Next-Generation Firewall by Palo Alto Networks - an Azure Native ISV Service on ++ > [!div class="nextstepaction"] + > [Azure portal](https://portal.azure.com/#view/HubsExtension/BrowseResource/resourceType/PaloAltoNetworks.Cloudngfw%2Ffirewalls) ++ > [!div class="nextstepaction"] + > [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/paloaltonetworks.pan_swfw_cloud_ngfw?tab=Overview) |
postgresql | Concepts Compute Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-compute-storage.md | The process of scaling storage is performed online, without causing any downtime. Remember that storage can only be scaled up, not down. +## Limitations ++1. Disk scaling operations are always online except in specific scenarios involving the 4096 GiB boundary. These scenarios include reaching, starting at, or crossing the 4096 GiB limit, such as when scaling from 2048 GiB to 8192 GiB. This limitation is due to the underlying Azure managed disk V1, which needs a manual disk scaling operation. You receive an informational message in the portal when you approach this limit. ++2. Storage auto-grow currently doesn't work for HA / read replica-enabled servers; support for these configurations is coming soon. ++3. Storage auto-grow doesn't trigger when there's high WAL usage. ++> [!NOTE] +> Storage auto-grow never triggers an offline increase. + ## Backup |
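Since the section explains that storage only scales up and stays online except around the 4096 GiB boundary, a hedged Azure CLI sketch of a scale-up follows; the server and resource group names are placeholders.

```azurecli
# Scale flexible-server storage up to 256 GiB (scaling down isn't supported)
az postgres flexible-server update \
  --resource-group myResourceGroup \
  --name myServer \
  --storage-size 256
```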
postgresql | How To Auto Grow Storage Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-auto-grow-storage-portal.md | + + Title: Storage Auto-grow - Azure portal - Azure Database for PostgreSQL - Flexible Server +description: This article describes how you can configure storage autogrow using the Azure portal in Azure Database for PostgreSQL - Flexible Server ++++++ Last updated : 06/24/2022+++# Storage Autogrow using Azure portal in Azure Database for PostgreSQL - Flexible Server +++++This article describes how you can configure Azure Database for PostgreSQL flexible server storage to grow without impacting the workload. ++When a server reaches the allocated storage limit, the server is marked as read-only. However, if you enable storage autogrow, the server storage increases to accommodate the growing data. For servers with less than 1 TiB of provisioned storage, the autogrow feature activates when storage consumption reaches 80%. For servers with 1 TiB or more of storage, autogrow activates at 90% consumption. +++## Enable storage auto-grow for existing servers ++Follow these steps to enable Storage Autogrow on your Azure Database for PostgreSQL Flexible Server. ++1. In the [Azure portal](https://portal.azure.com/), select your existing Azure Database for PostgreSQL Flexible Server. ++2. On the Flexible Server page, select **Compute + storage**. ++3. In the **Storage Auto-growth** section, select the checkbox to enable storage autogrow. ++4. Select **Save** to apply the changes. ++5. A notification confirms that autogrow was successfully enabled. ++ +## Enable storage auto-grow during server provisioning ++1. In the Azure portal, during server provisioning, under **Compute + storage**, select **Configure server**. ++2. In the **Storage Auto-growth** section, select the checkbox to enable storage autogrow. ++## Next steps +++- Learn about [service limits](concepts-limits.md). |
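If you prefer scripting over the portal steps above, recent Azure CLI builds expose a `--storage-auto-grow` flag on `az postgres flexible-server update`; treat the flag as an assumption and confirm it with `az postgres flexible-server update --help` on your CLI version before relying on it.

```azurecli
# Enable storage autogrow on an existing flexible server
# (the --storage-auto-grow flag is assumed; verify on your CLI version)
az postgres flexible-server update \
  --resource-group myResourceGroup \
  --name myServer \
  --storage-auto-grow Enabled
```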
postgresql | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/overview.md | One advantage of running your workload in Azure is global reach. The flexible se | East US 2 | :heavy_check_mark: (v3/v4 only) | :x: $ | :heavy_check_mark: | :heavy_check_mark: | | France Central | :heavy_check_mark: | :heavy_check_mark:| :heavy_check_mark: | :heavy_check_mark: | | France South | :heavy_check_mark: (v3/v4 only) | :x: | :heavy_check_mark: | :heavy_check_mark: |-| Germany West Central | :heavy_check_mark: (v3/v4 only) | :x: $ | :x: $ | :x: | +| Germany West Central | :heavy_check_mark: (v3/v4 only) | :x: $ | :x: $ | :heavy_check_mark: | | Japan East | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | Japan West | :heavy_check_mark: (v3/v4 only) | :x: | :heavy_check_mark: | :heavy_check_mark: | | Jio India West | :heavy_check_mark: (v3 only)| :x: | :heavy_check_mark: | :x: | One advantage of running your workload in Azure is global reach. The flexible se | North Europe | :heavy_check_mark: (v3/v4 only) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | Norway East | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :x: | | Qatar Central | :heavy_check_mark: (v3/v4 only) | :heavy_check_mark: | :heavy_check_mark: | :x: |-| South Africa North | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :x: | +| South Africa North | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | South Central US | :heavy_check_mark: (v3/v4 only) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | South India | :heavy_check_mark: (v3/v4 only) | :x: | :heavy_check_mark: | :heavy_check_mark: | | Southeast Asia | :heavy_check_mark:(v3/v4 only) | :x: $ | :heavy_check_mark: | :heavy_check_mark: | |
postgresql | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/release-notes.md | Last updated 05/10/2023 This page provides the latest news and updates regarding feature additions, engine version support, extensions, and any other announcements relevant for Flexible Server - PostgreSQL +## Release: July 2023 +* Support for [minor versions](./concepts-supported-versions.md) 15.3 (preview), 14.8, 13.11, 12.15, 11.20 <sup>$</sup> + ## Release: June 2023+* Support for [minor versions](./concepts-supported-versions.md) 15.2 (preview), 14.7, 13.10, 12.14, 11.19 <sup>$</sup> * General availability of [Query Performance Insight](./concepts-query-performance-insight.md) for Azure Database for PostgreSQL – Flexible Server.-* Support for [minor versions](./concepts-supported-versions.md) 15.2 (preview), 14.7, 13.10, 12.14, 11.19. <sup>$</sup> +* General availability of [Major Version Upgrade](concepts-major-version-upgrade.md) for Azure Database for PostgreSQL – Flexible Server. +* General availability of [Restore a dropped server](how-to-restore-dropped-server.md) for Azure Database for PostgreSQL – Flexible Server. +* Public preview of [Storage auto-grow](./concepts-compute-storage.md#storage-auto-grow-preview) for Azure Database for PostgreSQL – Flexible Server. ## Release: May 2023 * Public preview of [Database availability metric](./concepts-monitoring.md#database-availability-metric) for Azure Database for PostgreSQL – Flexible Server. This page provides latest news and updates regarding feature additions, engine v * Support for [extension](concepts-extensions.md) server with new servers<sup>$</sup> * Public Preview of [Major Version Upgrade](concepts-major-version-upgrade.md) for Azure Database for PostgreSQL – Flexible Server. * Support for [Geo-redundant backup feature](./concepts-backup-restore.md#geo-redundant-backup-and-restore) when using [Disk Encryption with Customer Managed Key (CMK)](./concepts-data-encryption.md#how-data-encryption-with-a-customer-managed-key-work) feature.-* Support for [minor versions](./concepts-supported-versions.md) 14.6, 13.9, 12.13, 11.18 <sup>$</sup> ## Release: January 2023 * General availability of [Azure Active Directory Support](./concepts-azure-ad-authentication.md) for Azure Database for PostgreSQL - Flexible Server in all Azure Public Regions |
private-5g-core | Azure Private 5G Core Release Notes 2306 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/azure-private-5g-core-release-notes-2306.md | + + Title: Azure Private 5G Core 2306 release notes +description: Discover what's new in the Azure Private 5G Core 2306 release ++++ Last updated : 07/03/2023+++# Azure Private 5G Core 2306 release notes ++The following release notes identify the new features, critical open issues, and resolved issues for the 2306 release of Azure Private 5G Core (AP5GC). The release notes are continuously updated, with critical issues requiring a workaround added as they're discovered. Before deploying this new version, please review the information contained in these release notes. ++This article applies to the AP5GC 2306 release (PMN-2306-0). This release is compatible with the ASE Pro 1 GPU and ASE Pro 2 running the ASE 2303 release, and supports the 2022-04-01-preview and 2022-11-01 [Microsoft.MobileNetwork](/rest/api/mobilenetwork) API versions. ++## Support lifetime ++Packet core versions are supported until two subsequent versions have been released (unless otherwise noted). This is typically two months after the release date. You should plan to upgrade your packet core in this time frame to avoid losing support. ++## What's new +- **Reduced service interruption from configuration changes** – This enhancement allows most AP5GC configuration to be changed in the portal and applied without requiring a reinstall of the packet core. Most configuration changes that previously required a reinstall to take effect now only trigger a short service interruption. + + The following configuration can now be changed without reinstalling the packet core: + - Adding an attached data network + - Modifying attached data network configuration: + - Dynamic UE IP pool prefixes + - Static UE IP pool prefixes + - Network address and port translation parameters + - DNS addresses ++To change your packet core configuration, see [Modify a packet core instance](modify-packet-core.md). ++<!-- removed issues fixed section as none in this release +## Issues fixed in the AP5GC 2306 release ++None in this release +--> +<!--The following table provides a summary of issues fixed in this release. ++ |No. |Feature | Issue | + |--|--|--| + | 1 | | | + | 2 | | | + | 3 | | | +--> +## Known issues in the AP5GC 2306 release ++ |No. |Feature | Issue | Workaround/comments | + |--|--|--|--| + | 1 | Local distributed tracing | The distributed tracing web GUI fails to display & decode some fields of 4G NAS messages. Specifically, information elements in the 'Initial Context Setup Request' and 'Attach Accept' messages. | Not applicable. | + | 2 | 4G/5G Signaling | Removal of a static or dynamic UE IP pool as part of attached data network modification on an existing AP5GC setup still requires a reinstall of the packet core. | Not applicable. | ++## Known issues from previous releases ++The following table provides a summary of known issues carried over from the previous releases. ++ |No. |Feature | Issue | Workaround/comments | + |--|--|--|--| + | 1 | Packet forwarding | AP5GC may not forward buffered packets if NAT is enabled. | Not applicable. | + | 2 | Install/Upgrade | In some cases, the packet core reports successful installation even when the underlying platform or networking is misconfigured. | Not applicable. 
| + | 3 | Local Dashboards | When a web proxy is enabled on the Azure Stack Edge appliance that the packet core is running on and Azure Active Directory is used to authenticate access to AP5GC Local Dashboards, the traffic to Azure Active Directory doesn't transmit via the web proxy. If there's a firewall blocking traffic that doesn't go via the web proxy, enabling Azure Active Directory causes the packet core install to fail. | Disable Azure Active Directory and use password-based authentication to authenticate access to AP5GC Local Dashboards instead. | + | 4 | 4G/5G Signaling | AP5GC may intermittently fail to recover after the underlying platform is rebooted and may require another reboot to recover. | Not applicable. | ++## Next steps ++- [Upgrade the packet core instance in a site - Azure portal](upgrade-packet-core-azure-portal.md) +- [Upgrade the packet core instance in a site - ARM template](upgrade-packet-core-arm-template.md) |
private-5g-core | Azure Stack Edge Packet Core Compatibility | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/azure-stack-edge-packet-core-compatibility.md | The following table provides information on which versions of the ASE device are | Packet core version | ASE Pro GPU compatible versions | ASE Pro 2 compatible versions | |--|--|--|+| 2306 | 2303 | 2303 | | 2305 | 2303 | 2303 | | 2303 | 2301, 2303 | 2301, 2303 | | 2302 | 2301 | N/A | |
private-5g-core | How To Guide Deploy A Private Mobile Network Azure Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/how-to-guide-deploy-a-private-mobile-network-azure-portal.md | Title: Deploy a private mobile network - Azure portal description: This how-to guide shows how to deploy a private mobile network through Azure Private 5G Core using the Azure portal --++ Last updated 01/03/2022 In this step, you'll create the Mobile Network resource representing your privat :::image type="content" source="media/create-button-mobile-networks.png" alt-text="Screenshot of the Azure portal showing the Create button on the Mobile Networks page."::: 1. Use the information you collected in [Collect private mobile network resource values](collect-required-information-for-private-mobile-network.md#collect-mobile-network-resource-values) to fill out the fields on the **Basics** configuration tab. Once you've done this, select **Next : SIMs >**.- > [!CAUTION] - > If you configure **Mobile Country Code (MCC)** or **Mobile Network Code (MNC)** values incorrectly, you must redeploy the mobile network to change them. :::image type="content" source="media/how-to-guide-deploy-a-private-mobile-network-azure-portal/create-private-mobile-network-basics-tab.png" alt-text="Screenshot of the Azure portal showing the Basics configuration tab."::: In this step, you'll create the Mobile Network resource representing your privat :::image type="content" source="media/pmn-deployment-resource-group.png" alt-text="Screenshot of the Azure portal showing a resource group containing Mobile Network, SIM, SIM group, Service, SIM policy, Data Network, and Slice resources."::: +## Modify a private mobile network ++You can change the public land mobile network (PLMN) identifier, comprising a Mobile Country Code (MCC) and Mobile Network Code (MNC), using the **Modify mobile network** button on the **Mobile Network** resource. ++1. Sign in to the [Azure portal](https://portal.azure.com/). +1. Search for and select the **Mobile Network** resource representing the private mobile network. +1. Select **Modify mobile network**. ++ :::image type="content" source="media/how-to-guide-deploy-a-private-mobile-network-azure-portal/modify-mobile-network-button.png" alt-text="Screenshot of the Azure portal showing the modify mobile network button."::: ++1. Update the MCC and/or MNC as required. +1. Select **Modify**. +1. [Reinstall the packet core](reinstall-packet-core.md) to apply the change. If you have multiple packet cores in this mobile network, you will need to reinstall all of them. + ## Next steps You can begin designing policy control to determine how your private mobile network will handle traffic, create more network slices, or start adding sites to your private mobile network. |
private-5g-core | Modify Packet Core | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/modify-packet-core.md | If you want to modify a packet core instance's local access configuration, follo ## Plan a maintenance window -The following modifications will trigger a packet core reinstall, during which your service will be unavailable: +The following changes will trigger components of the packet core software to restart, during which your service will be unavailable for approximately 8-12 minutes: - Attaching a new or existing data network to the packet core instance.+- Changing the following configuration on an attached data network: + - Dynamic UE IP pool prefixes + - Static UE IP pool prefixes + - Network address and port translation parameters + - DNS addresses ++The following changes will trigger the packet core to reinstall, during which your service will be unavailable for up to two hours: + - Detaching a data network from the packet core instance. - Changing the packet core instance's custom location.+- Changing the N2, N3 or N6 interface configuration on an attached data network. -Additionally, the following changes don't trigger a packet core reinstall, but will require you to manually perform a reinstall to allow the new configuration to take effect: --- Modifying the access network configuration.-- Modifying an attached data network's configuration.+The following changes require you to manually perform a reinstall, during which your service will be unavailable for up to two hours, before they take effect: -If you're making any of these changes to a healthy packet core instance, we recommend running this process during a maintenance window to minimize the impact on your service. You should allow up to two hours for the process to complete. +- Changing access network configuration. -If your packet core instance is in **Uninstalled**, **Uninstalling** or **Failed** state, or if you're connecting an ASE device for the first time, you won't need a packet core reinstall after making your changes. In this case, you can skip the next step and move to [Select the packet core instance to modify](#select-the-packet-core-instance-to-modify). +If you're making any of these changes to a healthy packet core instance, we recommend running this process during a maintenance window to minimize the impact on your service. Changes not listed here should not trigger a service interruption, but we recommend using a maintenance window in case of misconfiguration. ## Back up deployment information -The following list contains the data that will be lost over a packet core reinstall. If you're making a change that requires a reinstall, back up any information you'd like to preserve; after the reinstall, you can use this information to reconfigure your packet core instance. +The following list contains the data that will be lost over a packet core reinstall. If you're making a change that requires a reinstall, back up any information you'd like to preserve; after the reinstall, you can use this information to reconfigure your packet core instance. If your packet core instance is in **Uninstalled**, **Uninstalling** or **Failed** state, or if you're connecting an ASE device for the first time, you can skip this step and proceed to [Select the packet core instance to modify](#select-the-packet-core-instance-to-modify). 1. 
Depending on your authentication method when signing in to the [distributed tracing](distributed-tracing.md) and [packet core dashboards](packet-core-dashboards.md): - If you use Azure AD, save a copy of the Kubernetes Secret Object YAML file you created in [Create Kubernetes Secret Objects](enable-azure-active-directory.md#create-kubernetes-secret-objects). |
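A minimal sketch of backing up such a Secret with kubectl; the Secret name and namespace are hypothetical placeholders for whatever you created when enabling Azure AD authentication.

```bash
# Export the Secret to a YAML file you can keep alongside your other backups
kubectl get secret <secret-name> --namespace <namespace> -o yaml > secret-backup.yaml

# After the reinstall, restore it with:
kubectl apply -f secret-backup.yaml
```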
private-5g-core | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/whats-new.md | To help you stay up to date with the latest developments, this article covers: This page is updated regularly with the latest developments in Azure Private 5G Core. +## June 2023 ++### Packet core 2306 ++**Type:** New release ++**Date available:** July 10, 2023 ++The 2306 release for the Azure Private 5G Core packet core is now available. For more information, see [Azure Private 5G Core 2306 release notes](azure-private-5g-core-release-notes-2306.md). +### Configuration changes to Packet Core without a reinstall and changes to MCC, MNC ++**Type:** New feature ++**Date available:** July 10, 2023 ++It's now possible to: +- attach a new or existing data network +- modify an attached data network's configuration + +These changes now take effect after a few minutes of downtime, instead of requiring a packet core reinstall. ++For details, see [Modify a packet core instance](modify-packet-core.md). ++### PLMN configuration ++**Type:** New feature ++**Date available:** July 10, 2023 ++You can now change the public land mobile network (PLMN) identifier, comprising a Mobile Country Code (MCC) and Mobile Network Code (MNC), on an existing private mobile network. Previously, this required recreating the network with the new configuration. ++To change your PLMN configuration, see [Deploy a private mobile network through Azure Private 5G Core - Azure portal](how-to-guide-deploy-a-private-mobile-network-azure-portal.md). ++ ## May 2023 ### Packet core 2305 This page is updated regularly with the latest developments in Azure Private 5G The 2305 release for the Azure Private 5G Core packet core is now available. For more information, see [Azure Private 5G Core 2305 release notes](azure-private-5g-core-release-notes-2305.md). +### Easier creation of a site using PowerShell ++**Type:** New feature ++**Date available:** May 31, 2023 ++`New-MobileNetworkSite` now supports an additional parameter that makes it easier to create a site and its dependent resources. ++For details, see [Create additional Packet Core instances for a site using the Azure portal](create-additional-packet-core.md). + +### Multiple Packet Cores under the same Site ++**Type:** New feature ++**Date available:** May 1, 2023 ++It's now possible to add multiple packet cores in the same site using the Azure portal. ++For details, see [Create a Site and dependant resources](deploy-private-mobile-network-with-site-powershell.md#create-a-site-and-dependant-resources). + ## March 2023 ### Packet core 2303 |
purview | Register Scan Azure Cosmos Database | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-cosmos-database.md | Title: 'Connect to Azure Cosmos DB for NoSQL' + Title: 'Connect to Azure Cosmos DB for SQL API' description: This article outlines the process to register an Azure Cosmos DB instance in Microsoft Purview including instructions to authenticate and interact with the Azure Cosmos DB database -# Connect to Azure Cosmos DB for NoSQL in Microsoft Purview +# Connect to Azure Cosmos DB for SQL API in Microsoft Purview -This article outlines the process to register and scan Azure Cosmos DB for NoSQL instance in Microsoft Purview, including instructions to authenticate and interact with the Azure Cosmos DB database source +This article outlines the process to register and scan Azure Cosmos DB for SQL API instance in Microsoft Purview, including instructions to authenticate and interact with the Azure Cosmos DB database source ## Supported capabilities This article outlines the process to register and scan Azure Cosmos DB for NoSQL ## Register -This section will enable you to register the Azure Cosmos DB for NoSQL instance and set up an appropriate authentication mechanism to ensure successful scanning of the data source. +This section will enable you to register the Azure Cosmos DB for SQL API instance and set up an appropriate authentication mechanism to ensure successful scanning of the data source. ### Steps to register It is important to register the data source in Microsoft Purview prior to settin :::image type="content" source="media/register-scan-azure-cosmos-database/register-cosmos-db-register-data-source.png" alt-text="Screenshot that shows the collection used to register the data source"::: -1. Select the **Azure Cosmos DB for NoSQL** data source and select **Continue** +1. Select the **Azure Cosmos DB for SQL API** data source and select **Continue** :::image type="content" source="media/register-scan-azure-cosmos-database/register-cosmos-db-select-data-source.png" alt-text="Screenshot that allows selection of the data source"::: |
route-server | Tutorial Configure Route Server With Quagga | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/tutorial-configure-route-server-with-quagga.md | -This tutorial shows you how to deploy an Azure Route Server into a virtual network and establish a BGP peering connection with a Quagga network virtual appliance (NVA). You'll deploy a virtual network with four subnets. One subnet will be dedicated to the Route Server and another subnet dedicated to the Quagga NVA. The Quagga NVA will be configured to exchange routes with the Route Server. Lastly, you'll test to make sure routes are properly exchanged on the Route Server and Quagga NVA. +This tutorial shows you how to deploy an Azure Route Server into a virtual network and establish a BGP peering connection with a Quagga network virtual appliance (NVA). You deploy a virtual network with four subnets. One subnet is dedicated to the Route Server and another subnet is dedicated to the Quagga NVA. The Quagga NVA is configured to exchange routes with the Route Server. Lastly, you test to make sure routes are properly exchanged on the Route Server and Quagga NVA. In this tutorial, you learn how to: > [!div class="checklist"]-> * Create a virtual network with five subnets +> * Create a virtual network with four subnets > * Deploy an Azure Route Server > * Deploy a virtual machine running Quagga > * Configure Route Server peering If you don't have an Azure subscription, create a [free account](https://azure.m ## Prerequisites -* An Azure subscription +* An active Azure subscription ## Sign in to Azure Sign in to the Azure portal at https://portal.azure.com. ## Create a virtual network -You'll need a virtual network to deploy both the Route Server and the Quagga NVA. Azure Route Server must be deployed in a dedicated subnet called *RouteServerSubnet*. +You need a virtual network to deploy both the Route Server and the Quagga NVA. Azure Route Server must be deployed in a dedicated subnet called *RouteServerSubnet*. 1. On the Azure portal home page, search for *virtual network*, and select **Virtual networks** from the search results. The Route Server is used to communicate with your NVA and exchange virtual netwo ## Create Quagga network virtual appliance -To configure the Quagga network virtual appliance, you'll need to deploy a Linux virtual machine, and then configure it with this [script](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.network/route-server-quagga/scripts/quaggadeploy.sh). +To configure the Quagga network virtual appliance, you need to deploy a Linux virtual machine, and then configure it with this [script](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.network/route-server-quagga/scripts/quaggadeploy.sh). ### Create Quagga virtual machine (VM) To configure the Quagga network virtual appliance, you'll need to deploy a Linux | Region | Select **(US) East US**. | | Availability options | Select **No infrastructure required**. | | Security type | Select **Standard**. |- | Image | Select an **Ubuntu**, **SUSE** or **RHEL** image. | + | Image | Select an **Ubuntu** image. This tutorial uses the **Ubuntu 18.04 LTS - Gen 2** image. | | Size | Select **Standard_B2s - 2vcpus, 4GiB memory**. | | **Administrator account** | | | Authentication type | Select **Password**. |- | Username | Enter *azureuser*. Don't use *quagga* as the user name or else the setup script will fail in a later step. 
| + | Username | Enter *azureuser*. Don't use *quagga* for the username as it causes the setup to fail in a later step. | | Password | Enter a password of your choosing. | | Confirm password | Reenter the password. | | **Inbound port rules** | | To configure the Quagga network virtual appliance, you'll need to deploy a Linux | Select inbound ports | Select **SSH (22)**. | :::image type="content" source="./media/tutorial-configure-route-server-with-quagga/create-quagga-basics-tab.png" alt-text="Screenshot of basics tab for creating a new virtual machine." lightbox="./media/tutorial-configure-route-server-with-quagga/create-quagga-basics-tab-expanded.png":::- + 1. On the **Networking** tab, select the following network settings: | Settings | Value | To configure the Quagga network virtual appliance, you'll need to deploy a Linux :::image type="content" source="./media/tutorial-configure-route-server-with-quagga/create-quagga-networking-tab.png" alt-text="Screenshot of networking tab for creating a new virtual machine." lightbox="./media/tutorial-configure-route-server-with-quagga/create-quagga-networking-tab-expanded.png"::: -1. Select **Review + create** and then **Create** after validation passes. The deployment of the virtual machine will take about 10 minutes. +1. Select **Review + create** and then **Create** after validation passes. 1. Once the virtual machine has deployed, go to the **Networking** page of **Quagga** virtual machine and select the network interface. To configure the Quagga network virtual appliance, you'll need to deploy a Linux :::image type="content" source="./media/tutorial-configure-route-server-with-quagga/quagga-ip-configuration.png" alt-text="Screenshot of IP configurations page of the Quagga VM."::: -1. Under **Private IP address Settings**, change the **Assignment** from **Dynamic** to **Static**, and then change the **IP address** from **10.1.4.4** to **10.1.4.10**. This IP address is used in this [script](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.network/route-server-quagga/scripts/quaggadeploy.sh), which will be run in a later step. If you want to use a different IP address, ensure to update the IP in the script. +1. Under **Private IP address Settings**, change the **Assignment** from **Dynamic** to **Static**, and then change the **IP address** from **10.1.4.4** to **10.1.4.10**. The [script](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.network/route-server-quagga/scripts/quaggadeploy.sh) that you run in a later step uses **10.1.4.10**. If you want to use a different IP address, ensure to update the IP in the script. 1. Take note of the public IP, and select **Save** to update the IP configurations of the virtual machine. To configure the Quagga network virtual appliance, you'll need to deploy a Linux 1. If you are on a Mac or Linux machine, open a Bash prompt. If you are on a Windows machine, open a PowerShell prompt. -1. At your prompt, open an SSH connection to the Quagga VM. Replace the IP address with the one you took note of in the previous step. +1. At your prompt, open an SSH connection to the Quagga VM by executing the following command. Replace the IP address with the one you took note of in the previous step. ++ ```console + ssh azureuser@52.240.57.121 + ``` -```console -ssh azureuser@52.240.57.121 -``` +1. When prompted, enter the password you previously created for the Quagga VM. -3. 
When prompted, enter the password you previously created for the Quagga VM. +1. Once logged in, enter `sudo su` to switch to super user to avoid errors running the script. -1. Once logged in, enter `sudo su` to switch to super user to avoid errors running the script. Copy this [script](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.network/route-server-quagga/scripts/quaggadeploy.sh) and paste it into the SSH session. The script will configure the virtual machine with Quagga along with other network settings. Update the script to suit your network environment before running it on the virtual machine. It will take a few minutes for the script to complete the setup. +1. Copy and paste the following commands into the SSH session. These commands download and run this [script](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.network/route-server-quagga/scripts/quaggadeploy.sh), which configures the virtual machine with Quagga along with other network settings. ++ ```console + wget "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.network/route-server-quagga/scripts/quaggadeploy.sh" + + chmod +x quaggadeploy.sh + + ./quaggadeploy.sh + ``` ## Configure Route Server peering ssh azureuser@52.240.57.121 Get-AzRouteServerPeerLearnedRoute @routes | ft ``` - The output should look like the following: + The output should look like the following output: :::image type="content" source="./media/tutorial-configure-route-server-with-quagga/routes-learned.png" alt-text="Screenshot of routes learned by Route Server."::: -1. To check the routes learned by the Quagga NVA, enter `vtysh` and then enter `show ip bgp` on the NVA. Output should look like the following: +1. To check the routes learned by the Quagga NVA, enter `vtysh` and then enter `show ip bgp` on the NVA. The output should look like the following output: ``` root@Quagga:/home/azureuser# vtysh When no longer needed, you can delete all resources created in this tutorial by 1. On the Azure portal menu, select **Resource groups**. -2. Select the **myRouteServerRG** resource group. +1. Select the **myRouteServerRG** resource group. ++1. Select **Delete a resource group**. -3. Select **Delete resource group**. +1. Select **Apply force delete for selected Virtual machines and Virtual machine scale sets**. -4. Enter *myRouteServerRG* and select **Delete**. +1. Enter *myRouteServerRG* and select **Delete**. ## Next steps In this tutorial, you learned how to create and configure an Azure Route Server with a network virtual appliance (NVA). To learn more about Route Servers, see [Azure Route Server frequently asked questions (FAQs)](route-server-faq.md). + |
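As an aside to the Quagga steps above, once the deployment script finishes, you can also verify the BGP session from the NVA side. The following is a minimal sketch, assuming the script's default configuration and the *azureuser* account created earlier; the public IP address is a placeholder.

```console
# Sketch only: replace the placeholder with your Quagga VM's public IP address.
ssh azureuser@<quagga-vm-public-ip>

# vtysh can run show commands non-interactively with -c.
sudo vtysh -c "show ip bgp summary"   # sessions to the Route Server peer IPs should be Established
sudo vtysh -c "show ip route bgp"     # routes learned over BGP
```

If the sessions aren't established, recheck the static IP assignment (10.1.4.10) and the peering configuration on the Route Server.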
sap | Configure Control Plane | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/configure-control-plane.md | This table shows the Terraform parameters; these parameters need to be entered This table shows the parameters that define the resource naming. > [!div class="mx-tdCol2BreakAll "]-> | Variable | Description | Type | Notes | -> | -- | - | - | - | -> | `environment` | Identifier for the control plane (max 5 chars) | Mandatory | For example, `PROD` for a production environment and `NP` for a non-production environment. | -> | `location` | The Azure region in which to deploy. | Required | Use lower case | -> | 'name_override_file' | Name override file | Optional | see [Custom naming](naming-module.md) | -+> | Variable | Description | Type | Notes | +> | - | - | - | - | +> | `environment` | Identifier for the control plane (max 5 chars) | Mandatory | For example, `PROD` for a production environment and `NP` for a non-production environment. | +> | `location` | The Azure region in which to deploy. | Required | Use lower case | +> | `name_override_file` | Name override file | Optional | see [Custom naming](naming-module.md) | +> | `place_delete_lock_on_resources` | Place a delete lock on the key resources | Optional | | ### Resource Group This table shows the parameters that define the resource group. The table below defines the parameters used for defining the Key Vault information > | `deployer_username_secret_name` | The Azure Key Vault secret name for the deployer username | Optional | > | `deployer_password_secret_name` | The Azure Key Vault secret name for the deployer password | Optional | > | `additional_users_to_add_to_keyvault_policies` | A list of user object IDs to add to the deployment KeyVault access policies | Optional |-+> | `set_secret_expiry` | Set expiry of 12 months for key vault secrets | Optional | ### DNS Support This table shows the parameters that define the resource group. > [!div class="mx-tdCol2BreakAll "] > | Variable | Description | Type | > | -- | -- | -- |+> | `dns_label` | DNS name of the private DNS zone | Optional | > | `use_custom_dns_a_registration` | Use an existing Private DNS zone | Optional | > | `management_dns_subscription_id` | Subscription ID for the subscription containing the Private DNS Zone | Optional | > | `management_dns_resourcegroup_name` | Resource group containing the Private DNS Zone | Optional |-> | `dns_label` | DNS name of the private DNS zone | Optional | ### Extra parameters |
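To make the naming, Key Vault, and DNS parameters above concrete, here's a hypothetical excerpt of a control plane tfvars file. The file name follows the framework's naming convention, and all values are illustrative assumptions, not a definitive configuration; see the linked article for the full parameter reference.

```console
# Hypothetical excerpt of a control plane tfvars file; values are illustrative.
cat >> MGMT-WEEU-DEP00-INFRASTRUCTURE.tfvars <<'EOF'
environment                    = "MGMT"
location                       = "westeurope"
place_delete_lock_on_resources = true
set_secret_expiry              = true
dns_label                      = "azure.contoso.net"
EOF
```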
sap | Configure Extra Disks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/configure-extra-disks.md | Create a file using the structure shown below and save the file in the same folder > The path to the disk configuration needs to be relative to the folder containing the tfvars file. -The following sample code is an example configuration file. It defines three data disks (LUNs 0, 1, and 2), a log disk (LUN 9, using the Ultra SKU) and a backup disk (LUN 13, using the standard SSDN SKU). The application tier servers (Application, Central Services amd Web Dispatchers) will be deployed with jus a single 'sap' data disk. +The following sample code is an example configuration file. It defines three data disks (LUNs 0, 1, and 2), a log disk (LUN 9, using the Ultra SKU) and a backup disk (LUN 13). The application tier servers (Application, Central Services and Web Dispatchers) will be deployed with just a single 'sap' data disk. ++The three data disks will be striped using LVM. The log disk and the backup disk will each be mounted as single disks. + ```json { "db" : { "Default": { "compute": {- "vm_size" : "Standard_D4s_v3", + "vm_size" : "Standard_E20ds_v4", "swap_size_gb" : 2 }, "storage": [ |
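For readers unfamiliar with the striping mentioned above, the following generic Linux sketch shows how three data disks can be combined into a single striped LVM volume. The device and volume names are hypothetical, and the SAP deployment automation performs an equivalent setup for you; this only illustrates what "striped using LVM" means.

```console
# Illustrative only: stripe three data disks into one LVM logical volume.
# Device and volume names are hypothetical; the automation does this for you.
sudo pvcreate /dev/sdc /dev/sdd /dev/sde
sudo vgcreate vg_sap_data /dev/sdc /dev/sdd /dev/sde
sudo lvcreate --stripes 3 --stripesize 256k -l 100%FREE -n lv_sap_data vg_sap_data
sudo mkfs.xfs /dev/vg_sap_data/lv_sap_data
```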
search | Index Add Language Analyzers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/index-add-language-analyzers.md | A better experience is to search for individual words: 明るい (Bright), 私 Azure Cognitive Search supports 35 language analyzers backed by Lucene, and 50 language analyzers backed by proprietary Microsoft natural language processing technology used in Office and Bing. -Some developers might prefer the more familiar, simple, open-source solution of Lucene. Lucene language analyzers are faster, but the Microsoft analyzers have advanced capabilities, such as lemmatization, word decompounding (in languages like German, Danish, Dutch, Swedish, Norwegian, Estonian, Finish, Hungarian, Slovak) and entity recognition (URLs, emails, dates, numbers). If possible, you should run comparisons of both the Microsoft and Lucene analyzers to decide which one is a better fit. You can use [Analyze API](/rest/api/searchservice/test-analyzer) to see the tokens generated from a given text using a specific analyzer. +Some developers might prefer the more familiar, simple, open-source solution of Lucene. Lucene language analyzers are faster, but the Microsoft analyzers have advanced capabilities, such as lemmatization, word decompounding (in languages like German, Danish, Dutch, Swedish, Norwegian, Estonian, Finnish, Hungarian, Slovak) and entity recognition (URLs, emails, dates, numbers). If possible, you should run comparisons of both the Microsoft and Lucene analyzers to decide which one is a better fit. You can use [Analyze API](/rest/api/searchservice/test-analyzer) to see the tokens generated from a given text using a specific analyzer. Indexing with Microsoft analyzers is on average two to three times slower than their Lucene equivalents, depending on the language. Search performance shouldn't be significantly affected for average size queries. |
search | Index Add Scoring Profiles | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/index-add-scoring-profiles.md | POST /indexes/hotels/docs/search?api-version=2020-06-30 { "search": "inn", "scoringProfile": "geo",- "scoringParameters": [currentLocation--122.123,44.77233] + "scoringParameters": ["currentLocation--122.123,44.77233"] } ``` |
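For reference, the corrected request can be exercised end to end with curl; the service name and key below are placeholders for your own values.

```console
# Placeholders: substitute your search service name and a valid query or admin key.
curl -X POST "https://<service-name>.search.windows.net/indexes/hotels/docs/search?api-version=2020-06-30" \
  -H "Content-Type: application/json" \
  -H "api-key: <query-or-admin-key>" \
  -d '{
        "search": "inn",
        "scoringProfile": "geo",
        "scoringParameters": ["currentLocation--122.123,44.77233"]
      }'
```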
search | Search Features List | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-features-list.md | The following table summarizes features by category. For more information about | Category | Features | |-|-|-| Data sources | Search indexes can accept text from any source, provided it's submitted as a JSON document. <br/><br/> [**Indexers**](search-indexer-overview.md) are a feature that automates data import from supported data sources to extract searchable content in primary data stores. Indexers handle JSON serialization for you and most support some form of change and deletion detection. You can connect to a [variety of data sources](search-data-sources-gallery.md), including [Azure SQL Database](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md), [Azure Cosmos DB](search-howto-index-cosmosdb.md), or [Azure Blob storage](search-howto-indexing-azure-blob-storage.md). | +| Data sources | Search indexes can accept text from any source, provided it's submitted as a JSON document. <br/><br/>At the field level, you can also [index vectors](vector-search-how-to-create-index.md). Vector fields can co-exist with nonvector fields in the same document.<br/><br/> [**Indexers**](search-indexer-overview.md) are a feature that automates data import from supported data sources to extract searchable content in primary data stores. Indexers handle JSON serialization for you and most support some form of change and deletion detection. You can connect to a [variety of data sources](search-data-sources-gallery.md), including [Azure SQL Database](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md), [Azure Cosmos DB](search-howto-index-cosmosdb.md), or [Azure Blob storage](search-howto-indexing-azure-blob-storage.md). | | Hierarchical and nested data structures | [**Complex types**](search-howto-complex-data-types.md) and collections allow you to model virtually any type of JSON structure within a search index. One-to-many and many-to-many cardinality can be expressed natively through collections, complex types, and collections of complex types.| | Linguistic analysis | Analyzers are components used for text processing during indexing and search operations. By default, you can use the general-purpose Standard Lucene analyzer, or override the default with a language analyzer, a custom analyzer that you configure, or another predefined analyzer that produces tokens in the format you require. <br/><br/>[**Language analyzers**](index-add-language-analyzers.md) from Lucene or Microsoft are used to intelligently handle language-specific linguistics including verb tenses, gender, irregular plural nouns (for example, 'mouse' vs. 'mice'), word de-compounding, word-breaking (for languages with no spaces), and more. <br/><br/>[**Custom lexical analyzers**](index-add-custom-analyzers.md) are used for complex query forms such as phonetic matching and regular expressions.<br/><br/> | The following table summarizes features by category. For more information about | Category | Features | |-|-| |Free-form text search | [**Full-text search**](search-lucene-query-architecture.md) is a primary use case for most search-based apps. Queries can be formulated using a supported syntax. <br/><br/>[**Simple query syntax**](query-simple-syntax.md) provides logical operators, phrase search operators, suffix operators, precedence operators. 
<br/><br/>[**Full Lucene query syntax**](query-lucene-syntax.md) includes all operations in simple syntax, with extensions for fuzzy search, proximity search, term boosting, and regular expressions.|+|Vector queries| [**Vector search (preview)**](vector-search-overview.md) adds [query support for vector data](vector-search-how-to-query.md). | | Relevance | [**Simple scoring**](index-add-scoring-profiles.md) is a key benefit of Azure Cognitive Search. Scoring profiles are used to model relevance as a function of values in the documents themselves. For example, you might want newer products or discounted products to appear higher in the search results. You can also build scoring profiles using tags for personalized scoring based on customer search preferences you've tracked and stored separately. <br/><br/>[**Semantic search (preview)**](semantic-search-overview.md) is a premium feature that reranks results based on semantic relevance to the query. Depending on your content and scenario, it can significantly improve search relevance with minimal configuration or effort. | | Geospatial search | [**Geospatial functions**](search-query-odata-geo-spatial-functions.md) filter over and match on geographic coordinates. You can [match on distance](search-query-simple-examples.md#example-6-geospatial-search) or by inclusion in a polygon shape. | | Filters and facets | [**Faceted navigation**](search-faceted-navigation.md) is enabled through a single query parameter. Azure Cognitive Search returns a faceted navigation structure you can use as the code behind a categories list, for self-directed filtering (for example, to filter catalog items by price-range or brand). <br/><br/> [**Filters**](query-odata-filter-orderby-syntax.md) can be used to incorporate faceted navigation into your application's UI, enhance query formulation, and filter based on user- or developer-specified criteria. Create filters using the OData syntax. | |
search | Search Get Started Vector | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-vector.md | api-key: {{admin-api-key}} ### Semantic hybrid search +In Cognitive Search, semantic search and vector search are separate features, but you can use them together as described in this example. Semantic search adds language representation models that rerank search results based on query intent. This feature is optional, and the transactions against the language models are billable. + Assuming that you've [enabled semantic search](semantic-search-overview.md#enable-semantic-search) and your index definition includes a [semantic configuration](semantic-how-to-query-request.md), you can formulate a query that includes vector search, plus keyword search with semantic ranking, captions, answers, and spell check. ```http |
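As a rough sketch of what such a query can look like: the field name `contentVector` and the configuration name `my-semantic-config` are assumptions, the query vector is truncated for readability (a real one must match the vector field's dimensions), and the exact body shape should be confirmed against the 2023-07-01-Preview reference.

```console
# Sketch only: field and configuration names are hypothetical, and a real
# query vector has as many dimensions as the vector field defines.
curl -X POST "https://<service-name>.search.windows.net/indexes/<index-name>/docs/search?api-version=2023-07-01-Preview" \
  -H "Content-Type: application/json" \
  -H "api-key: <admin-api-key>" \
  -d '{
        "search": "historic hotel near downtown",
        "vector": { "value": [0.012, -0.031, 0.044], "fields": "contentVector", "k": 10 },
        "queryType": "semantic",
        "semanticConfiguration": "my-semantic-config",
        "queryLanguage": "en-us",
        "captions": "extractive",
        "answers": "extractive"
      }'
```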
search | Search What Is Azure Search | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-what-is-azure-search.md | Azure Cognitive Search ([formerly known as "Azure Search"](whats-new.md#new-serv Search is foundational to any app that surfaces text to users, where common scenarios include catalog or document search, online retail apps, or data exploration over proprietary content. When you create a search service, you'll work with the following capabilities: -+ A search engine for full text search over a search index containing user-owned content ++ A search engine for full text and [vector search](vector-search-overview.md) over a search index containing user-owned content + Rich indexing, with [lexical analysis](search-analyzers.md) and [optional AI enrichment](cognitive-search-concept-intro.md) for content extraction and transformation-+ Rich query syntax for text search, fuzzy search, autocomplete, geo-search and more ++ Rich query syntax for [vector queries](vector-search-how-to-query.md), text search, fuzzy search, autocomplete, geo-search and more + Programmability through REST APIs and client libraries in Azure SDKs + Azure integration at the data layer, machine learning layer, and AI (Cognitive Services) Across the Azure platform, Cognitive Search can integrate with other Azure servi On the search service itself, the two primary workloads are *indexing* and *querying*. -+ [**Indexing**](search-what-is-an-index.md) is an intake process that loads content into your search service and makes it searchable. Internally, inbound text is processed into tokens and stored in inverted indexes for fast scans. You can upload JSON documents, or use an indexer to serialize your data into JSON. ++ [**Indexing**](search-what-is-an-index.md) is an intake process that loads content into your search service and makes it searchable. Internally, inbound text is processed into tokens and stored in inverted indexes, and inbound vectors are stored in vector indexes. You can upload JSON documents, or use an indexer to serialize your data into JSON. [AI enrichment](cognitive-search-concept-intro.md) through [cognitive skills](cognitive-search-working-with-skillsets.md) is an extension of indexing. If your content needs image or language analysis before it can be indexed, AI enrichment can extract text embedded in application files, translate text, and also infer text and structure from non-text files by analyzing the content. |
search | Vector Search Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-overview.md | Title: Vector search description: Describes concepts, scenarios, and availability of the vector search feature in Cognitive Search. --++ Previously updated : 07/07/2023 Last updated : 07/10/2023 # Vector search within Azure Cognitive Search Last updated 07/07/2023 > [!IMPORTANT] > Vector search is in public preview under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). It's available through the Azure portal, preview REST API, and [alpha SDKs](https://github.com/Azure/cognitive-search-vector-pr#readme). -This article is a high-level introduction to vector search in Azure Cognitive Search. It also explains integration with other Azure services and covers the core concepts you need to know for vector search development. +This article is a high-level introduction to vector search in Azure Cognitive Search. It also explains integration with other Azure services and covers the core concepts you should know for vector search development. We recommend this article for background, but if you'd rather get started, follow these steps: We recommend this article for background, but if you'd rather get started, follo ## What's vector search in Cognitive Search? -Azure Cognitive Search can efficiently store and retrieve vector embeddings using Approximate Nearest Neighbor methods (ANN). Vector search can be used to power similarity search, multi-modal search, recommendations engines, or applications implementing the [Retrieval Augmented Generation (RAG) architecture](https://arxiv.org/abs/2005.11401). +Vector search is a new capability for indexing, storing, and retrieving vector embeddings. You can use it to power similarity search, multi-modal search, recommendations engines, or applications implementing the [Retrieval Augmented Generation (RAG) architecture](https://arxiv.org/abs/2005.11401). Support for vector search is in public preview and available through the [**2023-07-01-Preview REST APIs**](/rest/api/searchservice/index-preview). To use vector search, define a *vector field* in the index definition and index documents with vector data. Then you can issue a search request with a query vector, returning documents with the requested `k` nearest neighbors (kNN) according to the selected vector similarity metric. Scenarios for vector search include: + **Vector search across different data types (multi-modal)**. Encode images, text, audio, and video, or even a mix of them (for example, with models like CLIP) and do a similarity search across them. -+ **Multi-lingual search**: Use a multi-lingual embeddings model to represent your document in multiple languages in a single vector space to find documents regardless of the language they are in. ++ **Multi-lingual search**. Use a multi-lingual embeddings model to represent your document in multiple languages in a single vector space to find documents regardless of the language they are in. -+ **Filtered vector search**: Use [filters](search-filters.md) with vector queries to select a specific category of indexed documents, or to implement document-level security, geospatial search, and more. ++ **Filtered vector search**. 
Use [filters](search-filters.md) with vector queries to select a specific category of indexed documents, or to implement document-level security, geospatial search, and more. + **Hybrid search**. For text data, combine the best of vector retrieval and keyword retrieval to obtain the best results. Use with [semantic search (preview)](semantic-search-overview.md) for even more accuracy with L2 reranking using the same language models that power Bing. You can use other Azure services to provide embeddings and data storage. + Azure Cognitive Search can automatically index vector data from two data sources: [Azure blob indexers](search-howto-indexing-azure-blob-storage.md) and [Azure Cosmos DB for NoSQL indexers](search-howto-index-cosmosdb.md). For more information, see [Add vector fields to a search index.](vector-search-how-to-create-index.md) -+ [LangChain](https://python.langchain.com/docs/get_started/introduction.html) is a framework for developing applications powered by language models. Use the [Azure Cognitive Search vector store integraton](https://python.langchain.com/docs/modules/data_connection/vectorstores/integrations/azuresearch) to simplify the creation of applications using LLMs with Azure Cognitive Search as your vector datastore. ++ [LangChain](https://docs.langchain.com/docs/) is a framework for developing applications powered by language models. Use the [Azure Cognitive Search vector store integration](https://python.langchain.com/docs/modules/data_connection/vectorstores/integrations/azuresearch) to simplify the creation of applications using LLMs with Azure Cognitive Search as your vector datastore. -+ [Semantic kernel](https://github.com/microsoft/semantic-kernel/blob/main/README.md) is a lightweight SDK enabling integration of AI Large Language Models (LLMs) with conventional programming languages. ++ [Semantic kernel](https://github.com/microsoft/semantic-kernel/blob/main/README.md) is a lightweight SDK enabling integration of AI Large Language Models (LLMs) with conventional programming languages. It's useful for chunking large documents in a larger workflow that sends inputs to embedding models. ## Vector search concepts Vector search is a method of information retrieval that aims to overcome the lim ### Embeddings and vectorization -*Embeddings* are a specific type of vector representation created by machine learning models that capture the semantic meaning of text, or abstract representations of other content such as images. Natural language machine learning models are trained on large amounts of data to identify patterns and relationships between words. During training, they learn to represent any input as a vector of real numbers in an intermediary step called the *encoder*. After training is complete, these language models can be modified so the intermediary vector representation becomes the model's output. The resulting embeddings are high-dimensional vectors, where words with similar meanings are closer together in the vector space, as explained in [this Azure OpenAI Service article](/azure/cognitive-services/openai/concepts/understand-embeddings). The effectiveness of vector search in retrieving relevant information depends on the effectiveness of the embedding model in distilling the meaning of documents and queries into the resulting vector. The best models are well-trained on the types of data they're representing. This can be achieved by training directly on the problem space or fine-tuning a general-purpose model, such as GPT. 
Azure Cognitive Search today doesn't provide a way to vectorize documents and queries, leaving it up to you to pick the best embedding model for your data. The new vector search APIs allow you to store and retrieve vectors efficiently. +*Embeddings* are a specific type of vector representation created by machine learning models that capture the semantic meaning of text, or representations of other content such as images. Natural language machine learning models are trained on large amounts of data to identify patterns and relationships between words. During training, they learn to represent any input as a vector of real numbers in an intermediary step called the *encoder*. After training is complete, these language models can be modified so the intermediary vector representation becomes the model's output. The resulting embeddings are high-dimensional vectors, where words with similar meanings are closer together in the vector space, as explained in [this Azure OpenAI Service article](/azure/cognitive-services/openai/concepts/understand-embeddings). ++The effectiveness of vector search in retrieving relevant information depends on the effectiveness of the embedding model in distilling the meaning of documents and queries into the resulting vector. The best models are well-trained on the types of data they're representing. You can evaluate existing models such as Azure OpenAI text-embedding-ada-002, bring your own model that's trained directly on the problem space, or fine-tune a general-purpose model. Azure Cognitive Search doesn't impose constraints on which model you choose, so pick the best one for your data. In order to create effective embeddings for vector search, it's important to take input size limitations into account. Therefore, we recommend following the [guidelines for chunking data](vector-search-how-to-chunk-documents.md) before generating embeddings. This best practice ensures that the embeddings accurately capture the relevant information and enable more efficient vector search. ### What is the embedding space? -*Embedding space* is the corpus for vector search. Machine learning models create the embedding space by mapping individual words, phrases, or documents (for natural language processing), images, or other forms of data into an abstract representation comprised of a vector of real numbers representing a coordinate in a high-dimensional space. In this embedding space, similar items are located close together, and dissimilar items are located farther apart. +*Embedding space* is the corpus for vector search. Machine learning models create the embedding space by mapping individual words, phrases, or documents (for natural language processing), images, or other forms of data into a representation comprised of a vector of real numbers representing a coordinate in a high-dimensional space. In this embedding space, similar items are located close together, and dissimilar items are located farther apart. -For example, documents that talk about different species of dogs would be clustered close together in the embedding space. Documents about cats would be close together, but farther from the dogs cluster while still being in the neighborhood for animals. Dissimilar concepts such as cloud computing would be much farther away. In practice, these embedding spaces are very abstract and don't have well-defined, human-interpretable meanings, but the core idea stays the same. 
+For example, documents that talk about different species of dogs would be clustered close together in the embedding space. Documents about cats would be close together, but farther from the dogs cluster while still being in the neighborhood for animals. Dissimilar concepts such as cloud computing would be much farther away. In practice, these embedding spaces are abstract and don't have well-defined, human-interpretable meanings, but the core idea stays the same. Popular vector similarity metrics include the following, which are all supported by Azure Cognitive Search. -+ `euclidean`: Also known as _l2-norm_, this measures the length of the vector difference between two vectors. ++ `euclidean` (also known as `L2 norm`): This measures the length of the vector difference between two vectors. + `cosine`: This measures the angle between two vectors, and is not affected by differing vector lengths. + `dotProduct`: This measures both the length of each of the pair of two vectors, and the angle between them. For normalized vectors, this is identical to `cosine` similarity, but slightly more performant. ### Approximate Nearest Neighbors -Approximate Nearest Neighbor search (ANN) is a class of algorithms for finding matches in vector space. This class of algorithms employs different data structures or data partitioning methods to significantly reduce the search space to accelerate query processing. The specific approach depends on the algorithm. While sacrificing some precision, these algorithms offer scalable and faster retrieval of approximate nearest neighbors, which makes them ideal for balancing accuracy and efficiency in modern information retrieval applications. You may adjust the parameters of your algorithm to fine-tune the recall, latency, memory, and disk footprint requirements of your search application. +Approximate Nearest Neighbor search (ANN) is a class of algorithms for finding matches in vector space. This class of algorithms employs different data structures or data partitioning methods to significantly reduce the search space to accelerate query processing. The specific approach depends on the algorithm. While this approach sacrifices some accuracy, these algorithms offer scalable and faster retrieval of approximate nearest neighbors, which makes them ideal for balancing accuracy and efficiency in modern information retrieval applications. You may adjust the parameters of your algorithm to fine-tune the recall, latency, memory, and disk footprint requirements of your search application. Azure Cognitive Search uses Hierarchical Navigation Small Worlds (HNSW), which is a leading algorithm optimized for high-recall, low-latency applications where data distribution is unknown or can change frequently. |
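For reference, the three similarity metrics above correspond to the following standard formulas for vectors $a$ and $b$ of dimension $n$; note that for unit-length (normalized) vectors the cosine and dot product scores coincide, which is why the text calls them identical in that case.

```latex
d_{\mathrm{euclidean}}(a, b) = \lVert a - b \rVert_2 = \sqrt{\sum_{i=1}^{n} (a_i - b_i)^2}
\qquad
\mathrm{cosine}(a, b) = \frac{a \cdot b}{\lVert a \rVert \, \lVert b \rVert}
\qquad
\mathrm{dotProduct}(a, b) = a \cdot b = \sum_{i=1}^{n} a_i b_i
```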
search | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/whats-new.md | Learn about the latest updates to Azure Cognitive Search functionality, docs, an | Item | Type | Description | |--||--|-| [**Vector search public preview**](vector-search-overview.md) | Feature | Adds vector fields to a search index for similarity search over vector representations of text, image, and multilingual content. | +| [**Vector search public preview**](vector-search-overview.md) | Feature | Adds vector fields to a search index for similarity search over vector representations of data. | | [**2023-07-01-Preview Search REST API**](/rest/api/searchservice/index-preview) | API | New preview version of the Search REST APIs that adds support for vector search. This API version is inclusive of all preview features. If you're using earlier previews, switch to **2023-07-01-preview** with no loss of functionality. | ## May 2023 Azure Search was renamed to **Azure Cognitive Search** in October 2019 to reflec ## Service updates [Service update announcements](https://azure.microsoft.com/updates/?product=search&status=all) for Azure Cognitive Search can be found on the Azure website.+ |
security | Threat Modeling Tool Releases 73306305 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-releases-73306305.md | + + Title: Microsoft Threat Modeling Tool release 06/30/2023 - Azure +description: Documenting the release notes for the threat modeling tool release 7.3.30630.5. ++++ Last updated : 06/30/2023+++# Threat Modeling Tool update release 7.3.30630.5 - 06/30/2023 ++Version 7.3.30630.5 of the Microsoft Threat Modeling Tool (TMT) was released on June 30 2023 and contains the following changes: ++- Bug fixes +- Accessibility fixes ++## Known issues ++### Errors related to TMT7.application file deserialization ++#### Issue ++Some customers have reported receiving the following error message when downloading the Threat Modeling Tool: ++``` +The threat model file '$PATH\TMT7.application' could not be deserialized. File is not an actual threat model or the threat model may be corrupted. +``` ++This error occurs because some browsers don't natively support ClickOnce installation. In those cases the ClickOnce application file is downloaded to the user's hard drive. ++#### Workaround ++This error will continue to appear if the Threat Modeling Tool is launched by double-clicking on the TMT7.application file. However, after bypassing the error the tool will function normally. Rather than launching the Threat Modeling Tool by double-clicking the TMT7.application file, users should utilize shortcuts created in the Windows Menu during the installation to start the Threat Modeling Tool. ++## System requirements ++- Supported Operating Systems + - [Microsoft Windows 10 Anniversary Update](https://blogs.windows.com/windowsexperience/2016/08/02/how-to-get-the-windows-10-anniversary-update/#HTkoK5Zdv0g2F2Zq.97) or later +- .NET Version Required + - [.NET 4.7.1](https://go.microsoft.com/fwlink/?LinkId=863262) or later +- Additional Requirements + - An Internet connection is required to receive updates to the tool as well as templates. ++## Documentation and feedback ++- Documentation for the Threat Modeling Tool is located on [docs.microsoft.com](./threat-modeling-tool.md), and includes information [about using the tool](./threat-modeling-tool-getting-started.md). ++## Next steps ++Download the latest version of the [Microsoft Threat Modeling Tool](https://aka.ms/threatmodelingtool). |
security | Threat Modeling Tool Releases | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-releases.md | The Microsoft Threat Modeling Tool is currently released as a free [click-to-dow ## Release Notes +- [Microsoft Threat Modeling Tool GA Release Version 7.3.30630.5](threat-modeling-tool-releases-73306305.md) - June 30 2023 - [Microsoft Threat Modeling Tool GA Release Version 7.3.21108.2](threat-modeling-tool-releases-73211082.md) - November 8 2022 - [Microsoft Threat Modeling Tool GA Release Version 7.3.20927.9](threat-modeling-tool-releases-73209279.md) - September 27 2022 - [Microsoft Threat Modeling Tool GA Release Version 7.3.00729.1](threat-modeling-tool-releases-73007291.md) - July 29 2020 |
sentinel | Connect Threat Intelligence Upload Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-threat-intelligence-upload-api.md | Enable the **Threat Intelligence Upload Indicators API** data connector to allow For more information about how to manage the solution components, see [Discover and deploy out-of-the-box content](sentinel-solutions-deploy.md). -1. To configure the upload API data connector, select the **Data connectors** menu. -1. Find and select the **Threat Intelligence Upload Indicators API** data connector > **Open connector page** button. +1. The data connector is now visible on the **Data connectors** page. Open the data connector page to find more information on configuring your application to use this API. :::image type="content" source="media/connect-threat-intelligence-upload-api/upload-api-data-connector.png" alt-text="Screenshot displaying the data connectors page with the upload API data connector listed." lightbox="media/connect-threat-intelligence-upload-api/upload-api-data-connector.png"::: -1. Select the **Connect** button. - ### Configure your TIP solution or custom application The following configuration information is required by the upload indicators API:- - Application (client) ID - - Client secret - - Microsoft Sentinel workspace ID +- Application (client) ID +- Client secret +- Microsoft Sentinel workspace ID ++Enter these values in the configuration of your integrated TIP or custom solution where required. -1. Enter these values in the configuration of your integrated TIP or custom solution where required. 1. Submit the indicators to the Microsoft Sentinel upload API. To learn more about the upload indicators API, see the reference document [Microsoft Sentinel upload indicators API](upload-indicators-api.md). 1. Within a few minutes, threat indicators should begin flowing into your Microsoft Sentinel workspace. Find the new indicators in the **Threat intelligence** blade, accessible from the Microsoft Sentinel navigation menu. 1. The data connector status reflects the **Connected** status and the **Data received** graph is updated once indicators are submitted successfully. |
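Before your TIP or custom application can submit indicators, it needs a bearer token for the registered application. The following is a minimal sketch using the OAuth 2.0 client credentials flow; the scope shown is an assumption, so confirm the required scope in the [Microsoft Sentinel upload indicators API](upload-indicators-api.md) reference.

```console
# Sketch: tenant ID, client ID, and client secret come from your app registration.
# The scope below is an assumption; verify it in the upload indicators API reference.
curl -X POST "https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/token" \
  -d "client_id=<application-client-id>" \
  -d "client_secret=<client-secret>" \
  -d "grant_type=client_credentials" \
  -d "scope=https://management.azure.com/.default"
```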
sentinel | Oracle Cloud Infrastructure Using Azure Function | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/oracle-cloud-infrastructure-using-azure-function.md | To integrate with Oracle Cloud Infrastructure (using Azure Functions) make sure 7. Select Source: Logging 8. Select Target: Streaming 9. (Optional) Configure *Log Group*, *Filters* or use custom search query to stream only logs that you need.-10. Configure Target - select the strem created before. +10. Configure Target - select the stream created before. 11. Click *Create* Check the documentation to get more information about [Streaming](https://docs.oracle.com/en-us/iaas/Content/Streaming/home.htm) and [Service Connectors](https://docs.oracle.com/en-us/iaas/Content/service-connector-hub/home.htm). |
service-fabric | Container Image Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/container-image-management.md | + + Title: Azure Service Fabric container image management +description: How to use container image management in a service fabric cluster. +++++ Last updated : 06/22/2023+++# Container Image Management +The activation path during Service Fabric container deployment handles downloading the container images to the VM on which the containers are running. Once the containers have been removed from the cluster and their application types have been unregistered, there's a cleanup cycle that deletes the container images. This container image cleanup works only if the container image has been hard coded in the service manifest. For existing Service Fabric runtime versions, the configurations supporting the cleanup of the container images are as follows: ++## Settings ++ ```json + "fabricSettings": [ + { + "name": "Hosting", + "parameters": [ + { + "name": "PruneContainerImages", + "value": "true" + }, + { + "name": "CacheCleanupScanInterval", + "value": "3600" + } + ] + } + ] + ``` ++|Setting |Description | + | | | + |PruneContainerImages |Setting to enable or disable pruning of container images when the application type is unregistered. | + |CacheCleanupScanInterval |Setting in seconds determining how often the cleanup cycle runs. | ++## Container Image Management v2 +Starting with Service Fabric version 10.0, there's a newer version of the container image deletion flow. This flow cleans up container images irrespective of how the container images may have been defined - either hard coded or parameterized during application deployment. The PruneContainerImages and ContainerImageDeletionEnabled configurations are mutually exclusive, and cluster upgrade validation ensures that one or the other is switched on, but not both. The configurations supporting this feature are as follows: ++### Settings ++```json + "fabricSettings": [ + { + "name": "ContainerImageDeletionEnabled", + "value": "true" + }, + { + "name": "ContainerImageCleanupInterval", + "value": "3600" + }, + { + "name": "ContainerImageTTL", + "value": "3600" + }, + { + "name": "ContainerImageDeletionOnAppInstanceDeletionEnabled", + "value": "true" + }, + { + "name": "ContainerImagesToSkip", + "value": "microsoft/windowsservercore|microsoft/nanoserver" + } + ] + } + ] + ``` ++|Setting |Description | + | | | + |ContainerImageDeletionEnabled |Setting to enable or disable deletion of container images. | + |ContainerImageCleanupInterval |Time interval for cleaning up unused container images. | + |ContainerImageTTL |Time to live for container images once they're eligible for removal (not referenced by containers on the VM and the application is deleted (if ContainerImageDeletionOnAppInstanceDeletionEnabled is enabled)). | + |ContainerImageDeletionOnAppInstanceDeletionEnabled |Setting to enable or disable deletion of expired ttl container images only after the application has been deleted as well. | + |ContainerImagesToSkip |When set, enables the container runtime to skip deleting images that match any of the set of regular expressions. The \| character separates each expression. Example: "mcr.microsoft.com/.+\|docker.io/library/alpine:latest" - this example matches everything prefixed with "mcr.microsoft.com/" and matches exactly "docker.io/library/alpine:latest". 
By default, we don't delete the known Windows base images microsoft/windowsservercore or microsoft/nanoserver. | ++## Next steps +See the following article for related information: +* [Service Fabric and containers][containers-introduction-link] |
service-fabric | Service Fabric Cluster Creation Setup Azure Ad Via Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-creation-setup-azure-ad-via-portal.md | Select **Properties**, and then select **No** for **Assignment required?**.  +For the cluster app registration only, go to the [Enterprise Applications](https://portal.azure.com/#view/Microsoft_AAD_IAM/StartboardApplicationsMenuBlade/~/AppAppsPreview/menuId~/null) pane. ++Select **Properties**, and then select **Yes** for **Assignment required?**. ++ + ## Assign application roles to users After you create Azure AD app registrations for Service Fabric, you can modify Azure AD users to use app registrations to connect to a cluster by using Azure AD. |
service-fabric | Service Fabric Manage Application In Visual Studio | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-manage-application-in-visual-studio.md | Last updated 07/14/2022 # Use Visual Studio to simplify writing and managing your Service Fabric applications You can manage your Azure Service Fabric applications and services through Visual Studio. Once you've [set up your development environment](service-fabric-get-started.md), you can use Visual Studio to create Service Fabric applications, add services, or package, register, and deploy applications in your local development cluster. +> [!NOTE] +> With the transition from ADAL to MSAL, administrators are now required to explicitly grant permission to the Visual Studio client for publishing applications by adding the following in the cluster's Azure AD App Registration. +> - Visual Studio 2022 and future versions: 04f0c124-f2bc-4f59-8241-bf6df9866bbd +> - Visual Studio 2019 and earlier: 872cd9fa-d31f-45e0-9eab-6e460a02d1f1 + ## Deploy your Service Fabric application By default, deploying an application combines the following steps into one simple operation: |
storage | Convert Append And Page Blobs To Block Blobs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/convert-append-and-page-blobs-to-block-blobs.md | To convert blobs, copy them to a new location by using PowerShell, Azure CLI, or containerName='<source container name>' srcblobName='<source append or page blob name>' destcontainerName='<destination container name>'- destblobName='<destination block blob name>' + destBlobName='<destination block blob name>' destTier='<destination block blob tier>' az storage blob copy start --account-name $accountName --destination-blob $destBlobName --destination-container $destcontainerName --destination-blob-type BlockBlob --source-blob $srcblobName --source-container $containerName --tier $destTier |
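Blob copies are asynchronous, so after starting the copy you may want to poll its status before deleting the source. A minimal sketch using the same variables (the `--query` projection is illustrative):

```console
# Poll the copy status of the destination blob; variables as defined above.
az storage blob show \
  --account-name $accountName \
  --container-name $destcontainerName \
  --name $destBlobName \
  --query "properties.copy.{status: status, progress: progress}" \
  --output table
```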
storage | Network File System Protocol Support How To | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/network-file-system-protocol-support-how-to.md | Create a directory on your Linux system and then mount the container in the stor 1. Create an entry in the /etc/fstab file by adding the following line: ```- <storage-account-name>.blob.core.windows.net:/<storage-account-name>/<container-name> /nfsdata aznfs defaults,sec=sys,vers=3,nolock,proto=tcp,nofail 0 0 + <storage-account-name>.blob.core.windows.net:/<storage-account-name>/<container-name> /nfsdata aznfs defaults,sec=sys,vers=3,nolock,proto=tcp,nofail,_netdev 0 0 ``` 2. Run the following command to immediately process the /etc/fstab entries and attempt to mount the preceding path: |
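After adding the entry, a common way to apply and verify it is shown below; these are generic Linux commands, and the mount point matches the /nfsdata path used in the entry above.

```console
# Mount the path defined in /etc/fstab and confirm that it's active.
sudo mount /nfsdata
df -h /nfsdata
mount | grep nfsdata
```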
storage | File Sync Choose Cloud Tiering Policies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-choose-cloud-tiering-policies.md | Title: Choose Azure File Sync cloud tiering policies description: Details on what to keep in mind when choosing Azure File Sync cloud tiering policies. -+ Last updated 04/13/2021 - # Choose cloud tiering policies |
storage | File Sync Cloud Tiering Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-cloud-tiering-overview.md | Title: Understand Azure File Sync cloud tiering description: Understand cloud tiering, an optional Azure File Sync feature. Frequently accessed files are cached locally on the server; others are tiered to Azure Files. -+ Last updated 04/13/2023 - # Cloud tiering overview |
storage | File Sync Cloud Tiering Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-cloud-tiering-policy.md | Title: Azure File Sync cloud tiering policies description: Details on how the date and volume free space policies work together for different scenarios. -+ Last updated 06/07/2022 - # Cloud tiering policies |
storage | File Sync Deployment Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-deployment-guide.md | Title: Deploy Azure File Sync description: Learn how to deploy Azure File Sync from start to finish using the Azure portal, PowerShell, or the Azure CLI. -+ Last updated 02/03/2023 - ms.devlang: azurecli |
storage | File Sync Disaster Recovery Best Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-disaster-recovery-best-practices.md | Title: Best practices for disaster recovery with Azure File Sync description: Learn about best practices for disaster recovery with Azure File Sync, including high availability, data protection, and data redundancy. -+ Last updated 04/18/2023 - # Best practices for disaster recovery with Azure File Sync |
storage | File Sync Extend Servers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-extend-servers.md | Title: Tutorial - Extend Windows file servers with Azure File Sync description: Learn how to extend Windows file servers with Azure File Sync, from start to finish. -+ Last updated 06/21/2022 - #Customer intent: As an IT administrator, I want see how to extend Windows file servers with Azure File Sync, so I can evaluate the process for extending the storage capacity of my Windows servers. |
storage | File Sync Firewall And Proxy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-firewall-and-proxy.md | Title: Azure File Sync on-premises firewall and proxy settings description: Understand Azure File Sync on-premises proxy and firewall settings. Review configuration details for ports, networks, and special connections to Azure. -+ Last updated 04/04/2023 - # Azure File Sync proxy and firewall settings |
storage | File Sync How To Manage Tiered Files | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-how-to-manage-tiered-files.md | Title: How to manage Azure File Sync tiered files description: Tips and PowerShell commandlets to help you manage tiered files -+ Last updated 06/06/2022 - # How to manage tiered files |
storage | File Sync Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-introduction.md | Title: Introduction to Azure File Sync description: An overview of Azure File Sync, a service that enables you to create and use network file shares in the cloud using the industry standard SMB protocol. -+ Last updated 09/14/2022 - # What is Azure File Sync? |
storage | File Sync Modify Sync Topology | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-modify-sync-topology.md | Title: Modify your Azure File Sync topology description: Guidance on how to modify your Azure File Sync sync topology -+ Last updated 4/23/2021 - # Modify your Azure File Sync topology |
storage | File Sync Monitor Cloud Tiering | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-monitor-cloud-tiering.md | Title: Monitor Azure File Sync cloud tiering description: Use metrics to monitor your cloud tiering policies. You can monitor files synced, server cache size, cache hit rate, and more. -+ Last updated 05/11/2023 - # Monitor cloud tiering |
storage | File Sync Monitoring | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-monitoring.md | Title: Monitor Azure File Sync description: Review how to monitor your Azure File Sync deployment by using Azure Monitor, Storage Sync Service, and Windows Server. -+ Last updated 01/3/2022 - # Monitor Azure File Sync |
storage | File Sync Networking Endpoints | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-networking-endpoints.md | Title: Configuring Azure File Sync network endpoints description: Learn how to configure Azure File Sync network endpoints. -+ Last updated 04/26/2023 - |
storage | File Sync Networking Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-networking-overview.md | Title: Azure File Sync networking considerations description: Learn how to configure networking to use Azure File Sync to cache files on-premises. -+ Last updated 09/14/2022 - # Azure File Sync networking considerations |
storage | File Sync Planning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-planning.md | Title: Planning for an Azure File Sync deployment description: Plan for a deployment with Azure File Sync, a service that allows you to cache several Azure file shares on an on-premises Windows Server or cloud VM. -+ Last updated 02/03/2023 - |
storage | File Sync Resource Move | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-resource-move.md | Title: Azure File Sync resource moves and topology changes description: Learn how to move sync resources across resource groups, subscriptions, and Azure Active Directory tenants. -+ Last updated 03/15/2023 - # Move Azure File Sync resources to a different resource group, subscription, or Azure AD tenant |
storage | File Sync Server Endpoint Create | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-server-endpoint-create.md | Title: Create an Azure File Sync server endpoint description: Understand the options during server endpoint creation and how to best apply them to your situation. -+ Last updated 06/01/2021 - # Create an Azure File Sync server endpoint |
storage | File Sync Server Endpoint Delete | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-server-endpoint-delete.md | Title: Deprovision your Azure File Sync server endpoint description: Guidance on how to deprovision your Azure File Sync server endpoint based on your use case -+ Last updated 6/01/2021 - # Deprovision or delete your Azure File Sync server endpoint |
storage | File Sync Server Recovery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-server-recovery.md | Title: Recover an Azure File Sync equipped server from a server-level failure description: Learn how to recover an Azure File Sync equipped server from a server-level failure -+ Last updated 12/07/2021 - # Recover an Azure File Sync equipped server from a server-level failure |
storage | File Sync Server Registration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-server-registration.md | Title: Manage registered servers with Azure File Sync description: Learn how to register and unregister a Windows Server with an Azure File Sync Storage Sync Service. -+ Last updated 06/15/2022 - # Manage registered servers with Azure File Sync |
storage | File Sync Storsimple Cost Comparison | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-storsimple-cost-comparison.md | Title: Comparing the costs of StorSimple to Azure File Sync description: Learn how you can save money and modernize your storage infrastructure by migrating from StorSimple to Azure File Sync. -+ Last updated 01/12/2023 - # Comparing the costs of StorSimple to Azure File Sync |
storage | Authorize Data Operations Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/authorize-data-operations-portal.md | Title: Authorize access to Azure file share data in the Azure portal description: When you access file data using the Azure portal, the portal makes requests to Azure Files behind the scenes. These requests can be authenticated and authorized using either your Azure AD account or the storage account access key. -+ Last updated 05/23/2023 - # Choose how to authorize access to file data in the Azure portal |
storage | Authorize Oauth Rest | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/authorize-oauth-rest.md | Title: Enable admin-level read and write access to Azure file shares using Azure Active Directory with Azure Files OAuth over REST (preview) description: Authorize access to Azure file shares and directories via the OAuth authentication protocol over REST APIs using Azure Active Directory (Azure AD). Assign Azure roles for access rights. Access files with an Azure AD account. -+ Last updated 05/11/2023 - |
storage | Files Manage Namespaces | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-manage-namespaces.md | Title: How to use DFS-N with Azure Files description: Common DFS-N use cases with Azure Files -+ Last updated 3/02/2021 - # How to use DFS Namespaces with Azure Files |
storage | Files Nfs Protocol | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-nfs-protocol.md | Title: NFS file shares in Azure Files description: Learn about file shares hosted in Azure Files using the Network File System (NFS) protocol. -+ Last updated 11/15/2022 - |
storage | Files Redundancy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-redundancy.md | Title: Data redundancy in Azure Files description: Understand the data redundancy options available in Azure file shares and how to choose the best fit for your availability and disaster recovery requirements. -+ Last updated 06/19/2023 - |
storage | Files Remove Smb1 Linux | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-remove-smb1-linux.md | Title: Secure your Azure and on-premises environments by removing SMB 1 on Linux description: Azure Files supports SMB 3.x and SMB 2.1, but not insecure legacy versions of SMB such as SMB 1. Before connecting to an Azure file share, you might wish to disable older versions of SMB such as SMB 1. -+ Last updated 02/23/2023 - # Remove SMB 1 on Linux |
storage | Files Reserve Capacity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-reserve-capacity.md | |
storage | Files Smb Protocol | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-smb-protocol.md | Title: SMB file shares in Azure Files description: Learn about file shares hosted in Azure Files using the Server Message Block (SMB) protocol. -+ Last updated 03/31/2023 - |
storage | Files Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-whats-new.md | Title: What's new in Azure Files and Azure File Sync description: Learn about new features and enhancements in Azure Files and Azure File Sync. -+ Last updated 05/24/2023 - # What's new in Azure Files |
storage | Geo Redundant Storage For Large File Shares | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/geo-redundant-storage-for-large-file-shares.md | Title: Azure Files geo-redundancy for large file shares (preview) description: Azure Files geo-redundancy for large file shares (preview) significantly improves standard SMB file share capacity and performance limits when using geo-redundant storage (GRS) and geo-zone redundant storage (GZRS) options. -+ Last updated 05/24/2023 - |
storage | Nfs Nconnect Performance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/nfs-nconnect-performance.md | Title: Improve NFS Azure file share performance with nconnect description: Learn how using nconnect with Linux clients can improve the performance of NFS Azure file shares at scale. -+ Last updated 03/20/2023 - # Improve NFS Azure file share performance with `nconnect` |
storage | Redundancy Premium File Shares | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/redundancy-premium-file-shares.md | Title: Azure Files zone-redundant storage (ZRS) support for premium file shares description: ZRS is supported for premium Azure file shares through the FileStorage storage account kind. Use this reference to determine the Azure regions in which ZRS is supported. -+ Last updated 03/29/2023 - |
storage | Storage Dotnet How To Use Files | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-dotnet-how-to-use-files.md | Title: Develop for Azure Files with .NET description: Learn how to develop .NET applications and services that use Azure Files to store data. -+ Last updated 10/02/2020 - ms.devlang: csharp |
storage | Storage Files Active Directory Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-active-directory-overview.md | Title: Overview - Azure Files identity-based authentication description: Azure Files supports identity-based authentication over SMB (Server Message Block) with Active Directory Domain Services (AD DS), Azure Active Directory Domain Services (Azure AD DS), and Azure Active Directory (Azure AD) Kerberos for hybrid identities. --+ Last updated 06/26/2023 |
storage | Storage Files Configure P2s Vpn Linux | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-configure-p2s-vpn-linux.md | Title: Configure a Point-to-Site (P2S) VPN on Linux for use with Azure Files description: How to configure a Point-to-Site (P2S) VPN on Linux for use with Azure Files -+ Last updated 02/07/2023 - |
storage | Storage Files Configure P2s Vpn Windows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-configure-p2s-vpn-windows.md | Title: Configure a Point-to-Site (P2S) VPN on Windows for use with Azure Files description: How to configure a Point-to-Site (P2S) VPN on Windows for use with Azure Files -+ Last updated 11/08/2022 - |
storage | Storage Files Configure S2s Vpn | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-configure-s2s-vpn.md | Title: Configure a Site-to-Site (S2S) VPN for use with Azure Files description: How to configure a Site-to-Site (S2S) VPN for use with Azure Files -+ Last updated 10/19/2019 - # Configure a Site-to-Site VPN for use with Azure Files |
storage | Storage Files Enable Soft Delete | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-enable-soft-delete.md | Title: Enable soft delete - Azure file shares description: Learn how to enable soft delete on Azure file shares for data recovery and preventing accidental deletion. -+ Last updated 04/05/2021 - |
storage | Storage Files Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-faq.md | Title: Frequently asked questions (FAQ) for Azure Files description: Get answers to Azure Files frequently asked questions. You can mount Azure file shares concurrently on cloud or on-premises Windows, Linux, or macOS deployments. -+ Last updated 05/16/2023 - - **Does Azure File Sync sync the LastWriteTime for directories?** - No, Azure File Sync doesn't sync the LastWriteTime for directories. + **Does Azure File Sync sync the LastWriteTime for directories? Why isn't the *date modified* timestamp on a directory updated when files within it are changed?** + No, Azure File Sync doesn't sync the LastWriteTime for directories. Furthermore, Azure Files doesn't update the **date modified** timestamp (LastWriteTime) for directories when files within the directory are changed. This is expected behavior. ## Security, authentication, and access control |
storage | Storage Files How To Mount Nfs Shares | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-how-to-mount-nfs-shares.md | Title: Mount an NFS Azure file share on Linux description: Learn how to mount a Network File System (NFS) Azure file share on Linux. -+ Last updated 02/06/2023 - |
storage | Storage Files Identity Ad Ds Assign Permissions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-ad-ds-assign-permissions.md | Title: Control access to Azure file shares by assigning share-level permissions description: Learn how to assign share-level permissions to an Azure Active Directory (Azure AD) identity that represents a hybrid user to control user access to Azure file shares with identity-based authentication. --+ Last updated 12/07/2022 |
storage | Storage Files Identity Ad Ds Configure Permissions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-ad-ds-configure-permissions.md | Title: Control what a user can do at the directory and file level - Azure Files description: Learn how to configure Windows ACLs for directory and file level permissions for Active Directory authentication to Azure file shares, allowing you to take advantage of granular access control. --+ Last updated 12/19/2022 |
storage | Storage Files Identity Ad Ds Enable | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-ad-ds-enable.md | Title: Enable AD DS authentication for Azure file shares description: Learn how to enable Active Directory Domain Services authentication over SMB for Azure file shares. Your domain-joined Windows virtual machines can then access Azure file shares by using AD DS credentials. --+ Last updated 03/28/2023 |
storage | Storage Files Identity Ad Ds Mount File Share | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-ad-ds-mount-file-share.md | Title: Mount Azure file share to an AD DS-joined VM description: Learn how to mount an Azure file share to your on-premises Active Directory Domain Services domain-joined machines. --+ Last updated 04/07/2023 |
storage | Storage Files Identity Ad Ds Update Password | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-ad-ds-update-password.md | Title: Update AD DS storage account password description: Learn how to update the password of the Active Directory Domain Services computer or service account that represents your storage account. This prevents authentication failures and keeps the storage account from being deleted when the password expires. --+ Last updated 11/17/2022 |
storage | Storage Files Identity Auth Active Directory Enable | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-auth-active-directory-enable.md | Title: Overview - On-premises AD DS authentication to Azure file shares description: Learn about Active Directory Domain Services (AD DS) authentication to Azure file shares. This article goes over supported scenarios, availability, and explains how the permissions work between your AD DS and Azure Active Directory. --+ Last updated 06/12/2023 |
storage | Storage Files Identity Auth Domain Services Enable | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-auth-domain-services-enable.md | Title: Use Azure Active Directory Domain Services (Azure AD DS) to authorize user access to Azure Files over SMB description: Learn how to enable identity-based authentication over Server Message Block (SMB) for Azure Files through Azure Active Directory Domain Services (Azure AD DS). Your domain-joined Windows VMs can then access Azure file shares by using Azure AD credentials. -+ Last updated 05/03/2023 - recommendations: false |
storage | Storage Files Identity Auth Hybrid Identities Enable | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-auth-hybrid-identities-enable.md | Title: Use Azure Active Directory to access Azure file shares over SMB for hybrid identities using Kerberos authentication description: Learn how to enable identity-based Kerberos authentication for hybrid user identities over Server Message Block (SMB) for Azure Files through Azure Active Directory (Azure AD). Your users can then access Azure file shares by using their Azure AD credentials. -+ Last updated 06/30/2023 - recommendations: false |
storage | Storage Files Identity Auth Linux Kerberos Enable | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-auth-linux-kerberos-enable.md | Title: Use on-premises Active Directory Domain Services or Azure Active Directory Domain Services to authorize access to Azure Files over SMB for Linux clients using Kerberos authentication description: Learn how to enable identity-based Kerberos authentication for Linux clients over Server Message Block (SMB) for Azure Files using on-premises Active Directory Domain Services (AD DS) or Azure Active Directory Domain Services (Azure AD DS) -+ Last updated 04/18/2023 - # Enable Active Directory authentication over SMB for Linux clients accessing Azure Files |
storage | Storage Files Identity Multiple Forests | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-multiple-forests.md | Title: Use Azure Files with multiple Active Directory (AD) forests description: Configure on-premises Active Directory Domain Services (AD DS) authentication for SMB Azure file shares with an AD DS environment using multiple forests. -+ Last updated 05/23/2023 - # Use Azure Files with multiple Active Directory forests |
storage | Storage Files Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-introduction.md | Title: Introduction to Azure Files description: An overview of Azure Files, a service that enables you to create and use network file shares in the cloud using either SMB or NFS protocols. -+ Last updated 09/14/2022 - # What is Azure Files? |
storage | Storage Files Migration Linux Hybrid | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-migration-linux-hybrid.md | Title: Linux migration to Azure File Sync description: Learn how to migrate files from a Linux server location to a hybrid cloud deployment with Azure File Sync and Azure file shares. -+ Last updated 03/19/2020 - # Migrate from Linux to a hybrid cloud deployment with Azure File Sync |
storage | Storage Files Migration Nas Cloud Databox | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-migration-nas-cloud-databox.md | Title: On-premises NAS migration to Azure file shares description: Learn how to migrate files from an on-premises Network Attached Storage (NAS) location to Azure file shares with Azure DataBox. -+ Last updated 12/15/2022 - recommendations: false |
storage | Storage Files Migration Nas Hybrid Databox | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-migration-nas-hybrid-databox.md | Title: On-premises NAS migration to Azure File Sync via Data Box description: Learn how to migrate files from an on-premises Network Attached Storage (NAS) location to a hybrid cloud deployment by using Azure File Sync via Azure Data Box. -+ Last updated 03/5/2021 - # Use Data Box to migrate from Network Attached Storage (NAS) to a hybrid cloud deployment by using Azure File Sync |
storage | Storage Files Migration Nas Hybrid | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-migration-nas-hybrid.md | Title: On-premises NAS migration to Azure File Sync description: Learn how to migrate files from an on-premises Network Attached Storage (NAS) location to a hybrid cloud deployment with Azure File Sync and Azure file shares. -+ Last updated 03/28/2023 - # Migrate from Network Attached Storage (NAS) to a hybrid cloud deployment with Azure File Sync |
storage | Storage Files Migration Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-migration-overview.md | Title: Migrate to Azure file shares description: Learn how to migrate to Azure file shares and find your migration guide. -+ Last updated 05/30/2023 - # Migrate to Azure file shares |
storage | Storage Files Migration Robocopy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-migration-robocopy.md | Title: Migrate to Azure file shares using RoboCopy description: Learn how to move or migrate files to an SMB Azure file share using RoboCopy. -+ Last updated 12/16/2022 - recommendations: false |
storage | Storage Files Migration Server Hybrid Databox | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-migration-server-hybrid-databox.md | Title: Migrate data into Azure File Sync with Azure Data Box description: Migrate bulk data offline that's compatible with Azure File Sync. Avoid file conflicts, and catch up your file share with the latest changes on the server for a zero downtime cloud migration. -+ Last updated 06/01/2021 - # Migrate data offline to Azure File Sync with Azure Data Box |
storage | Storage Files Migration Storsimple 1200 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-migration-storsimple-1200.md | Title: StorSimple 1200 migration to Azure File Sync description: Learn how to migrate a StorSimple 1200 series virtual appliance to Azure File Sync. -+ Last updated 01/12/2023 - # StorSimple 1200 migration to Azure File Sync |
storage | Storage Files Migration Storsimple 8000 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-migration-storsimple-8000.md | Title: StorSimple 8000 series migration to Azure File Sync description: Learn how to migrate a StorSimple 8100 or 8600 appliance to Azure File Sync. -+ Last updated 01/12/2023 - |
storage | Storage Files Monitoring Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-monitoring-reference.md | Title: Azure Files monitoring data reference description: Log and metrics reference for monitoring data from Azure Files. -+ Last updated 03/29/2023 - |
storage | Storage Files Netapp Comparison | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-netapp-comparison.md | Title: Azure Files and Azure NetApp Files comparison description: Comparison of Azure Files and Azure NetApp Files. --+ Last updated 03/01/2023 |
storage | Storage Files Networking Dns | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-networking-dns.md | Title: Configuring DNS forwarding for Azure Files description: Learn how to configure DNS forwarding for Azure Files. -+ Last updated 07/02/2021 - |
storage | Storage Files Networking Endpoints | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-networking-endpoints.md | Title: Configuring Azure Files network endpoints description: Learn how to configure Azure Files network endpoints. -+ Last updated 07/02/2021 - |
storage | Storage Files Networking Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-networking-overview.md | Title: Azure Files networking considerations description: An overview of networking options for Azure Files. -+ Last updated 05/23/2022 - # Azure Files networking considerations |
storage | Storage Files Planning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-planning.md | Title: Planning for an Azure Files deployment description: Understand how to plan for an Azure Files deployment. You can either direct mount an Azure file share, or cache Azure file shares on-premises with Azure File Sync. -+ Last updated 06/09/2023 - |
storage | Storage Files Prevent File Share Deletion | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-prevent-file-share-deletion.md | Title: Prevent accidental deletion - Azure file shares description: Learn about soft delete for Azure file shares and how you can use it for data recovery and preventing accidental deletion. -+ Last updated 03/29/2021 - |
storage | Storage Files Quick Create Use Linux | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-quick-create-use-linux.md | Title: Tutorial - Create an NFS Azure file share and mount it on a Linux VM using the Azure portal description: This tutorial covers how to use the Azure portal to deploy a Linux virtual machine, create an Azure file share using the NFS protocol, and mount the file share so that it's ready to store files. -+ Last updated 10/21/2022 - #Customer intent: As an IT admin new to Azure Files, I want to try out Azure file share using NFS and Linux so I can determine whether I want to subscribe to the service. |
storage | Storage Files Quick Create Use Windows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-quick-create-use-windows.md | Title: Tutorial - Create an SMB Azure file share and connect it to a Windows virtual machine using the Azure portal description: This tutorial covers how to create an SMB Azure file share using the Azure portal, connect it to a Windows VM, upload a file to the file share, create a snapshot, and restore the share from the snapshot. -+ Last updated 10/24/2022 - #Customer intent: As an IT admin new to Azure Files, I want to try out Azure file shares so I can determine whether I want to subscribe to the service. |
storage | Storage Files Scale Targets | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-scale-targets.md | Title: Azure Files scalability and performance targets description: Learn about the capacity, IOPS, and throughput rates for Azure file shares. -+ Last updated 11/2/2022 - # Azure Files scalability and performance targets |
storage | Storage Files Smb Multichannel Performance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-smb-multichannel-performance.md | Title: SMB Multichannel performance - Azure Files description: Learn how SMB Multichannel can improve performance for Azure file shares. -+ Last updated 02/22/2023 - # SMB Multichannel performance |
storage | Storage How To Create File Share | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-how-to-create-file-share.md | Title: Create an SMB Azure file share description: How to create and delete an SMB Azure file share by using the Azure portal, Azure PowerShell, or Azure CLI. -+ Last updated 05/24/2022 - |
storage | Storage How To Use Files Linux | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-how-to-use-files-linux.md | Title: Mount SMB Azure file share on Linux description: Learn how to mount an Azure file share over SMB on Linux and review SMB security considerations on Linux clients. -+ Last updated 01/10/2023 - # Mount SMB Azure file share on Linux |
storage | Storage How To Use Files Mac | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-how-to-use-files-mac.md | Title: Mount SMB Azure file share on macOS description: Learn how to mount an Azure file share over SMB with macOS using Finder or Terminal. Azure Files is Microsoft's easy-to-use cloud file system. -+ Last updated 05/26/2022 - # Mount SMB Azure file share on macOS |
storage | Storage How To Use Files Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-how-to-use-files-portal.md | Title: Quickstart for creating and using Azure file shares description: Learn how to create and use Azure file shares with the Azure portal, Azure CLI, or Azure PowerShell. Create a storage account, create an SMB Azure file share, and use your Azure file share. -+ Last updated 01/03/2023 - ms.devlang: azurecli #Customer intent: As an IT admin new to Azure Files, I want to try out Azure Files so I can determine whether I want to subscribe to the service. |
storage | Storage How To Use Files Windows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-how-to-use-files-windows.md | Title: Mount SMB Azure file share on Windows description: Learn to use Azure file shares with Windows and Windows Server. Use Azure file shares with SMB 3.x on Windows installations running on-premises or on Azure VMs. -+ Last updated 05/02/2023 - # Mount SMB Azure file share on Windows |
storage | Storage Java How To Use File Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-java-how-to-use-file-storage.md | Title: Develop for Azure Files with Java description: Learn how to develop Java applications and services that use Azure Files to store file data. -+ Last updated 05/26/2021 - # Develop for Azure Files with Java |
storage | Storage Snapshots Files | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-snapshots-files.md | Title: Overview of share snapshots for Azure Files description: A share snapshot is a read-only version of an Azure file share that's taken at a point in time, as a way to back up the share. -+ Last updated 06/07/2023 - # Overview of share snapshots for Azure Files |
storage | Understand Performance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/understand-performance.md | Title: Understand Azure Files performance description: Learn about the factors that can impact Azure file share performance and how to optimize performance for your workload. -+ Last updated 07/06/2023 - # Understand Azure Files performance |
storage | Understanding Billing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/understanding-billing.md | Title: Understand Azure Files billing description: Learn how to interpret the provisioned and pay-as-you-go billing models for Azure file shares. -+ Last updated 01/24/2023 - # Understand Azure Files billing |
storage | Windows Server To Azure Files | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/windows-server-to-azure-files.md | Title: Replace or extend Windows file servers with Azure Files and Azure File Sync description: Azure Files and Azure File Sync can be useful when replacing your on-premises Windows file servers or extending them into the cloud. Learn how you can use Azure storage services to increase flexibility, improve data protection, and reduce TCO for file storage. -+ Last updated 03/17/2023 - # Replace or extend Windows file servers with Azure Files and Azure File Sync |
storsimple | Storsimple Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-overview.md | Title: StorSimple 8000 series solution overview | Microsoft Docs -description: Describes StorSimple data copy resources, data migration, device decommission operations, end of support, tiering, virtual device, and storage management, and introduces key terms used in StorSimple. +description: Describes StorSimple data copy resources, data migration, device decommission operations, end of support, tiering, virtual device, and storage management. documentationcenter: NA Decommission operations can't be undone. We recommend that you complete your dat ``` To instead reset a single controller, use the [Reset-HcsFactoryDefault](https://learn.microsoft.com/previous-versions/windows/powershell-scripting/dn688132(v=wps.630)) cmdlet with the *-scope* parameter. - The system reboots multiple times. You'll be notified when the reset has successfully completed. Depending on the system model, it can take 45-60 minutes for an 8100 device and 60-90 minutes for an 8600 to finish this process. + The system reboots multiple times. You're notified when the reset has successfully completed. Depending on the system model, it can take 45-60 minutes for an 8100 device and 60-90 minutes for an 8600 to finish this process. **Step 3. Shut down the device.** This section explains how to shut down a running or a failed StorSimple device f **Step 3.3** - You must now look at the back plane of the device. After the two controllers are shut down, the status LEDs on both the controllers should be blinking red. To turn off the device completely at this time, flip the power switches on both Power and Cooling Modules (PCMs) to the OFF position. This turns off the device. +## Create a support request ++Use the following steps to create a support ticket for StorSimple data copy, data migration, and device decommission operations. ++1. In the Azure portal, type **help** in the search bar and then select **Help + Support**. ++  ++1. On the **Help + Support** page, select **Create a support request.** ++  ++1. On the **New support request** page, provide the required information: + - Provide a brief **Summary** of the issue. + - Specify **Technical** as the **Issue type**. + - Specify the affected **Subscription**. + - Specify **All services**. You must specify **All services** because **StorSimple Manager Service** is no longer available. + - For **Service type**, specify **Azure StorSimple 8000 Series**. + - For **Problem type**, specify **StorSimple Migration Utility**. + - To continue, select **Next**. ++  + +1. If the **Solutions** page appears, select **Return to support request** and then select **Next**. ++1. On the **Additional details** tab, provide additional details and contact information: ++ - Specify the time when the problem started, provide a description, and upload relevant files, if applicable. + - Specify **Yes** or **No** for **Advanced diagnostic information** collection. + - Your support plan will be generated based on your subscription. Specify severity, your preferred contact method, and language. + - Specify **Contact information**: First name, Last name, Email, Phone, and Country/region. + - To continue, select **Next**. ++  ++1. On the **Review + create** tab, review the summary of your case. To continue, select **Create**. ++  ++Microsoft Support will use this information to reach out to you for additional details and diagnosis. 
A support engineer will contact you as soon as possible to proceed with your request. + ## Next steps - [StorSimple 8000 series copy utility documentation](https://aka.ms/storsimple-copy-utility-docs). |
stream-analytics | Monitor Azure Stream Analytics Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/monitor-azure-stream-analytics-reference.md | + + Title: Monitoring Azure Stream Analytics data reference +description: Important reference material needed when you monitor Azure Stream Analytics ++++ Last updated : 07/10/2023++++# Monitoring Azure Stream Analytics data reference +This article provides a reference of log and metric data collected to analyze the performance and availability of Azure Stream Analytics jobs. See [Monitoring Azure Stream Analytics](monitor-azure-stream-analytics.md) for details on collecting and analyzing monitoring data for Azure Stream Analytics. ++## Metrics ++This section lists all the platform metrics automatically collected for Azure Stream Analytics. ++|Metric Type | Resource Provider / Type Namespace<br/> and link to individual metrics | +|-|--| +| Stream Analytics streaming jobs | [Microsoft.StreamAnalytics/streamingjobs](/azure/azure-monitor/platform/metrics-supported#microsoftstreamanalyticsstreamingjobs) | +++### Scenarios to monitor metrics +++## Metric dimensions +++### Logical Name dimension ++### Node Name dimension ++### Partition ID dimension ++## Resource logs +++### Resource logs schema ++++## Activity log +The following table lists the operations that Azure Stream Analytics may record in the Activity log. This set of operations is a subset of the possible entries you might find in the activity log. ++| Namespace | Description | +|:-|:| +| [Microsoft.StreamAnalytics](/azure/role-based-access-control/resource-provider-operations#microsoftstreamanalytics) | The operations that can be recorded in the Activity log for the Azure Stream Analytics service. | ++See [all the possible resource provider operations in the activity log](/azure/role-based-access-control/resource-provider-operations). For more information on the schema of Activity Log entries, see [Activity Log schema](/azure/azure-monitor/essentials/activity-log-schema). +++## Next steps +* [Introduction to Azure Stream Analytics](stream-analytics-introduction.md) +* [Dimensions for Azure Stream Analytics metrics](./stream-analytics-job-metrics-dimensions.md) +* [Understand and adjust streaming units](./stream-analytics-streaming-unit-consumption.md) +* [Analyze Stream Analytics job performance by using metrics and dimensions](./stream-analytics-job-analysis-with-metric-dimensions.md) +* [Monitor a Stream Analytics job with the Azure portal](./stream-analytics-monitoring.md) +* [Get started with Azure Stream Analytics](stream-analytics-real-time-fraud-detection.md) + |
stream-analytics | Monitor Azure Stream Analytics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/monitor-azure-stream-analytics.md | + + Title: Monitoring Azure Stream Analytics +description: Start here to learn how to monitor Azure Stream Analytics +++++ Last updated : 07/10/2023+++# Monitoring Azure Stream Analytics +When you have critical applications and business processes relying on Azure resources, you want to monitor those resources for their availability, performance, and operation. ++This article describes the monitoring data generated by Azure Stream Analytics. Azure Stream Analytics uses [Azure Monitor](/azure/azure-monitor/overview). If you're unfamiliar with the features of Azure Monitor common to all Azure services that use it, read [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource). ++## What is Azure Monitor? +Azure Stream Analytics creates monitoring data using [Azure Monitor](../azure-monitor/overview.md), which is a full-stack monitoring service in Azure. Azure Monitor provides a complete set of features to monitor your Azure resources. It can also monitor resources in other clouds and on-premises. ++Start with the article [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md), which describes the following concepts: ++- What is Azure Monitor? +- Costs associated with monitoring +- Monitoring data collected in Azure +- Configuring data collection +- Standard tools in Azure for analyzing and alerting on monitoring data ++The following sections build on this article by describing the specific data gathered for Azure Stream Analytics. These sections also provide examples for configuring data collection and analyzing this data with Azure tools. ++> [!TIP] +> To understand costs associated with Azure Monitor, see [Usage and estimated costs](../azure-monitor/usage-estimated-costs.md). To understand the time it takes for your data to appear in Azure Monitor, see [Log data ingestion time](../azure-monitor/logs/data-ingestion-time.md). ++## Monitoring overview page in Azure portal +Azure Stream Analytics provides plenty of metrics that you can use to monitor and troubleshoot your query and job performance. You can view data from these metrics on the **Overview** page of the Azure portal, in the **Monitoring** section. +++If you want to check a specific metric, select **Metrics** in the **Monitoring** section. On the page that appears, select the metric. +++## Monitoring data +Azure Stream Analytics collects the same kinds of monitoring data as other Azure resources that are described in [Monitoring data from Azure resources](/azure/azure-monitor/essentials/monitor-azure-resource#monitoring-data-from-azure-resources). ++See [Monitoring Azure Stream Analytics data reference](monitor-azure-stream-analytics-reference.md) for detailed information on the metrics and logs created by Azure Stream Analytics. ++## Collection and routing +Platform metrics and the Activity log are collected and stored automatically, but can be routed to other locations by using a diagnostic setting. Resource logs aren't collected and stored until you create a diagnostic setting and route them to one or more locations. ++See [Create diagnostic setting to collect platform logs and metrics in Azure](/azure/azure-monitor/platform/diagnostic-settings) for the detailed process for creating a diagnostic setting using the Azure portal, CLI, or PowerShell. 
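For example, once logs are flowing to a Log Analytics workspace, a quick check along the following lines (a minimal sketch, assuming the job's resource logs land in the legacy `AzureDiagnostics` table that the sample queries later in this article also use) shows which log categories are arriving:

```kusto
// Count recent Stream Analytics log records by category
// to verify that the diagnostic setting is routing data.
AzureDiagnostics
| where TimeGenerated > ago(1h)
| where ResourceProvider == "MICROSOFT.STREAMANALYTICS"
| summarize RecordCount = count() by Category
```

If the query returns no rows, confirm that the diagnostic setting includes the expected log categories, and allow for ingestion delay as described in the ingestion-time article linked in the earlier tip.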
When you create a diagnostic setting, you specify which categories of logs to collect. The categories for **Azure Stream Analytics** are listed in [Azure Stream Analytics monitoring data reference](monitor-azure-stream-analytics-reference.md#resource-logs). ++The metrics and logs you can collect are discussed in the following sections. ++## Analyzing metrics +You can analyze metrics for **Azure Stream Analytics** with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Getting started with Azure Metrics Explorer](/azure/azure-monitor/essentials/metrics-getting-started) for details on using this tool. ++For a list of the platform metrics collected for Azure Stream Analytics, see [Monitoring Azure Stream Analytics data reference metrics](monitor-azure-stream-analytics-reference.md#metrics). ++For reference, you can see a list of [all resource metrics supported in Azure Monitor](/azure/azure-monitor/essentials/metrics-supported). +++## Analyzing logs +Data in Azure Monitor Logs is stored in tables where each table has its own set of unique properties. ++All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](/azure/azure-monitor/essentials/resource-logs-schema). ++The [Activity log](/azure/azure-monitor/essentials/activity-log) is a type of Azure platform log that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics. For more information, see [Debugging using activity logs](stream-analytics-job-diagnostic-logs.md#debugging-using-activity-logs). ++For a list of the types of resource logs collected for Azure Stream Analytics, see [Monitoring Azure Stream Analytics data reference](monitor-azure-stream-analytics-reference.md#resource-logs). For more information, see [Send diagnostics to Azure Monitor logs](stream-analytics-job-diagnostic-logs.md#send-diagnostics-to-azure-monitor-logs). ++### Sample Kusto queries ++<!-- REQUIRED if you support logs. Please keep headings in this order --> +<!-- Add sample Log Analytics Kusto queries for your service. --> ++<!-- WRITER: For sample Log Analytics Kusto queries, add some of the pre-defined queries provided by Stream Analytics to the left of the query editor, within the Queries tab? --> ++Following are sample queries that you can use to help you monitor your Azure Stream Analytics resources: ++- List all input data errors. The following query shows all errors that occurred while processing the data from inputs. ++ ```kusto + AzureDiagnostics + | where ResourceProvider == "MICROSOFT.STREAMANALYTICS" and parse_json(properties_s).Type == "DataError" + | project TimeGenerated, Resource, Region_s, OperationName, properties_s, Level, _ResourceId + ``` +- Events that arrived late. The following query shows errors due to events where the difference between application time and arrival time is greater than the late arrival tolerance. ++ ```kusto + AzureDiagnostics + | where ResourceProvider == "MICROSOFT.STREAMANALYTICS" and parse_json(properties_s).DataErrorType == "LateInputEvent" + | project TimeGenerated, Resource, Region_s, OperationName, properties_s, Level, _ResourceId + ``` +- Events that arrived early. The following query shows errors due to events where the difference between application time and arrival time is greater than 5 minutes. 
+ + ```kusto + AzureDiagnostics + | where ResourceProvider == "MICROSOFT.STREAMANALYTICS" and parse_json(properties_s).DataErrorType == "EarlyInputEvent" + | project TimeGenerated, Resource, Region_s, OperationName, properties_s, Level, _ResourceId + ``` +- Events that arrived out of order. The following query shows errors due to events that arrive out of order according to the out-of-order policy. + + ```kusto + // To create an alert for this query, click '+ New alert rule' + AzureDiagnostics + | where ResourceProvider == "MICROSOFT.STREAMANALYTICS" and parse_json(properties_s).DataErrorType == "OutOfOrderEvent" + | project TimeGenerated, Resource, Region_s, OperationName, properties_s, Level, _ResourceId + ``` +- All output data errors. The following query shows all errors that occurred while writing the results of the query to the outputs in your job. ++ ```kusto + AzureDiagnostics + | where ResourceProvider == "MICROSOFT.STREAMANALYTICS" and parse_json(properties_s).DataErrorType in ("OutputDataConversionError.RequiredColumnMissing", "OutputDataConversionError.ColumnNameInvalid", "OutputDataConversionError.TypeConversionError", "OutputDataConversionError.RecordExceededSizeLimit", "OutputDataConversionError.DuplicateKey") + | project TimeGenerated, Resource, Region_s, OperationName, properties_s, Level, _ResourceId + ``` +- The following query shows the summary of failed operations in the last seven days. ++ ```kusto + AzureDiagnostics + | where TimeGenerated > ago(7d) //last 7 days + | where ResourceProvider == "MICROSOFT.STREAMANALYTICS" and status_s == "Failed" + | summarize Count=count(), sampleEvent=any(properties_s) by JobName=Resource + ``` ++> [!IMPORTANT] +> When you select **Logs** from the Stream Analytics job menu, Log Analytics is opened with the query scope set to the current Stream Analytics job. This means that log queries will only include data from that resource. If you want to run a query that includes data from other Stream Analytics jobs or data from other Azure services, select **Logs** from the **Azure Monitor** menu. See [Log query scope and time range in Azure Monitor Log Analytics](/azure/azure-monitor/logs/scope) for details. ++For a list of common queries for Azure Stream Analytics, see the [Log Analytics queries interface](/azure/azure-monitor/logs/queries). ++## Alerts +Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. You can set alerts on [metrics](/azure/azure-monitor/alerts/alerts-metric-overview), [logs](/azure/azure-monitor/alerts/alerts-unified-log), and the [activity log](/azure/azure-monitor/alerts/activity-log-alerts). Different types of alerts have benefits and drawbacks. ++++## Next steps ++<!-- Add additional links. You can change the wording of these and add more if useful. --> ++- See [Monitoring Azure Stream Analytics data reference](monitor-azure-stream-analytics-reference.md) for a reference of the metrics, logs, and other important values created by Azure Stream Analytics. 
+- See the following articles: + - [Monitor jobs using Azure portal](stream-analytics-monitoring.md) + - [Monitor jobs using Azure PowerShell](stream-analytics-monitor-and-manage-jobs-use-powershell.md) + - [Monitor jobs using Azure .NET SDK](stream-analytics-monitor-jobs.md) + - [Set up alerts](stream-analytics-set-up-alerts.md) +- See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for details on monitoring Azure resources. |
stream-analytics | No Code Stream Processing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/no-code-stream-processing.md | Title: No-code stream processing through Azure Stream Analytics -description: Learn about processing your real-time data streams in Azure Event Hubs by using the Azure Stream Analytics no-code editor. + Title: No-code stream processing in Azure Stream Analytics +description: Learn about processing your real-time data streams by using the Azure Stream Analytics no-code editor. Previously updated : 2/17/2023 Last updated : 7/5/2023 -# No-code stream processing through Azure Stream Analytics +# No-code stream processing in Azure Stream Analytics -You can process your real-time data streams in Azure Event Hubs by using Azure Stream Analytics. The no-code editor allows you to develop a Stream Analytics job without writing a single line of code. In minutes, you can develop and run a job that tackles many scenarios, including: +The no-code editor allows you to effortlessly develop a Stream Analytics job that processes your real-time streaming data, using drag-and-drop functionality, without writing a single line of code. The experience provides a canvas that allows you to connect to input sources to quickly see your streaming data. Then you can transform it before writing to your destinations. -- [Build real-time dashboard with Power BI dataset](./no-code-build-power-bi-dashboard.md)-- [Capture data from Event Hubs in Delta Lake format (preview)](./capture-event-hub-data-delta-lake.md)-- [Filtering and ingesting to Azure Synapse SQL](./filter-ingest-synapse-sql.md)-- [Capturing your Event Hubs data in Parquet format in Azure Data Lake Storage Gen2](./capture-event-hub-data-parquet.md)-- [Materializing data in Azure Cosmos DB](./no-code-materialize-cosmos-db.md)-- [Filter and ingest to Azure Data Lake Storage Gen2](./filter-ingest-data-lake-storage-gen2.md)-- [Enrich data and ingest to event hub](./no-code-enrich-event-hub-data.md)-- [Transform and store data to Azure SQL database](./no-code-transform-filter-ingest-sql.md)-- [Filter and ingest to Azure Data Explorer](./no-code-filter-ingest-data-explorer.md)--The experience provides a canvas that allows you to connect to input sources to quickly see your streaming data. Then you can transform it before writing to your destination of choice in Azure. --You can: +With the no-code editor, you can easily: - Modify input schemas. - Perform data preparation operations like joins and filters. After you create and run your Stream Analytics jobs, you can easily operationali Before you develop your Stream Analytics jobs by using the no-code editor, you must meet these requirements: -- The Azure Event Hubs namespace and any target destination resource where you want to write must be publicly accessible and can't be in an Azure virtual network.+- The streaming input sources and target destination resources for the Stream Analytics job must be publicly accessible and can't be in an Azure virtual network. - You must have the required permissions to access the streaming input and output resources. - You must maintain permissions to create and modify Azure Stream Analytics resources. +> [!NOTE] +> The no-code editor is currently not available in the China region. + ## Azure Stream Analytics job A Stream Analytics job is built on three main components: _streaming inputs_, _transformations_, and _outputs_. 
You can have as many components as you want, including multiple inputs, parallel branches with multiple transformations, and multiple outputs. For more information, see [Azure Stream Analytics documentation](index.yml). -To use the no-code editor to create a Stream Analytics job, open an Event Hubs instance. Select **Process Data**, and then select any template. +> [!NOTE] +> The following functionalities and output types are unavailable when using the no-code editor: +> - User-defined functions. +> - Query editing in the Azure Stream Analytics query blade. However, you can view the query generated by the no-code editor in the query blade. +> - Adding inputs/outputs in the Azure Stream Analytics input/output blades. However, you can view the inputs/outputs generated by the no-code editor in the input/output blades. +> - The following output types are not available: Azure Function, ADLS Gen1, PostgreSQL DB, Service Bus queue/topic, Table storage. ++There are two ways to access the no-code editor for building your Stream Analytics job: ++1. **Through Azure Stream Analytics portal (preview)**: Create a Stream Analytics job, and then select the no-code editor in the **Get started** tab of the **Overview** blade, or select **No-code editor** in the left panel. ++ :::image type="content" source="./media/no-code-stream-processing/no-code-on-asa-portal.png" alt-text="Screenshot that shows no-code on ASA portal locations." lightbox="./media/no-code-stream-processing/no-code-on-asa-portal.png" ::: +++2. **Through Azure Event Hubs portal**: Open an Event Hubs instance. Select **Process Data**, and then select any pre-defined template. + :::image type="content" source="./media/no-code-stream-processing/new-stream-analytics-job.png" alt-text="Screenshot that shows selections to create a new Stream Analytics job." lightbox="./media/no-code-stream-processing/new-stream-analytics-job.png" ::: + The pre-defined templates can assist you in developing and running a job to address various scenarios, including: ++ - [Build real-time dashboard with Power BI dataset](./no-code-build-power-bi-dashboard.md) + - [Capture data from Event Hubs in Delta Lake format (preview)](./capture-event-hub-data-delta-lake.md) + - [Filtering and ingesting to Azure Synapse SQL](./filter-ingest-synapse-sql.md) + - [Capturing your Event Hubs data in Parquet format in Azure Data Lake Storage Gen2](./capture-event-hub-data-parquet.md) + - [Materializing data in Azure Cosmos DB](./no-code-materialize-cosmos-db.md) + - [Filter and ingest to Azure Data Lake Storage Gen2](./filter-ingest-data-lake-storage-gen2.md) + - [Enrich data and ingest to event hub](./no-code-enrich-event-hub-data.md) + - [Transform and store data to Azure SQL database](./no-code-transform-filter-ingest-sql.md) + - [Filter and ingest to Azure Data Explorer](./no-code-filter-ingest-data-explorer.md) ++The following screenshot shows a completed Stream Analytics job. It highlights all the sections available to you while you author. :::image type="content" source="./media/no-code-stream-processing/created-stream-analytics-job.png" alt-text="Screenshot that shows the authoring interface sections." lightbox="./media/no-code-stream-processing/created-stream-analytics-job.png" ::: The following screenshot shows a finished Stream Analytics job. It highlights al 3. 
**Side pane**: Depending on which component you selected in the diagram view, you'll have settings to modify input, transformation, or output. 4. **Tabs for data preview, authoring errors, runtime logs, and metrics**: For each tile, the data preview will show you results for that step (live for inputs; on demand for transformations and outputs). This section also summarizes any authoring errors or warnings that you might have in your job when it's being developed. Selecting each error or warning will select that transform. It also provides the job metrics for you to monitor the running job's health. -## Event Hubs as the streaming input ++## Streaming data input ++The no-code editor supports streaming data input from three types of resources: ++- Azure Event Hubs +- Azure IoT Hub +- Azure Data Lake Storage Gen2 ++For more information about the streaming data inputs, see [Stream data as input into Stream Analytics](./stream-analytics-define-inputs.md). ++> [!NOTE] +> The no-code editor in the Azure Event Hubs portal only has **Event Hub** as an input option. +++### Azure Event Hubs as streaming input Azure Event Hubs is a big-data streaming platform and event ingestion service. It can receive and process millions of events per second. Data sent to an event hub can be transformed and stored through any real-time analytics provider or batching/storage adapter. After you set up your event hub's details and select **Connect**, you can add fi When the Stream Analytics job detects the fields, you'll see them in the list. You'll also see a live preview of the incoming messages in the **Data Preview** table under the diagram view. -You can always edit the field names, or remove or change the data type, or change the event time (**Mark as event time**: TIMESTAMP BY clause if a datetime type field), by selecting the three-dot symbol next to each field. You can also expand, select, and edit any nested fields from the incoming messages, as shown in the following image. +#### Modify input data ++You can edit the field names, remove fields, change the data type, or change the event time (**Mark as event time**: TIMESTAMP BY clause if a datetime type field), by selecting the three-dot symbol next to each field. You can also expand, select, and edit any nested fields from the incoming messages, as shown in the following image. ++> [!TIP] +> This applies to the input data from Azure IoT Hub and ADLS Gen2 as well. :::image type="content" source="./media/no-code-stream-processing/event-hub-schema.png" alt-text="Screenshot that shows selections for adding, removing, and editing the fields for an event hub." lightbox="./media/no-code-stream-processing/event-hub-schema.png" ::: The available data types are: - **Record**: Nested object with multiple records. - **String**: Text. +### Azure IoT Hub as the streaming input ++Azure IoT Hub is a managed service hosted in the cloud that acts as a central message hub for communication between an IoT application and its attached devices. IoT device data sent to an IoT hub can be used as an input for a Stream Analytics job. ++> [!NOTE] +> Azure IoT Hub input is available in the no-code editor on the Azure Stream Analytics portal. ++To add an IoT hub as a streaming input for your job, select **IoT Hub** under **Inputs** from the ribbon. Then fill in the needed information in the right panel to connect the IoT hub to your job.
To learn more about the details of each field, see [Stream data from IoT Hub to Stream Analytics job](./stream-analytics-define-inputs.md#stream-data-from-iot-hub). +++### Azure Data Lake Storage Gen2 as streaming input ++Azure Data Lake Storage Gen2 (ADLS Gen2) is a cloud-based, enterprise data lake solution. It's designed to store massive amounts of data in any format, and to facilitate big data analytical workloads. The data stored in ADLS Gen2 can be processed as a data stream by Stream Analytics. To learn more about this type of input, see [Stream data from ADLS Gen2 to Stream Analytics job](./stream-analytics-define-inputs.md#stream-data-from-blob-storage-or-data-lake-storage-gen2). ++> [!NOTE] +> Azure Data Lake Storage Gen2 input is available in the no-code editor on the Azure Stream Analytics portal. ++To add ADLS Gen2 as a streaming input for your job, select **ADLS Gen2** under **Inputs** from the ribbon. Then fill in the needed information in the right panel to connect ADLS Gen2 to your job. To learn more about the details of each field, see [Stream data from ADLS Gen2 to Stream Analytics job](./stream-analytics-define-inputs.md#stream-data-from-blob-storage-or-data-lake-storage-gen2). ++ ## Reference data inputs Reference data is static or changes slowly over time. It's typically used to enrich incoming streams and do lookups in your job. For example, you might join data stream input to reference data, much as you would perform a SQL join to look up static values. For more information about reference data inputs, see [Use reference data for lookups in Stream Analytics](stream-analytics-use-reference-data.md). First, under the **Inputs** section on the ribbon, select **Reference ADLS Gen2**. Then, upload a JSON array file. The fields in the file will be detected. Use this reference data to perform transformations with streaming input data from Event Hubs. - +[](./media/no-code-stream-processing/blob-referencedata-upload-nocode.png#lightbox) ### Azure SQL Database as reference data The **Manage fields** transformation allows you to add, remove, or rename fields :::image type="content" source="./media/no-code-stream-processing/manage-field-transformation.png" alt-text="Screenshot that shows selections for managing fields." lightbox="./media/no-code-stream-processing/manage-field-transformation.png" ::: -You can also add new field with the **Build-in Functions** to aggregate the data from upstream. Currently, the build-in functions we support are some functions in **String Functions**, **Date and Time Functions**, **Mathematical Functions**. To learn more about the definitions of these functions, see [Built-in Functions (Azure Stream Analytics)](/stream-analytics-query/built-in-functions-azure-stream-analytics). +You can also add new fields with the **Built-in Functions** to aggregate the data from upstream. Currently, the supported built-in functions are a subset of the **String Functions**, **Date and Time Functions**, and **Mathematical Functions**. To learn more about the definitions of these functions, see [Built-in Functions (Azure Stream Analytics)](/stream-analytics-query/built-in-functions-azure-stream-analytics). > [!TIP] > After you configure a tile, the diagram view gives you a glimpse of the settings within the tile. For example, in the **Manage fields** area of the preceding image, you can see the first three fields being managed and the new names assigned to them. Each tile has information that's relevant to it. 
To configure Azure SQL Database as output, select **SQL Database** under the **O For more information about Azure SQL Database output for a Stream Analytics job, see [Azure SQL Database output from Azure Stream Analytics](./sql-database-output.md). -### Event Hubs +### Event Hub With real-time data coming into ASA through an event hub, the no-code editor can transform and enrich the data, and then output it to another event hub as well. You can choose the **Event Hub** output when you configure your Azure Stream Analytics job. To configure Azure Data Explorer as output, select **Azure Data Explorer** under For more information about Azure Data Explorer output for a Stream Analytics job, see [Azure Data Explorer output from Azure Stream Analytics (Preview)](./azure-database-explorer-output.md). +### Power BI ++[Power BI](https://powerbi.microsoft.com/) offers a comprehensive visualization experience for your data analysis results. With the Power BI output in Stream Analytics, the processed streaming data is written to a Power BI streaming dataset, which can then be used to build a near real-time Power BI dashboard. To learn more about how to build a near real-time dashboard, see [Build real-time dashboard with Power BI dataset produced from Stream Analytics no code editor](./no-code-build-power-bi-dashboard.md). ++To configure Power BI as output, select **Power BI** under the **Outputs** section on the ribbon. Then fill in the needed information to connect your Power BI workspace and provide the names for the streaming dataset and table that you want to write the data to. To learn more about the details of each field, see [Power BI output from Azure Stream Analytics](./power-bi-output.md). ++ ## Data preview, authoring errors, runtime logs, and metrics The no-code drag-and-drop experience provides tools to help you author, troubleshoot, and evaluate the performance of your analytics pipeline for streaming data. ### Live data preview for inputs -When you're connecting to an event hub and selecting its tile in the diagram view (the **Data Preview** tab), you'll get a live preview of data coming in if all the following are true: +When you connect to an input source (for example, an event hub) and select its tile in the diagram view (the **Data Preview** tab), you'll get a live preview of data coming in if all the following are true: - Data is being pushed. - The input is configured correctly. You can select more metrics from the list. To understand all the metrics in deta ## Start a Stream Analytics job -You can save the job anytime while creating it. After you configure the event hub, transformations, and streaming outputs for the job, you can start the job. +You can save the job anytime while creating it. After you configure the streaming inputs, transformations, and streaming outputs for the job, you can start the job. > [!NOTE]-> Although the no-code editor is in preview, the Azure Stream Analytics service is generally available. +> Although the no-code editor on the Azure Stream Analytics portal is in preview, the Azure Stream Analytics service is generally available. :::image type="content" source="./media/no-code-stream-processing/no-code-save-start.png" alt-text="Screenshot that shows the Save and Start buttons." 
lightbox="./media/no-code-stream-processing/no-code-save-start.png" ::: You can configure these options: :::image type="content" source="./media/no-code-stream-processing/start-job.png" alt-text="Screenshot that shows the dialog for reviewing the Stream Analytics job configuration and starting the job." lightbox="./media/no-code-stream-processing/start-job.png" ::: -## Stream Analytics jobs list +### Stream Analytics job list in Azure Event Hubs portal -To see a list of all Stream Analytics jobs that you created by using the no-code drag-and-drop experience, select **Process data** > **Stream Analytics jobs**. +To see a list of all Stream Analytics jobs that you created by using the no-code drag-and-drop experience in **Azure Event Hubs portal**, select **Process data** > **Stream Analytics jobs**. :::image type="content" source="./media/no-code-stream-processing/jobs-list.png" alt-text="Screenshot that shows the Stream Analytics job list where you review job status." lightbox="./media/no-code-stream-processing/jobs-list.png" ::: These are the elements of the **Stream Analytics jobs** tab: Learn how to use the no-code editor to address common scenarios by using predefined templates: -- [Capture Event Hubs data in Parquet format](capture-event-hub-data-parquet.md)-- [Filter and ingest to Azure Synapse SQL](filter-ingest-synapse-sql.md)-- [Filter and ingest to Azure Data Lake Storage Gen2](filter-ingest-data-lake-storage-gen2.md)-- [Materialize data to Azure Cosmos DB](no-code-materialize-cosmos-db.md)-- [Transform and store data to SQL database](no-code-transform-filter-ingest-sql.md)-- [Filter and store data to Azure Data Explorer](no-code-filter-ingest-data-explorer.md)-- [Enrich data and ingest to Event Hubs](no-code-enrich-event-hub-data.md)+- [Introduction to Azure Stream Analytics](./stream-analytics-introduction.md) +- [Monitor Stream Analytics job with Azure portal](./stream-analytics-monitoring.md) +- [Understand inputs for Azure Stream Analytics](./stream-analytics-add-inputs.md) +- [Outputs from Azure Stream Analytics](./stream-analytics-define-outputs.md) |
stream-analytics | Stream Analytics Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-introduction.md | You can try Azure Stream Analytics with a free Azure subscription. Azure Stream Analytics is easy to start. It only takes a few clicks to connect to multiple sources and sinks, creating an end-to-end pipeline. Stream Analytics can connect to Azure Event Hubs and Azure IoT Hub for streaming data ingestion, as well as Azure Blob storage to ingest historical data. Job input can also include static or slow-changing reference data from Azure Blob storage or SQL Database that you can join to streaming data to perform lookup operations. -Stream Analytics can route job output to many storage systems such as Azure Blob storage, Azure SQL Database, Azure Data Lake Store, and Azure Cosmos DB. You can also run batch analytics on stream outputs with Azure Synapse Analytics or HDInsight, or you can send the output to another service, like Event Hubs for consumption or Power BI for real-time visualization. +Stream Analytics can route job output to many storage systems such as Azure Blob storage, Azure SQL Database, Azure Data Lake Store, and Azure Cosmos DB. You can also run batch analytics on stream outputs with Azure Synapse Analytics or HDInsight, or you can send the output to another service, like Event Hubs for consumption or Power BI for real-time visualization. For the entire list of Stream Analytics outputs, see [Understand outputs from Azure Stream Analytics](stream-analytics-define-outputs.md). -For the entire list of Stream Analytics outputs, see [Understand outputs from Azure Stream Analytics](stream-analytics-define-outputs.md). +The Azure Stream Analytics no-code editor offers a no-code experience that enables you to develop Stream Analytics jobs effortlessly, using drag-and-drop functionality, without having to write any code. It further simplifies the Stream Analytics job development experience. To learn more about the no-code editor, see [No-code stream processing in Azure Stream Analytics](./no-code-stream-processing.md). ## Programmer productivity |
stream-analytics | Stream Analytics Job Diagnostic Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-job-diagnostic-logs.md | Last updated 06/18/2020 Occasionally, an Azure Stream Analytics job unexpectedly stops processing. It's important to be able to troubleshoot this kind of event. Failures can be caused by an unexpected query result, by connectivity to devices, or by an unexpected service outage. The resource logs in Stream Analytics can help you identify the cause of issues when they occur and reduce recovery time. -It is highly recommended to enable resource logs for all jobs as this will greatly help with debugging and monitoring. +It's highly recommended to enable resource logs for all jobs as it will greatly help with debugging and monitoring. ## Log types Activity logs are on by default and give high-level insights into operations per 2. You can see a list of operations that have been performed. Any operation that caused your job to fail has a red info bubble. -3. Click an operation to see its summary view. Information here is often limited. To learn more details about the operation, click **JSON**. +3. Select an operation to see its summary view. The information here is often limited. To learn more details about the operation, select **JSON**.  Activity logs are on by default and give high-level insights into operations per ## Send diagnostics to Azure Monitor logs -Turning on resource logs and sending them to Azure Monitor logs is highly recommended. They are **off** by default. To turn them on, complete these steps: +Turning on resource logs and sending them to Azure Monitor logs is highly recommended. They're **off** by default. To turn them on, complete these steps: -1. Create a Log Analytics workspace if you don't already have one. It is recommended to have your Log Analytics workspace in the same region as your Stream Analytics job. +1. Create a Log Analytics workspace if you don't already have one. It's recommended to have your Log Analytics workspace in the same region as your Stream Analytics job. 2. Sign in to the Azure portal, and navigate to your Stream Analytics job. Under **Monitoring**, select **Diagnostics logs**. Then select **Turn on diagnostics**.  -2. Provide a **Name** in **Diagnostic settings name** and check the boxes for **Execution** and **Authoring** under **log**, and **AllMetrics** under **metric**. Then select **Send to Log Analytics** and choose your workspace. Click **Save**. +2. Provide a **Name** in **Diagnostic settings name** and check the boxes for **Execution** and **Authoring** under **log**, and **AllMetrics** under **metric**. Then select **Send to Log Analytics** and choose your workspace. Select **Save**.  Turning on resource logs and sending them to Azure Monitor logs is highly recomm  -4. Stream Analytics provides pre-defined queries that allows you to easily search for the logs that you are interested in. You can select any pre-defined queries on the left pane and then select **Run**. You will see the results of the query in the bottom pane. +4. Stream Analytics provides predefined queries that allow you to easily search for the logs that you're interested in. You can select any predefined queries on the left pane and then select **Run**. You'll see the results of the query in the bottom pane. 
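The same diagnostic settings can be scripted instead of clicked through. A minimal Azure CLI sketch, where the resource group, job name, and workspace resource ID are placeholders:

```azurecli
# Look up the Stream Analytics job's resource ID (placeholder names).
JOB_ID=$(az resource show \
  --resource-group myrg \
  --name myjob \
  --resource-type Microsoft.StreamAnalytics/streamingjobs \
  --query id --output tsv)

# Send Execution and Authoring logs plus all metrics to a Log Analytics workspace.
az monitor diagnostic-settings create \
  --name asa-to-log-analytics \
  --resource "$JOB_ID" \
  --workspace "<log-analytics-workspace-resource-id>" \
  --logs '[{"category":"Execution","enabled":true},{"category":"Authoring","enabled":true}]' \
  --metrics '[{"category":"AllMetrics","enabled":true}]'
```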
## Resource log categories -Azure Stream Analytics captures two categories of resource logs: --* **Authoring**: Captures log events that are related to job authoring operations, such as job creation, adding and deleting inputs and outputs, adding and updating the query, and starting or stopping the job. --* **Execution**: Captures events that occur during job execution. - * Connectivity errors - * Data processing errors, including: - * Events that don't conform to the query definition (mismatched field types and values, missing fields, and so on) - * Expression evaluation errors - * Other events and errors ## Resource logs schema -All logs are stored in JSON format. Each entry has the following common string fields: --Name | Description -- | --time | Timestamp (in UTC) of the log. -resourceId | ID of the resource that the operation took place on, in upper case. It includes the subscription ID, the resource group, and the job name. For example, **/SUBSCRIPTIONS/6503D296-DAC1-4449-9B03-609A1F4A1C87/RESOURCEGROUPS/MY-RESOURCE-GROUP/PROVIDERS/MICROSOFT.STREAMANALYTICS/STREAMINGJOBS/MYSTREAMINGJOB**. -category | Log category, either **Execution** or **Authoring**. -operationName | Name of the operation that is logged. For example, **Send Events: SQL Output write failure to mysqloutput**. -status | Status of the operation. For example, **Failed** or **Succeeded**. -level | Log level. For example, **Error**, **Warning**, or **Informational**. -properties | Log entry-specific detail, serialized as a JSON string. For more information, see the following sections in this article. --### Execution log properties schema --Execution logs have information about events that happened during Stream Analytics job execution. The schema of properties varies depending on whether the event is a data error or a generic event. --### Data errors --Any error that occurs while the job is processing data is in this category of logs. These logs most often are created during data read, serialization, and write operations. These logs do not include connectivity errors. Connectivity errors are treated as generic events. You can learn more about the cause of various different [input and output data errors](./data-errors.md). --Name | Description -- | --Source | Name of the job input or output where the error occurred. -Message | Message associated with the error. -Type | Type of error. For example, **DataConversionError**, **CsvParserError**, or **ServiceBusPropertyColumnMissingError**. -Data | Contains data that is useful to accurately locate the source of the error. Subject to truncation, depending on size. --Depending on the **operationName** value, data errors have the following schema: --* **Serialize events** occur during event read operations. They occur when the data at the input does not satisfy the query schema for one of these reasons: -- * *Type mismatch during event (de)serialize*: Identifies the field that's causing the error. -- * *Cannot read an event, invalid serialization*: Lists information about the location in the input data where the error occurred. Includes blob name for blob input, offset, and a sample of the data. --* **Send events** occur during write operations. They identify the streaming event that caused the error. --### Generic events --Generic events cover everything else. --Name | Description | ---Error | (optional) Error information. Usually, this is exception information if it's available. -Message| Log message. -Type | Type of message. Maps to internal categorization of errors.
For example, **JobValidationError** or **BlobOutputAdapterInitializationFailure**. -Correlation ID | GUID that uniquely identifies the job execution. All execution log entries from the time the job starts until the job stops have the same **Correlation ID** value. ## Next steps |
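Once the logs arrive in the workspace, the common fields described above (category, operationName, status, level) can be filtered directly. A sketch of running such a query from the Azure CLI; the workspace GUID is a placeholder, and the query assumes the default `AzureDiagnostics` table rather than resource-specific tables:

```azurecli
az monitor log-analytics query \
  --workspace "<workspace-guid>" \
  --analytics-query 'AzureDiagnostics
    | where ResourceProvider == "MICROSOFT.STREAMANALYTICS" and Category == "Execution"
    | project TimeGenerated, OperationName, Level
    | take 20'
```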
stream-analytics | Stream Analytics Job Metrics Dimensions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-job-metrics-dimensions.md | -Stream Analytics has [many metrics](./stream-analytics-job-metrics.md) available to monitor a job's health. To troubleshoot performance problems with your job, you can split and filter metrics by using the following dimensions. --| Dimension | Definition | -| - | - | -| **Logical Name** | The input or output name for a Stream Analytics job. | -| **Partition ID** | The ID of the input data partition from an input source. For example, if the input source is an event hub, the partition ID is the event hub's partition ID. For embarrassingly parallel jobs, **Partition ID** in the output is the same as it is in the input. | -| **Node Name** | The identifier of a streaming node that's provisioned when your job runs. A streaming node represents the amount of compute and memory resources allocated to your job. | --- ## Logical Name dimension -**Logical Name** is the input or output name for a Stream Analytics job. For example, assume that a Stream Analytics job has four inputs and five outputs. You'll see the four individual logical inputs and five individual logical outputs when you split input-related and output-related metrics by this dimension. ---<!--:::image type="content" source="./media/stream-analytics-job-metrics-dimensions/05-input-events-splitting-by-logic-name.png" alt-text="Screenshot that shows splitting the Input Events metric by Logical Name."::: --> ---The **Logical Name** dimension is available for filtering and splitting the following metrics: -- **Backlogged Input Events** -- **Data Conversion Errors**-- **Early Input Events**-- **Input Deserialization Errors**-- **Input Event Bytes**-- **Input Events**-- **Input Source Received**-- **Late Input Events**-- **Out-of-Order Events**-- **Output Events**-- **Watermark Delay** ## Node Name dimension -A streaming node represents a set of compute resources that's used to process your input data. Every six streaming units (SUs) translate to one node, which the service automatically manages on your behalf. For more information about the relationship between streaming units and streaming nodes, see [Understand and adjust streaming units](./stream-analytics-streaming-unit-consumption.md). --**Node Name** is a dimension at the streaming node level. It can help you to drill down certain metrics to the specific streaming node level. For example, you can split the **CPU % Utilization** metric by streaming node level to check the CPU utilization of an individual streaming node. ---The **Node Name** dimension is available for filtering and splitting the following metrics: -- **Backlogged Input Events**-- **CPU % Utilization (preview)** -- **Input Events**-- **Output Events**-- **SU (Memory) % Utilization**-- **Watermark Delay**- ## Partition ID dimension -When streaming data is ingested into the Azure Stream Analytics service for processing, the input data is distributed to streaming nodes according to the partitions in the input source. The **Partition ID** dimension is the ID of the input data partition from the input source. --For example, if the input source is an event hub, the partition ID is the event hub's partition ID. **Partition ID** in the input is the same as it is in the output. 
- -The **Partition ID** dimension is available for filtering and splitting the following metrics: -- **Backlogged Input Events**-- **Data Conversion Errors**-- **Early Input Events**-- **Input Deserialization Errors**-- **Input Event Bytes**-- **Input Events**-- **Input Source Received**-- **Late Input Events**-- **Output Events**-- **Watermark Delay** ## Next steps |
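The same splitting is available programmatically through Azure Monitor. A sketch that retrieves the Input Events metric filtered by the Logical Name dimension; the job resource ID is a placeholder, and the IDs `InputEvents`/`LogicalName` are assumed to be the ARM names behind the display names in this article:

```azurecli
# List Input Events split by Logical Name, in 5-minute buckets.
az monitor metrics list \
  --resource "<stream-analytics-job-resource-id>" \
  --metric "InputEvents" \
  --interval PT5M \
  --aggregation Total \
  --filter "LogicalName eq '*'"
```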
stream-analytics | Stream Analytics Job Metrics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-job-metrics.md | If you want to check a specific metric, select **Metrics** in the **Monitoring** ## Metrics available for Stream Analytics -Azure Stream Analytics provides the following metrics for you to monitor your job's health. --| Metric | Definition | -| - | - | -| **Backlogged Input Events** | Number of input events that are backlogged. A nonzero value for this metric implies that your job can't keep up with the number of incoming events. If this value is slowly increasing or is consistently nonzero, you should scale out your job. To learn more, see [Understand and adjust streaming units](stream-analytics-streaming-unit-consumption.md). | -| **Data Conversion Errors** | Number of output events that couldn't be converted to the expected output schema. To drop events that encounter this scenario, you can change the error policy to **Drop**. | -| **CPU % Utilization** (preview) | Percentage of CPU that your job utilizes. Even if this value is very high (90 percent or more), you shouldn't increase the number of SUs based on this metric alone. If the number of backlogged input events or watermark delays increases, you can then use this metric to determine if the CPU is the bottleneck. <br><br>This metric might have intermittent spikes. We recommend that you do scale tests to determine the upper bound of your job after which inputs are backlogged or watermark delays increase because of a CPU bottleneck. | -| **Early Input Events** | Events whose application time stamp is earlier than their arrival time by more than 5 minutes. | -| **Failed Function Requests** | Number of failed Azure Machine Learning function calls (if present). | -| **Function Events** | Number of events sent to the Azure Machine Learning function (if present). | -| **Function Requests** | Number of calls to the Azure Machine Learning function (if present). | -| **Input Deserialization Errors** | Number of input events that couldn't be deserialized. | -| **Input Event Bytes** | Amount of data that the Stream Analytics job receives, in bytes. You can use this metric to validate that events are being sent to the input source. | -| **Input Events** | Number of records deserialized from the input events. This count doesn't include incoming events that result in deserialization errors. Stream Analytics can ingest the same events multiple times in scenarios like internal recoveries and self-joins. Don't expect **Input Events** and **Output Events** metrics to match if your job has a simple pass-through query. | -| **Input Sources Received** | Number of messages that the job receives. For Azure Event Hubs, a message is a single `EventData` item. For Azure Blob Storage, a message is a single blob. <br><br>Note that input sources are counted before deserialization. If there are deserialization errors, input sources can be greater than input events. Otherwise, input sources can be less than or equal to input events because each message can contain multiple events. | -| **Late Input Events** | Events that arrived later than the configured tolerance window for late arrivals. [Learn more about Azure Stream Analytics event order considerations](./stream-analytics-time-handling.md). | -| **Out-of-Order Events** | Number of events received out of order that were either dropped or given an adjusted time stamp, based on the event ordering policy. 
This metric can be affected by the configuration of the **Out-of-Order Tolerance Window** setting. | -| **Output Events** | Amount of data that the Stream Analytics job sends to the output target, in number of events. | -| **Runtime Errors** | Total number of errors related to query processing. It excludes errors found while ingesting events or outputting results. | -| **SU (Memory) % Utilization** | Percentage of memory that your job utilizes. If this metric is consistently over 80 percent, the watermark delay is rising, and the number of backlogged events is rising, consider increasing streaming units (SUs). High utilization indicates that the job is using close to the maximum allocated resources. | -| **Watermark Delay** | Maximum watermark delay across all partitions of all outputs in the job. | ## Scenarios to monitor+Azure Stream Analytics provides a serverless, distributed streaming processing service. Jobs can run on one or more distributed streaming nodes, which the service automatically manages. The input data is partitioned and allocated to different streaming nodes for processing. -|Metric|Condition|Time aggregation|Threshold|Corrective actions| -|-|-|-|-|-| -|**SU (Memory) % Utilization**|Greater than|Average|80|Multiple factors increase the utilization of SUs. You can scale with query parallelization or increase the number of SUs. For more information, see [Leverage query parallelization in Azure Stream Analytics](stream-analytics-parallelization.md).| -|**CPU % Utilization**|Greater than|Average|90|This likely means that some operations (such as user-defined functions, user-defined aggregates, or complex input deserialization) are requiring a lot of CPU cycles. You can usually overcome this problem by increasing the number of SUs for the job.| -|**Runtime Errors**|Greater than|Total|0|Examine the activity or resource logs and make appropriate changes to the inputs, query, or outputs.| -|**Watermark Delay**|Greater than|Average|When the average value of this metric over the last 15 minutes is greater than the late arrival tolerance (in seconds). If you haven't modified the late arrival tolerance, the default is set to 5 seconds.|Try increasing the number of SUs or parallelizing your query. For more information on SUs, see [Understand and adjust streaming units](stream-analytics-streaming-unit-consumption.md#how-many-sus-are-required-for-a-job). For more information on parallelizing your query, see [Leverage query parallelization in Azure Stream Analytics](stream-analytics-parallelization.md).| -|**Input Deserialization Errors**|Greater than|Total|0|Examine the activity or resource logs and make appropriate changes to the input. For more information on resource logs, see [Troubleshoot Azure Stream Analytics by using resource logs](stream-analytics-job-diagnostic-logs.md).| ## Get help For further assistance, try the [Microsoft Q&A page for Azure Stream Analytics](/answers/topics/azure-stream-analytics.html). |
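The monitoring scenarios above translate directly into metric alert rules. A sketch for the watermark-delay case with the Azure CLI; resource IDs are placeholders, and the metric ID `OutputWatermarkDelaySeconds` is assumed to be the ARM name behind the Watermark Delay display name (verify against your job's metric definitions):

```azurecli
# Alert when average watermark delay over 15 minutes exceeds the 5-second late-arrival default.
az monitor metrics alert create \
  --name watermark-delay-alert \
  --resource-group myrg \
  --scopes "<stream-analytics-job-resource-id>" \
  --condition "avg OutputWatermarkDelaySeconds > 5" \
  --window-size 15m \
  --evaluation-frequency 5m \
  --action "<action-group-resource-id>"
```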
synapse-analytics | Synapse Workspace Understand What Role You Need | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/synapse-workspace-understand-what-role-you-need.md | You can open Synapse Studio and view details of the workspace and list any of it ### Resource management -You can create SQL pools, Data Explorer pools, Apache Spark pools, and Integration runtimes if you're an Azure Owner or Contributor on the workspace. When using ARM templates for automated deployment, you need to be an Azure Contributor on the resource group. +You can create SQL pools, Data Explorer pools, and Apache Spark pools if you are an Azure Owner or Contributor on the resource group. You can create an Integration Runtime if you are an Azure Owner or Contributor on the workspace. When using ARM templates for automated deployment, you need to be an Azure Contributor on the resource group. You can pause or scale a dedicated SQL pool, or configure a Spark pool or an integration runtime, if you're an Azure Owner or Contributor on the workspace or that resource. Task (I want to...) |Role (I need to be...)|Synapse RBAC permission/action ||Azure Owner or Contributor, or Reader on the workspace|none |List linked services or credentials or managed private endpoints|Synapse User|read SQL POOLS|-Create a dedicated SQL pool or a serverless SQL pool|Azure Owner or Contributor on the workspace|none +Create a dedicated SQL pool or a serverless SQL pool|Azure Owner or Contributor on the resource group|none Manage (pause or scale, or delete) a dedicated SQL pool|Azure Owner or Contributor on the SQL pool or workspace|none Create a SQL script</br>|Synapse User or </br>Azure Owner or Contributor on the workspace. </br></br>*Additional SQL permissions are required to run a SQL script, publish, or commit changes*.| List and open any published SQL script| Synapse Artifact User or Artifact Publisher, or Synapse Contributor|artifacts/read Publish a new or updated, or deleted SQL script|Synapse Artifact Publisher or Sy Commit changes to a SQL script to the Git repo|Requires Git permissions on the repo| Assign Active Directory Admin on the workspace (via workspace properties in the Azure portal)|Azure Owner or Contributor on the workspace | DATA EXPLORER POOLS|-Create a Data Explorer pool |Azure Owner or Contributor on the workspace|none +Create a Data Explorer pool |Azure Owner or Contributor on the resource group|none Manage (pause or scale, or delete) a Data Explorer pool|Azure Owner or Contributor on the Data Explorer pool or workspace|none Create a KQL script</br>|Synapse User.
</br></br>*Additional Data Explorer permissions are required to run a script, publish, or commit changes*.| List and open any published KQL script| Synapse Artifact User or Artifact Publisher, or Synapse Contributor|artifacts/read Run a KQL script on a Data Explorer pool| Data Explorer permissions on the pool Publish new, update, or delete KQL script|Synapse Artifact Publisher or Synapse Contributor|kqlScripts/write, delete Commit changes to a KQL script to the Git repo|Requires Git permissions on the repo| APACHE SPARK POOLS|-Create an Apache Spark pool|Azure Owner or Contributor on the workspace| +Create an Apache Spark pool|Azure Owner or Contributor on the resource group| Monitor Apache Spark applications| Synapse User|read View the logs for completed notebook and job execution |Synapse Monitoring Operator| Cancel any notebook or Spark job running on an Apache Spark pool|Synapse Compute Operator on the Apache Spark pool.|bigDataPools/useCompute |
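The resource-group-scoped prerequisite above amounts to a single role assignment. A minimal Azure CLI sketch with placeholder names:

```azurecli
# Grant Contributor at the resource group scope so the user can create SQL, Data Explorer, and Spark pools.
az role assignment create \
  --assignee user@contoso.com \
  --role "Contributor" \
  --scope "$(az group show --name my-synapse-rg --query id --output tsv)"
```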
virtual-desktop | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new.md | Azure Virtual Desktop updates regularly. This article is where you'll find out a Make sure to check back here often to keep up with new updates. +> [!TIP] +> See [What's new in documentation](whats-new-documentation.md), where we highlight new and updated articles for Azure Virtual Desktop. + ## April 2023 Here's what changed in April 2023: The [Azure Virtual Desktop Store app for Windows](users/connect-windows-azure-vi For more information about the public preview release version, check out [Use features of the Azure Virtual Desktop Store app for Windows when connecting to Azure Virtual Desktop (preview)](users/client-features-windows.md?toc=%2Fazure%2Fvirtual-desktop%2Ftoc.json&bc=%2Fazure%2Fvirtual-desktop%2Fbreadcrumb%2Ftoc.json), [What's new in the Azure Virtual Desktop Store App (preview)](whats-new-client-windows-azure-virtual-desktop-app.md), or read [our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop-blog/announcing-public-preview-of-the-new-azure-virtual-desktop-app/ba-p/3785698). -## Intune user-scope configuration for Windows 10 Enterprise multi-session VMs now generally available +### Intune user-scope configuration for Windows 10 Enterprise multi-session VMs now generally available Microsoft Intune user-scope configuration for Azure Virtual Desktop multi-session Virtual Machines (VMs) on Windows 10 and 11 are now generally available. With this feature, you are able to: |
virtual-machines | Basv2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/basv2.md | + + Title: 'Basv2 Series (preview)' #Required; page title is displayed in search results. 60 characters max. +description: Overview of AMD Basv2 Virtual Machine Series; #Required; this appears in search as the short description +++++ Last updated : 06/20/2022 #Required; mm/dd/yyyy format. Date the article was created or the last time it was tested and confirmed correct ++++# Basv2-series (Public Preview) ++Basv2-series virtual machines run on AMD's 3rd Generation EPYC™ 7763v processor in a multi-threaded configuration with up to 256 MB of L3 cache, providing low-cost, CPU-burstable, general-purpose virtual machines. Basv2-series virtual machines utilize a CPU credit model to track how much CPU is consumed: the virtual machine accumulates CPU credits when a workload is operating below the base CPU performance threshold, and uses credits when running above that threshold, until all of its credits are consumed. Upon consuming all the CPU credits, a Basv2-series virtual machine is throttled back to its base CPU performance until it accumulates enough credits to burst again. ++Basv2-series virtual machines offer a balance of compute, memory, and network resources, and are a cost-effective way to run a broad spectrum of general-purpose workloads, including large-scale microservices, small and medium databases, virtual desktops, and business-critical applications; they're also an affordable option to run your code repositories and dev/test environments. The Basv2-series offers virtual machines with up to 32 vCPUs and 128 GiB of RAM, a maximum network bandwidth of up to 6,250 Mbps, and a maximum uncached disk throughput of 600 MBps. Basv2-series virtual machines support Standard SSD, Standard HDD, and Premium SSD disk attachments, with remote SSD support by default; you can also attach Ultra Disk storage based on its regional availability. Disk storage is billed separately from virtual machines. [See pricing for disks](https://azure.microsoft.com/pricing/details/managed-disks/). 
+++[Premium Storage](premium-storage-performance.md): Supported<br> +[Premium Storage caching](premium-storage-performance.md): Supported<br> +[Live Migration](maintenance-and-updates.md): Supported<br> +[Memory Preserving Updates](maintenance-and-updates.md): Supported<br> +[VM Generation Support](generation-2.md): Generation 2<br> +[Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md)<sup>1</sup>: Supported<br> +[Ephemeral OS Disks](ephemeral-os-disks.md): Not Supported <br> +[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Not Supported <br> +<br> ++| Size | vCPU | RAM | Base CPU Performance of VM (%) | Initial Credits (#) | Credits banked/hour | Max Banked Credits (#) | Max uncached disk throughput: IOPS/MBps | Max burst uncached disk throughput: IOPS/MBps | Max Data Disks | Max Network Bandwidth (Gbps) | Max NICs | +|---|---|---|---|---|---|---|---|---|---|---|---| +| Standard_B2ats_v2 | 2 | 1 | 20% | 60 | 24 | 576 | 3750/85 | 10,000/960 | 4 | 6.25 | 2 | +| Standard_B2als_v2 | 2 | 4 | 30% | 60 | 24 | 576 | 3750/85 | 10,000/960 | 4 | 6.25 | 2 | +| Standard_B2as_v2 | 2 | 8 | 40% | 60 | 24 | 576 | 3750/85 | 10,000/960 | 4 | 6.25 | 2 | +| Standard_B4als_v2 | 4 | 8 | 30% | 120 | 48 | 1152 | 6,400/145 | 20,000/960 | 8 | 6.25 | 2 | +| Standard_B4as_v2 | 4 | 16 | 40% | 120 | 48 | 1152 | 6,400/145 | 20,000/960 | 8 | 6.25 | 2 | +| Standard_B8als_v2 | 8 | 16 | 30% | 240 | 96 | 2304 | 12,800/290 | 20,000/960 | 16 | 6.25 | 2 | +| Standard_B8as_v2 | 8 | 32 | 40% | 240 | 96 | 2304 | 12,800/290 | 20,000/960 | 16 | 6.25 | 2 | +| Standard_B16als_v2 | 16 | 32 | 30% | 480 | 192 | 4608 | 25,600/600 | 40,000/960 | 32 | 6.25 | 4 | +| Standard_B16as_v2 | 16 | 64 | 40% | 480 | 192 | 4608 | 25,600/600 | 40,000/960 | 32 | 6.25 | 4 | +| Standard_B32als_v2 | 32 | 64 | 30% | 960 | 384 | 9216 | 25,600/600 | 80,000/960 | 32 | 6.25 | 4 | +| Standard_B32as_v2 | 32 | 128 | 40% | 960 | 384 | 9216 | 25,600/600 | 80,000/960 | 32 | 6.25 | 4 | ++++++## Other sizes and information ++- [General purpose](sizes-general.md) +- [Memory optimized](sizes-memory.md) +- [Storage optimized](sizes-storage.md) +- [GPU optimized](sizes-gpu.md) +- [High performance compute](sizes-hpc.md) +- [Previous generations](sizes-previous-gen.md) ++Pricing Calculator: [Pricing Calculator](https://azure.microsoft.com/pricing/calculator/) ++More information on Disks Types: [Disk Types](./disks-types.md#ultra-disks) |
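To observe the credit model described above in practice, banked credits can be read from Azure Monitor. A sketch, assuming the preview Basv2 sizes expose the same `CPU Credits Remaining` and `CPU Credits Consumed` metrics as earlier B-series VMs; names are placeholders:

```azurecli
VM_ID=$(az vm show --resource-group myrg --name myvm --query id --output tsv)

# Banked credits over time, one average sample per 5 minutes.
az monitor metrics list \
  --resource "$VM_ID" \
  --metric "CPU Credits Remaining" \
  --interval PT5M \
  --aggregation Average
```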
virtual-machines | Bpsv2 Arm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/bpsv2-arm.md | + + Title: 'Bpsv2 Series (preview)' #Required; page title is displayed in search results. 60 characters max. +description: Overview of Bpsv2 ARM series; this appears in search as the short description +++++ Last updated : 06/09/2023 ++++# Bpsv2-series (public preview) ++The Bpsv2-series virtual machines are based on the Arm architecture, featuring the Ampere® Altra® Arm-based processor operating at 3.0 GHz, delivering outstanding price-performance for general-purpose workloads. These virtual machines offer a range of VM sizes, from 0.5 GiB up to 4 GiB of memory per vCPU, to meet the needs of applications that don't need the full performance of the CPU continuously, such as development and test servers, low-traffic web servers, small databases, microservices, servers for proofs of concept, build servers, and code repositories. These workloads typically have burstable performance requirements. Bpsv2-series VMs provide you with the ability to purchase a VM size with baseline performance that can build up credits when it's using less than its baseline performance. When the VM has accumulated credits, the VM can burst above the baseline using up to 100% of the vCPU when your application requires higher CPU performance. ++## Bpsv2-series +Bpsv2 VMs offer up to 16 vCPUs and 64 GiB of RAM and are optimized for scale-out and most enterprise workloads. Bpsv2-series virtual machines support Standard SSD, Standard HDD, and Premium SSD disk types with no local SSD support (that is, no local or temp disk); you can also attach Ultra Disk storage based on its regional availability. Disk storage is billed separately from virtual machines. [See pricing for disks](https://azure.microsoft.com/pricing/details/managed-disks/). 
+++[Premium Storage](premium-storage-performance.md): Supported<br> +[Premium Storage caching](premium-storage-performance.md): Supported<br> +[Live Migration](maintenance-and-updates.md): Supported<br> +[Memory Preserving Updates](maintenance-and-updates.md): Supported<br> +[VM Generation Support](generation-2.md): Generation 2<br> +[Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md)<sup>1</sup>: Supported<br> +[Ephemeral OS Disks](ephemeral-os-disks.md): Not Supported <br> +[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Not Supported <br> +<br> ++| Size | vCPU | RAM | Base CPU Performance / vCPU (%) | Initial Credits (#) | Credits banked/hour | Max Banked Credits (#) | Max uncached disk throughput: IOPS/MBps | Max burst uncached disk throughput: IOPS/MBps | Max Data Disks | Max Network Bandwidth (Gbps) (up to) | Max NICs | +|---|---|---|---|---|---|---|---|---|---|---|---| +| Standard_B2pts_v2 | 2 | 1 | 20% | 60 | 24 | 576 | 3750/85 | 10,000/960 | 4 | 6.250 | 2 | +| Standard_B2pls_v2 | 2 | 4 | 30% | 60 | 24 | 576 | 3750/85 | 10,000/960 | 4 | 6.250 | 2 | +| Standard_B2ps_v2 | 2 | 8 | 40% | 60 | 24 | 576 | 3750/85 | 10,000/960 | 4 | 6.250 | 2 | +| Standard_B4pls_v2 | 4 | 8 | 30% | 120 | 48 | 1152 | 6,400/145 | 20,000/960 | 8 | 6.250 | 2 | +| Standard_B4ps_v2 | 4 | 16 | 40% | 120 | 48 | 1152 | 6,400/145 | 20,000/960 | 8 | 6.250 | 2 | +| Standard_B8pls_v2 | 8 | 16 | 30% | 240 | 96 | 2304 | 12,800/290 | 20,000/960 | 16 | 6.250 | 2 | +| Standard_B8ps_v2 | 8 | 32 | 40% | 240 | 96 | 2304 | 12,800/290 | 20,000/960 | 16 | 6.250 | 2 | +| Standard_B16pls_v2 | 16 | 32 | 30% | 480 | 192 | 4608 | 25,600/600 | 40,000/960 | 32 | 6.250 | 4 | +| Standard_B16ps_v2 | 16 | 64 | 40% | 480 | 192 | 4608 | 25,600/600 | 40,000/960 | 32 | 6.250 | 4 | ++<sup>*</sup> Accelerated networking is required and turned on by default on all Bpsv2 machines.<br> +++++++## Other sizes and information ++- [General purpose](sizes-general.md) +- [Memory optimized](sizes-memory.md) +- [Storage optimized](sizes-storage.md) +- [GPU optimized](sizes-gpu.md) +- [High performance compute](sizes-hpc.md) +- [Previous generations](sizes-previous-gen.md) ++++More information on Disks Types: [Disk Types](./disks-types.md#ultra-disks) |
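A minimal sketch of deploying one of the sizes in the table with the Azure CLI, assuming the Bpsv2 preview is available in your region and subscription; resource names are placeholders (the `Ubuntu2204` image alias resolves to an Arm64 image when paired with an Arm64 size):

```azurecli
az vm create \
  --resource-group myrg \
  --name my-arm-vm \
  --size Standard_B2pls_v2 \
  --image Ubuntu2204 \
  --admin-username azureuser \
  --generate-ssh-keys
```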
virtual-machines | Bsv2 Series | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/bsv2-series.md | + + Title: 'Bsv2 Series (preview)' #Required; page title is displayed in search results. 60 characters max. +description: Overview of Intel Bsv2 Virtual Machine Series; #Required; this appears in search as the short description +++++ Last updated : 06/20/2022 #Required; mm/dd/yyyy format. Date the article was created or the last time it was tested and confirmed correct ++++# Bsv2-series (Public Preview) ++Bsv2-series virtual machines run on the 3rd Generation Intel® Xeon® Platinum 8370C (Ice Lake) processor in a [hyper-threaded](https://www.intel.com/content/www/us/en/architecture-and-technology/hyper-threading/hyper-threading-technology.html) configuration, providing low-cost, CPU-burstable, general-purpose virtual machines. Bsv2-series virtual machines utilize a CPU credit model to track how much CPU is consumed: the virtual machine accumulates CPU credits when a workload is operating below the base CPU performance threshold, and uses credits when running above that threshold, until all of its credits are consumed. Upon consuming all the CPU credits, a Bsv2-series virtual machine is throttled back to its base CPU performance until it accumulates enough credits to burst again. ++Bsv2-series virtual machines offer a balance of compute, memory, and network resources and are a cost-effective way to run a broad spectrum of general-purpose workloads, including large-scale microservices, small and medium databases, virtual desktops, and business-critical applications; they're also an affordable option to run your code repositories and dev/test environments. The Bsv2-series offers virtual machines with up to 32 vCPUs and 128 GiB of RAM, a maximum network bandwidth of up to 6,250 Mbps, and a maximum uncached disk throughput of 600 MBps. Bsv2-series virtual machines support Standard SSD, Standard HDD, and Premium SSD disk attachments, with remote SSD support by default; you can also attach Ultra Disk storage based on its regional availability. Disk storage is billed separately from virtual machines. [See pricing for disks](https://azure.microsoft.com/pricing/details/managed-disks/). 
+++[Premium Storage](premium-storage-performance.md): Supported<br> +[Premium Storage caching](premium-storage-performance.md): Supported<br> +[Live Migration](maintenance-and-updates.md): Supported<br> +[Memory Preserving Updates](maintenance-and-updates.md): Supported<br> +[VM Generation Support](generation-2.md): Generation 2<br> +[Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md)<sup>1</sup>: Supported<br> +[Ephemeral OS Disks](ephemeral-os-disks.md): Not Supported <br> +[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Not Supported <br> +<br> ++| Size | vCPU | RAM | Base CPU Performance of VM (%) | Initial Credits (#) | Credits banked/hour | Max Banked Credits (#) | Max uncached disk throughput: IOPS/MBps | Max burst uncached disk throughput: IOPS/MBps | Max Data Disks | Max Network Bandwidth (Gbps) | Max NICs | +|---|---|---|---|---|---|---|---|---|---|---|---| +| Standard_B2ts_v2 | 2 | 1 | 20% | 60 | 24 | 576 | 3750/85 | 10,000/960 | 4 | 6.250 | 2 | +| Standard_B2ls_v2 | 2 | 4 | 30% | 60 | 24 | 576 | 3750/85 | 10,000/960 | 4 | 6.250 | 2 | +| Standard_B2s_v2 | 2 | 8 | 40% | 60 | 24 | 576 | 3750/85 | 10,000/960 | 4 | 6.250 | 2 | +| Standard_B4ls_v2 | 4 | 8 | 30% | 120 | 48 | 1152 | 6,400/145 | 20,000/960 | 8 | 6.250 | 2 | +| Standard_B4s_v2 | 4 | 16 | 40% | 120 | 48 | 1152 | 6,400/145 | 20,000/960 | 8 | 6.250 | 2 | +| Standard_B8ls_v2 | 8 | 16 | 30% | 240 | 96 | 2304 | 12,800/290 | 20,000/960 | 16 | 3.250 | 2 | +| Standard_B8s_v2 | 8 | 32 | 40% | 240 | 96 | 2304 | 12,800/290 | 20,000/960 | 16 | 6.250 | 2 | +| Standard_B16ls_v2 | 16 | 32 | 30% | 480 | 192 | 4608 | 25,600/600 | 40,000/960 | 32 | 6.250 | 4 | +| Standard_B16s_v2 | 16 | 64 | 40% | 480 | 192 | 4608 | 25,600/600 | 40,000/960 | 32 | 6.250 | 4 | +| Standard_B32ls_v2 | 32 | 64 | 30% | 960 | 384 | 9216 | 51,200/600 | 80,000/960 | 32 | 6.250 | 4 | +| Standard_B32s_v2 | 32 | 128 | 40% | 960 | 384 | 9216 | 51,200/600 | 80,000/960 | 32 | 6.250 | 4 | +++<sup>*</sup> These IOPS values can be guaranteed by using [Gen2 VMs](generation-2.md)<br> +<sup>1</sup> Accelerated networking is required and turned on by default on all Bsv2 virtual machines.<br> +<sup>2</sup> Bsv2-series virtual machines can [burst](disk-bursting.md) their disk performance and get up to their bursting max for up to 30 minutes at a time. +++## Other sizes and information ++- [General purpose](sizes-general.md) +- [Memory optimized](sizes-memory.md) +- [Storage optimized](sizes-storage.md) +- [GPU optimized](sizes-gpu.md) +- [High performance compute](sizes-hpc.md) +- [Previous generations](sizes-previous-gen.md) ++Pricing Calculator: [Pricing Calculator](https://azure.microsoft.com/pricing/calculator/) ++More information on Disks Types: [Disk Types](./disks-types.md#ultra-disks) |
virtual-machines | Maintenance Notifications Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/maintenance-notifications-portal.md | You can use the Azure portal and look for VMs scheduled for maintenance. ## Notification and alerts in the portal -Azure communicates a schedule for planned maintenance by sending an email to the subscription owner and co-owners group. You can add additional recipients and channels to this communication by creating Azure activity log alerts. For more information, see [Create activity log alerts on service notifications](../service-health/alerts-activity-log-service-notifications-portal.md). +[Azure Service Health](https://azure.microsoft.com/get-started/azure-portal/service-health/#overview) has a dedicated tab for planned maintenance, where all Azure services (for example, Virtual Machines) publish their upcoming maintenance events. -Make sure you set the **Event type** as **Planned maintenance**, and **Services** as **Virtual Machine Scale Sets** and/or **Virtual Machines**. +Virtual Machine-related maintenance notifications are available under [Service Health](https://aka.ms/azureservicehealth) in the Azure portal. For some specific Virtual Machine planned maintenance scenarios, Azure might communicate the schedule by sending an additional email (besides Service Health) to the Subscription Classic Admin, Co-Admin, and Subscription Owners group. ++[Azure Service Health](https://azure.microsoft.com/get-started/azure-portal/service-health/#overview) enables users to configure their own custom Service Health alerts for the Planned Maintenance category. With Azure Service Health alerts, you can assign different action groups to include additional recipients and channels (such as emails and SMS) based on event or service type, like Virtual Machine maintenance in this context. For more information, see [Create activity log alerts on service notifications](../service-health/alerts-activity-log-service-notifications-portal.md). ++While creating alerts specific to Virtual Machine maintenance, make sure you set the **Event type** as **Planned maintenance** and **Services** as **Virtual Machine Scale Sets** and/or **Virtual Machines**. ## Start Maintenance on your VM from the portal |
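The alert described above can also be created from the Azure CLI. A sketch, assuming an existing action group; the `properties.incidentType=Maintenance` condition is assumed to narrow the alert to planned maintenance Service Health events:

```azurecli
az monitor activity-log alert create \
  --name planned-maintenance-alert \
  --resource-group myrg \
  --condition "category=ServiceHealth and properties.incidentType=Maintenance" \
  --action-group "<action-group-resource-id>" \
  --description "Notify on planned maintenance Service Health events"
```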
virtual-machines | Maintenance Notifications | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/maintenance-notifications.md | Azure periodically performs updates to improve the reliability, performance, and Planned maintenance that requires a reboot is scheduled in waves. Each wave has different scope (regions). -- A wave starts with a notification to customers. By default, the notification is sent to the subscription admin and co-admins. You can add more recipients and messaging options like email, SMS, and webhooks, using [Activity Log Alerts](../service-health/alerts-activity-log-service-notifications-portal.md). +- A wave starts with a notification to customers. Virtual Machine-related maintenance notifications are available under [Service Health](https://aka.ms/azureservicehealth) in the Azure portal. For some specific Virtual Machine planned maintenance scenarios, Azure may also communicate the schedule by sending an additional email to the Subscription Classic Admin, Co-Admin, and Subscription Owners group. [Azure Service Health](https://azure.microsoft.com/get-started/azure-portal/service-health/#overview) enables users to configure their own custom alerts for the Planned Maintenance category. With Azure Service Health alerts, you can add more recipients and messaging options like email, SMS, and webhooks using [Activity Log Alerts](../service-health/alerts-activity-log-service-notifications-portal.md). - Once a notification goes out, a *self-service window* is made available. During this window, you can query which of your virtual machines are affected and start maintenance based on your own scheduling needs. The self-service window is typically about 35 days. - After the self-service window, a *scheduled maintenance window* begins. At some point during this window, Azure schedules and applies the required maintenance to your virtual machine. For more information about high availability, see [Availability for virtual mach **Q: How do I get notified about planned maintenance?** -**A:** A planned maintenance wave starts by setting a schedule to one or more Azure regions. Soon after, an email notification is sent to the subscription admins, co-admins, owners, and contributors (One email per subscription with all recipients added). Additional channels and recipients for this notification could be configured using Activity Log Alerts. In case you deploy a virtual machine to a region where planned maintenance is already scheduled, you will not receive the notification but rather need to check the maintenance state of the VM. +**A:** A planned maintenance wave starts by setting a schedule to one or more Azure regions. Virtual Machine-related maintenance notifications are available under [Service Health](https://aka.ms/azureservicehealth) in the Azure portal. For some specific Virtual Machine planned maintenance scenarios, Azure may also communicate the schedule by sending an additional email (one email per subscription with all recipients added) to the Subscription Classic Admin, Co-Admin, and Subscription Owners group. ++[Azure Service Health](https://azure.microsoft.com/get-started/azure-portal/service-health/#overview) enables users to configure their own custom alerts for the Planned Maintenance category. With Azure Service Health alerts, you can add more recipients and messaging options like email, SMS, and webhooks using [Activity Log Alerts](../service-health/alerts-activity-log-service-notifications-portal.md). 
++If you deploy a virtual machine to a region where planned maintenance is already scheduled, you won't receive the notification; instead, you need to check the maintenance state of the VM. **Q: I don't see any indication of planned maintenance in the portal, PowerShell, or CLI. What is wrong?** |
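To check a VM's maintenance state from the CLI rather than the portal (for example, for the newly deployed VM case above); names are placeholders:

```azurecli
# Non-empty output means a maintenance window is scheduled for this VM.
az vm get-instance-view \
  --resource-group myrg \
  --name myvm \
  --query "instanceView.maintenanceRedeployStatus"

# During the self-service window, you can start maintenance yourself.
az vm perform-maintenance --resource-group myrg --name myvm
```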
virtual-network | Accelerated Networking Mana Linux | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/accelerated-networking-mana-linux.md | + + Title: Linux VMs with Azure MANA +description: Learn how the Microsoft Azure Network Adapter can improve the networking performance of Linux VMs on Azure. +++ Last updated : 07/10/2023++++# Linux VMs with Azure MANA ++Learn how to use the Microsoft Azure Network Adapter (MANA) to improve the performance and availability of Linux virtual machines in Azure. ++For Windows support, see [Windows VMs with Azure MANA](./accelerated-networking-mana-windows.md). ++For more information about Azure MANA, see [Microsoft Azure Network Adapter (MANA) overview](./accelerated-networking-mana-overview.md). ++> [!IMPORTANT] +> Azure MANA is currently in PREVIEW. +> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. ++## Supported Marketplace Images +Several [Azure marketplace](https://learn.microsoft.com/marketplace/azure-marketplace-overview) Linux images have built-in support for Azure MANA's Ethernet driver. ++- Ubuntu 20.04 LTS +- Ubuntu 22.04 LTS +- Red Hat Enterprise Linux 8.8 +- Red Hat Enterprise Linux 9.2 +- SUSE Linux Enterprise Server 15 SP4 +- Debian 12 "Bookworm" +- Oracle Linux 9.0 ++>[!NOTE] +>None of the current Linux distros in Azure Marketplace are on a 6.2 or later kernel, which is required for RDMA/InfiniBand and DPDK. If you use an existing Marketplace Linux image, you will need to update the kernel. ++## Check status of MANA support +Because Azure MANA's feature set requires both host hardware and VM software components, there are several checks required to ensure MANA is working properly. ++### Azure portal check ++Ensure that you have Accelerated Networking enabled on at least one of your NICs: +1. From the Azure portal page for the VM, select Networking from the left menu. +1. On the Networking settings page, select the Network Interface. +1. On the NIC Overview page, under Essentials, note whether Accelerated networking is set to Enabled or Disabled. ++### Hardware check ++When Accelerated Networking is enabled, the underlying MANA NIC can be identified as a PCI device in the Virtual Machine. ++``` +$ lspci +7870:00:00.0 Ethernet controller: Microsoft Corporation Device 00ba +``` ++### Kernel version check +Verify your VM has a MANA Ethernet driver installed. ++``` +$ grep /mana*.ko /lib/modules/$(uname -r)/modules.builtin || find /lib/modules/$(uname -r)/kernel -name mana*.ko* ++kernel/drivers/net/ethernet/microsoft/mana/mana.ko +``` ++## Kernel update ++Ethernet drivers for MANA are included in kernel 5.15 and up. Linux support for features such as InfiniBand/RDMA and DPDK is included in kernel 6.2. Prior or forked kernel versions (5.15 and 6.1) require backported support. ++To update your VM's Linux kernel, check the docs for your specific distro. ++## Verify traffic is flowing through the MANA adapter ++Each vNIC configured for the VM with Accelerated Networking enabled will result in two network interfaces in the VM. 
For example, eth0 and enP30832p0s0 in a single-NIC configuration: ++``` +$ ip link +1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000 + link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 +2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000 + link/ether 00:22:48:71:c2:8c brd ff:ff:ff:ff:ff:ff + alias Network Device +3: enP30832p0s0: <BROADCAST,MULTICAST,CHILD,UP,LOWER_UP> mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000 + link/ether 00:22:48:71:c2:8c brd ff:ff:ff:ff:ff:ff + altname enP30832s1296119428 +``` ++The eth0 interface is the primary port serviced by the netvsc driver and the routable interface for the vNIC. The associated enP* interface represents the MANA Virtual Function (VF) and is bound to the eth0 interface in this case. You can get the packet and byte counts of the MANA Virtual Function (VF) from the routable ethN interface: +``` +$ ethtool -S eth0 | grep -E "^[ \t]+vf" + vf_rx_packets: 226418 + vf_rx_bytes: 99557501 + vf_tx_packets: 300422 + vf_tx_bytes: 76231291 + vf_tx_dropped: 0 +``` ++## Next Steps ++- [TCP/IP Performance Tuning for Azure VMs](./virtual-network-tcpip-performance-tuning.md) +- [Proximity Placement Groups](../virtual-machines/co-location.md) +- [Monitor Virtual Network](./monitor-virtual-network.md) |
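Besides the portal check described in this article, a NIC's Accelerated Networking state can also be read from the Azure CLI; names are placeholders:

```azurecli
# Returns true when Accelerated Networking is enabled on the NIC.
az network nic show \
  --resource-group myrg \
  --name myvm-nic \
  --query enableAcceleratedNetworking
```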
virtual-network | Accelerated Networking Mana Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/accelerated-networking-mana-overview.md | + + Title: Microsoft Azure Network Adapter (MANA) overview +description: Learn how the Microsoft Azure Network Adapter can improve the networking performance of Azure VMs. +++ Last updated : 07/10/2023++++# Microsoft Azure Network Adapter (MANA) overview ++Learn how to use the Microsoft Azure Network Adapter (MANA) to improve the performance and availability of virtual machines in Azure. MANA is a next-generation network interface that provides stable forward-compatible device drivers for Windows and Linux operating systems. MANA hardware and software are engineered by Microsoft and take advantage of the latest advancements in cloud networking technology. ++> [!IMPORTANT] +> Azure MANA is currently in PREVIEW. +> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. ++## Compatibility +Azure MANA supports several VM operating systems. While your VM might be running a supported OS, you may need to update the kernel (Linux) or install drivers (Windows). ++MANA maintains feature parity with previous Azure networking features. VMs run on hardware with both Mellanox and MANA NICs, so existing 'mlx4' and 'mlx5' support still needs to be present. ++### Supported Marketplace Images +Several [Azure Marketplace](https://learn.microsoft.com/marketplace/azure-marketplace-overview) images have built-in support for Azure MANA's Ethernet driver. ++#### Linux: +- Ubuntu 20.04 LTS +- Ubuntu 22.04 LTS +- Red Hat Enterprise Linux 8.8 +- Red Hat Enterprise Linux 9.2 +- SUSE Linux Enterprise Server 15 SP4 +- Debian 12 "Bookworm" +- Oracle Linux 9.0 ++>[!NOTE] +>None of the current Linux distros in Azure Marketplace are on a 6.2 or later kernel, which is required for RDMA/InfiniBand and DPDK. If you use an existing Marketplace Linux image, you will need to update the kernel. ++#### Windows: +- Windows Server 2016 +- Windows Server 2019 +- Windows Server 2022 ++### Custom images and legacy VMs +We recommend using an operating system with support for MANA to maximize performance. In instances where the operating system doesn't or can't support MANA, network connectivity is provided through the hypervisor's virtual switch. The virtual switch is also used during some infrastructure servicing events where the Virtual Function (VF) is revoked. ++### Using DPDK +Utilizing DPDK on MANA hardware requires Linux kernel 6.2 or later, or a backport of the Ethernet and InfiniBand drivers from the latest Linux kernel. It also requires specific versions of DPDK and user-space drivers. ++DPDK requires the following set of drivers: +1. [Linux kernel Ethernet driver](https://github.com/torvalds/linux/tree/master/drivers/net/ethernet/microsoft/mana) (5.15 kernel and later) +1. [Linux kernel InfiniBand driver](https://github.com/torvalds/linux/tree/master/drivers/infiniband/hw/mana) (6.2 kernel and later) +1. [DPDK MANA poll-mode driver](https://github.com/DPDK/dpdk/tree/main/drivers/net/mana) (DPDK 22.11 and later) +1. [Libmana user-space drivers](https://github.com/linux-rdma/rdma-core/tree/master/providers/mana) (rdma-core v44 and later) ++DPDK only functions on Linux VMs. 
++## Evaluating performance +Differences in VM SKUs, operating systems, applications, and tuning parameters can all affect network performance on Azure. For this reason, we recommend that you benchmark and test your workloads to ensure you achieve the expected network performance. +For information on testing and optimizing network performance in Azure, see [TCP/IP performance tuning](/azure/virtual-network/virtual-network-tcpip-performance-tuning) and [VM network throughput](/azure/virtual-network/virtual-machine-network-throughput). ++## Start using Azure MANA +Tutorials for each supported OS type are available for you to get started: ++For Linux support, see [Linux VMs with Azure MANA](./accelerated-networking-mana-linux.md). ++For Windows support, see [Windows VMs with Azure MANA](./accelerated-networking-mana-windows.md). ++## Next Steps ++- [TCP/IP Performance Tuning for Azure VMs](./virtual-network-tcpip-performance-tuning.md) +- [Proximity Placement Groups](../virtual-machines/co-location.md) +- [Monitor Virtual Network](./monitor-virtual-network.md) |
virtual-network | Accelerated Networking Mana Windows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/accelerated-networking-mana-windows.md | + + Title: Windows VMs with Azure MANA +description: Learn how the Microsoft Azure Network Adapter can improve the networking performance of Windows VMs on Azure. +++ Last updated : 07/10/2023++++# Windows VMs with Azure MANA ++Learn how to use the Microsoft Azure Network Adapter (MANA) to improve the performance and availability of Windows virtual machines in Azure. ++For Linux support, see [Linux VMs with Azure MANA](./accelerated-networking-mana-linux.md). ++For more information about Azure MANA, see [Microsoft Azure Network Adapter (MANA) overview](./accelerated-networking-mana-overview.md). ++> [!IMPORTANT] +> Azure MANA is currently in PREVIEW. +> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. ++## Supported Marketplace Images +Several [Azure marketplace](https://learn.microsoft.com/marketplace/azure-marketplace-overview) Windows images have built-in support for Azure MANA's Ethernet driver. ++Windows: +- Windows Server 2016 +- Windows Server 2019 +- Windows Server 2022 ++## Check status of MANA support +Because Azure MANA's feature set requires both host hardware and VM driver software components, several checks are required to ensure MANA functions properly on your VM. ++### Azure portal check ++Ensure that you have Accelerated Networking enabled on at least one of your NICs: +1. From the Azure portal page for the VM, select Networking from the left menu. +1. On the Networking settings page, select the Network Interface. +1. On the NIC Overview page, under Essentials, note whether Accelerated networking is set to Enabled or Disabled. ++### Hardware check ++When Accelerated Networking is enabled, the underlying MANA NIC can be identified as a PCI device in the Virtual Machine. ++>[!NOTE] +>When multiple NICs are configured on MANA-supported hardware, there will still only be one PCIe Virtual Function assigned to the VM. MANA is designed such that all VM NICs interact with the same PCIe Virtual Function. Since network resource limits are set at the VM SKU level, this has no impact on performance. ++### Driver check +There are several ways to verify your VM has a MANA Ethernet driver installed: ++#### PowerShell: +```powershell +PS C:\Users\testVM> Get-NetAdapter ++Name InterfaceDescription ifIndex Status MacAddress LinkSpeed +- -- - - +Ethernet 4 Microsoft Hyper-V Network Adapter #2 10 Up 00-00-AA-AA-00-AA 200 Gbps +Ethernet 5 Microsoft Azure Network Adapter #3 7 Up 11-11-BB-BB-11-BB 200 Gbps +``` ++#### Device Manager +1. Open Device Manager. +2. In Device Manager, you should see the Hyper-V Network Adapter and the Microsoft Azure Network Adapter (MANA). ++ ++## Driver install ++If your VM has both portal and hardware support for MANA but doesn't have drivers installed, Windows drivers can be downloaded [here](https://aka.ms/manawindowsdrivers). ++Installation is similar to other Windows device drivers. A readme file with more detailed instructions is included in the download. 
+++## Verify traffic is flowing through the MANA adapter ++In PowerShell, run the following command: ++```powershell +PS C:\ > Get-NetAdapter | Where-Object InterfaceDescription -Like "*Microsoft Azure Network Adapter*" | Get-NetAdapterStatistics ++Name ReceivedBytes ReceivedUnicastPackets SentBytes SentUnicastPackets +- - - +Ethernet 5 1230513627217 22739256679 ...724576506362 381331993845 +``` ++## Next Steps ++- [TCP/IP Performance Tuning for Azure VMs](./virtual-network-tcpip-performance-tuning.md) +- [Proximity Placement Groups](../virtual-machines/co-location.md) +- [Monitor Virtual Network](./monitor-virtual-network.md) |
virtual-network | Accelerated Networking Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/accelerated-networking-overview.md | +>[!NOTE] +>For more information on Microsoft Azure Network Adapter (MANA) preview, please refer to the [Azure MANA Docs](./accelerated-networking-mana-overview.md) + The following diagram illustrates how two VMs communicate with and without Accelerated Networking. :::image type="content" source="./media/create-vm-accelerated-networking/accelerated-networking.png" alt-text="Screenshot that shows communication between Azure VMs with and without Accelerated Networking."::: |