Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
active-directory | Use Scim To Provision Users And Groups | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/use-scim-to-provision-users-and-groups.md | The following table lists an example of required attributes: |lastName|name.familyName|surName| |workMail|emails[type eq "work"].value|Mail| |manager|manager|manager|-|tag|urn:ietf:params:scim:schemas:extension:CustomExtensionName:2.0:User:tag|extensionAttribute1| +|tag|`urn:ietf:params:scim:schemas:extension:CustomExtensionName:2.0:User:tag`|extensionAttribute1| |status|active|isSoftDeleted (computed value not stored on user)| The following JSON payload shows an example SCIM schema: It helps to categorize between `/User` and `/Group` to map any default user attr The following table lists an example of user attributes: -| Azure AD user | urn:ietf:params:scim:schemas:extension:enterprise:2.0:User | +| Azure AD user | `urn:ietf:params:scim:schemas:extension:enterprise:2.0:User` | | | | | IsSoftDeleted |active | |department| `urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department`| The following table lists an example of user attributes: The following table lists an example of group attributes: -| Azure AD group | urn:ietf:params:scim:schemas:core:2.0:Group | +| Azure AD group | `urn:ietf:params:scim:schemas:core:2.0:Group` | | | | | displayName |displayName | | members |members | |
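As a hedged illustration of the attribute mappings quoted in the row above (none of this payload comes from the article itself), the sketch below builds the kind of SCIM `/Users` payload those mappings produce. The core `User` schema and the enterprise extension URN are standard SCIM 2.0 (RFC 7643); `CustomExtensionName` and every sample value are placeholders.

```python
import json

# Minimal SCIM 2.0 user payload matching the mappings listed above.
# CustomExtensionName is the article's placeholder extension name; replace it with your own.
scim_user = {
    "schemas": [
        "urn:ietf:params:scim:schemas:core:2.0:User",
        "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User",
        "urn:ietf:params:scim:schemas:extension:CustomExtensionName:2.0:User",
    ],
    "userName": "alice@contoso.com",
    "name": {"familyName": "Smith", "givenName": "Alice"},        # lastName -> name.familyName
    "emails": [{"type": "work", "value": "alice@contoso.com"}],   # workMail -> emails[type eq "work"].value
    "active": True,                                               # status -> active
    "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User": {
        "department": "Finance",
        "manager": {"value": "<manager-object-id>"},
    },
    "urn:ietf:params:scim:schemas:extension:CustomExtensionName:2.0:User": {
        "tag": "<value mapped from extensionAttribute1>"
    },
}

print(json.dumps(scim_user, indent=2))
```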
active-directory | Daemon Quickstart Portal Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/daemon-quickstart-portal-python.md | -> logging.info("No suitable token exists in cache. Let's get a new one from AAD.") +> logging.info("No suitable token exists in cache. Let's get a new one from Azure AD.") > result = app.acquire_token_for_client(scopes=config["scope"]) > ``` > |
active-directory | Desktop Quickstart Portal Nodejs Desktop | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/desktop-quickstart-portal-nodejs-desktop.md | -> > [!div class="nextstepaction"] -> > [Make this change for me]() +> +> <button id="makechanges" class="nextstepaction configure-app-button"> Make these changes for me </button> > > > [!div id="appconfigured" class="alert alert-info"] > >  Your application is configured with these attributes. |
active-directory | Desktop Quickstart Portal Uwp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/desktop-quickstart-portal-uwp.md | Title: "Quickstart: Sign in users and call Microsoft Graph in a Universal Windows Platform app" description: In this quickstart, learn how a Universal Windows Platform (UWP) application can get an access token and call an API protected by Microsoft identity platform. -+ |
active-directory | Desktop Quickstart Portal Wpf | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/desktop-quickstart-portal-wpf.md | -> ## Quickstart: Acquire a token and call the Microsoft Graph API from a Windows desktop application +> # Quickstart: Acquire a token and call the Microsoft Graph API from a Windows desktop application > > In this quickstart, you download and run a code sample that demonstrates how a Windows Presentation Foundation (WPF) application can sign in users and get an access token to call the Microsoft Graph API. > |
active-directory | Msal Js Pass Custom State Authentication Request | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-js-pass-custom-state-authentication-request.md | -The state parameter can also be used to encode information of the app's state before redirect. You can pass the user's state in the app, such as the page or view they were on, as input to this parameter. The MSAL.js library allows you to pass your custom state as state parameter in the `Request` object: +The state parameter can also be used to encode information of the app's state before redirect. You can pass the user's state in the app, such as the page or view they were on, as input to this parameter. The MSAL.js library allows you to pass your custom state as state parameter in the [Request](https://azuread.github.io/microsoft-authentication-library-for-js/ref/modules/_azure_msal_browser.html#redirectrequest) object. For example: ```javascript-// Request type -export type AuthenticationParameters = { - scopes?: Array<string>; - extraScopesToConsent?: Array<string>; - prompt?: string; - extraQueryParameters?: QPDict; - claimsRequest?: string; - authority?: string; - state?: string; - correlationId?: string; - account?: Account; - sid?: string; - loginHint?: string; - forceRefresh?: boolean; -}; -``` --> [!Note] -> If you would like to skip a cached token and go to the server, please pass in the boolean `forceRefresh` into the AuthenticationParameters object used to make a login/token request. -> `forceRefresh` should not be used by default, because of the performance impact on your application. -> Relying on the cache will give your users a better experience. -> Skipping the cache should only be used in scenarios where you know the currently cached data does not have up-to-date information. -> Such as an Admin tool that adds roles to a user that needs to get a new token with updated roles. +import {PublicClientApplication} from "@azure/msal-browser"; -For example: +const myMsalObj = new PublicClientApplication({ + clientId: "ENTER_CLIENT_ID_HERE" +}); -```javascript let loginRequest = {- scopes: ["user.read", "user.write"], + scopes: ["user.read"], state: "page_url" } -myMSALObj.loginPopup(loginRequest); +myMSALObj.loginRedirect(loginRequest); ``` -The passed in state is appended to the unique GUID set by MSAL.js when sending the request. When the response is returned, MSAL.js checks for a state match and then returns the custom passed in state in the `Response` object as `accountState`. --```javascript -export type AuthResponse = { - uniqueId: string; - tenantId: string; - tokenType: string; - idToken: IdToken; - accessToken: string; - scopes: Array<string>; - expiresOn: Date; - account: Account; - accountState: string; -}; -``` +The passed in state is appended to the unique GUID set by MSAL.js when sending the request. When the response is returned, MSAL.js checks for a state match and then returns the custom passed in state in the [Response](https://azuread.github.io/microsoft-authentication-library-for-js/ref/modules/_azure_msal_common.html#authenticationresult) object as `state`. -To learn more, read about [building a single-page application (SPA)](scenario-spa-overview.md) using MSAL.js. +To learn more, read about [building a single-page application (SPA)](scenario-spa-overview.md) using MSAL.js. |
active-directory | Msal Net Migration Public Client | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-migration-public-client.md | result = await context.AcquireTokenAsync(resource, clientId, // to a URL to consent: https://login.microsoftonline.com/common/oauth2/v2.0/authorize?client_id={clientId}&response_type=code&scope=user.read // AADSTS50079: The user is required to use multi-factor authentication.- // There is no mitigation - if MFA is configured for your tenant and AAD decides to enforce it, + // There is no mitigation - if MFA is configured for your tenant and Azure AD decides to enforce it, // you need to fallback to an interactive flows such as AcquireTokenInteractive or AcquireTokenByDeviceCode } catch (MsalServiceException ex) result = await context.AcquireTokenAsync(resource, clientId, catch (MsalClientException ex) { // Error Code: unknown_user Message: Could not identify logged in user- // Explanation: the library was unable to query the current Windows logged-in user or this user is not AD or AAD + // Explanation: the library was unable to query the current Windows logged-in user or this user is not AD or Azure AD // joined (work-place joined users are not supported). // Mitigation 1: on UWP, check that the application has the following capabilities: Enterprise Authentication, result = await context.AcquireTokenAsync(resource, clientId, // Error Code: integrated_windows_auth_not_supported_managed_user // Explanation: This method relies on a protocol exposed by Active Directory (AD). If a user was created in Azure // Active Directory without AD backing ("managed" user), this method will fail. Users created in AD and backed by- // AAD ("federated" users) can benefit from this non-interactive method of authentication. + // Azure AD ("federated" users) can benefit from this non-interactive method of authentication. // Mitigation: Use interactive authentication } } static async Task<AuthenticationResult> GetATokenForGraph() } catch (MsalUiRequiredException ex) {- // No token found in the cache or AAD insists that a form interactive auth is required (e.g. the tenant admin turned on MFA) + // No token found in the cache or Azure AD insists that a form interactive auth is required (e.g. the tenant admin turned on MFA) // If you want to provide a more complex user experience, check out ex.Classification return await AcquireByDeviceCodeAsync(pca); |
active-directory | Publisher Verification Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/publisher-verification-overview.md | App developers must meet a few requirements to complete the publisher verificati - The app must be registered in an Azure AD tenant and have a [publisher domain](howto-configure-publisher-domain.md) set. -- The domain of the email address that's used during MPN account verification must either match the publisher domain that's set for the app or be a DNS-verified [custom domain](../fundamentals/add-custom-domain.md) that's added to the Azure AD tenant.+- The domain of the email address that's used during MPN account verification must either match the publisher domain that's set for the app or be a DNS-verified [custom domain](../fundamentals/add-custom-domain.md) that's added to the Azure AD tenant. (**NOTE**__: the app's publisher domain can't be *.onmicrosoft.com to be publisher verified) - The user who initiates verification must be authorized to make changes both to the app registration in Azure AD and to the MPN account in Partner Center. The user who initiates the verification must have one of the required roles in both Azure AD and Partner Center. |
active-directory | Quickstart V2 Python Daemon | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-python-daemon.md | -> logging.info("No suitable token exists in cache. Let's get a new one from AAD.") +> logging.info("No suitable token exists in cache. Let's get a new one from Azure AD.") > result = app.acquire_token_for_client(scopes=config["scope"]) > ``` > |
active-directory | Scenario Daemon Acquire Token | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-daemon-acquire-token.md | result = None result = app.acquire_token_silent(config["scope"], account=None) if not result:- logging.info("No suitable token exists in cache. Let's get a new one from AAD.") + logging.info("No suitable token exists in cache. Let's get a new one from Azure AD.") result = app.acquire_token_for_client(scopes=config["scope"]) if "access_token" in result: |
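For reference, here is a minimal, hedged sketch of the full cache-then-acquire pattern the snippet above comes from, using MSAL for Python; the authority, client ID, secret, and scope values are placeholders, not content from the article.

```python
import logging
import msal

# Placeholder configuration: substitute your tenant ID, app (client) ID, and client secret.
config = {
    "authority": "https://login.microsoftonline.com/<your_tenant_id>",
    "client_id": "<your_client_id>",
    "secret": "<your_client_secret>",
    "scope": ["https://graph.microsoft.com/.default"],
}

app = msal.ConfidentialClientApplication(
    config["client_id"],
    authority=config["authority"],
    client_credential=config["secret"],
)

# Check the token cache first; only go to Azure AD when no suitable token exists.
result = app.acquire_token_silent(config["scope"], account=None)
if not result:
    logging.info("No suitable token exists in cache. Let's get a new one from Azure AD.")
    result = app.acquire_token_for_client(scopes=config["scope"])

if "access_token" in result:
    print("Access token acquired.")
else:
    print(result.get("error"), result.get("error_description"))
```

Passing `account=None` is the usual daemon pattern, because a client-credentials token isn't tied to any user account.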
active-directory | Scenario Daemon App Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-daemon-app-configuration.md | When you build a confidential client with client secrets, the [parameters.json]( "authority": "https://login.microsoftonline.com/<your_tenant_id>", "client_id": "your_client_id", "scope": [ "https://graph.microsoft.com/.default" ],- "secret": "The secret generated by AAD during your confidential app registration", + "secret": "The secret generated by Azure AD during your confidential app registration", "endpoint": "https://graph.microsoft.com/v1.0/users" } ``` When you build a confidential client with certificates, the [parameters.json](ht "authority": "https://login.microsoftonline.com/<your_tenant_id>", "client_id": "your_client_id", "scope": [ "https://graph.microsoft.com/.default" ],- "thumbprint": "790E... The thumbprint generated by AAD when you upload your public cert", + "thumbprint": "790E... The thumbprint generated by Azure AD when you upload your public cert", "private_key_file": "server.pem", "endpoint": "https://graph.microsoft.com/v1.0/users" } |
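To show how a `parameters.json` like the one quoted above is typically consumed, here is a hedged Python sketch for the certificate variant; the file path, the `requests` call, and the placeholder values are assumptions rather than content from the article.

```python
import json
import msal
import requests

# Load the configuration file shown above (the path is an assumption).
config = json.load(open("parameters.json"))

# For certificate credentials, MSAL for Python takes the certificate thumbprint
# plus the private key text as a dict.
app = msal.ConfidentialClientApplication(
    config["client_id"],
    authority=config["authority"],
    client_credential={
        "thumbprint": config["thumbprint"],
        "private_key": open(config["private_key_file"]).read(),
    },
)

result = app.acquire_token_for_client(scopes=config["scope"])
if "access_token" in result:
    # Call the protected API named in the config (Microsoft Graph in this example).
    graph_data = requests.get(
        config["endpoint"],
        headers={"Authorization": "Bearer " + result["access_token"]},
    ).json()
    print(json.dumps(graph_data, indent=2))
else:
    print(result.get("error"), result.get("error_description"))
```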
active-directory | Spa Quickstart Portal Javascript Auth Code | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/spa-quickstart-portal-javascript-auth-code.md | -> > [!div class="alert alert-info"] +> > [!div id="appconfigured" class="alert alert-info"] > >  Your application is configured with these attributes. > > ### Step 2: Download the project-> > [Tutorial: Sign in users and call Microsoft Graph](./tutorial-v2-javascript-auth-code.md) +> > [Tutorial: Sign in users and call Microsoft Graph](./tutorial-v2-javascript-auth-code.md) |
active-directory | Web App Quickstart Portal Aspnet Core | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-app-quickstart-portal-aspnet-core.md | +> > <button id="makechanges" class="nextstepaction configure-app-button"> Make these changes for me </button> > > > [!div id="appconfigured" class="alert alert-info"]-> > [ASP.NET Core web app tutorials on GitHub](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/) +> > [ASP.NET Core web app tutorials on GitHub](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/) |
active-directory | Web App Quickstart Portal Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-app-quickstart-portal-java.md | +> > <button id="makechanges" class="nextstepaction configure-app-button"> Make these changes for me </button> > > > [!div id="appconfigured" class="alert alert-info"] |
active-directory | Web App Quickstart Portal Node Js | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-app-quickstart-portal-node-js.md | +> > <button id="makechanges" class="nextstepaction configure-app-button"> Make these changes for me </button> > > > [!div id="appconfigured" class="alert alert-info"]-> > [Adding Auth to an existing web app - GitHub code sample >](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/samples/msal-node-samples/auth-code) +> > [Adding Auth to an existing web app - GitHub code sample >](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/samples/msal-node-samples/auth-code) |
active-directory | 3 Secure Access Plan | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/3-secure-access-plan.md | Title: Create a security plan for external access to Azure Active Directory + Title: Create a security plan for external access to resources description: Plan the security for external access to your organization's resources. -# Create a security plan for external access +# Create a security plan for external access to resources -Before you create an external-access security plan, ensure the following conditions are met. +Before you create an external-access security plan, review the following two articles, which add context and information for the security plan. -* [Determine your security posture for external access](1-secure-access-posture.md) +* [Determine your security posture for external access with Azure AD](1-secure-access-posture.md) * [Discover the current state of external collaboration in your organization](2-secure-access-current-state.md) +## Security plan documentation + For your security plan, document the following information: -* Applications and resources to be grouped for access +* Applications and resources grouped for access * Sign-in conditions for external users- * Device state, sign-in location, client application requirements, and user risk -* Policies that determine when to review and remove access -* User populations to be grouped for a similar experience + * Device state, sign-in location, client application requirements, user risk, etc. +* Policies to determine timing for reviews and access removal +* User populations grouped for similar experiences -After you document the information, use Microsoft identity and access management policies, or another identity provider (IdP) to implement the plan. +To implement the security plan, you can use Microsoft identity and access management policies, or another identity provider (IdP). -## Resources to be grouped for access +Learn more: [Identity and access management overview](/compliance/assurance/assurance-identity-and-access-management) -To group resources for access: +## Use groups for access -* Microsoft Teams groups files, conversation threads, and other resources. Formulate an external access strategy for Microsoft Teams. - * See, [Secure external access to Microsoft Teams, SharePoint, and OneDrive for Business](9-secure-access-teams-sharepoint.md) -* Use entitlement management access packages to create and delegate management of packages of applications, groups, teams, SharePoint sites, etc. +See the following links to articles about resource grouping strategies: ++* Microsoft Teams groups files, conversation threads, and other resources + * Formulate an external access strategy for Teams + * See, [Secure external access to Microsoft Teams, SharePoint, and OneDrive for Business with Azure AD](9-secure-access-teams-sharepoint.md) +* Use entitlement management access packages to create and delegate package management of applications, groups, teams, SharePoint sites, etc. 
* [Create a new access package in entitlement management](../governance/entitlement-management-access-package-create.md) * Apply Conditional Access policies to up to 250 applications, with the same access requirements * [What is Conditional Access?](../conditional-access/overview.md) -* Use Cross Tenant Access Settings Inbound Access to define access for application groups of external users +* Define access for external user application groups * [Overview: Cross-tenant access with Azure AD External Identities](../external-identities/cross-tenant-access-overview.md) -Document the applications to be grouped. Considerations include: +Document the grouped applications. Considerations include: -* **Risk profile** - Assess the risk if a bad actor gains access to an application. - * Identify application as high, medium, or low risk. Avoid grouping high-risk with low-risk. +* **Risk profile** - assess the risk if a bad actor gains access to an application + * Identify application as High, Medium, or Low risk. We recommend you don't group High-risk with Low-risk. * Document applications that can't be shared with external users-* **Compliance frameworks** - Determine compliance frameworks for apps +* **Compliance frameworks** - determine compliance frameworks for apps * Identify access and review requirements-* **Applications for roles or departments** - Assess applications to be grouped for a role or department access -* **Collaboration applications** - Identify collaboration applications external users can access, such as Teams and SharePoint +* **Applications for roles or departments** - assess applications grouped for role, or department, access +* **Collaboration applications** - identify collaboration applications external users can access, such as Teams or SharePoint * For productivity applications, external users might have licenses, or you might provide access -For application and resource group access by external users, document the following information: +Document the following information for application and resource group access by external users. * Descriptive group name, for example High_Risk_External_Access_Finance * Applications and resources in the group-* Application and resource owners and contact information -* Access is controlled by IT, or delegated to a business owner +* Application and resource owners and their contact information +* The IT team controls access, or control is delegated to a business owner * Prerequisites for access: background check, training, etc. * Compliance requirements to access resources * Challenges, for example multi-factor authentication (MFA) for some resources-* Cadence for reviews, by whom, and where it's documented +* Cadence for reviews, by whom, and where results are documented > [!TIP] > Use this type of governance plan for internal access. Consider the following risk-based policies to trigger MFA. * **Low** - MFA for some application sets * **Medium** - MFA when other risks are present-* **High** - External users always use MFA +* **High** - external users always use MFA Learn more: Use the following table to help assess policy to address risk. 
| | | | Device| Require compliant devices | | Mobile apps| Require approved apps |-| Identity protection is high risk| Require user to change password | +| Identity protection is High risk| Require user to change password | | Network location| To access confidential projects, require sign-in from an IP address range | -To use device state as policy input, the device is registered or joined to your tenant. Configure cross-tenant access settings must be configured to trust the device claims from the home tenant. See, [Modify inbound access settings](../external-identities/cross-tenant-access-settings-b2b-collaboration.md#modify-inbound-access-settings). +To use device state as policy input, register or join the device to your tenant. To trust the device claims from the home tenant, configure cross-tenant access settings. See, [Modify inbound access settings](../external-identities/cross-tenant-access-settings-b2b-collaboration.md#modify-inbound-access-settings). -You can use identity-protection risk policies. However, mitigate issue in the user home tenant. See, [Common Conditional Access policy: Sign-in risk-based multifactor authentication](../conditional-access/howto-conditional-access-policy-risk.md). +You can use identity-protection risk policies. However, mitigate issues in the user home tenant. See, [Common Conditional Access policy: Sign-in risk-based multifactor authentication](../conditional-access/howto-conditional-access-policy-risk.md). -For network locations, you can restrict access to IP addresses ranges you own. Use this method if external partners access applications while at your location. See, [Conditional Access: Block access by location](../conditional-access/howto-conditional-access-policy-location.md) +For network locations, you can restrict access to IP addresses ranges that you own. Use this method if external partners access applications while at your location. See, [Conditional Access: Block access by location](../conditional-access/howto-conditional-access-policy-location.md) ## Document access review policies Document policies that dictate when to review resource access, and remove accoun * Internal business policies and processes * User behavior -Your policies will be customized, however consider the following parameters: +Generally, organizations customize policy, however consider the following parameters: * **Entitlement management access reviews**: * [Change lifecycle settings for an access package in entitlement management](../governance/entitlement-management-access-package-lifecycle-policy.md) * [Create an access review of an access package in entitlement management](../governance/entitlement-management-access-reviews-create.md) * [Add a connected organization in entitlement management](../governance/entitlement-management-organization.md): group users from a partner and schedule reviews-* **Microsoft 365 groups**: +* **Microsoft 365 groups** * [Microsoft 365 group expiration policy](/microsoft-365/solutions/microsoft-365-groups-expiration-policy?view=o365-worldwide&preserve-view=true) * **Options**: * If external users don't use access packages or Microsoft 365 groups, determine when accounts become inactive or deleted Your policies will be customized, however consider the following parameters: ## Access control methods -Some features, for example entitlement management, are available with an Azure AD Premium 2 (P2) license. Microsoft 365 E5 and Office 365 E5 licenses include Azure AD P2 licenses. 
--Other combinations of Microsoft 365, Office 365, and Azure AD have functionality to manage external users. See, [Microsoft 365 guidance for security & compliance](/office365/servicedescriptions/microsoft-365-service-descriptions/microsoft-365-tenantlevel-services-licensing-guidance/microsoft-365-security-compliance-licensing-guidance). +Some features, for example entitlement management, are available with an Azure AD Premium 2 (P2) license. Microsoft 365 E5 and Office 365 E5 licenses include Azure AD P2 licenses. Learn more in the following entitlement management section. > [!NOTE] > Licenses are for one user. Therefore users, administrators, and business owners can have delegated access control. This scenario can occur with Azure AD P2 or Microsoft 365 E5, and you don't have to enable licenses for all users. The first 50,000 external users are free. If you don't enable P2 licenses for other internal users, they can't use entitlement management. +Other combinations of Microsoft 365, Office 365, and Azure AD have functionality to manage external users. See, [Microsoft 365 guidance for security & compliance](/office365/servicedescriptions/microsoft-365-service-descriptions/microsoft-365-tenantlevel-services-licensing-guidance/microsoft-365-security-compliance-licensing-guidance). + ## Govern access with Azure AD P2 and Microsoft 365 or Office 365 E5 Azure AD P2 and Microsoft 365 E5 have all the security and governance tools. ### Provision, sign-in, review access, and deprovision access -Entries in bold are recommended. +Entries in bold are recommended actions. | Feature| Provision external users| Enforce sign-in requirements| Review access| Deprovision access | | - | - | - | - | - | Entries in bold are recommended. ### Resource access -Entries in bold are recommended. +Entries in bold are recommended actions. |Feature | App and resource access| SharePoint and OneDrive access| Teams access| Email and document security | | - |-|-|-|-| Entries in bold are recommended. ### Entitlement management  -Use entitlement management to provision and deprovision access to groups and teams, applications, and SharePoint sites. Define the connected organizations allowed access, self-service requests, and approval workflows. To ensure access ends correctly, define expiration policies and access reviews for packages. +Use entitlement management to provision and deprovision access to groups and teams, applications, and SharePoint sites. Define the connected organizations granted access, self-service requests, and approval workflows. To ensure access ends correctly, define expiration policies and access reviews for packages. Learn more: [Create a new access package in entitlement management](../governance/entitlement-management-access-package-create.md) Learn more: [Create a new access package in entitlement management](../governanc ### Provision, sign-in, review access, and deprovision access -Items in bold are recommended. +Items in bold are recommended actions. |Feature | Provision external users| Enforce sign-in requirements| Review access| Deprovision access | | - |-|-|-|-| Items in bold are recommended. 
* [Manage external access with entitlement management](6-secure-access-entitlement-managment.md) * [Secure access with Conditional Access policies](7-secure-access-conditional-access.md) * [Control access with sensitivity labels](8-secure-access-sensitivity-labels.md)-* [Secure external access to Microsoft Teams, SharePoint, and OneDrive for Business](9-secure-access-teams-sharepoint.md) +* [Secure external access to Microsoft Teams, SharePoint, and OneDrive for Business](9-secure-access-teams-sharepoint.md) |
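The row above discusses risk-based Conditional Access policies that trigger MFA for external users. As a hedged sketch only (the Microsoft Graph conditional access endpoint is real, but the policy name, chosen risk levels, report-only state, and token handling are assumptions, not content from the article), such a policy might be created like this:

```python
import json
import requests

# Assumes you already hold a Graph access token with Policy.ReadWrite.ConditionalAccess.
ACCESS_TOKEN = "<graph_access_token>"

# Report-only policy: require MFA for guest/external users when sign-in risk is medium or high.
policy = {
    "displayName": "External users - MFA on medium or high sign-in risk (sketch)",
    "state": "enabledForReportingButNotEnforced",
    "conditions": {
        "users": {"includeUsers": ["GuestsOrExternalUsers"]},
        "applications": {"includeApplications": ["All"]},
        "signInRiskLevels": ["medium", "high"],
    },
    "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
}

response = requests.post(
    "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies",
    headers={
        "Authorization": "Bearer " + ACCESS_TOKEN,
        "Content-Type": "application/json",
    },
    data=json.dumps(policy),
)
print(response.status_code, response.text)
```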
active-directory | 5 Secure Access B2b | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/5-secure-access-b2b.md | Title: Transition to governed collaboration with Azure Active Directory B2B Collaboration -description: Move to governed collaboration with Azure Ad B2B collaboration. + Title: Transition to governed collaboration with Azure Active Directory B2B collaboration +description: Move to governed collaboration with Azure Ad B2B collaboration by using controls, tools, and settings. -# Transition to governed collaboration with Azure Active Directory B2B collaboration +# Transition to governed collaboration with Azure Active Directory B2B collaboration -Understanding collaboration helps secure external access to your resources. We recommend you read the following articles, first: +For context and needed information we recommend you read the first four articles in the series of ten articles. * [Determine your security posture for external access](1-secure-access-posture.md) * [Discover the current state of external collaboration in your organization](2-secure-access-current-state.md) * [Create a security plan for external access](3-secure-access-plan.md) * [Securing external access with groups](4-secure-access-groups.md) -Use the information in this article to move external collaboration into Azure Active Directory B2B (Azure AD B2B) collaboration. +Understanding collaboration helps secure external access to your resources. Use the information in this article to move external collaboration into Azure Active Directory B2B (Azure AD B2B) collaboration. * See, [B2B collaboration overview](../external-identities/what-is-b2b.md)-* Learn about: [External Identities in Azure Active Directory](../external-identities/external-identities-overview.md) +* Learn about: [External Identities in Azure AD](../external-identities/external-identities-overview.md) ## Control collaboration -You can limit the organizations your users collaborate with (inbound and outbound), and who in your organization can invite guests. Most organizations permit business units to decide collaboration, and delegate approval and oversight. For example, organizations in government, education, and financial often don't permit open collaboration. You can use Azure AD features to control collaboration. +You can limit the organizations your users collaborate with (inbound and outbound), and who in your organization can invite guests. Most organizations permit business units to decide collaboration, and delegate approval and oversight. For example, organizations in government, education, and finance often don't permit open collaboration. You can use Azure AD features to control collaboration. -You can control access your tenant, by deploying one or more of the following solutions: +To control access your tenant, deploy one or more of the following solutions: -- **External Collaboration Settings** – Restrict the email domains that invitations got to-- **Cross Tenant Access Settings** – Control application access by guests by user, group, or tenant (inbound). Control external Azure AD tenant and application access for users (outbound)-- **Connected Organizations** – Determine what organizations can request Access Packages in Entitlement Management +- **External collaboration settings** – restrict the email domains that invitations go to +- **Cross tenant access settings** – control application access by guests by user, group, or tenant (inbound). 
Control external Azure AD tenant and application access for users (outbound). +- **Connected organizations** – determine what organizations can request access packages in Entitlement Management ### Determine collaboration partners -Document the organizations you collaborate with, and organization users' domains, if needed. Domain-based restrictions might be impractical. One collaboration partner can have multiple domains, and a partner can add domains. For example, a partner with multiple business units, with separate domains, and add more domains as they configure synchronization. +Document the organizations you collaborate with, and organization users' domains, if needed. Domain-based restrictions might be impractical. One collaboration partner can have multiple domains, and a partner can add domains. For example, a partner with multiple business units, with separate domains, can add more domains as they configure synchronization. -If your users use Azure AD B2B, you can discover the external Azure AD tenants they're collaborating, with via the sign-in logs, PowerShell, or a workbook. Learn more: +If your users use Azure AD B2B, you can discover the external Azure AD tenants they're collaborating with, with the sign-in logs, PowerShell, or a workbook. Learn more: * [Get MsIdCrossTenantAccessActivity](https://github.com/AzureAD/MSIdentityTools/wiki/Get-MSIDCrossTenantAccessActivity) * [Cross-tenant access activity workbook](../reports-monitoring/workbook-cross-tenant-access-activity.md) You can enable future collaboration with: -- External organizations (most inclusive)-- External organizations (but not denied organizations)-- Specific external organizations (most restrictive)+- **External organizations** - most inclusive +- **External organizations, but not denied organizations** +- **Specific external organizations** - most restrictive > [!NOTE] > If your collaboration settings are highly restrictive, your users might go outside the collaboration framework. We recommend you enable a broad collaboration that your security requirements allow. -Limits to one domain can prevent authorized collaboration with organizations that have other unrelated domains. For example, the initial point of contact with Contoso might be a US-based employee with email that has a .com domain. However if you allow only the com domain. you can omit Canadian employees who have the ca domain. +Limits to one domain can prevent authorized collaboration with organizations that have other, unrelated domains. For example, the initial point of contact with Contoso might be a US-based employee with email that has a `.com` domain. However if you allow only the `.com` domain, you can omit Canadian employees who have the `.ca` domain. -You can allow specific collaboration partners for a subset of users. For example, a university restricts student accounts from accessing external tenants, but allows faculty to collaborate with external organizations. +You can allow specific collaboration partners for a subset of users. For example, a university might restrict student accounts from accessing external tenants, but can allow faculty to collaborate with external organizations. -### Allowlist and blocklist with External Collaboration Settings +### Allowlist and blocklist with external collaboration settings -You can use an allowlist or blocklist to from specific organizations. You can use only an allow or a blocklist, not both. +You can use an allowlist or blocklist for organizations. 
You can use an allowlist, or a blocklist, not both. -* **Allowlist** - Limit collaboration to a list of domains. All other domains are on the blocklist. -* **Blocklist** - Allow collaboration with domains not on the blocklist +* **Allowlist** - limit collaboration to a list of domains. Other domains are on the blocklist. +* **Blocklist** - allow collaboration with domains not on the blocklist Learn more: [Allow or block invitations to B2B users from specific organizations](../external-identities/allow-deny-list.md) > [!IMPORTANT]-> These lists don't apply to users in your directory. By default, they don't apply to OneDrive for Business and SharePoint allowlist or blocklists. These lists are separate, but you can enable [SharePoint-OneDrive B2B integration](/sharepoint/sharepoint-azureb2b-integration). +> Allowlists and blocklists don't apply to users in your directory. By default, they don't apply to OneDrive for Business and SharePoint allowlist or blocklists; these lists are separate. However, you can enable [SharePoint-OneDrive B2B integration](/sharepoint/sharepoint-azureb2b-integration). -Some organizations have a blocklist of bad-actor domains from a managed security provider. For example, if the organization does business with Contoso and uses a com domain, an unrelated organization can use the org domain, and attempt a phishing attack. +Some organizations have a blocklist of bad-actor domains from a managed security provider. For example, if the organization does business with Contoso and uses a `.com` domain, an unrelated organization can use the `.org` domain, and attempt a phishing attack. -### Cross Tenant Access Settings +### Cross tenant access settings -You can control inbound and outbound access using Cross Tenant Access Settings. In addition, you can trust multi-factor authentication (MFA), a compliant device, and hybrid Azure Active Directory joined device (HAADJ) claims from external Azure AD tenants. When you configure an organizational policy, it applies to the Azure AD tenant and covers users in that tenant, regardless of domain suffix. +You can control inbound and outbound access using cross tenant access settings. In addition, you can trust multi-factor authentication (MFA), a compliant device, and hybrid Azure Active Directory joined device (HAAJD) claims from external Azure AD tenants. When you configure an organizational policy, it applies to the Azure AD tenant and applies to users in that tenant, regardless of domain suffix. -You can enable collaboration across Microsoft clouds such as Microsoft Azure operated by 21Vianet (Azure China) or Microsoft Azure Government. Determine if your collaboration partners reside in a different Microsoft cloud. Learn more: [Configure Microsoft cloud settings for B2B collaboration (Preview)](../external-identities/cross-cloud-settings.md). +You can enable collaboration across Microsoft clouds, such as Microsoft Azure operated by 21Vianet (Azure China) or Azure Government. Determine if your collaboration partners reside in a different Microsoft cloud. -You can allow inbound access to specific tenants (allowlist), and set the default policy to block access. You then create organizational policies that allow access by user, group, or application. +Learn more: -You can block access to tenants (blocklist). Set the default policy to Allow and then create organizational policies that block access to some tenants. 
+* [Microsoft Azure operated by 21Vianet](/azure/china/overview-operations) +* [Azure Government developer guide](/azure-government/documentation-government-developer-guide) +* [Configure Microsoft cloud settings for B2B collaboration (Preview)](../external-identities/cross-cloud-settings.md). ++You can allow inbound access to specific tenants (allowlist), and set the default policy to block access. Then, create organizational policies that allow access by user, group, or application. ++You can block access to tenants (blocklist). Set the default policy to **Allow** and then create organizational policies that block access to some tenants. > [!NOTE]-> Cross Tenant Access Settings Inbound Access does not prevent invitations from being sent or redeemed. However, it does control applications access and whether a token is issued to the guest user. If the guest can redeem an invitation, policy blocks application access. +> Cross tenant access settings, inbound access does not prevent users from sending invitations, nor prevent them from being redeemed. However, it does control application access and whether a token is issued to the guest user. If the guest can redeem an invitation, policy blocks application access. To control external organizations users access, configure outbound access policies similarly to inbound access: allowlist and blocklist. Configure default and organization-specific policies. Learn more: [Configure cross-tenant access settings for B2B collaboration](../external-identities/cross-tenant-access-settings-b2b-collaboration.md) > [!NOTE]-> Cross Tenant Access Settings apply to Azure AD tenants. To control access for partners not using Azure AD, use External Collaboration Settings. +> Cross tenant access settings apply to Azure AD tenants. To control access for partners not using Azure AD, use external collaboration settings. -### Entitlement Management and Connected Organizations +### Entitlement management and connected organizations -Use Entitlement Management to ensure automatic guest-lifecycle governance. Create Access Packages and publish them to external users or to Connected Organizations, which support Azure AD tenants and other domains. When you create an Access Package restrict access to specific Connected Organizations. +Use entitlement management to ensure automatic guest-lifecycle governance. Create access packages and publish them to external users or to connected organizations, which support Azure AD tenants and other domains. When you create an access package, restrict access to connected organizations. Learn more: [What is entitlement management?](../governance/entitlement-management-overview.md) Learn more: [What is entitlement management?](../governance/entitlement-manageme To begin collaboration, invite or enable a partner to access resources. Users gain access by: -* [Azure Active Directory B2B collaboration invitation redemption](../external-identities/redemption-experience.md) +* [Azure AD B2B collaboration invitation redemption](../external-identities/redemption-experience.md) * [Self-service sign-up](../external-identities/self-service-sign-up-overview.md) * [Requesting access to an access package in entitlement management](../governance/entitlement-management-request-access.md) -When you enable Azure AD B2B, you can invite guest users with links and email invitations. Self service sign-up, and publishing Access Packages to the My Access portal, require more configuration. 
+When you enable Azure AD B2B, you can invite guest users with links and email invitations. Self-service sign-up, and publishing access packages to the My Access portal, require more configuration. > [!NOTE]-> Self service sign-up enforces no allowlist or blocklist in External Collaboration Settings. Use Cross Tenant Access Settings. You can integrate allowlists and blocklists with self service sign-up using custom API connectors. See, [Add an API connector to a user flow](../external-identities/self-service-sign-up-add-api-connector.md). +> Self-service sign-up enforces no allowlist or blocklist in external collaboration settings. Instead, use cross tenant access settings. You can integrate allowlists and blocklists with self-service sign-up using custom API connectors. See, [Add an API connector to a user flow](../external-identities/self-service-sign-up-add-api-connector.md). ### Guest user invitations Determine who can invite guest users to access resources. * Most restrictive: Allow only administrators and users with the Guest Inviter role * See, [Configure external collaboration settings](../external-identities/external-collaboration-settings-configure.md)-* If security requirements permit, allow all UserType of Member to invite guests -* Determine if UserType of Guest, the default Azure AD B2B user account, can invite guests +* If security requirements permit, allow all Member UserType to invite guests +* Determine if Guest UserType can invite guests + * Guest is the default Azure AD B2B user account  -### External users information +### External user information Use Azure AD entitlement management to configure questions that external users answer. The questions appear to approvers to help them make a decision. You can configure sets of questions for each access package policy, so approvers have relevant information for access they approve. For example, ask vendors for their vendor contract number. Learn more: ### Troubleshoot invitation redemption to Azure AD users -Invited guest users from a collaboration partner can have trouble redeeming an invitation. +Invited guest users from a collaboration partner can have trouble redeeming an invitation. See the following list for mitigations. * User domain isn't on an allowlist * The partner’s home tenant restrictions prevent external collaboration-* The user isn't in partner Azure AD tenant. For example, users at contoso.com are in Active Directory. - * They can redeem invitations with the email one-time password (OTP). +* The user isn't in the partner Azure AD tenant. For example, users at contoso.com are in Active Directory. + * They can redeem invitations with the email one-time password (OTP) * See, [Azure Active Directory B2B collaboration invitation redemption](../external-identities/redemption-experience.md) -## External users access +## External user access ++Generally, there are resources you can share with external users, and some you can't. You can control what external users access. -Generally, there are resources you can share with external users, and some you can't. You can control what external users access. See, [Manage external access with Entitlement Management](6-secure-access-entitlement-managment.md). +Learn more: [Manage external access with Entitlement Management](6-secure-access-entitlement-managment.md) By default, guest users see information and attributes about tenant members and other partners, including group memberships. Consider limiting external user access to this information. 
-  +  -We recommend the following guest-user restrictions. +We recommend the following guest-user restrictions: * Limit guest access to browsing groups and other properties in the directory- * Use the external collaboration settings to restrict guests from reading groups they aren't members of + * Use external collaboration settings to restrict guests from reading groups they aren't members of * Block access to employee-only apps * Create a Conditional Access policy to block access to Azure AD-integrated applications for non-guest users * Block access to the Azure portal * You can make needed exceptions - * Create a Conditional Access policy with All guest and external users. Implement a policy to block access. + * Create a Conditional Access policy with all guest and external users. Implement a policy to block access. Learn more: [Conditional Access: Cloud apps, actions, and authentication context](../conditional-access/concept-conditional-access-cloud-apps.md) Learn more: [Use Azure AD Identity Governance to review and remove external user Some organizations add external users as members (vendors, partners, and contractors). Assign an attribute, or username: -* Vendors: **v-** -* Partners: **p-** -* Contractors: **c-** +* **Vendors** - v-alias@contoso.com +* **Partners** - p-alias@contoso.com +* **Contractors** - c-alias@contoso.com -Evaluate external users with member accounts to determine access. You might have guest users not invited through Entitlement Management or Azure AD B2B +Evaluate external users with member accounts to determine access. You might have guest users not invited through entitlement management or Azure AD B2B. To find these users: * [Use Azure AD Identity Governance to review and remove external users who no longer have resource access](../governance/access-reviews-external-users.md) * Use a sample PowerShell script on [access-reviews-samples/ExternalIdentityUse/](https://github.com/microsoft/access-reviews-samples/tree/master/ExternalIdentityUse) -## Transition current external users to B2B +## Transition current external users to Azure AD B2B If you don't use Azure AD B2B, you likely have non-employee users in your tenant. We recommend you transition these accounts to Azure AD B2B external user accounts and then change their UserType to Guest. Use Azure AD and Microsoft 365 to handle external users. Include or exclude: * Guest users in Conditional Access policies-* Guest users in Access Packages and Access Reviews -* External access to Teams, SharePoint, and other resources +* Guest users in access packages and access reviews +* External access to Microsoft Teams, SharePoint, and other resources -You can transition these internal users while maintaining current access, UPN, and group memberships. See [Invite external users to B2B collaboration](../external-identities/invite-internal-users.md). +You can transition these internal users while maintaining current access, user principal name (UPN), and group memberships. ++Lear more: [Invite external users to B2B collaboration](../external-identities/invite-internal-users.md) ## Decommission collaboration methods By default, Teams allows external access. The organization can communicate with ### Sharing through SharePoint and OneDrive -Sharing through SharePoint and OneDrive adds users not in the Entitlement Management process. +Sharing through SharePoint and OneDrive adds users not in the entitlement management process. 
* [Secure external access to Microsoft Teams, SharePoint, and OneDrive for Business](9-secure-access-teams-sharepoint.md) * [Block OneDrive use from Office](/office365/troubleshoot/group-policy/block-onedrive-use-from-office) -### Documents in email +### Emailed documents and sensitivity labels ++Users send documents to external users by email. You can use sensitivity labels to restrict and encrypt access to documents. -Users send documents to external users by email. You can use sensitivity labels to restrict and encrypt access to documents. See, [Learn about sensitivity labels](/microsoft-365/compliance/sensitivity-labels?view=o365-worldwide&preserve-view=true). +See, [Learn about sensitivity labels](/microsoft-365/compliance/sensitivity-labels?view=o365-worldwide&preserve-view=true). ### Unsanctioned collaboration tools -Your users likely use Google Docs, DropBox, Slack, or Zoom. You can block use of these tools from a corporate network, at the firewall level, and with mobile application management for organization-managed devices. However, this action blocks sanctioned instances and doesn't block access from unmanaged devices. Block tools you don’t want, and create policies for no unsanctioned usage. +Some users likely use Google Docs, DropBox, Slack, or Zoom. You can block use of these tools from a corporate network, at the firewall level, and with mobile application management for organization-managed devices. However, this action blocks sanctioned instances and doesn't block access from unmanaged devices. Block tools you don’t want, and create policies for no unsanctioned usage. For more information on governing applications, see: |
active-directory | Road To The Cloud Posture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/road-to-the-cloud-posture.md | In this state: * A plan to move apps that depend on Active Directory and are part of the vision for the future-state Azure AD environment is being executed. A plan to replace services that won't move (file, print, or fax services) is in place. -* On-premises workloads have been replaced with cloud alternatives such as Windows Virtual Desktop, Azure Files, or Google Cloud Print. Azure SQL Managed Instance replaces SQL Server. +* On-premises workloads have been replaced with cloud alternatives such as Windows Virtual Desktop, Azure Files, or Universal Print. Azure SQL Managed Instance replaces SQL Server. ### State 5: 100% cloud |
active-directory | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new.md | Azure AD receives improvements on an ongoing basis. To stay up to date with the - Deprecated functionality - Plans for changes -This page is updated monthly, so revisit it regularly. If you're looking for items older than six months, you can find them in [Archive for What's new in Azure Active Directory](whats-new-archive.md). +This page updates monthly, so revisit it regularly. If you're looking for items older than six months, you can find them in [Archive for What's new in Azure Active Directory](whats-new-archive.md). ## January 2023 Cross-tenant synchronization allows you to set up a scalable and automated solut -### Public Preview - Devices Blade Self-Help Capability for Pending Devices +### Public Preview - Devices option Self-Help Capability for Pending Devices Cross-tenant synchronization allows you to set up a scalable and automated solut **Service category:** Device Access Management **Product capability:** End User Experiences -In the **All Devices** blade under the registered column, you can now select any pending devices you have, and it will open a context pane to help troubleshoot why the device may be pending. You can also offer feedback on if the summarized information is helpful or not. For more information, see: [Pending devices in Azure Active Directory](/troubleshoot/azure/active-directory/pending-devices). +In the **All Devices** options under the registered column, you can now select any pending devices you have, and it opens a context pane to help troubleshoot why the device may be pending. You can also offer feedback on if the summarized information is helpful or not. For more information, see: [Pending devices in Azure Active Directory](/troubleshoot/azure/active-directory/pending-devices). In the **All Devices** blade under the registered column, you can now select any **Service category:** Identity Protection **Product capability:** Identity Security & Protection -In the January 2023 release of Authenticator for iOS, there will be no companion app for watchOS due to it being incompatible with Authenticator security features. This means you won't be able to install or use Authenticator on Apple Watch. This change only impacts Apple Watch, so you'll still be able to use Authenticator on your other devices. For more information, see: [Common questions about the Microsoft Authenticator app](https://support.microsoft.com/account-billing/common-questions-about-the-microsoft-authenticator-app-12d283d1-bcef-4875-9ae5-ac360e2945dd). +In the January 2023 release of Authenticator for iOS, there's no companion app for watchOS due to it being incompatible with Authenticator security features, meaning you won't be able to install or use Authenticator on Apple Watch. This change only impacts Apple Watch, so you can still use Authenticator on your other devices. For more information, see: [Common questions about the Microsoft Authenticator app](https://support.microsoft.com/account-billing/common-questions-about-the-microsoft-authenticator-app-12d283d1-bcef-4875-9ae5-ac360e2945dd). In January 2023 we've added the following 10 new applications in our App gallery You can also find the documentation of all the applications from here https://aka.ms/AppsTutorial. 
-For listing your application in the Azure AD app gallery, please read the details here https://aka.ms/AzureADAppRequest +For listing your application in the Azure AD app gallery, read the details here https://aka.ms/AzureADAppRequest For more information about how to better secure your organization by using autom **Service category:** Azure AD Connect Cloud Sync **Product capability:** Identity Governance -Try out the new guided experience for syncing objects from AD to Azure AD using Azure AD Cloud Sync in Azure Portal. With this new experience, Hybrid Identity Administrators can easily determine which sync engine to use for their scenarios and learn more about the various options they have with our sync solutions. With a rich set of tutorials and videos, customers will be able to learn everything about Azure AD cloud sync in one single place. +Try out the new guided experience for syncing objects from AD to Azure AD using Azure AD Cloud Sync in Azure portal. With this new experience, Hybrid Identity Administrators can easily determine which sync engine to use for their scenarios and learn more about the various options they have with our sync solutions. With a rich set of tutorials and videos, customers are able to learn everything about Azure AD cloud sync in one single place. -This experience will also help administrators walk through the different steps involved in setting up a cloud sync configuration as well as an intuitive experience to help them easily manage it. Admins can also get insights into their sync configuration by using the "Insights" option which is integrated with Azure Monitor and Workbooks. +This experience helps administrators walk through the different steps involved in setting up a cloud sync configuration and an intuitive experience to help them easily manage it. Admins can also get insights into their sync configuration by using the "Insights" option, which integrates with Azure Monitor and Workbooks. -For more information:, see: +For more information, see: - [Create a new configuration for Azure AD Connect cloud sync](../cloud-sync/how-to-configure.md) - [Attribute mapping in Azure AD Connect cloud sync](../cloud-sync/how-to-attribute-mapping.md) For more information:, see: **Type:** New feature **Service category:** Provisioning -**Product capability:** AAD Connect Cloud Sync +**Product capability:** Azure AD Connect Cloud Sync -Hybrid IT Admins now can sync both Active Directory and Azure AD Directory Extensions using Azure AD Cloud Sync. This new capability adds the ability to dynamically discover the schema for both Active Directory and Azure AD, allowing customers to simply map the needed attributes using Cloud Sync's attribute mapping experience. +Hybrid IT Admins now can sync both Active Directory and Azure AD Directory Extensions using Azure AD Cloud Sync. This new capability adds the ability to dynamically discover the schema for both Active Directory and Azure AD, allowing customers to map the needed attributes using Cloud Sync's attribute mapping experience. 
-For more details on how to enable this feature, see: [Cloud Sync directory extensions and custom attribute mapping](../cloud-sync/custom-attribute-mapping.md) +For more information on how to enable this feature, see: [Cloud Sync directory extensions and custom attribute mapping](../cloud-sync/custom-attribute-mapping.md) This feature analyzes uploaded client-side logs, also known as diagnostic logs, -### General Availability - Multiple Password-less Phone Sign-in for iOS Devices +### General Availability - Multiple Password-less Phone Sign-ins for iOS Devices This feature analyzes uploaded client-side logs, also known as diagnostic logs, **Service category:** Authentications (Logins) **Product capability:** User Authentication -End users can now enable password-less phone sign-in for multiple accounts in the Authenticator App on any supported iOS device. Consultants, students, and others with multiple accounts in Azure AD can add each account to Microsoft Authenticator and use password-less phone sign-in for all of them from the same iOS device. The Azure AD accounts can be in the same tenant or different tenants. Guest accounts are not supported for multiple account sign-in from one device. +End users can now enable password-less phone sign-in for multiple accounts in the Authenticator App on any supported iOS device. Consultants, students, and others with multiple accounts in Azure AD can add each account to Microsoft Authenticator and use password-less phone sign-in for all of them from the same iOS device. The Azure AD accounts can be in the same tenant or different tenants. Guest accounts aren't supported for multiple account sign-ins from one device. -End users are not required to enable the optional telemetry setting in the Authenticator App. For more information, see: [Enable passwordless sign-in with Microsoft Authenticator](../authentication/howto-authentication-passwordless-phone.md). +End users aren't required to enable the optional telemetry setting in the Authenticator App. For more information, see: [Enable passwordless sign-in with Microsoft Authenticator](../authentication/howto-authentication-passwordless-phone.md). End users are not required to enable the optional telemetry setting in the Authe Conditional Access templates provide a convenient method to deploy new policies aligned with Microsoft recommendations. In total, there are 14 Conditional Access policy templates, filtered by five different scenarios; secure foundation, zero trust, remote work, protect administrators, and emerging threats. -In this Public Preview refresh, we have enhanced the user experience with an updated design and added four new improvements: +In this Public Preview refresh, we've enhanced the user experience with an updated design and added four new improvements: - Admins can create a Conditional Access policy by importing a JSON file. - Admins can duplicate existing policy. For more information, see: [Conditional Access templates (Preview)](../condition **Service category:** User Access Management **Product capability:** User Management -The ability for users to create tenants from the Manage Tenant overview has been present in Azure AD since almost the beginning of the Azure portal. This new capability in the User Settings blade allows admins to restrict their users from being able to create new tenants. There is also a new [Tenant Creator](../roles/permissions-reference.md#tenant-creator) role to allow specific users to create tenants. 
For more information, see [Default user permissions](../fundamentals/users-default-permissions.md#restrict-member-users-default-permissions). +The ability for users to create tenants from the Manage Tenant overview has been present in Azure AD since almost the beginning of the Azure portal. This new capability in the User Settings option allows admins to restrict their users from being able to create new tenants. There's also a new [Tenant Creator](../roles/permissions-reference.md#tenant-creator) role to allow specific users to create tenants. For more information, see [Default user permissions](../fundamentals/users-default-permissions.md#restrict-member-users-default-permissions). The ability for users to create tenants from the Manage Tenant overview has been **Service category:** My Apps **Product capability:** End User Experiences -We have consolidated relevant app launcher settings in a new App launchers section in the Azure and Entra portals. The entry point can be found under Enterprise applications, where Collections used to be. You can find the Collections blade by selecting App launchers. In addition, we have added a new App launchers Settings blade. This blade has some settings you may already be familiar with like the Microsoft 365 settings. The new Settings blade also has controls for previews. As an admin, you can choose to try out new app launcher features while they are in preview. Enabling a preview feature means that the feature is turned on for your organization, and will be reflected in the My Apps portal and other app launchers for all of your users. To learn more about the preview settings, see: [End-user experiences for applications](../manage-apps/end-user-experiences.md). +We have consolidated relevant app launcher settings in a new App launchers section in the Azure and Entra portals. The entry point can be found under Enterprise applications, where Collections used to be. You can find the Collections option by selecting App launchers. In addition, we've added a new App launchers Settings option. This option has some settings you may already be familiar with like the Microsoft 365 settings. The new Settings options also have controls for previews. As an admin, you can choose to try out new app launcher features while they are in preview. Enabling a preview feature means that the feature turns on for your organization. This enabled feature reflects in the My Apps portal, and other app launchers for all of your users. To learn more about the preview settings, see: [End-user experiences for applications](../manage-apps/end-user-experiences.md). We have consolidated relevant app launcher settings in a new App launchers secti **Service category:** MFA **Product capability:** User Authentication -The Converged Authentication Methods Policy enables you to manage all authentication methods used for MFA and SSPR in one policy, migrate off the legacy MFA and SSPR policies, and target authentication methods to groups of users instead of enabling them for all users in the tenant. For more information, see: [Manage authentication methods for Azure AD](../authentication/concept-authentication-methods-manage.md). +The Converged Authentication Methods Policy enables you to manage all authentication methods used for MFA and SSPR in one policy. You can migrate off the legacy MFA and SSPR policies, and target authentication methods to groups of users instead of enabling them for all users in the tenant. 
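If you manage these settings programmatically, the converged policy is also exposed through the Microsoft Graph authentication methods policy. The following read-only sketch lists each method configuration and its state; the token value is a placeholder.

```python
# Minimal sketch: read the converged Authentication Methods Policy via Microsoft Graph
# and print the state of each method configuration. The token is a placeholder.
import requests

TOKEN = "<access token with Policy.Read.All>"   # acquire with MSAL, for example

resp = requests.get(
    "https://graph.microsoft.com/v1.0/policies/authenticationMethodsPolicy",
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()
for config in resp.json()["authenticationMethodConfigurations"]:
    print(config["id"], config["state"])   # e.g. MicrosoftAuthenticator enabled
```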
For more information, see: [Manage authentication methods for Azure AD](../authentication/concept-authentication-methods-manage.md). The Converged Authentication Methods Policy enables you to manage all authentica **Service category:** Directory Management **Product capability:** AuthZ/Access Delegation -You can now use administrative units to delegate management of specified devices in your tenant by adding devices to an administrative unit, and assigning built-in and custom device management roles scoped to that administrative unit. For more information, see: [Device management](../roles/administrative-units.md#device-management). +You can now use administrative units to delegate management of specified devices in your tenant by adding devices to an administrative unit. You are also able to assign built-in, and custom device management roles, scoped to that administrative unit. For more information, see: [Device management](../roles/administrative-units.md#device-management). -### Public Preview - Frontline workers using shared devices can now use Edge and Yammer apps on Android +### Public Preview - Frontline workers using shared devices can now use Microsoft Edge and Yammer apps on Android You can now use administrative units to delegate management of specified devices **Service category:** N/A **Product capability:** SSO -Companies often provide mobile devices to frontline workers that need to be shared between shifts. Microsoft’s shared device mode allows frontline workers to easily authenticate by automatically signing users in and out of all the apps that have enabled this feature. In addition to Microsoft Teams and Managed Home Screen being generally available, we are excited to announce that Edge and Yammer apps on Android are now in Public Preview. +Companies often provide mobile devices to frontline workers that need are shared between shifts. Microsoft’s shared device mode allows frontline workers to easily authenticate by automatically signing users in and out of all the apps that have enabled this feature. In addition to Microsoft Teams and Managed Home Screen being generally available, we're excited to announce that Microsoft Edge and Yammer apps on Android are now in Public Preview. -For further guidance on deploying frontline solutions, see: [frontline deployment documentation](https://aka.ms/frontlinewhitepaper). +For more information on deploying frontline solutions, see: [frontline deployment documentation](https://aka.ms/frontlinewhitepaper). For more information on shared-device mode, see: [Azure Active Directory Shared Device Mode documentation](../develop/msal-android-shared-devices.md#microsoft-applications-that-support-shared-device-mode). -For steps to setup shared device mode with Intune, see: [Intune setup blog](https://techcommunity.microsoft.com/t5/intune-customer-success/enroll-android-enterprise-dedicated-devices-into-azure-ad-shared/ba-p/1820093). +For steps to set up shared device mode with Intune, see: [Intune setup blog](https://techcommunity.microsoft.com/t5/intune-customer-success/enroll-android-enterprise-dedicated-devices-into-azure-ad-shared/ba-p/1820093). 
Azure AD supports provisioning users into applications hosted on-premises or in **Service category:** Enterprise Apps **Product capability:** 3rd Party Integration -In December 2022 we have added the following 44 new applications in our App gallery with Federation support +In December 2022 we've added the following 44 new applications in our App gallery with Federation support: [Bionexo IDM](https://login.bionexo.com/), [SMART Meeting Pro](https://www.smarttech.com/en/business/software/meeting-pro), [Venafi Control Plane – Datacenter](../saas-apps/venafi-control-plane-tutorial.md), [HighQ](../saas-apps/highq-tutorial.md), [Drawboard PDF](https://pdf.drawboard.com/), [ETU Skillsims](../saas-apps/etu-skillsims-tutorial.md), [TencentCloud IDaaS](../saas-apps/tencent-cloud-idaas-tutorial.md), [TeamHeadquarters Email Agent OAuth](https://thq.entry.com/), [Verizon MDM](https://verizonmdm.vzw.com/), [QRadar SOAR](../saas-apps/qradar-soar-tutorial.md), [Tripwire Enterprise](../saas-apps/tripwire-enterprise-tutorial.md), [Cisco Unified Communications Manager](../saas-apps/cisco-unified-communications-manager-tutorial.md), [Howspace](https://login.in.howspace.com/), [Flipsnack SAML](../saas-apps/flipsnack-saml-tutorial.md), [Albert](http://www.albertinvent.com/), [Altinget.no](https://www.altinget.no/), [Coveo Hosted Services](../saas-apps/coveo-hosted-services-tutorial.md), [Cybozu(cybozu.com)](../saas-apps/cybozu-tutorial.md), [BombBomb](https://app.bombbomb.com/app), [VMware Identity Service](../saas-apps/vmware-identity-service-tutorial.md), [Cimmaron Exchange Sync - Delegated](https://cimmaronsoftware.com/Mortgage-CRM-Exchange-Sync.aspx), [HexaSync](https://app-az.hexasync.com/login), [Trifecta Teams](https://app.trifectateams.net/), [VerosoftDesign](https://verosoft-design.vercel.app/), [Mazepay](https://app.mazepay.com/), [Wistia](../saas-apps/wistia-tutorial.md), [Begin.AI](https://app.begin.ai/), [WebCE](../saas-apps/webce-tutorial.md), [Dream Broker Studio](https://dreambroker.com/studio/login/), [PKSHA Chatbot](../saas-apps/pksha-chatbot-tutorial.md), [PGM-BCP](https://ups-pgm-bcp.4gfactor.com/azure/), [ChartDesk SSO](../saas-apps/chartdesk-sso-tutorial.md), [Elsevier SP](../saas-apps/elsevier-sp-tutorial.md), [GreenCommerce IdentityServer](https://identity.jem-id.nl/Account/Login), [Fullview](https://app.fullview.io/sign-in), [Aqua Platform](../saas-apps/aqua-platform-tutorial.md), [SpedTrack](../saas-apps/spedtrack-tutorial.md), [Pinpoint](https://pinpoint.ddiworld.com/psg2?sso=true), [Darzin Outlook Add-in](https://outlook.darzin.com/graph-login.html), [Simply Stakeholders Outlook Add-in](https://outlook.simplystakeholders.com/graph-login.html), [tesma](../saas-apps/tesma-tutorial.md), [Parkable](../saas-apps/parkable-tutorial.md), [Unite Us](../saas-apps/unite-us-tutorial.md) You can also find the documentation of all the applications from here https://aka.ms/AppsTutorial, -For listing your application in the Azure AD app gallery, please read the details here https://aka.ms/AzureADAppRequest +For listing your application in the Azure AD app gallery, read the details here https://aka.ms/AzureADAppRequest For listing your application in the Azure AD app gallery, please read the detail **Service category:** Other **Product capability:** Developer Experience -As part of our ongoing initiative to improve the developer experience, service reliability, and security of customer applications, we will end support for the Azure Active Directory Authentication Library (ADAL). 
The final deadline to migrate your applications to Microsoft Authentication Library (MSAL) has been extended to **June 30, 2023**. +As part of our ongoing initiative to improve the developer experience, service reliability, and security of customer applications, we'll end support for the Microsoft Authentication Library (ADAL). The final deadline to migrate your applications to Microsoft Authentication Library (MSAL) has been extended to **June 30, 2023**. ### Why are we doing this? -As we consolidate and evolve the Microsoft Identity platform, we are also investing in making significant improvements to the developer experience and service features that make it possible to build secure, robust and resilient applications. To make these features available to our customers we needed to update the architecture of our software development kits. As a result of this change, we’ve decided that the path forward requires us to sunset ADAL so that we can focus on developer experience investments with MSAL. +As we consolidate and evolve the Microsoft Identity platform, we're also investing in making significant improvements to the developer experience and service features that make it possible to build secure, robust and resilient applications. To make these features available to our customers, we needed to update the architecture of our software development kits. As a result of this change, we’ve decided that the path forward requires us to sunset Azure Active Directory Authentication Library. This allows us to focus on developer experience investments with Microsoft Authentication Library. ### What happens? -We recognize that changing libraries is not an easy task, and cannot be accomplished quickly. We are committed to helping customers plan their migrations to MSAL as well as execute them with minimal disruption. +We recognize that changing libraries isn't an easy task, and can't be accomplished quickly. We're committed to helping customers plan their migrations to Microsoft Authentication Library and execute them with minimal disruption. -- In June 2020 we [announced the 2-year end of support timeline for ADAL](https://devblogs.microsoft.com/microsoft365dev/end-of-support-timelines-for-azure-ad-authentication-library-adal-and-azure-ad-graph/). -- In December 2022 we’ve decided to extend the ADAL end of support to June 2023. -- Through the next six months (January 2023 – June 2023) we will continue informing customers about the upcoming end of support along with providing guidance on migration. -- On June 2023 we will officially sunset ADAL, removing library documentation and archiving all GitHub repositories related to the project. +- In June 2020, we [announced the 2-year end of support timeline for ADAL](https://devblogs.microsoft.com/microsoft365dev/end-of-support-timelines-for-azure-ad-authentication-library-adal-and-azure-ad-graph/). +- In December 2022, we’ve decided to extend the Azure Active Directory Authentication Library end of support to June 2023. +- Through the next six months (January 2023 – June 2023) we continue informing customers about the upcoming end of support along with providing guidance on migration. +- On June 2023 we'll officially sunset Azure Active Directory Authentication Library, removing library documentation and archiving all GitHub repositories related to the project. -### How to find out which applications in my tenant are using ADAL? +### How to find out which applications in my tenant are using Azure Active Directory Authentication Library? 
-Refer to our post on [Microsoft Q&A](/answers/questions/360928/information-how-to-find-apps-using-adal-in-your-te.html) for details on identifying ADAL apps with the help of [Azure Workbooks](../../azure-monitor/visualize/workbooks-overview.md). -### If I’m using ADAL, what can I expect after the deadline? +Refer to our post on [Microsoft Q&A](/answers/questions/360928/information-how-to-find-apps-using-adal-in-your-te.html) for details on identifying Azure Active Directory Authentication Library apps with the help of [Azure Workbooks](../../azure-monitor/visualize/workbooks-overview.md). +### If I’m using Azure Active Directory Authentication Library, what can I expect after the deadline? - There will be no new releases (security or otherwise) to the library after June 2023. -- We will not be accepting any incident reports or support requests for ADAL. ADAL to MSAL migration support would continue. -- The underpinning services will continue working and applications that depend on ADAL should continue working; however, applications and the resources they access will be at increased security and reliability risk due to not having the latest updates, service configuration, and enhancements made available through the Microsoft Identity platform. +- We won't accept any incident reports or support requests for Azure Active Directory Authentication Library. Azure Active Directory Authentication Library to Microsoft Authentication Library migration support would continue. +- The underpinning services continue working and applications that depend on Azure Active Directory Authentication Library should continue working. Applications, and the resources they access, are at increased security and reliability risk due to not having the latest updates, service configuration, and enhancements made available through the Microsoft Identity platform. -### What features can I only access with MSAL? +### What features can I only access with Microsoft Authentication Library? -The number of features and capabilities that we are adding to MSAL libraries are growing weekly. Some of them include: +The number of features and capabilities that we're adding to Microsoft Authentication Library libraries are growing weekly. Some of them include: - Support for Microsoft accounts (MSA) - Support for Azure AD B2C accounts - Handling throttling And more. For an up-to-date list, refer to our [migration guide](../develop/msal ### How to migrate? -To make the migration process easier we published a [comprehensive guide](../develop/msal-migration.md#how-to-migrate-to-msal) that documents the migration paths across different platforms and programming languages. +To make the migration process easier, we published a [comprehensive guide](../develop/msal-migration.md#how-to-migrate-to-msal) that documents the migration paths across different platforms and programming languages. -In addition to the ADAL to MSAL update, we recommend migrating from Azure AD Graph API to Microsoft Graph. This change will enable you to take advantage of the latest additions and enhancements, such as CAE, across the Microsoft service offering through a single, unified endpoint. You can read more in our [Migrate your apps from Azure AD Graph to Microsoft Graph](/graph/migrate-azure-ad-graph-overview) guide. 
Any questions can be posted to [Microsoft Q&A](/answers/topics/azure-active-directory.html) or [Stack Overflow](https://stackoverflow.com/questions/tagged/msal) +In addition to the Azure Active Directory Authentication Library to Microsoft Authentication Library update, we recommend migrating from Azure AD Graph API to Microsoft Graph. This change enables you to take advantage of the latest additions and enhancements, such as CAE, across the Microsoft service offering through a single, unified endpoint. You can read more in our [Migrate your apps from Azure AD Graph to Microsoft Graph](/graph/migrate-azure-ad-graph-overview) guide. You can post any questions to [Microsoft Q&A](/answers/topics/azure-active-directory.html) or [Stack Overflow](https://stackoverflow.com/questions/tagged/msal). In addition to the ADAL to MSAL update, we recommend migrating from Azure AD Gra **Service category:** N/A **Product capability:** User Authentication -For users who don't know or use a password, the Temporary Access Pass can now be used to recover Azure AD-joined PCs when the EnableWebSignIn policy is enabled on the device. For more information, see: [Authentication/EnableWebSignIn](/windows/client-management/mdm/policy-csp-authentication#authentication-enablewebsignin). +The Temporary Access Pass can now be used to recover Azure AD-joined PCs when the EnableWebSignIn policy is enabled on the device. This is useful for when your users do not know, or have, a password. For more information, see: [Authentication/EnableWebSignIn](/windows/client-management/mdm/policy-csp-authentication#authentication-enablewebsignin). In November 2022, we've added the following 22 new applications in our App galle You can also find the documentation of all the applications from here https://aka.ms/AppsTutorial, -For listing your application in the Azure AD app gallery, please read the details here https://aka.ms/AzureADAppRequest +For listing your application in the Azure AD app gallery, read the details here https://aka.ms/AzureADAppRequest Admins can now pause, and resume, the processing of individual dynamic groups in **Service category:** Authentications (Logins) **Product capability:** User Authentication -Update the Azure AD and Microsoft 365 sign in experience with new company branding capabilities. You can apply your company’s brand guidance to authentication experiences with pre-defined templates. For more information, see: [Configure your company branding](../fundamentals/customize-branding.md). +Update the Azure AD and Microsoft 365 sign-in experience with new company branding capabilities. You can apply your company’s brand guidance to authentication experiences with pre-defined templates. For more information, see: [Configure your company branding](../fundamentals/customize-branding.md). Update the Azure AD and Microsoft 365 sign in experience with new company brandi **Service category:** Directory Management **Product capability:** Directory -Update the company branding functionality on the Azure AD/Microsoft 365 sign in experience to allow customizing Self Service Password Reset (SSPR) hyperlinks, footer hyperlinks and browser icon. For more information, see: [Configure your company branding](../fundamentals/customize-branding.md). +Update the company branding functionality on the Azure AD/Microsoft 365 sign-in experience to allow customizing Self Service Password Reset (SSPR) hyperlinks, footer hyperlinks and browser icon. 
For more information, see: [Configure your company branding](../fundamentals/customize-branding.md). Update the company branding functionality on the Azure AD/Microsoft 365 sign in Administrative Units now support soft deletion. Admins can now list, view properties of, or restore deleted Administrative Units using the Microsoft Graph. This functionality restores all configuration for the Administrative Unit when restored from soft delete, including memberships, admin roles, processing rules, and processing rules state. -This functionality greatly enhances recoverability and resilience when using Administrative Units. Now, when an Administrative Unit is accidentally deleted it can be restored quickly to the same state it was at time of deletion-removing uncertainty around how things were configured and making restoration quick and easy. For more information, see: [List deletedItems (directory objects)](/graph/api/directory-deleteditems-list). +This functionality greatly enhances recoverability and resilience when using Administrative Units. Now, when an Administrative Unit is accidentally deleted, you can restore it quickly to the same state it was at time of deletion. This removes uncertainty around configuration and makes restoration quick and easy. For more information, see: [List deletedItems (directory objects)](/graph/api/directory-deleteditems-list). This functionality greatly enhances recoverability and resilience when using Adm **Service category:** Identity Protection **Product capability:** Platform -With the growing adoption and support of IPv6 across enterprise networks, service providers, and devices, many customers are wondering if their users can continue to access their services and applications from IPv6 clients and networks. Today, we’re excited to announce our plan to bring IPv6 support to Microsoft Azure Active Directory (Azure AD). This will allow customers to reach the Azure AD services over both IPv4 and IPv6 network protocols (dual stack). +With the growing adoption and support of IPv6 across enterprise networks, service providers, and devices, many customers are wondering if their users can continue to access their services and applications from IPv6 clients and networks. Today, we’re excited to announce our plan to bring IPv6 support to Microsoft Azure Active Directory (Azure AD). This allows customers to reach the Azure AD services over both IPv4 and IPv6 network protocols (dual stack). For most customers, IPv4 won't completely disappear from their digital landscape, so we aren't planning to require IPv6 or to de-prioritize IPv4 in any Azure Active Directory features or services. We'll begin introducing IPv6 support into Azure AD services in a phased approach, beginning March 31, 2023.-We have guidance below which is specifically for Azure AD customers who use IPv6 addresses and also use Named Locations in their Conditional Access policies. +We have guidance that is specifically for Azure AD customers who use IPv6 addresses and also use Named Locations in their Conditional Access policies. Customers who use named locations to identify specific network boundaries in their organization need to:-1. Conduct an audit of existing named locations to anticipate potential impact. +1. Conduct an audit of existing named locations to anticipate potential risk. 1. Work with your network partner to identify egress IPv6 addresses in use in your environment. 1. Review and update existing named locations to include the identified IPv6 ranges. 
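As a rough illustration of the last step above (updating named locations to include IPv6 ranges), an IP named location can be updated through Microsoft Graph to carry both IPv4 and IPv6 CIDR ranges. This is a sketch rather than prescribed guidance; the named-location ID, CIDR values, and token are placeholders, and note that the `ipRanges` collection replaces the existing ranges, so include the current IPv4 ranges as well.

```python
# Sketch: add an IPv6 range to an existing IP named location via Microsoft Graph.
# The named-location ID, display name, CIDR ranges, and token are placeholders.
import requests

TOKEN = "<access token with Policy.ReadWrite.ConditionalAccess>"   # acquire with MSAL, for example
LOCATION_ID = "<named-location-id>"

resp = requests.patch(
    f"https://graph.microsoft.com/v1.0/identity/conditionalAccess/namedLocations/{LOCATION_ID}",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "@odata.type": "#microsoft.graph.ipNamedLocation",
        "displayName": "Corporate network (IPv4 + IPv6)",
        "ipRanges": [   # this list replaces the existing ranges on the named location
            {"@odata.type": "#microsoft.graph.iPv4CidrRange", "cidrAddress": "203.0.113.0/24"},
            {"@odata.type": "#microsoft.graph.iPv6CidrRange", "cidrAddress": "2001:db8::/48"},
        ],
    },
)
resp.raise_for_status()
```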
Customers who use Conditional Access location based policies to restrict and secure access to their apps from specific networks need to:-1. Conduct an audit of existing Conditional Access policies to identify use of named locations as a condition to anticipate potential impact. +1. Conduct an audit of existing Conditional Access policies to identify use of named locations as a condition to anticipate potential risk. 1. Review and update existing Conditional Access location based policies to ensure they continue to meet your organization’s security requirements. -We'll continue to share additional guidance on IPv6 enablement in Azure AD at this easy to remember link https://aka.ms/azureadipv6. +We continue to share additional guidance on IPv6 enablement in Azure AD at this link: https://aka.ms/azureadipv6. We'll continue to share additional guidance on IPv6 enablement in Azure AD at th **Type:** Plan for change **Service category:** Provisioning -**Product capability:** AAD Connect Cloud Sync +**Product capability:** Azure AD Connect Cloud Sync -Microsoft will stop support for Azure AD provisioning agent with versions 1.1.818.0 and below starting Feb 1,2023. If you're using Azure AD cloud sync, please make sure you have the latest version of the agent. You can info about the agent release history [here](../app-provisioning/provisioning-agent-release-version-history.md). You can download the latest version [here](https://download.msappproxy.net/Subscription/d3c8b69d-6bf7-42be-a529-3fe9c2e70c90/Connector/provisioningAgentInstaller) +Microsoft stops support for Azure AD provisioning agent with versions 1.1.818.0 and below starting Feb 1,2023. If you're using Azure AD cloud sync, make sure you have the latest version of the agent. You can view info about the agent release history [here](../app-provisioning/provisioning-agent-release-version-history.md). You can download the latest version [here](https://download.msappproxy.net/Subscription/d3c8b69d-6bf7-42be-a529-3fe9c2e70c90/Connector/provisioningAgentInstaller) You can find out which version of the agent you're using as follows: An IT admin can now add multiple domains to a single SAML/WS-Fed identity provid -### General Availability - Limits on the number of configured API permissions for an application registration will be enforced starting in October 2022 +### General Availability - Limits on the number of configured API permissions for an application registration enforced starting in October 2022 An IT admin can now add multiple domains to a single SAML/WS-Fed identity provid **Service category:** Other **Product capability:** Developer Experience -In the end of October, the total number of required permissions for any single application registration must not exceed 400 permissions across all APIs. Applications exceeding the limit won't be able to increase the number of permissions they're configured for. The existing limit on the number of distinct APIs for which permissions are required remains unchanged and may not exceed 50 APIs. +In the end of October, the total number of required permissions for any single application registration must not exceed 400 permissions across all APIs. Applications exceeding the limit are unable to increase the number of permissions configured for. The existing limit on the number of distinct APIs for permissions required remains unchanged and may not exceed 50 APIs. 
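A quick way to gauge how close an existing app registration is to these limits is to count its configured permissions by reading the `requiredResourceAccess` property described in the next paragraph. The sketch below is illustrative; the application object ID and token are placeholders.

```python
# Rough sketch: count configured API permissions on an app registration via Microsoft Graph.
# The application object ID and token are placeholders.
import requests

TOKEN = "<access token with Application.Read.All>"   # acquire with MSAL, for example
APP_OBJECT_ID = "<application-object-id>"

resp = requests.get(
    f"https://graph.microsoft.com/v1.0/applications/{APP_OBJECT_ID}?$select=requiredResourceAccess",
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()
required = resp.json()["requiredResourceAccess"]
total_permissions = sum(len(api["resourceAccess"]) for api in required)
print(f"{len(required)} APIs, {total_permissions} permissions configured")   # compare against 50 / 400
```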
-In the Azure portal, the required permissions are listed under API Permissions within specific applications in the application registration menu. When using Microsoft Graph or Microsoft Graph PowerShell, the required permissions are listed in the requiredResourceAccess property of an [application](/graph/api/resources/application) entity. For more information, see: [Validation differences by supported account types (signInAudience)](../develop/supported-accounts-validation.md). +In the Azure portal, the required permissions list is under API Permissions within specific applications in the application registration menu. When using Microsoft Graph or Microsoft Graph PowerShell, the required permissions list is in the requiredResourceAccess property of an [application](/graph/api/resources/application) entity. For more information, see: [Validation differences by supported account types (signInAudience)](../develop/supported-accounts-validation.md). In the Azure portal, the required permissions are listed under API Permissions w **Service category:** Conditional Access **Product capability:** User Authentication -Announcing Public preview of Authentication strength, a Conditional Access control that allows administrators to specify which authentication methods can be used to access a resource. For more information, see: [Conditional Access authentication strength (preview)](../authentication/concept-authentication-strengths.md). You can use custom authentication strengths to restrict access by requiring specific FIDO2 keys using the Authenticator Attestation GUIDs (AAGUIDs), and apply this through conditional access policies. For more information, see: [FIDO2 security key advanced options](../authentication/concept-authentication-strengths.md#fido2-security-key-advanced-options). +We are announcing Public preview of Authentication strength, a Conditional Access control that allows administrators to specify which authentication methods can be used to access a resource. For more information, see: [Conditional Access authentication strength (preview)](../authentication/concept-authentication-strengths.md). You can use custom authentication strengths to restrict access by requiring specific FIDO2 keys using the Authenticator Attestation GUIDs (AAGUIDs), and apply this through conditional access policies. For more information, see: [FIDO2 security key advanced options](../authentication/concept-authentication-strengths.md#fido2-security-key-advanced-options). You can now require your business partner (B2B) guests across all Microsoft clou **Service category:** Authentications (Logins) **Product capability:** User Authentication -We're excited to announce the general availability of hybrid cloud Kerberos trust, a new Windows Hello for Business deployment model to enable a password-less sign-in experience. With this new model, we’ve made Windows Hello for Business much easier to deploy than the existing key trust and certificate trust deployment models by removing the need for maintaining complicated public key infrastructure (PKI), and Azure Active Directory (AD) Connect synchronization wait times. For more information, see: [Hybrid Cloud Kerberos Trust Deployment](/windows/security/identity-protection/hello-for-business/hello-hybrid-cloud-kerberos-trust). +We're excited to announce the general availability of hybrid cloud Kerberos trust, a new Windows Hello for Business deployment model to enable a password-less sign-in experience. 
With this new model, we’ve made Windows Hello for Business easier to deploy than the existing key trust and certificate trust deployment models by removing the need for maintaining complicated public key infrastructure (PKI), and Azure Active Directory (AD) Connect synchronization wait times. For more information, see: [Hybrid Cloud Kerberos Trust Deployment](/windows/security/identity-protection/hello-for-business/hello-hybrid-cloud-kerberos-trust). This feature empowers users on Linux clients to register their devices with Azur - Users can register their Linux devices with Azure AD - Users can enroll in Mobile Device Management (Intune), which can be used to provide compliance decisions based upon policy definitions to allow device based conditional access on Linux Desktops -- If compliant, users can use Edge Browser to enable Single-Sign on to M365/Azure resources and satisfy device-based Conditional Access policies.+- If compliant, users can use Microsoft Edge Browser to enable Single-Sign on to M365/Azure resources and satisfy device-based Conditional Access policies. For more information, see: For more information, see: -### General Availability - Deprecation of Azure Multi-Factor Authentication Server +### General Availability - Deprecation of Azure Active Directory Multi-Factor Authentication. For more information, see: **Service category:** MFA **Product capability:** Identity Security & Protection -Beginning September 30, 2024, Azure Multi-Factor Authentication Server deployments will no longer service multi-factor authentication (MFA) requests, which could cause authentications to fail for your organization. To ensure uninterrupted authentication services, and to remain in a supported state, organizations should migrate their users’ authentication data to the cloud-based Azure AD Multi-Factor Authentication service using the latest Migration Utility included in the most recent Azure AD Multi-Factor Authentication Server update. For more information, see: [Migrate from MFA Server to Azure AD Multi-Factor Authentication](../authentication/how-to-migrate-mfa-server-to-azure-mfa.md). +Beginning September 30, 2024, Azure Active Directory Multi-Factor Authentication Server deployments will no longer service multi-factor authentication (MFA) requests, which could cause authentications to fail for your organization. To ensure uninterrupted authentication services, and to remain in a supported state, organizations should migrate their users’ authentication data to the cloud-based Azure Active Directory Multi-Factor Authentication service using the latest Migration Utility included in the most recent Azure Active Directory Multi-Factor Authentication Server update. For more information, see: [Migrate from MFA Server to Azure AD Multi-Factor Authentication](../authentication/how-to-migrate-mfa-server-to-azure-mfa.md). When configuring writeback of attributes from Azure AD to SAP SuccessFactors Emp To prevent accidental notification approvals, admins can now require users to enter the number displayed on the sign-in screen when approving an MFA notification in the Microsoft Authenticator app. We've also refreshed the Azure portal admin UX and Microsoft Graph APIs to make it easier for customers to manage Authenticator app feature roll-outs. As part of this update we have also added the highly requested ability for admins to exclude user groups from each feature. 
-The number matching feature greatly up-levels the security posture of the Microsoft Authenticator app and protects organizations from MFA fatigue attacks. We highly encourage our customers to adopt this feature applying the rollout controls we have built. Number Matching will begin to be enabled for all users of the Microsoft Authenticator app starting 27th of February 2023. +The number matching feature greatly up-levels the security posture of the Microsoft Authenticator app and protects organizations from MFA fatigue attacks. We highly encourage our customers to adopt this feature applying the rollout controls we have built. Number Matching will begin to be enabled for all users of the Microsoft Authenticator app starting February 27 2023. For more information, see: [How to use number matching in multifactor authentication (MFA) notifications - Authentication methods policy](../authentication/how-to-mfa-number-match.md). For more information, see: [How to use number matching in multifactor authentica **Service category:** Microsoft Authenticator App **Product capability:** User Authentication -Reduce accidental approvals by showing users additional context in Microsoft Authenticator app notifications. Customers can enhance notifications with the following: +Reduce accidental approvals by showing users additional context in Microsoft Authenticator app notifications. Customers can enhance notifications with the following steps: -- Application Context: This feature will show users which application they're signing into.-- Geographic Location Context: This feature will show users their sign-in location based on the IP address of the device they're signing into. +- Application Context: This feature shows users which application they're signing into. +- Geographic Location Context: This feature shows users their sign-in location based on the IP address of the device they're signing into. The feature is available for both MFA and Password-less Phone Sign-in notifications and greatly increases the security posture of the Microsoft Authenticator app. We've also refreshed the Azure portal Admin UX and Microsoft Graph APIs to make it easier for customers to manage Authenticator app feature roll-outs. As part of this update, we've also added the highly requested ability for admins to exclude user groups from certain features. In October 2022 we've added the following 15 new applications in our App gallery You can also find the documentation of all the applications from here https://aka.ms/AppsTutorial, -For listing your application in the Azure AD app gallery, please read the details here https://aka.ms/AzureADAppRequest +For listing your application in the Azure AD app gallery, read the details here https://aka.ms/AzureADAppRequest This feature empowers users on Linux clients to register their devices with Azur - Users can register their Linux devices with Azure AD. - Users can enroll in Mobile Device Management (Intune), which can be used to provide compliance decisions based upon policy definitions to allow device based conditional access on Linux Desktops.-- If compliant, users can use Edge Browser to enable Single-Sign on to M365/Azure resources and satisfy device-based Conditional Access policies.+- If compliant, users can use Microsoft Edge Browser to enable Single-Sign on to M365/Azure resources and satisfy device-based Conditional Access policies. 
For more information, see: For more information, see: [Tutorial: Validate a SCIM endpoint](../app-provision -Accidental deletion of users in any system could be disastrous. We’re excited to announce the general availability of the accidental deletions prevention capability as part of the Azure AD provisioning service. When the number of deletions to be processed in a single provisioning cycle spikes above a customer defined threshold, the Azure AD provisioning service will pause, provide you visibility into the potential deletions, and allow you to accept or reject the deletions. This functionality has historically been available for Azure AD Connect, and Azure AD Connect Cloud Sync. It's now available across the various provisioning flows, including both HR-driven provisioning and application provisioning. +Accidental deletion of users in any system could be disastrous. We’re excited to announce the general availability of the accidental deletions prevention capability as part of the Azure AD provisioning service. When the number of deletions to be processed in a single provisioning cycle spikes above a customer defined threshold the following will happen. The Azure AD provisioning service pauses, provide you with visibility into the potential deletions, and allow you to accept or reject the deletions. This functionality has historically been available for Azure AD Connect, and Azure AD Connect Cloud Sync. It's now available across the various provisioning flows, including both HR-driven provisioning and application provisioning. For more information, see: [Enable accidental deletions prevention in the Azure AD provisioning service](../app-provisioning/accidental-deletions.md) For more information, see: [Enable accidental deletions prevention in the Azure -Identity protection expands its Anonymous and Malicious IP detections to protect ADFS sign-ins. This will automatically apply to all customers who have AD Connect Health deployed and enabled, and will show up as the existing "Anonymous IP" or "Malicious IP" detections with a token issuer type of "AD Federation Services". +Identity protection expands its Anonymous and Malicious IP detections to protect ADFS sign-ins. This automatically applies to all customers who have AD Connect Health deployed and enabled, and show up as the existing "Anonymous IP" or "Malicious IP" detections with a token issuer type of "AD Federation Services". For more information, see: [What is risk?](../identity-protection/concept-identity-protection-risks.md) In September 2022 we've added the following 15 new applications in our App galle You can also find the documentation of all the applications from here https://aka.ms/AppsTutorial, -For listing your application in the Azure AD app gallery, please read the details here https://aka.ms/AzureADAppRequest ------## August 2022 --### General Availability - Ability to force reauthentication on Intune enrollment, risky sign-ins, and risky users ----**Type:** New feature -**Service category:** Conditional Access -**Product capability:** Identity Security & Protection ----Customers can now require a fresh authentication each time a user performs a certain action. Forced reauthentication supports requiring a user to reauthenticate during Intune device enrollment, password change for risky users, and risky sign-ins. 
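If you configure this through the Microsoft Graph Conditional Access API rather than the portal, forced reauthentication corresponds to a sign-in frequency session control with the interval set to every time. The fragment below is a sketch of that session-control piece only, assuming the `signInFrequencySessionControl` property names; it would be sent as the `sessionControls` property when creating or updating a policy.

```python
# Sketch: the sessionControls fragment of a Conditional Access policy (Microsoft Graph)
# that requires a fresh authentication every time the policy applies.
session_controls = {
    "signInFrequency": {
        "isEnabled": True,
        "frequencyInterval": "everyTime",   # instead of the default time-based interval
        "authenticationType": "primaryAndSecondaryAuthentication",
    }
}
# Sent as "sessionControls" in a POST or PATCH to
# https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies
```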
--For more information, see: [Configure authentication session management with Conditional Access](../conditional-access/howto-conditional-access-session-lifetime.md#require-reauthentication-every-time) ----### General Availability - Multi-Stage Access Reviews --**Type:** Changed feature -**Service category:** Access Reviews -**Product capability:** Identity Governance --Customers can now meet their complex audit and recertification requirements through multiple stages of reviews. For more information, see: [Create a multi-stage access review](../governance/create-access-review.md#create-a-multi-stage-access-review). ------### Public Preview - External user leave settings --**Type:** New feature -**Service category:** Enterprise Apps -**Product capability:** B2B/B2C --Currently, users can self-service leave for an organization without the visibility of their IT administrators. Some organizations may want more control over this self-service process. --With this feature, IT administrators can now allow or restrict external identities to leave an organization by Microsoft provided self-service controls via Azure Active Directory in the Microsoft Entra portal. In order to restrict users to leave an organization, customers need to include "Global privacy contact" and "Privacy statement URL" under tenant properties. - -A new policy API is available for the administrators to control tenant wide policy: -[externalIdentitiesPolicy resource type](/graph/api/resources/externalidentitiespolicy?view=graph-rest-beta&preserve-view=true) -- For more information, see: --- [Leave an organization as an external user](../external-identities/leave-the-organization.md)-- [Configure external collaboration settings](../external-identities/external-collaboration-settings-configure.md)-+For listing your application in the Azure AD app gallery, read the details here: https://aka.ms/AzureADAppRequest ---### Public Preview - Restrict self-service BitLocker for devices --**Type:** New feature -**Service category:** Device Registration and Management -**Product capability:** Access Control --In some situations, you may want to restrict the ability for end users to self-service BitLocker keys. With this new functionality, you can now turn off self-service of BitLocker keys, so that only specific individuals with right privileges can recover a BitLocker key. --For more information, see: [Block users from viewing their BitLocker keys (preview)](../devices/device-management-azure-portal.md#block-users-from-viewing-their-bitlocker-keys-preview) - -### Public Preview- Identity Protection Alerts in Microsoft 365 Defender --**Type:** New feature -**Service category:** Identity Protection -**Product capability:** Identity Security & Protection --Identity Protection risk detections (alerts) are now also available in Microsoft 365 Defender to provide a unified investigation experience for security professionals. 
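These detections also surface through the Microsoft Graph risk detections API, so you can pull the AD FS-issued ones specifically. The sketch below filters on the `tokenIssuerType` value; the token is a placeholder.

```python
# Sketch: list Identity Protection risk detections for AD FS sign-ins via Microsoft Graph.
# The token is a placeholder.
import requests

TOKEN = "<access token with IdentityRiskEvent.Read.All>"   # acquire with MSAL, for example

resp = requests.get(
    "https://graph.microsoft.com/v1.0/identityProtection/riskDetections",
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"$filter": "tokenIssuerType eq 'ADFederationServices'"},
)
resp.raise_for_status()
for detection in resp.json().get("value", []):
    print(detection["riskEventType"], detection["ipAddress"])
```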
For more information, see: [Investigate alerts in Microsoft 365 Defender](/microsoft-365/security/defender/investigate-alerts?view=o365-worldwide#alert-sources&preserve-view=true) -------### New Federated Apps available in Azure AD Application gallery - August 2022 --**Type:** New feature -**Service category:** Enterprise Apps -**Product capability:** 3rd Party Integration --In August 2022, we've added the following 40 new applications in our App gallery with Federation support --[Albourne Castle](https://village.albourne.com/castle), [Adra by Trintech](../saas-apps/adra-by-trintech-tutorial.md), [workhub](../saas-apps/workhub-tutorial.md), [4DX](../saas-apps/4dx-tutorial.md), [Ecospend IAM V1](https://iamapi.sb.ecospend.com/account/login), [TigerGraph](../saas-apps/tigergraph-tutorial.md), [Sketch](../saas-apps/sketch-tutorial.md), [Lattice](../saas-apps/lattice-tutorial.md), [snapADDY Single Sign On](https://app.snapaddy.com/login), [RELAYTO Content Experience Platform](https://relayto.com/signin), [oVice](https://tour.ovice.in/login), [Arena](../saas-apps/arena-tutorial.md), [QReserve](../saas-apps/qreserve-tutorial.md), [Curator](../saas-apps/curator-tutorial.md), [NetMotion Mobility](../saas-apps/netmotion-mobility-tutorial.md), [HackNotice](../saas-apps/hacknotice-tutorial.md), [ERA_EHS_CORE](../saas-apps/era-ehs-core-tutorial.md), [AnyClip Teams Connector](https://videomanager.anyclip.com/login), [Wiz SSO](../saas-apps/wiz-sso-tutorial.md), [Tango Reserve by AgilQuest (EU Instance)](../saas-apps/tango-reserve-tutorial.md), [valid8Me](../saas-apps/valid8me-tutorial.md), [Ahrtemis](../saas-apps/ahrtemis-tutorial.md), [KPMG Leasing Tool](../saas-apps/kpmg-tool-tutorial.md) [Mist Cloud Admin SSO](../saas-apps/mist-cloud-admin-tutorial.md), [Work-Happy](https://live.work-happy.com/?azure=true), [Ediwin SaaS EDI](../saas-apps/ediwin-saas-edi-tutorial.md), [LUSID](../saas-apps/lusid-tutorial.md), [Next Gen Math](https://nextgenmath.com/), [Total ID](https://www.tokyo-shoseki.co.jp/ict/), [Cheetah For Benelux](../saas-apps/cheetah-for-benelux-tutorial.md), [Live Center Australia](https://au.livecenter.com/), [Shop Floor Insight](https://www.dmsiworks.com/apps/shop-floor-insight), [Warehouse Insight](https://www.dmsiworks.com/apps/warehouse-insight), [myAOS](../saas-apps/myaos-tutorial.md), [Hero](https://admin.linc-ed.com/), [FigBytes](../saas-apps/figbytes-tutorial.md), [VerosoftDesign](https://verosoft-design.vercel.app/), [ViewpointOne - UK](https://identity-uk.team.viewpoint.com/), [EyeRate Reviews](https://azure-login.eyeratereviews.com/), [Lytx DriveCam](../saas-apps/lytx-drivecam-tutorial.md) --You can also find the documentation of all the applications from here https://aka.ms/AppsTutorial, --For listing your application in the Azure AD app gallery, please read the details here https://aka.ms/AzureADAppRequest -------### Public preview - New provisioning connectors in the Azure AD Application Gallery - August 2022 --**Type:** New feature -**Service category:** App Provisioning -**Product capability:** 3rd Party Integration --You can now automate creating, updating, and deleting user accounts for these newly integrated apps: --- [Ideagen Cloud](../saas-apps/ideagen-cloud-provisioning-tutorial.md)-- [Lucid (All Products)](../saas-apps/lucid-all-products-provisioning-tutorial.md)-- [Palo Alto Networks Cloud Identity Engine - Cloud Authentication Service](../saas-apps/palo-alto-networks-cloud-identity-engine-provisioning-tutorial.md)-- [SuccessFactors 
Writeback](../saas-apps/sap-successfactors-writeback-tutorial.md)-- [Tableau Cloud](../saas-apps/tableau-online-provisioning-tutorial.md)--For more information about how to better secure your organization by using automated user account provisioning, see: [Automate user provisioning to SaaS applications with Azure AD](../app-provisioning/user-provisioning.md). -----### General Availability - Workload Identity Federation with App Registrations are available now --**Type:** New feature -**Service category:** Other -**Product capability:** Developer Experience --Entra Workload Identity Federation allows developers to exchange tokens issued by another identity provider with Azure AD tokens, without needing secrets. It eliminates the need to store, and manage, credentials inside the code or secret stores to access Azure AD protected resources such as Azure and Microsoft Graph. By removing the secrets required to access Azure AD protected resources, workload identity federation can improve the security posture of your organization. This feature also reduces the burden of secret management and minimizes the risk of service downtime due to expired credentials. --For more information on this capability and supported scenarios, see [Workload identity federation](../develop/workload-identity-federation.md). -----### Public Preview - Entitlement management automatic assignment policies --**Type:** Changed feature -**Service category:** Entitlement Management -**Product capability:** Identity Governance --In Azure AD entitlement management, a new form of access package assignment policy is being added. The automatic assignment policy includes a filter rule, similar to a dynamic group, that specifies the users in the tenant who should have assignments. When users come into scope of matching that filter rule criteria, an assignment is automatically created, and when they no longer match, the assignment is removed. -- For more information, see: [Configure an automatic assignment policy for an access package in Azure AD entitlement management (Preview)](../governance/entitlement-management-access-package-auto-assignment-policy.md). ---- |
active-directory | Entitlement Management Access Package First | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-first.md | A resource directory has one or more resources to share. In this step, you creat | **Admin1** | Global administrator, or User administrator. This user can be the user you're currently signed in. | | **Requestor1** | User | -4. [Create an Azure AD security group](../fundamentals/active-directory-groups-create-azure-portal.md) named **Marketing resources** with a membership type of **Assigned**. This group will be the target resource for entitlement management. The group should be empty of members to start. +4. [Create an Azure AD security group](../fundamentals/active-directory-groups-create-azure-portal.md) named **Marketing resources** with a membership type of **Assigned**. This group is the target resource for entitlement management. The group should be empty of members to start. ## Step 2: Create an access package An *access package* is a bundle of resources that a team or project needs and is 1. For **Enable requests**, select **Yes** to enable this access package to be requested as soon as it's created. +1. To add a Verified ID requirement to the access package, select on **Add issuer** in the **Required Verified IDs** section. If you don't have the Verified ID service set up in your tenant, navigate to the **Verified ID** section of the Azure portal. ++ :::image type="content" source="media/entitlement-management-access-package-first/verified-id-picker.png" alt-text="Screenshot of the Verified ID picker selection."::: ++1. Search for an issuer in the dropdown and select the credential type you want users to present when requesting access. ++ > [!NOTE] + > If you select multiple issuers / credential types, users requesting access will be required to present **all** of the credential types you have included in this policy. To give users the option of presenting one of many credential types, please include each acceptable option in a separate policy. + 1. Select **Next** to open the **Requestor information** tab.  |
active-directory | Cross Tenant Synchronization Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/multi-tenant-organizations/cross-tenant-synchronization-overview.md | For anyone that has used Azure AD to [provision identities into a SaaS applicati In the source tenant: Using this feature requires Azure AD Premium P1 licenses. Each user who is synchronized with cross-tenant synchronization must have a P1 license in their home/source tenant. To find the right license for your requirements, see [Compare generally available features of Azure AD](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing). -In the target tenant: Cross-tenant sync relies on the Azure AD External Identities billing model. To understand the external identities licensing model, see [MAU billing model for Azure AD External Identities](../external-identities/external-identities-pricing.md) +In the target tenant: Cross-tenant sync relies on the Azure AD External Identities billing model. To understand the external identities licensing model, see [MAU billing model for Azure AD External Identities](../external-identities/external-identities-pricing.md). You will also need at least one Azure AD Premium P1 license in the target tenant to enable auto-redemption. ## Frequently asked questions |
active-directory | Permissions Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/permissions-reference.md | This role also grants the ability to consent for delegated permissions and appli > | microsoft.directory/servicePrincipals/enable | Enable service principals | > | microsoft.directory/servicePrincipals/getPasswordSingleSignOnCredentials | Manage password single sign-on credentials on service principals | > | microsoft.directory/servicePrincipals/synchronizationCredentials/manage | Manage application provisioning secrets and credentials |-> | microsoft.directory/servicePrincipals/synchronizationJobs/manage | Start, restart, and pause application provisioning syncronization jobs | -> | microsoft.directory/servicePrincipals/synchronizationSchema/manage | Create and manage application provisioning syncronization jobs and schema | +> | microsoft.directory/servicePrincipals/synchronizationJobs/manage | Start, restart, and pause application provisioning synchronization jobs | +> | microsoft.directory/servicePrincipals/synchronizationSchema/manage | Create and manage application provisioning synchronization jobs and schema | > | microsoft.directory/servicePrincipals/managePasswordSingleSignOnCredentials | Read password single sign-on credentials on service principals | > | microsoft.directory/servicePrincipals/managePermissionGrantsForAll.microsoft-application-admin | Grant consent for application permissions and delegated permissions on behalf of any user or all users, except for application permissions for Microsoft Graph | > | microsoft.directory/servicePrincipals/appRoleAssignedTo/update | Update service principal role assignments | This role also grants the ability to consent for delegated permissions and appli > | microsoft.directory/servicePrincipals/enable | Enable service principals | > | microsoft.directory/servicePrincipals/getPasswordSingleSignOnCredentials | Manage password single sign-on credentials on service principals | > | microsoft.directory/servicePrincipals/synchronizationCredentials/manage | Manage application provisioning secrets and credentials |-> | microsoft.directory/servicePrincipals/synchronizationJobs/manage | Start, restart, and pause application provisioning syncronization jobs | -> | microsoft.directory/servicePrincipals/synchronizationSchema/manage | Create and manage application provisioning syncronization jobs and schema | +> | microsoft.directory/servicePrincipals/synchronizationJobs/manage | Start, restart, and pause application provisioning synchronization jobs | +> | microsoft.directory/servicePrincipals/synchronizationSchema/manage | Create and manage application provisioning synchronization jobs and schema | > | microsoft.directory/servicePrincipals/managePasswordSingleSignOnCredentials | Read password single sign-on credentials on service principals | > | microsoft.directory/servicePrincipals/managePermissionGrantsForAll.microsoft-application-admin | Grant consent for application permissions and delegated permissions on behalf of any user or all users, except for application permissions for Microsoft Graph | > | microsoft.directory/servicePrincipals/appRoleAssignedTo/update | Update service principal role assignments | Users in this role can read and update basic information of users, groups, and s > | microsoft.directory/oAuth2PermissionGrants/create | Create OAuth 2.0 permission grants | > | microsoft.directory/oAuth2PermissionGrants/basic/update | Update OAuth 2.0 permission grants | > | 
microsoft.directory/servicePrincipals/synchronizationCredentials/manage | Manage application provisioning secrets and credentials |-> | microsoft.directory/servicePrincipals/synchronizationJobs/manage | Start, restart, and pause application provisioning syncronization jobs | -> | microsoft.directory/servicePrincipals/synchronizationSchema/manage | Create and manage application provisioning syncronization jobs and schema | +> | microsoft.directory/servicePrincipals/synchronizationJobs/manage | Start, restart, and pause application provisioning synchronization jobs | +> | microsoft.directory/servicePrincipals/synchronizationSchema/manage | Create and manage application provisioning synchronization jobs and schema | > | microsoft.directory/servicePrincipals/appRoleAssignedTo/update | Update service principal role assignments | > | microsoft.directory/users/assignLicense | Manage user licenses | > | microsoft.directory/users/create | Add users | Users with this role have access to all administrative features in Azure Active > | microsoft.directory/directoryRoles/allProperties/allTasks | Create and delete directory roles, and read and update all properties | > | microsoft.directory/directoryRoleTemplates/allProperties/allTasks | Create and delete Azure AD role templates, and read and update all properties | > | microsoft.directory/domains/allProperties/allTasks | Create and delete domains, and read and update all properties |+> | microsoft.directory/domains/federationConfiguration/standard/read | Read standard properties of federation configuration for domains | +> | microsoft.directory/domains/federationConfiguration/basic/update | Update basic federation configuration for domains | +> | microsoft.directory/domains/federationConfiguration/create | Create federation configuration for domains | +> | microsoft.directory/domains/federationConfiguration/delete | Delete federation configuration for domains | > | microsoft.directory/entitlementManagement/allProperties/allTasks | Create and delete resources, and read and update all properties in Azure AD entitlement management | > | microsoft.directory/groups/allProperties/allTasks | Create and delete groups, and read and update all properties | > | microsoft.directory/groupsAssignableToRoles/create | Create role-assignable groups | Users with this role **cannot** do the following: > | microsoft.directory/directoryRoles/allProperties/read | Read all properties of directory roles | > | microsoft.directory/directoryRoleTemplates/allProperties/read | Read all properties of directory role templates | > | microsoft.directory/domains/allProperties/read | Read all properties of domains |+> | microsoft.directory/domains/federationConfiguration/standard/read | Read standard properties of federation configuration for domains | > | microsoft.directory/entitlementManagement/allProperties/read | Read all properties in Azure AD entitlement management | > | microsoft.directory/groups/allProperties/read | Read all properties (including privileged properties) on Security groups and Microsoft 365 groups, including role-assignable groups | > | microsoft.directory/groupSettings/allProperties/read | Read all properties of group settings | Users in this role can create, manage and deploy provisioning configuration setu > | microsoft.directory/deletedItems.applications/restore | Restore soft deleted applications to original state | > | microsoft.directory/domains/allProperties/read | Read all properties of domains | > | microsoft.directory/domains/federation/update | Update 
federation property of domains |+> | microsoft.directory/domains/federationConfiguration/standard/read | Read standard properties of federation configuration for domains | +> | microsoft.directory/domains/federationConfiguration/basic/update | Update basic federation configuration for domains | +> | microsoft.directory/domains/federationConfiguration/create | Create federation configuration for domains | +> | microsoft.directory/domains/federationConfiguration/delete | Delete federation configuration for domains | > | microsoft.directory/hybridAuthenticationPolicy/allProperties/allTasks | Manage hybrid authentication policy in Azure AD | > | microsoft.directory/organization/dirSync/update | Update the organization directory sync property | > | microsoft.directory/passwordHashSync/allProperties/allTasks | Manage all aspects of Password Hash Synchronization (PHS) in Azure AD | Users in this role can create, manage and deploy provisioning configuration setu > | microsoft.directory/servicePrincipals/disable | Disable service principals | > | microsoft.directory/servicePrincipals/enable | Enable service principals | > | microsoft.directory/servicePrincipals/synchronizationCredentials/manage | Manage application provisioning secrets and credentials |-> | microsoft.directory/servicePrincipals/synchronizationJobs/manage | Start, restart, and pause application provisioning syncronization jobs | -> | microsoft.directory/servicePrincipals/synchronizationSchema/manage | Create and manage application provisioning syncronization jobs and schema | +> | microsoft.directory/servicePrincipals/synchronizationJobs/manage | Start, restart, and pause application provisioning synchronization jobs | +> | microsoft.directory/servicePrincipals/synchronizationSchema/manage | Create and manage application provisioning synchronization jobs and schema | > | microsoft.directory/servicePrincipals/audience/update | Update audience properties on service principals | > | microsoft.directory/servicePrincipals/authentication/update | Update authentication properties on service principals | > | microsoft.directory/servicePrincipals/basic/update | Update basic properties on service principals | Azure Advanced Threat Protection | Monitor and respond to suspicious security ac > | microsoft.directory/crossTenantAccessPolicy/partners/crossCloudMeetings/update | Update cross-cloud Teams meeting settings of cross-tenant access policy for partners | > | microsoft.directory/crossTenantAccessPolicy/partners/tenantRestrictions/update | Update tenant restrictions of cross-tenant access policy for partners | > | microsoft.directory/domains/federation/update | Update federation property of domains |+> | microsoft.directory/domains/federationConfiguration/standard/read | Read standard properties of federation configuration for domains | +> | microsoft.directory/domains/federationConfiguration/basic/update | Update basic federation configuration for domains | +> | microsoft.directory/domains/federationConfiguration/create | Create federation configuration for domains | +> | microsoft.directory/domains/federationConfiguration/delete | Delete federation configuration for domains | > | microsoft.directory/entitlementManagement/allProperties/read | Read all properties in Azure AD entitlement management | > | microsoft.directory/identityProtection/allProperties/read | Read all resources in Azure AD Identity Protection | > | microsoft.directory/identityProtection/allProperties/update | Update all resources in Azure AD Identity Protection | In | Can 
do > | microsoft.directory/auditLogs/allProperties/read | Read all properties on audit logs, including privileged properties | > | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policy | > | microsoft.directory/bitlockerKeys/key/read | Read bitlocker metadata and key on devices |+> | microsoft.directory/domains/federationConfiguration/standard/read | Read standard properties of federation configuration for domains | > | microsoft.directory/entitlementManagement/allProperties/read | Read all properties in Azure AD entitlement management | > | microsoft.directory/identityProtection/allProperties/read | Read all resources in Azure AD Identity Protection | > | microsoft.directory/namedLocations/standard/read | Read basic properties of custom rules that define network locations | |
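If you want to check which of these `microsoft.directory/...` actions a built-in role grants in your own tenant, one option is to query the role definitions that Microsoft Graph exposes. The following is a minimal sketch for a Bash shell using `az rest`; it assumes you're signed in with Azure CLI and allowed to read role management data, and the role name is only an example:

```azurecli-interactive
# List the resource actions granted by a built-in role (example: Application Administrator).
az rest --method get \
    --url "https://graph.microsoft.com/v1.0/roleManagement/directory/roleDefinitions?\$filter=displayName%20eq%20'Application%20Administrator'" \
    --query "value[0].rolePermissions[0].allowedResourceActions"
```

The returned list should correspond to the action names shown in the tables above.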
active-directory | Oracle Cloud Infrastructure Console Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/oracle-cloud-infrastructure-console-provisioning-tutorial.md | This section guides you through the steps to configure the Azure AD provisioning |urn:ietf:params:scim:schemas:oracle:idcs:extension:user:User:bypassNotification|Boolean| |urn:ietf:params:scim:schemas:oracle:idcs:extension:user:User:isFederatedUser|Boolean| +> [!NOTE] +> Additional extension attributes must begin with "urn:ietf:params:scim:api:" + 10. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Oracle Cloud Infrastructure Console**. 11. Review the group attributes that are synchronized from Azure AD to Oracle Cloud Infrastructure Console in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Oracle Cloud Infrastructure Console for update operations. Select the **Save** button to commit any changes. |
active-directory | Oracle Fusion Erp Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/oracle-fusion-erp-provisioning-tutorial.md | The objective of this tutorial is to demonstrate the steps to be performed in Or > [!NOTE] > This tutorial describes a connector built on top of the Azure AD User Provisioning Service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).-> -> This connector is currently in Preview. For more information on the general Microsoft Azure terms of use for Preview features, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) ## Prerequisites |
active-directory | Windchill Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/windchill-tutorial.md | + + Title: Azure Active Directory SSO integration with Windchill +description: Learn how to configure single sign-on between Azure Active Directory and Windchill. ++++++++ Last updated : 02/22/2023+++++# Azure Active Directory SSO integration with Windchill ++In this article, you'll learn how to integrate Windchill with Azure Active Directory (Azure AD). Windchill PLM Software - Realize value quickly with out-of-the-box functionality across a comprehensive portfolio of core Product Data Management and advanced Product Lifecycle Management applications. When you integrate Windchill with Azure AD, you can: ++* Control in Azure AD who has access to Windchill. +* Enable your users to be automatically signed-in to Windchill with their Azure AD accounts. +* Manage your accounts in one central location - the Azure portal. ++You'll configure and test Azure AD single sign-on for Windchill in a test environment. Windchill supports **SP** and **IDP** initiated single sign-on. ++> [!NOTE] +> Identifier of this application is a fixed string value so only one instance can be configured in one tenant. ++## Prerequisites ++To integrate Azure Active Directory with Windchill, you need: ++* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). +* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal. +* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). +* Windchill single sign-on (SSO) enabled subscription. ++## Add application and assign a test user ++Before you begin the process of configuring single sign-on, you need to add the Windchill application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration. ++### Add Windchill from the Azure AD gallery ++Add Windchill from the Azure AD application gallery to configure single sign-on with Windchill. For more information on how to add application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md). ++### Create and assign Azure AD test user ++Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon. ++Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides). ++## Configure Azure AD SSO ++Complete the following steps to enable Azure AD single sign-on in the Azure portal. ++1. In the Azure portal, on the **Windchill** application integration page, find the **Manage** section and select **single sign-on**. +1. On the **Select a single sign-on method** page, select **SAML**. +1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings. ++  ++1. 
On the **Basic SAML Configuration** section, you don't have to perform any steps because the app is already pre-integrated with Azure. ++1. If you wish to configure the application in **SP** initiated mode, then perform the following step: ++ In the **Sign on URL** textbox, type a URL using the following pattern: + `https://<hostname:port>/Shibboleth.sso/Login` ++ > [!NOTE] + > This value is not real. Update this value with the actual Sign on URL. Contact [Windchill Client support team](mailto:support@ptc.com) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. ++1. On the **Set-up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer. ++  ++1. On the **Set up Windchill** section, copy the appropriate URL(s) based on your requirement. ++  ++## Configure Windchill SSO ++To configure single sign-on on the **Windchill** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from the Azure portal to the [Windchill support team](mailto:support@ptc.com). They configure this setting so that the SAML SSO connection is set properly on both sides. ++### Create Windchill test user ++In this section, you create a user called Britta Simon in Windchill. Work with the [Windchill support team](mailto:support@ptc.com) to add the users in the Windchill platform. Users must be created and activated before you use single sign-on. ++## Test SSO ++In this section, you test your Azure AD single sign-on configuration with the following options. ++#### SP initiated: ++1. Click on **Test this application** in the Azure portal. This redirects to the Windchill Sign-on URL where you can initiate the login flow. ++1. Go to the Windchill Sign-on URL directly and initiate the login flow from there. ++#### IDP initiated: ++1. Click on **Test this application** in the Azure portal and you should be automatically signed in to the Windchill instance for which you set up the SSO. ++1. You can also use Microsoft My Apps to test the application in any mode. When you click the Windchill tile in My Apps, if configured in SP mode you're redirected to the application sign-on page to initiate the login flow, and if configured in IDP mode, you're automatically signed in to the Windchill instance for which you set up the SSO. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md). ++## Additional resources ++* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) +* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md) ++## Next steps ++Once you configure Windchill, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad). |
aks | Availability Zones | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/availability-zones.md | description: Learn how to create a cluster that distributes nodes across availab Previously updated : 03/31/2022 Last updated : 02/22/2023 # Create an Azure Kubernetes Service (AKS) cluster that uses availability zones -An Azure Kubernetes Service (AKS) cluster distributes resources such as nodes and storage across logical sections of underlying Azure infrastructure. This deployment model when using availability zones, ensures nodes in a given availability zone are physically separated from those defined in another availability zone. AKS clusters deployed with multiple availability zones configured across a cluster provide a higher level of availability to protect against a hardware failure or a planned maintenance event. +An Azure Kubernetes Service (AKS) cluster distributes resources such as nodes and storage across logical sections of underlying Azure infrastructure. Using availability zones physically separates nodes from other nodes deployed to different availability zones. AKS clusters deployed with multiple availability zones configured across a cluster provide a higher level of availability to protect against a hardware failure or a planned maintenance event. -By defining node pools in a cluster to span multiple zones, nodes in a given node pool are able to continue operating even if a single zone has gone down. Your applications can continue to be available even if there is a physical failure in a single datacenter if orchestrated to tolerate failure of a subset of nodes. +By defining node pools in a cluster to span multiple zones, nodes in a given node pool are able to continue operating even if a single zone has gone down. Your applications can continue to be available even if there's a physical failure in a single datacenter if orchestrated to tolerate failure of a subset of nodes. This article shows you how to create an AKS cluster and distribute the node components across availability zones. ## Before you begin -You need the Azure CLI version 2.0.76 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli]. +You need the Azure CLI version 2.0.76 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli]. ## Limitations and region availability -AKS clusters can be created using availability zones in any Azure region that has availability zones. +AKS clusters can use availability zones in any Azure region that has availability zones. The following limitations apply when you create an AKS cluster using availability zones: -* You can only define availability zones when the cluster or node pool is created. -* Availability zone settings can't be updated after the cluster is created. You also can't update an existing, non-availability zone cluster to use availability zones. +* You can only define availability zones during creation of the cluster or node pool. +* It is not possible to update an existing non-availability zone cluster to use availability zones after creating the cluster. * The chosen node size (VM SKU) selected must be available across all availability zones selected.-* Clusters with availability zones enabled require use of Azure Standard Load Balancers for distribution across zones. This load balancer type can only be defined at cluster create time. 
For more information and the limitations of the standard load balancer, see [Azure load balancer standard SKU limitations][standard-lb-limitations]. +* Clusters with availability zones enabled require using Azure Standard Load Balancers for distribution across zones. You can only define this load balancer type at cluster create time. For more information and the limitations of the standard load balancer, see [Azure load balancer standard SKU limitations][standard-lb-limitations]. ### Azure disk availability zone support + - Volumes that use Azure managed LRS disks aren't zone-redundant resources, and attaching them across zones isn't supported. You need to co-locate volumes in the same zone as the specified node hosting the target pod. + - Volumes that use Azure managed ZRS disks (supported by Azure Disk CSI driver v1.5.0 and later) are zone-redundant resources. You can schedule those volumes on all zone and non-zone agent nodes. -Kubernetes is aware of Azure availability zones since version 1.12. You can deploy a PersistentVolumeClaim object referencing an Azure Managed Disk in a multi-zone AKS cluster and [Kubernetes will take care of scheduling](https://kubernetes.io/docs/setup/best-practices/multiple-zones/#storage-access-for-zones) any pod that claims this PVC in the correct availability zone. +Kubernetes has been aware of Azure availability zones since version 1.12. You can deploy a PersistentVolumeClaim object referencing an Azure Managed Disk in a multi-zone AKS cluster and [Kubernetes takes care of scheduling](https://kubernetes.io/docs/setup/best-practices/multiple-zones/#storage-access-for-zones) any pod that claims this PVC in the correct availability zone. ### Azure Resource Manager templates and availability zones -When *creating* an AKS cluster, if you explicitly define a [null value in a template][arm-template-null] with syntax such as `"availabilityZones": null`, the Resource Manager template treats the property as if it doesn't exist, which means your cluster won’t have availability zones enabled. Also, if you create a cluster with a Resource Manager template that omits the availability zones property, availability zones are disabled. +When *creating* an AKS cluster, understand the following details about specifying availability zones in a template: -You can't update settings for availability zones on an existing cluster, so the behavior is different when updating an AKS cluster with Resource Manager templates. If you explicitly set a null value in your template for availability zones and *update* your cluster, there are no changes made to your cluster for availability zones. However, if you omit the availability zones property with syntax such as `"availabilityZones": []`, the deployment attempts to disable availability zones on your existing AKS cluster and **fails**. +* If you explicitly define a [null value in a template][arm-template-null], for example by specifying `"availabilityZones": null`, the Resource Manager template treats the property as if it doesn't exist. This means your cluster doesn't deploy in an availability zone. +* If you don't include the `"availabilityZones":` property in your Resource Manager template, your cluster doesn't deploy in an availability zone. +* You can't update settings for availability zones on an existing cluster; the behavior is different when you update an AKS cluster with Resource Manager templates.
If you explicitly set a null value in your template for availability zones and *update* your cluster, it doesn't update your cluster for availability zones. However, if you omit the availability zones property with syntax such as `"availabilityZones": []`, the deployment attempts to disable availability zones on your existing AKS cluster and **fails**. ## Overview of availability zones for AKS clusters -Availability zones are a high-availability offering that protects your applications and data from datacenter failures. Zones are unique physical locations within an Azure region. Each zone is made up of one or more datacenters equipped with independent power, cooling, and networking. To ensure resiliency, there's always more than one zone in all zone enabled regions. The physical separation of availability zones within a region protects applications and data from datacenter failures. +Availability zones are a high-availability offering that protects your applications and data from datacenter failures. Zones are unique physical locations within an Azure region. Each zone includes one or more datacenters equipped with independent power, cooling, and networking. To ensure resiliency, there's always more than one zone in all zone enabled regions. The physical separation of availability zones within a region protects applications and data from datacenter failures. For more information, see [What are availability zones in Azure?][az-overview]. -AKS clusters that are deployed using availability zones can distribute nodes across multiple zones within a single region. For example, a cluster in the *East US 2* region can create nodes in all three availability zones in *East US 2*. This distribution of AKS cluster resources improves cluster availability as they're resilient to failure of a specific zone. +AKS clusters deployed using availability zones can distribute nodes across multiple zones within a single region. For example, a cluster in the *East US 2* region can create nodes in all three availability zones in *East US 2*. This distribution of AKS cluster resources improves cluster availability as they're resilient to failure of a specific zone.  -If a single zone becomes unavailable, your applications continue to run if the cluster is spread across multiple zones. +If a single zone becomes unavailable, your applications continue to run on clusters configured to spread across multiple zones. ## Create an AKS cluster across availability zones -When you create a cluster using the [az aks create][az-aks-create] command, the `--zones` parameter defines which zones agent nodes are deployed into. The control plane components such as etcd or the API are spread across the available zones in the region if you define the `--zones` parameter at cluster creation time. The specific zones which the control plane components are spread across are independent of what explicit zones are selected for the initial node pool. +When you create a cluster using the [az aks create][az-aks-create] command, the `--zones` parameter specifies the zones to deploy agent nodes into. The control plane components such as etcd or the API spread across the available zones in the region during cluster deployment. The specific zones that the control plane components spread across, are independent of what explicit zones you select for the initial node pool. -If you don't define any zones for the default agent pool when you create an AKS cluster, control plane components are not guaranteed to spread across availability zones. 
You can add additional node pools using the [az aks nodepool add][az-aks-nodepool-add] command and specify `--zones` for new nodes, but it will not change how the control plane has been spread across zones. Availability zone settings can only be defined at cluster or node pool create-time. +If you don't specify any zones for the default agent pool when you create an AKS cluster, the control plane components aren't present in availability zones. You can add more node pools using the [az aks nodepool add][az-aks-nodepool-add] command and specify `--zones` for new nodes. The command converts the AKS control plane to spread across availability zones. -The following example creates an AKS cluster named *myAKSCluster* in the resource group named *myResourceGroup*. A total of *3* nodes are created - one agent in zone *1*, one in *2*, and then one in *3*. +The following example creates an AKS cluster named *myAKSCluster* in the resource group named *myResourceGroup* with a total of three nodes. One agent in zone *1*, one in *2*, and then one in *3*. ```azurecli-interactive az group create --name myResourceGroup --location eastus2 az aks create \ It takes a few minutes to create the AKS cluster. -When deciding what zone a new node should belong to, a given AKS node pool will use a [best effort zone balancing offered by underlying Azure Virtual Machine Scale Sets][vmss-zone-balancing]. A given AKS node pool is considered "balanced" if each zone has the same number of VMs or +\- 1 VM in all other zones for the scale set. +When deciding what zone a new node should belong to, a specified AKS node pool uses a [best effort zone balancing offered by underlying Azure Virtual Machine Scale Sets][vmss-zone-balancing]. The AKS node pool is "balanced" when each zone has the same number of VMs or +\- one VM in all other zones for the scale set. ## Verify node distribution across zones -When the cluster is ready, list the agent nodes in the scale set to see what availability zone they're deployed in. +When the cluster is ready, list what availability zone the agent nodes in the scale set are in. First, get the AKS cluster credentials using the [az aks get-credentials][az-aks-get-credentials] command: az aks get-credentials --resource-group myResourceGroup --name myAKSCluster Next, use the [kubectl describe][kubectl-describe] command to list the nodes in the cluster and filter on the `topology.kubernetes.io/zone` value. The following example is for a Bash shell. -```console +```bash kubectl describe nodes | grep -e "Name:" -e "topology.kubernetes.io/zone" ``` Name: aks-nodepool1-28993262-vmss000002 topology.kubernetes.io/zone=eastus2-3 ``` -As you add additional nodes to an agent pool, the Azure platform automatically distributes the underlying VMs across the specified availability zones. +As you add more nodes to an agent pool, the Azure platform automatically distributes the underlying VMs across the specified availability zones. -Note that in newer Kubernetes versions (1.17.0 and later), AKS is using the newer label `topology.kubernetes.io/zone` in addition to the deprecated `failure-domain.beta.kubernetes.io/zone`. You can get the same result as above with by running the following script: +With Kubernetes versions 1.17.0 and later, AKS uses the newer label `topology.kubernetes.io/zone` and the deprecated `failure-domain.beta.kubernetes.io/zone`. 
You can get the same result as running the `kubectl describe nodes` command in the previous step by running the following script: -```console + ```console kubectl get nodes -o custom-columns=NAME:'{.metadata.name}',REGION:'{.metadata.labels.topology\.kubernetes\.io/region}',ZONE:'{metadata.labels.topology\.kubernetes\.io/zone}' ``` -Which will give you a more succinct output: +This command gives a more succinct output, similar to the following example: ```console NAME REGION ZONE aks-nodepool1-34917322-vmss000002 eastus eastus-3 ## Verify pod distribution across zones -As documented in [Well-Known Labels, Annotations and Taints][kubectl-well_known_labels], Kubernetes uses the `topology.kubernetes.io/zone` label to automatically distribute pods in a replication controller or service across the different zones available. In order to test this, you can scale up your cluster from 3 to 5 nodes, to verify correct pod spreading: +As documented in [Well-Known Labels, Annotations and Taints][kubectl-well_known_labels], Kubernetes uses the `topology.kubernetes.io/zone` label to automatically distribute pods in a replication controller or service across the different zones available. To test the label, scale your cluster from 3 to 5 nodes and verify that the pods spread correctly: ```azurecli-interactive az aks scale \ az aks scale \ --node-count 5 ``` -When the scale operation completes after a few minutes, the command `kubectl describe nodes | grep -e "Name:" -e "topology.kubernetes.io/zone"` in a Bash shell should give an output similar to this sample: +When the scale operation completes after a few minutes, run the command `kubectl describe nodes | grep -e "Name:" -e "topology.kubernetes.io/zone"` in a Bash shell. The output is similar to the following example: ```console Name: aks-nodepool1-28993262-vmss000000 Name: aks-nodepool1-28993262-vmss000004 topology.kubernetes.io/zone=eastus2-2 ``` -We now have two additional nodes in zones 1 and 2. You can deploy an application consisting of three replicas. We will use NGINX as an example: +You now have two more nodes in zones 1 and 2. You can deploy an application consisting of three replicas. The following example uses NGINX: -```console +```bash kubectl create deployment nginx --image=mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine kubectl scale deployment nginx --replicas=3 ``` -By viewing nodes where your pods are running, you see pods are running on the nodes corresponding to three different availability zones. For example, with the command `kubectl describe pod | grep -e "^Name:" -e "^Node:"` in a Bash shell you would get an output similar to this: +By viewing nodes where your pods are running, you see pods are running on the nodes corresponding to three different availability zones. For example, with the command `kubectl describe pod | grep -e "^Name:" -e "^Node:"` in a Bash shell, you see the following example output: ```console Name: nginx-6db489d4b7-ktdwg Name: nginx-6db489d4b7-xz6wj Node: aks-nodepool1-28993262-vmss000004/10.240.0.8 ``` -As you can see from the previous output, the first pod is running on node 0, which is located in the availability zone `eastus2-1`. The second pod is running on node 2, which corresponds to `eastus2-3`, and the third one in node 4, in `eastus2-2`. Without any additional configuration, Kubernetes is spreading the pods correctly across all three availability zones. +As you can see from the previous output, the first pod is running on node 0 located in the availability zone `eastus2-1`.
The second pod is running on node 2, corresponding to `eastus2-3`, and the third one in node 4, in `eastus2-2`. Without any extra configuration, Kubernetes spreads the pods correctly across all three availability zones. ## Next steps -This article detailed how to create an AKS cluster that uses availability zones. For more considerations on highly available clusters, see [Best practices for business continuity and disaster recovery in AKS][best-practices-bc-dr]. +This article described how to create an AKS cluster using availability zones. For more considerations on highly available clusters, see [Best practices for business continuity and disaster recovery in AKS][best-practices-bc-dr]. <!-- LINKS - internal --> [install-azure-cli]: /cli/azure/install-azure-cli |
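The article notes that you can add more node pools with `az aks nodepool add` and the `--zones` parameter. As a minimal sketch, reusing the resource group and cluster names from the earlier examples (the pool name is an assumed placeholder):

```azurecli-interactive
# Add a second node pool that also spans the three availability zones in the region.
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name mynodepool \
    --node-count 3 \
    --zones 1 2 3
```

You can then rerun the `kubectl describe nodes` filter shown earlier to confirm that the new nodes also carry zone labels.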
aks | Concepts Clusters Workloads | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-clusters-workloads.md | To configure or directly access a control plane, deploy a self-managed Kubernete For associated best practices, see [Best practices for cluster security and upgrades in AKS][operator-best-practices-cluster-security]. +For AKS cost management information, see [AKS cost basics](https://learn.microsoft.com/azure/architecture/aws-professional/eks-to-aks/cost-management#aks-cost-basics) and [Pricing for AKS](https://azure.microsoft.com/pricing/details/kubernetes-service/#pricing). + ## Nodes and node pools To run your applications and supporting services, you need a Kubernetes *node*. An AKS cluster has at least one node, an Azure virtual machine (VM) that runs the Kubernetes node components and container runtime. To run your applications and supporting services, you need a Kubernetes *node*.  -The Azure VM size for your nodes defines CPUs, memory, size, and the storage type available (such as high-performance SSD or regular HDD). Plan the node size around whether your applications may require large amounts of CPU and memory or high-performance storage. Scale out the number of nodes in your AKS cluster to meet demand. +The Azure VM size for your nodes defines CPUs, memory, size, and the storage type available (such as high-performance SSD or regular HDD). Plan the node size around whether your applications may require large amounts of CPU and memory or high-performance storage. Scale out the number of nodes in your AKS cluster to meet demand. For more information on scaling, see [Scaling options for applications in AKS](concepts-scale.md). In AKS, the VM image for your cluster's nodes is based on Ubuntu Linux or Windows Server 2019. When you create an AKS cluster or scale out the number of nodes, the Azure platform automatically creates and configures the requested number of VMs. Agent nodes are billed as standard VMs, so any VM size discounts (including [Azure reservations][reservation-discounts]) are automatically applied. |
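To make the scaling point concrete, here's a minimal sketch of manually scaling a node pool with the Azure CLI; the resource group and cluster names are placeholders:

```azurecli-interactive
# Scale the cluster's default node pool to five nodes to meet demand.
az aks scale \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --node-count 5
```

For workloads with variable demand, the cluster autoscaler can adjust the node count automatically instead of manual scaling.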
aks | Concepts Network | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-network.md | The following behavior differences exist between kubenet and Azure CNI: Regarding DNS, with both kubenet and Azure CNI plugins DNS are offered by CoreDNS, a deployment running in AKS with its own autoscaler. For more information on CoreDNS on Kubernetes, see [Customizing DNS Service](https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/). CoreDNS by default is configured to forward unknown domains to the DNS functionality of the Azure Virtual Network where the AKS cluster is deployed. Hence, Azure DNS and Private Zones will work for pods running in AKS. -For more information on Azure CNI and kubenet and to help determine which option is best for you, see [Configure Azure CNI networking in AKS][azure-cni-aks] and [Use kubenet networking in AKS][kubenet-aks]. +For more information on Azure CNI and kubenet and to help determine which option is best for you, see [Configure Azure CNI networking in AKS][azure-cni-aks] and [Use kubenet networking in AKS][aks-configure-kubenet-networking]. ### Support scope between network models For more information on core Kubernetes and AKS concepts, see the following arti [ip-preservation]: https://techcommunity.microsoft.com/t5/fasttrack-for-azure/how-client-source-ip-preservation-works-for-loadbalancer/ba-p/3033722#:~:text=Enable%20Client%20source%20IP%20preservation%201%20Edit%20loadbalancer,is%20the%20same%20as%20the%20source%20IP%20%28srjumpbox%29. [nsg-traffic]: ../virtual-network/network-security-group-how-it-works.md [azure-cni-aks]: configure-azure-cni.md-[kubenet-aks]: configure-kubenet.md |
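To illustrate where the kubenet versus Azure CNI choice is made, here's a hedged sketch of creating a cluster with each network plugin; the cluster names and subnet resource ID are placeholders, and other options are left at their defaults:

```azurecli-interactive
# Create a cluster that uses kubenet networking.
az aks create --resource-group myResourceGroup --name myKubenetCluster \
    --network-plugin kubenet --generate-ssh-keys

# Create a cluster that uses Azure CNI networking in an existing subnet.
az aks create --resource-group myResourceGroup --name myAzureCniCluster \
    --network-plugin azure \
    --vnet-subnet-id "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Network/virtualNetworks/<vnet-name>/subnets/<subnet-name>" \
    --generate-ssh-keys
```

The plugin is selected at cluster creation time, so it's worth settling this choice before you deploy.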
aks | Concepts Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-security.md | description: Learn about security in Azure Kubernetes Service (AKS), including m Previously updated : 01/20/2022 Last updated : 02/22/2023 Each evening, Linux nodes in AKS get security patches through their distro secur Nightly updates apply security updates to the OS on the node, but the node image used to create nodes for your cluster remains unchanged. If a new Linux node is added to your cluster, the original image is used to create the node. This new node will receive all the security and kernel updates available during the automatic check every night but will remain unpatched until all checks and restarts are complete. You can use node image upgrade to check for and update node images used by your cluster. For more details on node image upgrade, see [Azure Kubernetes Service (AKS) node image upgrade][node-image-upgrade]. +AKS clusters on the "node-image" auto upgrade channel don't pull security updates through unattended upgrade. They get security updates through the weekly node image upgrade. + #### Windows Server nodes For Windows Server nodes, Windows Update doesn't automatically run and apply the latest updates. Schedule Windows Server node pool upgrades in your AKS cluster around the regular Windows Update release cycle and your own validation process. This upgrade process creates nodes that run the latest Windows Server image and patches, then removes the older nodes. For more information on this process, see [Upgrade a node pool in AKS][nodepool-upgrade]. |
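As a sketch of the two patching paths described above (an on-demand node image upgrade versus the node-image auto upgrade channel), with placeholder resource names:

```azurecli-interactive
# Upgrade a node pool to the latest node image, which includes the latest OS security patches.
az aks nodepool upgrade \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name nodepool1 \
    --node-image-only

# Or opt the cluster into weekly node image updates through the node-image auto upgrade channel.
az aks update \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --auto-upgrade-channel node-image
```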
aks | Concepts Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-storage.md | Like using a secret: Volumes defined and created as part of the pod lifecycle only exist until you delete the pod. Pods often expect their storage to remain if a pod is rescheduled on a different host during a maintenance event, especially in StatefulSets. A *persistent volume* (PV) is a storage resource created and managed by the Kubernetes API that can exist beyond the lifetime of an individual pod. -You can use Azure Disks or Files to provide the PersistentVolume. As noted in the [Volumes](#volumes) section, the choice of Disks or Files is often determined by the need for concurrent access to the data or the performance tier. +You can use [Azure Disks](azure-csi-disk-storage-provision.md) or [Azure Files](azure-csi-files-storage-provision.md) to provide the PersistentVolume. As noted in the [Volumes](#volumes) section, the choice of Disks or Files is often determined by the need for concurrent access to the data or the performance tier.  A PersistentVolume can be *statically* created by a cluster administrator, or *d ## Storage classes -To define different tiers of storage, such as Premium and Standard, you can create a *StorageClass*. +To define different tiers of storage, such as Premium and Standard, you can create a *StorageClass*. -The StorageClass also defines the *reclaimPolicy*. When you delete the pod and the persistent volume is no longer required, the reclaimPolicy controls the behavior of the underlying Azure storage resource. The underlying storage resource can either be deleted or kept for use with a future pod. +The StorageClass also defines the *reclaimPolicy*. When you delete the persistent volume, the reclaimPolicy controls the behavior of the underlying Azure storage resource. The underlying storage resource can either be deleted or kept for use with a future pod. For clusters using the [Container Storage Interface (CSI) drivers][csi-storage-drivers] the following extra `StorageClasses` are created: allowVolumeExpansion: true > [!NOTE] > AKS reconciles the default storage classes and will overwrite any changes you make to those storage classes. +For more information about storage classes, see [StorageClass in Kubernetes](https://kubernetes.io/docs/concepts/storage/storage-classes/). + ## Persistent volume claims A PersistentVolumeClaim requests storage of a particular StorageClass, access mode, and size. The Kubernetes API server can dynamically provision the underlying Azure storage resource if no existing resource can fulfill the claim based on the defined StorageClass. |
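To see the storage classes described above on a running cluster, you can inspect them with kubectl. This is only an inspection sketch; `managed-csi` is one of the class names typically created on CSI-enabled clusters and may differ in your environment:

```bash
# List all storage classes, then check the provisioner and reclaimPolicy of one of them.
kubectl get storageclass
kubectl describe storageclass managed-csi
```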
aks | Configure Azure Cni | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-azure-cni.md | This article shows you how to use Azure CNI networking to create and use a virtu * `Microsoft.Network/virtualNetworks/subnets/join/action` * `Microsoft.Network/virtualNetworks/subnets/read` * `Microsoft.Authorization/roleAssignments/write`-* The subnet assigned to the AKS node pool cannot be a [delegated subnet](../virtual-network/subnet-delegation-overview.md). -* AKS doesn't apply Network Security Groups (NSGs) to its subnet and will not modify any of the NSGs associated with that subnet. If you provide your own subnet and add NSGs associated with that subnet, you must ensure the security rules in the NSGs allow traffic within the node CIDR range. For more details, see [Network security groups][aks-network-nsg]. +* The subnet assigned to the AKS node pool can't be a [delegated subnet](../virtual-network/subnet-delegation-overview.md). +* AKS doesn't apply Network Security Groups (NSGs) to its subnet and won't modify any of the NSGs associated with that subnet. If you provide your own subnet and add NSGs associated with that subnet, you must ensure the security rules in the NSGs allow traffic within the node CIDR range. For more details, see [Network security groups][aks-network-nsg]. ## Plan IP addressing for your cluster Clusters configured with Azure CNI networking require additional planning. The s IP addresses for the pods and the cluster's nodes are assigned from the specified subnet within the virtual network. Each node is configured with a primary IP address. By default, 30 additional IP addresses are pre-configured by Azure CNI that are assigned to pods scheduled on the node. When you scale out your cluster, each node is similarly configured with IP addresses from the subnet. You can also view the [maximum pods per node](#maximum-pods-per-node). > [!IMPORTANT]-> The number of IP addresses required should include considerations for upgrade and scaling operations. If you set the IP address range to only support a fixed number of nodes, you cannot upgrade or scale your cluster. +> The number of IP addresses required should include considerations for upgrade and scaling operations. If you set the IP address range to only support a fixed number of nodes, you can't upgrade or scale your cluster. > > * When you **upgrade** your AKS cluster, a new node is deployed into the cluster. Services and workloads begin to run on the new node, and an older node is removed from the cluster. This rolling upgrade process requires a minimum of one additional block of IP addresses to be available. Your node count is then `n + 1`. > * This consideration is particularly important when you use Windows Server node pools. Windows Server nodes in AKS do not automatically apply Windows Updates, instead you perform an upgrade on the node pool. This upgrade deploys new nodes with the latest Window Server 2019 base node image and security patches. For more information on upgrading a Windows Server node pool, see [Upgrade a node pool in AKS][nodepool-upgrade]. The IP address plan for an AKS cluster consists of a virtual network, at least o | | - | | Virtual network | The Azure virtual network can be as large as /8, but is limited to 65,536 configured IP addresses. Consider all your networking needs, including communicating with services in other virtual networks, before configuring your address space. 
For example, if you configure too large of an address space, you may run into issues with overlapping other address spaces within your network.| | Subnet | Must be large enough to accommodate the nodes, pods, and all Kubernetes and Azure resources that might be provisioned in your cluster. For example, if you deploy an internal Azure Load Balancer, its front-end IPs are allocated from the cluster subnet, not public IPs. The subnet size should also take into account upgrade operations or future scaling needs.<p />To calculate the *minimum* subnet size including an additional node for upgrade operations: `(number of nodes + 1) + ((number of nodes + 1) * maximum pods per node that you configure)`<p/>Example for a 50 node cluster: `(51) + (51 * 30 (default)) = 1,581` (/21 or larger)<p/>Example for a 50 node cluster that also includes provision to scale up an additional 10 nodes: `(61) + (61 * 30 (default)) = 1,891` (/21 or larger)<p>If you don't specify a maximum number of pods per node when you create your cluster, the maximum number of pods per node is set to *30*. The minimum number of IP addresses required is based on that value. If you calculate your minimum IP address requirements on a different maximum value, see [how to configure the maximum number of pods per node](#configure-maximumnew-clusters) to set this value when you deploy your cluster. |-| Kubernetes service address range | This range should not be used by any network element on or connected to this virtual network. Service address CIDR must be smaller than /12. You can reuse this range across different AKS clusters. | +| Kubernetes service address range | This range shouldn't be used by any network element on or connected to this virtual network. Service address CIDR must be smaller than /12. You can reuse this range across different AKS clusters. | | Kubernetes DNS service IP address | IP address within the Kubernetes service address range that will be used by cluster service discovery. Don't use the first IP address in your address range. The first address in your subnet range is used for the *kubernetes.default.svc.cluster.local* address. |-| Docker bridge address | The Docker bridge network address represents the default *docker0* bridge network address present in all Docker installations. While *docker0* bridge is not used by AKS clusters or the pods themselves, you must set this address to continue to support scenarios such as *docker build* within the AKS cluster. It is required to select a CIDR for the Docker bridge network address because otherwise Docker will pick a subnet automatically, which could conflict with other CIDRs. You must pick an address space that does not collide with the rest of the CIDRs on your networks, including the cluster's service CIDR and pod CIDR. Default of 172.17.0.1/16. You can reuse this range across different AKS clusters. | +| Docker bridge address | The Docker bridge network address represents the default *docker0* bridge network address present in all Docker installations. While *docker0* bridge isn't used by AKS clusters or the pods themselves, you must set this address to continue to support scenarios such as *docker build* within the AKS cluster. it's required to select a CIDR for the Docker bridge network address because otherwise Docker will pick a subnet automatically, which could conflict with other CIDRs. You must pick an address space that doesn't collide with the rest of the CIDRs on your networks, including the cluster's service CIDR and pod CIDR. Default of 172.17.0.1/16. 
You can reuse this range across different AKS clusters. | ## Maximum pods per node A minimum value for maximum pods per node is enforced to guarantee space for sys ### Configure maximum - existing clusters -The maxPod per node setting can be defined when you create a new node pool. If you need to increase the maxPod per node setting on an existing cluster, add a new node pool with the new desired maxPod count. After migrating your pods to the new pool, delete the older pool. To delete any older pool in a cluster, ensure you are setting node pool modes as defined in the [system node pools document][system-node-pools]. +The maxPod per node setting can be defined when you create a new node pool. If you need to increase the maxPod per node setting on an existing cluster, add a new node pool with the new desired maxPod count. After migrating your pods to the new pool, delete the older pool. To delete any older pool in a cluster, ensure you're setting node pool modes as defined in the [system node pools document][system-node-pools]. ## Deployment parameters When you create an AKS cluster, the following parameters are configurable for Azure CNI networking: -**Virtual network**: The virtual network into which you want to deploy the Kubernetes cluster. If you want to create a new virtual network for your cluster, select *Create new* and follow the steps in the *Create virtual network* section. If you want to select an existing virtual network, make sure it is in the same location and Azure subscription as your Kubernetes cluster. For information about the limits and quotas for an Azure virtual network, see [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-resource-manager-virtual-networking-limits). +**Virtual network**: The virtual network into which you want to deploy the Kubernetes cluster. If you want to create a new virtual network for your cluster, select *Create new* and follow the steps in the *Create virtual network* section. If you want to select an existing virtual network, make sure it's in the same location and Azure subscription as your Kubernetes cluster. For information about the limits and quotas for an Azure virtual network, see [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-resource-manager-virtual-networking-limits). **Subnet**: The subnet within the virtual network where you want to deploy the cluster. If you want to create a new subnet in the virtual network for your cluster, select *Create new* and follow the steps in the *Create subnet* section. For hybrid connectivity, the address range shouldn't overlap with any other virtual networks in your environment. -**Azure Network Plugin**: When Azure network plugin is used, the internal LoadBalancer service with "externalTrafficPolicy=Local" can't be accessed from VMs with an IP in clusterCIDR that does not belong to AKS cluster. +**Azure Network Plugin**: When Azure network plugin is used, the internal LoadBalancer service with "externalTrafficPolicy=Local" can't be accessed from VMs with an IP in clusterCIDR that doesn't belong to AKS cluster. **Kubernetes service address range**: This parameter is the set of virtual IPs that Kubernetes assigns to internal [services][services] in your cluster. 
You can use any private address range that satisfies the following requirements: When you create an AKS cluster, the following parameters are configurable for Az * Must not overlap with any on-premises IPs * Must not be within the ranges `169.254.0.0/16`, `172.30.0.0/16`, `172.31.0.0/16`, or `192.0.2.0/24` -Although it's technically possible to specify a service address range within the same virtual network as your cluster, doing so is not recommended. Unpredictable behavior can result if overlapping IP ranges are used. For more information, see the [FAQ](#frequently-asked-questions) section of this article. For more information on Kubernetes services, see [Services][services] in the Kubernetes documentation. +Although it's technically possible to specify a service address range within the same virtual network as your cluster, doing so isn't recommended. Unpredictable behavior can result if overlapping IP ranges are used. For more information, see the [FAQ](#frequently-asked-questions) section of this article. For more information on Kubernetes services, see [Services][services] in the Kubernetes documentation. **Kubernetes DNS service IP address**: The IP address for the cluster's DNS service. This address must be within the *Kubernetes service address range*. Don't use the first IP address in your address range. The first address in your subnet range is used for the *kubernetes.default.svc.cluster.local* address. -**Docker Bridge address**: The Docker bridge network address represents the default *docker0* bridge network address present in all Docker installations. While *docker0* bridge is not used by AKS clusters or the pods themselves, you must set this address to continue to support scenarios such as *docker build* within the AKS cluster. It is required to select a CIDR for the Docker bridge network address because otherwise Docker will pick a subnet automatically which could conflict with other CIDRs. You must pick an address space that does not collide with the rest of the CIDRs on your networks, including the cluster's service CIDR and pod CIDR. +**Docker Bridge address**: The Docker bridge network address represents the default *docker0* bridge network address present in all Docker installations. While *docker0* bridge isn't used by AKS clusters or the pods themselves, you must set this address to continue to support scenarios such as *docker build* within the AKS cluster. It's required to select a CIDR for the Docker bridge network address because otherwise Docker will pick a subnet automatically, which could conflict with other CIDRs. You must pick an address space that doesn't collide with the rest of the CIDRs on your networks, including the cluster's service CIDR and pod CIDR. ## Configure networking - CLI Set the variables for subscription, resource group and cluster. Consider the fol ## Next steps -To configure Azure CNI networking with dynamic IP allocation and enhanced subnet support, see [Configure Azure CNI networking for dynamic allocation of IPs and enhanced subnet support in AKS](/configure-azure-cni-dynamic-ip-allocation.md). +To configure Azure CNI networking with dynamic IP allocation and enhanced subnet support, see [Configure Azure CNI networking for dynamic allocation of IPs and enhanced subnet support in AKS](configure-azure-cni-dynamic-ip-allocation.md). Learn more about networking in AKS in the following articles: |
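Pulling the deployment parameters above together, a minimal sketch of creating an Azure CNI cluster from the CLI might look like the following. The subnet resource ID and address ranges are example values only; pick ranges that don't overlap with your subnet, other virtual networks, or on-premises networks:

```azurecli-interactive
# Example values only - substitute your own subnet resource ID and non-overlapping ranges.
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --network-plugin azure \
    --vnet-subnet-id "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Network/virtualNetworks/<vnet-name>/subnets/<subnet-name>" \
    --service-cidr 10.0.0.0/16 \
    --dns-service-ip 10.0.0.10 \
    --docker-bridge-address 172.17.0.1/16 \
    --max-pods 50 \
    --generate-ssh-keys
```

Newer Azure CLI versions may warn that the Docker bridge address is ignored; it's shown here only because this article still documents the parameter.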
aks | Internal Lb | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/internal-lb.md | As with any Kubernetes resource, you can directly delete a service, such as `kub ## Next steps -Learn more about Kubernetes services in the [Kubernetes services documentation][kubernetes-services]. +To learn more about Kubernetes services, see the [Kubernetes services documentation][kubernetes-services]. <!-- LINKS - External --> [kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply |
aks | Quick Kubernetes Deploy Bicep Extensibility Kubernetes Provider | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-bicep-extensibility-kubernetes-provider.md | + + Title: Quickstart - Deploy Azure applications to Azure Kubernetes Service clusters using Bicep extensibility Kubernetes provider +description: Learn how to quickly create a Kubernetes cluster and deploy Azure applications in Azure Kubernetes Service (AKS) using Bicep extensibility Kubernetes provider. + Last updated : 02/21/2023+#Customer intent: As a developer or cluster operator, I want to quickly create an AKS cluster and deploy an application so that I can see how to run applications using the managed Kubernetes service in Azure. +++# Quickstart: Deploy Azure applications to Azure Kubernetes Service (AKS) clusters using Bicep extensibility Kubernetes provider (Preview) ++Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you quickly deploy and manage clusters. In this quickstart, you'll deploy a sample multi-container application with a web front-end and a Redis instance to an AKS cluster. ++This quickstart assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)][kubernetes-concepts]. +++> [!IMPORTANT] +> The Bicep Kubernetes provider is currently in preview. You can enable the feature from the [Bicep configuration file](../../azure-resource-manager/bicep/bicep-config.md#enable-experimental-features) by adding: +> +> ```json +> { +> "experimentalFeaturesEnabled": { +> "extensibility": true, +> } +> } +> ``` ++## Prerequisites +++* To set up your environment for Bicep development, see [Install Bicep tools](../../azure-resource-manager/bicep/install.md). After completing those steps, you'll have [Visual Studio Code](https://code.visualstudio.com/) and the [Bicep extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-bicep). You also have either the latest [Azure CLI](/cli/azure/) or the latest [Azure PowerShell module](/powershell/azure/new-azureps-module-az). ++* To create an AKS cluster using a Bicep file, you provide an SSH public key. If you need this resource, see [Create an SSH key pair](#create-an-ssh-key-pair). If not, skip to [Review the Bicep file](#review-the-bicep-file). ++* The identity you use to create your cluster has the appropriate minimum permissions. For more information on access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)](../concepts-identity.md). ++* To deploy a Bicep file, you need write access on the resources you deploy and access to all operations on the `Microsoft.Resources/deployments` resource type. For example, to deploy a virtual machine, you need `Microsoft.Compute/virtualMachines/write and Microsoft.Resources/deployments/*` permissions. For a list of roles and permissions, see [Azure built-in roles](../../role-based-access-control/built-in-roles.md). ++### Create an SSH key pair ++To access AKS nodes, you connect using an SSH key pair (public and private), which you generate using the `ssh-keygen` command. By default, these files are created in the *~/.ssh* directory. Running the `ssh-keygen` command will overwrite any SSH key pair with the same name already existing in the given location. ++1. Go to [https://shell.azure.com](https://shell.azure.com) to open Cloud Shell in your browser. ++1. Run the `ssh-keygen` command. 
The following example creates an SSH key pair using RSA encryption and a bit length of 4096: ++ ```console + ssh-keygen -t rsa -b 4096 + ``` ++For more information about creating SSH keys, see [Create and manage SSH keys for authentication in Azure][ssh-keys]. ++## Review the Bicep file ++The Bicep file used to create an AKS cluster is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/aks/). For more AKS samples, see the [AKS quickstart templates][aks-quickstart-templates] site. +++The resource defined in the Bicep file is [**Microsoft.ContainerService/managedClusters**](/azure/templates/microsoft.containerservice/managedclusters?tabs=bicep&pivots=deployment-language-bicep). ++Save a copy of the file as `main.bicep` to your local computer. ++## Add the application definition ++A [Kubernetes manifest file][kubernetes-deployment] defines a cluster's desired state, such as which container images to run. ++In this quickstart, you use a manifest to create all objects needed to run the [Azure Vote application][azure-vote-app]. This manifest includes two [Kubernetes deployments][kubernetes-deployment]: ++* The sample Azure Vote Python applications +* A Redis instance ++Two [Kubernetes Services][kubernetes-service] are also created: ++* An internal service for the Redis instance +* An external service to access the Azure Vote application from the internet ++Use the following procedure to add the application definition: ++1. Create a file named `azure-vote.yaml` in the same folder as `main.bicep` with the following YAML definition: ++ ```yaml + apiVersion: apps/v1 + kind: Deployment + metadata: + name: azure-vote-back + spec: + replicas: 1 + selector: + matchLabels: + app: azure-vote-back + template: + metadata: + labels: + app: azure-vote-back + spec: + nodeSelector: + "kubernetes.io/os": linux + containers: + - name: azure-vote-back + image: mcr.microsoft.com/oss/bitnami/redis:6.0.8 + env: + - name: ALLOW_EMPTY_PASSWORD + value: "yes" + resources: + requests: + cpu: 100m + memory: 128Mi + limits: + cpu: 250m + memory: 256Mi + ports: + - containerPort: 6379 + name: redis + + apiVersion: v1 + kind: Service + metadata: + name: azure-vote-back + spec: + ports: + - port: 6379 + selector: + app: azure-vote-back + + apiVersion: apps/v1 + kind: Deployment + metadata: + name: azure-vote-front + spec: + replicas: 1 + selector: + matchLabels: + app: azure-vote-front + template: + metadata: + labels: + app: azure-vote-front + spec: + nodeSelector: + "kubernetes.io/os": linux + containers: + - name: azure-vote-front + image: mcr.microsoft.com/azuredocs/azure-vote-front:v1 + resources: + requests: + cpu: 100m + memory: 128Mi + limits: + cpu: 250m + memory: 256Mi + ports: + - containerPort: 80 + env: + - name: REDIS + value: "azure-vote-back" + + apiVersion: v1 + kind: Service + metadata: + name: azure-vote-front + spec: + type: LoadBalancer + ports: + - port: 80 + selector: + app: azure-vote-front + ``` ++ For a breakdown of YAML manifest files, see [Deployments and YAML manifests](../concepts-clusters-workloads.md#deployments-and-yaml-manifests). ++1. Open `main.bicep` in Visual Studio Code. +1. Press <kbd>Ctrl+Shift+P</kbd> to open **Command Palette**. +1. Search for **bicep**, and then select **Bicep: Import Kubernetes Manifest**. 
++ :::image type="content" source="./media/quick-kubernetes-deploy-bicep-extensibility-kubernetes-provider/bicep-extensibility-kubernetes-provider-import-kubernetes-manifest.png" alt-text="Screenshot of Visual Studio Code import Kubernetes Manifest.":::  ++1. Select `azure-vote.yaml` from the prompt. This process creates an `azure-vote.bicep` file in the same folder. +1. Open `azure-vote.bicep` and add the following line at the end of the file to output the load balancer public IP: ++ ```bicep + output frontendIp string = coreService_azureVoteFront.status.loadBalancer.ingress[0].ip + ``` ++1. Before the `output` statement in `main.bicep`, add the following Bicep to reference the newly created `azure-vote.bicep` module: ++ ```bicep + module kubernetes './azure-vote.bicep' = { + name: 'buildbicep-deploy' + params: { + kubeConfig: aks.listClusterAdminCredential().kubeconfigs[0].value + } + } + ``` ++1. At the bottom of `main.bicep`, add the following line to output the load balancer public IP: ++ ```bicep + output lbPublicIp string = kubernetes.outputs.frontendIp + ``` ++1. Save both `main.bicep` and `azure-vote.bicep`. ++## Deploy the Bicep file ++1. Deploy the Bicep file using either Azure CLI or Azure PowerShell. ++ # [CLI](#tab/CLI) ++ ```azurecli + az group create --name myResourceGroup --location eastus + az deployment group create --resource-group myResourceGroup --template-file main.bicep --parameters clusterName=<cluster-name> dnsPrefix=<dns-prefix> linuxAdminUsername=<linux-admin-username> sshRSAPublicKey='<ssh-key>' + ``` ++ # [PowerShell](#tab/PowerShell) ++ ```azurepowershell + New-AzResourceGroup -Name myResourceGroup -Location eastus + New-AzResourceGroupDeployment -ResourceGroupName myResourceGroup -TemplateFile ./main.bicep -clusterName "<cluster-name>" -dnsPrefix "<dns-prefix>" -linuxAdminUsername "<linux-admin-username>" -sshRSAPublicKey "<ssh-key>" + ``` ++ ++ Provide the following values in the commands: ++ * **Cluster name**: Enter a unique name for the AKS cluster, such as *myAKSCluster*. + * **DNS prefix**: Enter a unique DNS prefix for your cluster, such as *myakscluster*. + * **Linux Admin Username**: Enter a username to connect using SSH, such as *azureuser*. + * **SSH RSA Public Key**: Copy and paste the *public* part of your SSH key pair (by default, the contents of *~/.ssh/id_rsa.pub*). ++ It takes a few minutes to create the AKS cluster. Wait for the cluster to be successfully deployed before you move on to the next step. ++2. From the deployment output, look for the `outputs` section. For example: ++ ```json + "outputs": { + "controlPlaneFQDN": { + "type": "String", + "value": "myaks0201-d34ae860.hcp.eastus.azmk8s.io" + }, + "lbPublicIp": { + "type": "String", + "value": "52.179.23.131" + } + }, + ``` ++3. Take note of the value of `lbPublicIp`. ++## Validate the Bicep deployment ++To see the Azure Vote app in action, open a web browser to the external IP address of your service. +++## Clean up resources ++### [Azure CLI](#tab/azure-cli) ++To avoid Azure charges, if you don't plan on going through the tutorials that follow, clean up your unnecessary resources. Use the [`az group delete`][az-group-delete] command to remove the resource group, container service, and all related resources. ++```azurecli-interactive +az group delete --name myResourceGroup --yes --no-wait +``` ++### [Azure PowerShell](#tab/azure-powershell) ++To avoid Azure charges, if you don't plan on going through the tutorials that follow, clean up your unnecessary resources.
Use the [`Remove-AzResourceGroup`][remove-azresourcegroup] cmdlet to remove the resource group, container service, and all related resources. ++```azurepowershell-interactive +Remove-AzResourceGroup -Name myResourceGroup +``` ++++> [!NOTE] +> In this quickstart, the AKS cluster was created with a system-assigned managed identity (the default identity option). This identity is managed by the platform and doesn't require removal. ++## Next steps ++In this quickstart, you deployed a Kubernetes cluster and then deployed a sample multi-container application to it. ++To learn more about AKS, and walk through a complete code to deployment example, continue to the Kubernetes cluster tutorial: ++> [!div class="nextstepaction"] +> [Kubernetes on Azure tutorial: Prepare an application][aks-tutorial] ++<!-- LINKS - external --> +[azure-vote-app]: https://github.com/Azure-Samples/azure-voting-app-redis.git +[kubectl]: https://kubernetes.io/docs/user-guide/kubectl/ +[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply +[kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get +[azure-dev-spaces]: /previous-versions/azure/dev-spaces/ +[aks-quickstart-templates]: https://azure.microsoft.com/resources/templates/?term=Azure+Kubernetes+Service ++<!-- LINKS - internal --> +[kubernetes-concepts]: ../concepts-clusters-workloads.md +[aks-monitor]: ../../azure-monitor/containers/container-insights-onboard.md +[aks-tutorial]: ../tutorial-kubernetes-prepare-app.md +[az-aks-browse]: /cli/azure/aks#az_aks_browse +[az-aks-create]: /cli/azure/aks#az_aks_create +[az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials +[import-azakscredential]: /powershell/module/az.aks/import-azakscredential +[az-aks-install-cli]: /cli/azure/aks#az_aks_install_cli +[install-azakskubectl]: /powershell/module/az.aks/install-azaksclitool +[az-group-create]: /cli/azure/group#az_group_create +[az-group-delete]: /cli/azure/group#az_group_delete +[remove-azresourcegroup]: /powershell/module/az.resources/remove-azresourcegroup +[azure-cli-install]: /cli/azure/install-azure-cli +[install-azure-powershell]: /powershell/azure/install-az-ps +[connect-azaccount]: /powershell/module/az.accounts/Connect-AzAccount +[sp-delete]: ../kubernetes-service-principal.md#additional-considerations +[azure-portal]: https://portal.azure.com +[kubernetes-deployment]: ../concepts-clusters-workloads.md#deployments-and-yaml-manifests +[kubernetes-service]: ../concepts-network.md#services +[ssh-keys]: ../../virtual-machines/linux/create-ssh-keys-detailed.md +[az-ad-sp-create-for-rbac]: /cli/azure/ad/sp#az_ad_sp_create_for_rbac |
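As a follow-up to the deployment step in the Bicep quickstart above, the lbPublicIp output can also be retrieved after the fact. A hedged sketch, assuming the deployment kept the default name main derived from main.bicep (pass a different name if you set one explicitly):

```azurecli
# Query the load balancer public IP output from the completed deployment.
az deployment group show \
    --resource-group myResourceGroup \
    --name main \
    --query properties.outputs.lbPublicIp.value \
    --output tsv
```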
aks | Load Balancer Standard | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/load-balancer-standard.md | description: Learn how to use a public load balancer with a Standard SKU to expo Previously updated : 11/14/2020 Last updated : 02/22/2023 az aks create \ ### Configure the allocated outbound ports > [!IMPORTANT]+> > If you have applications on your cluster that can establish a large number of connections to a small set of destinations, like many instances of a frontend application connecting to a database, you may have a scenario susceptible to SNAT port exhaustion. SNAT port exhaustion happens when an application runs out of outbound ports to use to establish a connection to another application or host. If you have a scenario susceptible to SNAT port exhaustion, we highly recommend you increase the allocated outbound ports and outbound frontend IPs on the load balancer.+> +> For more information on SNAT, see [Use SNAT for outbound connections](../load-balancer/load-balancer-outbound-connections.md). By default, AKS sets *AllocatedOutboundPorts* on its load balancer to `0`, which enables [automatic outbound port assignment based on backend pool size][azure-lb-outbound-preallocatedports] when creating a cluster. For example, if a cluster has 50 or fewer nodes, 1024 ports are allocated to each node. As the number of nodes in the cluster increases, fewer ports are available per node. To show the *AllocatedOutboundPorts* value for the AKS cluster load balancer, use `az network lb outbound-rule list`. |
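To act on the SNAT guidance in the load-balancer-standard entry above, you can inspect the current outbound rule and then raise the allocated ports and outbound IPs. The values below are illustrative, not sizing recommendations, and <node-resource-group> is a placeholder for the cluster's node resource group:

```azurecli
# Show the current AllocatedOutboundPorts value (the AKS-managed load balancer is named "kubernetes").
az network lb outbound-rule list \
    --resource-group <node-resource-group> \
    --lb-name kubernetes \
    --output table

# Increase outbound frontend IPs and the ports allocated per node (illustrative values).
az aks update \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --load-balancer-managed-outbound-ip-count 2 \
    --load-balancer-outbound-ports 4000
```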
api-management | Api Management Howto Use Azure Monitor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-use-azure-monitor.md | To configure resource logs: 1. After configuring details for the log destination or destinations, select **Save**. +> [!NOTE] +> Adding a diagnostic setting object might result in a failure if the [MinApiVersion property](/dotnet/api/microsoft.azure.management.apimanagement.models.apiversionconstraint.minapiversion) of your API Management service is set to any API version higher than 2019-12-01. + For more information, see [Create diagnostic settings to send platform logs and metrics to different destinations](../azure-monitor/essentials/diagnostic-settings.md). ## View diagnostic data in Azure Monitor |
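The diagnostic setting described in the api-management entry above can also be created outside the portal. A sketch using the Azure CLI, with placeholder resource IDs and the assumption that GatewayLogs is the log category you want to route:

```azurecli
# Route API Management gateway logs and metrics to a Log Analytics workspace (IDs are placeholders).
az monitor diagnostic-settings create \
    --name apim-diagnostics \
    --resource <api-management-resource-id> \
    --workspace <log-analytics-workspace-id> \
    --logs '[{"category":"GatewayLogs","enabled":true}]' \
    --metrics '[{"category":"AllMetrics","enabled":true}]'
```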
application-gateway | Configuration Infrastructure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configuration-infrastructure.md | You should check your [Azure role-based access control](../role-based-access-con If a [built-in](../role-based-access-control/built-in-roles.md) role doesn't provide the right permission, you can [create and assign a custom role](../role-based-access-control/custom-roles-portal.md) for this purpose. Also, [allow sufficient time](../role-based-access-control/troubleshooting.md?tabs=bicep#symptomrole-assignment-changes-are-not-being-detected) after you make changes to role assignments. > [!NOTE]-> As a temporary extension, we have introduced a subscription-level Azure Feature Exposure Control (AFEC) flag to help you fix the permissions for all your users and/or service principals' permissions. Register for this interim feature on your own through a subscription owner, contributor, or custom role. </br> +> As a temporary extension, we have introduced a subscription-level [Azure Feature Exposure Control (AFEC)](../azure-resource-manager/management/preview-features.md?tabs=azure-portal) flag to help you fix the permissions for all your users and/or service principals. Register for this interim feature on your own through a subscription owner, contributor, or custom role. </br> > > "**name**": "Microsoft.Network/DisableApplicationGatewaySubnetPermissionCheck", </br> > "**description**": "Disable Application Gateway Subnet Permission Check", </br> > "**providerNamespace**": "Microsoft.Network", </br> > "**enrollmentType**": "AutoApprove" </br> > -> The provision to circumvent the virtual network permission check by using this feature control is **available only for a limited period, until 6th April 2023**. Ensure all the roles and permissions managing Application Gateways are updated by then, as there will be no further extensions. Read more about [Preview Feature registration](../azure-resource-manager/management/preview-features.md?tabs=azure-portal). +> The provision to circumvent the virtual network permission check by using this feature control is **available only for a limited period, until 6th April 2023**. Ensure all the roles and permissions managing Application Gateways are updated by then, as there will be no further extensions. [Set up this flag in your Azure subscription](../azure-resource-manager/management/preview-features.md?tabs=azure-portal). ## Network security groups |
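Registering the interim AFEC flag named in the note above follows the standard preview feature flow. A minimal sketch:

```azurecli
# Register the temporary subscription-level feature flag, then confirm its registration state.
az feature register \
    --namespace Microsoft.Network \
    --name DisableApplicationGatewaySubnetPermissionCheck

az feature show \
    --namespace Microsoft.Network \
    --name DisableApplicationGatewaySubnetPermissionCheck \
    --query properties.state
```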
applied-ai-services | Concept Id Document | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-id-document.md | -Form Recognizer Identity document (ID) model combines Optical Character Recognition (OCR) with deep learning models to analyze and extract key information from identity documents such as US Drivers Licenses (all 50 states and District of Columbia), international passport biographical pages, US state IDs, social security cards, and permanent resident cards and more. The API analyzes identity documents, extracts key information, and returns a structured JSON data representation. +Form Recognizer Identity document (ID) model combines Optical Character Recognition (OCR) with deep learning models to analyze and extract key information from identity documents. The API analyzes identity documents (including the following) and returns a structured JSON data representation: ++* US Drivers Licenses (all 50 states and District of Columbia) +* International passport biographical pages +* US state IDs +* Social Security cards +* Permanent resident cards ::: moniker-end The prebuilt IDs service extracts the key values from worldwide passports and U. ## Development options ::: moniker range="form-recog-3.0.0"-The following tools are supported by Form Recognizer v3.0: +Form Recognizer v3.0 supports the following tools: | Feature | Resources | Model ID | |-|-|--| The following tools are supported by Form Recognizer v3.0: ::: moniker range="form-recog-2.1.0" -The following tools are supported by Form Recognizer v2.1: +Form Recognizer v2.1 supports the following tools: | Feature | Resources | |-|-| The following tools are supported by Form Recognizer v2.1: ::: moniker range="form-recog-2.1.0" * Supported file formats: JPEG, PNG, PDF, and TIFF-* For PDF and TIFF, up to 2000 pages are processed. For free tier subscribers, only the first two pages are processed. +* Form Recognizer processes PDF and TIFF files up to 2000 pages or only the first two pages for free-tier subscribers. * The file size must be less than 50 MB and dimensions at least 50 x 50 pixels and at most 10,000 x 10,000 pixels. ::: moniker-end ### Try Form Recognizer -Extract data, including name, birth date, and expiration date, from ID documents. You'll need the following resources: +Extract data, including name, birth date, and expiration date, from ID documents. You need the following resources: * An Azure subscriptionΓÇöyou can [create one for free](https://azure.microsoft.com/free/cognitive-services/) Extract data, including name, birth date, and expiration date, from ID documents 1. In the **key** field, paste the key you obtained from your Form Recognizer resource. - :::image type="content" source="media/fott-select-form-type.png" alt-text="Screenshot: select form type dropdown menu."::: + :::image type="content" source="media/fott-select-form-type.png" alt-text="Screenshot: select document type dropdown menu."::: -1. Select **Run analysis**. The Form Recognizer Sample Labeling tool will call the Analyze Prebuilt API and analyze the document. +1. Select **Run analysis**. The Form Recognizer Sample Labeling tool calls the Analyze Prebuilt API and analyzes the document. 1. View the results - see the key-value pairs extracted, line items, highlighted text extracted and tables detected. Extract data, including name, birth date, and expiration date, from ID documents * The "readResults" node contains every line of text with its respective bounding box placement on the page. 
* The "selectionMarks" node shows every selection mark (checkbox, radio mark) and whether its status is "selected" or "unselected".- * The "pageResults" section includes the tables extracted. For each table, the text, row, and column index, row and column spanning, bounding box, and more are extracted. + * The "pageResults" section includes the tables extracted. For each table, Form Recognizer extracts the text, row, and column index, row and column spanning, bounding box, and more. * The "documentResults" field contains key/value pairs information and line items information for the most relevant parts of the document. > [!NOTE] Extract data, including name, birth date, and expiration date, from ID documents ::: moniker range="form-recog-3.0.0" -## Supported languages and locales -->[!NOTE] - > It's not necessary to specify a locale. This is an optional parameter. The Form Recognizer deep-learning technology will auto-detect the language of the text in your image. +## Supported document types -| Model | LanguageΓÇöLocale code | Default | -|--|:-|:| -|ID document| <ul><li>English (United States)ΓÇöen-US (driver's license)</li><li>Biographical pages from international passports</br> (excluding visa and other travel documents)</li><li>English (United States)ΓÇöen-US (state ID)</li><li>English (United States)ΓÇöen-US (social security card)</li><li>English (United States)ΓÇöen-US (permanent resident card)</li></ul></br>|English (United States)ΓÇöen-US| +| Region | Document Types | +|--|-| +|Worldwide|Passport Book, Passport Card| +|`United States (US)`|Driver License, Identification Card, Residency Permit (Green card), Social Security Card, Military ID| +|`India (IN)`|Driver License, PAN Card, Aadhaar Card| +|`Canada (CA)`|Driver License, Identification Card, Residency Permit (Maple Card)| +|`United Kingdom (GB)`|Driver License, National Identity Card| +|`Australia (AU)`|Driver License, Photo Card, Key-pass ID (including digital version)| ## Field extractions -Below are the fields extracted per document type. The Azure Form Recognizer ID model `prebuilt-idDocument` extracts the below fields in the `documents.*.fields`. It also extracts all the text in the documents, words, lines, and styles that are included in the JSON output in the different sections. +The following are the fields extracted per document type. The Azure Form Recognizer ID model `prebuilt-idDocument` extracts the following fields in the `documents.*.fields`. The json output includes all the extracted text in the documents, words, lines, and styles. >[!NOTE] > Below are the fields extracted per document type. The Azure Form Recognizer ID m |`PlaceOfBirth`|`string`|Place of birth|MASSACHUSETTS, U.S.A.| |`PlaceOfIssue`|`string`|Place of issue|LA PAZ| |`IssuingAuthority`|`string`|Issuing authority|United States Department of State|-|`PersonalNumber`|`string`|Personal Id. No.|A234567893| +|`PersonalNumber`|`string`|Personal ID. No.|A234567893| |`MachineReadableZone`|`object`|Machine readable zone (MRZ)|P<USABROOKS<<JENNIFER<<<<<<<<<<<<<<<<<<<<<<< 3400200135USA8001014F1905054710000307<715816| |`MachineReadableZone.FirstName`|`string`|Given name and middle initial if applicable|JENNIFER| |`MachineReadableZone.LastName`|`string`|Surname|BROOKS| Below are the fields extracted per document type. The Azure Form Recognizer ID m #### `idDocument` field extracted -|Name| Type | Description | Standardized output| -|:--|:-|:-|:-| -| DateOfIssue | Date | Issue date | yyyy-mm-dd | -| Height | String | Height of the holder. 
| | -| Weight | String | Weight of the holder. | | -| EyeColor | String | Eye color of the holder. | | -| HairColor | String | Hair color of the holder. | | -| DocumentDiscriminator | String | Document discriminator is a security code that identifies where and when the license was issued. | | -| Endorsements | String | More driving privileges granted to a driver such as Motorcycle or School bus. | | -| Restrictions | String | Restricted driving privileges applicable to suspended or revoked licenses.| | -| VehicleClassification | String | Types of vehicles that can be driven by a driver. || -| CountryRegion | countryRegion | Country or region code compliant with ISO 3166 standard | | -| DateOfBirth | Date | DOB | yyyy-mm-dd | -| DateOfExpiration | Date | Expiration date DOB | yyyy-mm-dd | -| DocumentNumber | String | Relevant passport number, driver's license number, etc. | | -| FirstName | String | Extracted given name and middle initial if applicable | | -| LastName | String | Extracted surname | | -| Nationality | countryRegion | Country or region code compliant with ISO 3166 standard (Passport only) | | -| Sex | String | Possible extracted values include "M", "F" and "X" | | -| MachineReadableZone | Object | Extracted Passport MRZ including two lines of 44 characters each | "P<USABROOKS<<JENNIFER<<<<<<<<<<<<<<<<<<<<<<< 3400200135USA8001014F1905054710000307<715816" | -| DocumentType | String | Document type, for example, Passport, Driver's License, Social security card and more | "passport" | -| Address | String | Extracted address, address is also parsed to its components - address, city, state, country, zip code || -| Region | String | Extracted region, state, province, etc. (Driver's License only) | | +| Field | Type | Description | Example | +|:|:--|:|:--| +|`Address`|`address`|Address|123 STREET ADDRESS YOUR CITY WA 99999-1234| +|`DocumentNumber`|`string`|Driver license number|WDLABCD456DG| +|`FirstName`|`string`|Given name and middle initial if applicable|LIAM R.| +|`LastName`|`string`|Surname|TALBOT| +|`DateOfBirth`|`date`|Date of birth|01/06/1958| +|`DateOfExpiration`|`date`|Date of expiration|08/12/2020| ::: moniker-end |
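For the prebuilt-idDocument model covered above, a v3.0 REST call might look like the following sketch (the endpoint, key, and document URL are placeholders; the response includes an Operation-Location header to poll for results):

```bash
# Analyze an identity document with the prebuilt ID model (2022-08-31 GA API version).
curl -X POST "https://<your-resource>.cognitiveservices.azure.com/formrecognizer/documentModels/prebuilt-idDocument:analyze?api-version=2022-08-31" \
  -H "Ocp-Apim-Subscription-Key: <your-key>" \
  -H "Content-Type: application/json" \
  -d '{"urlSource": "https://example.com/sample-id.jpg"}'
```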
applied-ai-services | Concept Receipt | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-receipt.md | The Form Recognizer receipt model combines powerful Optical Character Recognitio ## Receipt data extraction -Receipt digitization is the process of converting scanned receipts into digital form for downstream processing. Azure Form Recognizer OCR-powered receipt data extraction helps to automate the conversion and save time and effort. The output from the receipt data extraction is used for accounts payable and receivables automation, sales data analytics, and other business scenarios. +Receipt digitization is the process of converting scanned receipts into digital form for downstream processing. Azure Form Recognizer OCR-powered receipt data extraction helps to automate the conversion and save time and effort. ::: moniker range="form-recog-3.0.0" Receipt digitization is the process of converting scanned receipts into digital ## Development options ::: moniker range="form-recog-3.0.0"-The following tools are supported by Form Recognizer v3.0: +Form Recognizer v3.0 supports the following tools: | Feature | Resources | Model ID | |-|-|--| The following tools are supported by Form Recognizer v3.0: ::: moniker range="form-recog-2.1.0" -The following tools are supported by Form Recognizer v2.1: +Form Recognizer v2.1 supports the following tools: | Feature | Resources | |-|-| The following tools are supported by Form Recognizer v2.1: ::: moniker range="form-recog-2.1.0" * Supported file formats: JPEG, PNG, PDF, and TIFF-* For PDF and TIFF, up to 2000 pages are processed. For free tier subscribers, only the first two pages are processed. +* For PDF and TIFF, Form Recognizer can process up to 2000 pages for standard tier subscribers or only the first two pages for free-tier subscribers. * The file size must be less than 50 MB and dimensions at least 50 x 50 pixels and at most 10,000 x 10,000 pixels. ::: moniker-end ### Try receipt data extraction -See how data, including time and date of transactions, merchant information, and amount totals, is extracted from receipts. You need the following resources: +See how Form Recognizer extracts data, including time and date of transactions, merchant information, and amount totals, from receipts. You need the following resources: * An Azure subscription—you can [create one for free](https://azure.microsoft.com/free/cognitive-services/) The receipt model supports all English receipts and the following locales: |Supported Languages| Details | |:--|:-:|-|• English| United States (-us), Australia (-au), Great Britain (-gb), India (-in), United Arab Emirates (-ae)| -|• Dutch| Netherlands (nl-nl)| -|• French | France (fr-fr), Canada (fr-ca) | -|• German | Germany (de-de) | -|• Italian | Italy (it-it) | -|• Japanese | Japan (ja-ja)| -|• Portuguese| Portugal (pt-pt), Brazil (pt-br)| -|• Spanish | Spain (es-es) | +|• English| United States (-US), Australia (-AU), Great Britain (-GB), India (-IN), United Arab Emirates (-AE)| +|• Dutch| Netherlands (nl-NL)| +|• French | France (fr-FR), Canada (fr-CA) | +|• German | Germany (de-DE) | +|• Italian | Italy (it-IT) | +|• Japanese | Japan (ja-JP)| +|• Portuguese| Portugal (pt-PT), Brazil (pt-BR)| +|• Spanish | Spain (es-ES) | ::: moniker-end ::: moniker range="form-recog-2.1.0" |
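The locale table above maps to the optional locale query parameter on the analyze request. A hedged sketch for a German receipt, with placeholder endpoint, key, and file URL (the parameter is optional because the service can detect the language automatically):

```bash
# Analyze a receipt with an explicit locale hint using the prebuilt receipt model.
curl -X POST "https://<your-resource>.cognitiveservices.azure.com/formrecognizer/documentModels/prebuilt-receipt:analyze?api-version=2022-08-31&locale=de-DE" \
  -H "Ocp-Apim-Subscription-Key: <your-key>" \
  -H "Content-Type: application/json" \
  -d '{"urlSource": "https://example.com/sample-receipt.png"}'
```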
applied-ai-services | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/whats-new.md | -Form Recognizer service is updated on an ongoing basis. Bookmark this page to stay up to date with release notes, feature enhancements, and our newest documentation. +Form Recognizer service updates on an ongoing basis. Bookmark this page to stay up to date with release notes, feature enhancements, and our newest documentation. >[!NOTE] > With the release of the 2022-08-31 GA API, the associated preview APIs are being deprecated. If you are using the 2021-09-30-preview or the 2022-01-30-preview API versions, please update your applications to target the 2022-08-31 API version. There are a few minor changes involved, for more information, _see_ the [migration guide](v3-migration-guide.md). Form Recognizer service is updated on an ongoing basis. Bookmark this page to st The **prebuilt receipt model** now has added support for the following languages: - * English - United Arab Emirates (en-ae) - * Dutch - Netherlands (nl-nl) - * French - Canada (fr-ca) - * German - (de-de) - * Italian - (it-it) - * Japanese - Japan (ja-jp) - * Portuguese - Brazil (pt-br) + * English - United Arab Emirates (en-AE) + * Dutch - Netherlands (nl-NL) + * French - Canada (fr-CA) + * German - (de-DE) + * Italian - (it-IT) + * Japanese - Japan (ja-JP) + * Portuguese - Brazil (pt-BR) * **[Prebuilt invoice model](concept-invoice.md)—additional language support and field extractions** Form Recognizer service is updated on an ongoing basis. Bookmark this page to st The **prebuilt ID document model** now has added support for the following document types: - * Passport, driver's license, and residence permit ID expansion + * Driver's license expansion supporting India, Canada, the United Kingdom, and Australia * US military ID cards and documents- * India ID cards and documents - * Australia ID cards and documents - * Canada ID cards and documents - * United Kingdom ID cards and documents + * India ID cards and documents (PAN and Aadhaar) + * Australia ID cards and documents (photo card, Key-pass ID) + * Canada ID cards and documents (identification card, Maple card) + * United Kingdom ID cards and documents (national identity card) ## December 2022 Form Recognizer service is updated on an ongoing basis. Bookmark this page to st * **Label subtypes and second-level subtypes** The Studio now supports subtypes for table columns, table rows, and second-level subtypes for types such as dates and numbers. -* Building custom neural models is now supported in the US Gov Virginia region. +* The US Gov Virginia region now supports building custom neural models. -* Preview API versions ```2022-01-30-preview``` and ```2021-09-30-preview``` will be retired January 31 2023. Update to the ```2022-08-31``` API version to avoid any service disruptions. +* Preview API versions ```2022-01-30-preview``` and ```2021-09-30-preview``` retire on January 31, 2023. Update to the ```2022-08-31``` API version to avoid any service disruptions. Form Recognizer service is updated on an ongoing basis. Bookmark this page to st ## October 2022 * **Form Recognizer versioned content**- * Form Recognizer documentation has been updated to present a versioned experience. Now, you can choose to view content targeting the v3.0 GA experience or the v2.1 GA experience. The v3.0 experience is the default. + * Form Recognizer documentation now presents a versioned experience.
You can choose to view content targeting the v3.0 GA experience or the v2.1 GA experience. The v3.0 experience is the default. :::image type="content" source="media/versioning-and-monikers.png" alt-text="Screenshot of the Form Recognizer landing page denoting the version dropdown menu."::: Form Recognizer service is updated on an ongoing basis. Bookmark this page to st > * UK South > * West US2 - * For a complete list of regions where training is supported see [custom neural models](concept-custom-neural.md). + * For a complete list of supported training regions, see [custom neural models](concept-custom-neural.md). * Form Recognizer SDK version 4.0.0 GA release * **Form Recognizer SDKs version 4.0.0 (.NET/C#, Java, JavaScript) and version 3.2.0 (Python) are generally available and ready for use in production applications!** Form Recognizer service is updated on an ongoing basis. Bookmark this page to st * [**prebuilt-read**](concept-read.md). Read OCR model is now also available in Form Recognizer with paragraphs and language detection as the two new features. Form Recognizer Read targets advanced document scenarios aligned with the broader document intelligence capabilities in Form Recognizer. * [**prebuilt-layout**](concept-layout.md). The Layout model extracts paragraphs and whether the extracted text is a paragraph, title, section heading, footnote, page header, page footer, or page number. * [**prebuilt-invoice**](concept-invoice.md). The TotalVAT and Line/VAT fields now resolve to the existing fields TotalTax and Line/Tax respectively.- * [**prebuilt-idDocument**](concept-id-document.md). Data extraction support for US state ID, social security, and green cards. Support for passport visa information. + * [**prebuilt-idDocument**](concept-id-document.md). Data extraction support for US state ID, social security, and green cards. * [**prebuilt-receipt**](concept-receipt.md). Expanded locale support for French (fr-FR), Spanish (es-ES), Portuguese (pt-PT), Italian (it-IT) and German (de-DE). * [**prebuilt-businessCard**](concept-business-card.md). Address parsing support to extract subfields for address components like address, city, state, country, and zip code. Form Recognizer service is updated on an ongoing basis. Bookmark this page to st * [**Invoice language expansion**](concept-invoice.md). The invoice model includes expanded language support. _See_ [supported languages](concept-invoice.md#supported-languages-and-locales). * [**Prebuilt business card**](concept-business-card.md) now includes Japanese language support. _See_ [supported languages](concept-business-card.md#supported-languages-and-locales). * [**Prebuilt ID document model**](concept-id-document.md). The ID document model now extracts DateOfIssue, Height, Weight, EyeColor, HairColor, and DocumentDiscriminator from US driver's licenses. _See_ [field extraction](concept-id-document.md).- * [**Read model now supports common Microsoft Office document types**](concept-read.md). Document types like Word (docx) and PowerPoint (ppt) are now supported with the Read API. See [Microsoft Office and HTML text extraction](concept-read.md#microsoft-office-and-html-text-extraction). + * [**Read model now supports common Microsoft Office document types**](concept-read.md). Read API supports document types like Word (docx) and PowerPoint (ppt). See [Microsoft Office and HTML text extraction](concept-read.md#microsoft-office-and-html-text-extraction). Form Recognizer service is updated on an ongoing basis. 
Bookmark this page to st * [**Custom neural model**](concept-custom-neural.md) or custom document model is a new custom model to extract text and selection marks from structured forms, semi-strutured and **unstructured documents**. * [**W-2 prebuilt model**](concept-w2.md) is a new prebuilt model to extract fields from W-2 forms for tax reporting and income verification scenarios. * [**Read**](concept-read.md) API extracts printed text lines, words, text locations, detected languages, and handwritten text, if detected.- * [**General document**](concept-general-document.md) pre-trained model is now updated to support selection marks in addition to API text, tables, structure, key-value pairs, and named entities from forms and documents. + * [**General document**](concept-general-document.md) pre-trained model now support selection marks in addition to API text, tables, structure, key-value pairs, and named entities from forms and documents. * [**Invoice API**](language-support.md#invoice-model) Invoice prebuilt model expands support to Spanish invoices. * [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com) adds new demos for Read, W2, Hotel receipt samples, and support for training the new custom neural models. * [**Language Expansion**](language-support.md) Form Recognizer Read, Layout, and Custom Form add support for 42 new languages including Arabic, Hindi, and other languages using Arabic and Devanagari scripts to expand the coverage to 164 languages. Handwritten language support expands to Japanese and Korean. Form Recognizer service is updated on an ongoing basis. Bookmark this page to st -* Form Recognizer containers v2.1 released in gated preview and are now supported by six feature containersΓÇö**Layout**, **Business Card**,**ID Document**, **Receipt**, **Invoice**, and **Custom**. To use them, you must submit an [online request](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUNlpBU1lFSjJUMFhKNzVHUUVLN1NIOEZETiQlQCN0PWcu), and receive approval. +* Form Recognizer containers v2.1 released in gated preview and now supports six feature containersΓÇö**Layout**, **Business Card**,**ID Document**, **Receipt**, **Invoice**, and **Custom**. To use them, you must submit an [online request](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUNlpBU1lFSjJUMFhKNzVHUUVLN1NIOEZETiQlQCN0PWcu), and receive approval. * *See* [**Install and run Docker containers for Form Recognizer**](containers/form-recognizer-container-install-run.md?branch=main&tabs=layout) and [**Configure Form Recognizer containers**](containers/form-recognizer-container-configuration.md?branch=main) Form Recognizer service is updated on an ongoing basis. Bookmark this page to st * To get started, try the [Form Recognizer Sample Tool](https://fott-2-1.azurewebsites.net/) and follow the [quickstart](./quickstarts/try-sample-label-tool.md). -* The updated Layout API table feature adds header recognition with column headers that can span multiple rows. Each table cell has an attribute that indicates whether it's part of a header or not. This update can be used to identify which rows make up the table header. +* The updated Layout API table feature adds header recognition with column headers that can span multiple rows. Each table cell has an attribute that indicates whether it's part of a header or not. This update identifies which rows make up the table header. 
Form Recognizer service is updated on an ongoing basis. Bookmark this page to st For a list of field values, _see_ [Fields extracted](./concept-id-document.md) in our Form Recognizer documentation. -* Expanded the set of document languages that can be provided to the **[StartRecognizeContent](/dotnet/api/azure.ai.formrecognizer.formrecognizerclient.startrecognizecontent?view=azure-dotnet-preview&preserve-view=true)** method. +* Expanded the set of document languages provided to the **[StartRecognizeContent](/dotnet/api/azure.ai.formrecognizer.formrecognizerclient.startrecognizecontent?view=azure-dotnet-preview&preserve-view=true)** method. * **New property `Pages` supported by the following classes**: Form Recognizer service is updated on an ongoing basis. Bookmark this page to st **[RecognizeContentOptions](/dotnet/api/azure.ai.formrecognizer.recognizecontentoptions?view=azure-dotnet-preview&preserve-view=true)** - The `ReadingOrder` property is an optional parameter that allows you to specify which reading order algorithmΓÇö`basic` or `natural`ΓÇöshould be applied to order the extraction of text elements. If not specified, the default value is `basic`. + The `ReadingOrder` property is an optional parameter that allows you to specify which reading order algorithmΓÇö`basic` or `natural`ΓÇöapplies to order the extraction of text elements. If not specified, the default value is `basic`. ### [**Java**](#tab/java) Form Recognizer service is updated on an ongoing basis. Bookmark this page to st * **[beginRecognizeContent](/java/api/com.azure.ai.formrecognizer.formrecognizerclient.beginrecognizecontent?preserve-view=true&view=azure-java-preview)**</br> * **[beginRecognizeContentFromUrl](/java/api/com.azure.ai.formrecognizer.formrecognizerclient.beginrecognizecontentfromurl?view=azure-java-preview&preserve-view=true)**</br>- * The `ReadingOrder` keyword argument is an optional parameter that allows you to specify which reading order algorithmΓÇö`basic` or `natural`ΓÇöshould be applied to order the extraction of text elements. If not specified, the default value is `basic`. + * The `ReadingOrder` keyword argument is an optional parameter that allows you to specify which reading order algorithmΓÇö`basic` or `natural`ΓÇöapplies to order the extraction of text elements. If not specified, the default value is `basic`. * The client defaults to the latest supported service version, which currently is **2.1-preview.3**. Form Recognizer service is updated on an ongoing basis. Bookmark this page to st * New option `pages` supported by all form recognition methods (custom forms and all prebuilt models). The argument allows you to select individual or a range of pages for multi-page PDF and TIFF documents. For individual pages, enter the page number, for example, `3`. For a range of pages (like page 2 and pages 5-7) enter the page numbers and ranges separated by commas: `2, 5-7`. -* Added support for a **[ReadingOrder](/javascript/api/@azure/ai-form-recognizer/formreadingorder?view=azure-node-latest&preserve-view=true to the URL)** type to the content recognition methods. This option enables you to control the algorithm that the service uses to determine how recognized lines of text should be ordered. You can specify which reading order algorithmΓÇö`basic` or `natural`ΓÇöshould be applied to order the extraction of text elements. If not specified, the default value is `basic`. 
+* Added support for a **[ReadingOrder](/javascript/api/@azure/ai-form-recognizer/formreadingorder?view=azure-node-latest&preserve-view=true)** type to the content recognition methods. This option enables you to control the algorithm that the service uses to determine the order of recognized lines of text. You can specify which reading order algorithm—`basic` or `natural`—applies to order the extraction of text elements. If not specified, the default value is `basic`. * Split **FormField** type into several different interfaces. This update shouldn't cause any API compatibility issues except in certain edge cases (undefined valueType). Form Recognizer service is updated on an ongoing basis. Bookmark this page to st **[begin_recognize_content_from_url](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.formrecognizerclient?view=azure-python-preview&preserve-view=true#azure-ai-formrecognizer-formrecognizerclient-begin-recognize-content-from-url)** - The `readingOrder` keyword argument is an optional parameter that allows you to specify which reading order algorithm—`basic` or `natural`—should be applied to order the extraction of text elements. If not specified, the default value is `basic`. + The `readingOrder` keyword argument is an optional parameter that allows you to specify which reading order algorithm—`basic` or `natural`—applies to order the extraction of text elements. If not specified, the default value is `basic`. Form Recognizer service is updated on an ongoing basis. Bookmark this page to st [Learn more about the invoice model](./concept-invoice.md) -* **Supervised table labeling and training, empty-value labeling** - In addition to Form Recognizer's [state-of-the-art deep learning automatic table extraction capabilities](https://techcommunity.microsoft.com/t5/azure-ai/enhanced-table-extraction-from-documents-with-form-recognizer/ba-p/2058011), it now enables customers to label and train on tables. This new release includes the ability to label and train on line items/tables (dynamic and fixed) and train a custom model to extract key-value pairs and line items. Once a model is trained, the model extracts line items as part of the JSON output in the documentResults section. +* **Supervised table labeling and training, empty-value labeling** - In addition to Form Recognizer's [state-of-the-art deep learning automatic table extraction capabilities](https://techcommunity.microsoft.com/t5/azure-ai/enhanced-table-extraction-from-documents-with-form-recognizer/ba-p/2058011), it now enables customers to label and train on tables. This new release includes the ability to label and train on line items/tables (dynamic and fixed) and train a custom model to extract key-value pairs and line items. A trained model extracts line items as part of the JSON output in the documentResults section. :::image type="content" source="./media/table-labeling.png" alt-text="Screenshot of the table labeling feature." lightbox="./media/table-labeling.png"::: Form Recognizer service is updated on an ongoing basis. Bookmark this page to st > [Learn more about Layout extraction](concept-layout.md) * **Client library update** - The latest versions of the [client libraries](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api) for .NET, Python, Java, and JavaScript support the Form Recognizer 2.1 API.- * **New language supported: Japanese** - The following new languages are now supported: for `AnalyzeLayout` and `AnalyzeCustomForm`: Japanese (`ja`).
[Language support](language-support.md) - * **Text line style indication (handwritten/other) (Latin languages only)** - Form Recognizer now outputs an `appearance` object classifying whether each text line is handwritten style or not, along with a confidence score. This feature is supported only for Latin languages. + * **New language supported: Japanese** - Language support for `AnalyzeLayout` and `AnalyzeCustomForm`: Japanese (`ja`). [Language support](language-support.md) + * **Text line style indication (handwritten/other) (Latin languages only)** - Form Recognizer now outputs an `appearance` object classifying whether each text line is handwritten style or not, along with a confidence score. This feature supports only Latin languages. * **Quality improvements** - Extraction improvements including single digit extraction improvements.- * **New try-it-out feature in the Form Recognizer Sample and Labeling Tool** - Ability to try out prebuilt Invoice, Receipt, and Business Card models and the Layout API using the Form Recognizer Sample Labeling tool. See how your data is extracted without writing any code. + * **New try-it-out feature in the Form Recognizer Sample and Labeling Tool** - Ability to try out prebuilt Invoice, Receipt, and Business Card models and the Layout API using the Form Recognizer Sample Labeling tool. See how to extract your data without writing any code. * [**Try the Form Recognizer Sample Labeling tool**](https://fott-2-1.azurewebsites.net) Form Recognizer service is updated on an ongoing basis. Bookmark this page to st ## August 2020 -* **Form Recognizer v2.1-preview.1 has been released and includes the following features: +* **The Form Recognizer v2.1-preview.1** release includes the following features: * **REST API reference is available** - View the [`v2.1-preview.1 reference`](https://westcentralus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-1/operations/AnalyzeBusinessCardAsync)- * **New languages supported In addition to English**, the following [languages](language-support.md) are now supported: for `Layout` and `Train Custom Model`: English (`en`), Chinese (Simplified) (`zh-Hans`), Dutch (`nl`), French (`fr`), German (`de`), Italian (`it`), Portuguese (`pt`) and Spanish (`es`). - * **Checkbox / Selection Mark detection** – Form Recognizer supports detection and extraction of selection marks such as check boxes and radio buttons. Selection Marks are extracted in `Layout` and you can now also label and train in `Train Custom Model` - _Train with Labels_ to extract key-value pairs for selection marks. - * **Model Compose** - allows multiple models to be composed and called with a single model ID. When you submit a document to be analyzed with a composed model ID, a classification step is first performed to route it to the correct custom model. Model Compose is available for `Train Custom Model` - _Train with labels_. + * **New languages supported** - In addition to English, supported [languages](language-support.md) for `Layout` and `Train Custom Model`: English (`en`), Chinese (Simplified) (`zh-Hans`), Dutch (`nl`), French (`fr`), German (`de`), Italian (`it`), Portuguese (`pt`) and Spanish (`es`). + * **Checkbox / Selection Mark detection** – Form Recognizer supports detection and extraction of selection marks such as check boxes and radio buttons. Extract selection marks with `Layout`, and you can now also label and train in `Train Custom Model` - _Train with Labels_ to extract key-value pairs for selection marks.
+ * **Model Compose** - allows you to compose multiple models and call them with a single model ID. When you submit a document with a composed model ID, an initial classification step routes it to the correct custom model. Model Compose is available for `Train Custom Model` - _Train with labels_. * **Model name** - add a friendly name to your custom models for easier management and tracking. * **[New prebuilt model for Business Cards](./concept-business-card.md)** for extracting common fields in English language business cards. * **[New locales for prebuilt Receipts](./concept-receipt.md)** in addition to EN-US, support is now available for EN-AU, EN-CA, EN-GB, EN-IN Form Recognizer service is updated on an ongoing basis. Bookmark this page to st **New samples** are available on GitHub. * The [Knowledge Extraction Recipes - Forms Playbook](https://github.com/microsoft/knowledge-extraction-recipes-forms) collects best practices from real Form Recognizer customer engagements and provides usable code samples, checklists, and sample pipelines used in developing these projects.- * The [Sample Labeling tool](https://github.com/microsoft/OCR-Form-Tools) has been updated to support the new v2.1 functionality. See this [quickstart](label-tool.md) for getting started with the tool. + * The [Sample Labeling tool](https://github.com/microsoft/OCR-Form-Tools) update supports the new v2.1 functionality. See this [quickstart](label-tool.md) for getting started with the tool. * The [Intelligent Kiosk](https://github.com/microsoft/Cognitive-Samples-IntelligentKiosk/blob/master/Documentation/FormRecognizer.md) Form Recognizer sample shows how to integrate `Analyze Receipt` and `Train Custom Model` - _Train without Labels_. The new SDK supports all the features of the v2.0 REST API for Form Recognizer. ## March 2020 -* **Value types for labeling** You can now specify the types of values you're labeling with the Form Recognizer Sample Labeling tool. The following value types and variations are currently supported: +* **Value types for labeling** You can now specify the types of values you're labeling with the Form Recognizer Sample Labeling tool. Supported value types and variations: * `string` * default, `no-whitespaces`, `alphanumeric` * `number` The new SDK supports all the features of the v2.0 REST API for Form Recognizer. See the [Sample Labeling tool](label-tool.md#specify-tag-value-types) guide to learn how to use this feature. -* **Table visualization** The Sample Labeling tool now displays tables that were recognized in the document. This feature lets you view recognized and extracted tables from the document prior to labeling and analyzing. This feature can be toggled on/off using the layers option. +* **Table visualization** The Sample Labeling tool now displays recognized tables in the document. This feature lets you view recognized and extracted tables from the document prior to labeling and analyzing. This feature can be toggled on/off using the layers option. -* The following image is an example of how tables are recognized and extracted: +* The following image is an example of recognized and extracted tables: :::image type="content" source="media/whats-new/table-viz.png" alt-text="Screenshot of table visualization using the Sample Labeling tool."::: See the [Sample Labeling tool](label-tool.md#specify-tag-value-types) guide to l * TLS 1.2 enforcement -* TLS 1.2 is now enforced for all HTTP requests to this service.
For more information, see [Azure Cognitive Services security](../../cognitive-services/security-features.md). +* TLS 1.2 is enforced for all HTTP requests to this service. For more information, see [Azure Cognitive Services security](../../cognitive-services/security-features.md). This release introduces the Form Recognizer 2.0. In the next sections, you'll fi * Custom model API changes - All of the APIs for training and using custom models have been renamed, and some synchronous methods are now asynchronous. The following are major changes: + All of the APIs for training and using custom models are renamed, and some synchronous methods are now asynchronous. The following are major changes: * The process of training a model is now asynchronous. You initiate training through the **/custom/models** API call. This call returns an operation ID, which you can pass into **custom/models/{modelID}** to return the training results.- * Key/value extraction is now initiated by the **/custom/models/{modelID}/analyze** API call. This call returns an operation ID, which you can pass into **custom/models/{modelID}/analyzeResults/{resultID}** to return the extraction results. - * Operation IDs for the Train operation are now found in the **Location** header of HTTP responses, not the **Operation-Location** header. + * The **/custom/models/{modelID}/analyze** API call initiates key-value pair extraction. This call returns an operation ID, which you can pass into **custom/models/{modelID}/analyzeResults/{resultID}** to return the extraction results. + * Operation IDs for the Train operation are now in the **Location** header of HTTP responses, not the **Operation-Location** header. * Receipt API changes - * The APIs for reading sales receipts have been renamed. + * Renamed APIs for reading sales receipts. - * Receipt data extraction is now initiated by the **/prebuilt/receipt/analyze** API call. This call returns an operation ID, which you can pass into **/prebuilt/receipt/analyzeResults/{resultID}** to return the extraction results. + * The **/prebuilt/receipt/analyze** API call initiates receipt data extraction. This call returns an operation ID, which you can pass into **/prebuilt/receipt/analyzeResults/{resultID}** to return the extraction results. * Output format changes - * The JSON responses for all API calls have new formats. Some keys and values have been added, removed, or renamed. See the quickstarts for examples of the current JSON formats. + * The JSON responses for all API calls have new formats, and some keys and values have been added, removed, or renamed. See the quickstarts for examples of the current JSON formats. This release introduces the Form Recognizer 2.0. In the next sections, you'll fi * Complete a [Form Recognizer quickstart](quickstarts/get-started-sdks-rest-api.md?view=form-recog-2.1.0&preserve-view=true) and get started creating a document processing app in the development language of your choice. ::: moniker-end- |
automation | Automation Runbook Authoring | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-runbook-authoring.md | The test matrix includes the following operating systems: ## Key Features of v1.0.8 - **Local directory configuration settings** - You can define the working directory where you want to save runbooks locally.- - **Change Directory:Base Path** - You use the changed directory path when you reopen Visual Studio code IDE. To change the directory using the Command Palette, use **Ctrl+Shift+P -> select Change Directory**. To change the base path from extension configuration settings, select **Manage** icon in the activity bar on the left and go to **Settings > Extensions > Azure Automation > Directory:Base Path**. + - **Change Directory:Base Path** - You use the changed directory path when you reopen Visual Studio Code IDE. To change the directory using the Command Palette, use **Ctrl+Shift+P -> select Change Directory**. To change the base path from extension configuration settings, select the **Manage** icon in the activity bar on the left and go to **Settings > Extensions > Azure Automation > Directory:Base Path**. - **Change Directory:Folder Structure** - You can change the local directory folder structure from *vscodeAutomation/accHash* to *subscription/resourceGroup/automationAccount*. Select the **Manage** icon in the activity bar on the left and go to **Settings > Extensions > Azure Automation > Directory:Folder Structure**. You can change the default configuration setting from *vscodeAutomation/accHash* to *subscription/resourceGroup/automationAccount* format. >[!NOTE] >If your automation account is integrated with source control, you can provide the runbook folder path of your GitHub repo as the directory path. For example: changing directory to *C:\abc* would store runbooks in *C:\abc\vscodeAutomation..* or *C:\abc//subscriptionName//resourceGroupName//automationAccountName//runbookname.ps1*. |
azure-arc | Validation Program | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/validation-program.md | To see how all Azure Arc-enabled components are validated, see [Validation progr |Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL server version |--|--|--|--|--|-|DataON AZS-6224|1.23.8|1.12.0_2022-10-11|16.0.537.5223| postgres 12.3 (Ubuntu 12.3-1) | +|DataON AZS-6224|1.23.8|1.12.0_2022-10-11|16.0.537.5223| 12.3 (Ubuntu 12.3-1) | ### Dell To see how all Azure Arc-enabled components are validated, see [Validation progr |--|--|--|--|--| | [Unity XT](https://www.dell.com/en-us/dt/storage/unity.htm) |1.25.4|1.15.0_2023-01-10|16.0.816.19223 |Not validated| | [PowerStore T](https://www.dell.com/en-us/dt/storage/powerstore-storage-appliance.htm) |1.25.4|1.15.0_2023-01-10|16.0.816.19223 |Not validated|-| Dell EMC PowerFlex |1.21.5|1.4.1_2022-03-08|15.0.2255.119 | postgres 12.3 (Ubuntu 12.3-1) | -| PowerFlex version 3.6 |1.21.5|1.4.1_2022-03-08|15.0.2255.119 | postgres 12.3 (Ubuntu 12.3-1) | -| PowerFlex CSI version 1.4 |1.21.5|1.4.1_2022-03-08 | 15.0.2255.119 | postgres 12.3 (Ubuntu 12.3-1) | -| PowerStore X|1.20.6|1.0.0_2021-07-30|15.0.2148.140 |postgres 12.3 (Ubuntu 12.3-1) | +| [PowerFlex](https://www.dell.com/en-us/dt/storage/powerflex.htm) |1.21.5|1.4.1_2022-03-08|15.0.2255.119 | 12.3 (Ubuntu 12.3-1) | +| [PowerStore X](https://www.dell.com/en-us/dt/storage/powerstore-storage-appliance/powerstore-x-series.htm)|1.20.6|1.0.0_2021-07-30|15.0.2148.140 | 12.3 (Ubuntu 12.3-1) | ### HPE |Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL server version |--|--|--|--|--|-|HPE Superdome Flex 280 | 1.26.0 | 1.15.0_2023-01-10 | 16.0.816.19223 | Postgres 14.5(ubuntu 20.04)| +|HPE Superdome Flex 280 | 1.26.0 | 1.15.0_2023-01-10 | 16.0.816.19223 | 14.5 (Ubuntu 20.04)| |HPE Apollo 4200 Gen10 Plus | 1.22.6 | 1.11.0_2022-09-13 |16.0.312.4243|12.3 (Ubuntu 12.3-1)|-|HPE Superdome Flex 280|1.20.0|1.8.0_2022-06-14|16.0.41.7339|12.3 (Ubuntu 12.3-1) ### Kublr |Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL server version |--|--|--|--|--|-|Kublr 1.21.2 | 1.22.10 | 1.9.0_2022-07-12 | 16.0.312.4243 |PostgreSQL 12.3 (Ubuntu 12.3-1) | +|Kublr 1.21.2 | 1.22.10 | 1.9.0_2022-07-12 | 16.0.312.4243 |12.3 (Ubuntu 12.3-1) | ### Lenovo |Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL server version |--|--|--|--|--| |Lenovo ThinkAgile MX1020 |1.24.6| 1.14.0_2022-12-13 |16.0.816.19223|Not validated|-|Lenovo ThinkAgile MX3520 |AKS on Azure Stack HCI 21H2| 1.10.0_2022-08-09 |16.0.312.4243|postgres 12.3 (Ubuntu 12.3-1)| +|Lenovo ThinkAgile MX3520 |AKS on Azure Stack HCI 21H2| 1.10.0_2022-08-09 |16.0.312.4243| 12.3 (Ubuntu 12.3-1)| ### Nutanix |Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL server version |--|--|--|--|--|-| Karbon 2.2<br/>AOS: 5.19.1.5<br/>AHV: 20201105.1021<br/>PC: Version pc.2021.3.02<br/> | 1.19.8-0 | 1.0.0_2021-07-30 | 15.0.2148.140|postgres 12.3 (Ubuntu 12.3-1)| +| Karbon 2.2<br/>AOS: 5.19.1.5<br/>AHV: 20201105.1021<br/>PC: Version pc.2021.3.02<br/> | 1.19.8-0 | 1.0.0_2021-07-30 | 15.0.2148.140| 12.3 (Ubuntu 12.3-1)| ### PureStorage To see how all Azure Arc-enabled components are validated, see [Validation progr |Solution and version | 
Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL server version |--|--|--|--|--| | Portworx Enterprise 2.7 1.22.5 | 1.20.7 | 1.1.0_2021-11-02 | 15.0.2148.140 | Not validated |-| Portworx Enterprise 2.9 | 1.22.5 | 1.1.0_2021-11-02 | 15.0.2195.191 | postgres 12.3 (Ubuntu 12.3-1) | +| Portworx Enterprise 2.9 | 1.22.5 | 1.1.0_2021-11-02 | 15.0.2195.191 | 12.3 (Ubuntu 12.3-1) | ### Red Hat |Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL server version |--|--|--|--|--|-| OpenShift 4.10.16 | 1.23.5 | 1.11.0_2022-09-13 | 16.0.312.4243 | postgres 12.3 (Ubuntu 12.3-1)| +| OpenShift 4.10.16 | 1.23.5 | 1.11.0_2022-09-13 | 16.0.312.4243 | 12.3 (Ubuntu 12.3-1)| ### VMware |Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL server version |--|--|--|--|--|-| TKG 2.1.0 | 1.26.0 | 1.15.0_2023-01-10 | 16.0.816.19223 | postgres 14.5 (Ubuntu 20.04) -| TKG-1.6.0 | 1.23.8 | 1.11.0_2022-09-13 | 16.0.312.4243 | postgres 12.3 (Ubuntu 12.3-1) -| TKGm v1.5.3 | 1.22.8 | 1.9.0_2022-07-12 | 16.0.312.4243 | postgres 12.3 (Ubuntu 12.3-1)| +| TKG 2.1.0 | 1.26.0 | 1.15.0_2023-01-10 | 16.0.816.19223 | 14.5 (Ubuntu 20.04) +| TKG-1.6.0 | 1.23.8 | 1.11.0_2022-09-13 | 16.0.312.4243 | 12.3 (Ubuntu 12.3-1) +| TKGm v1.5.3 | 1.22.8 | 1.9.0_2022-07-12 | 16.0.312.4243 | 12.3 (Ubuntu 12.3-1)| ### Wind River |Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL server version |--|--|--|--|--|-|Wind River Cloud Platform 22.12 | 1.24.4|1.14.0_2022-12-13 |16.0.816.19223|Postgres 14.5(ubuntu 20.04) | -|Wind River Cloud Platform 22.06 | 1.23.1|1.9.0_2022-07-12 |16.0.312.4243|postgres 12.3 (Ubuntu 12.3-1) | +|Wind River Cloud Platform 22.12 | 1.24.4|1.14.0_2022-12-13 |16.0.816.19223| 14.5 (Ubuntu 20.04) | +|Wind River Cloud Platform 22.06 | 1.23.1|1.9.0_2022-07-12 |16.0.312.4243| 12.3 (Ubuntu 12.3-1) | ## Data services validation process |
azure-arc | What Is Azure Arc Enabled Postgresql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/what-is-azure-arc-enabled-postgresql.md | +- As a customer-managed service with Azure Arc, operated by customers or their partners/vendors ### Features Microsoft offers PostgreSQL database services in Azure in two ways: - On-premises - Cloud providers like AWS, GCP, and Azure - Edge deployments (including lightweight Kubernetes [K3S](https://k3s.io/))-- Integrate with Azure (optional)+- Integrate with Azure - Direct connectivity mode - Deploy Azure Arc-enabled PostgreSQL server from the Azure portal - Indirect connectivity mode - Deploy Azure Arc-enabled PostgreSQL server from the infrastructure that hosts it+- Secure + - Supports Active Directory + - Server and Client TLS + - System and user managed certificates - Pay for what you use (per usage billing) - Get support from Microsoft on PostgreSQL Follow these steps to create on your own Kubernetes cluster: - [Azure Arc-enabled Data Services overview](overview.md) - [Azure Arc Hybrid Data Services](https://azure.microsoft.com/services/azure-arc/hybrid-data-services) - [Connectivity modes](connectivity.md)++ |
azure-arc | Agent Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-overview.md | The Azure Connected Machine agent enables you to manage your Windows and Linux m :::image type="content" source="media/agent-overview/connected-machine-agent.png" alt-text="Azure Arc-enabled servers agent architectural overview." border="false"::: -The Azure Connected Machine agent package contains several logical components, which are bundled together: +The Azure Connected Machine agent package contains several logical components bundled together: * The Hybrid Instance Metadata service (HIMDS) manages the connection to Azure and the connected machine's Azure identity. The Azure Connected Machine agent package contains several logical components, w * Guest assignment is stored locally for 14 days. Within the 14-day period, if the Connected Machine agent reconnects to the service, policy assignments are reapplied. * Assignments are deleted after 14 days, and are not reassigned to the machine after the 14-day period. -* The Extension agent manages VM extensions, including install, uninstall, and upgrade. Extensions are downloaded from Azure and copied to the `%SystemDrive%\%ProgramFiles%\AzureConnectedMachineAgent\ExtensionService\downloads` folder on Windows, and to `/opt/GC_Ext/downloads` on Linux. On Windows, the extension is installed to the following path `%SystemDrive%\Packages\Plugins\<extension>`, and on Linux the extension is installed to `/var/lib/waagent/<extension>`. +* The Extension agent manages VM extensions, including install, uninstall, and upgrade. The agent downloads extensions from Azure and copies them to the `%SystemDrive%\%ProgramFiles%\AzureConnectedMachineAgent\ExtensionService\downloads` folder on Windows, and to `/opt/GC_Ext/downloads` on Linux. On Windows, the extension installs to `%SystemDrive%\Packages\Plugins\<extension>`, and on Linux the extension installs to `/var/lib/waagent/<extension>`. >[!NOTE] > The [Azure Monitor agent](../../azure-monitor/agents/azure-monitor-agent-overview.md) (AMA) is a separate agent that collects monitoring data, and it does not replace the Connected Machine agent; the AMA only replaces the Log Analytics agent, Diagnostics extension, and Telegraf agent for both Windows and Linux machines. The following information describes the directories and user accounts used by th ### Windows agent installation details -The Windows agent is distributed as a Windows Installer package (MSI) and can be downloaded from the [Microsoft Download Center](https://aka.ms/AzureConnectedMachineAgent). -After installing the Connected Machine agent for Windows, the following system-wide configuration changes are applied. +The Windows agent is distributed as a Windows Installer package (MSI). Download the Windows agent from the [Microsoft Download Center](https://aka.ms/AzureConnectedMachineAgent). +Installing the Connected Machine agent for Windows applies the following system-wide configuration changes: -* The following installation folders are created during setup. +* The installation process creates the following folders during setup. 
| Directory | Description | |--|-| After installing the Connected Machine agent for Windows, the following system-w | %ProgramData%\GuestConfig | Extension package downloads, guest configuration (policy) definition downloads, and logs for the extension and guest configuration services.| | %SYSTEMDRIVE%\packages | Extension package executables | -* The following Windows services are created on the target machine during installation of the agent. +* Installing the agent creates the following Windows services on the target machine. | Service name | Display name | Process name | Description | |--|--|--|-| After installing the Connected Machine agent for Windows, the following system-w | GCArcService | Guest configuration Arc Service | gc_service | Audits and enforces Azure guest configuration policies on the machine. | | ExtensionService | Guest configuration Extension Service | gc_service | Installs, updates, and manages extensions on the machine. | -* The following virtual service account is created during agent installation. +* Agent installation creates the following virtual service account. | Virtual Account | Description | ||-| After installing the Connected Machine agent for Windows, the following system-w > [!TIP] > This account requires the "Log on as a service" right. This right is automatically granted during agent installation, but if your organization configures user rights assignments with Group Policy, you may need to adjust your Group Policy Object to grant the right to "NT SERVICE\\himds" or "NT SERVICE\\ALL SERVICES" to allow the agent to function. -* The following local security group is created during agent installation. +* Agent installation creates the following local security group. | Security group name | Description | ||-| | Hybrid agent extension applications | Members of this security group can request Azure Active Directory tokens for the system-assigned managed identity | -* The following environmental variables are created during agent installation. +* Agent installation creates the following environmental variables | Name | Default value | Description | |||| | IDENTITY_ENDPOINT | `http://localhost:40342/metadata/identity/oauth2/token` | | IMDS_ENDPOINT | `http://localhost:40342` | -* There are several log files available for troubleshooting. They are described in the following table. +* There are several log files available for troubleshooting, described in the following table. | Log | Description | |--|-| After installing the Connected Machine agent for Windows, the following system-w | %ProgramData%\GuestConfig\ext_mgr_logs\gc_ext.log | Records details about extension manager activity (extension install, uninstall, and upgrade events). | | %ProgramData%\GuestConfig\extension_logs | Directory containing logs for individual extensions. | -* The local security group **Hybrid agent extension applications** is created. +* The process creates the local security group **Hybrid agent extension applications**. -* During uninstall of the agent, the following artifacts are not removed. +* After uninstalling the agent, the following artifacts remain. * %ProgramData%\AzureConnectedMachineAgent\Log * %ProgramData%\AzureConnectedMachineAgent After installing the Connected Machine agent for Windows, the following system-w ### Linux agent installation details -The Connected Machine agent for Linux is provided in the preferred package format for the distribution (.RPM or .DEB) that's hosted in the Microsoft [package repository](https://packages.microsoft.com/). 
The agent is installed and configured with the shell script bundle [Install_linux_azcmagent.sh](https://aka.ms/azcmagent). +The Connected Machine agent for Linux is distributed in the preferred package format for the distribution (.RPM or .DEB), hosted in the Microsoft [package repository](https://packages.microsoft.com/). The shell script bundle [Install_linux_azcmagent.sh](https://aka.ms/azcmagent) installs and configures the agent. -Installing, upgrading, and removing the Connected Machine agent will not require you to restart your server. +Installing, upgrading, and removing the Connected Machine agent doesn't require you to restart your server. -After installing the Connected Machine agent for Linux, the following system-wide configuration changes are applied. +Installing the Connected Machine agent for Linux applies the following system-wide configuration changes. -* The following installation folders are created during setup. +* Setup creates the following installation folders. | Directory | Description | |--|-| After installing the Connected Machine agent for Linux, the following system-wid | /var/opt/azcmagent/ | Configuration, log and identity token files for azcmagent CLI and instance metadata service.| | /var/lib/GuestConfig/ | Extension package downloads, guest configuration (policy) definition downloads, and logs for the extension and guest configuration services.| -* The following daemons are created on the target machine during installation of the agent. +* Installing the agent creates the following daemons. | Service name | Display name | Process name | Description | |--|--|--|-| After installing the Connected Machine agent for Linux, the following system-wid | gcad.service | GC Arc Service | gc_linux_service | Audits and enforces Azure guest configuration policies on the machine. | | extd.service | Extension Service | gc_linux_service | Installs, updates, and manages extensions on the machine. | -* There are several log files available for troubleshooting. They are described in the following table. +* There are several log files available for troubleshooting, described in the following table. | Log | Description | |--|-| After installing the Connected Machine agent for Linux, the following system-wid | /var/lib/GuestConfig/ext_mgr_logs | Records details about extension manager activity (extension install, uninstall, and upgrade events). | | /var/lib/GuestConfig/extension_logs | Directory containing logs for individual extensions. | -* The following environment variables are created during agent installation. These variables are set in `/lib/systemd/system.conf.d/azcmagent.conf`. +* Agent installation creates the following environment variables, set in `/lib/systemd/system.conf.d/azcmagent.conf`. | Name | Default value | Description | |||-| | IDENTITY_ENDPOINT | `http://localhost:40342/metadata/identity/oauth2/token` | | IMDS_ENDPOINT | `http://localhost:40342` | -* During uninstall of the agent, the following artifacts are not removed. +* After uninstalling the agent, the following artifacts remain. * /var/opt/azcmagent * /var/lib/GuestConfig After installing the Connected Machine agent for Linux, the following system-wid The Azure Connected Machine agent is designed to manage agent and system resource consumption. The agent approaches resource governance under the following conditions: -* The Guest Configuration agent is limited to use up to 5% of the CPU to evaluate policies. 
-* The Extension Service agent is limited to use up to 5% of the CPU to install, upgrade, run, and delete extensions. The following exceptions apply: +* The Guest Configuration agent can use up to 5% of the CPU to evaluate policies. +* The Extension Service agent can use up to 5% of the CPU to install, upgrade, run, and delete extensions. The following exceptions apply: - * If the extension installs background services that run independent of Azure Arc, such as the Microsoft Monitoring Agent, those services will not be subject to the resource governance constraints listed above. - * The Log Analytics agent and Azure Monitor Agent are allowed to use up to 60% of the CPU during their install/upgrade/uninstall operations on Red Hat Linux, CentOS, and other enterprise Linux variants. The limit is higher for this combination of extensions and operating systems to accommodate the performance impact of [SELinux](https://www.redhat.com/en/topics/linux/what-is-selinux) on these systems. + * If the extension installs background services that run independent of Azure Arc, such as the Microsoft Monitoring Agent, those services are not subject to the resource governance constraints listed above. + * The Log Analytics agent and Azure Monitor Agent can use up to 60% of the CPU during their install/upgrade/uninstall operations on Red Hat Linux, CentOS, and other enterprise Linux variants. The limit is higher for this combination of extensions and operating systems to accommodate the performance impact of [SELinux](https://www.redhat.com/en/topics/linux/what-is-selinux) on these systems. * The Azure Monitor Agent can use up to 30% of the CPU during normal operations. * The Linux OS Update Extension (used by Azure Update Management Center) can use up to 30% of the CPU to patch the server. * The Microsoft Defender for Endpoint extension can use up to 30% of the CPU during installation, upgrades, and removal operations.+ * The Microsoft Sentinel DNS extension can use up to 30% of the CPU to collect logs from DNS servers. ## Instance metadata Metadata information about a connected machine is collected after the Connected * Service accounts * Zone -The following metadata information is requested by the agent from Azure: +The agent requests the following metadata information from Azure: * Resource location (region) * Virtual machine ID The following metadata information is requested by the agent from Azure: ## Deployment options and requirements -To deploy the agent and connect a machine, certain [prerequisites](prerequisites.md) must be met. There are also [networking requirements](network-requirements.md) to be aware of. +Agent deployment and machine connection require certain [prerequisites](prerequisites.md). There are also [networking requirements](network-requirements.md) to be aware of. We provide several options for deploying the agent. For more information, see [Plan for deployment](plan-at-scale-deployment.md) and [Deployment options](deployment-options.md). |
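As a hedged illustration of how a workload on a connected machine might use the `IDENTITY_ENDPOINT` value listed above, the sketch below follows the challenge-token pattern used by Arc-enabled servers: the first request is rejected with a `Www-Authenticate` header that points to a local key file (readable only by privileged accounts and members of the Hybrid agent extension applications group), and the request is retried with that key. The `api-version`, resource URI, and response fields shown here are assumptions to check against the managed identity documentation for Azure Arc-enabled servers.

```python
import os
import requests

# Default value documented above; in practice, read it from the environment.
identity_endpoint = os.environ.get(
    "IDENTITY_ENDPOINT", "http://localhost:40342/metadata/identity/oauth2/token"
)
params = {"api-version": "2020-06-01", "resource": "https://management.azure.com/"}

# The first call is expected to return 401 with a Www-Authenticate challenge
# pointing at a key file written by the agent.
challenge = requests.get(identity_endpoint, params=params, headers={"Metadata": "true"})
key_path = challenge.headers["Www-Authenticate"].split("Basic realm=")[-1]

with open(key_path) as key_file:
    challenge_token = key_file.read()

# Retry with the challenge token to get an access token for the machine's
# system-assigned managed identity.
token = requests.get(
    identity_endpoint,
    params=params,
    headers={"Metadata": "true", "Authorization": f"Basic {challenge_token}"},
)
token.raise_for_status()
print(token.json()["access_token"][:20] + "...")
```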
azure-arc | Agent Release Notes Archive | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-release-notes-archive.md | Title: Archive for What's new with Azure Arc-enabled servers agent -description: The What's new release notes in the Overview section for Azure Arc-enabled servers agent contains six months of activity. Thereafter, the items are removed from the main article and put into this article. +description: Release notes for Azure Connected Machine agent versions older than six months Last updated 01/23/2023 The Azure Connected Machine agent receives improvements on an ongoing basis. Thi - Known issues - Bug fixes +## Version 1.22 - September 2022 ++### Known issues ++- The 'connect' command uses the value of the last tag for all tags. You will need to fix the tags after onboarding to use the correct values. ++### New features ++- The default login flow for Windows computers now loads the local web browser to authenticate with Azure Active Directory instead of providing a device code. You can use the `--use-device-code` flag to return to the old behavior or [provide service principal credentials](onboard-service-principal.md) for a non-interactive authentication experience. +- If the resource group provided to `azcmagent connect` does not exist, the agent tries to create it and continue connecting the server to Azure. +- Added support for Ubuntu 22.04 +- Added `--no-color` flag for all azcmagent commands to suppress the use of colors in terminals that do not support ANSI codes. ++### Fixed ++- The agent now supports Red Hat Enterprise Linux 8 servers that have FIPS mode enabled. +- Agent telemetry uses the proxy server when configured. +- Improved accuracy of network connectivity checks +- The agent retains extension allow and blocklists when switching the agent from monitoring mode to full mode. Use [azcmagent clear](manage-agent.md#config) to reset individual configuration settings to the default state. + ## Version 1.21 - August 2022 ### New features - `azcmagent connect` usability improvements: - The `--subscription-id (-s)` parameter now accepts friendly names in addition to subscription IDs- - Automatic registration of any missing resource providers for first-time users (additional user permissions required to register resource providers) - - A progress bar now appears while the resource is being created and connected + - Automatic registration of any missing resource providers for first-time users (extra user permissions required to register resource providers) + - Added a progress bar during onboarding - The onboarding script now supports both the yum and dnf package managers on RPM-based Linux systems-- You can now restrict which URLs can be used to download machine configuration (formerly Azure Policy guest configuration) packages by setting the `allowedGuestConfigPkgUrls` tag on the server resource and providing a comma-separated list of URL patterns to allow.+- You can now restrict the URLs used to download machine configuration (formerly Azure Policy guest configuration) packages by setting the `allowedGuestConfigPkgUrls` tag on the server resource and providing a comma-separated list of URL patterns to allow. 
### Fixed -- Extension installation failures are now reported to Azure more reliably to prevent extensions from being stuck in the "creating" state-- Metadata for Google Cloud Platform virtual machines can now be retrieved when the agent is configured to use a proxy server+- Improved reliability when reporting extension installation failures to prevent extensions from staying in the "creating" state +- Support for retrieving metadata for Google Cloud Platform virtual machines when the agent uses a proxy server - Improved network connection retry logic and error handling - Linux only: resolves local escalation of privilege vulnerability [CVE-2022-38007](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2022-38007) The Azure Connected Machine agent receives improvements on an ongoing basis. Thi ### Fixed -- Agents configured to use private endpoints will now download extensions over the private endpoint-- The `--use-private-link` flag on [azcmagent check](manage-agent.md#check) has been renamed to `--enable-pls-check` to more accurately represent its function+- Agents configured to use private endpoints correctly download extensions over the private endpoint +- Renamed the `--use-private-link` flag on [azcmagent check](manage-agent.md#check) to `--enable-pls-check` to more accurately represent its function ## Version 1.19 - June 2022 ### Known issues -- Agents configured to use private endpoints will incorrectly try to download extensions from a public endpoint. [Upgrade the agent](manage-agent.md#upgrade-the-agent) to version 1.20 or later to restore correct functionality.+- Agents configured to use private endpoints incorrectly download extensions from a public endpoint. [Upgrade the agent](manage-agent.md#upgrade-the-agent) to version 1.20 or later to restore correct functionality. - Some systems may incorrectly report their cloud provider as Azure Stack HCI. ### New features -- When installed on a Google Compute Engine virtual machine, the agent will now detect and report Google Cloud metadata in the "detected properties" of the Azure Arc-enabled servers resource. [Learn more](agent-overview.md#instance-metadata) about the new metadata.+- When installed on a Google Compute Engine virtual machine, the agent detects and reports Google Cloud metadata in the "detected properties" of the Azure Arc-enabled servers resource. [Learn more](agent-overview.md#instance-metadata) about the new metadata. ### Fixed -- An issue that could cause the extension manager to hang during extension installation, update, and removal operations has been resolved.+- Resolved an issue that could cause the extension manager to hang during extension installation, update, and removal operations. - Improved support for TLS 1.3 ## Version 1.18 - May 2022 ### New features -- The agent can now be configured to operate in [monitoring mode](security-overview.md#agent-modes), which simplifies configuration of the agent for scenarios where you only want to use Arc for monitoring and security scenarios. This mode disables other agent functionality and prevents use of extensions that could make changes to the system (for example, the Custom Script Extension).+- You can configure the agent to operate in [monitoring mode](security-overview.md#agent-modes), which simplifies configuration of the agent for scenarios where you only want to use Arc for monitoring and security scenarios. 
This mode disables other agent functionality and prevents use of extensions that could make changes to the system (for example, the Custom Script Extension). - VMs and hosts running on Azure Stack HCI now report the cloud provider as "HCI" when [Azure benefits are enabled](/azure-stack/hci/manage/azure-benefits#enable-azure-benefits). ### Fixed -- `systemd` is now an official prerequisite on Linux and your package manager will alert you if you try to install the Azure Connected Machine agent on a server without systemd.+- `systemd` is now an official prerequisite on Linux - Guest configuration policies no longer create unnecessary files in the `/tmp` directory on Linux servers - Improved reliability when extracting extensions and guest configuration policy packages - Improved reliability for guest configuration policies that have child processes The Azure Connected Machine agent receives improvements on an ongoing basis. Thi ### Fixed -- If you attempt to run `azcmagent connect` on a server that is already connected to Azure, the resource ID is now printed to the console to help you locate the resource in Azure.-- The `azcmagent connect` timeout has been extended to 10 minutes.+- If you attempt to run `azcmagent connect` on a server already connected to Azure, the resource ID is shown on the console to help you locate the resource in Azure. +- Extended the `azcmagent connect` timeout to 10 minutes. - `azcmagent show` no longer prints the private link scope ID. You can check if the server is associated with an Azure Arc private link scope by reviewing the machine details in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_HybridCompute/AzureArcCenterBlade/servers), [CLI](/cli/azure/connectedmachine?view=azure-cli-latest#az-connectedmachine-show&preserve-view=true), [PowerShell](/powershell/module/az.connectedmachine/get-azconnectedmachine), or [REST API](/rest/api/hybridcompute/machines/get).-- `azcmagent logs` collects only the 2 most recent logs for each service to reduce ZIP file size.+- `azcmagent logs` collects only the two most recent logs for each service to reduce ZIP file size. - `azcmagent logs` collects Guest Configuration logs again. ## Version 1.16 - March 2022 The Azure Connected Machine agent receives improvements on an ongoing basis. Thi ### New features -- You can now granularly control which extensions are allowed to be deployed to your server and whether or not Guest Configuration should be enabled. See [local agent controls to enable or disable capabilities](security-overview.md#local-agent-security-controls) for more information.+- You can now granularly control allowed and blocked extensions on your server and disable the Guest Configuration agent. See [local agent controls to enable or disable capabilities](security-overview.md#local-agent-security-controls) for more information. ### Fixed -- The "Arc" proxy bypass keyword no longer includes Azure Active Directory endpoints on Linux. Azure Storage endpoints for extension downloads are now included with the "Arc" keyword.+- The "Arc" proxy bypass keyword no longer includes Azure Active Directory endpoints on Linux +- The "Arc" proxy bypass keyword now includes Azure Storage endpoints for extension downloads ## Version 1.15 - February 2022 ### Known issues -- The "Arc" proxy bypass feature on Linux includes some endpoints that belong to Azure Active Directory. 
As a result, if you only specify the "Arc" bypass rule, traffic destined for Azure Active Directory endpoints will not use the proxy server as expected. This issue will be fixed in an upcoming release.+- The "Arc" proxy bypass feature on Linux includes some endpoints that belong to Azure Active Directory. As a result, if you only specify the "Arc" bypass rule, traffic destined for Azure Active Directory endpoints will not use the proxy server as expected. ### New features - Network check improvements during onboarding: - Added TLS 1.2 check- - Azure Arc network endpoints are now required, onboarding will abort if they are not accessible + - Onboarding aborts when required networking endpoints are inaccessible - New `--skip-network-check` flag to override the new network check behavior - On-demand network check now available using `azcmagent check`-- [Proxy bypass](manage-agent.md#proxy-bypass-for-private-endpoints) is now available for customers using private endpoints. This allows you to send Azure Active Directory and Azure Resource Manager traffic through a proxy server, but skip the proxy server for traffic that should stay on the local network to reach private endpoints.+- [Proxy bypass](manage-agent.md#proxy-bypass-for-private-endpoints) is now available for customers using private endpoints. This feature allows you to send Azure Active Directory and Azure Resource Manager traffic through a proxy server, but skip the proxy server for traffic that should stay on the local network to reach private endpoints. - Oracle Linux 8 is now supported ### Fixed The Azure Connected Machine agent receives improvements on an ongoing basis. Thi ### Fixed -- A state corruption issue in the extension manager that could cause extension operations to get stuck in transient states has been fixed. Customers running agent version 1.13 are encouraged to upgrade to version 1.14 as soon as possible. If you continue to have issues with extensions after upgrading the agent, [submit a support ticket](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).+- Fixed a state corruption issue in the extension manager that could cause extension operations to get stuck in transient states. Customers running agent version 1.13 are encouraged to upgrade to version 1.14 as soon as possible. If you continue to have issues with extensions after upgrading the agent, [submit a support ticket](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest). ## Version 1.13 - November 2021 The Azure Connected Machine agent receives improvements on an ongoing basis. Thi ### New features - Local configuration of agent settings now available using the [azcmagent config command](manage-agent.md#config).-- Proxy server settings can be [configured using agent-specific settings](manage-agent.md#update-or-remove-proxy-settings) instead of environment variables.-- Extension operations will execute faster using a new notification pipeline. You may need to adjust your firewall or proxy server rules to allow the new network addresses for this notification service (see [networking configuration](network-requirements.md)). The extension manager will fall back to the existing behavior of checking every 5 minutes when the notification service cannot be reached.+- Support for configuring proxy server settings [using agent-specific settings](manage-agent.md#update-or-remove-proxy-settings) instead of environment variables. 
+- Extension operations execute faster using a new notification pipeline. You may need to adjust your firewall or proxy server rules to allow the new network addresses for this notification service (see [networking configuration](network-requirements.md)). The extension manager falls back to the existing behavior of checking every 5 minutes when the notification service is inaccessible. - Detection of the AWS account ID, instance ID, and region information for servers running in Amazon Web Services. - ## Version 1.12 - October 2021 ### Fixed The Azure Connected Machine agent receives improvements on an ongoing basis. Thi ### Fixed -- The agent can now be installed on Windows systems with the [System objects: Require case insensitivity for non-Windows subsystems](/windows/security/threat-protection/security-policy-settings/system-objects-require-case-insensitivity-for-non-windows-subsystems) policy set to Disabled.-- The guest configuration policy agent will now automatically retry if an error is encountered during service start or restart events.+- The agent now installs successfully on Windows systems with the [System objects: Require case insensitivity for non-Windows subsystems](/windows/security/threat-protection/security-policy-settings/system-objects-require-case-insensitivity-for-non-windows-subsystems) policy set to Disabled. +- The guest configuration policy agent automatically retries if an error occurs during service start or restart events. - Fixed an issue that prevented guest configuration audit policies from successfully executing on Linux machines. ## Version 1.10 - August 2021 Fixed a bug that prevented extension management in the West US 3 region ### Fixed -- The agent will continue running if it is unable to write service start/stop events to the Windows application event log+- The agent continues running if it is unable to write service start/stop events to the Windows Application event log ## Version 1.7 - June 2021 Fixed a bug that prevented extension management in the West US 3 region - Improved reliability during onboarding: - Improved retry logic when HIMDS is unavailable- - Onboarding continues instead of aborting if OS information cannot be obtained + - Onboarding continues instead of aborting if OS information isn't available - Improved reliability when installing the Log Analytics agent for Linux extension on Red Hat and CentOS systems ## Version 1.6 - May 2021 Fixed a bug that prevented extension management in the West US 3 region - Added support for SUSE Enterprise Linux 12 - Updated Guest Configuration agent to version 1.26.12.0 to include:- - Policies are executed in a separate process. + - Policies execute in a separate process. - Added V2 signature support for extension validation. - Minor update to data logging. Fixed a bug that prevented extension management in the West US 3 region - Added support for private endpoints, which is currently in limited preview. - Expanded list of exit codes for azcmagent.-- Agent configuration parameters can now be read from a file with the `--config` parameter.-- Collect new instance metadata to determine if Microsoft SQL Server is installed on the server+- You can pass agent configuration parameters from a file with the `--config` parameter. +- Automatically detects the presence of Microsoft SQL Server on the server ### Fixed Resolved issue preventing the Custom Script Extension on Linux from installing s ### Fixed -Resolved issue where proxy configuration could be lost after upgrade on RPM-based distributions. 
+Resolved issue where proxy configuration resets after upgrade on RPM-based distributions. ## Version 1.1 - October 2020 This version is the first generally available release of the Azure Connected Mac - Support for preview agents (all versions older than 1.0) will be removed in a future service update. - Removed support for fallback endpoint `.azure-automation.net`. If you have a proxy, you need to allow the endpoint `*.his.arc.azure.com`.-- If the Connected Machine agent is installed on a virtual machine hosted in Azure, VM extensions can't be installed or modified from the Arc-enabled servers resource. This is to avoid conflicting extension operations being performed from the virtual machine's **Microsoft.Compute** and **Microsoft.HybridCompute** resource. Use the **Microsoft.Compute** resource for the machine for all extension operations.+- VM extensions can't be installed or modified from Azure Arc if the agent detects it's running in an Azure VM. This is to avoid conflicting extension operations being performed from the virtual machine's **Microsoft.Compute** and **Microsoft.HybridCompute** resource. Use the **Microsoft.Compute** resource for the machine for all extension operations. - Name of guest configuration process has changed, from *gcd* to *gcad* on Linux, and *gcservice* to *gcarcservice* on Windows. ### New features |
azure-arc | Agent Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-release-notes.md | The Azure Connected Machine agent receives improvements on an ongoing basis. To This page is updated monthly, so revisit it regularly. If you're looking for items older than six months, you can find them in [archive for What's new with Azure Arc-enabled servers agent](agent-release-notes-archive.md). +## Version 1.27 - February 2023 ++### Fixed ++- The extension service now correctly restarts when the Azure Connected Machine agent is being upgraded by Update Management Center +- Resolved issues with the hybrid connectivity component that could result in the "himds" service crashing, the server showing as "disconnected" in Azure, and connectivity issues with Windows Admin Center and SSH +- Improved handling of resource move scenarios that could impact Windows Admin Center and SSH connectivity +- Improved reliability when changing the [agent configuration mode](security-overview.md#local-agent-security-controls) from "monitor" mode to "full" mode. +- Increased the [resource limits](agent-overview.md#agent-resource-governance) for the Microsoft Sentinel DNS extension to improve log collection reliability +- Tenant IDs are now validated during onboarding for correctness + ## Version 1.26 - January 2023 > [!NOTE] This page is updated monthly, so revisit it regularly. If you're looking for ite - Improved logging during the installation process. - The install script for Windows now saves the MSI to the TEMP directory instead of the current directory. -## Version 1.22 - September 2022 --### Known issues --- When connecting a server and specifying multiple tags, the value of the last tag is used for all tags. You will need to fix the tags after onboarding to use the correct values.--### New features --- The default login flow for Windows computers now loads the local web browser to authenticate with Azure Active Directory instead of providing a device code. You can use the `--use-device-code` flag to return to the old behavior or [provide service principal credentials](onboard-service-principal.md) for a non-interactive authentication experience.-- If the resource group provided to `azcmagent connect` does not exist, the agent will try to create it and continue connecting the server to Azure.-- Added support for Ubuntu 22.04-- Added `--no-color` flag for all azcmagent commands to suppress the use of colors in terminals that do not support ANSI codes.--### Fixed --- The agent can now be installed on Red Hat Enterprise Linux 8 servers that have FIPS mode enabled.-- Agent telemetry is now sent through the proxy server if one is configured.-- Improved accuracy of network connectivity checks-- When switching the agent from monitoring mode to full mode, existing restrictions are now retained. Use [azcmagent clear](manage-agent.md#config) to reset individual configuration settings to the default state.- ## Next steps - Before evaluating or enabling Azure Arc-enabled servers across multiple hybrid machines, review [Connected Machine agent overview](agent-overview.md) to understand requirements, technical details about the agent, and deployment methods. |
azure-cache-for-redis | Cache Best Practices Kubernetes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-best-practices-kubernetes.md | Currently, Azure Cache for Redis uses ports 15000-15019 for clustered caches to To avoid connection interference, we recommend: -- Consider using a non-clustered cache instead+- Consider using a non-clustered cache or an Enterprise tier cache instead - Avoid configuring *Istio* sidecars on pods running Azure Cache for Redis client code ## Next steps |
azure-functions | Durable Functions Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-overview.md | Durable Functions is designed to work with all Azure Functions programming langu | Language stack | Azure Functions Runtime versions | Language worker version | Minimum bundles version | | - | - | - | - |-| .NET / C# / F# | Functions 1.0+ | In-process (GA) <br/> Out-of-process ([preview](https://github.com/microsoft/durabletask-dotnet#usage-with-azure-functions)) | n/a | +| .NET / C# / F# | Functions 1.0+ | In-process <br/> Out-of-process| n/a | | JavaScript/TypeScript | Functions 2.0+ | Node 8+ | 2.x bundles | | Python | Functions 2.0+ | Python 3.7+ | 2.x bundles | | PowerShell | Functions 3.0+ | PowerShell 7+ | 2.x bundles | |
azure-functions | Functions Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference.md | Identity-based connections are supported by the following components: | Azure Event Hubs triggers and bindings | All | [Azure Event Hubs extension version 5.0.0 or later][eventhubv5],<br/>[Extension bundle 3.3.0 or later][eventhubv5] | | Azure Service Bus triggers and bindings | All | [Azure Service Bus extension version 5.0.0 or later][servicebusv5],<br/>[Extension bundle 3.3.0 or later][servicebusv5] | | Azure Cosmos DB triggers and bindings | All | [Azure Cosmos DB extension version 4.0.0 or later][cosmosv4],<br/> [Extension bundle 4.0.2 or later][cosmosv4]|-| Durable Functions storage provider (Azure Storage) - Preview | All | [Durable Functions extension version 2.7.0 or later][durable-identity],<br/>[Extension bundle 3.3.0 or later][durable-identity] | +| Durable Functions storage provider (Azure Storage) | All | [Durable Functions extension version 2.7.0 or later][durable-identity],<br/>[Extension bundle 3.3.0 or later][durable-identity] | | Host-required storage ("AzureWebJobsStorage") - Preview | All | [Connecting to host storage with an identity](#connecting-to-host-storage-with-an-identity-preview) | [blobv5]: ./functions-bindings-storage-blob.md#install-extension Identity-based connections are supported by the following components: [servicebusv5]: ./functions-bindings-service-bus.md [cosmosv4]: ./functions-bindings-cosmosdb-v2.md?tabs=extensionv4 [tablesv1]: ./functions-bindings-storage-table.md#table-api-extension-[durable-identity]: ./durable/durable-functions-storage-providers.md#identity-based-connections-preview +[durable-identity]: ./durable/durable-functions-configure-durable-functions-with-credentials.md [!INCLUDE [functions-identity-based-connections-configuration](../../includes/functions-identity-based-connections-configuration.md)] |
azure-maps | How To Create Data Registries | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-create-data-registries.md | + + Title: Create Data Registry (preview) ++description: Learn how to create a data registry. ++ Last updated : 2/14/2023++++++# How to create data registry (preview) ++The [data registry] service enables you to register data content in an Azure Storage Account with your Azure Maps account. An example of data might include a collection of Geofences used in the Azure Maps Geofencing Service. Another example is ZIP files containing drawing packages (DWG) or GeoJSON files that Azure Maps Creator uses to create or update indoor maps. ++## Prerequisites ++- [Azure Maps account] +- [Subscription key] +- An [Azure storage account][create storage account] ++>[!IMPORTANT] +> +> - This article uses the `us.atlas.microsoft.com` geographical URL. If your account wasn't created in the United States, you must use a different geographical URL. For more information, see [Access to Creator Services](how-to-manage-creator.md#access-to-creator-services). +> - In the URL examples in this article, you will need to replace: +> - `{Azure-Maps-Subscription-key}` with your Azure Maps [subscription key]. +> - `{udid}` with the user data ID of your data registry. For more information, see [The user data ID](#the-user-data-id). ++## Prepare to register data in Azure Maps ++Before you can register data in Azure Maps, you need to create an environment containing all of the required components. You need a storage account with one or more containers that hold the files you wish to register and managed identities for authentication. This section explains how to prepare your Azure environment to register data in Azure Maps. ++### Create managed identities ++There are two types of managed identities: **system-assigned** and **user-assigned**. System-assigned managed identities have their lifecycle tied to the resource that created them. User-assigned managed identities can be used on multiple resources. For more information, see [managed identities for Azure resources][managed identity]. ++Use the following steps to create a managed identity and add it to your Azure Maps account. ++# [system-assigned](#tab/System-assigned) ++Create a system-assigned managed identity: ++1. Go to your Azure Maps account in the [Azure portal]. +1. Select **Identity** from the left menu. +1. Toggle the **Status** to **On**. ++# [user-assigned](#tab/User-assigned) ++Create a user-assigned managed identity: ++1. Go to the [Azure portal] and select **Create a resource**. +1. In the **Search services and marketplace** control, enter **user assigned managed identity**. +1. In the **Create User Assigned Managed Identity** page, select your subscription, resource group, region, and a name for your managed identity. +1. Select **Review + create**, then once ready, **Create**. ++ :::image type="content" source="./media/data-registry/create-user-assigned-managed-identity.png" lightbox="./media/data-registry/create-user-assigned-managed-identity.png" alt-text="A screenshot of the Create User Assigned Managed Identity page."::: ++1. In your Azure Maps account, select **Identity** in the **Settings** section of the left menu. +1. Select the **User assigned** tab. +1. Select **+ Add**. +1. In the **Add user assigned managed identity** screen, select the desired **Subscription** and managed identity. +1. 
Select **Add** ++ :::image type="content" source="./media/data-registry/add-user-assigned-managed-identity.png" lightbox="./media/data-registry/add-user-assigned-managed-identity.png" alt-text="A screenshot that demonstrates how to add a user assigned managed identity."::: ++The user-assigned managed identity should now be added to your Azure Maps account. ++++For more information, see [managed identities for Azure resources][managed identity]. ++### Create a container and upload data files ++Before adding files to a data registry, you must upload them into a container in your [Azure storage account][storage account overview]. Containers are similar to directories in a file system; they're how your files are organized in your Azure storage account. ++To create a container in the [Azure portal], follow these steps: ++1. From within your Azure storage account, select **Containers** from the **Data storage** section in the navigation pane. +1. Select **+ Container** in the **Containers** pane to bring up the **New container** pane. +1. Select **Create** to create the container. ++ :::image type="content" source="./media/data-registry/create-container.png" lightbox="./media/data-registry/create-container.png" alt-text="A screenshot of the new container page in an Azure storage account."::: ++ Once your container has been created, you can upload files into it. ++1. Once the container is created, select it. ++ :::image type="content" source="./media/data-registry/select-container.png" lightbox="./media/data-registry/select-container.png" alt-text="A screenshot showing the new container just created in an Azure storage account."::: ++1. Select **Upload** from the toolbar and select one or more files. +1. Select the **Upload** button. ++ :::image type="content" source="./media/data-registry/upload-blob-container.png" lightbox="./media/data-registry/upload-blob-container.png" alt-text="A screenshot of the upload blob page when creating a container."::: ++### Add a datastore ++Once you've created an Azure storage account with files uploaded into one or more containers, you're ready to create the datastore that links the storage accounts to your Azure Maps account. ++> [!IMPORTANT] +> All storage accounts linked to an Azure Maps account must be in the same geographic location. For more information, see [Azure Maps service geographic scope][geographic scope]. +> [!NOTE] +> If you do not have a storage account, see [Create a storage account][create storage account]. ++1. Select **Datastore** from the left menu in your Azure Maps account. +1. Select the **Add** button. An **Add datastore** screen appears on the right side. +1. Enter the desired **Datastore ID**, then select the **Subscription name** and **Storage account** from the drop-down lists. +1. Select **Add**. ++ :::image type="content" source="./media/data-registry/add-datastore.png" lightbox="./media/data-registry/add-datastore.png" alt-text="A screenshot showing the add datastore screen."::: ++The new datastore will now appear in the list of datastores. ++### Assign roles to managed identities and add them to the datastore ++Once your managed identities and datastore are created, you can add the managed identities to the datastore and simultaneously assign them the **Contributor** and **Storage Blob Data Reader** roles. 
While it's possible to add roles to your managed identities directly in the managed identity or storage account panes, you can do both steps at once by assigning the roles as you associate the identities with your Azure Maps datastore in the datastore pane. ++> [!NOTE] +> Each managed identity associated with the datastore will need the **Contributor** and **Storage Blob Data Reader** roles granted to it. If you do not have the required permissions to grant roles to managed identities, consult your Azure administrator. +To assign roles to your managed identities and associate them with a datastore: ++1. Select **Datastore** from the left menu in your **Azure Maps account**. +1. Select one or more datastores from the list, then **Assign roles**. +1. Select the **Managed identity** to associate with the selected datastore(s) from the drop-down list. +1. Select both **Contributor** and **Storage Blob Data Reader** in the **Roles to assign** drop-down list. ++ :::image type="content" source="./media/data-registry/assign-role-datastore.png" lightbox="./media/data-registry/assign-role-datastore.png" alt-text="A screenshot showing the assign roles to datastore screen."::: ++1. Select the **Assign** button. ++## Data registry properties ++With a datastore created in your Azure Maps account, you're ready to gather the properties required to create the data registry. ++These include the `AzureBlob` properties that you pass in the body of the HTTP request, and [the user data ID](#the-user-data-id) that you pass in the URL. ++### The AzureBlob ++The `AzureBlob` is a JSON object that defines properties required to create the data registry. ++| Property | Description | +|-|| +| `kind` | Defines the type of object being registered. Currently **AzureBlob** is the only supported kind. | +| `dataFormat` | The data format of the file located in **blobUrl**. Its format can either be **GeoJSON** for the spatial service or **ZIP** for the conversion service. | +| `msiClientId` | The ID of the managed identity being used to create the data registry. | +|`linkedResource`| The ID of the datastore registered in the Azure Maps account.<BR>The datastore contains a link to the file being registered. | +| `blobUrl` | A URL pointing to the location of the Azure blob, the file imported into your container. | ++The following two sections provide details on how to get the values to use for the [msiClientId](#the-msiclientid-property) and [blobUrl](#the-bloburl-property) properties. ++#### The msiClientId property ++The `msiClientId` property is the ID of the managed identity used to create the data registry. There are two types of managed identities: **system-assigned** and **user-assigned**. System-assigned managed identities have their lifecycle tied to the resource that created them. User-assigned managed identities can be used on multiple resources. For more information, see [What are managed identities for Azure resources?][managed identity]. ++# [system-assigned](#tab/System-assigned) ++When using system-assigned managed identities, you don't need to provide a value for the `msiClientId` property. The data registry service will automatically use the system-assigned identity of the Azure Maps account when `msiClientId` is null. ++# [user-assigned](#tab/User-assigned) ++The value used for the `msiClientId` property is the client ID of a user-assigned managed identity. ++1. In your Azure Maps account, select **Identity** from the left menu. +1. Hover over the name of the managed identity until it appears as a link, then select it. 
++ :::image type="content" source="./media/data-registry/select-managed-identity.png" lightbox="./media/data-registry/select-managed-identity.png" alt-text="A screenshot showing the identity page in the Azure Maps account with the new identity selected in the user assigned tab."::: ++1. Copy the **Client ID**. ++ :::image type="content" source="./media/data-registry/client-id.png" lightbox="./media/data-registry/client-id.png" alt-text="A screenshot showing how to select the client ID in the managed identities pane in Azure."::: ++++#### The blobUrl property ++The `blobUrl` property is the path to the file being registered. You can get this value from the container the file was added to. +1. Select your **storage account** in the **Azure portal**. +1. Select **Containers** from the left menu. +1. A list of containers appears. Select the container that contains the file you wish to register. +1. The container opens, showing a list of the files previously uploaded. +1. Select the desired file, then copy the URL. ++ :::image type="content" source="./media/data-registry/blobUrl.png" lightbox="./media/data-registry/blobUrl.png" alt-text="A screenshot showing how to select the URL used as the blobUrl property."::: ++### The user data ID ++The user data ID (`udid`) of the data registry is a user-defined GUID that must conform to the following regular expression pattern: ++```azurepowershell +^[a-fA-F0-9]{8}-[a-fA-F0-9]{4}-[a-fA-F0-9]{4}-[a-fA-F0-9]{4}-[a-fA-F0-9]{12}$ +``` ++> [!TIP] +> The `udid` is a user-defined GUID that must be supplied when creating a data registry. If you want to be certain you have a globally unique identifier (GUID), consider creating it by running a GUID-generating tool such as the Guidgen.exe command-line program (available with [Visual Studio][Visual Studio]). ++## Create a data registry ++Now that you have your storage account with the desired files linked to your Azure Maps account through the datastore and have gathered all required properties, you're ready to use the [data registry] API to register those files. If you have multiple files in your Azure storage account that you want to register, you'll need to run the register request for each file (`udid`). ++> [!NOTE] +> The maximum size of a file that can be registered with an Azure Maps datastore is one gigabyte. +To create a data registry: ++1. Provide the information needed to reference the storage account that is being added to the data registry in the body of your HTTP request. The information must be in JSON format and contain the following fields: ++ ```json + { + "kind": "AzureBlob", + "azureBlob": { + "dataFormat": "geojson", + "msiClientId": "{The client ID of the managed identity}", + "linkedResource": "{datastore ID}", + "blobUrl": "https://teststorageaccount.blob.core.windows.net/testcontainer/test.geojson" + } + } + ``` ++ For more information on the properties required in the HTTP request body, see [Data registry properties](#data-registry-properties). ++1. Once you have the body of your HTTP request ready, execute the following **HTTP PUT request** (a scripted sketch of this call appears after these steps): ++ ```http + https://us.atlas.microsoft.com/dataRegistries/{udid}?api-version=2022-12-01-preview&subscription-key={Azure-Maps-Subscription-key} + ``` ++ For more information on the `udid` property, see [The user data ID](#the-user-data-id). ++1. Copy the value of the **Operation-Location** key from the response header.
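The following is a minimal sketch, using the Python `requests` package rather than an official Azure Maps SDK, of how this registration call might be scripted. The subscription key, managed identity client ID, datastore ID, and blob URL are placeholder values to replace with your own.

```python
import uuid

import requests

subscription_key = "<Azure-Maps-Subscription-key>"  # placeholder
udid = str(uuid.uuid4())  # the user data ID must be a GUID

# Body of the register request; see "Data registry properties" above.
body = {
    "kind": "AzureBlob",
    "azureBlob": {
        "dataFormat": "geojson",
        "msiClientId": "<managed-identity-client-id>",  # placeholder; omit when using a system-assigned identity
        "linkedResource": "<datastore-id>",             # placeholder
        "blobUrl": "https://teststorageaccount.blob.core.windows.net/testcontainer/test.geojson",
    },
}

url = f"https://us.atlas.microsoft.com/dataRegistries/{udid}"
params = {"api-version": "2022-12-01-preview", "subscription-key": subscription_key}

# Register the file; the operation completes asynchronously.
response = requests.put(url, params=params, json=body)
response.raise_for_status()

# The status URL for the long-running operation is returned in the Operation-Location header.
print("Status URL:", response.headers.get("Operation-Location"))
```

The same pattern applies when re-registering or replacing a file: rerun the request with the same `udid` and an updated body.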
++> [!TIP] +> If the contents of a previously registered file are modified, the file fails its [data validation](#data-validation) and won't be usable in Azure Maps until it's re-registered. To re-register a file, rerun the register request, passing in the same [AzureBlob](#the-azureblob) used to create the original registration. +The value of the **Operation-Location** key is the status URL that you'll use to check the status of the data registry creation in the next section. It contains the operation ID used by the [Get operation][Get operation] API. ++> [!NOTE] +> The value of the **Operation-Location** key doesn't contain the `subscription-key`; you'll need to add it to the request URL when using it to check the data registry creation status. ++### Check the data registry creation status ++To (optionally) check the status of the data registry creation process, enter the status URL you copied in the [Create a data registry](#create-a-data-registry) section, and add your subscription key as a query string parameter. The request should look similar to the following URL: ++```http +https://us.atlas.microsoft.com/dataRegistries/operations/{udid}?api-version=2022-12-01-preview&subscription-key={Your-Azure-Maps-Primary-Subscription-key} +``` ++## Get a list of all files in the data registry ++To get a list of all files registered in an Azure Maps account, use the [List][list] request: ++```http +https://us.atlas.microsoft.com/dataRegistries?api-version=2022-12-01-preview&subscription-key={Azure-Maps-Subscription-key} +``` ++The following sample response shows the three possible statuses: completed, running, and failed: ++```json +{ + "value": [ + { + "udid": "f6495f62-94f8-0ec2-c252-45626f82fcb2", + "description": "Contoso Indoor Design", + "kind": "AzureBlob", + "azureBlob": { + "dataFormat": "zip", + "msiClientId": "3263cad5-ed8b-4829-b72b-3d1ba556e373", + "linkedResource": "my-storage-account", + "blobUrl": "https://mystorageaccount.blob.core.windows.net/my-container/my/blob/path1.zip", + "downloadURL": "https://us.atlas.microsoft.com/dataRegistries/f6495f62-94f8-0ec2-c252-45626f82fcb2/content?api-version=2022-12-01-preview", + "sizeInBytes": 29920, + "contentMD5": "CsFxZ2YSfxw3cRPlqokV0w==" + }, + "status": "Completed" + }, + { + "udid": "8b1288fa-1958-4a2b-b68e-13a7i5af7d7c", + "kind": "AzureBlob", + "azureBlob": { + "dataFormat": "geojson", + "msiClientId": "3263cad5-ed8b-4829-b72b-3d1ba556e373", + "linkedResource": "my-storage-account", + "blobUrl": "https://mystorageaccount.blob.core.windows.net/my-container/my/blob/path2.geojson", + "downloadURL": "https://us.atlas.microsoft.com/dataRegistries/8b1288fa-1958-4a2b-b68e-13a7i5af7d7c/content?api-version=2022-12-01-preview", + "sizeInBytes": 1339 + }, + "status": "Running" + }, + { + "udid": "7c1288fa-2058-4a1b-b68f-13a6h5af7d7c", + "description": "Contoso Geofence GeoJSON", + "kind": "AzureBlob", + "azureBlob": { + "dataFormat": "geojson", + "linkedResource": "my-storage-account", + "blobUrl": "https://mystorageaccount.blob.core.windows.net/my-container/my/blob/path3.geojson", + "downloadURL": "https://us.atlas.microsoft.com/dataRegistries/7c1288fa-2058-4a1b-b68f-13a6h5af7d7c/content?api-version=2022-12-01-preview", + "sizeInBytes": 1650, + "contentMD5": "rYpEfIeLbWZPyaICGEGy3A==" + }, + "status": "Failed", + "error": { + "code": "ContentMD5Mismatch", + "message": "Actual content MD5: sOJMJvFParkSxBsvvrPOMQ== doesn't match expected content MD5: CsFxZ2YSfxw3cRPlqokV0w==."
+ } + } + ] +} +``` ++The data returned by the list request is similar to the data provided when creating a registry, with a few additional properties: ++| Property | Description | +|-|--| +| contentMD5 | MD5 hash created from the contents of the file being registered. For more information, see [Data validation](#data-validation). | +| downloadURL | The download URL of the underlying data. Used to [Get content from a data registry](#get-content-from-a-data-registry). | +| sizeInBytes | The size of the content in bytes. | ++## Get content from a data registry ++Once you've uploaded one or more files to an Azure storage account, created an Azure Maps datastore to link to those files, and registered them using the [register][register or replace] API, you can access the data contained in the files. ++Use the `udid` to get the content of a file registered in an Azure Maps account: ++ ```http +https://us.atlas.microsoft.com/dataRegistries/{udid}/content?api-version=2022-12-01-preview&subscription-key={Azure-Maps-Subscription-key} +``` ++The contents of the file appear in the body of the response. For example, a text-based GeoJSON file looks similar to the following example: ++```json +{ + "type": "FeatureCollection", + "features": [ + { + "type": "Feature", + "geometry": { + "type": "Point", + "coordinates": [ + -122.126986, + 47.639754 + ] + }, + "properties": { + "geometryId": "001", + "radius": 500 + } + } + ] +} +``` ++The file type is returned in the `Content-Type` header of the response. ++Both text and binary files can be saved to a local hard drive or used directly in other processes, such as importing into the Azure Maps Creator conversion process. ++## Replace a data registry ++If you need to replace a previously registered file with another file, rerun the register request, passing in the same [AzureBlob](#the-azureblob) used to create the original registration, except for the [blobUrl](#the-bloburl-property). The `blobUrl` needs to be modified to point to the new file. ++## Data validation ++When you register a file in Azure Maps using the data registry API, an MD5 hash is created from the contents of the file, encoded into a 128-bit fingerprint, and saved in the `AzureBlob` as the `contentMD5` property. The hash stored in the `contentMD5` property is used to ensure the data integrity of the file. Because the MD5 algorithm always produces the same output for the same input, the data validation process can compare the `contentMD5` property captured when the file was registered against a hash of the file currently in the Azure storage account to verify that it's intact and unmodified. If the hashes don't match, for example because the file in the underlying storage account has changed, the validation fails. If you need to modify the contents of a file that has been registered in Azure Maps, you'll need to register it again.
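As a rough illustration of the validation described above, the following sketch recomputes a file's MD5 fingerprint locally and compares it with a `contentMD5` value returned by the List request. It assumes `contentMD5` is the Base64-encoded MD5 digest of the file's bytes, which is consistent with the sample values shown earlier; the file path and expected hash are placeholders.

```python
import base64
import hashlib


def content_md5(path: str) -> str:
    """Return the Base64-encoded MD5 digest of a file's bytes."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return base64.b64encode(digest.digest()).decode("ascii")


# Placeholders: a local copy of the registered file and the contentMD5 value
# reported by the List request for that registry entry.
expected = "CsFxZ2YSfxw3cRPlqokV0w=="
actual = content_md5("test.geojson")

if actual == expected:
    print("The local file matches the registered contentMD5 value.")
else:
    print("The file no longer matches its registration; re-register it to pass validation.")
```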
++[data registry]: /rest/api/maps/data-registry +[list]: /rest/api/maps/data-registry/list +[Register Or Replace]: /rest/api/maps/data-registry/register-or-replace +[Get operation]: /rest/api/maps/data-registry/get-operation ++[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account +[storage account overview]: /azure/storage/common/storage-account-overview +[create storage account]: /azure/storage/common/storage-account-create?tabs=azure-portal +[managed identity]: /azure/active-directory/managed-identities-azure-resources/overview +[subscription key]: quick-demo-map-app.md#get-the-primary-key-for-your-account +[Azure portal]: https://portal.azure.com/ +[Visual Studio]: https://visualstudio.microsoft.com/downloads/ +[geographic scope]: geographic-scope.md |
azure-monitor | Agent Data Sources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-data-sources.md | The following table lists the agent data sources that are currently available wi ## Configure data sources -To configure data sources for Log Analytics agents, go to the **Log Analytics workspaces** menu in the Azure portal and select a workspace. Select **Agents configuration**. Select the tab for the data source you want to configure. Use the links in the preceding table to access documentation for each data source and information on their configuration. +To configure data sources for Log Analytics agents, go to the **Log Analytics workspaces** menu in the Azure portal and select a workspace. Select **Legacy agents management**. Select the tab for the data source you want to configure. Use the links in the preceding table to access documentation for each data source and information on their configuration. Any configuration is delivered to all agents connected to that workspace. You can't exclude any connected agents from this configuration. |
azure-monitor | Agent Linux Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-linux-troubleshoot.md | A clean reinstall of the agent fixes most issues. This task might be the first s Extra configurations | `/etc/opt/microsoft/omsagent/<workspace id>/conf/omsagent.d/*.conf` > [!NOTE]- > Editing configuration files for performance counters and Syslog is overwritten if the collection is configured from the [agent's configuration](../agents/agent-data-sources.md#configure-data-sources) in the Azure portal for your workspace. To disable configuration for all agents, disable collection from **Agents configuration**. For a single agent, run the following script: + > Editing configuration files for performance counters and Syslog is overwritten if the collection is configured from the [agent's configuration](../agents/agent-data-sources.md#configure-data-sources) in the Azure portal for your workspace. To disable configuration for all agents, disable collection from **Legacy agents management**. For a single agent, run the following script: > > `sudo /opt/microsoft/omsconfig/Scripts/OMS_MetaConfigHelper.py --disable && sudo rm /etc/opt/omi/conf/omsconfig/configuration/Current.mof* /etc/opt/omi/conf/omsconfig/configuration/Pending.mof*` |
azure-monitor | Agent Linux | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-linux.md | For the network requirements for the Linux agent, see [Log Analytics agent overv ### Workspace ID and key -Regardless of the installation method used, you need the workspace ID and key for the Log Analytics workspace that the agent will connect to. Select the workspace from the **Log Analytics workspaces** menu in the Azure portal. Under the **Settings** section, select **Agents management**. +Regardless of the installation method used, you need the workspace ID and key for the Log Analytics workspace that the agent will connect to. Select the workspace from the **Log Analytics workspaces** menu in the Azure portal. Under the **Settings** section, select **Agents**. [](media/log-analytics-agent/workspace-details.png#lightbox) |
azure-monitor | Agent Manage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-manage.md | To download the latest version of the Windows agent from your Log Analytics work 1. In your list of Log Analytics workspaces, select the workspace. -1. In your Log Analytics workspace, select the **Agents Management** tile and then select **Windows Servers**. +1. In your Log Analytics workspace, select the **Agents** tile and then select **Windows Servers**. 1. On the **Windows Servers** screen, select the appropriate **Download Windows Agent** version to download depending on the processor architecture of the Windows operating system. |
azure-monitor | Agent Windows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-windows.md | Configure .NET Framework 4.6 or later to support secure cryptography because by ### Workspace ID and key -Regardless of the installation method used, you'll require the workspace ID and key for the Log Analytics workspace that the agent will connect to. Select the workspace from the **Log Analytics workspaces** menu in the Azure portal. Then in the **Settings** section, select **Agents management**. +Regardless of the installation method used, you'll require the workspace ID and key for the Log Analytics workspace that the agent will connect to. Select the workspace from the **Log Analytics workspaces** menu in the Azure portal. Then in the **Settings** section, select **Agents**. [](media/log-analytics-agent/workspace-details.png#lightbox) |
azure-monitor | Azure Monitor Agent Extension Versions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-extension-versions.md | description: This article describes the version details for the Azure Monitor ag Previously updated : 1/30/2023 Last updated : 2/22/2023 We strongly recommended to update to the latest version at all times, or opt in ## Version details | Release Date | Release notes | Windows | Linux | |:|:|:|:|-| Jan 2023 | <ul><li>Fixed issue related to incorrect *EventLevel* and *Task* values for Log Analytics *Event* table, to match Windows Event Viewer values</li><li>Added missing columns for IIS logs - *TimeGenerated, Time, Date, Computer, SourceSystem, AMA, W3SVC, SiteName*</li><li>Reliability improvements for metrics collection</li><li>Fixed machine restart issues on for Arc-enabled servers related to repeated calls to HIMDS service</li></ul> | 1.12.0.0 | None | +| Jan 2023 | **Linux** <ul><li>RHEL 9 and Amazon Linux 2 support</li><li>Update to OpenSSL 1.1.1s and require TLS 1.2 or higher</li><li>Performance improvements</li><li>Improvements in Garbage Collection for persisted disk cache and handling corrupted cache files better</li><li>**Fixes** <ul><li>Set agent service memory limit for CentOS/RedHat 7 distros. Resolved MemoryMax parsing error</li><li>Fixed modifying rsyslog system-wide log format caused by installer on RedHat/Centos 7.3</li><li>Fixed permissions to config directory</li><li>Installation reliability improvements</li><li>Fixed permissions on default file so rpm verification doesn't fail</li><li>Added traceFlags setting to enable trace logs for agent</li></ul></li></ul> **Windows** <ul><li>Fixed issue related to incorrect *EventLevel* and *Task* values for Log Analytics *Event* table, to match Windows Event Viewer values</li><li>Added missing columns for IIS logs - *TimeGenerated, Time, Date, Computer, SourceSystem, AMA, W3SVC, SiteName*</li><li>Reliability improvements for metrics collection</li><li>Fixed machine restart issues on for Arc-enabled servers related to repeated calls to HIMDS service</li></ul> | 1.12.0.0 | 1.25.1 | | Nov-Dec 2022 | <ul><li>Support for air-gapped clouds added for [Windows MSI installer for clients](./azure-monitor-agent-windows-client.md) </li><li>Reliability improvements for using AMA with Custom Metrics destination</li><li>Performance and internal logging improvements</li></ul> | 1.11.0.0 | None | | Oct 2022 | **Windows** <ul><li>Increased reliability of data uploads</li><li>Data quality improvements</li></ul> **Linux** <ul><li>Support for `http_proxy` and `https_proxy` environment variables for [network proxy configurations](./azure-monitor-agent-data-collection-endpoint.md#proxy-configuration) for the agent</li><li>[Text logs](./data-collection-text-log.md) <ul><li>Network proxy support enabled</li><li>Fixed missing `_ResourceId`</li><li>Increased maximum line size support to 1MB</li></ul></li><li>Support ingestion of syslog events whose timestamp is in the future</li><li>Performance improvements</li><li>Fixed `diskio` metrics instance name dimension to use the disk mount path(s) instead of the device name(s)</li><li>Fixed world writable file issue to lockdown write access to certain agent logs and configuration files stored locally on the machine</li></ul> | 1.10.0.0 | 1.24.2 | | Sep 2022 | Reliability improvements | 1.9.0.0 | None | |
azure-monitor | Azure Monitor Agent Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-migration.md | -Azure Monitor Agent provides the following benefits over legacy agents: +In addition to consolidating and improving upon the legacy Log Analytics agents, Azure Monitor Agent provides immediate benefits in **cost savings, simplified management, security, and performance**. [Learn more about these benefits](./azure-monitor-agent-overview.md#benefits). -- **Security**- - Enhanced security through Managed Identity and Azure Active Directory (Azure AD) tokens (for clients). -- **Performance**- - The AMA agent event throughput is 25% better than the MMA agent. -- **Cost savings** by [using data collection rules](data-collection-rule-azure-monitor-agent.md). Using DCRs is one of the most useful advantages of using Azure Monitor Agent:- - DCRs let you configure data collection for specific machines connected to a workspace as compared to the "all or nothing" approach of legacy agents. - - With DCRs, you can define which data to ingest and which data to filter out to reduce workspace clutter and save on costs. -- **Simpler management** of data collection, including ease of troubleshooting:- - Easy *multihoming* on Windows and Linux. - - Centralized, "in the cloud" agent configuration makes every action simpler and more easily scalable throughout the data collection lifecycle, from onboarding to deployment to updates and changes over time. - - Greater transparency and control of more capabilities and services, such as Microsoft Sentinel, Defender for Cloud, and VM Insights. -- **A single agent** that consolidates all features necessary to address all telemetry data collection needs across servers and client devices running Windows 10 or 11. A single agent is the goal, although Azure Monitor Agent currently converges with the Log Analytics agents. -## Migration plan considerations +## Migration guidance -Your migration plan to the Azure Monitor Agent should take into account: +### Before you begin +1. Review and follow the **[prerequisites](./azure-monitor-agent-manage.md#prerequisites)** for use with Azure Monitor Agent. + - For non-Azure and on-premises servers, [installing the Azure Arc agent](../../azure-arc/servers/agent-overview.md) is required, though it's not mandatory to use Azure Arc for server management overall. Using Azure Arc for this purpose incurs no additional cost. +2. **Service (legacy solutions) requirements:** The legacy Log Analytics agents are used by various Azure services to collect required data. If you're not using any additional Azure service, you may skip this step altogether. + - Use the [AMA Migration Helper](./azure-monitor-agent-migration-tools.md#using-ama-migration-helper) to **discover solutions enabled** on your workspace(s) that use the legacy agents, including the **per-solution migration recommendation<sup>1</sup>** shown on the `Workspace overview` tab. + - If you use Microsoft Sentinel, see [Gap analysis for Microsoft Sentinel](../../sentinel/ama-migrate.md#gap-analysis-between-agents) for a comparison of the extra data collected by Microsoft Sentinel. +3. **Agent coexistence:** + - If you're setting up a *new environment* with resources, such as deployment scripts and onboarding templates, install Azure Monitor Agent together with a legacy agent in your new environment to decrease the migration effort later.
+ - Azure Monitor Agent **can run alongside the legacy Log Analytics agents on the same machine** so that you can continue to use existing functionality during evaluation or migration. You can begin the transition, but ensure you understand the **limitations below**: + - Be careful when you collect duplicate data from the same machine, as this could skew query results, affect downstream features like alerts, dashboards, workbooks and generate more charges for data ingestion and retention. To avoid data duplication, ensure the agents are *collecting data from different machines* or *sending the data to different destinations*. Additionally, + - For **Defender for Cloud**, you will only be [billed once per machine](../../defender-for-cloud/auto-deploy-azure-monitoring-agent.md#impact-of-running-with-both-the-log-analytics-and-azure-monitor-agents) when running both agents + - For **Sentinel**, you can easily [disable the legacy connector](../../sentinel/ama-migrate.md#recommended-migration-plan) to stop ingestion of logs from legacy agents. + - Running two telemetry agents on the same machine consumes double the resources, including but not limited to CPU, memory, storage space, and network bandwidth. -- **Service (legacy Solutions) requirements:** - - Review [Azure Monitor Agent's supported services list](agents-overview.md#supported-services-and-features) to ensure that Azure Monitor Agent supports the services you require. If you currently use service(s) in preview, start testing your scenarios during the preview phase. This will save time and ensure you're ready to deploy to production as soon as the service becomes generally available. Moreover you benefit from added security and reduced cost immediately. - - Use the [AMA Migration Helper](./azure-monitor-agent-migration-tools.md#using-ama-migration-helper) to *discover what solutions and features you're using today that depend on the legacy agents*. - - If you use Microsoft Sentinel, see [Gap analysis for Microsoft Sentinel](../../sentinel/ama-migrate.md#gap-analysis-between-agents) for a comparison of the extra data collected by Microsoft Sentinel. +<sup>1</sup> Start testing your scenarios during the preview phase. This will save time, avoid surprises later and ensure you're ready to deploy to production as soon as the service becomes generally available. Moreover you benefit from added security and reduced cost immediately. -- **Installing Azure Monitor Agent alongside a legacy agent:** - - If you're setting up a *new environment* with resources, such as deployment scripts and onboarding templates, assess the effort of migrating to Azure Monitor Agent later. If the setup will take a significant amount of rework, install Azure Monitor Agent together with a legacy agent in your new environment to decrease the migration effort later. - - Azure Monitor Agent **can run alongside the legacy Log Analytics agents on the same machine** so that you can continue to use existing functionality during evaluation or migration. You can begin the transition, but ensure you understand the **limitations below**: - - Be careful when you collect duplicate data from the same machine, as this could skew query results, affect downstream features like alerts, dashboards, workbooks and generate more charges for data ingestion and retention. To avoid data duplication, ensure the agents are *collecting data from different machines* or *sending the data to different destinations*. 
Additionally, - - For **Defender for Cloud**, you will only be [billed once per machine](../../defender-for-cloud/auto-deploy-azure-monitoring-agent.md#impact-of-running-with-both-the-log-analytics-and-azure-monitor-agents) when running both agents - - For **Sentinel**, you can easily [disable the legacy connector](../../sentinel/ama-migrate.md#recommended-migration-plan) to stop ingestion of logs from legacy agents. - - Running two telemetry agents on the same machine consumes double the resources, including but not limited to CPU, memory, storage space, and network bandwidth. +### Migration steps + -## Prerequisites +1. **[Create data collection rules](./data-collection-rule-azure-monitor-agent.md#create-a-data-collection-rule)**. You can use the [DCR generator](./azure-monitor-agent-migration-tools.md#installing-and-using-dcr-config-generator)<sup>1</sup> to **automatically convert your legacy agent configuration into data collection rule templates**. Review the generated rules before you create them to take advantage of benefits like filtering, granular targeting (per machine), and other optimizations. -Review the [prerequisites](./azure-monitor-agent-manage.md#prerequisites) for use with Azure Monitor Agent. For non-Azure servers, [installing the Azure Arc agent](../../azure-arc/servers/agent-overview.md) is an important prerequisite that then helps to install the agent extension and other required extensions. Using Azure Arc for this purpose comes at no added cost. It's not mandatory to use Azure Arc for server management overall. You can continue using your existing non-Azure management solutions. After the Azure Arc agent is installed, you can follow the same guidance in this article across Azure and non-Azure for migration. +2. Deploy extensions and DCR-associations: + 1. **Test first** by deploying extensions<sup>2</sup> and DCR-associations on a few non-production machines. You can also deploy side by side on machines running legacy agents (see the agent coexistence guidance in the section above). + 2. Once data starts flowing via Azure Monitor Agent, **compare it with legacy agent data** to ensure there are no gaps. You can do this by joining with the `Category` column in the [Heartbeat](/azure/azure-monitor/reference/tables/heartbeat) table, which indicates 'Azure Monitor Agent' for the new data collection. + 3. After testing, you can **roll out broadly**<sup>3</sup> using [built-in policies]() for at-scale deployment of extensions and DCR-associations. **Using policy also ensures automatic deployment of extensions and DCR-associations for any new machines in the future.** + 4. Use the [AMA Migration Helper](./azure-monitor-agent-migration-tools.md#using-ama-migration-helper) to **monitor the at-scale migration** across your machines. + +3. **Validate** that Azure Monitor Agent is collecting data as expected and all **downstream dependencies**, such as dashboards, alerts, and workbooks, function properly. ++4. 
Clean up: After you confirm that Azure Monitor Agent is collecting data properly, you may **choose to either disable or uninstall the legacy Log Analytics agents** as applicable: + - If you have migrated to Azure Monitor Agent for selected features/solutions and you need to continue using the legacy Log Analytics agent for others, you can selectively disable or "turn off" legacy agent collection by editing the Log Analytics workspace configurations directly. + - If you've migrated to Azure Monitor Agent for all your requirements, you may [uninstall the Log Analytics agent](./agent-manage.md#uninstall-agent) from monitored resources. Clean up any configuration files, workspace keys, or certificates that were used previously by the Log Analytics agent. + - Don't uninstall the legacy agent if you need to use it for uploading data to System Center Operations Manager. -To ensure safe deployment during migration, begin testing with few resources running Azure Monitor Agent in your nonproduction environment. After you validate the data collected on these test resources, roll out to production by following the same steps. -To start collecting some of the existing data types, see [Create new data collection rules](./data-collection-rule-azure-monitor-agent.md#create-a-data-collection-rule). Alternatively, you can use the [DCR Config Generator](./azure-monitor-agent-migration-tools.md#installing-and-using-dcr-config-generator) to convert existing legacy agent configuration into data collection rules. +<sup>1</sup> The DCR generator only converts the configurations for Windows event logs, Linux syslog, and performance counters. Support for additional features and solutions will be available soon. +<sup>2</sup> In addition to the Azure Monitor Agent extension, you need to deploy additional extensions required for specific solutions. See [other extensions to be installed here](./agents-overview.md#supported-services-and-features). +<sup>3</sup> Before you deploy a large number of agents, consider [configuring the workspace](agent-data-sources.md) to disable data collection for the Log Analytics agent. If you leave data collection for the Log Analytics agent enabled, you might collect duplicate data and increase your costs. You might choose to collect duplicate data for a short period during migration until you verify that you've deployed and configured Azure Monitor Agent correctly. -After you *validate* that data is flowing as expected with Azure Monitor Agent, check the `Category` column in the [Heartbeat](/azure/azure-monitor/reference/tables/heartbeat) table for the value *Azure Monitor Agent* for AMA collected data. Ensure it matches data flowing through the existing Log Analytics agent. -## At-scale migration using Azure Policy --We recommend using [Azure Policy](../../governance/policy/overview.md) to migrate a large number of agents. Start by analyzing your current monitoring setup with the Log Analytics agent by using the [AMA Migration Helper](./azure-monitor-agent-migration-tools.md#using-ama-migration-helper). Use this tool to find sources like virtual machines, virtual machine scale sets, and non-Azure servers. --Use the [DCR Config Generator](./azure-monitor-agent-migration-tools.md#installing-and-using-dcr-config-generator) to migrate legacy agent configuration, including data sources and destinations, from the workspace to the new DCRs. 
--> [!IMPORTANT] -> Before you deploy a large number of agents, consider [configuring the workspace](agent-data-sources.md) to disable data collection for the Log Analytics agent. If you leave data collection for the Log Analytics agent enabled, you might collect duplicate data and increase your costs. You might choose to collect duplicate data for a short period during migration until you verify that you've deployed and configured Azure Monitor Agent correctly. --Validate that Azure Monitor Agent is collecting data as expected and all downstream dependencies, such as dashboards, alerts, and workbooks, function properly. --After you confirm that Azure Monitor Agent is collecting data properly, [uninstall the Log Analytics agent](./agent-manage.md#uninstall-agent) from monitored resources. Clean up any configuration files, workspace keys, or certificates that were used previously by the Log Analytics agent. --> [!IMPORTANT] -> Don't uninstall the legacy agent if you need to use it for System Center Operations Manager scenarios or others solutions not yet available on Azure Monitor Agent. ## Next steps |
azure-monitor | Data Sources Custom Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-sources-custom-logs.md | Use the following procedure to define a custom log file. Scroll to the end of th The Custom Log wizard runs in the Azure portal and allows you to define a new custom log to collect. -1. In the Azure portal, select **Log Analytics workspaces** > your workspace > **Settings**. -1. Select **Custom logs**. +1. In the Azure portal, select **Log Analytics workspaces** > your workspace. +1. Under the **Classic** section, select **Legacy custom logs**. 1. By default, all configuration changes are automatically pushed to all agents. For Linux agents, a configuration file is sent to the Fluentd data collector. 1. Select **Add** to open the Custom Log wizard. The entire log entry will be stored in a single property called **RawData**. You Use the following process in the Azure portal to remove a custom log that you previously defined. -1. From the **Data** menu in the **Advanced Settings** for your workspace, select **Custom Logs** to list all your custom logs. +1. On the left, under the **Classic** section for your workspace, select **Legacy custom Logs** to list all your custom logs. 1. Select **Remove** next to the custom log to remove the log. ## Data collection |
azure-monitor | Data Sources Performance Counters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-sources-performance-counters.md | Performance counters in Windows and Linux provide insight into the performance o  ## Configure performance counters-Configure performance counters from the [Agents configuration menu](../agents/agent-data-sources.md#configure-data-sources) for the Log Analytics workspace. +Configure performance counters from the [Legacy agents management menu](../agents/agent-data-sources.md#configure-data-sources) for the Log Analytics workspace. When you first configure Windows or Linux performance counters for a new workspace, you're given the option to quickly create several common counters. They're listed with a checkbox next to each. Ensure that any counters you want to initially create are selected and then select **Add the selected performance counters**. |
azure-monitor | Data Sources Windows Events | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-sources-windows-events.md | Windows event logs are one of the most common [data sources](../agents/agent-dat ## Configure Windows event logs -Configure Windows event logs from the [Agents configuration menu](../agents/agent-data-sources.md#configure-data-sources) for the Log Analytics workspace. +Configure Windows event logs from the [Legacy agents management menu](../agents/agent-data-sources.md#configure-data-sources) for the Log Analytics workspace. Azure Monitor only collects events from Windows event logs that are specified in the settings. You can add an event log by entering the name of the log and selecting **+**. For each log, only the events with the selected severities are collected. Check the severities for the particular log that you want to collect. You can't provide any other criteria to filter events. As you enter the name of an event log, Azure Monitor provides suggestions of common event log names. If the log you want to add doesn't appear in the list, you can still add it by entering the full name of the log. You can find the full name of the log by using event viewer. In event viewer, open the **Properties** page for the log and copy the string from the **Full Name** field. -[](media/data-sources-windows-events/configure.png#lightbox) +[](media/data-sources-windows-events/configure.png#lightbox) > [!IMPORTANT] > You can't configure collection of security events from the workspace by using the Log Analytics agent. You must use [Microsoft Defender for Cloud](../../security-center/security-center-enable-data-collection.md) or [Microsoft Sentinel](../../sentinel/connect-windows-security-events.md) to collect security events. The [Azure Monitor agent](azure-monitor-agent-overview.md) can also be used to collect security events. |
azure-monitor | Diagnostics Extension Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/diagnostics-extension-logs.md | For information on how to install and configure the diagnostics extension, see [ To enable collection of diagnostics extension data from an Azure Storage account: 1. In the Azure portal, go to **Log Analytics Workspaces** and select your workspace.-1. Select **Storage accounts logs** in the **Workspace Data Sources** section of the menu. +1. Select **Legacy storage account logs** in the **Classic** section of the menu. 1. Select **Add**. 1. Select the **Storage account** that contains the data to collect. 1. Select the **Data Type** you want to collect. |
azure-monitor | Log Analytics Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/log-analytics-agent.md | This section explains how to install the Log Analytics agent on different types - Log Analytics VM extension for [Windows](../../virtual-machines/extensions/oms-windows.md) or [Linux](../../virtual-machines/extensions/oms-linux.md) can be installed with the Azure portal, Azure CLI, Azure PowerShell, or an Azure Resource Manager template. - [Microsoft Defender for Cloud can provision the Log Analytics agent](../../security-center/security-center-enable-data-collection.md) on all supported Azure VMs and any new ones that are created if you enable it to monitor for security vulnerabilities and threats. - Install for individual Azure virtual machines [manually from the Azure portal](../vm/monitor-virtual-machine.md?toc=%2fazure%2fazure-monitor%2ftoc.json).-- Connect the machine to a workspace from the **Virtual machines** option in the **Log Analytics workspaces** menu in the Azure portal.+- Connect the machine to a workspace from the **Virtual machines (deprecated)** option in the **Log Analytics workspaces** menu in the Azure portal. ### Windows virtual machine on-premises or in another cloud |
azure-monitor | Itsm Convert Servicenow To Webhook | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsm-convert-servicenow-to-webhook.md | If you're syncing work items between ServiceNow and an Azure Log Analytics works :::image type="content" source="media/itsmc-convert-servicenow-to-webhook/alerts-itsmc-service-now-for-loop.png" alt-text="Screenshot showing loop that imports data into a Log Analytics workspace."::: -The data is visible in the **Custom logs** section of your Log Analytics workspace. +The data is visible in the **Legacy custom logs** section of your Log Analytics workspace. ## Sample JSON schema for a change_request table |
azure-monitor | Convert Classic Resource | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/convert-classic-resource.md | If you don't need to migrate an existing resource, and instead want to create a > * Diagnostic settings use a different export format/schema than continuous export. Migrating breaks any existing integrations with Azure Stream Analytics. > * Diagnostic settings export might increase costs. For more information, see [Export telemetry from Application Insights](export-telemetry.md#diagnostic-settings-based-export). -- Check your current retention settings under **General** > **Usage and estimated costs** > **Data Retention** for your Log Analytics workspace. This setting affects how long any new ingested data is stored after you migrate your Application Insights resource.+- Check your current retention settings under **Settings** > **Usage and estimated costs** > **Data Retention** for your Log Analytics workspace. This setting affects how long any new ingested data is stored after you migrate your Application Insights resource. > [!NOTE] > - If you currently store Application Insights data for longer than the default 90 days and want to retain this longer retention period after migration, adjust your [workspace retention settings](../logs/data-retention-archive.md?tabs=portal-1%2cportal-2#set-retention-and-archive-policy-by-table). The legacy **Continuous export** functionality isn't supported for workspace-bas You don't have to make any changes prior to migrating. This message alerts you that your current Application Insights retention settings aren't set to the default 90-day retention period. This warning message means you might want to modify the retention settings for your Log Analytics workspace prior to migrating and starting to ingest new data. -You can check your current retention settings for Log Analytics under **General** > **Usage and estimated costs** > **Data Retention** in the Log Analytics UI. This setting affects how long any new ingested data is stored after you migrate your Application Insights resource. +You can check your current retention settings for Log Analytics under **Settings** > **Usage and estimated costs** > **Data Retention** in the Log Analytics UI. This setting affects how long any new ingested data is stored after you migrate your Application Insights resource. ## Workspace-based resource changes |
azure-monitor | Opencensus Python Request | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opencensus-python-request.md | First, instrument your Python application with latest [OpenCensus Python SDK](./ ) ``` -3. Make sure AzureExporter is properly configured in your `settings.py` under `OPENCENSUS`. For requests from urls that you don't wish to track, add them to `EXCLUDELIST_PATHS`. +3. Make sure AzureExporter is configured properly in your `settings.py` under `OPENCENSUS`. For requests from urls that you don't wish to track, add them to `EXCLUDELIST_PATHS`. + ```python OPENCENSUS = { OpenCensus doesn't have an extension for FastAPI. To write your own FastAPI midd HTTP_URL = COMMON_ATTRIBUTES['HTTP_URL'] HTTP_STATUS_CODE = COMMON_ATTRIBUTES['HTTP_STATUS_CODE'] - APPINSIGHTS_CONNECTION_STRING='<your-appinsights_connection-string-here>' - exporter=AzureExporter(connection_string=f'{APPINSIGHTS_CONNECTION_STRING}') + exporter=AzureExporter(connection_string='<your-appinsights-connection-string-here>') sampler=ProbabilitySampler(1.0) # fastapi middleware for opencensus |
azure-monitor | Status Monitor V2 Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/status-monitor-v2-overview.md | Each of these options is described in the [detailed instructions](status-monitor ### Does Application Insights Agent support ASP.NET Core applications? - Yes. Starting from [Application Insights Agent 2.0.0-beta1](https://www.powershellgallery.com/packages/Az.ApplicationMonitor/2.0.0-beta1), ASP.NET Core applications hosted in IIS are supported. + Yes. Starting from [Application Insights Agent 2.0.0](https://www.powershellgallery.com/packages/Az.ApplicationMonitor/2.0.0), ASP.NET Core applications hosted in IIS are supported. ### How do I verify that the enablement succeeded? See the dedicated [troubleshooting article](/troubleshoot/azure/azure-monitor/ap The release note updates are listed here. +### 2.0.0 ++- Updated the Application Insights .NET/.NET Core SDK to 2.21.0-redfield + ### 2.0.0-beta3 - Updated the Application Insights .NET/.NET Core SDK to 2.20.1-redfield |
azure-monitor | Visual Studio Codelens | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/visual-studio-codelens.md | - Title: Application Insights telemetry in Visual Studio CodeLens | Microsoft Docs -description: Quickly access your Application Insights request and exception telemetry with CodeLens in Visual Studio. - Previously updated : 03/17/2017-----# Application Insights telemetry in Visual Studio CodeLens -Methods in the code of your web app can be annotated with telemetry about run-time exceptions and request response times. If you install [Azure Application Insights](./app-insights-overview.md) in your application, the telemetry appears in Visual Studio [CodeLens](/visualstudio/ide/find-code-changes-and-other-history-with-codelens) - the notes at the top of each function where you're used to seeing useful information such as the number of places the function is referenced or the last person who edited it. -- --> [!NOTE] -> Application Insights in CodeLens is available in Visual Studio 2015 Update 3 and later, or with the latest version of [Developer Analytics Tools extension](https://visualstudiogallery.msdn.microsoft.com/82367b81-3f97-4de1-bbf1-eaf52ddc635a). CodeLens is available in the Enterprise and Professional editions of Visual Studio. -> -> --## Where to find Application Insights data -Look for Application Insights telemetry in the CodeLens indicators of the public request methods of your web application. -CodeLens indicators are shown above method and other declarations in C# and Visual Basic code. If Application Insights data is available for a method, you'll see indicators for requests and exceptions such as "100 requests, 1% failed" or "10 exceptions." Click a CodeLens indicator for more details. --> [!TIP] -> Application Insights request and exception indicators may take a few extra seconds to load after other CodeLens indicators appear. -> -> --## Exceptions in CodeLens - --The exception CodeLens indicator shows the number of exceptions that have occurred in the past 24 hours from the 15 most frequently occurring exceptions in your application during that period, while processing the request served by the method. --To see more details, click the exceptions CodeLens indicator: --* The percentage change in number of exceptions from the most recent 24 hours relative to the prior 24 hours -* Choose **Go to code** to navigate to the source code for the function throwing the exception -* Choose **Search** to query all instances of this exception that have occurred in the past 24 hours -* Choose **Trend** to view a trend visualization for occurrences of this exception in the past 24 hours -* Choose **View all exceptions in this app** to query all exceptions that have occurred in the past 24 hours -* Choose **Explore exception trends** to view a trend visualization for all exceptions that have occurred in the past 24 hours. --> [!TIP] -> If you see "0 exceptions" in CodeLens but you know there should be exceptions, check to make sure the right Application Insights resource is selected in CodeLens. To select another resource, right-click on your project in the Solution Explorer and choose **Application Insights > Choose Telemetry Source**. CodeLens is only shown for the 15 most frequently occurring exceptions in your application in the past 24 hours, so if an exception is the 16th most frequently or less, you'll see "0 exceptions." Exceptions from ASP.NET views may not appear on the controller methods that generated those views. 
-> -> [!TIP] -> If you see "? exceptions" in CodeLens, you need to associate your Azure account with Visual Studio or your Azure account credential may have expired. In either case, click "? exceptions" and choose **Add an account...** to enter your credentials. -> -> --## Requests in CodeLens - --The request CodeLens indicator shows the number of HTTP requests that been serviced by a method in the past 24 hours, plus the percentage of those requests that failed. --To see more details, click the requests CodeLens indicator: --* The absolute and percentage changes in number of requests, failed requests, and average response times over the past 24 hours compared to the prior 24 hours -* The reliability of the method, calculated as the percentage of requests that did not fail in the past 24 hours -* Choose **Search** for requests or failed requests to query all the (failed) requests that occurred in the past 24 hours -* Choose **Trend** to view a trend visualization for requests, failed requests, or average response times in the past 24 hours. -* Choose the name of the Application Insights resource in the upper left corner of the CodeLens details view to change which resource is the source for CodeLens data. --## <a name="next"></a>Next steps -* **[Working with Application Insights in Visual Studio](./visual-studio.md)**. Search telemetry, see data in CodeLens, and configure Application Insights. All within Visual Studio. -* **[Working with the Application Insights portal](./overview-dashboard.md)**. Dashboards, powerful diagnostic and analytic tools, alerts, a live dependency map of your application, and telemetry export. - |
azure-monitor | Visual Studio | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/visual-studio.md | - Title: Debug in Visual Studio with Application Insights -description: Learn about web app performance analysis and diagnostics during debugging and in production. - Previously updated : 03/17/2017-----# Debug your applications with Application Insights in Visual Studio -In Visual Studio 2015 and later, you can analyze performance and diagnose issues in your ASP.NET web app both in debugging and in production by using telemetry from [Application Insights](./app-insights-overview.md). --If you created your ASP.NET web app by using Visual Studio 2017 or later, it already has the Application Insights SDK. Otherwise, if you haven't done so already, [add Application Insights to your app](./asp-net.md). --To monitor your app when it's in live production, you normally view the Application Insights telemetry in the [Azure portal](https://portal.azure.com), where you can set alerts and apply powerful monitoring tools. But for debugging, you can also search and analyze the telemetry in Visual Studio. --You can use Visual Studio to analyze telemetry both from your production site and from debugging runs on your development machine. In the latter case, you can analyze debugging runs even if you haven't yet configured the SDK to send telemetry to the Azure portal. --## <a name="run"></a> Debug your project -Run your web app in local debug mode by using F5. Open different pages to generate some telemetry. --In Visual Studio, you see a count of the events that were logged by the Application Insights module in your project. -- --Select the **Application Insights** button to search your telemetry. --## Application Insights Search -The **Application Insights Search** window shows logged events. If you signed in to Azure when you set up Application Insights, you can search the same events in the Azure portal. Right-click the project and select **Application Insights** > **Search**. -- --> [!NOTE] -> After you select or clear filters, select **Search** at the end of the text search field. -> -The free text search works on any fields in the events. For example, you can search for part of the URL of a page. You can also search for the value of a property, such as a client's city, or specific words in a trace log. --Select any event to see its detailed properties. --For requests to your web app, you can click through to the code. --. --You can also open related items to help diagnose failed requests or exceptions. -- --## View exceptions and failed requests -Exception reports show in the **Search** window. In some older types of ASP.NET application, you have to [set up exception monitoring](./asp-net-exceptions.md) to see exceptions that are handled by the framework. --Select an exception to get a stack trace. If the code of the app is open in Visual Studio, you can click through from the stack trace to the relevant line of the code. -- --## View request and exception summaries in the code -In the CodeLens line above each handler method, you see a count of the requests and exceptions logged by Application Insights in the past 24 hours. -- --> [!NOTE] -> CodeLens shows Application Insights data only if you've [configured your app to send telemetry to the Application Insights portal](./asp-net.md). 
-> --For more information, see [Application Insights telemetry in Visual Studio CodeLens.](./visual-studio-codelens.md) --## Local monitoring -From Visual Studio 2015 Update 2: If you haven't configured the SDK to send telemetry to the Application Insights portal so that there's no instrumentation key in ApplicationInsights.config, the diagnostics window displays telemetry from your latest debugging session. --This is desirable if you've already published a previous version of your app. You don't want the telemetry from your debugging sessions to be mixed up with the telemetry on the Application Insights portal from the published app. --It's also useful if you have some [custom telemetry](./api-custom-events-metrics.md) that you want to debug before you send telemetry to the portal. --For example, at first you might have fully configured Application Insights to send telemetry to the portal. But now you want to see the telemetry only in Visual Studio: -- * In the **Search** window's settings, there's an option to search local diagnostics even if your app sends telemetry to the portal. - * To stop telemetry being sent to the portal, comment out the line `<instrumentationkey>...` from ApplicationInsights.config. When you're ready to send telemetry to the portal again, uncomment it. ---## Next steps -- [Work with the Application Insights portal](./overview-dashboard.md) where you can view dashboards, use powerful diagnostic and analytic tools, get alerts, see a live dependency map of your application, and view exported telemetry data. |
azure-monitor | Container Insights Enable Arc Enabled Clusters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-enable-arc-enabled-clusters.md | +- [Contributor](../../role-based-access-control/built-in-roles.md#contributor) role assignment on the Azure subscription containing the Azure Arc-enabled Kubernetes resource. If the Log Analytics workspace is in a different subscription, then [Log Analytics Contributor](../logs/manage-access.md#azure-rbac) role assignment is needed on the resource group containing the Log Analytics Workspace - To view the monitoring data, you need to have [Log Analytics Reader](../logs/manage-access.md#azure-rbac) role assignment on the Log Analytics workspace. - The following endpoints need to be enabled for outbound access in addition to the ones mentioned under [connecting a Kubernetes cluster to Azure Arc](../../azure-arc/kubernetes/quickstart-connect-cluster.md#meet-network-requirements). |
azure-monitor | Container Insights Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-troubleshoot.md | The following table summarizes known errors you might encounter when you use Con | - | | | Error message "No data for selected filters" | It might take some time to establish monitoring data flow for newly created clusters. Allow at least 10 to 15 minutes for data to appear for your cluster.<br><br>If data still doesn't show up, check if the Log Analytics workspace is configured for `disableLocalAuth = true`. If yes, update back to `disableLocalAuth = false`.<br><br>`az resource show --ids "/subscriptions/[Your subscription ID]/resourcegroups/[Your resource group]/providers/microsoft.operationalinsights/workspaces/[Your workspace name]"`<br><br>`az resource update --ids "/subscriptions/[Your subscription ID]/resourcegroups/[Your resource group]/providers/microsoft.operationalinsights/workspaces/[Your workspace name]" --api-version "2021-06-01" --set properties.features.disableLocalAuth=False` | | Error message "Error retrieving data" | While an AKS cluster is setting up for health and performance monitoring, a connection is established between the cluster and a Log Analytics workspace. A Log Analytics workspace is used to store all monitoring data for your cluster. This error might occur when your Log Analytics workspace has been deleted. Check if the workspace was deleted. If it was, reenable monitoring of your cluster with Container insights. Then specify an existing workspace or create a new one. To reenable, [disable](container-insights-optout.md) monitoring for the cluster and [enable](container-insights-enable-new-cluster.md) Container insights again. |-| "Error retrieving data" after adding Container insights through `az aks cli` | When you enable monitoring by using `az aks cli`, Container insights might not be properly deployed. Check whether the solution is deployed. To verify, go to your Log Analytics workspace and see if the solution is available by selecting **Solutions** from the pane on the left side. To resolve this issue, redeploy the solution. Follow the instructions in [Enable Container insights](container-insights-onboard.md). | +| "Error retrieving data" after adding Container insights through `az aks cli` | When you enable monitoring by using `az aks cli`, Container insights might not be properly deployed. Check whether the solution is deployed. To verify, go to your Log Analytics workspace and see if the solution is available by selecting **Legacy solutions** from the pane on the left side. To resolve this issue, redeploy the solution. Follow the instructions in [Enable Container insights](container-insights-onboard.md). | To help diagnose the problem, we've provided a [troubleshooting script](https://github.com/microsoft/Docker-Provider/tree/ci_dev/scripts/troubleshoot). |
azure-monitor | Data Platform Metrics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-platform-metrics.md | The following table shows sample data from a multidimensional metric, network th | 8/9/2017 8:15 | IP="10.24.2.15" | Direction="Send" | 155.0 Kbps | | 8/9/2017 8:15 | IP="10.24.2.15" | Direction="Receive" | 100.1 Kbps | +> [!NOTE] +> Dimension names and dimension values are case-insensitive. + ## Retention of metrics |
azure-monitor | Azure Sql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/azure-sql.md | The above page also provides instructions on enabling support for monitoring mul ## Use Azure SQL Analytics (preview) -Navigate to your SQL Analytics deployment from the **Solutions** page of the Log Analytics workspace. +Navigate to your SQL Analytics deployment from the **Legacy solutions** page of the Log Analytics workspace. Azure SQL Analytics provides two separate views: one for monitoring SQL Database, and the other view for monitoring SQL Managed Instance. |
azure-monitor | Dns Analytics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/dns-analytics.md | The solution starts collecting data without the need of further configuration. H ### Configure the solution -From the Log Analytics workspace in the Azure portal, select **Workspace summary**. Then select the **DNS Analytics** tile. On the solution dashboard, select **Configuration** to open the **DNS Analytics Configuration** page. There are two types of configuration changes that you can make: +From the Log Analytics workspace in the Azure portal, select **Workspace summary (deprecated)**. Then select the **DNS Analytics** tile. On the solution dashboard, select **Configuration** to open the **DNS Analytics Configuration** page. There are two types of configuration changes that you can make: - **Allowlisted Domain Names**: The solution doesn't process all the lookup queries. It maintains an allowlist of domain name suffixes. The lookup queries that resolve to the domain names that match domain name suffixes in this allowlist aren't processed by the solution. Not processing allowlisted domain names helps to optimize the data sent to Azure Monitor. The default allowlist includes popular public domain names, such as www.google.com and www.facebook.com. You can view the complete default list by scrolling. |
azure-monitor | Scom Assessment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/scom-assessment.md | View the summarized compliance assessments for your infrastructure and then dril ### To view recommendations for a focus area and take corrective action 1. Log in to the Azure portal at [https://portal.azure.com](https://portal.azure.com). 2. In the Azure portal, click **More services** found on the lower left-hand corner. In the list of resources, type **Log Analytics**. As you begin typing, the list filters based on your input. Select **Log Analytics**.-3. In the Log Analytics subscriptions pane, select a workspace and then click the **Workspace summary** menu item. +3. In the Log Analytics subscriptions pane, select a workspace and then click the **Workspace summary (deprecated)** menu item. 4. On the **Overview** page, click the **System Center Operations Manager Health Check** tile. 5. On the **System Center Operations Manager Health Check** page, review the summary information in one of the focus area sections and then click one to view recommendations for that focus area. 6. On any of the focus area pages, you can view the prioritized recommendations made for your environment. Click a recommendation under **Affected Objects** to view details about why the recommendation is made.<br><br> <br> |
azure-monitor | Solution Targeting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/solution-targeting.md | After you have the computer group created in your workspace, you'll include it i To create a scope configuration: 1. In the Azure portal, go to **Log Analytics workspaces** and select your workspace.- 1. In the properties for the workspace under **Workspace Data Sources**, select **Scope Configurations**. + 1. In the properties for the workspace under **Classic**, select **Scope configurations (deprecated)**. 1. Select **Add** to create a new scope configuration. 1. Enter a name for the scope configuration. 1. Click **Select Computer Groups**. After you have a scope configuration, you can apply it to one or more solutions. To apply a scope configuration: 1. In the Azure portal, go to **Log Analytics workspaces** and select your workspace.- 1. In the properties for the workspace, select **Solutions**. + 1. In the properties for the workspace, select **Legacy solutions**. 1. Select the solution you want to scope. 1. In the properties for the solution under **Workspace Data Sources**, select **Solution Targeting**. If the option isn't available, [this solution can't be targeted](#solutions-and-agents-that-cant-be-targeted). 1. Select **Add scope configuration**. If you already have a configuration applied to this solution, this option is unavailable. You must remove the existing configuration before you add another one. |
azure-monitor | Solutions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/solutions.md | You can add monitoring solutions to Azure Monitor for any applications and servi ## Use monitoring solutions -The **Overview** page displays a tile for each solution installed in a Log Analytics workspace. To open this page, go to **Log Analytics workspaces** in the [Azure portal](https://portal.azure.com) and select your workspace. In the **General** section of the menu, select **Workspace Summary**. +The **Overview** page displays a tile for each solution installed in a Log Analytics workspace. To open this page, go to **Log Analytics workspaces** in the [Azure portal](https://portal.azure.com) and select your workspace. In the **Classic** section of the menu, select **Workspace Summary (deprecated)**. :::image type="content" source="media/solutions/insights-hub.png" lightbox="media/solutions/insights-hub.png" alt-text="Screenshot that shows selections for opening Insights Hub."::: |
azure-monitor | Computer Groups | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/computer-groups.md | When you configure Azure Monitor to import Active Directory group memberships, i > [!NOTE] > Imported Active Directory groups only contain Windows machines. -You configure Azure Monitor to import Active Directory security groups from the **Computer Groups** menu item in your Log Analytics workspace in the Azure portal. Select the **Active Directory** tab, and then **Import Active Directory group memberships from computers**. When groups have been imported, the menu lists the number of computers with group membership detected and the number of groups imported. You can click on either of these links to return the **ComputerGroup** records with this information. +You configure Azure Monitor to import Active Directory security groups from the **Legacy computer groups** menu item in your Log Analytics workspace in the Azure portal. Select the **Active Directory** tab, and then **Import Active Directory group memberships from computers**. When groups have been imported, the menu lists the number of computers with group membership detected and the number of groups imported. You can click on either of these links to return the **ComputerGroup** records with this information. ### Windows Server Update Service When you configure Azure Monitor to import WSUS group memberships, it analyzes the targeting group membership of any computers with the Log Analytics agent. If you are using client-side targeting, any computer that is connected to Azure Monitor and is part of any WSUS targeting groups has its group membership imported to Azure Monitor. If you are using server-side targeting, the Log Analytics agent should be installed on the WSUS server in order for the group membership information to be imported to Azure Monitor. This membership is continuously updated every 4 hours. -You configure Azure Monitor to import WSUS groups from the **Computer Groups** menu item in your Log Analytics workspace in the Azure portal. Select the **Windows Server Update Service** tab, and then **Import WSUS group memberships**. When groups have been imported, the menu lists the number of computers with group membership detected and the number of groups imported. You can click on either of these links to return the **ComputerGroup** records with this information. +You configure Azure Monitor to import WSUS groups from the **Legacy computer groups** menu item in your Log Analytics workspace in the Azure portal. Select the **Windows Server Update Service** tab, and then **Import WSUS group memberships**. When groups have been imported, the menu lists the number of computers with group membership detected and the number of groups imported. You can click on either of these links to return the **ComputerGroup** records with this information. ### Configuration Manager When you configure Azure Monitor to import Configuration Manager collection memberships, it creates a computer group for each collection. The collection membership information is retrieved every 3 hours to keep the computer groups current. Before you can import Configuration Manager collections, you must [connect Configuration Manager to Azure Monitor](collect-sccm.md). -You configure Azure Monitor to import WSUS groups from the **Computer Groups** menu item in your Log Analytics workspace in the Azure portal. 
Select the **System Center Configuration Manager** tab, and then **Import Configuration Manager collection memberships**. When collections have been imported, the menu lists the number of computers with group membership detected and the number of groups imported. You can click on either of these links to return the **ComputerGroup** records with this information. +You configure Azure Monitor to import WSUS groups from the **Legacy computer groups** menu item in your Log Analytics workspace in the Azure portal. Select the **System Center Configuration Manager** tab, and then **Import Configuration Manager collection memberships**. When collections have been imported, the menu lists the number of computers with group membership detected and the number of groups imported. You can click on either of these links to return the **ComputerGroup** records with this information. ## Managing computer groups-You can view computer groups that were created from a log query or the Log Search API from the **Computer Groups** menu item in your Log Analytics workspace in the Azure portal. Select the **Saved Groups** tab to view the list of groups. +You can view computer groups that were created from a log query or the Log Search API from the **Legacy computer groups** menu item in your Log Analytics workspace in the Azure portal. Select the **Saved Groups** tab to view the list of groups. Click the **x** in the **Remove** column to delete the computer group. Click the **View members** icon for a group to run the group's log search that returns its members. You can't modify a computer group but instead must delete and then recreate it with the modified settings. |
azure-monitor | Daily Cap | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/daily-cap.md | To create an alert when the daily cap is reached, create an [Activity log alert ## View the effect of the daily cap-The following query can be used to track the data volumes that are subject to the daily cap for a Log Analytics workspace between daily cap resets. This accounts for the security data types that aren't included in the daily cap. In this example, the workspace's reset hour is 14:00. Change this value this for your workspace. +The following query can be used to track the data volumes that are subject to the daily cap for a Log Analytics workspace between daily cap resets. This accounts for the security data types that aren't included in the daily cap. In this example, the workspace's reset hour is 14:00. Change this value for your workspace. ```kusto let DailyCapResetHour=14; |
azure-monitor | Data Collector Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-collector-api.md | In this section are samples that demonstrate how to submit data to the Azure Mon For each sample, set the variables for the authorization header by doing the following: 1. In the Azure portal, locate your Log Analytics workspace.-2. Select **Agents management**. +2. Select **Agents**. 2. To the right of **Workspace ID**, select the **Copy** icon, and then paste the ID as the value of the **Customer ID** variable. 3. To the right of **Primary Key**, select the **Copy** icon, and then paste the ID as the value of the **Shared Key** variable. |
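If you prefer the command line to the portal for these two values, a minimal Azure CLI sketch (the resource group and workspace names are placeholders):

```azurecli
# Customer ID (the workspace ID)
az monitor log-analytics workspace show \
  --resource-group <resource-group> \
  --workspace-name <workspace-name> \
  --query customerId --output tsv

# Shared key (the primary key)
az monitor log-analytics workspace get-shared-keys \
  --resource-group <resource-group> \
  --workspace-name <workspace-name> \
  --query primarySharedKey --output tsv
```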
azure-monitor | Monitor Workspace | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/monitor-workspace.md | In some situations, like moving a subscription to a different tenant, the Azure Recommended actions: -* If the subscription mentioned in the warning message no longer exists, go to the **Azure Activity log** pane under **Workspace Data Sources**. Select the relevant subscription, and then select the **Disconnect** button. +* If the subscription mentioned in the warning message no longer exists, go to the **Legacy activity log connector** pane under **Classic**. Select the relevant subscription, and then select the **Disconnect** button. * If you no longer have access to the subscription mentioned in the warning message: * Follow the preceding step to disconnect the subscription. * To continue collecting logs from this subscription, contact the subscription owner to fix the permissions and re-enable activity log collection. |
azure-monitor | Move Workspace Region | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/move-workspace-region.md | A workspace environment can be complex and include connected sources, managed so | sort by ResourceProvider, ResourceType ``` - - *Installed solutions*: Select **Solutions** on the workspace navigation pane for a list of installed solutions. + - *Installed solutions*: Select **Legacy solutions** on the workspace navigation pane for a list of installed solutions. - *Data collector API*: Data arriving through a [Data Collector API](../logs/data-collector-api.md) is stored in custom log tables. For a list of custom log tables, select **Logs** on the workspace navigation pane, and then select **Custom log** on the schema pane. - *Linked services*: Workspaces might have linked services to dependent resources such as an Azure Automation account, a storage account, or a dedicated cluster. Remove linked services from your workspace. Reconfigure them manually in the target workspace. - *Alerts*: To list alerts, select **Alerts** on your workspace navigation pane, and then select **Manage alert rules** on the toolbar. Alerts in workspaces created after June 1, 2019, or in workspaces that were [upgraded from the Log Analytics Alert API to the scheduledQueryRules API](../alerts/alerts-log-api-switch.md) can be included in the template. |
azure-monitor | Private Link Design | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/private-link-design.md | Restricting access as previously explained applies to data in the resource. Howe > * Change Tracking solution > * VM Insights > * Container Insights-> * Log Analytics **Workspace Summary** pane (that shows the solutions dashboard) +> * Log Analytics **Workspace Summary (deprecated)** pane (that shows the solutions dashboard) ## Application Insights considerations * You'll need to add resources hosting the monitored workloads to a private link. For example, see [Using private endpoints for Azure Web App](../../app-service/networking/private-endpoint.md). We've identified the following products and experiences query workspaces through > * LogicApp connector > * Update Management solution > * Change Tracking solution-> * The **Workspace Summary** pane in the portal (that shows the solutions dashboard) +> * The **Workspace Summary (deprecated)** pane in the portal (that shows the solutions dashboard) > * VM Insights > * Container Insights |
azure-monitor | Tutorial Logs Ingestion Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-logs-ingestion-portal.md | The following PowerShell script generates sample data to configure the custom ta ## Add a custom log table Before you can send data to the workspace, you need to create the custom table where the data will be sent. -1. Go to the **Log Analytics workspaces** menu in the Azure portal and select **Tables (preview)**. The tables in the workspace will appear. Select **Create** > **New custom log (DCR based)**. +1. Go to the **Log Analytics workspaces** menu in the Azure portal and select **Tables**. The tables in the workspace will appear. Select **Create** > **New custom log (DCR based)**. :::image type="content" source="media/tutorial-logs-ingestion-portal/new-custom-log.png" lightbox="media/tutorial-logs-ingestion-portal/new-custom-log.png" alt-text="Screenshot that shows the new DCR-based custom log."::: |
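As a hedged alternative to the portal steps, the custom table can also be created with the Azure CLI, assuming the `az monitor log-analytics workspace table create` command is available in your CLI version; the table and column names here are placeholders:

```azurecli
# DCR-based custom table names must end with _CL; a TimeGenerated column is required
az monitor log-analytics workspace table create \
  --resource-group <resource-group> \
  --workspace-name <workspace-name> \
  --name MyTable_CL \
  --columns TimeGenerated=datetime RawData=string
```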
azure-monitor | Workbooks Jsonpath | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-jsonpath.md | In this example, the JSON object represents a store's inventory. We're going to 1. Select **Run Query**. -  + :::image type="content" source="media/workbooks-jsonpath/query-jsonpath.png" alt-text="Screenshot that shows editing a query item with JSON data source and JSON path result format."::: ## Use regular expressions to convert values To convert YYYYMMDD format into YYYY-MM-DD format: 1. In the **Regex Match** field, use this regular expression: `([0-9]{4})([0-9]{2})([0-9]{2})`. This regular expression: - matches a four digit number, then a two digit number, then another two digit number. - The parentheses form capture groups to use in the next step.- 1.In the **Replace With**, use this regular expression: `$1-$2-$3. This expression creates a new string with each captured group, with a hyphen between them, turning "12345678" into "1234-56-78"). +1. In the **Replace With**, use this regular expression: `$1-$2-$3`. This expression creates a new string with each captured group, with a hyphen between them, turning "12345678" into "1234-56-78". 1. Run the query again. -  + :::image type="content" source="media/workbooks-jsonpath/workbooks-jsonpath-convert-date-time.png" alt-text="Screenshot that shows JSONpath converted to date-time format."::: ## Next steps - [Workbooks overview](./workbooks-overview.md) |
azure-monitor | Service Map | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/service-map.md | Sign in to the [Azure portal](https://portal.azure.com). 1. Enable the Service Map solution from [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/Microsoft.ServiceMapOMS?tab=Overview). Or use the process described in [Add monitoring solutions from the Solutions Gallery](../insights/solutions.md). 1. [Install the Dependency agent on Windows](../vm/vminsights-enable-hybrid.md#install-the-dependency-agent-on-windows) or [install the Dependency agent on Linux](../vm/vminsights-enable-hybrid.md#install-the-dependency-agent-on-linux) on each computer where you want to get data. The Dependency agent can monitor connections to immediate neighbors, so you might not need an agent on every computer. -1. Access Service Map in the Azure portal from your Log Analytics workspace. Select the **Solutions** option from the left pane. +1. Access Service Map in the Azure portal from your Log Analytics workspace. Select the **Legacy solutions** option from the left pane. . 1. From the list of solutions, select **ServiceMap(workspaceName)**. On the **Service Map** solution overview page, select the **Service Map** summary tile. |
azure-monitor | Vminsights Migrate From Service Map | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-migrate-from-service-map.md | Once you migrate to VM insights, remove the Service Map solution from the worksp 1. In the search bar, type *Log Analytics workspaces*. As you begin typing, the list filters suggestions based on your input. 1. Select **Log Analytics workspaces**. 1. From your list of Log Analytics workspaces, select the workspace you chose when you enabled Service Map.-1. On the left, select **Solutions**. +1. On the left, select **Legacy solutions**. 1. From the list of solutions, select **ServiceMap(workspace name)**. 1. On the **Overview** page for the solution, select **Delete**. 1. When prompted to confirm, select **Yes**. |
azure-monitor | Vminsights Optout | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-optout.md | If you still need the Log Analytics workspace, follow these steps to completely 1. Sign in to the [Azure portal](https://portal.azure.com). 2. In the Azure portal, select **All services**. In the list of resources, type **Log Analytics**. As you begin typing, the list filters suggestions based on your input. Select **Log Analytics**. 3. In your list of Log Analytics workspaces, select the workspace you chose when you enabled VM insights.-4. On the left, select **Solutions**. +4. On the left, select **Legacy solutions**. 5. In the list of solutions, select **VMInsights(workspace name)**. On the **Overview** page for the solution, select **Delete**. When prompted to confirm, select **Yes**. ## Disable monitoring and keep the workspace |
azure-netapp-files | Configure Customer Managed Keys | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-customer-managed-keys.md | Customer-managed keys in Azure NetApp Files volume encryption enable you to use ## Considerations > [!IMPORTANT]-> Customer-managed keys for Azure NetApp Files volume encryption is currently in preview. You need to submit a waitlist request for accessing the feature through the **[Customer-managed keys for Azure NetApp Files volume encryption](https://aka.ms/anfcmkpreviewsignup)** page. Wait for an official confirmation email from the Azure NetApp Files team before using customer-managed keys. +> Customer-managed keys for Azure NetApp Files volume encryption is currently in preview. You need to submit a waitlist request to access the feature through the **[Customer-managed keys for Azure NetApp Files volume encryption](https://aka.ms/anfcmkpreviewsignup)** page. The customer-managed keys feature is expected to be enabled within a week of submitting the waitlist request. * Customer-managed keys can only be configured on new volumes. You can't migrate existing volumes to customer-managed key encryption. * To create a volume using customer-managed keys, you must select the *Standard* network features. You can't use customer-managed key volumes with volumes configured using Basic network features. Follow the instructions in [Set the Network Features option](configure-network-features.md#set-the-network-features-option) on the volume creation page. Customer-managed keys in Azure NetApp Files volume encryption enable you to use * If the account isn't eligible for MSI certificate renewal, an error will communicate the date and time when the account is eligible. It's recommended you run this operation periodically (for example, daily) to prevent the certificate from expiring and from the customer-managed key volume going offline. -<!-- - * You will need to call the operation via ARM REST API. Submit a POST request to `/subscriptions/<16 digit subscription ID>/resourceGroups/<resource_group_name>/providers/Microsoft.NetApp/netAppAccounts/<account name>/renewCredentials?api-version=2022-04`. - This operation is available with the Azure CLI, PowerShell, and SDK beginning with the `2022-05` versions. - * If the certificate is more than 46 days old, you can call proxy Azure Resource Manager (ARM) operation via REST API to renew the certificate. For example: - ```rest - /{accountResourceId}/renewCredentials?api-version=2022-01 - example /subscriptions/<16 digit subscription ID>/resourceGroups/<resource group name>/providers/Microsoft.NetApp/netAppAccounts/<account name>/renewCredentials?api-version=2022-01 - ``` --> - * Applying Azure network security groups on the private link subnet to Azure Key Vault isn't supported for Azure NetApp Files customer-managed keys. Network security groups don't affect connectivity to Private Link unless `Private endpoint network policy` is enabled on the subnet. It's recommended to keep this option disabled. * If Azure NetApp Files fails to create a customer-managed key volume, error messages are displayed. Refer to the [Error messages and troubleshooting](#error-messages-and-troubleshooting) section for more information. +* Currently, customer-managed keys can't be configured while creating data replication volumes to establish an Azure NetApp Files cross-region replication or cross-zone replication relationship. 
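Later in this article, the portal flow creates an access policy on the Key Vault with the Get, Encrypt, and Decrypt key permissions for the account's managed identity. A hedged Azure CLI sketch of granting the same permissions yourself (the vault name and principal ID are placeholders):

```azurecli
# Grant the NetApp account's managed identity the key permissions the article describes
az keyvault set-policy \
  --name <key-vault-name> \
  --object-id <netapp-account-identity-principal-id> \
  --key-permissions get encrypt decrypt
```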
## Supported regions For more information about Azure Key Vault and Azure Private Endpoint, refer to: 1. Select the identity type that you want to use for authentication to the Azure Key Vault. If your Azure Key Vault is configured to use Vault access policy as its permission model, then both options are available. Otherwise, only the user-assigned option is available. * If you choose **System-assigned**, select the **Save** button. The Azure portal configures the NetApp account automatically with the following process: A system-assigned identity is added to your NetApp account. An access policy is to be created on your Azure Key Vault with key permissions Get, Encrypt, Decrypt.++ :::image type="content" source="../media/azure-netapp-files/encryption-system-assigned.png" alt-text="Screenshot of the encryption menu with system-assigned options." lightbox="../media/azure-netapp-files/encryption-system-assigned.png"::: + * If you choose **User-assigned**, you must select an identity to use. Choosing **Select an identity** opens a context pane prompting you to select a user-assigned managed identity. :::image type="content" source="../media/azure-netapp-files/encryption-user-assigned.png" alt-text="Screenshot of user-assigned submenu." lightbox="../media/azure-netapp-files/encryption-user-assigned.png"::: You can use an Azure Key Vault that is configured to use Azure role-based access 1. `Microsoft.KeyVault/vaults/keys/encrypt/action` 1. `Microsoft.KeyVault/vaults/keys/decrypt/action` - Although there are pre-defined roles with these privileges, it is recommended that you create a custom role with the required permissions. See [Azure custom roles](../role-based-access-control/custom-roles.md) for details. + Although there are pre-defined roles with these permissions, they grant more privileges than are required. For the minimum level of privileges, you should create a custom role with only the required permissions. For details, see [Azure custom roles](../role-based-access-control/custom-roles.md). ```json { |
azure-netapp-files | Cross Zone Replication Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cross-zone-replication-introduction.md | The preview of cross-zone replication is available in the following regions: * South Central US * Sweden Central * Switzerland North+* UAE North * UK South * US Gov Virginia * West Europe |
azure-percept | Retirement Of Azure Percept Dk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/retirement-of-azure-percept-dk.md | -**Update November 9, 2022**: A firmware update that enables the Vision SoM and Audio SOM to retain their functionality with the DK beyond the retirement date, will be made available before the retirement date. +**Update February 22, 2023**: A firmware update for the Percept DK Vision and Audio accessory components (also known as Vision and Audio SOM) is now available [here](https://aka.ms/audio_vision_som_update), and will enable the accessory components to continue functioning beyond the retirement date. -The [Azure Percept](https://azure.microsoft.com/products/azure-percept/) public preview will be evolving to support new edge device platforms and developer experiences. As part of this evolution the Azure Percept DK and Audio Accessory and associated supporting Azure services for the Percept DK will be retired March 30, 2023. +The Azure Percept preview including the Percept DK, Audio Accessory, and associated supporting Azure services will be retired March 30th, 2023. ## How does this change affect me? - After March 30, 2023, the Azure Percept DK and Audio Accessory will no longer be supported by any Azure services including Azure Percept Studio, OS updates, containers updates, view web stream, and Custom Vision integration. - Microsoft will no longer provide customer success support for the Azure Percept DK and Audio Accessory and any associated supporting services for the Percept DK.-- Existing Custom Vision and Custom Speech projects created using Percept Studio for the Percept DK will not be deleted and billing if applicable will continue. You can no longer modify or use your project with Percept Studio. +- Existing Custom Vision and Custom Speech projects created using Percept Studio for the Percept DK will not be deleted and billing if applicable will continue for any backend services utilized after the retirement date. You can no longer modify or use your project with Percept Studio. ## Recommended action If you have questions regarding Azure Percept DK, please refer to the below **FA | What is changing? | Azure Percept DK and Audio Accessory will no longer be supported by any Azure services including Azure Percept Studio and Updates. | | When is this change occurring? | On March 30, 2023. Until this date your DK and Studio will function as-is and updates and customer support will be offered. After this date, all updates and customer support will stop. | | Will my projects be deleted? | Your projects remain in the underlying Azure Services they were created in (example: Custom Vision, Speech Studio, etc.). They won't be deleted due to this retirement. You can no longer modify or use your project with Percept Studio. | -| Do I need to do anything before March 30, 2023? | Yes, you will need to close the resources and projects associated with the Azure Percept Studio and DK to avoid future billing, as these backend resources and projects will continue to bill after retirement. | +| Do I need to do anything before March 30, 2023? | Yes, you will need to close the resources and projects associated with the Azure Percept Studio and DK to avoid future billing, as these backend resources and projects will continue to bill after retirement. 
For the SoMs to continue functioning, you will need to apply the firmware update, now available [here](https://aka.ms/audio_vision_som_update), that enables the Vision SoM and Audio SoM to retain their functionality. | |
azure-resource-manager | Bicep Config | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-config.md | Title: Bicep config file description: Describes the configuration file for your Bicep deployments Previously updated : 02/18/2023 Last updated : 02/21/2023 # Configure your Bicep environment The [Bicep linter](linter.md) checks Bicep files for syntax errors and best prac ## Enable experimental features -The following sample enables the [user-defined types in Bicep](https://aka.ms/bicepCustomTypes). +You can enable preview features by adding: ```json { "experimentalFeaturesEnabled": {- "userDefineTypes": true + "userDefineTypes": true, + "extensibility": true } } ``` -The available experimental features include: +The preceding sample enables `userDefineTypes` and `extensibility`. The available experimental features include: -- **extensibility**: Allows Bicep to use a provider model to deploy non-ARM resources. Currently, we only support a Kubernetes provider.+- **extensibility**: Allows Bicep to use a provider model to deploy non-ARM resources. Currently, we only support a Kubernetes provider. See [Bicep extensibility Kubernetes provider](./bicep-extensibility-kubernetes-provider.md). - **paramsFiles**: Allows for the use of a Bicep-style parameters file with a terser syntax than the JSON equivalent parameters file. Currently, you also need a special build of Bicep to enable this feature, so it is inaccessible to most users. See [Parameters - first release](https://github.com/Azure/bicep/issues/9567). - **sourceMapping**: Enables basic source mapping to map an error location returned in the ARM template layer back to the relevant location in the Bicep file. - **resourceTypedParamsAndOutputs**: Enables the type for a parameter or output to be of type resource to make it easier to pass resource references between modules. This feature is only partially implemented. See [Simplifying resource referencing](https://github.com/azure/bicep/issues/2245). |
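A small sketch of how the configuration is picked up, assuming `main.bicep` and its `bicepconfig.json` sit in the same folder (file names are placeholders):

```azurecli
# bicepconfig.json applies to Bicep files in the same directory tree;
# compiling a file confirms the experimental features are recognized
az bicep build --file main.bicep
```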
azure-resource-manager | Bicep Extensibility Kubernetes Provider | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-extensibility-kubernetes-provider.md | + + Title: Bicep extensibility Kubernetes provider +description: Learn how to use the Bicep Kubernetes provider to deploy .NET applications to Azure Kubernetes Service clusters. + Last updated : 02/21/2023+++# Bicep extensibility Kubernetes provider (Preview) ++The Kubernetes provider allows you to create Kubernetes resources directly with Bicep. Bicep can deploy anything that can be deployed with the [Kubernetes command-line client (kubectl)](https://kubernetes.io/docs/reference/kubectl/kubectl/) and a [Kubernetes manifest file](../../aks/concepts-clusters-workloads.md#deployments-and-yaml-manifests). ++## Enable the preview feature ++This preview feature can be enabled by configuring the [bicepconfig.json](./bicep-config.md): ++```json +{ + "experimentalFeaturesEnabled": { + "extensibility": true + } +} +``` ++## Import Kubernetes provider ++To safely pass secrets for the Kubernetes deployment, you must invoke the Kubernetes code with a Bicep module and pass the parameter as a secret. +To import the Kubernetes provider, use the [import statement](./bicep-import-providers.md). After importing the provider, you can refactor the Bicep module file as usual, such as by using variables, parameters, and output. By contrast, the Kubernetes manifest in YAML doesn't include any programmability support. ++The following sample imports the Kubernetes provider: ++```bicep +@secure() +param kubeConfig string ++import 'kubernetes@1.0.0' with { + namespace: 'default' + kubeConfig: kubeConfig +} +``` ++- **namespace**: Specify the namespace of the provider. +- **KubeConfig**: Specify a base64 encoded value of the [Kubernetes cluster admin credentials](/rest/api/aks/managed-clusters/list-cluster-admin-credentials). ++The following sample shows how to pass the `kubeConfig` value from a parent Bicep file: ++```bicep +resource aks 'Microsoft.ContainerService/managedClusters@2022-05-02-preview' existing = { + name: 'demoAKSCluster' +} ++module kubernetes './kubernetes.bicep' = { + name: 'buildbicep-deploy' + params: { + kubeConfig: aks.listClusterAdminCredential().kubeconfigs[0].value + } +} +``` ++The AKS cluster can be a new resource or an existing resource. The `Import Kubernetes manifest` command from Visual Studio Code can automatically add the import snippet. For details, see [Import Kubernetes manifest command](./visual-studio-code.md#bicep-commands). ++## Visual Studio Code import ++From Visual Studio Code, you can import Kubernetes manifest files to create Bicep module files. For more information, see [Visual Studio Code](./visual-studio-code.md#bicep-commands). ++## Next steps ++- [Quickstart - Deploy Azure applications to Azure Kubernetes Services by using Bicep extensibility Kubernetes provider](../../aks/learn/quick-kubernetes-deploy-bicep-extensibility-kubernetes-provider.md) + |
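A minimal sketch of deploying a parent Bicep file that consumes a Kubernetes module like the one above (the resource group and file name are placeholders):

```azurecli
# Deploy the parent file; the module receives kubeConfig as a secure parameter,
# so its value isn't echoed in the deployment output
az deployment group create \
  --resource-group <resource-group> \
  --template-file main.bicep
```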
azure-resource-manager | Bicep Import Providers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-import-providers.md | + + Title: Import Bicep extensibility providers +description: Describes how to import Bicep extensibility providers. + Last updated : 02/21/2023+++# Import Bicep extensibility providers ++This article describes the syntax you use to import Bicep extensibility providers. ++## Import providers ++The syntax for importing providers is: ++```bicep +import '<provider-name>@<provider-version>' with { + <provider-properties> +} +``` ++## Kubernetes provider ++See [Bicep extensibility Kubernetes provider](./bicep-extensibility-kubernetes-provider.md). ++## Next steps ++- To learn about how to use the Kubernetes provider, see [Bicep extensibility Kubernetes provider](./bicep-extensibility-kubernetes-provider.md). +- To go through a Kubernetes provider tutorial, see [Quickstart - Deploy Azure applications to Azure Kubernetes Services by using Bicep Kubernetes provider.](../../aks/learn/quick-kubernetes-deploy-bicep-extensibility-kubernetes-provider.md). |
azure-resource-manager | Resource Declaration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/resource-declaration.md | Title: Declare resources in Bicep description: Describes how to declare resources to deploy in Bicep. Previously updated : 09/28/2022 Last updated : 02/21/2023 # Resource declaration in Bicep resource stg 'Microsoft.Storage/storageAccounts@2021-04-01' = { Symbolic names are case-sensitive. They may contain letters, numbers, and underscores (`_`). They can't start with a number. A resource can't have the same name as a parameter, variable, or module. -For the available resource types and version, see [Bicep resource reference](/azure/templates/). Bicep doesn't support `apiProfile`, which is available in [Azure Resource Manager templates (ARM templates) JSON](../templates/syntax.md). +For the available resource types and version, see [Bicep resource reference](/azure/templates/). Bicep doesn't support `apiProfile`, which is available in [Azure Resource Manager templates (ARM templates) JSON](../templates/syntax.md). You can also define Bicep extensibility provider resources. For more information, see [Bicep extensibility Kubernetes provider](./bicep-extensibility-kubernetes-provider.md). To conditionally deploy a resource, use the `if` syntax. For more information, see [Conditional deployment in Bicep](conditional-resource-deployment.md). |
azure-resource-manager | Visual Studio Code | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/visual-studio-code.md | Title: Create Bicep files by using Visual Studio Code description: Describes how to create Bicep files by using Visual Studio Code Previously updated : 12/06/2022 Last updated : 02/21/2023 # Create Bicep files by using Visual Studio Code These commands include: - [Decompile into Bicep](#decompile-into-bicep) - [Deploy Bicep File](#deploy-bicep-file) - [Generate Parameters File](#generate-parameters-file)+- [Import Kubernetes Manifest (preview)](#import-kubernetes-manifest-preview) - [Insert Resource](#insert-resource) - [Open Bicep Visualizer](#open-bicep-visualizer) - [Open Bicep Visualizer to the side](#open-bicep-visualizer) This command decompiles an ARM JSON template into a Bicep file, and places it in You can deploy Bicep files directly from Visual Studio Code. Select **Deploy Bicep file** from the command palette or from the context menu. The extension prompts you to sign in Azure, select subscription, create/select resource group, and enter parameter values. ### Generate parameters file This command creates a parameter file in the same folder as the Bicep file. The new parameter file name is `<bicep-file-name>.parameters.json`. +### Import Kubernetes manifest (Preview) ++This command imports a [Kubernetes manifest file](../../aks/concepts-clusters-workloads.md#deployments-and-yaml-manifests), and creates a [Bicep module](./modules.md). For more information, see [Bicep extensibility Kubernetes provider](./bicep-extensibility-kubernetes-provider.md), and [Quickstart: Deploy Azure applications to Azure Kubernetes Service (AKS) cluster using Bicep Kubernetes provider (Preview)](../../aks/learn/quick-kubernetes-deploy-bicep-extensibility-kubernetes-provider.md). + ### Insert resource The `insert resource` command adds a resource declaration in the Bicep file by providing the resource ID of an existing resource. After you select **Insert Resource**, enter the resource ID in the command palette. It takes a few moments to insert the resource. You can find the resource ID by using one of these methods: -- Use [Azure Resource extension for VSCode](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azureresourcegroups).+- Use [Azure Resource extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azureresourcegroups). :::image type="content" source="./media/visual-studio-code/visual-studio-code-azure-resources-extension.png" alt-text="Screenshot of Visual Studio Code Azure Resources extension."::: |
azure-resource-manager | Cloud Services Extended Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-limitations/cloud-services-extended-support.md | + + Title: Move Azure Cloud Services (extended support) deployment resources +description: Use Azure Resource Manager to move Cloud Services (extended support) deployment resources to a new resource group or subscription. + Last updated : 02/14/2023++++# Move guidance for Cloud Services (extended support) deployment model resources ++The steps to move resources deployed through the Cloud Services (extended support) model differ based on whether you're moving the resources within a subscription or to a new subscription. ++## Move in the same subscription ++When moving Cloud Services (extended support) resources from one resource group to another resource group within the same subscription, the following restrictions apply: ++- Cloud Service must not be in manual mode +- Cloud Service must not be VIP Swappable +- Cloud Service must not have any pending operations +- Cloud Service must not be in migration +- Cloud Service must not be in failed state +- Ensure the Cloud Service has an unexpired SAS blob URI pointing to the cloud service package ++> [!NOTE] +> Cloud Services and associated networking resources (for example, PublicIPs and network security groups) can be moved independently. Load balancers must always exist in the same resource group. ++To move classic resources to a new resource group within the same subscription, use the [standard move operations](../move-resource-group-and-subscription.md) through the portal, Azure PowerShell, Azure CLI, or REST API. You use the same operations as you use for moving Resource Manager resources. ++## Move across subscriptions ++When moving Cloud Services (extended support) deployments to a new subscription, the following restrictions apply: ++- When performing a cross-subscription move, all associated cloud service resources such as key vault and network resources must move together. +- If faced with a Move Resource operation error saying that the cloud service can't be moved because of a prior failed operation, create a ticket to resolve the issue. +- Cloud Service must not have any cross-subscription references. |
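For the Azure CLI path mentioned above, a minimal `az resource move` sketch; the resource IDs and group names are placeholders, and the associated resources that must move together should be included in the same call:

```azurecli
# Move within the same subscription
az resource move \
  --destination-group <target-resource-group> \
  --ids <cloud-service-resource-id> <public-ip-resource-id> <network-security-group-resource-id>

# Move across subscriptions: also specify the target subscription
az resource move \
  --destination-group <target-resource-group> \
  --destination-subscription-id <target-subscription-id> \
  --ids <cloud-service-resource-id> <key-vault-resource-id> <virtual-network-resource-id>
```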
azure-resource-manager | Template Expressions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-expressions.md | Title: Template syntax and expressions description: Describes the declarative JSON syntax for Azure Resource Manager templates (ARM templates). Previously updated : 02/09/2023 Last updated : 02/22/2023 To escape double quotes in an expression, such as adding a JSON object in the te }, ``` +To escape single quotes in an ARM expression output, double up the single quotes. The output of the following template will result in JSON value `{"abc":"'quoted'"}`. ++```json +{ + "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", + "contentVersion": "1.0.0.0", + "parameters": {}, + "resources": [], + "outputs": { + "foo": { + "type": "object", + "value": "[createObject('abc', '''quoted''')]" + } + } +} +``` + When passing in parameter values, the use of escape characters depends on where the parameter value is specified. If you set a default value in the template, you need the extra left bracket. ```json |
azure-signalr | Signalr Reference Data Plane Rest Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-reference-data-plane-rest-api.md | The following table shows all versions of REST API we have for now. You can also API Version | Status | Port | Doc | Spec ||||-`1.0` | Latest | Standard | [Doc](./swagger/signalr-data-plane-rest-v1.md) | [swagger](https://github.com/Azure/azure-signalr/blob/dev/docs/swagger/v1.json) +`20220601` | Latest | Standard | [Doc](./swagger/signalr-data-plane-rest-v20220601.md) | [swagger](https://github.com/Azure/azure-signalr/blob/dev/docs/swagger/V20220601.json) +`1.0` | Stable | Standard | [Doc](./swagger/signalr-data-plane-rest-v1.md) | [swagger](https://github.com/Azure/azure-signalr/blob/dev/docs/swagger/v1.json) `1.0-preview` | Obsolete | Standard | [Doc](./swagger/signalr-data-plane-rest-v1-preview.md) | [swagger](https://github.com/Azure/azure-signalr/blob/dev/docs/swagger/v1-preview.json) The latest available APIs are listed as following. | API | Path |-| - | - | -| [Broadcast a message to all clients connected to target hub.](./swagger/signalr-data-plane-rest-v1.md#broadcast-a-message-to-all-clients-connected-to-target-hub) | `POST /api/v1/hubs/{hub}` | -| [Broadcast a message to all clients belong to the target user.](./swagger/signalr-data-plane-rest-v1.md#broadcast-a-message-to-all-clients-belong-to-the-target-user) | `POST /api/v1/hubs/{hub}/users/{id}` | -| [Send message to the specific connection.](./swagger/signalr-data-plane-rest-v1.md#send-message-to-the-specific-connection) | `POST /api/v1/hubs/{hub}/connections/{connectionId}` | -| [Check if the connection with the given connectionId exists](./swagger/signalr-data-plane-rest-v1.md#check-if-the-connection-with-the-given-connectionid-exists) | `GET /api/v1/hubs/{hub}/connections/{connectionId}` | -| [Close the client connection](./swagger/signalr-data-plane-rest-v1.md#close-the-client-connection) | `DELETE /api/v1/hubs/{hub}/connections/{connectionId}` | -| [Broadcast a message to all clients within the target group.](./swagger/signalr-data-plane-rest-v1.md#broadcast-a-message-to-all-clients-within-the-target-group) | `POST /api/v1/hubs/{hub}/groups/{group}` | -| [Check if there are any client connections inside the given group](./swagger/signalr-data-plane-rest-v1.md#check-if-there-are-any-client-connections-inside-the-given-group) | `GET /api/v1/hubs/{hub}/groups/{group}` | -| [Check if there are any client connections connected for the given user](./swagger/signalr-data-plane-rest-v1.md#check-if-there-are-any-client-connections-connected-for-the-given-user) | `GET /api/v1/hubs/{hub}/users/{user}` | -| [Add a connection to the target group.](./swagger/signalr-data-plane-rest-v1.md#add-a-connection-to-the-target-group) | `PUT /api/v1/hubs/{hub}/groups/{group}/connections/{connectionId}` | -| [Remove a connection from the target group.](./swagger/signalr-data-plane-rest-v1.md#remove-a-connection-from-the-target-group) | `DELETE /api/v1/hubs/{hub}/groups/{group}/connections/{connectionId}` | -| [Check whether a user exists in the target group.](./swagger/signalr-data-plane-rest-v1.md#check-whether-a-user-exists-in-the-target-group) | `GET /api/v1/hubs/{hub}/groups/{group}/users/{user}` | -| [Add a user to the target group.](./swagger/signalr-data-plane-rest-v1.md#add-a-user-to-the-target-group) | `PUT /api/v1/hubs/{hub}/groups/{group}/users/{user}` | -| [Remove a user from the target 
group.](./swagger/signalr-data-plane-rest-v1.md#remove-a-user-from-the-target-group) | `DELETE /api/v1/hubs/{hub}/groups/{group}/users/{user}` | -| [Remove a user from all groups.](./swagger/signalr-data-plane-rest-v1.md#remove-a-user-from-all-groups) | `DELETE /api/v1/hubs/{hub}/users/{user}/groups` | +| - | - | +| [Get service health status.](./swagger/signalr-data-plane-rest-v20220601.md#head-get-service-health-status) | `HEAD /api/health` | +| [Close all of the connections in the hub.](./swagger/signalr-data-plane-rest-v20220601.md#post-close-all-of-the-connections-in-the-hub) | `POST /api/hubs/{hub}/:closeConnections` | +| [Broadcast a message to all clients connected to target hub.](./swagger/signalr-data-plane-rest-v20220601.md#post-broadcast-a-message-to-all-clients-connected-to-target-hub) | `POST /api/hubs/{hub}/:send` | +| [Check if the connection with the given connectionId exists](./swagger/signalr-data-plane-rest-v20220601.md#head-check-if-the-connection-with-the-given-connectionid-exists) | `HEAD /api/hubs/{hub}/connections/{connectionId}` | +| [Close the client connection](./swagger/signalr-data-plane-rest-v20220601.md#delete-close-the-client-connection) | `DELETE /api/hubs/{hub}/connections/{connectionId}` | +| [Send message to the specific connection.](./swagger/signalr-data-plane-rest-v20220601.md#post-send-message-to-the-specific-connection) | `POST /api/hubs/{hub}/connections/{connectionId}/:send` | +| [Check if there are any client connections inside the given group](./swagger/signalr-data-plane-rest-v20220601.md#head-check-if-there-are-any-client-connections-inside-the-given-group) | `HEAD /api/hubs/{hub}/groups/{group}` | +| [Close connections in the specific group.](./swagger/signalr-data-plane-rest-v20220601.md#post-close-connections-in-the-specific-group) | `POST /api/hubs/{hub}/groups/{group}/:closeConnections` | +| [Broadcast a message to all clients within the target group.](./swagger/signalr-data-plane-rest-v20220601.md#post-broadcast-a-message-to-all-clients-within-the-target-group) | `POST /api/hubs/{hub}/groups/{group}/:send` | +| [Add a connection to the target group.](./swagger/signalr-data-plane-rest-v20220601.md#put-add-a-connection-to-the-target-group) | `PUT /api/hubs/{hub}/groups/{group}/connections/{connectionId}` | +| [Remove a connection from the target group.](./swagger/signalr-data-plane-rest-v20220601.md#delete-remove-a-connection-from-the-target-group) | `DELETE /api/hubs/{hub}/groups/{group}/connections/{connectionId}` | +| [Remove a connection from all groups](./swagger/signalr-data-plane-rest-v20220601.md#delete-remove-a-connection-from-all-groups) | `DELETE /api/hubs/{hub}/connections/{connectionId}/groups` | +| [Check if there are any client connections connected for the given user](./swagger/signalr-data-plane-rest-v20220601.md#head-check-if-there-are-any-client-connections-connected-for-the-given-user) | `HEAD /api/hubs/{hub}/users/{user}` | +| [Close connections for the specific user.](./swagger/signalr-data-plane-rest-v20220601.md#post-close-connections-for-the-specific-user) | `POST /api/hubs/{hub}/users/{user}/:closeConnections` | +| [Broadcast a message to all clients belong to the target user.](./swagger/signalr-data-plane-rest-v20220601.md#post-broadcast-a-message-to-all-clients-belong-to-the-target-user) | `POST /api/hubs/{hub}/users/{user}/:send` | +| [Check whether a user exists in the target group.](./swagger/signalr-data-plane-rest-v20220601.md#head-check-whether-a-user-exists-in-the-target-group) | `HEAD 
/api/hubs/{hub}/users/{user}/groups/{group}` | +| [Add a user to the target group.](./swagger/signalr-data-plane-rest-v20220601.md#put-add-a-user-to-the-target-group) | `PUT /api/hubs/{hub}/users/{user}/groups/{group}` | +| [Remove a user from the target group.](./swagger/signalr-data-plane-rest-v20220601.md#delete-remove-a-user-from-the-target-group) | `DELETE /api/hubs/{hub}/users/{user}/groups/{group}` | +| [Remove a user from all groups.](./swagger/signalr-data-plane-rest-v20220601.md#delete-remove-a-user-from-all-groups) | `DELETE /api/hubs/{hub}/users/{user}/groups` | ## Using REST API Currently, we have the following limitation for REST API requests: * Header size is a maximum of 16 KB. * Body size is a maximum of 1 MB. -If you want to send message larger than 1 MB, use the Management SDK with `persistent` mode. +If you want to send message larger than 1 MB, use the Management SDK with `persistent` mode. |
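A hedged sketch of calling the broadcast endpoint from the latest API version with curl; the service name, hub, and bearer token are placeholders, the `api-version` value assumes the `2022-06-01` format, and the body assumes the `PayloadMessage` schema with `target` and `arguments` fields:

```bash
# Broadcast one message to every client connected to the hub
curl -X POST "https://<resource-name>.service.signalr.net/api/hubs/<hub>/:send?api-version=2022-06-01" \
  -H "Authorization: Bearer <access-token>" \
  -H "Content-Type: application/json" \
  -d '{"target": "newMessage", "arguments": ["Hello from the REST API"]}'
```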
azure-signalr | Signalr Data Plane Rest V20220601 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/swagger/signalr-data-plane-rest-v20220601.md | + + Title: Azure SignalR service data plane REST API reference - v20220601 +description: Describes REST APIs version v20220601 Azure SignalR service supports to manage the connections and send messages to them. ++++ Last updated : 02/22/2023+++# Azure SignalR Service REST API +## Version: 2022-06-01 ++### Available APIs ++| API | Path | +| - | - | +| [Get service health status.](#head-get-service-health-status) | `HEAD /api/health` | +| [Close all of the connections in the hub.](#post-close-all-of-the-connections-in-the-hub) | `POST /api/hubs/{hub}/:closeConnections` | +| [Broadcast a message to all clients connected to target hub.](#post-broadcast-a-message-to-all-clients-connected-to-target-hub) | `POST /api/hubs/{hub}/:send` | +| [Check if the connection with the given connectionId exists](#head-check-if-the-connection-with-the-given-connectionid-exists) | `HEAD /api/hubs/{hub}/connections/{connectionId}` | +| [Close the client connection](#delete-close-the-client-connection) | `DELETE /api/hubs/{hub}/connections/{connectionId}` | +| [Send message to the specific connection.](#post-send-message-to-the-specific-connection) | `POST /api/hubs/{hub}/connections/{connectionId}/:send` | +| [Check if there are any client connections inside the given group](#head-check-if-there-are-any-client-connections-inside-the-given-group) | `HEAD /api/hubs/{hub}/groups/{group}` | +| [Close connections in the specific group.](#post-close-connections-in-the-specific-group) | `POST /api/hubs/{hub}/groups/{group}/:closeConnections` | +| [Broadcast a message to all clients within the target group.](#post-broadcast-a-message-to-all-clients-within-the-target-group) | `POST /api/hubs/{hub}/groups/{group}/:send` | +| [Add a connection to the target group.](#put-add-a-connection-to-the-target-group) | `PUT /api/hubs/{hub}/groups/{group}/connections/{connectionId}` | +| [Remove a connection from the target group.](#delete-remove-a-connection-from-the-target-group) | `DELETE /api/hubs/{hub}/groups/{group}/connections/{connectionId}` | +| [Remove a connection from all groups](#delete-remove-a-connection-from-all-groups) | `DELETE /api/hubs/{hub}/connections/{connectionId}/groups` | +| [Check if there are any client connections connected for the given user](#head-check-if-there-are-any-client-connections-connected-for-the-given-user) | `HEAD /api/hubs/{hub}/users/{user}` | +| [Close connections for the specific user.](#post-close-connections-for-the-specific-user) | `POST /api/hubs/{hub}/users/{user}/:closeConnections` | +| [Broadcast a message to all clients belong to the target user.](#post-broadcast-a-message-to-all-clients-belong-to-the-target-user) | `POST /api/hubs/{hub}/users/{user}/:send` | +| [Check whether a user exists in the target group.](#head-check-whether-a-user-exists-in-the-target-group) | `HEAD /api/hubs/{hub}/users/{user}/groups/{group}` | +| [Add a user to the target group.](#put-add-a-user-to-the-target-group) | `PUT /api/hubs/{hub}/users/{user}/groups/{group}` | +| [Remove a user from the target group.](#delete-remove-a-user-from-the-target-group) | `DELETE /api/hubs/{hub}/users/{user}/groups/{group}` | +| [Remove a user from all groups.](#delete-remove-a-user-from-all-groups) | `DELETE /api/hubs/{hub}/users/{user}/groups` | +### /api/health ++#### HEAD +##### Summary ++Get service health status. 
++<a name="head-get-service-health-status"></a> +### Get service health status ++`HEAD /api/health` +##### Parameters ++| Name | Located in | Description | Required | Schema | +| - | - | -- | -- | - | +| api-version | query | The version of the REST APIs. | Yes | string | ++##### Responses ++| Code | Description | +| - | -- | +| 200 | The service is healthy | +| default | Error response | ++### /api/hubs/{hub}/:closeConnections ++#### POST +##### Summary ++Close all of the connections in the hub. ++<a name="post-close-all-of-the-connections-in-the-hub"></a> +### Close all of the connections in the hub ++`POST /api/hubs/{hub}/:closeConnections` +##### Parameters ++| Name | Located in | Description | Required | Schema | +| - | - | -- | -- | - | +| hub | path | Target hub name, which should start with alphabetic characters and only contain alpha-numeric characters or underscore. | Yes | string | +| application | query | Target application name, which should start with alphabetic characters and only contain alpha-numeric characters or underscore. | No | string | +| excluded | query | Exclude these connectionIds when closing the connections in the hub. | No | [ string ] | +| reason | query | The reason closing the client connections. | No | string | +| api-version | query | The version of the REST APIs. | Yes | string | ++##### Responses ++| Code | Description | Schema | +| - | -- | | +| 204 | Success | | +| default | Error response | [ErrorDetail](#errordetail) | ++### /api/hubs/{hub}/:send ++#### POST +##### Summary ++Broadcast a message to all clients connected to target hub. ++<a name="post-broadcast-a-message-to-all-clients-connected-to-target-hub"></a> +### Broadcast a message to all clients connected to target hub ++`POST /api/hubs/{hub}/:send` +##### Parameters ++| Name | Located in | Description | Required | Schema | +| - | - | -- | -- | - | +| hub | path | Target hub name, which should start with alphabetic characters and only contain alpha-numeric characters or underscore. | Yes | string | +| application | query | Target application name, which should start with alphabetic characters and only contain alpha-numeric characters or underscore. | No | string | +| excluded | query | Excluded connection Ids | No | [ string ] | +| api-version | query | The version of the REST APIs. | Yes | string | +| message | body | The payload message. | Yes | [PayloadMessage](#payloadmessage) | ++##### Responses ++| Code | Description | Schema | +| - | -- | | +| 202 | Success | [ServiceResponse](#serviceresponse) | +| default | Error response | [ErrorDetail](#errordetail) | ++### /api/hubs/{hub}/connections/{connectionId} ++#### HEAD +##### Summary ++Check if the connection with the given connectionId exists ++<a name="head-check-if-the-connection-with-the-given-connectionid-exists"></a> +### Check if the connection with the given connectionId exists ++`HEAD /api/hubs/{hub}/connections/{connectionId}` +##### Parameters ++| Name | Located in | Description | Required | Schema | +| - | - | -- | -- | - | +| hub | path | Target hub name, which should start with alphabetic characters and only contain alpha-numeric characters or underscore. | Yes | string | +| connectionId | path | The connection Id. | Yes | string | +| application | query | Target application name, which should start with alphabetic characters and only contain alpha-numeric characters or underscore. | No | string | +| api-version | query | The version of the REST APIs. 
| Yes | string | ++##### Responses ++| Code | Description | Schema | +| - | -- | | +| 200 | Success | [ServiceResponse](#serviceresponse) | +| default | Error response | [ErrorDetail](#errordetail) | ++#### DELETE +##### Summary ++Close the client connection ++<a name="delete-close-the-client-connection"></a> +### Close the client connection ++`DELETE /api/hubs/{hub}/connections/{connectionId}` +##### Parameters ++| Name | Located in | Description | Required | Schema | +| - | - | -- | -- | - | +| hub | path | Target hub name, which should start with alphabetic characters and only contain alpha-numeric characters or underscore. | Yes | string | +| connectionId | path | The connection Id. | Yes | string | +| application | query | Target application name, which should start with alphabetic characters and only contain alpha-numeric characters or underscore. | No | string | +| reason | query | The reason of the connection close. | No | string | +| api-version | query | The version of the REST APIs. | Yes | string | ++##### Responses ++| Code | Description | Schema | +| - | -- | | +| 200 | Success | [ServiceResponse](#serviceresponse) | +| default | Error response | [ErrorDetail](#errordetail) | ++### /api/hubs/{hub}/connections/{connectionId}/:send ++#### POST +##### Summary ++Send message to the specific connection. ++<a name="post-send-message-to-the-specific-connection"></a> +### Send message to the specific connection ++`POST /api/hubs/{hub}/connections/{connectionId}/:send` +##### Parameters ++| Name | Located in | Description | Required | Schema | +| - | - | -- | -- | - | +| hub | path | Target hub name, which should start with alphabetic characters and only contain alpha-numeric characters or underscore. | Yes | string | +| connectionId | path | The connection Id. | Yes | string | +| application | query | Target application name, which should start with alphabetic characters and only contain alpha-numeric characters or underscore. | No | string | +| api-version | query | The version of the REST APIs. | Yes | string | +| message | body | The payload message. | Yes | [PayloadMessage](#payloadmessage) | ++##### Responses ++| Code | Description | Schema | +| - | -- | | +| 202 | Success | [ServiceResponse](#serviceresponse) | +| default | Error response | [ErrorDetail](#errordetail) | ++### /api/hubs/{hub}/groups/{group} ++#### HEAD +##### Summary ++Check if there are any client connections inside the given group ++<a name="head-check-if-there-are-any-client-connections-inside-the-given-group"></a> +### Check if there are any client connections inside the given group ++`HEAD /api/hubs/{hub}/groups/{group}` +##### Parameters ++| Name | Located in | Description | Required | Schema | +| - | - | -- | -- | - | +| hub | path | Target hub name, which should start with alphabetic characters and only contain alpha-numeric characters or underscore. | Yes | string | +| group | path | Target group name, which length should be greater than 0 and less than 1025. | Yes | string | +| application | query | Target application name, which should start with alphabetic characters and only contain alpha-numeric characters or underscore. | No | string | +| api-version | query | The version of the REST APIs. 
| Yes | string | ++##### Responses ++| Code | Description | Schema | +| - | -- | | +| 200 | Success | [ServiceResponse](#serviceresponse) | +| 404 | Not Found | | +| default | Error response | [ErrorDetail](#errordetail) | ++### /api/hubs/{hub}/groups/{group}/:closeConnections ++#### POST +##### Summary ++Close connections in the specific group. ++<a name="post-close-connections-in-the-specific-group"></a> +### Close connections in the specific group ++`POST /api/hubs/{hub}/groups/{group}/:closeConnections` +##### Parameters ++| Name | Located in | Description | Required | Schema | +| - | - | -- | -- | - | +| hub | path | Target hub name, which should start with alphabetic characters and only contain alpha-numeric characters or underscore. | Yes | string | +| group | path | Target group name, which length should be greater than 0 and less than 1025. | Yes | string | +| application | query | Target application name, which should start with alphabetic characters and only contain alpha-numeric characters or underscore. | No | string | +| excluded | query | Exclude these connectionIds when closing the connections in the hub. | No | [ string ] | +| reason | query | The reason closing the client connections. | No | string | +| api-version | query | The version of the REST APIs. | Yes | string | ++##### Responses ++| Code | Description | Schema | +| - | -- | | +| 204 | Success | | +| default | Error response | [ErrorDetail](#errordetail) | ++### /api/hubs/{hub}/groups/{group}/:send ++#### POST +##### Summary ++Broadcast a message to all clients within the target group. ++<a name="post-broadcast-a-message-to-all-clients-within-the-target-group"></a> +### Broadcast a message to all clients within the target group ++`POST /api/hubs/{hub}/groups/{group}/:send` +##### Parameters ++| Name | Located in | Description | Required | Schema | +| - | - | -- | -- | - | +| hub | path | Target hub name, which should start with alphabetic characters and only contain alpha-numeric characters or underscore. | Yes | string | +| group | path | Target group name, which length should be greater than 0 and less than 1025. | Yes | string | +| application | query | Target application name, which should start with alphabetic characters and only contain alpha-numeric characters or underscore. | No | string | +| excluded | query | Excluded connection Ids | No | [ string ] | +| api-version | query | The version of the REST APIs. | Yes | string | +| message | body | The payload message. | Yes | [PayloadMessage](#payloadmessage) | ++##### Responses ++| Code | Description | Schema | +| - | -- | | +| 202 | Success | [ServiceResponse](#serviceresponse) | +| default | Error response | [ErrorDetail](#errordetail) | ++### /api/hubs/{hub}/groups/{group}/connections/{connectionId} ++#### PUT +##### Summary ++Add a connection to the target group. ++<a name="put-add-a-connection-to-the-target-group"></a> +### Add a connection to the target group ++`PUT /api/hubs/{hub}/groups/{group}/connections/{connectionId}` +##### Parameters ++| Name | Located in | Description | Required | Schema | +| - | - | -- | -- | - | +| hub | path | Target hub name, which should start with alphabetic characters and only contain alpha-numeric characters or underscore. | Yes | string | +| group | path | Target group name, which length should be greater than 0 and less than 1025. 
| Yes | string | +| connectionId | path | Target connection Id | Yes | string | +| application | query | Target application name, which should start with alphabetic characters and only contain alpha-numeric characters or underscore. | No | string | +| api-version | query | The version of the REST APIs. | Yes | string | ++##### Responses ++| Code | Description | Schema | +| - | -- | | +| 200 | Success | [ServiceResponse](#serviceresponse) | +| 404 | Not Found | | +| default | Error response | [ErrorDetail](#errordetail) | ++#### DELETE +##### Summary ++Remove a connection from the target group. ++<a name="delete-remove-a-connection-from-the-target-group"></a> +### Remove a connection from the target group ++`DELETE /api/hubs/{hub}/groups/{group}/connections/{connectionId}` +##### Parameters ++| Name | Located in | Description | Required | Schema | +| - | - | -- | -- | - | +| hub | path | Target hub name, which should start with alphabetic characters and only contain alpha-numeric characters or underscore. | Yes | string | +| group | path | Target group name, which length should be greater than 0 and less than 1025. | Yes | string | +| connectionId | path | Target connection Id | Yes | string | +| application | query | Target application name, which should start with alphabetic characters and only contain alpha-numeric characters or underscore. | No | string | +| api-version | query | The version of the REST APIs. | Yes | string | ++##### Responses ++| Code | Description | Schema | +| - | -- | | +| 200 | Success | [ServiceResponse](#serviceresponse) | +| 404 | Not Found | | +| default | Error response | [ErrorDetail](#errordetail) | ++### /api/hubs/{hub}/connections/{connectionId}/groups ++#### DELETE +##### Summary ++Remove a connection from all groups ++<a name="delete-remove-a-connection-from-all-groups"></a> +### Remove a connection from all groups ++`DELETE /api/hubs/{hub}/connections/{connectionId}/groups` +##### Parameters ++| Name | Located in | Description | Required | Schema | +| - | - | -- | -- | - | +| hub | path | Target hub name, which should start with alphabetic characters and only contain alpha-numeric characters or underscore. | Yes | string | +| connectionId | path | Target connection Id | Yes | string | +| application | query | Target application name, which should start with alphabetic characters and only contain alpha-numeric characters or underscore. | No | string | +| api-version | query | The version of the REST APIs. | Yes | string | ++##### Responses ++| Code | Description | Schema | +| - | -- | | +| 200 | Success | [ServiceResponse](#serviceresponse) | +| default | Error response | [ErrorDetail](#errordetail) | ++### /api/hubs/{hub}/users/{user} ++#### HEAD +##### Summary ++Check if there are any client connections connected for the given user ++<a name="head-check-if-there-are-any-client-connections-connected-for-the-given-user"></a> +### Check if there are any client connections connected for the given user ++`HEAD /api/hubs/{hub}/users/{user}` +##### Parameters ++| Name | Located in | Description | Required | Schema | +| - | - | -- | -- | - | +| hub | path | Target hub name, which should start with alphabetic characters and only contain alpha-numeric characters or underscore. | Yes | string | +| user | path | The user Id. | Yes | string | +| application | query | Target application name, which should start with alphabetic characters and only contain alpha-numeric characters or underscore. | No | string | +| api-version | query | The version of the REST APIs. 
| Yes | string | ++##### Responses ++| Code | Description | Schema | +| - | -- | | +| 200 | Success | [ServiceResponse](#serviceresponse) | +| 404 | Not Found | | +| default | Error response | [ErrorDetail](#errordetail) | ++### /api/hubs/{hub}/users/{user}/:closeConnections ++#### POST +##### Summary ++Close connections for the specific user. ++<a name="post-close-connections-for-the-specific-user"></a> +### Close connections for the specific user ++`POST /api/hubs/{hub}/users/{user}/:closeConnections` +##### Parameters ++| Name | Located in | Description | Required | Schema | +| - | - | -- | -- | - | +| hub | path | Target hub name, which should start with alphabetic characters and only contain alpha-numeric characters or underscore. | Yes | string | +| user | path | The user Id. | Yes | string | +| application | query | Target application name, which should start with alphabetic characters and only contain alpha-numeric characters or underscore. | No | string | +| excluded | query | Exclude these connectionIds when closing the connections in the hub. | No | [ string ] | +| reason | query | The reason closing the client connections. | No | string | +| api-version | query | The version of the REST APIs. | Yes | string | ++##### Responses ++| Code | Description | Schema | +| - | -- | | +| 204 | Success | | +| default | Error response | [ErrorDetail](#errordetail) | ++### /api/hubs/{hub}/users/{user}/:send ++#### POST +##### Summary ++Broadcast a message to all clients belong to the target user. ++<a name="post-broadcast-a-message-to-all-clients-belong-to-the-target-user"></a> +### Broadcast a message to all clients belong to the target user ++`POST /api/hubs/{hub}/users/{user}/:send` +##### Parameters ++| Name | Located in | Description | Required | Schema | +| - | - | -- | -- | - | +| hub | path | Target hub name, which should start with alphabetic characters and only contain alpha-numeric characters or underscore. | Yes | string | +| user | path | The user Id. | Yes | string | +| application | query | Target application name, which should start with alphabetic characters and only contain alpha-numeric characters or underscore. | No | string | +| api-version | query | The version of the REST APIs. | Yes | string | +| message | body | The payload message. | Yes | [PayloadMessage](#payloadmessage) | ++##### Responses ++| Code | Description | Schema | +| - | -- | | +| 202 | Success | [ServiceResponse](#serviceresponse) | +| default | Error response | [ErrorDetail](#errordetail) | ++### /api/hubs/{hub}/users/{user}/groups/{group} ++#### HEAD +##### Summary ++Check whether a user exists in the target group. ++<a name="head-check-whether-a-user-exists-in-the-target-group"></a> +### Check whether a user exists in the target group ++`HEAD /api/hubs/{hub}/users/{user}/groups/{group}` +##### Parameters ++| Name | Located in | Description | Required | Schema | +| - | - | -- | -- | - | +| hub | path | Target hub name, which should start with alphabetic characters and only contain alpha-numeric characters or underscore. | Yes | string | +| group | path | Target group name, which length should be greater than 0 and less than 1025. | Yes | string | +| user | path | Target user Id | Yes | string | +| application | query | Target application name, which should start with alphabetic characters and only contain alpha-numeric characters or underscore. | No | string | +| api-version | query | The version of the REST APIs. 
| Yes | string | ++##### Responses ++| Code | Description | Schema | +| - | -- | | +| 200 | Success | [ServiceResponse](#serviceresponse) | +| 404 | Not Found | | +| default | Error response | [ErrorDetail](#errordetail) | ++#### PUT +##### Summary ++Add a user to the target group. ++<a name="put-add-a-user-to-the-target-group"></a> +### Add a user to the target group ++`PUT /api/hubs/{hub}/users/{user}/groups/{group}` +##### Parameters ++| Name | Located in | Description | Required | Schema | +| - | - | -- | -- | - | +| hub | path | Target hub name, which should start with alphabetic characters and only contain alpha-numeric characters or underscore. | Yes | string | +| group | path | Target group name, which length should be greater than 0 and less than 1025. | Yes | string | +| user | path | Target user Id | Yes | string | +| application | query | Target application name, which should start with alphabetic characters and only contain alpha-numeric characters or underscore. | No | string | +| ttl | query | Specifies the seconds that the user exists in the group. If not set, the user lives in the group for 1 year at most. If a user is added to some groups without ttl limitation, only the latest updated 100 groups will be reserved among all groups the user joined without TTL. If ttl = 0, only the current connected connections of the target user will be added to the target group. | No | integer | +| api-version | query | The version of the REST APIs. | Yes | string | ++##### Responses ++| Code | Description | Schema | +| - | -- | | +| 200 | Success | [ServiceResponse](#serviceresponse) | +| default | Error response | [ErrorDetail](#errordetail) | ++#### DELETE +##### Summary ++Remove a user from the target group. ++<a name="delete-remove-a-user-from-the-target-group"></a> +### Remove a user from the target group ++`DELETE /api/hubs/{hub}/users/{user}/groups/{group}` +##### Parameters ++| Name | Located in | Description | Required | Schema | +| - | - | -- | -- | - | +| hub | path | Target hub name, which should start with alphabetic characters and only contain alpha-numeric characters or underscore. | Yes | string | +| group | path | Target group name, which length should be greater than 0 and less than 1025. | Yes | string | +| user | path | Target user Id | Yes | string | +| application | query | Target application name, which should start with alphabetic characters and only contain alpha-numeric characters or underscore. | No | string | +| api-version | query | The version of the REST APIs. | Yes | string | ++##### Responses ++| Code | Description | Schema | +| - | -- | | +| 204 | Success | | +| default | Error response | [ErrorDetail](#errordetail) | ++### /api/hubs/{hub}/users/{user}/groups ++#### DELETE +##### Summary ++Remove a user from all groups. ++<a name="delete-remove-a-user-from-all-groups"></a> +### Remove a user from all groups ++`DELETE /api/hubs/{hub}/users/{user}/groups` +##### Parameters ++| Name | Located in | Description | Required | Schema | +| - | - | -- | -- | - | +| hub | path | Target hub name, which should start with alphabetic characters and only contain alpha-numeric characters or underscore. | Yes | string | +| user | path | Target user Id | Yes | string | +| application | query | Target application name, which should start with alphabetic characters and only contain alpha-numeric characters or underscore. | No | string | +| api-version | query | The version of the REST APIs. 
| Yes | string | ++##### Responses ++| Code | Description | Schema | +| - | -- | | +| 204 | Success | | +| default | Error response | [ErrorDetail](#errordetail) | ++### Models ++#### CodeLevel ++| Name | Type | Description | Required | +| - | - | -- | -- | +| CodeLevel | integer | | | ++#### ErrorDetail ++The error object. ++| Name | Type | Description | Required | +| - | - | -- | -- | +| code | string | One of a server-defined set of error codes. | No | +| message | string | A human-readable representation of the error. | No | +| target | string | The target of the error. | No | +| details | [ [ErrorDetail](#errordetail) ] | An array of details about specific errors that led to this reported error. | No | +| inner | [InnerError](#innererror) | | No | ++#### ErrorKind ++| Name | Type | Description | Required | +| - | - | -- | -- | +| ErrorKind | integer | | | ++#### ErrorScope ++| Name | Type | Description | Required | +| - | - | -- | -- | +| ErrorScope | integer | | | ++#### InnerError ++| Name | Type | Description | Required | +| - | - | -- | -- | +| code | string | A more specific error code than was provided by the containing error. | No | +| inner | [InnerError](#innererror) | | No | ++#### PayloadMessage ++| Name | Type | Description | Required | +| - | - | -- | -- | +| target | string | | Yes | +| arguments | [ ] | | No | ++#### ServiceResponse ++| Name | Type | Description | Required | +| - | - | -- | -- | +| code | string | | No | +| level | [CodeLevel](#codelevel) | | No | +| scope | [ErrorScope](#errorscope) | | No | +| errorKind | [ErrorKind](#errorkind) | | No | +| message | string | | No | +| jsonObject | | | No | +| isSuccess | boolean | | No | |
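To make the Azure SignalR data plane reference above more concrete, here's a minimal sketch (not part of the original article) that calls the broadcast operation `POST /api/hubs/{hub}/:send` from Node.js 18 or later. The endpoint, hub name, and the way the bearer token is obtained are assumptions; supply a token you've already generated for your service (for example, a JWT accepted by your SignalR resource) via an environment variable.

```javascript
// Minimal sketch: broadcast to every client in a hub via the data plane REST API.
// Assumptions (not defined in the article): SIGNALR_ENDPOINT such as
// https://<resource>.service.signalr.net, SIGNALR_TOKEN holding a valid bearer
// token, and a hub named "chat".
const endpoint = process.env.SIGNALR_ENDPOINT;
const token = process.env.SIGNALR_TOKEN;
const hub = "chat";

async function broadcast(target, args) {
  const url = `${endpoint}/api/hubs/${hub}/:send?api-version=2022-06-01`;
  const response = await fetch(url, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${token}`,
    },
    // PayloadMessage schema from the reference: "target" is required, "arguments" is optional.
    body: JSON.stringify({ target, arguments: args }),
  });
  if (response.status !== 202) {
    throw new Error(`Broadcast failed: HTTP ${response.status}`);
  }
}

broadcast("newMessage", ["Hello from the REST API"]).catch(console.error);
```

Per the response table, a 202 status means the service accepted the message; the `default` error responses carry an `ErrorDetail` body you can log for troubleshooting.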
azure-vmware | Attach Azure Netapp Files To Azure Vmware Solution Hosts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md | Title: Attach Azure NetApp Files datastores to Azure VMware Solution hosts description: Learn how to create Azure NetApp Files-based NFS datastores for Azure VMware Solution hosts. Previously updated : 02/17/2023 Last updated : 02/21/2023 -By using NFS datastores backed by Azure NetApp Files, you can expand your storage instead of scaling the clusters. You can also use Azure NetApp Files volumes to replicate data from on-premises or primary VMware environments for the secondary site. +By using NFS datastores backed by Azure NetApp Files, you can expand your storage instead of scaling the clusters. You can also use Azure NetApp Files volumes to replicate data from on-premises or primary VMware vSphere environments for the secondary site. Create your Azure VMware Solution and create Azure NetApp Files NFS volumes in the virtual network connected to it using an ExpressRoute. Ensure there's connectivity from the private cloud to the NFS volumes created. Use those volumes to create NFS datastores and attach the datastores to clusters of your choice in a private cloud. As a native integration, you need no other permissions configured via vSphere. Before you begin the prerequisites, review the [Performance best practices](#per >[!NOTE] >Azure NetApp Files datastores for Azure VMware Solution are generally available. To use it, you must register Azure NetApp Files datastores for Azure VMware Solution. -## Supported regions --Azure VMware Solution currently supports the following regions: --**Asia** : East Asia, Japan East, Japan West, Southeast Asia. --**Australia** : Australia East, Australia Southeast. --**Brazil** : Brazil South. --**Europe** : France Central, Germany West Central, North Europe, Sweden Central, Sweden North, Switzerland West, UK South, UK West, West Europe --**North America** : Canada Central, Canada East, Central US, East US, East US 2, North Central US, South Central US, West US, West US 2. +Azure VMware Solution is currently supported in these [regions](https://azure.microsoft.com/global-infrastructure/services/?products=azure-vmware). ## Performance best practices To attach an Azure NetApp Files volume to your private cloud using Azure CLI, fo `az vmware datastore list --resource-group MyResourceGroup --cluster Cluster-1 --private-cloud MyPrivateCloud` ++ ## Service level change for Azure NetApp Files datastore Based on the performance requirements of the datastore, you can change the service level of the Azure NetApp Files volume used for the datastore by following the instructions to [dynamically change the service level of a volume for Azure NetApp Files](../azure-netapp-files/dynamic-change-volume-service-level.md)-This has no impact to the Datastore or private cloud as there is no downtime involved and the IP address/mount path remain unchanged. However, the volume Resource Id will be changed due to the capacity pool change. Therefore to avoid any metadata mismatch re-issue the datastore create command via Azure CLI as follows: `az vmware datastore netapp-volume create`. +This has no impact to the Datastore or private cloud as there is no downtime involved and the IP address/mount path remain unchanged. However, the volume Resource ID will be changed due to the capacity pool change. 
Therefore, to avoid any metadata mismatch, re-issue the datastore create command via Azure CLI as follows: `az vmware datastore netapp-volume create`. >[!IMPORTANT] -> The input values for **cluster** name, datastore **name**, **private-cloud** (SDDC) name, and **resource-group** must be **exactly the same as the current one**, and the **volume-id** is the new Resource Id of the volume. +> The input values for **cluster** name, datastore **name**, **private-cloud** (SDDC) name, and **resource-group** must be **exactly the same as the current one**, and the **volume-id** is the new Resource ID of the volume. -**cluster** -**name** |
azure-vmware | Azure Vmware Solution Citrix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/azure-vmware-solution-citrix.md | Citrix Virtual Apps and Desktop service supports Azure VMware Solution. Azure VM [Deployment guide](https://docs.citrix.com/en-us/citrix-virtual-apps-desktops-service/install-configure/resource-location/azure-resource-manager.html#azure-vmware-solution-avs-integration) -[Solution brief](https://www.citrix.com/content/dam/citrix/en_us/documents/solution-brief/citrix-virtual-apps-and-desktop-service-on-azure-vmware-solution.pdf) +[Solution brief](https://www.citrix.com/downloads/citrix-virtual-apps-and-desktops/) **FAQ (review Q&As)** |
batch | Best Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/best-practices.md | Deleting tasks accomplishes two things: - Ensures that you don't have a build-up of tasks in the job. This action will help avoid difficulty in finding the task you're interested in as you'll have to filter through the Completed tasks. - Cleans up the corresponding task data on the node (provided `retentionTime` hasn't already been hit). This action helps ensure that your nodes don't fill up with task data and run out of disk space. +> [!NOTE] +> For tasks just submitted to Batch, the DeleteTask API call takes up to 10 minutes to take effect. Until it takes effect, other tasks might be prevented from being scheduled, because the Batch scheduler still tries to schedule the just-deleted tasks. If you need to delete a task shortly after it's submitted, terminate the task instead (task termination takes effect immediately), and then delete the task 10 minutes later. + ### Submit large numbers of tasks in collection Tasks can be submitted on an individual basis or in collections. Submit tasks in [collections](/rest/api/batchservice/task/addcollection) of up to 100 at a time when doing bulk submission of tasks to reduce overhead and submission time. |
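To illustrate the bulk-submission guidance in the Batch entry above, the following sketch splits a large task list into groups of up to 100 before submitting each group. It's an assumption-based illustration, not code from the article: `submitTaskCollection` is a hypothetical placeholder for whatever mechanism you use to call the add-collection operation (SDK or REST).

```javascript
// Hypothetical helper: submit tasks in chunks of up to 100, the collection
// limit mentioned above. `submitTaskCollection` is a placeholder, not a real
// Batch SDK function -- wire it to your SDK or REST call of choice.
async function addTasksInChunks(jobId, tasks, submitTaskCollection) {
  const chunkSize = 100;
  for (let start = 0; start < tasks.length; start += chunkSize) {
    const chunk = tasks.slice(start, start + chunkSize);
    await submitTaskCollection(jobId, chunk); // one request per <= 100 tasks
  }
}
```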
cognitive-services | Speech Container Howto | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-container-howto.md | With Speech containers, you can build a speech application architecture that's o | Container | Features | Latest | Release status | |--|--|--|--|-| Speech-to-text | Analyzes sentiment and transcribes continuous real-time speech or batch audio recordings with intermediate results. | 3.10.0 | Generally available | -| Custom speech-to-text | Using a custom model from the [Custom Speech portal](https://speech.microsoft.com/customspeech), transcribes continuous real-time speech or batch audio recordings into text with intermediate results. | 3.10.0 | Generally available | +| Speech-to-text | Analyzes sentiment and transcribes continuous real-time speech or batch audio recordings with intermediate results. | 3.11.0 | Generally available | +| Custom speech-to-text | Using a custom model from the [Custom Speech portal](https://speech.microsoft.com/customspeech), transcribes continuous real-time speech or batch audio recordings into text with intermediate results. | 3.11.0 | Generally available | | Speech language identification | Detects the language spoken in audio files. | 1.5.0 | Preview |-| Neural text-to-speech | Converts text to natural-sounding speech by using deep neural network technology, which allows for more natural synthesized speech. | 2.9.0 | Generally available | +| Neural text-to-speech | Converts text to natural-sounding speech by using deep neural network technology, which allows for more natural synthesized speech. | 2.10.0 | Generally available | ## Prerequisites |
cognitive-services | Disaster Recovery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/disaster-recovery.md | - Title: Disaster recovery with cross-region support- -description: This article provides instructions on how to use the cross-region feature to recover your Cognitive Service resources in the event of a network outage. ----- Previously updated : 01/27/2023----# CrossΓÇôregion disaster recovery --One of the first decisions every Cognitive Service customer makes is which region to create their resource in. The choice of region provides customers with the benefits of regional compliance by enforcing data residency requirements. Cognitive Services is available in [multiple geographies](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=cognitive-services) to ensure customers across the world are supported. --It's rare, but possible, to encounter a network issue that affects an entire region. If your solution needs to always be available, then you should design it to either fail-over into another region or split the workload between two or more regions. Both approaches require at least two resources in different regions and the ability to sync data between them. --## Feature overview --The cross-region disaster recovery feature, also known as Single Resource Multiple Region (SRMR), enables this scenario by allowing you to distribute traffic or copy custom models to multiple resources which can exist in any supported geography. --## SRMR business scenarios --* To ensure high availability of your application, each Cognitive Service supports a flexible recovery region option that allows you to choose from a list of supported regions. -* Customers with globally distributed end users can deploy resources in multiple regions to optimize the latency of their applications. --## Routing profiles --Azure Traffic Manager routes requests among the selected regions. The SRMR currently supports [Priority](../traffic-manager/traffic-manager-routing-methods.md#priority-traffic-routing-method), [Performance](../traffic-manager/traffic-manager-routing-methods.md#performance-traffic-routing-method) and [Weighted](../traffic-manager/traffic-manager-routing-methods.md#weighted-traffic-routing-method) profiles and is currently available for the following --* [Computer Vision](./computer-vision/overview.md) -* [Immersive Reader](../applied-ai-services/immersive-reader/overview.md) -* [Univariate Anomaly Detector](./anomaly-detector/overview.md) --> [!NOTE] -> SRMR is not supported for multi-service resources or free tier resources. --If you use Priority or Weighted traffic manager profiles, your configuration will behave according to the [Traffic Manager documentation](../traffic-manager/traffic-manager-routing-methods.md). --## Enable SRMR --1. Sign in to the [Azure portal](https://portal.azure.com). -1. Navigate to your resource's page. -1. Under the **Resource Management** section on the left pane, select the Regions tab and choose a routing method. - :::image type="content" source="media/disaster-recovery/routing-method.png" alt-text="Screenshot of the routing method select menu in the Azure portal." lightbox="media/disaster-recovery/routing-method.png"::: -1. Select the **Add Region** link. -1. On the **Add Region** pop-up screen, set up additional regions for your resources. - :::image type="content" source="media/disaster-recovery/add-regions.png" alt-text="Screenshot of the Add Region popup in the Azure portal." 
lightbox="media/disaster-recovery/add-regions.png"::: -1. Save your changes. --## See also -* [Create a new resource using the Azure portal](cognitive-services-apis-create-account.md) -* [Create a new resource using the Azure CLI](cognitive-services-apis-create-account-cli.md) -* [Create a new resource using the client library](cognitive-services-apis-create-account-client-library.md) -* [Create a new resource using an ARM template](create-account-resource-manager-template.md) |
cognitive-services | Manage Costs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/manage-costs.md | Azure OpenAI base series and Codex series models are charged per 1,000 tokens. C Our models understand and process text by breaking it down into tokens. For reference, each token is roughly four characters for typical English text. +Token costs are for both input and output. For example, suppose you have a 1,000 token JavaScript code sample that you ask an Azure OpenAI model to convert to Python. You would be charged approximately 1,000 tokens for the initial input request sent, and 1,000 more tokens for the output that is received in response, for a total of 2,000 tokens. ++In practice, for this type of completion call, the token input/output wouldn't be perfectly 1:1. A conversion from one programming language to another could result in a longer or shorter output depending on many different factors, including the value assigned to the `max_tokens` parameter. + ### Base Series and Codex series fine-tuned models Azure OpenAI fine-tuned models are charged based on three factors: |
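To illustrate the token arithmetic in the Azure OpenAI cost entry above, here's a small sketch that applies the rough four-characters-per-token heuristic and the input-plus-output billing model it describes. The per-1,000-token price is a placeholder, not an official rate.

```javascript
// Rough cost estimate: ~4 characters per token (per the guidance above), and
// you pay for both the prompt you send and the completion you receive.
const PRICE_PER_1K_TOKENS = 0.002; // placeholder rate -- substitute your model's price

function estimateCompletionCost(promptText, completionText) {
  const promptTokens = Math.ceil(promptText.length / 4);
  const completionTokens = Math.ceil(completionText.length / 4);
  const totalTokens = promptTokens + completionTokens; // billed for both directions
  return { totalTokens, estimatedCost: (totalTokens / 1000) * PRICE_PER_1K_TOKENS };
}

console.log(estimateCompletionCost("Convert this JavaScript to Python: ...", "def main(): ..."));
```

Treat the result as an estimate only; actual token counts come from the model's tokenizer and from the usage figures returned with each response.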
communication-services | Known Issues | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/known-issues.md | Firefox desktop browser support is now available in public preview. Known issues ### iOS 16 introduced bugs when putting browser in the background during a call-The iOS 16 release has introduced a bug that can stop the ACS audio\video call when using Safari mobile browser. Apple is aware of this issue and are looking for a fix on their side. The impact could be that an ACS call might stop working during a call and the only resolution to get it working again is to have the end customer restart their phone. +The iOS 16 release has introduced a bug that can stop the ACS audio\video call when using Safari mobile browser. Apple is aware of this issue and is looking for a fix on their side. The impact could be that an ACS call might stop working during a call and the only resolution to get it working again is to have the end customer restart their phone. To reproduce this bug: - Have a user using an iPhone running iOS 16 Results: ### Chrome M98 - regression -Chrome version 98 introduced a regression with anormal generation of video keyframes that impacts resolution of a sent video stream negatively for majority (70%+) of users. +Chrome version 98 introduced a regression with abnormal generation of video keyframes that impacts resolution of a sent video stream negatively for the majority (70%+) of users. - This is a known regression introduced on [Chromium](https://bugs.chromium.org/p/chromium/issues/detail?id=1295815) +### No incoming audio during a call ++Occasionally, a user in an ACS call may not be able to hear the audio from remote participants. +There is a related [Chromium](https://bugs.chromium.org/p/chromium/issues/detail?id=1402250) bug that causes this issue. The issue can be mitigated by reconnecting the PeerConnection. We added this workaround in SDK 1.9.1 (stable) and SDK 1.10.0 (beta). ++On Android Chrome, if a user joins an ACS call several times, the incoming audio can also disappear. The user won't be able to hear the audio from other participants until the page is refreshed. We fixed this issue in SDK 1.10.1-beta.1 and improved the audio resource usage. + ### Some Android devices failing call scenarios except for group calls. A number of specific Android devices fail to start, accept calls, and meetings. The devices that run into this issue won't recover and will fail on every attempt. These are mostly Samsung model A devices, particularly models A326U, A125U, and A215U. - This is a known regression introduced on [Chromium](https://bugs.chromium.org/p/webrtc/issues/detail?id=13223). +### Android Chrome mutes the call after the browser goes to the background for one minute ++On Android Chrome, if a user is on an ACS call and puts the browser into the background for one minute, the microphone loses access and the other participants in the call won't hear the audio from the user. Once the user brings the browser to the foreground, the microphone becomes available again. Related Chromium bugs are tracked [here](https://bugs.chromium.org/p/chromium/issues/detail?id=1027446) and [here](https://bugs.chromium.org/p/webrtc/issues/detail?id=10940). ++### The user has dropped the call but is still on the participant list. ++The problem can occur if a mobile user leaves the ACS group call without properly hanging up.
When a user closes the browser or refreshes the webpage without hanging up, other participants in the group call will still see the user on the participant list for about 2 minutes. ++### iOS Safari refreshes the page if the user goes to another app and returns back to the browser ++The problem can occur if a user is in an ACS call with iOS Safari and switches to another app for a while. After the user returns to the browser, +the browser page may refresh because the OS has killed the browser. One way to mitigate this issue is to keep some state and recover it after the page refreshes, as illustrated in the sketch after this entry. ++ ### iOS 15.1 users joining group calls or Microsoft Teams meetings. -* Sometimes when incoming PSTN is received the tab with the call or meeting will hang. Related webkit bugs [here](https://bugs.webkit.org/show_bug.cgi?id=233707) and [here](https://bugs.webkit.org/show_bug.cgi?id=233708#c0). +* Sometimes when an incoming PSTN call is received, the tab with the call or meeting will hang. Related WebKit bugs [here](https://bugs.webkit.org/show_bug.cgi?id=233707) and [here](https://bugs.webkit.org/show_bug.cgi?id=233708#c0). ### Local microphone/camera mutes when certain interruptions occur on iOS Safari and Android Chrome. This problem can occur if another application or the operating system takes over - A user plays a YouTube video, for example, or starts a FaceTime call. Switching to another native application can capture access to the microphone or camera. - A user enables Siri, which will capture access to the microphone. -On iOS for example, while on an ACS call, if a PSTN call comes in, then a microphoneMutedUnexepectedly bad UFD will be raised and audio will stop flowing in the ACS call and the call will be marked as muted. Once the PSTN call is over, the user will have to go and unmute the ACS call for audio to start flowing again in the ACS call. In the case of Android Chrome when a PSTN call comes in, audio will stop flowing in the ACS call and the ACS call will not be marked as muted. Once the PSTN call is finished, android chrome will regain audio automatically and audio will start flowing normally again in the ACS call. +On iOS, for example, while on an ACS call, if a PSTN call comes in, then a microphoneMutedUnexepectedly bad UFD will be raised, audio will stop flowing in the ACS call, and the call will be marked as muted. Once the PSTN call is over, the user will have to unmute the ACS call for audio to start flowing again in the ACS call. In the case of Android Chrome, when a PSTN call comes in, audio will stop flowing in the ACS call and the ACS call will not be marked as muted. In this case, there is no microphoneMutedUnexepectedly UFD event. Once the PSTN call is finished, Android Chrome will regain audio automatically and audio will start flowing normally again in the ACS call. -In case camera is on and an interruption occurs, ACS call may or may not loose the camera. If lost then camera will be marked as off and user will have to go turn it back on after the interruption has released the camera. +If the camera is on and an interruption occurs, the ACS call may or may not lose the camera. If it's lost, the camera will be marked as off and the user will have to turn it back on after the interruption has released the camera. Occasionally, microphone or camera devices won't be released on time, and that can cause issues with the original call. For example, if the user tries to unmute while watching a YouTube video, or if a PSTN call is active simultaneously. |
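For the iOS Safari page-refresh issue in the Communication Services entry above, the "keep some state and recover" mitigation might be sketched as follows. This is an assumption-based illustration, not ACS sample code: the storage key and the `rejoinCall` callback are hypothetical, and you'd wire the callback to your own calling code.

```javascript
// Illustrative sketch: persist minimal call state so the app can rejoin after
// an unexpected page refresh. The key name and rejoinCall callback are
// hypothetical placeholders.
const CALL_STATE_KEY = "acsCallState";

function saveCallState(groupId) {
  sessionStorage.setItem(CALL_STATE_KEY, JSON.stringify({ groupId }));
}

function tryRecoverCall(rejoinCall) {
  const saved = sessionStorage.getItem(CALL_STATE_KEY);
  if (!saved) {
    return;
  }
  const { groupId } = JSON.parse(saved);
  rejoinCall(groupId); // for example, callAgent.join({ groupId })
}
```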
communications-gateway | Monitoring Azure Communications Gateway Data Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/monitoring-azure-communications-gateway-data-reference.md | Azure Communications Gateway has the following dimensions associated with its me ## See Also - See [Monitoring Azure Communications Gateway](monitor-azure-communications-gateway.md) for a description of monitoring Azure Communications Gateway.-- See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/insights/monitor-azure-resources) for details on monitoring Azure resources.+- See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for details on monitoring Azure resources. |
communications-gateway | Reliability Communications Gateway | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/reliability-communications-gateway.md | Management regions contain the infrastructure used for the ordering, monitoring ## Availability zone support -Azure availability zones have a minimum of three physically separate groups of datacenters within each Azure region. Datacenters within each zone are equipped with independent power, cooling, and networking infrastructure. If a local zone fails, regional services, capacity, and high availability are supported by the other zones in the region. Failures can range from software and hardware failures to events such as earthquakes, floods, and fires. Tolerance to failures is achieved with redundancy and logical isolation of Azure services. For more detailed information on availability zones in Azure, see [Regions and availability zones](/azure/availability-zones/az-overview.md). +Azure availability zones have a minimum of three physically separate groups of datacenters within each Azure region. Datacenters within each zone are equipped with independent power, cooling, and networking infrastructure. If a local zone fails, regional services, capacity, and high availability are supported by the other zones in the region. Failures can range from software and hardware failures to events such as earthquakes, floods, and fires. Tolerance to failures is achieved with redundancy and logical isolation of Azure services. For more detailed information on availability zones in Azure, see [Regions and availability zones](/azure/availability-zones/az-overview). ### Zone down experience for service regions The reliability design described in this document is implemented by Microsoft an ## Next steps > [!div class="nextstepaction"]-> [Prepare to deploy an Azure Communications Gateway resource](prepare-to-deploy.md) +> [Prepare to deploy an Azure Communications Gateway resource](prepare-to-deploy.md) |
communications-gateway | Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/security.md | Last updated 02/09/2023 -# Security and Azure Communications Gateway +# Overview of security for Azure Communications Gateway The customer data Azure Communications Gateway handles can be split into: The following cipher suites are used for encrypting SIP and RTP. ## Next steps +- Read the [security baseline for Azure Communications Gateway](/security/benchmark/azure/baselines/azure-communications-gateway-security-baseline?toc=/azure/communications-gateway/toc.json&bc=/azure/communications-gateway/breadcrumb/toc.json) - Learn about [how Azure Communications Gateway communicates with Microsoft Teams and your network](interoperability.md). |
connectors | Connectors Create Api Azureblobstorage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-azureblobstorage.md | You can add network security to an Azure storage account by [restricting access - To access storage accounts behind firewalls using the Azure Blob Storage managed connector in Consumption and ISE-based logic apps, review the following documentation: - > [!NOTE] - > - > The following solutions don't apply to Standard logic apps. - - [Access storage accounts in same region with system-managed identities](#access-blob-storage-in-same-region-with-system-managed-identities) - [Access storage accounts in other regions](#access-storage-accounts-in-other-regions) - To access storage accounts behind firewalls using the ISE-versioned Azure Blob Storage connector that's only available in an ISE-based logic app, review [Access storage accounts through trusted virtual network](#access-storage-accounts-through-trusted-virtual-network). -- To access storage accounts behind firewalls in Standard logic apps, use the Azure Blob Storage *built-in* connector, not the managed connector, and review [Access storage accounts through virtual network integration](#access-storage-accounts-through-virtual-network-integration).+- To access storage accounts behind firewalls in Standard logic apps, review the following documentation: ++ - Azure Blob Storage *built-in* connector: [Access storage accounts through virtual network integration](#access-storage-accounts-through-virtual-network-integration) ++ - Azure Blob Storage *managed* connector: [Access storage accounts in other regions](#access-storage-accounts-in-other-regions) ### Access storage accounts in other regions To add your outbound IP addresses to the storage account firewall, follow these To connect to Azure Blob Storage in any region, you can use [managed identities for authentication](../active-directory/managed-identities-azure-resources/overview.md). You can create an exception that gives Microsoft trusted services, such as a managed identity, access to your storage account through a firewall. + > [!NOTE] + > + > This solution doesn't apply to Standard logic apps. Even if you use a system-assigned managed identity with a Standard logic app, + > the Azure Blob Storage managed connector can't connect to a storage account in the same region. + To use managed identities in your logic app to access Blob Storage, follow these steps: 1. [Configure access to your storage account](#configure-storage-account-access). |
cosmos-db | Analytical Store Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/analytical-store-introduction.md | There are two types of schema representation in the analytical store. These type The well-defined schema representation creates a simple tabular representation of the schema-agnostic data in the transactional store. The well-defined schema representation has the following considerations: -* The first document defines the base schema and property must always have the same type across all documents. The only exceptions are: +* The first document defines the base schema and properties must always have the same type across all documents. The only exceptions are: * From `NULL` to any other data type. The first non-null occurrence defines the column data type. Any document not following the first non-null datatype won't be represented in analytical store. * From `float` to `integer`. All documents will be represented in analytical store. * From `integer` to `float`. All documents will be represented in analytical store. However, to read this data with Azure Synapse SQL serverless pools, you must use a WITH clause to convert the column to `varchar`. And after this initial conversion, it's possible to convert it again to a number. Please check the example below, where the initial value of **num** was an integer and the second one was a float. WITH (num varchar(100)) AS [IntToFloat] * Spark pools in Azure Synapse will represent these columns as `undefined`. * SQL serverless pools in Azure Synapse will represent these columns as `NULL`. +##### Workaround for representation challenges ++Currently the base schema can't be reset, and it's possible that an old document, with an incorrect schema, was used to create that base schema. Deleting or updating the problematic documents won't help. The possible solutions are: ++ * Migrate the data to a new container, making sure that all documents have the correct schema. + * Abandon the property with the wrong schema and add a new one, with another name, that has the correct data type. Example: You have billions of documents in the **Orders** container where the **status** property is a string, but the first document in that container has **status** defined as an integer. So, one document will have **status** correctly represented and all other documents will have **NULL**. You can add a **status2** property to all documents and start to use it instead of the original property, as sketched after this entry. #### Full fidelity schema representation |
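For the second workaround in the analytical store entry above (adding a correctly typed replacement property), a minimal JavaScript SDK sketch might look like the following. The container reference, query, property names, and partition key are assumptions for illustration; adapt them to your own data.

```javascript
// Hypothetical backfill: copy "status" into a new "status2" property with a
// consistent string type, so the analytical store base schema picks it up.
// `container` is assumed to be an @azure/cosmos Container for your data.
async function backfillStatus2(container) {
  const iterator = container.items.query("SELECT * FROM c WHERE NOT IS_DEFINED(c.status2)");
  while (iterator.hasMoreResults()) {
    const { resources } = await iterator.fetchNext();
    for (const doc of resources ?? []) {
      doc.status2 = String(doc.status); // enforce one data type across all documents
      // "categoryId" is a placeholder -- pass your container's actual partition key value.
      await container.item(doc.id, doc.categoryId).replace(doc);
    }
  }
}
```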
cosmos-db | Migrate Continuous Backup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/migrate-continuous-backup.md | The following are the key reasons to migrate into continuous mode: > * If the account has a single write region. > * If the account isn't enabled with analytical store. >-> If the account is using [customer-managed keys](./how-to-setup-cmk.md), a user-assigned managed identity must be declared in the Key Vault access policy and must be set as the default identity on the account. +> If the account is using [customer-managed keys](./how-to-setup-cmk.md), a managed identity (System-assigned or User-assigned) must be declared in the Key Vault access policy and must be set as the default identity on the account. ## Permissions |
cosmos-db | Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/introduction.md | +> [!VIDEO https://www.microsoft.com/videoplayer/embed/RWXr4T] + > [!TIP] > Want to try the API for MongoDB with no commitment? Create an Azure Cosmos DB account using [Try Azure Cosmos DB](../try-free.md) for free. |
cosmos-db | Quickstart Nodejs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-nodejs.md | -Get started with the Azure Cosmos DB client library for JavaScript to create databases, containers, and items within your account. Without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb). Follow these steps to install the package and try out example code for basic tasks. +Get started with the Azure Cosmos DB client library for JavaScript to create databases, containers, and items within your account. Follow these steps to install the package and try out example code for basic tasks. > [!NOTE] > The [example code snippets](https://github.com/Azure-Samples/cosmos-db-sql-api-javascript-samples) are available on GitHub as a Node.js project. Get started with the Azure Cosmos DB client library for JavaScript to create dat - An Azure account with an active subscription. - No Azure subscription? You can [try Azure Cosmos DB free](../try-free.md) with no credit card required.-- [Node.js 10 or later](https://dotnet.microsoft.com/download)+- [Node.js LTS](https://nodejs.org/en/download/) - [Azure Command-Line Interface (CLI)](/cli/azure/) or [Azure PowerShell](/powershell/azure/) ### Prerequisite check Get started with the Azure Cosmos DB client library for JavaScript to create dat This section walks you through creating an Azure Cosmos account and setting up a project that uses Azure Cosmos DB SQL API client library for JavaScript to manage resources. -### Create an Azure Cosmos DB account +### <a id="create-account"></a>Create an Azure Cosmos DB account > [!TIP] > No Azure subscription? You can [try Azure Cosmos DB free](../try-free.md) with no credit card required. If you create an account using the free trial, you can safely skip ahead to the [Create a new JavaScript project](#create-a-new-javascript-project) section. [!INCLUDE [Create resource tabbed conceptual - ARM, Azure CLI, PowerShell, Portal](includes/create-resources.md)] -### Configure environment variables - ### Create a new JavaScript project This section walks you through creating an Azure Cosmos account and setting up a :::code language="javascript" source="~/cosmos-db-sql-api-javascript-samples/001-quickstart/package.json" highlight="6"::: -### Install the package +### Install packages -1. Add the [@azure/cosmos](https://www.npmjs.com/package/@azure/cosmos) npm package to the Node.js project. +### [Passwordless (Recommended)](#tab/passwordless) ++1. Add the [@azure/cosmos](https://www.npmjs.com/package/@azure/cosmos) and [@azure/identity](https://www.npmjs.com/package/@azure/identity) npm packages to the Node.js project. ```bash npm install @azure/cosmos+ npm install @azure/identity ``` 1. Add the [dotenv](https://www.npmjs.com/package/dotenv) npm package to read environment variables from a `.env` file. This section walks you through creating an Azure Cosmos account and setting up a npm install dotenv ``` -### Create local development environment files +### [Connection String](#tab/connection-string) -1. Create a `.gitignore` file and add the following value to ignore your environment file and your node_modules. This configuration file ensures that only secure and relevant files are checked into source code. +1. Add the [@azure/cosmos](https://www.npmjs.com/package/@azure/cosmos) npm package to the Node.js project. - ```text - .env - node_modules + ```bash + npm install @azure/cosmos ``` -1. 
Create a `.env` file with the following variables: +1. Add the [dotenv](https://www.npmjs.com/package/dotenv) npm package to read environment variables from a `.env` file. - ```text - COSMOS_ENDPOINT= - COSMOS_KEY= + ```bash + npm install dotenv ``` -### Create a code file --Create an `index.js` and add the following boilerplate code to the file to read environment variables: ---### Add dependency to client library --Add the following code at the end of the `index.js` file to include the required dependency to programmatically access Cosmos DB. ---### Add environment variables to code file --Add the following code at the end of the `index.js` file to include the required environment variables. The endpoint and key were found at the end of the [account creation steps](#create-an-azure-cosmos-db-account). -+ -### Add variables for names -Add the following variables to manage unique database and container names and the [**partition key (`pk`)**](../partitioning-overview.md). +### Configure environment variables -In this example, we chose to add a timeStamp to the database and container in case you run this sample code more than once. ## Object model You'll use the following JavaScript classes to interact with these resources: - [Get an item](#get-an-item) - [Query items](#query-items) -The sample code described in this article creates a database named ``adventureworks`` with a container named ``products``. The ``products`` table is designed to contain product details such as name, category, quantity, and a sale indicator. Each product also contains a unique identifier. +The sample code described in this article creates a database named ``cosmicworks`` with a container named ``products``. The ``products`` table is designed to contain product details such as name, category, quantity, and a sale indicator. Each product also contains a unique identifier. For this sample code, the container will use the category as a logical partition key. ### Authenticate the client -In the `index.js`, add the following code to use the resource **endpoint** and **key** to authenticate to Cosmos DB. Define a new instance of the [``CosmosClient``](/javascript/api/@azure/cosmos/cosmosclient) class. ++## [Passwordless (Recommended)](#tab/passwordless) ++++#### Authenticate using DefaultAzureCredential +++From the project directory, open the *index.js* file. In your editor, add npm packages to work with Cosmos DB and authenticate to Azure. You'll authenticate to Cosmos DB for NoSQL using `DefaultAzureCredential` from the [`@azure/identity`](https://www.npmjs.com/package/@azure/identity) package. `DefaultAzureCredential` will automatically discover and use the account you signed-in with previously. +++Create an environment variable that specifies your Cosmos DB endpoint. +++Create constants for the database and container names. ++++Create a new client instance of the [`CosmosClient`](/javascript/api/@azure/cosmos/cosmosclient) class constructor with the `DefaultAzureCredential` object and the endpoint. +++## [Connection String](#tab/connection-string) ++From the project directory, open the *index.js* file. In your editor, import [@azure/cosmos](https://www.npmjs.com/package/@azure/cosmos) package to work with Cosmos DB and authenticate to Azure using the endpoint and key. +++Create environment variables that specify your Cosmos DB endpoint and key. +++Create constants for the database and container names. 
+++Create a new client instance of the [`CosmosClient`](/javascript/api/@azure/cosmos/cosmosclient) class constructor with the endpoint and key. :::code language="javascript" source="~/cosmos-db-sql-api-javascript-samples/001-quickstart/index.js" range="21-22"::: +++### <a id="create-and-query-the-database"></a> ### Create a database -Add the following code to use the [``CosmosClient.Databases.createDatabaseIfNotExists``](/javascript/api/@azure/cosmos/databases#@azure-cosmos-databases-createifnotexists) method to create a new database if it doesn't already exist. This method will return a reference to the existing or newly created database. +## [Passwordless (Recommended)](#tab/passwordless) +++The `@azure/cosmos` client library enables you to perform *data* operations using [Azure RBAC](../role-based-access-control.md). However, to authenticate *management* operations, such as creating and deleting databases, you must use RBAC through one of the following options: ++> - [Azure CLI scripts](manage-with-cli.md) +> - [Azure PowerShell scripts](manage-with-powershell.md) +> - [Azure Resource Manager templates (ARM templates)](manage-with-templates.md) +> - [Azure Resource Manager JavaScript client library](https://www.npmjs.com/package/@azure/arm-cosmosdb) ++The Azure CLI approach is used in for this quickstart and passwordless access. Use the [`az cosmosdb sql database create`](/cli/azure/cosmosdb/sql/database#az-cosmosdb-sql-database-create) command to create a Cosmos DB for NoSQL database. ++```azurecli +# Create a SQL API database ` +az cosmosdb sql database create ` + --account-name <cosmos-db-account-name> ` + --resource-group <resource-group-name> ` + --name cosmicworks +``` ++The command line to create a database is for PowerShell, shown on multiple lines for clarity. For other shell types, change the line continuation characters as appropriate. For example, for Bash, use backslash ("\\"). Or, remove the continuation characters and enter the command on one line. ++## [Connection String](#tab/connection-string) ++Add the following code to use the [``CosmosClient.Databases.createDatabaseIfNotExists``](/javascript/api/@azure/cosmos/databases#@azure-cosmos-databases-createifnotexists) method to create a new database if it doesn't already exist. This method returns a reference to the existing or newly created database. :::code language="javascript" source="~/cosmos-db-sql-api-javascript-samples/001-quickstart/index.js" range="24-26"::: ++ ### Create a container +## [Passwordless (Recommended)](#tab/passwordless) ++The `Microsoft.Azure.Cosmos` client library enables you to perform *data* operations using [Azure RBAC](../role-based-access-control.md). However, to authenticate *management* operations such as creating and deleting databases you must use RBAC through one of the following options: ++> - [Azure CLI scripts](manage-with-cli.md) +> - [Azure PowerShell scripts](manage-with-powershell.md) +> - [Azure Resource Manager templates (ARM templates)](manage-with-templates.md) +> - [Azure Resource Manager JavaScript client library](https://www.npmjs.com/package/@azure/arm-cosmosdb) ++The Azure CLI approach is used in this example. Use the [`az cosmosdb sql container create`](/cli/azure/cosmosdb/sql/container#az-cosmosdb-sql-container-create) command to create a Cosmos DB container. 
++```azurecli +# Create a SQL API container +az cosmosdb sql container create ` + --account-name <cosmos-db-account-name> ` + --resource-group <resource-group-name> ` + --database-name cosmicworks ` + --partition-key-path "/categoryId" ` + --name products +``` ++The command line to create a container is for PowerShell, shown on multiple lines for clarity. For other shell types, change the line continuation characters as appropriate. For example, for Bash, use backslash ("\\"). Or, remove the continuation characters and enter the command on one line. For Bash, you'll also need to add `MSYS_NO_PATHCONV=1` before the command so that Bash deals with the partition key parameter correctly. ++After the resources have been created, use classes from the `@azure/cosmos` client library to connect to and query the database. ++## [Connection String](#tab/connection-string) + Add the following code to create a container with the [``Database.Containers.createContainerIfNotExists``](/javascript/api/@azure/cosmos/containers#@azure-cosmos-containers-createifnotexists) method. The method returns a reference to the container. :::code language="javascript" source="~/cosmos-db-sql-api-javascript-samples/001-quickstart/index.js" range="28-35"::: ++ ### Create an item Add the following code to provide your data set. Each _product_ has a unique ID, name, category name (used as partition key) and other fields. The partition key is specific to a container. In this Contoso Products container ### Query items -Add the following code to query for all items that match a specific filter. Create a [parameterized query expression](/javascript/api/@azure/cosmos/sqlqueryspec) then call the [``Container.Items.query``](/javascript/api/@azure/cosmos/items#@azure-cosmos-items-query) method. This method returns a [``QueryIterator``](/javascript/api/@azure/cosmos/queryiterator) that will manage the pages of results. Then, use a combination of ``while`` and ``for`` loops to [``fetchNext``](/javascript/api/@azure/cosmos/queryiterator#@azure-cosmos-queryiterator-fetchnext) page of results as a [``FeedResponse``](/javascript/api/@azure/cosmos/feedresponse) and then iterate over the individual data objects. +Add the following code to query for all items that match a specific filter. Create a [parameterized query expression](/javascript/api/@azure/cosmos/sqlqueryspec), then call the [``Container.Items.query``](/javascript/api/@azure/cosmos/items#@azure-cosmos-items-query) method. This method returns a [``QueryIterator``](/javascript/api/@azure/cosmos/queryiterator) that manages the pages of results. Then, use a combination of ``while`` and ``for`` loops to [``fetchNext``](/javascript/api/@azure/cosmos/queryiterator#@azure-cosmos-queryiterator-fetchnext) a page of results as a [``FeedResponse``](/javascript/api/@azure/cosmos/feedresponse) and then iterate over the individual data objects. The query is programmatically composed to `SELECT * FROM todo t WHERE t.partitionKey = 'Bikes, Touring Bikes'`. |
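Putting the quickstart pieces together, a condensed sketch of the client creation and query loop described above might look like the following. The `aadCredentials` option and the `categoryName` filter property are assumptions for illustration; the sample itself composes its query against the partition key value `'Bikes, Touring Bikes'`.

```javascript
import { CosmosClient } from "@azure/cosmos";
import { DefaultAzureCredential } from "@azure/identity";

// Passwordless option; for the connection-string option pass { endpoint, key } instead
const client = new CosmosClient({
  endpoint: process.env.COSMOS_ENDPOINT,
  aadCredentials: new DefaultAzureCredential(),
});

const container = client.database("cosmicworks").container("products");

// Parameterized query expression (SqlQuerySpec)
const querySpec = {
  query: "SELECT * FROM products p WHERE p.categoryName = @category",
  parameters: [{ name: "@category", value: "Bikes, Touring Bikes" }],
};

// QueryIterator: fetch one page (FeedResponse) at a time and iterate its items
const iterator = container.items.query(querySpec);
while (iterator.hasMoreResults()) {
  const { resources } = await iterator.fetchNext();
  for (const item of resources) {
    console.log(`${item.id}: ${item.name}`);
  }
}
```
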
cosmos-db | Quickstart Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-python.md | The `Microsoft.Azure.Cosmos` client library enables you to perform *data* operat > - [Azure CLI scripts](manage-with-cli.md) > - [Azure PowerShell scripts](manage-with-powershell.md) > - [Azure Resource Manager templates (ARM templates)](manage-with-templates.md)-> - [Azure Resource Manager .NET client library](https://www.nuget.org/packages/Azure.ResourceManager.CosmosDB/) +> - [Azure Resource Manager Python client library](https://pypi.org/project/azure-mgmt-cosmosdb) The Azure CLI approach is used for this quickstart and passwordless access. Use the [`az cosmosdb sql database create`](/cli/azure/cosmosdb/sql/database#az-cosmosdb-sql-database-create) command to create a Cosmos DB for NoSQL database. The `Microsoft.Azure.Cosmos` client library enables you to perform *data* operat > - [Azure CLI scripts](manage-with-cli.md) > - [Azure PowerShell scripts](manage-with-powershell.md) > - [Azure Resource Manager templates (ARM templates)](manage-with-templates.md)-> - [Azure Resource Manager .NET client library](https://www.nuget.org/packages/Azure.ResourceManager.CosmosDB/) +> - [Azure Resource Manager Python client library](https://pypi.org/project/azure-mgmt-cosmosdb) The Azure CLI approach is used in this example. Use the [`az cosmosdb sql container create`](/cli/azure/cosmosdb/sql/container#az-cosmosdb-sql-container-create) command to create a Cosmos DB container. |
cosmos-db | Partitioning Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/partitioning-overview.md | Azure Cosmos DB uses partitioning to scale individual containers in a database t For example, a container holds items. Each item has a unique value for the `UserID` property. If `UserID` serves as the partition key for the items in the container and there are 1,000 unique `UserID` values, 1,000 logical partitions are created for the container. -In addition to a partition key that determines the item's logical partition, each item in a container has an *item ID* (unique within a logical partition). Combining the partition key and the *item ID* creates the item's *index*, which uniquely identifies the item. [Choosing a partition key](#choose-partitionkey) is an important decision that will affect your application's performance. +In addition to a partition key that determines the item's logical partition, each item in a container has an *item ID* (unique within a logical partition). Combining the partition key and the *item ID* creates the item's *index*, which uniquely identifies the item. [Choosing a partition key](#choose-partitionkey) is an important decision that affects your application's performance. -> -> [!VIDEO https://aka.ms/docs.partitioning-overview] +> [!VIDEO https://www.microsoft.com/videoplayer/embed/RWXbMV] -This article explains the relationship between logical and physical partitions. It also discusses best practices for partitioning and gives an in-depth view at how horizontal scaling works in Azure Cosmos DB. It's not necessary to understand these internal details to select your partition key but we have covered them so you have clarity on how Azure Cosmos DB works. +This article explains the relationship between logical and physical partitions. It also discusses best practices for partitioning and gives an in-depth view at how horizontal scaling works in Azure Cosmos DB. It's not necessary to understand these internal details to select your partition key but we've covered them so you have clarity on how Azure Cosmos DB works. ## Logical partitions A logical partition consists of a set of items that have the same partition key. A logical partition also defines the scope of database transactions. You can update items within a logical partition by using a [transaction with snapshot isolation](database-transactions-optimistic-concurrency.md). When new items are added to a container, new logical partitions are transparently created by the system. You don't have to worry about deleting a logical partition when the underlying data is deleted. -There is no limit to the number of logical partitions in your container. Each logical partition can store up to 20GB of data. Good partition key choices have a wide range of possible values. For example, in a container where all items contain a `foodGroup` property, the data within the `Beef Products` logical partition can grow up to 20 GB. [Selecting a partition key](#choose-partitionkey) with a wide range of possible values ensures that the container is able to scale. +There's no limit to the number of logical partitions in your container. Each logical partition can store up to 20 GB of data. Good partition key choices have a wide range of possible values. For example, in a container where all items contain a `foodGroup` property, the data within the `Beef Products` logical partition can grow up to 20 GB. 
[Selecting a partition key](#choose-partitionkey) with a wide range of possible values ensures that the container is able to scale. You can use Azure Monitor Alerts to [monitor if a logical partition's size is approaching 20 GB](how-to-alert-on-logical-partition-key-storage-size.md). ## Physical partitions -A container is scaled by distributing data and throughput across physical partitions. Internally, one or more logical partitions are mapped to a single physical partition. Typically smaller containers have many logical partitions but they only require a single physical partition. Unlike logical partitions, physical partitions are an internal implementation of the system and they are entirely managed by Azure Cosmos DB. +A container is scaled by distributing data and throughput across physical partitions. Internally, one or more logical partitions are mapped to a single physical partition. Typically smaller containers have many logical partitions but they only require a single physical partition. Unlike logical partitions, physical partitions are an internal implementation of the system and they're entirely managed by Azure Cosmos DB. The number of physical partitions in your container depends on the following: The number of physical partitions in your container depends on the following: > [!NOTE] > Physical partitions are an internal implementation of the system and they are entirely managed by Azure Cosmos DB. When developing your solutions, don't focus on physical partitions because you can't control them. Instead, focus on your partition keys. If you choose a partition key that evenly distributes throughput consumption across logical partitions, you will ensure that throughput consumption across physical partitions is balanced. -There is no limit to the total number of physical partitions in your container. As your provisioned throughput or data size grows, Azure Cosmos DB will automatically create new physical partitions by splitting existing ones. Physical partition splits do not impact your application's availability. After the physical partition split, all data within a single logical partition will still be stored on the same physical partition. A physical partition split simply creates a new mapping of logical partitions to physical partitions. +There's no limit to the total number of physical partitions in your container. As your provisioned throughput or data size grows, Azure Cosmos DB will automatically create new physical partitions by splitting existing ones. Physical partition splits do not impact your application's availability. After the physical partition split, all data within a single logical partition will still be stored on the same physical partition. A physical partition split simply creates a new mapping of logical partitions to physical partitions. Throughput provisioned for a container is divided evenly among physical partitions. A partition key design that doesn't distribute requests evenly might result in too many requests directed to a small subset of partitions that become "hot." Hot partitions lead to inefficient use of provisioned throughput, which might result in rate-limiting and higher costs. For **all** containers, your partition key should: * Be a property that has a value which does not change. If a property is your partition key, you can't update that property's value. 
-* Should only contain `String` values - or numbers should ideally be converted into a `String`, if there is any chance that they are outside the boundaries of double precision numbers according to [IEEE 754 binary64](https://www.rfc-editor.org/rfc/rfc8259#ref-IEEE754). The [Json specification](https://www.rfc-editor.org/rfc/rfc8259#section-6) calls out the reasons why using numbers outside of this boundary in general is a bad practice due to likely interoperability problems. These concerns are especially relevant for the partition key column, because it is immutable and requires data migration to change it later. +* Should only contain `String` values - or numbers should ideally be converted into a `String`, if there's any chance that they are outside the boundaries of double precision numbers according to [IEEE 754 binary64](https://www.rfc-editor.org/rfc/rfc8259#ref-IEEE754). The [Json specification](https://www.rfc-editor.org/rfc/rfc8259#section-6) calls out the reasons why using numbers outside of this boundary in general is a bad practice due to likely interoperability problems. These concerns are especially relevant for the partition key column, because it's immutable and requires data migration to change it later. * Have a high cardinality. In other words, the property should have a wide range of possible values. If your container could grow to more than a few physical partitions, then you sh ## Use item ID as the partition key -If your container has a property that has a wide range of possible values, it is likely a great partition key choice. One possible example of such a property is the *item ID*. For small read-heavy containers or write-heavy containers of any size, the *item ID* is naturally a great choice for the partition key. +If your container has a property that has a wide range of possible values, it's likely a great partition key choice. One possible example of such a property is the *item ID*. For small read-heavy containers or write-heavy containers of any size, the *item ID* is naturally a great choice for the partition key. The system property *item ID* exists in every item in your container. You may have other properties that represent a logical ID of your item. In many cases, these are also great partition key choices for the same reasons as the *item ID*. The *item ID* is a great partition key choice for the following reasons: * There are a wide range of possible values (one unique *item ID* per item).-* Because there is a unique *item ID* per item, the *item ID* does a great job at evenly balancing RU consumption and data storage. +* Because there's a unique *item ID* per item, the *item ID* does a great job at evenly balancing RU consumption and data storage. * You can easily do efficient point reads since you'll always know an item's partition key if you know its *item ID*. Some things to consider when selecting the *item ID* as the partition key include: |
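To make the point-read benefit described above concrete, here's a minimal sketch using the JavaScript SDK (`@azure/cosmos`); the endpoint, key, database, container, and item ID values are placeholders:

```javascript
import { CosmosClient } from "@azure/cosmos";

// Placeholder configuration; in practice read these from your environment
const client = new CosmosClient({
  endpoint: process.env.COSMOS_ENDPOINT,
  key: process.env.COSMOS_KEY,
});
const container = client.database("<database>").container("<container>");

// When the item ID is also the partition key, the ID alone is enough for a point read
const id = "<item-id>";
const { resource: item } = await container.item(id, id).read();
console.log(item);
```
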
data-factory | Apply Dataops | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/apply-dataops.md | + + Title: Applying DataOps +description: Learn how to apply DataOps to Azure Data Factory. ++++++ Last updated : 02/17/2023+++# Applying DataOps to Azure Data Factory +++Azure Data Factory is Microsoft's Data Integration and ETL service in the cloud. This article provides guidance for DataOps in data factory. It isn't intended to be a complete tutorial on CI/CD, Git, or DevOps. Rather, you'll find the data factory team's guidance for achieving DataOps in the service with references to detailed implementation links for data factory deployment best practices, factory management, and governance. There's a resources section at the end of this article with links to tutorials. ++## What is DataOps? ++DataOps is a process that data organizations practice for collaborative data management intended to provide faster value to decision makers. ++Gartner provides this clear [definition of DataOps](https://www.gartner.com/en/information-technology/glossary/dataops): ++*DataOps is a collaborative data management practice focused on improving communication, integration and automation of data flows between data managers and data consumers across an organization. The goal of DataOps is to deliver value faster by creating predictable delivery and change management of data, data models, and related artifacts. DataOps uses technology to automate the design, deployment, and management of data delivery with appropriate levels of governance and uses metadata to improve the usability and value of data in a dynamic environment.* ++## How do you achieve DataOps in Azure Data Factory? ++Azure Data Factory provides data engineers with a visually based data pipeline paradigm for easily building cloud-scale data integration and ETL projects. Data factory relies on native integrations with mature version control tools such as GitHub and Azure DevOps, as well as the broader Azure ecosystem, to provide many built-in features that facilitate DataOps, including rich collaboration, governance, and artifact relationships. ++Specifically, once you bring your own GitHub or Azure DevOps repository into data factory, the service provides intuitive built-in UI options for common commands, such as commits, saving artifacts, and version control. The service also provides options to apply CI/CD and code check-in best practices, to protect the sanity and health of your production environment. ++### "Code" in Azure Data Factory ++All artifacts in Azure Data Factory, whether they're pipelines, linked services, triggers, etc., have corresponding "code" representations in JSON behind the visual UI integration. These artifacts act in compliance with [Azure Resource Manager templates](/azure-resource-manager/templates/overview.md) standards. You can find the code by clicking on the bracket icon on the top right of the canvas. Sample JSON "code" would look like this: ++++### Live mode and Git version control ++Every factory has one single source of truth: pipelines, linked services, and trigger definitions stored within the service. This source of truth is what the pipeline runs execute and what determines the behaviors of triggers. If you're in live mode, every time you publish, you directly modify the single source of truth. The following image shows what the **Publish All** button looks like in live mode. 
+++Live mode can be convenient for a single person working on side projects, as it allows developers to see immediate effects of their code changes. However, it's discouraged for a team of developers working on production-level work projects. The dangers include fat fingers, accidental deletions of critical resources, publishing of untested code, etc., just to name a few. +When working on mission-critical projects and platforms, consider bringing in a Git repository and using Git mode in data factory to streamline the development process. [Version control](source-control.md#version-control) and gated check-in capabilities of Git mode help you prevent most, if not all, of the accidents associated with touching live mode directly. ++> [!NOTE] +> In Git mode, the **Publish** or **Publish All** button will be replaced by **Save** or **Save All**, and your changes are committed to your own branches (not directly changing the live code bases). ++### Setting up GitHub and Azure DevOps integration ++In Azure Data Factory, it's highly recommended to store your repository in either GitHub or Azure DevOps. The service fully supports both methods and the choice of which repo to use depends upon your individual organizational standards. There are two methods to set up a new repository or to connect to an existing repository: using the Azure portal or from the Azure Data Factory Studio UI. ++#### Azure portal factory creation ++When you create a new data factory from the Azure portal, the default Git repo is Azure DevOps. You can also select GitHub as your repo and configure your repo settings. ++From the Azure portal, select the repo type and enter the repo and branch names to create a new factory natively integrated with Git. +++#### Enforcing use of Git with Azure Policy in your organization ++The use of Git in your Azure Data Factory projects is a highly recommended best practice. Even if you aren't implementing a complete CI/CD process, Git integration with ADF enables saving of your resource artifacts in your own sandbox environment (Git branch) where you can test your changes independently from the rest of the factory branches. You can [use Azure Policy to enforce use of Git](policy-reference.md) in your organization's factory. ++#### Azure Data Factory Studio ++After you create your data factory, you can also connect to your repo through the Azure Data Factory Studio. In the **Manage** tab, you'll see the option to configure your repo and repo settings. +++Through a guided process, you're directed through a series of steps to help you easily configure and connect to your repository of choice. Once fully set up, you can start to work collaboratively and save your resources to your repo. +++### Continuous integration and continuous delivery (CI/CD) ++CI/CD is a paradigm of code development where changes are inspected and tested as they move through various stages - development, test, staging, etc. After being reviewed and tested through each stage, they're finally published to live code bases in a production environment. ++Continuous integration (CI) is the practice of automatically testing and validating every time a developer makes a change to your codebase. Continuous delivery (CD) means that after Continuous Integration tests succeed, the changes are brought to the next stage continuously. ++As discussed briefly previously, "code" in Azure Data Factory takes the form of [Azure Resource Manager template](/azure-resource-manager/templates/overview.md) JSON. 
Hence, the changes going through the continuous integration and delivery (CI/CD) process comprise additions, deletions, and edits to JSON blobs. ++#### Pipeline runs in Azure Data Factory ++Before talking about CI/CD in Azure Data Factory, we first need to talk about how the service runs a pipeline. Before data factory runs a pipeline, it does the following things: ++- Pulls the latest published definition of the pipeline, and its associated assets, such as dataset(s), linked service(s), etc. +- Compiles it down to actions; if data factory executed it recently, it retrieves the actions from cached compilations. +- Runs the pipeline. ++Running the pipeline entails the following steps: ++- The service takes a point-in-time snapshot of the pipeline definition. +- Throughout the pipeline duration, the definitions don't change. +- Even if your pipelines run for a long time, they are unaffected by subsequent changes made after they were started. If you publish changes to the linked service, pipelines, etc., during the run, these do not affect in-progress runs. +- When you publish changes, subsequent runs started after publication use the updated definitions. ++#### Publishing in Azure Data Factory ++Regardless of whether you're deploying pipelines with [Azure Release Pipeline](continuous-integration-delivery-automate-azure-pipelines.md) to automate publishing, or with [manual deployment](continuous-integration-delivery-manual-promotion.md) of Resource Manager templates, in the backend, publishing is a series of create/update operations on [datasets](/rest/api/datafactory/datasets/create-or-update?tabs=HTTP), [linked services](/rest/api/datafactory/linked-services/create-or-update?tabs=HTTP), [pipelines](/rest/api/datafactory/pipelines/create-or-update?tabs=HTTP), and [triggers](/rest/api/datafactory/triggers/create-or-update?tabs=HTTP), for each of the artifacts. The effect is the same as making the underlying REST API calls directly. ++A few things follow from these actions: ++- All of these API calls are [synchronous](https://www.techtarget.com/whatis/definition/synchronous-asynchronous-API#:~:text=With%20synchronous%20communications%2C%20the%20parties,not%20respond%20for%20some%20time.), meaning that the call only returns when the publishing succeeds/fails. There won't be a state of partial deployment for the artifact. +- API calls are to a large extent sequential. We try to parallelize the calls, while maintaining the referential dependencies of the artifacts. The order of deployments is linked service -> dataset/integration runtime -> pipeline -> trigger. This order ensures that dependent artifacts can properly reference their dependencies. For example, pipelines depend on datasets and so data factory deploys them after datasets. +- Deployment of linked services, datasets, etc. is independent of the pipelines. There are situations where data factory updates linked services before a pipeline updates. We'll talk about this situation in the section [When to Stop a Trigger](#when-to-stop-a-trigger). +- Deployment won't delete artifacts from the factories. You need to explicitly call delete APIs for each artifact type ([pipeline](/rest/api/datafactory/pipelines/delete?tabs=HTTP), [dataset](/rest/api/datafactory/datasets/delete?tabs=HTTP), [linked service](/rest/api/datafactory/linked-services/delete?tabs=HTTP), etc.) to clean up a factory. Refer to the sample post-deployment script from Azure Data Factory for an example. 
+- Even if you haven't touched a pipeline, dataset, or linked service, it still invokes a quick update API call to the factory. ++##### Publishing triggers ++- Triggers have states: **started** or **stopped**. +- You can't make changes to a trigger in **started** mode. You need to stop a trigger before publishing any changes. +- You can invoke the [Create or Update Trigger API](/rest/api/datafactory/triggers/create-or-update?tabs=HTTP) on a trigger in **started** mode. + - If the payload changes, the API fails. + - If the payload remains unchanged, the API succeeds. +- This behavior has a profound impact on when to stop a trigger. ++#### When to stop a trigger ++When it comes to deployment into a production data factory, with live triggers kicking off pipeline runs all the time, the question becomes "Should we stop them?" ++The short answer is that only in the following few scenarios should you consider stopping the trigger: ++- You need to stop the trigger if you're updating the trigger definitions, including fields such as end date, frequency, and pipeline association. +- It's recommended to stop the trigger if you're updating the datasets or linked services referenced in a live pipeline. For example, if you're rotating the credentials for SQL Server. +- You may choose to stop the trigger if the associated pipeline is throwing errors and failing and burdening your servers. ++Here are a few points to consider regarding stopping triggers: ++- As explained in the section [Pipeline Runs in Azure Data Factory](#pipeline-runs-in-azure-data-factory), when a trigger kicks off a pipeline run, it takes a snapshot of the pipeline, dataset, integration runtime, and linked service definitions. If the pipeline runs before the changes populate into the backend, the trigger starts a run with the old version. In most cases, this should be fine. +- As explained in the section [Publishing Triggers](#publishing-triggers), when a trigger is in the **started** state, it can't be updated. Therefore, if you need to change details about the trigger definition, stop the trigger before publishing the changes. +- As explained in the section [Publishing in Azure Data Factory](#publishing-in-azure-data-factory), modifications to the datasets or linked services publish before pipeline changes. To ensure the pipeline runs use the correct credentials and communicate with the right servers, we recommend that you stop the associated trigger too. ++#### Preparing "code" changes ++We recommend that you follow these best practices for pull requests. ++- Each developer should work on their own individual branches, and at the end of the day, create pull requests to the main branch of the repository. See tutorials on pull requests in [GitHub](https://docs.github.com/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/creating-a-pull-request) and [DevOps](/devops/repos/git/pull-requests.md?view=azure-devops&tabs=browser&preserve-view=true). +- When gatekeepers approve the pull requests and merge the changes into the main branch, the CI/CD process can start. There are two suggested methods to promote changes throughout environments: [automated](#automated-deployment-of-changes) and [manual](#manual-deployment-of-changes). 
+- Once you're ready to kick off CI/CD pipelines, you can generally do so using [Azure Pipeline Release](continuous-integration-delivery-improvements.md) or make deployments of specific individual pipelines using this [open-source utility from Azure Player](https://github.com/Azure-Player/azure.datafactory.tools). ++#### Automated deployment of changes ++To help with automated deployments, we recommend using the Azure Data Factory utilities npm package. Using the npm package helps validate all the resources in a pipeline and generate the ARM templates for the user. ++To get started with the [Azure Data Factory utilities npm package](https://www.npmjs.com/package/@microsoft/azure-data-factory-utilities), refer to [Automated publishing for continuous integration and delivery](continuous-integration-delivery-improvements.md#package-overview). ++#### Manual deployment of changes ++After you've merged your branch back to the main collaboration branch in your Git repository, you can manually publish your changes to the live Azure Data Factory service. The service provides UI control over publishing from non-development factories with the **Disable publish (from ADF Studio)** option. +++### Selective deployment ++Selective deployment relies on a feature of GitHub and Azure DevOps, known as **cherry picking**. This feature allows you to deploy only certain changes but not others. For instance, one developer might have made changes to multiple pipelines, but for today's deployment, you may only want to deploy the changes to one of them. ++Follow the tutorials from Azure DevOps and GitHub to select the commits relevant to the pipeline you need. Ensure that all changes, including relevant changes made to the triggers, linked services, and dependencies associated with the pipeline, have been cherry picked. ++Once you've cherry picked the changes and merged them to the main collaboration branch, you can kick off the CI/CD process for the proposed changes. Additional information on how to hot fix, cherry pick, or utilize external frameworks for selective deployment is described in the [Automated testing](#automated-testing) section of this article. ++### Unit testing ++Unit testing, which focuses on testing individual components of the code, is an important part of the process of developing new pipelines or editing existing data factory artifacts. Data Factory allows for individual unit testing at both the pipeline and data flow artifact level by using the pipeline [debug feature](iterative-development-debugging.md?tabs=data-factory#debugging-a-pipeline). +++When developing data flows, you can gain insights into each individual transformation and code change by using the [data preview feature](concepts-data-flow-debug-mode.md?tabs=data-factory) to unit test your changes before deploying them to production. +++The service provides live and interactive feedback of your pipeline activities in the UI when debugging and unit testing in Azure Data Factory. ++For more advanced unit testing within your repository, refer to the blog [How to build unit tests for Azure Data Factory](https://towardsdatascience.com/how-to-build-unit-tests-for-azure-data-factory-3aa11b36c7af). ++### Automated testing ++There are several tools available for automated testing that you can use with Azure Data Factory. Since the service stores its objects as JSON entities, it can be convenient to use the open-source .NET unit testing framework NUnit with Visual Studio. 
Refer to the post [Setup automated testing for Azure Data Factory](https://richardswinbank.net/adf/set_up_automated_testing_for_azure_data_factory), which provides an in-depth explanation of how to set up an automated unit testing environment for your factory. (Special thanks to Richard Swinbank for permission to use this blog.) ++Customers can also run **TEST** pipelines with **PowerShell** or the **Azure CLI** as part of the CI/CD process for pre- and post-deployment steps. ++A key strength of data factory lies in its parameterization of data sets. This feature empowers customers to run the same pipelines with different data sets to make sure their new development meets all source and destination requirements. +++### Other CI/CD frameworks for Azure Data Factory ++As described previously, built-in Git integration is available natively through the Azure Data Factory UI, including merging, branching, comparison, and publication. However, there are other useful CI/CD frameworks that are popular in the Azure community, which provide alternative mechanisms for similar capabilities. The Azure Data Factory Git methodology is based on ARM templates, whereas frameworks like [ADFTools by Kamil Nowinski](https://marketplace.visualstudio.com/items?itemName=SQLPlayer.DataFactoryTools) take a different approach by relying on individual JSON artifacts from your factory instead. Data engineers who are savvy in Azure DevOps and prefer to work in that environment (as opposed to the ARM-based UI approach that the service offers out of the box) may find that this framework works well for them and for common scenarios like partial deployments. This framework can also simplify handling of triggers when deploying into environments that have running trigger states. ++## Data governance in Azure Data Factory ++An important aspect of effective DataOps is data governance. For data integration ETL tools, providing data lineage and artifact relationships can provide important information for a data engineer to understand the impact of downstream changes. Data factory provides built-in related artifact views that constitute your factory implementation. +++Native integration with Microsoft Purview further provides lineage, impact analysis, and data cataloging. ++[Microsoft Purview](https://azure.microsoft.com/products/purview/) provides a unified data governance solution to help manage and govern your on-premises, multicloud, and software as a service (SaaS) data. It allows you to easily create a holistic, up-to-date map of your data landscape with automated data discovery, sensitive data classification, and end-to-end data lineage. These features enable data consumers to access valuable, trustworthy data. +++With native integration into your Purview Data Catalog, data factory enables easy search and discovery of data assets to use in your data integration pipelines across the full breadth of your organization's data estate. +++You can use the main search bar from the Azure Data Factory Studio to find data assets in your Purview catalog. +++## Next steps ++- [Automated publishing for CI/CD in Azure Data Factory](continuous-integration-delivery-improvements.md) +- [Source control in Azure Data Factory](source-control.md) +- [Azure Data Factory video library with helpful videos on using CI/CD in data factory](https://www.youtube.com/channel/UC2S0k7NeLcEm5_IhHUwpN0g/featured) +- [Hotfix data factory in Git](continuous-integration-delivery-hotfix-environment.md) ++ |
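To illustrate the "stop a trigger before publishing changes to it" guidance from the DataOps article above, here's a rough sketch that calls the documented Triggers Stop and Start REST operations from Node.js 18+ (which has a global `fetch`). All subscription, resource group, factory, and trigger names are placeholders, and error handling is omitted:

```javascript
import { DefaultAzureCredential } from "@azure/identity";

const credential = new DefaultAzureCredential();
const { token } = await credential.getToken("https://management.azure.com/.default");

// Placeholder resource identifiers
const triggerUri =
  "https://management.azure.com/subscriptions/<subscription-id>" +
  "/resourceGroups/<resource-group>/providers/Microsoft.DataFactory" +
  "/factories/<factory-name>/triggers/<trigger-name>";
const apiVersion = "api-version=2018-06-01";
const headers = { Authorization: `Bearer ${token}` };

// Stop the trigger: a trigger in the "started" state can't accept definition changes
await fetch(`${triggerUri}/stop?${apiVersion}`, { method: "POST", headers });

// ...publish the updated trigger/pipeline definitions here...

// Restart the trigger once publishing has completed
await fetch(`${triggerUri}/start?${apiVersion}`, { method: "POST", headers });
```
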
data-factory | Apply Finops | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/apply-finops.md | + + Title: Applying FinOps +description: Learn how to apply FinOps to Azure Data Factory. ++++++ Last updated : 02/16/2023+++# Applying FinOps to Azure Data Factory ++This article describes how to apply FinOps in Azure Data Factory. ++## What is FinOps? ++The FinOps Foundation [Technical Advisory Council](https://www.finops.org/about/technical-advisory-council/) defines FinOps as follows: ++*FinOps is an evolving cloud financial management discipline and cultural practice that enables organizations to get maximum business value by helping engineering, finance, technology, and business teams to collaborate on data-driven spending decisions.* ++At its core, FinOps is a cultural practice. It's the way for teams to manage their cloud costs, where everyone takes ownership of their cloud usage supported by a central best-practices group. Cross-functional teams in Engineering, Finance, Product, etc. work together to enable faster product delivery, while at the same time gaining more financial control and predictability. +++## How to apply FinOps to Azure Data Factory ++Azure Data Factory is Microsoft's Data Integration and ETL (extract, transform, load) service in the cloud. To achieve effective budgeting and cost controls in data factory, we first review how to understand the pricing model. Next, it's important to analyze your spending at factory and pipeline levels. You can do this with data factory's built-in consumption reports and at the Azure subscription level using Azure cost management and cost analysis features. Lastly, we talk about setting spending limits on your Azure subscription to help provide cost controls. ++## Understanding Azure Data Factory pricing ++The following chart explains the general flow of calculating data factory pricing and shows how to use the [Azure Pricing Calculator](https://azure.microsoft.com/pricing/calculator/). Overall, the primary parts of data factory billing are these costs: **orchestration**, **execution**, **type of integration runtime (IR)**, **data movement (copy)**, and **data flows**. ++1. Check whether your data factory source or sink integration runtime uses a managed virtual network (VNET). If so, orchestration and execution are calculated using the Azure managed VNET IR. If not, proceed to the next step. +1. Confirm whether the source or sink uses the self-hosted integration runtime. If so, orchestration and execution are calculated by the self-hosted IR, and the total cost equals the sum of costs for both orchestration and execution. If not, orchestration and execution are calculated by the Azure IR. +1. For Azure IR and Azure managed VNET IR, confirm whether you use data flow. If so, the total cost equals the sum of costs for the data flow cluster, orchestration, and execution. Otherwise, the total cost is simply the sum of costs for orchestration and execution. +++## Example scenarios ++Let's look at several examples of common data factory scenarios and estimated costs associated with each workload. As we work through each example, keep these standards for data factory costs in mind: ++- When you review your bill, keep in mind that data factory rounds up to the minute for each activity duration (that is, 1 min 1 sec = 2-min billing). +- The following examples are based on common scenarios and show estimated costs. 
+- Other costs can be incurred from the data stores and external services in Azure that you utilize. +- Your actual costs can differ slightly from these examples based on the sales contract terms that you have with Microsoft. +- This link provides more examples: [Understanding Azure Data Factory pricing through examples](pricing-concepts.md). ++### Example: Copy data and transform with Azure Databricks hourly ++In this scenario, you want to copy data from AWS S3 to Azure Blob storage and transform the data with Azure Databricks on an hourly schedule for 8 hours per day for 30 days. ++The prices used in this example are hypothetical and aren't intended to imply actual exact pricing. Read/write and monitoring costs aren't shown since they're typically negligible and won't impact overall costs significantly. Activity runs are also rounded to the nearest 1000 in pricing calculator estimates. ++#### Configuration ++To accomplish the scenario, you need to create a pipeline with the following items: ++1. One copy activity with an input dataset for the data to be copied from AWS S3, and an output dataset for the data on Azure storage. +1. One Azure Databricks activity for the data transformation. +1. One scheduled trigger to execute the pipeline every hour. When you want to run a pipeline, you can either [trigger it immediately or schedule it](concepts-pipeline-execution-triggers.md). In addition to the pipeline itself, each trigger instance counts as a single activity run. ++#### Cost estimation ++Refer to the [Azure Pricing Calculator](https://azure.microsoft.com/pricing/calculator/) and work through the following steps: ++1. If neither the source nor the sink uses Azure managed VNET, go to Step 2. +1. If neither the source nor the sink uses a self-hosted IR, orchestration and execution are calculated using the Azure IR. +1. This scenario only uses the copy activity and an external activity; it doesn't use the data flow activity, so the total cost equals the sum of the costs for orchestration and execution. ++Estimated pricing for a month (8 hours per day for 30 days): ++|Types |Calculation | +||| +|**Orchestration** (activity run counts in thousands) |3 activity runs per execution (1 for trigger run, 2 for activity runs).<br>Activity run counts/month = 3 * 8 * 30 = **720**.<br>Activity run counts in thousand/month = **1** | +|**Execution** |1. Data integration unit (DIU) hours:<br> • DIU hours **per execution** = 10 min<br> • Default DIU setting = 4<br> • DIU hours/month = (10 min / 60 min) * 4 * 8 * 30 = **160**<br><br>2. External pipeline activity execution hours:<br> • Per execution time: 10 min<br> • External pipeline activity execution hours = (10 min / 60 min) * 8 * 30 = **40** | ++#### Pricing calculator example ++Total scenario pricing for 30 days: $41.01 +++### Example: Using mapping data flow debug for a normal workday ++This example shows mapping data flow debug costs for a typical workday for a data engineer. The prices used in the following example are hypothetical and aren't intended to imply exact actual pricing. Read/write and monitoring costs aren't shown since they're typically negligible and won't impact overall costs significantly. Activity runs are also rounded to the nearest 1000 in pricing calculator estimates. ++#### Azure Data Factory engineer ++A data factory engineer is responsible for designing, building, and testing mapping data flows every day. The engineer logs into the Azure Data Factory Studio in the morning and enables the debug mode for data flows. 
The default Time to Live (TTL) for debug sessions is 60 minutes. The engineer works throughout the day for 8 hours, so the debug session never expires. Therefore, the engineer's charges for the day are: ++8 hours * 8 compute-optimized cores * $0.193 per hour per core = **$12.35** ++## Budgeting ++When planning an Azure Data Factory implementation, it's important to understand and forecast your costs to help build a budget for your ETL and data integration projects. +++Select the consumption report button from the pipeline monitoring view to get a snapshot of the units billed for each run. ++On the monitoring page, you can manually use the consumption report for any pipeline run from a debug or manually triggered run, or even from an automated trigger run. +++The data factory pipeline consumption report provides the estimated units billed. You can run these tests using a debug execution of your pipeline on smaller datasets and then extrapolate your production budget from these estimates. ++The consumption report provides values in units. To derive a monetary estimate from this, multiply the units' value in this report by the price in your region based on the Azure pricing calculator. This produces an estimate for that pipeline execution. A best practice is to execute the pipeline several times with different datasets to get a baseline range of costs and use an average of those runs for your budgeting. ++## Azure cost optimization ++This section discusses cost optimization with Microsoft Cost Management, Azure Advisor, and reserved instances in data factory. ++### Microsoft cost management ++Microsoft Azure provides tools that help you track, optimize, and control your Azure spending. If your data factory spending is a top priority, the recommendation is to create a separate resource group in Azure for each data factory. This way, it's easy to build budgets, track your spending, and apply cost controls using [Microsoft Cost Management](/cost-management-billing/costs/cost-mgt-best-practices.md). +++Today organizations are working harder than ever to control spending and do more with less. You can use the Azure budgets feature to set spending limits on your Azure Data Factory v2 usage and the overall Azure resource group that you're using for data factory. +++From the [create budget window](/cost-management-billing/costs/tutorial-acm-create-budgets.md), use filters to choose either the Azure Data Factory service or a resource group. ++### Azure Advisor ++Another valuable tool for optimizing your Azure budget is Azure Advisor. With Azure Advisor, you can receive recommendations for reducing your overall Azure spending. This includes utilization of [Azure Data Factory's reserved instance pricing for reducing costs of mapping data flows](/advisor/advisor-reference-cost-recommendations.md#consider-data-factory-reserved-instance-to-save-over-your-on-demand-costs). You can also pay for Azure Data Factory charges with your [Azure pre-payment credit](plan-manage-costs.md#using-azure-prepayment-with-azure-data-factory). +++### Reserved instances in Azure Data Factory ++[Reserved instances](data-flow-reserved-capacity-overview.md) are available in Azure Data Factory for mapping data flows, which you can use to provide savings over the normal list price of data flows. With reserved instances, you pre-purchase 1-year or 3-year reservations at discount levels based on the length of the reservation. 
To see a customized view of your cost savings using reserved instances, navigate to the Azure portal and choose **Reservations**, then select data factory. From there, you'll pick the type of data flows that you typically use, and the Azure portal will then estimate your future savings based on your previous data factory utilization. +++Reserving mapping data flow capacity using reserved instances can provide an immediate discount on your overall data factory spending related directly to your data flow usage. ++## Tracking your data factory spending ++As you build out your data integration infrastructure in Azure, it's important to track your spending over time. There are several ways to track your data factory budget. By default, data factory provides an all-up summarized cost for your factory based on the different billing meters that the service utilizes. ++### How to use pipeline billing granular view ++You can ask data factory to provide a pipeline-level roll-up of your costs by setting the factory to use **[by pipeline](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/granular-billing-for-azure-data-factory/ba-p/3654600)** billing as an option under factory settings. +++This view gives you a breakdown of your data factory spend by each pipeline. This can be useful to attribute costs at a line-item level rather than a factory roll-up (which is the default). +++The pipeline-level view of your data factory bill is useful to attribute overall data factory costs to each pipeline resource. It's also useful to provide an easy-to-use mechanism to implement charge-back to users of your factory, both for internal organization consumption and external customer or partner usage. ++### How to use tags for pipeline cost attribution ++Another mechanism for tracking and attributing costs for your data factory resource is to use [tagging in your factory](plan-manage-costs.md). You can assign the same tag to your data factory and other Azure resources, putting them into the same category to view their consolidated billing. All SSIS (SQL Server Integration Services) IRs within the factory inherit this tag. Keep in mind that if you change your data factory tag, you need to stop and restart all SSIS IRs within the factory for them to inherit the new tag. For more details, refer to the [reconfigure SSIS IR section](manage-azure-ssis-integration-runtime.md#to-reconfigure-an-azure-ssis-ir). ++## Next steps ++- [Plan to manage costs for Azure Data Factory](plan-manage-costs.md) +- [Understanding Azure Data Factory pricing through examples](pricing-concepts.md) +- [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/) |
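To double-check the arithmetic in the pricing examples above, here's a small sketch that reproduces the estimates (the figures are the same hypothetical prices used in the examples):

```javascript
// Copy + Azure Databricks pipeline: 8 hourly runs per day for 30 days
const runsPerDay = 8;
const days = 30;

const activityRunsPerMonth = 3 * runsPerDay * days;          // 720 activity runs
const diuHoursPerMonth = (10 / 60) * 4 * runsPerDay * days;  // 160 DIU-hours (10 min/run, 4 DIUs)
const externalHoursPerMonth = (10 / 60) * runsPerDay * days; // 40 external activity hours

// Mapping data flow debug: 8 hours on 8 compute-optimized cores at $0.193 per core-hour
const debugCostPerDay = 8 * 8 * 0.193;                       // ~ $12.35

console.log({ activityRunsPerMonth, diuHoursPerMonth, externalHoursPerMonth, debugCostPerDay });
```
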
data-factory | Connector Oracle Cloud Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-oracle-cloud-storage.md | Title: Copy data from Oracle Cloud Storage -description: Learn about how to copy data from Oracle Cloud Storage to supported sink data stores using a Azure Data Factory or Synapse Analytics pipeline. +description: Learn about how to copy data from Oracle Cloud Storage to supported sink data stores using an Azure Data Factory or Synapse Analytics pipeline. |
data-factory | Parameterize Linked Services | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/parameterize-linked-services.md | All the linked service types are supported for parameterization. - Salesforce - Salesforce Service Cloud - SAP ODP+- SAP Table - SFTP - SharePoint Online List - Snowflake |
data-factory | Pricing Concepts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/pricing-concepts.md | -# Understanding Data Factory pricing through examples +# Understanding Azure Data Factory pricing through examples [!INCLUDE[appliesto-adf-xxx-md](includes/appliesto-adf-xxx-md.md)] This article explains and demonstrates the Azure Data Factory pricing model with detailed examples. You can also refer to the [Azure Pricing Calculator](https://azure.microsoft.com/pricing/calculator/) for more specific scenarios and to estimate your future costs to use the service. To understand how to estimate pricing for any scenario, not just the examples here, refer to the article [Plan and manage costs for Azure Data Factory](plan-manage-costs.md). -For more details about pricing in Azure Data Factory, refer to the [Data Pipeline Pricing and FAQ](https://azure.microsoft.com/pricing/details/data-factory/data-pipeline/). +For more details about pricing in Azure Data Factory, see the [Data Pipeline Pricing and FAQ](https://azure.microsoft.com/pricing/details/data-factory/data-pipeline/). ## Pricing examples-The prices used in these examples below are hypothetical and are not intended to imply exact actual pricing. Read/write and monitoring costs are not shown since they are typically negligible and will not impact overall costs significantly. Activity runs are also rounded to the nearest 1000 in pricing calculator estimates. +The prices used in the following examples are hypothetical and don't intend to imply exact actual pricing. Read/write and monitoring costs aren't shown since they're typically negligible and won't impact overall costs significantly. Activity runs are also rounded to the nearest 1000 in pricing calculator estimates. - [Copy data from AWS S3 to Azure Blob storage hourly for 30 days](pricing-examples-s3-to-blob.md) - [Copy data and transform with Azure Databricks hourly for 30 days](pricing-examples-copy-transform-azure-databricks.md) |
defender-for-cloud | Devops Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/devops-faq.md | If you're having issues with Defender for DevOps these frequently asked question - [Is it possible to block the developers committing code with exposed secrets](#is-it-possible-to-block-the-developers-committing-code-with-exposed-secrets) - [I am not able to configure Pull Request Annotations](#i-am-not-able-to-configure-pull-request-annotations) - [What are the programming languages that are supported by Defender for DevOps?](#what-are-the-programing-languages-that-are-supported-by-defender-for-devops) +- [I'm getting the "There's no CLI tool" error in Azure DevOps](#im-getting-the-theres-no-cli-tool-error-in-azure-devops) ### I'm getting an error while trying to connect The following languages are supported by Defender for DevOps: - JavaScript - TypeScript +### I'm getting the "There's no CLI tool" error in Azure DevOps ++If you receive the following error when running the pipeline in Azure DevOps: +"no such file or directory, scandir 'D:\a\_msdo\versions\microsoft.security.devops.cli'". ++This error occurs if the pipeline's YAML file is missing the `dotnet6` dependency. .NET 6 is required for the Microsoft Security DevOps extension to run. Include it as a task in your YAML file to eliminate the error. + +You can learn more about [Microsoft Security DevOps](https://marketplace.visualstudio.com/items?itemName=ms-securitydevops.microsoft-security-devops-azdevops). ## Next steps |
defender-for-cloud | File Integrity Monitoring Enable Log Analytics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/file-integrity-monitoring-enable-log-analytics.md | In this article, you'll learn how to: - [Monitor workspaces, entities, and files](#monitor-workspaces-entities-and-files) - [Compare baselines using File Integrity Monitoring](#compare-baselines-using-file-integrity-monitoring) +> [!NOTE] +> File Integrity Monitoring may create the following account on monitored SQL Servers: `NT Service\HealthService` \ +> If you delete the account, it will be automatically recreated. + ## Availability |Aspect|Details| Learn more about Defender for Cloud in: - [Setting security policies](tutorial-security-policy.md) - Learn how to configure security policies for your Azure subscriptions and resource groups. - [Managing security recommendations](review-security-recommendations.md) - Learn how recommendations help you protect your Azure resources.-- [Azure Security blog](https://azure.microsoft.com/blog/topics/security/) - Get the latest Azure security news and information.+- [Azure Security blog](https://azure.microsoft.com/blog/topics/security/) - Get the latest Azure security news and information. |
defender-for-iot | Release Notes Sentinel | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes-sentinel.md | For more information, see: - [Tutorial: Integrate Microsoft Sentinel and Microsoft Defender for IoT](../../sentinel/iot-solution.md?bc=%2fazure%2fdefender-for-iot%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fdefender-for-iot%2forganizations%2ftoc.json) - [Tutorial: Investigate and detect threats for IoT devices](../../sentinel/iot-advanced-threat-monitoring.md?bc=%2fazure%2fdefender-for-iot%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fdefender-for-iot%2forganizations%2ftoc.json). -## Version 2.1 +## Version 2.0.2 ++**Released**: February 2023 ++New features in this version include: ++- Improved analytics rules, with the new ability to have incidents created only when new alerts are triggered in Defender for IoT. When configuring your incident creation in Microsoft Sentinel, filter alerts by the **Is New** property. ++- An enhanced incident details page that includes Defender for IoT data, including a deep link to the Defender for IoT alert details page, the product name, remediation steps, and MITRE tactics and techniques. ++- Performance improvements for analytics rule queries. ++## Version 2.0.1 **Released**: September 2022 New features in this version include: For more information, see [Updates to the Microsoft Defender for IoT solution](whats-new.md#updates-to-the-microsoft-defender-for-iot-solution-in-microsoft-sentinels-content-hub). -## Version 2.0 +## Version 2.0.0 **Released**: September 2022 |
defender-for-iot | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/whats-new.md | Title: What's new in Microsoft Defender for IoT description: This article describes features available in Microsoft Defender for IoT, across both OT and Enterprise IoT networks, and both on-premises and in the Azure portal. Previously updated : 02/09/2023 Last updated : 02/22/2023 # What's new in Microsoft Defender for IoT? Features released earlier than nine months ago are described in the [What's new |Service area |Updates | |||-| **OT networks** | **Cloud features**: <br>- [Download updates from the Sites and sensors page (Public preview)](#download-updates-from-the-sites-and-sensors-page-public-preview) <br>- [Alerts page GA in the Azure portal](#alerts-ga-in-the-azure-portal) <br>- [Device inventory GA in the Azure portal](#device-inventory-ga-in-the-azure-portal) <br>- [Device inventory grouping enhancements (Public preview)](#device-inventory-grouping-enhancements-public-preview) <br><br> **Sensor version 22.2.3**: [Configure OT sensor settings from the Azure portal (Public preview)](#configure-ot-sensor-settings-from-the-azure-portal-public-preview) | +| **OT networks** | **Cloud features**: <br>- [Microsoft Sentinel: Microsoft Defender for IoT solution version 2.0.2](#microsoft-sentinel-microsoft-defender-for-iot-solution-version-202) <br>- [Download updates from the Sites and sensors page (Public preview)](#download-updates-from-the-sites-and-sensors-page-public-preview) <br>- [Alerts page GA in the Azure portal](#alerts-ga-in-the-azure-portal) <br>- [Device inventory GA in the Azure portal](#device-inventory-ga-in-the-azure-portal) <br>- [Device inventory grouping enhancements (Public preview)](#device-inventory-grouping-enhancements-public-preview) <br><br> **Sensor version 22.2.3**: [Configure OT sensor settings from the Azure portal (Public preview)](#configure-ot-sensor-settings-from-the-azure-portal-public-preview) | | **Enterprise IoT networks** | **Cloud features**: [Alerts page GA in the Azure portal](#alerts-ga-in-the-azure-portal) | +### Microsoft Sentinel: Microsoft Defender for IoT solution version 2.0.2 ++[Version 2.0.2](release-notes-sentinel.md#version-202) of the Microsoft Defender for IoT solution is now available in the [Microsoft Sentinel content hub](/azure/sentinel/sentinel-solutions-catalog), with improvements in analytics rules for incident creation, an enhanced incident details page, and performance improvements for analytics rule queries. ++For more information, see: ++- [Tutorial: Investigate and detect threats for IoT devices](iot-advanced-threat-monitoring.md) +- [Microsoft Defender for IoT solution versions in Microsoft Sentinel](release-notes-sentinel.md) + ### Download updates from the Sites and sensors page (Public preview) If you're running a local software update on your OT sensor or on-premises management console, the **Sites and sensors** page now provides a new wizard for downloading your update packages, accessed via the **Sensor update (Preview)** menu. |
digital-twins | How To Use 3D Scenes Studio | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-use-3d-scenes-studio.md | This will open the **New element** panel where you can fill in element informati A *primary twin* is the main digital twin counterpart for an element. You connect the element to a twin in your Azure Digital Twins instance so that the element can represent your twin and its data within the 3D visualization. -In the **New element** panel, the **Primary twin** dropdown list contains names of all the twins in the connected Azure Digital Twins instance. +In the **New element** panel, the **Primary twin** dropdown list contains names of all the twins in the connected Azure Digital Twins instance. Next to this field, you can select the **Inspect properties** icon to view the twin data, or the **Advanced twin search** icon to find other twins by querying property values. :::image type="content" source="media/how-to-use-3d-scenes-studio/new-element-primary-twin.png" alt-text="Screenshot of the New element options in 3D Scenes Studio. The Primary twin dropdown list is highlighted." lightbox="media/how-to-use-3d-scenes-studio/new-element-primary-twin.png"::: |
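The **Advanced twin search** mentioned above issues an Azure Digital Twins query over twin property values. As a rough illustration of the kind of query involved (not taken from this article), a comparable search can be run with the Azure CLI `azure-iot` extension; the instance name and the `Temperature` property below are placeholders.

```bash
# Illustrative only: query twins by a property value from the CLI.
# The instance name and property are placeholders, not values from the article.
az dt twin query \
  --dt-name <your-digital-twins-instance> \
  --query-command "SELECT * FROM DIGITALTWINS T WHERE IS_DEFINED(T.Temperature) AND T.Temperature > 50"
```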
digital-twins | Quickstart 3D Scenes Studio | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/quickstart-3d-scenes-studio.md | |
dms | Known Issues Azure Sql Migration Azure Data Studio | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-azure-sql-migration-azure-data-studio.md | WHERE STEP in (3,4,6); - **Recommendation**: Make sure the target database schema was created before starting the migration. For more information on how to deploy the target database schema, see [SQL Database Projects extension](/sql/azure-data-studio/extensions/sql-database-project-extension) +- **Message**: DatabaseSizeMoreThanMax: `The source database size <Source Database Size> exceeds the maximum allowed size of the target database <Target Database Size>. Check if the target database has enough space.` ++- **Cause**: The target database doesn't have enough space. ++- **Recommendation**: Make sure the target database schema was created before starting the migration. For more information on how to deploy the target database schema, see [SQL Database Projects extension](/sql/azure-data-studio/extensions/sql-database-project-extension). - **Message**: NoTablesFound: `Some of the source tables don't exist in the target database. Missing tables: <TableList>`. Migrating to SQL Server on Azure VMs by using the Azure SQL extension for Azure - For an overview and installation of the Azure SQL migration extension, see [Azure SQL migration extension for Azure Data Studio](/sql/azure-data-studio/extensions/azure-sql-migration-extension) - For more information on known limitations with Log Replay Service, see [Migrate databases from SQL Server to SQL Managed Instance by using Log Replay Service (Preview)](/azure/azure-sql/managed-instance/log-replay-service-migrate#limitations)-- For more information on SQL Server on Virtual machine resource limits, see [Checklist: Best practices for SQL Server on Azure VMs](/azure/azure-sql/virtual-machines/windows/performance-guidelines-best-practices-checklist)+- For more information on SQL Server on Virtual machine resource limits, see [Checklist: Best practices for SQL Server on Azure VMs](/azure/azure-sql/virtual-machines/windows/performance-guidelines-best-practices-checklist) |
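For the `DatabaseSizeMoreThanMax` error described above, it can help to check the source database's allocated size before starting the migration and compare it against the target's maximum size. A minimal sketch using `sqlcmd`; the server, credentials, and database name are placeholders, and the query is standard T-SQL over `sys.master_files`.

```bash
# Illustrative only: report the allocated size (MB) of the source database.
sqlcmd -S <source-sql-server> -d master -U <login> -P '<password>' -Q "
SELECT DB_NAME(database_id) AS database_name,
       SUM(size) * 8 / 1024 AS size_mb
FROM sys.master_files
WHERE database_id = DB_ID('<SourceDatabase>')
GROUP BY database_id;"
```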
energy-data-services | Concepts Csv Parser Ingestion | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/concepts-csv-parser-ingestion.md | Title: Microsoft Energy Data Services Preview csv parser ingestion workflow concept #Required; page title is displayed in search results. Include the brand. + Title: Microsoft Azure Data Manager for Energy Preview csv parser ingestion workflow concept #Required; page title is displayed in search results. Include the brand. description: Learn how to use CSV parser ingestion. #Required; article description that is displayed in search results. Previously updated : 08/18/2022 Last updated : 02/10/2023 # CSV parser ingestion concepts A CSV (comma-separated values) file is a comma delimited text file that is used to save data in a table structured format. -A CSV Parser [DAG](https://airflow.apache.org/docs/apache-airflow/1.10.12/concepts.html#dags) allows a customer to load data into Microsoft Energy Data Services Preview instance based on a custom schema that is, a schema that doesn't match the [OSDU™](https://osduforum.org) canonical schema. Customers must create and register the custom schema using the Schema service before loading the data. +A CSV Parser [DAG](https://airflow.apache.org/docs/apache-airflow/1.10.12/concepts.html#dags) allows a customer to load data into Microsoft Azure Data Manager for Energy Preview instance based on a custom schema that is, a schema that doesn't match the [OSDU™](https://osduforum.org) canonical schema. Customers must create and register the custom schema using the Schema service before loading the data. A CSV Parser DAG implements an ELT (Extract Load and Transform) approach to data loading, that is, data is first extracted from the source system in a CSV format, and it's loaded into the Microsoft Energy Data Service Preview instance. It could then be transformed to the [OSDU™](https://osduforum.org) canonical schema using a mapping service. [!INCLUDE [preview features callout](./includes/preview/preview-callout.md)] ## What does CSV ingestion do?-A CSV Parser DAG allows the customers to load the CSV data into the Microsoft Energy Data Services Preview instance. It parses each row of a CSV file and creates a storage metadata record. It performs `schema validation` to ensure that the CSV data conforms to the registered custom schema. It automatically performs `type coercion` on the columns based on the schema data type definition. It generates `unique id` for each row of the CSV record by combining source, entity type and a Base64 encoded string formed by concatenating natural key(s) in the data. It performs `unit conversion` by converting declared frame of reference information into appropriate persistable reference using the Unit service. It performs `CRS conversion` for spatially aware columns based on the Frame of Reference (FoR) information present in the schema. It creates `relationships` metadata as declared in the source schema. Finally, it `persists` the metadata record using the Storage service. +A CSV Parser DAG allows the customers to load the CSV data into the Microsoft Azure Data Manager for Energy Preview instance. It parses each row of a CSV file and creates a storage metadata record. It performs `schema validation` to ensure that the CSV data conforms to the registered custom schema. It automatically performs `type coercion` on the columns based on the schema data type definition. 
It generates `unique id` for each row of the CSV record by combining source, entity type and a Base64 encoded string formed by concatenating natural key(s) in the data. It performs `unit conversion` by converting declared frame of reference information into appropriate persistable reference using the Unit service. It performs `CRS conversion` for spatially aware columns based on the Frame of Reference (FoR) information present in the schema. It creates `relationships` metadata as declared in the source schema. Finally, it `persists` the metadata record using the Storage service. ## CSV parser ingestion components To execute the CSV Parser DAG workflow, the user must have a valid authorization The below workflow diagram illustrates the CSV Parser DAG workflow: :::image type="content" source="media/concepts-csv-parser-ingestion/csv-ingestion-sequence-diagram.png" alt-text="Screenshot of the CSV ingestion sequence diagram." lightbox="media/concepts-csv-parser-ingestion/csv-ingestion-sequence-diagram-expanded.png"::: -To execute the CSV Parser DAG workflow, the user must first create and register the schema using the workflow service. Once the schema is created, the user then uses the File service to upload the CSV file to the Microsoft Energy Data Services Preview instances, and also creates the storage record of file generic kind. The file service then provides a file ID to the user, which is used while triggering the CSV Parser workflow using the Workflow service. The Workflow service provides a run ID, which the user could use to track the status of the CSV Parser workflow run. +To execute the CSV Parser DAG workflow, the user must first create and register the schema using the workflow service. Once the schema is created, the user then uses the File service to upload the CSV file to the Microsoft Azure Data Manager for Energy Preview instances, and also creates the storage record of file generic kind. The file service then provides a file ID to the user, which is used while triggering the CSV Parser workflow using the Workflow service. The Workflow service provides a run ID, which the user could use to track the status of the CSV Parser workflow run. OSDU™ is a trademark of The Open Group. ## Next steps Advance to the CSV parser tutorial and learn how to perform a CSV parser ingestion > [!div class="nextstepaction"]-> [Tutorial: Sample steps to perform a CSV parser ingestion](tutorial-csv-ingestion.md) +> [Tutorial: Sample steps to perform a CSV parser ingestion](tutorial-csv-ingestion.md) |
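For the trigger step described above, the Workflow service is called with the file ID returned by the File service. The sketch below is illustrative only: the workflow name `csv_parser`, the URL, the token, and the `executionContext` fields are assumptions based on the standard OSDU Workflow API and may differ in your instance.

```bash
# Illustrative only: trigger the CSV parser workflow with the uploaded file's ID.
curl --location --request POST 'https://<instance>.energy.azure.com/api/workflow/v1/workflow/csv_parser/workflowRun' \
--header 'Authorization: Bearer <access_token>' \
--header 'data-partition-id: <data-partition-id>' \
--header 'Content-Type: application/json' \
--data-raw '{
    "executionContext": {
        "dataPartitionId": "<data-partition-id>",
        "id": "<file-id-from-file-service>"
    }
}'
```

The response includes a run ID that can be polled against the same Workflow API to track the status of the run.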
energy-data-services | Concepts Ddms | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/concepts-ddms.md | OSDU™ Technical Standard defines the following types of OSDU™ applic ## Who did we build this for? -**IT Developers** build systems to connect data to domain applications (internal and external ΓÇô for example, Petrel) which enables data managers to deliver projects to geoscientists. The DDMS suite on Microsoft Energy Data Services helps automate these workflows and eliminates time spent managing updates. +**IT Developers** build systems to connect data to domain applications (internal and external ΓÇô for example, Petrel) which enables data managers to deliver projects to geoscientists. The DDMS suite on Azure Data Manager for Energy Preview helps automate these workflows and eliminates time spent managing updates. -**Geoscientists** use domain applications for key Exploration and Production workflows such as Seismic interpretation and Well tie analysis. While these users won't directly interact with the DDMS, their expectations for data performance and accessibility will drive requirements for the DDMS in the Foundation Tier. Azure will enable geoscientists to stream cross domain data instantly in OSDU™ compatible applications (for example, Petrel) connected to Microsoft Energy Data Services. +**Geoscientists** use domain applications for key Exploration and Production workflows such as Seismic interpretation and Well tie analysis. While these users won't directly interact with the DDMS, their expectations for data performance and accessibility will drive requirements for the DDMS in the Foundation Tier. Azure will enable geoscientists to stream cross domain data instantly in OSDU™ compatible applications (for example, Petrel) connected to Azure Data Manager for Energy Preview. **Data managers** spend a significant number of time fulfilling requests for data retrieval and delivery. The Seismic, Wellbore, and Petrel Data Services enable them to discover and manage data in one place while tracking version changes as derivatives are created. ## Platform landscape -Microsoft Energy Data Services is an OSDU™ compatible product, meaning that its landscape and release model are dependent on OSDU™. +Azure Data Manager for Energy Preview is an OSDU™ compatible product, meaning that its landscape and release model are dependent on OSDU™. -Currently, OSDU™ certification and release process are not fully defined yet and this topic should be defined as a part of the Microsoft Energy Data Services Foundation Architecture. +Currently, OSDU™ certification and release process are not fully defined yet and this topic should be defined as a part of the Azure Data Manager for Energy Preview Foundation Architecture. -OSDU™ R3 M8 is the base for the scope of the Microsoft Energy Data Services Foundation Private Preview ΓÇô as a latest stable, tested version of the platform. +OSDU™ R3 M8 is the base for the scope of the Azure Data Manager for Energy Preview Foundation Private Preview ΓÇô as a latest stable, tested version of the platform. ## Learn more: OSDU™ DDMS community principles -[OSDU™ community DDMS Overview](https://community.opengroup.org/osdu/documentation/-/wikis/OSDU™-(C)/Design-and-Implementation/Domain-&-Data-Management-Services#ddms-requirements) provides an extensive overview of DDMS motivation and community requirements from a user, technical, and business perspective. These principles are extended to Microsoft Energy Data Services. 
+[OSDU™ community DDMS Overview](https://community.opengroup.org/osdu/documentation/-/wikis/OSDU™-(C)/Design-and-Implementation/Domain-&-Data-Management-Services#ddms-requirements) provides an extensive overview of DDMS motivation and community requirements from a user, technical, and business perspective. These principles are extended to Azure Data Manager for Energy Preview. ## DDMS requirements |
energy-data-services | Concepts Entitlements | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/concepts-entitlements.md | Title: Microsoft Energy Data Services Preview entitlement concepts #Required; page title is displayed in search results. Include the brand. -description: This article describes the various concepts regarding the entitlement services in Microsoft Energy Data Services Preview #Required; article description that is displayed in search results. + Title: Microsoft Azure Data Manager for Energy Preview entitlement concepts #Required; page title is displayed in search results. Include the brand. +description: This article describes the various concepts regarding the entitlement services in Azure Data Manager for Energy Preview #Required; article description that is displayed in search results. Previously updated : 08/19/2022 Last updated : 02/10/2023 Access management is a critical function for any service or resource. Entitlemen ## Groups -The entitlements service of Microsoft Energy Data Services allows you to create groups, and an entitlement group defines permissions on services/data sources for your Microsoft Energy Data Services instance. Users added by you to that group obtain the associated permissions. +The entitlements service of Azure Data Manager for Energy Preview allows you to create groups, and an entitlement group defines permissions on services/data sources for your Azure Data Manager for Energy Preview instance. Users added by you to that group obtain the associated permissions. The main motivation for entitlements service is data authorization, but the functionality enables three use cases: |
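As a rough illustration of the group concept described above, an entitlement group can be created through the Entitlements API. This is a sketch only, assuming the standard OSDU Entitlements v2 endpoint; the URL, token, partition, and group name are placeholders.

```bash
# Illustrative only: create an entitlement group in a data partition.
curl --location --request POST 'https://<instance>.energy.azure.com/api/entitlements/v2/groups' \
--header 'Authorization: Bearer <access_token>' \
--header 'data-partition-id: <data-partition-id>' \
--header 'Content-Type: application/json' \
--data-raw '{
    "name": "service.example.viewers",
    "description": "Viewers group for an example service"
}'
```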
energy-data-services | Concepts Index And Search | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/concepts-index-and-search.md | Title: Microsoft Energy Data Services Preview - index and search workflow concepts #Required; page title is displayed in search results. Include the brand. + Title: Microsoft Azure Data Manager for Energy Preview - index and search workflow concepts #Required; page title is displayed in search results. Include the brand. description: Learn how to use indexing and search workflows #Required; article description that is displayed in search results. Previously updated : 08/23/2022 Last updated : 02/10/2023 #Customer intent: As a developer, I want to understand indexing and search workflows so that I could search for ingested data in the platform. -# Microsoft Energy Data Services Preview indexing and search workflows +# Azure Data Manager for Energy Preview indexing and search workflows All data and associated metadata ingested into the platform are indexed to enable search. The metadata is accessible to ensure awareness even when the data isn't available. |
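To illustrate the search half of the workflow described above, indexed records can be queried through the Search API. A minimal sketch, assuming the standard OSDU Search v2 endpoint; the URL, token, and the example `kind` are placeholders.

```bash
# Illustrative only: return up to 10 indexed records of a given kind.
curl --location --request POST 'https://<instance>.energy.azure.com/api/search/v2/query' \
--header 'Authorization: Bearer <access_token>' \
--header 'data-partition-id: <data-partition-id>' \
--header 'Content-Type: application/json' \
--data-raw '{
    "kind": "osdu:wks:master-data--Wellbore:1.0.0",
    "query": "*",
    "limit": 10
}'
```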
energy-data-services | Concepts Manifest Ingestion | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/concepts-manifest-ingestion.md | Title: Microsoft Energy Data Services Preview manifest ingestion concepts #Required; page title is displayed in search results. Include the brand. + Title: Microsoft Azure Data Manager for Energy Preview manifest ingestion concepts #Required; page title is displayed in search results. Include the brand. description: This article describes manifest ingestion concepts #Required; article description that is displayed in search results. -Manifest-based file ingestion provides end-users and systems a robust mechanism for loading metadata about datasets in Microsoft Energy Data Services Preview instance. This metadata is indexed by the system and allows the end-user to search the datasets. +Manifest-based file ingestion provides end-users and systems a robust mechanism for loading metadata about datasets in Azure Data Manager for Energy Preview instance. This metadata is indexed by the system and allows the end-user to search the datasets. Manifest-based file ingestion is an opaque ingestion that do not parse or understand the file contents. It creates a metadata record based on the manifest and makes the record searchable. Any arrays are ordered. should there be interdependencies, the dependent items m ## Manifest-based file ingestion workflow -Microsoft Energy Data Services Preview instance has out-of-the-box support for Manifest-based file ingestion workflow. `Osdu_ingest` Airflow DAG is pre-configured in your instance. +Azure Data Manager for Energy Preview instance has out-of-the-box support for Manifest-based file ingestion workflow. `Osdu_ingest` Airflow DAG is pre-configured in your instance. ### Manifest-based file ingestion workflow components The Manifest-based file ingestion workflow consists of the following components: The Manifest-based file ingestion workflow consists of the following components: * **Search Service** is used to perform referential integrity check during the manifest ingestion process. ### Pre-requisites-Before running the Manifest-based file ingestion workflow, customers must ensure that the user accounts running the workflow have access to the core services (Search, Storage, Schema, Entitlement and Legal) and Workflow service (see [Entitlement roles](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/osdu-entitlement-roles.md) for details). As part of Microsoft Energy Data Services instance provisioning, the OSDU™ standard schemas and associated reference data are pre-loaded. Customers must ensure that the user account used for ingesting the manifests is included in appropriate owners and viewers ACLs. Customers must ensure that manifests are configured with correct legal tags, owners and viewers ACLs, reference data, etc. +Before running the Manifest-based file ingestion workflow, customers must ensure that the user accounts running the workflow have access to the core services (Search, Storage, Schema, Entitlement and Legal) and Workflow service (see [Entitlement roles](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/osdu-entitlement-roles.md) for details). As part of Azure Data Manager for Energy Preview instance provisioning, the OSDU™ standard schemas and associated reference data are pre-loaded. 
Customers must also ensure that the user account used for ingesting the manifests is included in the appropriate owners and viewers ACLs, and that the manifests are configured with the correct legal tags, owners and viewers ACLs, reference data, and so on. ### Workflow sequence The following illustration shows the Manifest-based file ingestion workflow: |
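Once the `Osdu_ingest` DAG described above is triggered, the run can be tracked through the Workflow service using the run ID it returns. A minimal sketch, assuming the standard OSDU Workflow API; the URL, token, and run ID are placeholders.

```bash
# Illustrative only: check the status of a manifest ingestion run.
curl --location --request GET 'https://<instance>.energy.azure.com/api/workflow/v1/workflow/Osdu_ingest/workflowRun/<run-id>' \
--header 'Authorization: Bearer <access_token>' \
--header 'data-partition-id: <data-partition-id>'
```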
energy-data-services | How To Add More Data Partitions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-add-more-data-partitions.md | Title: How to manage partitions -description: This is a how-to article on managing data partitions using the Microsoft Energy Data Services Preview instance UI. +description: This is a how-to article on managing data partitions using the Microsoft Azure Data Manager for Energy Preview instance UI. -In this article, you'll learn how to add data partitions to an existing Microsoft Energy Data Services instance. The concept of "data partitions" is picked from [OSDU™](https://osduforum.org/) where single deployment can contain multiple partitions. +In this article, you'll learn how to add data partitions to an existing Azure Data Manager for Energy Preview instance. The concept of "data partitions" is picked from [OSDU™](https://osduforum.org/) where single deployment can contain multiple partitions. Each partition provides the highest level of data isolation within a single deployment. All access rights are governed at a partition level. Data is separated in a way that allows for the partition's life cycle and deployment to be handled independently. (See [Partition Service](https://community.opengroup.org/osdu/platform/home/-/issues/31) in OSDU™) -You can create maximum five data partitions in one MEDS instance. Currently, in line with the data partition capabilities that are available in OSDU™, you can only create data partitions but can't delete or rename data existing data partitions. +You can create maximum five data partitions in one Azure Data Manager for Energy instance. Currently, in line with the data partition capabilities that are available in OSDU™, you can only create data partitions but can't delete or rename data existing data partitions. ## Create a data partition -1. Open the "Data Partitions" menu-item from left-panel of MEDS overview page. +1. Open the "Data Partitions" menu-item from left-panel of Azure Data Manager for Energy overview page. - [](media/how-to-add-more-data-partitions/dynamic-data-partitions-discovery-meds-overview-page.png#lightbox) + [](media/how-to-add-more-data-partitions/dynamic-data-partitions-discovery-meds-overview-page.png#lightbox) 2. Select "Create". - The page shows a table of all data partitions in your MEDS instance with the status of the data partition next to it. Clicking the "Create" option on the top opens a right-pane for next steps. + The page shows a table of all data partitions in your Azure Data Manager for Energy instance with the status of the data partition next to it. Clicking the "Create" option on the top opens a right-pane for next steps. [](media/how-to-add-more-data-partitions/start-create-data-partition.png#lightbox) 3. Choose a name for your data partition. - Each data partition name needs to be 1-10 characters long and be a combination of lowercase letters, numbers and hyphens only. The data partition name will be prepended with the name of the MEDS instance. Choose a name for your data partition and hit create. As soon as you hit create, the deployment of the underlying data partition resources such as Azure Cosmos DB and Azure Storage accounts is started. + Each data partition name needs to be 1-10 characters long and be a combination of lowercase letters, numbers and hyphens only. The data partition name will be prepended with the name of the Azure Data Manager for Energy instance. Choose a name for your data partition and hit create. 
As soon as you hit create, the deployment of the underlying data partition resources such as Azure Cosmos DB and Azure Storage accounts is started. >[!NOTE] >It generally takes 15-20 minutes to create a data partition. |
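The naming rule quoted above (1-10 characters, lowercase letters, numbers, and hyphens only) can be checked locally before submitting the form. A tiny, purely illustrative sketch:

```bash
# Illustrative only: validate a proposed data partition name locally.
name="dp1"
if [[ "$name" =~ ^[a-z0-9-]{1,10}$ ]]; then
  echo "'$name' is a valid data partition name"
else
  echo "'$name' is invalid: use 1-10 lowercase letters, numbers, or hyphens"
fi
```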
energy-data-services | How To Convert Segy To Ovds | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-convert-segy-to-ovds.md | Title: Microsoft Energy Data Services Preview - How to convert a segy to ovds file #Required; page title is displayed in search results. Include the brand. + Title: Microsoft Azure Data Manager for Energy Preview - How to convert a segy to ovds file #Required; page title is displayed in search results. Include the brand. description: This article explains how to convert a SGY file to oVDS file format #Required; article description that is displayed in search results. In this article, you will learn how to convert SEG-Y formatted data to the Open 1. Download and install [Postman](https://www.postman.com/) desktop app. 2. Import the [oVDS Conversions.postman_collection](https://community.opengroup.org/osdu/platform/pre-shipping/-/blob/main/R3-M9/Azure-M9/Services/DDMS/oVDS_Conversions.postman_collection.json) into Postman. All curl commands used below are added to this collection. Update your Environment file accordingly-3. Ensure that a Microsoft Energy Data Services Preview instance is created already +3. Ensure that an Azure Data Manager for Energy Preview instance is created already 4. Clone the **sdutil** repo as shown below: ```markdown |
energy-data-services | How To Convert Segy To Zgy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-convert-segy-to-zgy.md | Title: Microsoft Energy Data Service - How to convert segy to zgy file #Required; page title is displayed in search results. Include the brand. + Title: Microsoft Azure Data Manager for Energy Preview - How to convert segy to zgy file #Required; page title is displayed in search results. Include the brand. description: This article describes how to convert a SEG-Y file to a ZGY file #Required; article description that is displayed in search results. In this article, you will learn how to convert SEG-Y formatted data to the ZGY f 1. Download and install [Postman](https://www.postman.com/) desktop app. 2. Import the [oZGY Conversions.postman_collection](https://github.com/microsoft/meds-samples/blob/main/postman/SegyToZgyConversion%20Workflow%20using%20SeisStore%20R3%20CI-CD%20v1.0.postman_collection.json) into Postman. All curl commands used below are added to this collection. Update your Environment file accordingly-3. Ensure that your Microsoft Energy Data Services Preview instance is created already +3. Ensure that your Azure Data Manager for Energy Preview instance is created already 4. Clone the **sdutil** repo as shown below: ```markdown git clone https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-sdutil.git In this article, you will learn how to convert SEG-Y formatted data to the ZGY f }' ``` -6. Patch Subproject with the legal tag you created above. Recall that the format of the legal tag will be prefixed with the Microsoft Energy Data Services instance name and data partition name, so it looks like `<instancename>`-`<datapartitionname>`-`<legaltagname>`. +6. Patch Subproject with the legal tag you created above. Recall that the format of the legal tag will be prefixed with the Azure Data Manager for Energy instance name and data partition name, so it looks like `<instancename>`-`<datapartitionname>`-`<legaltagname>`. ```bash curl --location --request PATCH '<url>/seistore-svc/api/v3/subproject/tenant/<data-partition>/subproject/<subproject-name>' \ In this article, you will learn how to convert SEG-Y formatted data to the ZGY f ZGY conversion uses a manifest file that you'll upload to your storage account in order to run the conversion. This manifest file is created by using multiple JSON files and running a script. The JSON files for this process are stored [here](https://community.opengroup.org/osdu/platform/data-flow/ingestion/segy-to-zgy-conversion/-/tree/master/doc/sample-records/volve). For more information on Volve, such as where the dataset definitions come from, visit [their website](https://www.equinor.com/energy/volve-data-sharing). Complete the following steps in order to create the manifest file: * Clone the [repo](https://community.opengroup.org/osdu/platform/data-flow/ingestion/segy-to-zgy-conversion/-/tree/master/) and navigate to the folder doc/sample-records/volve- * Edit the values in the `prepare-records.sh` bash script. Recall that the format of the legal tag will be prefixed with the Microsoft Energy Data Services instance name and data partition name, so it looks like `<instancename>`-`<datapartitionname>`-`<legaltagname>`. + * Edit the values in the `prepare-records.sh` bash script. 
Recall that the legal tag name will be prefixed with the Azure Data Manager for Energy instance name and data partition name, so it looks like `<instancename>`-`<datapartitionname>`-`<legaltagname>`. * `DATA_PARTITION_ID=<your-partition-id>` * `ACL_OWNER=data.default.owners@<your-partition-id>.<your-tenant>.com` OSDU™ is a trademark of The Open Group. ## Next steps <!-- Add a context sentence for the following links --> > [!div class="nextstepaction"]-> [How to convert segy to ovds](./how-to-convert-segy-to-ovds.md) +> [How to convert segy to ovds](./how-to-convert-segy-to-ovds.md) |
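As a concrete illustration of the edits described above, the values in `prepare-records.sh` might look like the following for a hypothetical instance named `medstest` with a data partition `dp1`. These values, and the `LEGAL_TAG` variable name, are examples only; use the variables and placeholders defined in the script itself.

```bash
# Illustrative values only; substitute your own instance, partition, and tenant.
DATA_PARTITION_ID="medstest-dp1"
ACL_OWNER="data.default.owners@medstest-dp1.<your-tenant>.com"
# Legal tag format: <instancename>-<datapartitionname>-<legaltagname>
LEGAL_TAG="medstest-dp1-legal-tag"
```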
energy-data-services | How To Create Lockbox | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-create-lockbox.md | Title: Use Lockbox for Microsoft Energy Data Services + Title: Use Lockbox for Microsoft Azure Data Manager for Energy Preview description: Learn how to use Customer Lockbox as an interface to review and approve or reject access requests. -#Customer intent: As a developer, I want to set up Lockbox for Microsoft Energy Data Services. +#Customer intent: As a developer, I want to set up Lockbox for Azure Data Manager for Energy Preview. -# Use Customer Lockbox for Microsoft Energy Data Services +# Use Customer Lockbox for Azure Data Manager for Energy Preview -Microsoft Energy Data Services is the managed service offering for OSDU™. There are instances where Microsoft Support may need to access your data or compute layer during a support request. You can use Customer Lockbox as an interface to review and approve or reject these access requests. +Azure Data Manager for Energy Preview is the managed service offering for OSDU™. There are instances where Microsoft Support may need to access your data or compute layer during a support request. You can use Customer Lockbox as an interface to review and approve or reject these access requests. -This article covers how Customer Lockbox requests are initiated and tracked for Microsoft Energy Data Services. +This article covers how Customer Lockbox requests are initiated and tracked for Azure Data Manager for Energy Preview. -## Lockbox workflow for Microsoft Energy Data Services access +## Lockbox workflow for Azure Data Manager for Energy Preview access -The Microsoft Energy Data Services team at Microsoft typically does not access customer data. The team tries to resolve issues by using standard tools and telemetry. +The Azure Data Manager for Energy Preview team at Microsoft typically does not access customer data. The team tries to resolve issues by using standard tools and telemetry. If the issues cannot be resolved and require Microsoft Support to investigate, the team needs to request elevated access to the limited resources via Just in Time (JIT) portal (internal to Microsoft). The JIT portal validates permission level, provides multi-factor authentication, and includes approval from the Internal Microsoft Approvers. After the request for elevated access is approved via the JIT (just-in-time syst ## Prerequisites for access request Before you begin, make sure:-1. You have created a [Microsoft Energy Data Services instance](quickstart-create-microsoft-energy-data-services-instance.md). +1. You have created a [Azure Data Manager for Energy Preview instance](quickstart-create-microsoft-energy-data-services-instance.md). 2. You have enabled [Lockbox within the Azure portal](../security/fundamentals/customer-lockbox-overview.md). ## Track, approve request via Lockbox To track and approve a request to access customer data, follow these steps:-1. You raise an issue for Microsoft Energy Data Services using the Azure portal. The support engineer connects to Microsoft Energy Data Services via Support session and tries to troubleshoot the issue by using standard tools and telemetry. Let us say to mitigate the issue, the recommendation is to restart an AKS (Azure Kubernetes Service) cluster. +1. You raise an issue for Azure Data Manager for Energy Preview using the Azure portal. 
The support engineer connects to Azure Data Manager for Energy Preview via Support session and tries to troubleshoot the issue by using standard tools and telemetry. Let us say to mitigate the issue, the recommendation is to restart an AKS (Azure Kubernetes Service) cluster. 2. In this case, the support engineer creates a Lockbox request to access the AKS cluster for the given subscription. 3. When the request is created, usually the notification goes to the subscription owner, but you can also configure a group for notifications. 4. You can see the lockbox request in the Azure portal for your approval. To track and approve a request to access customer data, follow these steps: 6. Once the request is approved, the AKS clusters are accessible in the support session. 7. The support engineer restarts the AKS cluster to resolve the issue and then disables the support session or the session will expire in 4 to 8 hours. -If you have not enabled Lockbox, then your consent is not needed to access the compute or data layer of Microsoft Energy Data Services. +If you have not enabled Lockbox, then your consent is not needed to access the compute or data layer of Azure Data Manager for Energy Preview. ## Next steps <!-- Add a context sentence for the following links --> To learn more about data security and encryption > [!div class="nextstepaction"]-> [Data security and encryption in Microsoft Energy Data Services](how-to-manage-data-security-and-encryption.md) +> [Data security and encryption in Azure Data Manager for Energy Preview](how-to-manage-data-security-and-encryption.md) |
energy-data-services | How To Generate Refresh Token | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-generate-refresh-token.md | Title: How to generate a refresh token for Microsoft Azure Data Manager for Energy Preview #Required; page title is displayed in search results. Include the brand. description: This article describes how to generate a refresh token #Required; article description that is displayed in search results. In this article, you will learn how to generate a refresh token. The following a [!INCLUDE [preview features callout](./includes/preview/preview-callout.md)] ## Register your app with Azure AD-To use the Microsoft Energy Data Services Preview platform endpoint, you must register your app using the [Azure app registration portal](https://go.microsoft.com/fwlink/?linkid=2083908). You can use either a Microsoft account or a work or school account to register an app. +To use the Azure Data Manager for Energy Preview platform endpoint, you must register your app using the [Azure app registration portal](https://go.microsoft.com/fwlink/?linkid=2083908). You can use either a Microsoft account or a work or school account to register an app. To configure an app to use the OAuth 2.0 authorization code grant flow, save the following values when registering the app: |
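Once the values described above are saved, a refresh token is obtained by redeeming an authorization code against the Microsoft identity platform token endpoint with the `offline_access` scope included. A minimal sketch of that exchange; every value is a placeholder.

```bash
# Illustrative only: exchange an authorization code for access and refresh tokens.
curl --location --request POST 'https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/token' \
--header 'Content-Type: application/x-www-form-urlencoded' \
--data-urlencode 'client_id=<client-id>' \
--data-urlencode 'client_secret=<client-secret>' \
--data-urlencode 'grant_type=authorization_code' \
--data-urlencode 'code=<authorization-code>' \
--data-urlencode 'redirect_uri=<redirect-uri>' \
--data-urlencode 'scope=<client-id>/.default openid profile offline_access'
```

The `refresh_token` field in the JSON response is the value the article refers to.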
energy-data-services | How To Integrate Airflow Logs With Azure Monitor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-integrate-airflow-logs-with-azure-monitor.md | Title: Integrate airflow logs with Azure Monitor - Microsoft Energy Data Services Preview + Title: Integrate airflow logs with Azure Monitor - Microsoft Microsoft Azure Data Manager for Energy Preview description: This is a how-to article on how to start collecting Airflow Task logs in Azure Monitor, archiving them to a storage account, and querying them in Log Analytics workspace. -In this article, you'll learn how to start collecting Airflow Logs for your Microsoft Energy Data Services instances into Azure Monitor. This integration feature helps you debug Airflow DAG ([Directed Acyclic Graph](https://airflow.apache.org/docs/apache-airflow/stable/concepts/dags.html)) run failures. +In this article, you'll learn how to start collecting Airflow Logs for your Microsoft Azure Data Manager for Energy Preview instances into Azure Monitor. This integration feature helps you debug Airflow DAG ([Directed Acyclic Graph](https://airflow.apache.org/docs/apache-airflow/stable/concepts/dags.html)) run failures. ## Prerequisites In this article, you'll learn how to start collecting Airflow Logs for your Micr ## Enabling diagnostic settings to collect logs in a storage account-Every Microsoft Energy Data Services instance comes inbuilt with an Azure Data Factory-managed Airflow instance. We collect Airflow logs for internal troubleshooting and debugging purposes. Airflow logs can be integrated with Azure Monitor in the following ways: +Every Azure Data Manager for Energy Preview instance comes inbuilt with an Azure Data Factory-managed Airflow instance. We collect Airflow logs for internal troubleshooting and debugging purposes. Airflow logs can be integrated with Azure Monitor in the following ways: * Storage account * Log Analytics workspace To access logs via any of the above two options, you need to create a Diagnostic Follow the following steps to set up Diagnostic Settings: -1. Open Microsoft Energy Data Services' *Overview* page +1. Open Microsoft Azure Data Manager for Energy Preview' *Overview* page 1. Select *Diagnostic Settings* from the left panel [](media/how-to-integrate-airflow-logs-with-azure-monitor/azure-monitor-diagnostic-settings-overview-page.png#lightbox) After a diagnostic setting is created for archiving Airflow task logs into a sto ## Enabling diagnostic settings to integrate logs with Log Analytics Workspace -You can integrate Airflow logs with Log Analytics Workspace by using **Diagnostic Settings** under the left panel of your Microsoft Energy Data Services instance overview page. +You can integrate Airflow logs with Log Analytics Workspace by using **Diagnostic Settings** under the left panel of your Microsoft Azure Data Manager for Energy Preview instance overview page. [](media/how-to-integrate-airflow-logs-with-azure-monitor/creating-diagnostic-setting-choosing-destination-retention.png#lightbox) |
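The diagnostic setting described above can also be created from the command line. This is a sketch only: `az monitor diagnostic-settings create` is a real command, but the resource ID and the `AirflowTaskLogs` category name are assumptions, so list the supported categories for your instance first and adjust.

```bash
# Sketch only: confirm the available log categories, then create the setting.
RESOURCE_ID="<resource ID of your Azure Data Manager for Energy instance>"
WORKSPACE_ID="<resource ID of your Log Analytics workspace>"

az monitor diagnostic-settings categories list --resource "$RESOURCE_ID"

az monitor diagnostic-settings create \
  --name airflow-logs-to-workspace \
  --resource "$RESOURCE_ID" \
  --workspace "$WORKSPACE_ID" \
  --logs '[{"category": "AirflowTaskLogs", "enabled": true}]'
```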
energy-data-services | How To Integrate Elastic Logs With Azure Monitor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-integrate-elastic-logs-with-azure-monitor.md | Title: Integrate elastic logs with Azure Monitor - Microsoft Energy Data Services Preview + Title: Integrate elastic logs with Azure Monitor - Microsoft Azure Data Manager for Energy Preview description: This is a how-to article on how to start collecting ElasticSearch logs in Azure Monitor, archiving them to a storage account, and querying them in Log Analytics workspace. -In this article, you'll learn how to start collecting Elasticsearch logs for your Microsoft Energy Data Services instances in Azure Monitor. This integration feature is developed to help you debug Elasticsearch related issues inside Azure Monitor. +In this article, you'll learn how to start collecting Elasticsearch logs for your Azure Data Manager for Energy Preview instances in Azure Monitor. This integration feature is developed to help you debug Elasticsearch related issues inside Azure Monitor. ## Prerequisites In this article, you'll learn how to start collecting Elasticsearch logs for you ## Enabling Diagnostic Settings to collect logs in a storage account & a Log Analytics workspace-Every Microsoft Energy Data Services instance comes inbuilt with a managed Elasticsearch service. We collect Elasticsearch logs for internal troubleshooting and debugging purposes. You can get access to these logs by integrating Elasticsearch logs with Azure Monitor. +Every Azure Data Manager for Energy Preview instance comes inbuilt with a managed Elasticsearch service. We collect Elasticsearch logs for internal troubleshooting and debugging purposes. You can get access to these logs by integrating Elasticsearch logs with Azure Monitor. Each diagnostic setting has three basic parts: | Categories | Category of logs to send to each of the destinations. The set of categories will vary for each Azure service. Visit: [Supported Resource Log Categories](../azure-monitor/essentials/resource-logs-categories.md) | | Destinations | One or more destinations to send the logs. All Azure services share the same set of possible destinations. Each diagnostic setting can define one or more destinations but no more than one destination of a particular type. It should be a storage account, an Event Hubs namespace or an event hub. | -We support two destinations for your Elasticsearch logs from Microsoft Energy Data Services instance: +We support two destinations for your Elasticsearch logs from Azure Data Manager for Energy Preview instance: * Storage account * Log Analytics workspace We support two destinations for your Elasticsearch logs from Microsoft Energy Da ## Steps to enable diagnostic setting to collect Elasticsearch logs -1. Open *Microsoft Energy Data Services* overview page +1. Open *Azure Data Manager for Energy Preview* overview page 1. Select *Diagnostic Settings* from the left panel [](media/how-to-integrate-elastic-logs-with-azure-monitor/diagnostic-setting-overview-page.png#lightbox) Go back to the Diagnostic Settings page. 
You would now see a new diagnostic sett ## View Elasticsearch logs in Log Analytics workspace or download them as JSON files using storage account ### How to view & query logs in Log Analytics workspace-The editor in Log Analytics workspace support Kusto (KQL) queries through which you can easily perform complicated queries to extract interesting logs data from the Elasticsearch service running in your Microsoft Energy Data Services instance. +The editor in Log Analytics workspace support Kusto (KQL) queries through which you can easily perform complicated queries to extract interesting logs data from the Elasticsearch service running in your Azure Data Manager for Energy Preview instance. * Run queries and see Elasticsearch logs in the Log Analytics workspace. After collecting resource logs as explained in this article, there are more capa * Create a log query alert to be proactively notified when interesting data is identified in your log data. [Create a log query alert for an Azure resource](../azure-monitor/alerts/tutorial-log-alert.md) -* Start collecting logs from other sources such as Airflow in your Microsoft Energy Data Services instance. +* Start collecting logs from other sources such as Airflow in your Azure Data Manager for Energy Preview instance. [How to Integrate Airflow logs with Azure Monitor](how-to-integrate-airflow-logs-with-azure-monitor.md) |
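For the Log Analytics queries mentioned above, logs can also be pulled from the command line once the diagnostic setting is flowing. A sketch only; the table name in the KQL is a placeholder, since the actual table for Elasticsearch logs depends on the categories you enabled.

```bash
# Sketch only: <ElasticsearchLogTable> is a placeholder for the table that
# appears in your workspace after logs start arriving.
az monitor log-analytics query \
  --workspace "<log-analytics-workspace-guid>" \
  --analytics-query '<ElasticsearchLogTable> | where TimeGenerated > ago(1h) | take 20'
```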
energy-data-services | How To Manage Data Security And Encryption | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-manage-data-security-and-encryption.md | Title: Data security and encryption in Microsoft Energy Data Services Preview #Required; page title is displayed in search results. Include the brand. -description: Guide on security in Microsoft Energy Data Services and how to set up customer managed keys on Microsoft Energy Data Services #Required; article description that is displayed in search results. + Title: Data security and encryption in Microsoft Azure Data Manager for Energy Preview #Required; page title is displayed in search results. Include the brand. +description: Guide on security in Azure Data Manager for Energy Preview and how to set up customer managed keys on Azure Data Manager for Energy Preview #Required; article description that is displayed in search results. -#Customer intent: As a developer, I want to set up customer-managed keys on Microsoft Energy Data Services. +#Customer intent: As a developer, I want to set up customer-managed keys on Azure Data Manager for Energy Preview. -# Data security and encryption in Microsoft Energy Data Services Preview +# Data security and encryption in Azure Data Manager for Energy Preview -This article provides an overview of security features in Microsoft Energy Data Services Preview. It covers the major areas of [encryption at rest](../security/fundamentals/encryption-atrest.md), encryption in transit, TLS, https, microsoft-managed keys, and customer managed key. +This article provides an overview of security features in Azure Data Manager for Energy Preview. It covers the major areas of [encryption at rest](../security/fundamentals/encryption-atrest.md), encryption in transit, TLS, https, microsoft-managed keys, and customer managed key. ## Encrypt data at rest -Microsoft Energy Data Services Preview uses several storage resources for storing metadata, user data, in-memory data etc. The platform uses service-side encryption to automatically encrypt all the data when it is persisted to the cloud. Data encryption at rest protects your data to help you to meet your organizational security and compliance commitments. All data in Microsoft Energy Data Services is encrypted with Microsoft-managed keys by default. -In addition to Microsoft-managed key, you can use your own encryption key to protect the data in Microsoft Energy Data Services Preview. When you specify a customer-managed key, that key is used to protect and control access to the Microsoft-managed key that encrypts your data. +Azure Data Manager for Energy Preview uses several storage resources for storing metadata, user data, in-memory data etc. The platform uses service-side encryption to automatically encrypt all the data when it is persisted to the cloud. Data encryption at rest protects your data to help you to meet your organizational security and compliance commitments. All data in Azure Data Manager for Energy Preview is encrypted with Microsoft-managed keys by default. +In addition to Microsoft-managed key, you can use your own encryption key to protect the data in Azure Data Manager for Energy Preview. When you specify a customer-managed key, that key is used to protect and control access to the Microsoft-managed key that encrypts your data. 
## Encrypt data in transit -Microsoft Energy Data Services Preview supports Transport Layer Security (TLS 1.2) protocol to protect data when itΓÇÖs traveling between the cloud services and customers. TLS provides strong authentication, message privacy, and integrity (enabling detection of message tampering, interception, and forgery), interoperability, and algorithm flexibility. +Azure Data Manager for Energy Preview supports Transport Layer Security (TLS 1.2) protocol to protect data when itΓÇÖs traveling between the cloud services and customers. TLS provides strong authentication, message privacy, and integrity (enabling detection of message tampering, interception, and forgery), interoperability, and algorithm flexibility. -In addition to TLS, when you interact with Microsoft Energy Data Services, all transactions take place over HTTPS. +In addition to TLS, when you interact with Azure Data Manager for Energy Preview, all transactions take place over HTTPS. -## Set up Customer Managed Keys (CMK) for Microsoft Energy Data Services Preview instance +## Set up Customer Managed Keys (CMK) for Azure Data Manager for Energy Preview instance > [!IMPORTANT]-> You cannot edit CMK settings once the Microsoft Energy Data Services instance is created. +> You cannot edit CMK settings once the Azure Data Manager for Energy Preview instance is created. ### Prerequisites **Step 1- Configure the key vault** 1. You can use a new or existing key vault to store customer-managed keys. To learn more about Azure Key Vault, see [Azure Key Vault Overview](../key-vault/general/overview.md) and [What is Azure Key Vault](../key-vault/general/basic-concepts.md)?-2. Using customer-managed keys with Microsoft Energy Data Services requires that both soft delete and purge protection be enabled for the key vault. Soft delete is enabled by default when you create a new key vault and cannot be disabled. You can enable purge protection either when you create the key vault or after it is created. +2. Using customer-managed keys with Azure Data Manager for Energy Preview requires that both soft delete and purge protection be enabled for the key vault. Soft delete is enabled by default when you create a new key vault and cannot be disabled. You can enable purge protection either when you create the key vault or after it is created. 3. To learn how to create a key vault with the Azure portal, see [Quickstart: Create a key vault using the Azure portal](../key-vault/general/quick-create-portal.md). When you create the key vault, select Enable purge protection. [](media/how-to-manage-data-security-and-encryption/customer-managed-key-1-create-key-vault.png#lightbox) In addition to TLS, when you interact with Microsoft Energy Data Services, all t 3. It is recommended that the RSA key size is 3072, see [Configure customer-managed keys for your Azure Cosmos DB account | Microsoft Learn](../cosmos-db/how-to-setup-customer-managed-keys.md#generate-a-key-in-azure-key-vault). **Step 3 - Choose a managed identity to authorize access to the key vault**-1. When you enable customer-managed keys for an existing Microsoft Energy Data Services Preview instance you must specify a managed identity that will be used to authorize access to the key vault that contains the key. The managed identity must have permissions to access the key in the key vault. +1. 
When you enable customer-managed keys for an existing Azure Data Manager for Energy Preview instance you must specify a managed identity that will be used to authorize access to the key vault that contains the key. The managed identity must have permissions to access the key in the key vault. 2. You can create a [user-assigned managed identity](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md#create-a-user-assigned-managed-identity). ### Configure customer-managed keys for an existing account-1. Create a **Microsoft Energy Data Services** instance. +1. Create a **Azure Data Manager for Energy Preview** instance. 2. Select the **Encryption** tab. - [](media/how-to-manage-data-security-and-encryption/customer-managed-key-2-encryption-tab.png#lightbox) + [](media/how-to-manage-data-security-and-encryption/customer-managed-key-2-encryption-tab.png#lightbox) 3. In the encryption tab, select **Customer-managed keys (CMK)**. 4. For using CMK, you need to select the key vault where the key is stored. In addition to TLS, when you interact with Microsoft Energy Data Services, all t 12. Next, select ΓÇ£**Review+Create**ΓÇ¥ after completing other tabs. 13. Select the "**Create**" button. -14. A Microsoft Energy Data Services instance is created with customer-managed keys. +14. An Azure Data Manager for Energy Preview instance is created with customer-managed keys. 15. Once CMK is enabled you will see its status on the **Overview** screen. - [](media/how-to-manage-data-security-and-encryption/customer-managed-key-6-cmk-enabled-meds-overview.png#lightbox) + [](media/how-to-manage-data-security-and-encryption/customer-managed-key-6-cmk-enabled-meds-overview.png#lightbox) 16. You can navigate to **Encryption** and see that CMK enabled with user managed identity. - [](media/how-to-manage-data-security-and-encryption/customer-managed-key-7-cmk-disabled-meds-instance-created.png#lightbox) + [](media/how-to-manage-data-security-and-encryption/customer-managed-key-7-cmk-disabled-meds-instance-created.png#lightbox) |
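The key vault and managed identity prerequisites described above can also be prepared with the Azure CLI. A minimal sketch with illustrative resource names: a vault with purge protection, an RSA 3072 key, and a user-assigned identity.

```bash
# Illustrative names only; substitute your own resource group, vault, and identity.
az keyvault create \
  --name my-cmk-vault \
  --resource-group my-rg \
  --location eastus \
  --enable-purge-protection true

# RSA key of size 3072, as recommended above.
az keyvault key create --vault-name my-cmk-vault --name cmk-key --kty RSA --size 3072

# User-assigned managed identity that will be authorized against the key vault.
az identity create --name my-cmk-identity --resource-group my-rg
```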
energy-data-services | How To Manage Legal Tags | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-manage-legal-tags.md | Title: How to manage legal tags in Microsoft Energy Data Services Preview #Required; page title is displayed in search results. Include the brand. -description: This article describes how to manage legal tags in Microsoft Energy Data Services Preview #Required; article description that is displayed in search results. + Title: How to manage legal tags in Microsoft Azure Data Manager for Energy Preview #Required; page title is displayed in search results. Include the brand. +description: This article describes how to manage legal tags in Azure Data Manager for Energy Preview #Required; article description that is displayed in search results. Previously updated : 08/19/2022 Last updated : 02/20/2023 # How to manage legal tags-In this article, you'll know how to manage legal tags in your Microsoft Energy Data Services Preview instance. A Legal tag is the entity that represents the legal status of data in the Microsoft Energy Data Services Preview instance. Legal tag is a collection of properties that governs how data can be ingested and consumed. A legal tag is required for data to be [ingested](concepts-csv-parser-ingestion.md) into your Microsoft Energy Data Services Preview instance. It's also required for the [consumption](concepts-index-and-search.md) of the data from your Microsoft Energy Data Services Preview instance. Legal tags are defined at a data partition level individually. +In this article, you'll know how to manage legal tags in your Azure Data Manager for Energy Preview instance. A Legal tag is the entity that represents the legal status of data in the Azure Data Manager for Energy Preview instance. Legal tag is a collection of properties that governs how data can be ingested and consumed. A legal tag is required for data to be [ingested](concepts-csv-parser-ingestion.md) into your Azure Data Manager for Energy Preview instance. It's also required for the [consumption](concepts-index-and-search.md) of the data from your Azure Data Manager for Energy Preview instance. Legal tags are defined at a data partition level individually. -While in Microsoft Energy Data Services Preview instance, [entitlement service](concepts-entitlements.md) defines access to data for a given user(s), legal tag defines the overall access to the data across users. A user may have access to manage the data within a data partition however, they may not be able to do so-until certain legal requirements are fulfilled. +While in Azure Data Manager for Energy Preview instance, [entitlement service](concepts-entitlements.md) defines access to data for a given user(s), legal tag defines the overall access to the data across users. A user may have access to manage the data within a data partition however, they may not be able to do so-until certain legal requirements are fulfilled. [!INCLUDE [preview features callout](./includes/preview/preview-callout.md)] ## Create a legal tag-Run the below curl command in Azure Cloud Bash to create a legal tag for a given data partition of your Microsoft Energy Data Services Preview instance. +Run the below curl command in Azure Cloud Bash to create a legal tag for a given data partition of your Azure Data Manager for Energy Preview instance. 
```bash curl --location --request POST 'https://<URI>/api/legal/v1/legaltags' \ Run the below curl command in Azure Cloud Bash to create a legal tag for a given ``` ### Sample request-Consider a Microsoft Energy Data Services instance named "medstest" with a data partition named "dp1" +Consider an Azure Data Manager for Energy Preview instance named "medstest" with a data partition named "dp1" ```bash curl --location --request POST 'https://medstest.energy.azure.com/api/legal/v1/legaltags' \ Consider a Microsoft Energy Data Services instance named "medstest" with a data --header 'Content-Type: application/json' \ --data-raw '{ "name": "medstest-dp1-legal-tag",- "description": "Microsoft Energy Data Services Preview Legal Tag", + "description": "Azure Data Manager for Energy Preview Legal Tag", "properties": { "contractId": "A1234", "countryOfOrigin": ["US"], Consider a Microsoft Energy Data Services instance named "medstest" with a data ```JSON { "name": "medsStest-dp1-legal-tag",- "description": "Microsoft Energy Data Services Preview Legal Tag", + "description": "Azure Data Manager for Energy Preview Legal Tag", "properties": { "countryOfOrigin": [ "US" The Create Legal Tag api, internally appends data-partition-id to legal tag name --header 'Content-Type: application/json' \ --data-raw '{ "name": "legal-tag",- "description": "Microsoft Energy Data Services Preview Legal Tag", + "description": "Azure Data Manager for Energy Preview Legal Tag", "properties": { "contractId": "A1234", "countryOfOrigin": ["US"], The sample response will have data-partition-id appended to the legal tag name a ```JSON { "name": "medstest-dp1-legal-tag",- "description": "Microsoft Energy Data Services Preview Legal Tag", + "description": "Azure Data Manager for Energy Preview Legal Tag", "properties": { "countryOfOrigin": [ "US" The sample response will have data-partition-id appended to the legal tag name a ``` ## Get a legal tag-Run the below curl command in Azure Cloud Bash to get the legal tag associated with a data partition of your Microsoft Energy Data Services Preview instance. +Run the below curl command in Azure Cloud Bash to get the legal tag associated with a data partition of your Azure Data Manager for Energy Preview instance. ```bash curl --location --request GET 'https://<URI>/api/legal/v1/legaltags/<legal-tag-name>' \ Run the below curl command in Azure Cloud Bash to get the legal tag associated w ``` ### Sample request-Consider a Microsoft Energy Data Services instance named "medstest" with a data partition named "dp1" +Consider an Azure Data Manager for Energy Preview instance named "medstest" with a data partition named "dp1" ```bash curl --location --request GET 'https://medstest.energy.azure.com/api/legal/v1/legaltags/medstest-dp1-legal-tag' \ Consider a Microsoft Energy Data Services instance named "medstest" with a data ```JSON { "name": "medstest-dp1-legal-tag",- "description": "Microsoft Energy Data Services Preview Legal Tag", + "description": "Azure Data Manager for Energy Preview Legal Tag", "properties": { "countryOfOrigin": [ "US" |
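In addition to the create and get calls shown above, all legal tags in a data partition can be listed as a sanity check after creation. A sketch following the article's own conventions; the `valid` query parameter is an assumption based on the standard OSDU Legal API.

```bash
# Illustrative only: list the valid legal tags in the data partition.
curl --location --request GET 'https://medstest.energy.azure.com/api/legal/v1/legaltags?valid=true' \
--header 'data-partition-id: medstest-dp1' \
--header 'Authorization: Bearer <access_token>'
```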
energy-data-services | How To Manage Users | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-manage-users.md | Title: How to manage users in Microsoft Energy Data Services Preview #Required; page title is displayed in search results. Include the brand. -description: This article describes how to manage users in Microsoft Energy Data Services Preview #Required; article description that is displayed in search results. + Title: How to manage users in Microsoft Azure Data Manager for Energy Preview #Required; page title is displayed in search results. Include the brand. +description: This article describes how to manage users in Azure Data Manager for Energy Preview #Required; article description that is displayed in search results. -In this article, you'll know how to manage users in Microsoft Energy Data Services Preview. It uses the [entitlements API](https://community.opengroup.org/osdu/platform/security-and-compliance/entitlements/-/tree/master/) and acts as a group-based authorization system for data partitions within Microsoft Energy Data Service instance. For more information about Microsoft Energy Data Services Preview entitlements, see [entitlement services](concepts-entitlements.md). +In this article, you'll know how to manage users in Azure Data Manager for Energy Preview. It uses the [entitlements API](https://community.opengroup.org/osdu/platform/security-and-compliance/entitlements/-/tree/master/) and acts as a group-based authorization system for data partitions within Microsoft Energy Data Service instance. For more information about Azure Data Manager for Energy Preview entitlements, see [entitlement services](concepts-entitlements.md). [!INCLUDE [preview features callout](./includes/preview/preview-callout.md)] ## Prerequisites -Create a Microsoft Energy Data Services Preview instance using the tutorial at [How to create Microsoft Energy Data Services Preview instance](quickstart-create-microsoft-energy-data-services-instance.md). +Create an Azure Data Manager for Energy Preview instance using the tutorial at [How to create Azure Data Manager for Energy Preview instance](quickstart-create-microsoft-energy-data-services-instance.md). -You will need to pass parameters for generating the access token, which you'll need to make valid calls to the Entitlements API of your Microsoft Energy Data Services Preview instance. You will also need these parameters for different user management requests to the Entitlements API. Hence Keep the following values handy for these actions. +You will need to pass parameters for generating the access token, which you'll need to make valid calls to the Entitlements API of your Azure Data Manager for Energy Preview instance. You will also need these parameters for different user management requests to the Entitlements API. Hence Keep the following values handy for these actions. #### Find `tenant-id` Navigate to the Azure Active Directory account for your organization. One way to do so is by searching for "Azure Active Directory" in the Azure portal's search bar. Once there, locate `tenant-id` under the basic information section in the *Overview* tab. Copy the `tenant-id` and paste in an editor to be used later. Navigate to the Azure Active Directory account for your organization. 
One way to :::image type="content" source="media/how-to-manage-users/tenant-id.png" alt-text="Screenshot of finding the tenant-id."::: #### Find `client-id`-Often called `app-id`, it's the same value that you used to register your application during the provisioning of your [Microsoft Energy Data Services Preview instance](quickstart-create-microsoft-energy-data-services-instance.md). You'll find the `client-id` in the *Essentials* pane of Microsoft Energy Data Services Preview *Overview* page. Copy the `client-id` and paste in an editor to be used later. +Often called `app-id`, it's the same value that you used to register your application during the provisioning of your [Azure Data Manager for Energy Preview instance](quickstart-create-microsoft-energy-data-services-instance.md). You'll find the `client-id` in the *Essentials* pane of Azure Data Manager for Energy Preview *Overview* page. Copy the `client-id` and paste in an editor to be used later. > [!IMPORTANT]-> The 'client-id' that is passed as values in the entitlement API calls needs to be the same which was used for provisioning of your Microsoft Energy Data Services Preview instance. +> The 'client-id' that is passed as values in the entitlement API calls needs to be the same which was used for provisioning of your Azure Data Manager for Energy Preview instance. :::image type="content" source="media/how-to-manage-users/client-id-or-app-id.png" alt-text="Screenshot of finding the client-id for your registered App."::: #### Find `client-secret`-Sometimes called an application password, a `client-secret` is a string value your app can use in place of a certificate to identity itself. Navigate to *App Registrations*. Once there, open 'Certificates & secrets' under the *Manage* section. Create a `client-secret` for the `client-id` that you used to create your Microsoft Energy Data Services Preview instance, you can add one now by clicking on *New Client Secret*. Record the secret's `value` for use in your client application code. +Sometimes called an application password, a `client-secret` is a string value your app can use in place of a certificate to identity itself. Navigate to *App Registrations*. Once there, open 'Certificates & secrets' under the *Manage* section. Create a `client-secret` for the `client-id` that you used to create your Azure Data Manager for Energy Preview instance, you can add one now by clicking on *New Client Secret*. Record the secret's `value` for use in your client application code. > [!CAUTION] > Don't forget to record the secret's value for use in your client application code. This secret value is never displayed again after you leave this page at the time of creation of 'client secret'. :::image type="content" source="media/how-to-manage-users/client-secret.png" alt-text="Screenshot of finding the client secret."::: -#### Find the `url`for your Microsoft Energy Data Services Preview instance -Navigate to your Microsoft Energy Data Services Preview *Overview* page on Azure portal. Copy the URI from the essentials pane. +#### Find the `url`for your Azure Data Manager for Energy Preview instance +Navigate to your Azure Data Manager for Energy Preview *Overview* page on Azure portal. Copy the URI from the essentials pane. #### Find the `data-partition-id` for your group-You have two ways to get the list of data-partitions in your Microsoft Energy Data Services Preview instance. 
-- One option is to navigate *Data Partitions* menu item under the Advanced section of your Microsoft Energy Data Services Preview UI.+You have two ways to get the list of data-partitions in your Azure Data Manager for Energy Preview instance. +- One option is to navigate *Data Partitions* menu item under the Advanced section of your Azure Data Manager for Energy Preview UI. -- Another option is by clicking on the *view* below the *data partitions* field in the essentials pane of your Microsoft Energy Data Services Preview *Overview* page. +- Another option is by clicking on the *view* below the *data partitions* field in the essentials pane of your Azure Data Manager for Energy Preview *Overview* page. ## Generate access token You need to generate access token to use entitlements API. Run the below curl command in Azure Cloud Bash after replacing the placeholder values with the corresponding values found earlier in the pre-requisites step. curl --location --request POST 'https://login.microsoftonline.com/<tenant-id>/oa "access_token": "abcdefgh123456............." } ```-Copy the `access_token` value from the response. You'll need it to pass as one of the headers in all calls to the Entitlements API of your Microsoft Energy Data Services Preview instance. +Copy the `access_token` value from the response. You'll need it to pass as one of the headers in all calls to the Entitlements API of your Azure Data Manager for Energy Preview instance. ## User management activities-You can manage user's access to your Microsoft Energy Data Services instance or data partitions. As a prerequisite for this step, you need to find the 'object-id' (OID) of the user(s) first. -You'll need to input `object-id` (OID) of the users as parameters in the calls to the Entitlements API of your Microsoft Energy Data Services Preview Instance. `object-id`(OID) is the Azure Active Directory User Object ID. +You can manage users' access to your Microsoft Energy Data Services instance or data partitions. As a prerequisite for this step, you need to find the 'object-id' (OID) of the user(s) first. If you are managing an application's access to your instance or data partition, then you must find and use the application ID (or client ID) instead of the OID. ++You'll need to input the `object-id` (OID) of the users (or the application or client ID if managing access for an application) as parameters in the calls to the Entitlements API of your Microsoft Energy Data Services Preview Instance. `object-id` (OID) is the Azure Active Directory User Object ID. :::image type="content" source="media/how-to-manage-users/azure-active-directory-object-id.png" alt-text="Screenshot of finding the object-id from Azure Active Directory."::: You'll need to input `object-id` (OID) of the users as parameters in the calls t ### Get the list of all available groups -Run the below curl command in Azure Cloud Bash to get all the groups that are available for your Microsoft Energy Data Services instance and its data partitions. +Run the below curl command in Azure Cloud Bash to get all the groups that are available for your Azure Data Manager for Energy Preview instance and its data partitions. 
```bash curl --location --request GET "https://<URI>/api/entitlements/v2/groups/" \ The value to be sent for the param **"email"** is the **Object_ID (OID)** of the **Sample request** -Consider a Microsoft Energy Data Services instance named "medstest" with a data partition named "dp1" +Consider an Azure Data Manager for Energy Preview instance named "medstest" with a data partition named "dp1" ```bash curl --location --request POST 'https://medstest.energy.azure.com/api/entitlements/v2/groups/users@medstest-dp1.dataservices.energy/members' \ The value to be sent for the param **"email"** is the **Object_ID (OID)** of the **Sample request** -Consider a Microsoft Energy Data Services instance named "medstest" with a data partition named "dp1" +Consider an Azure Data Manager for Energy Preview instance named "medstest" with a data partition named "dp1" ```bash curl --location --request POST 'https://medstest.energy.azure.com/api/entitlements/v2/groups/service.search.user@medstest-dp1.dataservices.energy/members' \ Run the below curl command in Azure Cloud Bash to get all the groups associated **Sample request** -Consider a Microsoft Energy Data Services instance named "medstest" with a data partition named "dp1" +Consider an Azure Data Manager for Energy Preview instance named "medstest" with a data partition named "dp1" ```bash curl --location --request GET 'https://medstest.energy.azure.com/api/entitlements/v2/members/90e0d063-2f8e-4244-860a-XXXXXXXXXX/groups?type=none' \ Consider a Microsoft Energy Data Services instance named "medstest" with a data ### Delete entitlement groups of a given user -Run the below curl command in Azure Cloud Bash to delete a given user to your Microsoft Energy Data Services instance data partition. +Run the below curl command in Azure Cloud Bash to delete a given user to your Azure Data Manager for Energy Preview instance data partition. As stated above, **DO NOT** delete the OWNER of a group unless you have another OWNER that can manage users in that group. As stated above, **DO NOT** delete the OWNER of a group unless you have another **Sample request** -Consider a Microsoft Energy Data Services instance named "medstest" with a data partition named "dp1" +Consider an Azure Data Manager for Energy Preview instance named "medstest" with a data partition named "dp1" ```bash curl --location --request DELETE 'https://medstest.energy.azure.com/api/entitlements/v2/members/90e0d063-2f8e-4244-860a-XXXXXXXXXX' \ No output for a successful response ## Next steps <!-- Add a context sentence for the following links -->-Create a legal tag for your Microsoft Energy Data Services Preview instance's data partition. +Create a legal tag for your Azure Data Manager for Energy Preview instance's data partition. > [!div class="nextstepaction"] > [How to manage legal tags](how-to-manage-legal-tags.md) -Begin your journey by ingesting data into your Microsoft Energy Data Services Preview instance. +Begin your journey by ingesting data into your Azure Data Manager for Energy Preview instance. > [!div class="nextstepaction"] > [Tutorial on CSV parser ingestion](tutorial-csv-ingestion.md) > [!div class="nextstepaction"] |
energy-data-services | How To Set Up Private Links | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-set-up-private-links.md | Title: Create a private endpoint for Microsoft Energy Data Services -description: Learn how to set up private endpoints for Microsoft Energy Data Services by using Azure Private Link. + Title: Create a private endpoint for Microsoft Azure Data Manager for Energy Preview +description: Learn how to set up private endpoints for Azure Data Manager for Energy Preview by using Azure Private Link. Last updated 09/29/2022 -#Customer intent: As a developer, I want to set up private endpoints for Microsoft Energy Data Services. +#Customer intent: As a developer, I want to set up private endpoints for Azure Data Manager for Energy Preview. -# Create a private endpoint for Microsoft Energy Data Services +# Create a private endpoint for Azure Data Manager for Energy Preview [Azure Private Link](../private-link/private-link-overview.md) provides private connectivity from a virtual network to Azure platform as a service (PaaS). It simplifies the network architecture and secures the connection between endpoints in Azure by eliminating data exposure to the public internet. -By using Azure Private Link, you can connect to a Microsoft Energy Data Services Preview instance from your virtual network via a private endpoint, which is a set of private IP addresses in a subnet within the virtual network. You can then limit access to your Microsoft Energy Data Services instance over these private IP addresses. +By using Azure Private Link, you can connect to an Azure Data Manager for Energy Preview instance from your virtual network via a private endpoint, which is a set of private IP addresses in a subnet within the virtual network. You can then limit access to your Azure Data Manager for Energy Preview instance over these private IP addresses. -You can connect to a Microsoft Energy Data Services instance that's configured with Private Link by using an automatic or manual approval method. To learn more, see the [Private Link documentation](../private-link/private-endpoint-overview.md#access-to-a-private-link-resource-using-approval-workflow). +You can connect to an Azure Data Manager for Energy Preview instance that's configured with Private Link by using an automatic or manual approval method. To learn more, see the [Private Link documentation](../private-link/private-endpoint-overview.md#access-to-a-private-link-resource-using-approval-workflow). -This article describes how to set up a private endpoint for Microsoft Energy Data Services. +This article describes how to set up a private endpoint for Azure Data Manager for Energy Preview. [!INCLUDE [preview features callout](./includes/preview/preview-callout.md)] ## Prerequisites -[Create a virtual network](../virtual-network/quick-create-portal.md) in the same subscription as the Microsoft Energy Data Services instance. This virtual network will allow automatic approval of the Private Link endpoint. +[Create a virtual network](../virtual-network/quick-create-portal.md) in the same subscription as the Azure Data Manager for Energy Preview instance. This virtual network will allow automatic approval of the Private Link endpoint. 
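If you prefer the command line over the linked portal quickstart, the prerequisite virtual network can also be created with the Azure CLI. A minimal sketch: the resource group, names, region, and address ranges below are placeholders, not values required by the service.

```bash
# Illustrative Azure CLI sketch for the prerequisite virtual network.
# Resource group, names, region, and address ranges are placeholders.
az network vnet create \
  --resource-group "my-resource-group" \
  --name "meds-private-link-vnet" \
  --location "eastus" \
  --address-prefixes 10.1.0.0/16 \
  --subnet-name "private-endpoint-subnet" \
  --subnet-prefixes 10.1.0.0/24
```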
## Create a private endpoint by using the Azure portal -Use the following steps to create a private endpoint for an existing Microsoft Energy Data Services Preview instance by using the Azure portal: +Use the following steps to create a private endpoint for an existing Azure Data Manager for Energy Preview instance by using the Azure portal: -1. From the **All resources** pane, choose a Microsoft Energy Data Services Preview instance. +1. From the **All resources** pane, choose an Azure Data Manager for Energy Preview instance. 1. Select **Networking** from the list of settings. 1. On the **Public Access** tab, select **Enabled from all networks** to allow traffic from all networks. Use the following steps to create a private endpoint for an existing Microsoft E [](media/how-to-manage-private-links/private-links-3-basics.png#lightbox) > [!NOTE]- > Automatic approval happens only when the Microsoft Energy Data Services instance and the virtual network for the private endpoint are in the same subscription. + > Automatic approval happens only when the Azure Data Manager for Energy Preview instance and the virtual network for the private endpoint are in the same subscription. 1. Select **Next: Resource**. On the **Resource** page, confirm the following information: Use the following steps to create a private endpoint for an existing Microsoft E |--|--| |**Subscription**| Your subscription| |**Resource type**| **Microsoft.OpenEnergyPlatform/energyServices**|- |**Resource**| Your Microsoft Energy Data Services instance| - |**Target sub-resource**| **MEDS** (for Microsoft Energy Data Services) by default| + |**Resource**| Your Azure Data Manager for Energy Preview instance| + |**Target sub-resource**| **MEDS** (for Azure Data Manager for Energy Preview) by default| [](media/how-to-manage-private-links/private-links-4-resource.png#lightbox) Use the following steps to create a private endpoint for an existing Microsoft E [](media/how-to-manage-private-links/private-links-8-request-response.png#lightbox) -1. Select the **Microsoft Energy Data Services** instance, select **Networking**, and then select the **Private Access** tab. Confirm that your newly created private endpoint connection appears in the list. +1. Select the **Azure Data Manager for Energy Preview** instance, select **Networking**, and then select the **Private Access** tab. Confirm that your newly created private endpoint connection appears in the list. [](media/how-to-manage-private-links/private-links-9-auto-approved.png#lightbox) > [!NOTE]-> When the Microsoft Energy Data Services instance and the virtual network are in different tenants or subscriptions, you have to manually approve the request to create a private endpoint. The **Approve** and **Reject** buttons appear on the **Private Access** tab. +> When the Azure Data Manager for Energy Preview instance and the virtual network are in different tenants or subscriptions, you have to manually approve the request to create a private endpoint. The **Approve** and **Reject** buttons appear on the **Private Access** tab. > > [](media/how-to-manage-private-links/private-links-10-awaiting-approval.png#lightbox) Use the following steps to create a private endpoint for an existing Microsoft E <!-- Add a context sentence for the following links --> To learn more about using customer Lockbox as an interface to review and approve or reject access requests. 
> [!div class="nextstepaction"]-> [Use Lockbox for Microsoft Energy Data Services](how-to-create-lockbox.md) +> [Use Lockbox for Azure Data Manager for Energy Preview](how-to-create-lockbox.md) |
energy-data-services | How To Use Managed Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-use-managed-identity.md | Title: Use managed identities for Microsoft Energy Data Services on Azure -description: Learn how to use a managed identity to access Microsoft Energy Data Services from other Azure services. + Title: Use managed identities for Microsoft Azure Data Manager for Energy Preview on Azure +description: Learn how to use a managed identity to access Azure Data Manager for Energy Preview from other Azure services. Last updated 01/04/2023 -#Customer intent: As a developer, I want to use a managed identity to access Microsoft Energy Data Services from other Azure services, such as Azure Functions. +#Customer intent: As a developer, I want to use a managed identity to access Azure Data Manager for Energy Preview from other Azure services, such as Azure Functions. -# Use a managed identity to access Microsoft Energy Data Services from other Azure services +# Use a managed identity to access Azure Data Manager for Energy Preview from other Azure services -This article describes how to access the data plane or control plane of Microsoft Energy Data Services from other Microsoft Azure services by using a *managed identity*. +This article describes how to access the data plane or control plane of Azure Data Manager for Energy Preview from other Microsoft Azure services by using a *managed identity*. -There's a need for services such as Azure Functions to be able to consume Microsoft Energy Data Services APIs. This interoperability allows you to use the best capabilities of multiple Azure services. +There's a need for services such as Azure Functions to be able to consume Azure Data Manager for Energy Preview APIs. This interoperability allows you to use the best capabilities of multiple Azure services. -For example, you can write a script in Azure Functions to ingest data in Microsoft Energy Data Services. In that scenario, you should assume that Azure Functions is the source service and Microsoft Energy Data Services is the target service. +For example, you can write a script in Azure Functions to ingest data in Azure Data Manager for Energy Preview. In that scenario, you should assume that Azure Functions is the source service and Azure Data Manager for Energy Preview is the target service. -This article walks you through the five main steps for configuring Azure Functions to access Microsoft Energy Data Services. +This article walks you through the five main steps for configuring Azure Functions to access Azure Data Manager for Energy Preview. ## Overview of managed identities -A managed identity from Azure Active Directory (Azure AD) allows your application to easily access other Azure AD-protected resources. The identity is managed by the Azure platform and doesn't require you to create or rotate any secrets. Any Azure service that wants to access Microsoft Energy Data Services control plane or data plane for any operation can use a managed identity to do so. +A managed identity from Azure Active Directory (Azure AD) allows your application to easily access other Azure AD-protected resources. The identity is managed by the Azure platform and doesn't require you to create or rotate any secrets. Any Azure service that wants to access Azure Data Manager for Energy Preview control plane or data plane for any operation can use a managed identity to do so. 
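If you want to create the user-assigned identity referenced throughout this article from the command line rather than the portal, a minimal Azure CLI sketch is shown below; the resource group and identity name are placeholders, and the two `show` queries return the object ID used in step 1 and the client ID used in the token code in step 5.

```bash
# Illustrative sketch: create a user-assigned identity and read back the IDs
# used later in this article. Resource group and identity name are placeholders.
az identity create \
  --resource-group "my-resource-group" \
  --name "meds-function-identity"

# Object ID (principal ID) of the identity, the value retrieved via the portal in step 1.
az identity show \
  --resource-group "my-resource-group" \
  --name "meds-function-identity" \
  --query principalId --output tsv

# Client ID that the Azure function passes when requesting a token in step 5.
az identity show \
  --resource-group "my-resource-group" \
  --name "meds-function-identity" \
  --query clientId --output tsv
```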
There are two types of managed identities: There are two types of managed identities: To learn more about managed identities, see [What are managed identities for Azure resources?](../active-directory/managed-identities-azure-resources/overview.md). -Currently, other services can connect to Microsoft Energy Data Services by using a system-assigned or user-assigned managed identity. However, Microsoft Energy Data Services doesn't support system-assigned managed identities. +Currently, other services can connect to Azure Data Manager for Energy Preview by using a system-assigned or user-assigned managed identity. However, Azure Data Manager for Energy Preview doesn't support system-assigned managed identities. -For the scenario in this article, you'll use a user-assigned managed identity in Azure Functions to call a data plane API in Microsoft Energy Data Services. +For the scenario in this article, you'll use a user-assigned managed identity in Azure Functions to call a data plane API in Azure Data Manager for Energy Preview. ## Prerequisites Before you begin, create the following resources: -* [Microsoft Energy Data Services instance](quickstart-create-microsoft-energy-data-services-instance.md) +* [Azure Data Manager for Energy Preview instance](quickstart-create-microsoft-energy-data-services-instance.md) * [Azure function app](../azure-functions/functions-create-function-app-portal.md) Before you begin, create the following resources: ## Step 1: Retrieve the object ID -To retrieve the object ID for the user-assigned identity that will access the Microsoft Energy Data Services APIs: +To retrieve the object ID for the user-assigned identity that will access the Azure Data Manager for Energy Preview APIs: 1. Sign in to the [Azure portal](https://portal.azure.com/). 2. Go to the managed identity, and then select **Overview**. Retrieve the application ID of the user-assigned identity by using the object ID ## Step 4: Add the application ID to entitlement groups -Next, add the application ID to the appropriate groups that will use the entitlement service to access Microsoft Energy Data Services APIs. The following example adds the application ID to two groups: +Next, add the application ID to the appropriate groups that will use the entitlement service to access Azure Data Manager for Energy Preview APIs. The following example adds the application ID to two groups: * users@[partition ID].dataservices.energy * users.datalake.editors@[partition ID].dataservices.energy To add the application ID: * Tenant ID * Client ID * Client secret- * Microsoft Energy Data Services URI + * Azure Data Manager for Energy Preview URI * Data partition ID * [Access token](how-to-manage-users.md#prerequisites) * Application ID of the managed identity To add the application ID: 1. To add the application ID to the users@[partition ID].dataservices.energy group, run the following cURL command via Bash in Azure: ```bash- curl --location --request POST 'https://<Microsoft Energy Data Services URI>/api/entitlements/v2/groups/users@ <data-partition-id>.dataservices.energy/members' \ + curl --location --request POST 'https://<Azure Data Manager for Energy Preview URI>/api/entitlements/v2/groups/users@ <data-partition-id>.dataservices.energy/members' \ --header 'data-partition-id: <data-partition-id>' \ --header 'Authorization: Bearer \ --header 'Content-Type: application/json' \ To add the application ID: 1. 
To add the application ID to the users.datalake.editors@[partition ID].dataservices.energy group, run the following cURL command via Bash in Azure: ```bash- curl --location --request POST 'https://<Microsoft Energy Data Services URI>/api/entitlements/v2/groups/ users.datalake.editors@ <data-partition-id>.dataservices.energy/members' \ + curl --location --request POST 'https://<Azure Data Manager for Energy Preview URI>/api/entitlements/v2/groups/ users.datalake.editors@ <data-partition-id>.dataservices.energy/members' \ --header 'data-partition-id: <data-partition-id>' \ --header 'Authorization: Bearer \ --header 'Content-Type: application/json' \ To add the application ID: ## Step 5: Generate a token -Now Azure Functions is ready to access Microsoft Energy Data Services APIs. +Now Azure Functions is ready to access Azure Data Manager for Energy Preview APIs. -The Azure function generates a token by using the user-assigned identity. The function uses the application ID that's present in the Microsoft Energy Data Services instance while generating the token. +The Azure function generates a token by using the user-assigned identity. The function uses the application ID that's present in the Azure Data Manager for Energy Preview instance while generating the token. Here's an example of the Azure function code: from msrestazure.azure_active_directory import MSIAuthentication def main(req: func.HttpRequest) -> str: logging.info('Python HTTP trigger function processed a request.') - //To authenticate by using a managed identity, you need to pass the Microsoft Energy Data Services application ID as the resource. + //To authenticate by using a managed identity, you need to pass the Azure Data Manager for Energy Preview application ID as the resource. //To use a user-assigned identity, you should include the //client ID as an additional parameter. //Managed identity using user-assigned identity: MSIAuthentication(client_id, resource) def main(req: func.HttpRequest) -> str: creds = MSIAuthentication(client_id="<client_id_of_managed_identity>ΓÇ¥, resource="<meds_app_id>") url = "https://<meds-uri>/api/entitlements/v2/groups" payload = {}- // Passing the data partition ID of Microsoft Energy Data Services in headers along with the token received using the managed instance. + // Passing the data partition ID of Azure Data Manager for Energy Preview in headers along with the token received using the managed instance. headers = { 'data-partition-id': '<data partition id>', 'Authorization': 'Bearer ' + creds.token["access_token"] You should get the following successful response from Azure Functions: [](media/how-to-use-managed-identity/5-azure-function-success.png#lightbox) -With the preceding steps completed, you can now use Azure Functions to access Microsoft Energy Data Services APIs with appropriate use of managed identities. +With the preceding steps completed, you can now use Azure Functions to access Azure Data Manager for Energy Preview APIs with appropriate use of managed identities. ## Next steps Learn about Lockbox: > [!div class="nextstepaction"]-> [Lockbox in Microsoft Energy Data Services](how-to-create-lockbox.md) +> [Lockbox in Azure Data Manager for Energy Preview](how-to-create-lockbox.md) |
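To sanity-check the entitlement setup end to end without deploying function code, you can request a token directly from the Azure Instance Metadata Service on a virtual machine that has the same user-assigned identity attached, then call the same entitlements endpoint. This is a minimal sketch under stated assumptions: it relies on `jq` to parse the token response, and the client ID, application ID, instance URI, and data partition ID are placeholders.

```bash
# Illustrative end-to-end check, run from an Azure VM that has the
# user-assigned identity attached. jq parses the token response; the client ID,
# application ID, instance URI, and data partition ID are placeholders.
TOKEN=$(curl -s -H "Metadata: true" \
  "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=<application-id>&client_id=<client-id-of-managed-identity>" \
  | jq -r '.access_token')

# Same entitlements call that the Azure function makes in the example above.
curl --location --request GET 'https://<instance-uri>/api/entitlements/v2/groups' \
  --header 'data-partition-id: <data-partition-id>' \
  --header "Authorization: Bearer ${TOKEN}"
```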
energy-data-services | Overview Ddms | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/overview-ddms.md | Title: Overview of domain data management services - Microsoft Energy Data Services Preview #Required; page title is displayed in search results. Include the brand. + Title: Overview of domain data management services - Microsoft Azure Data Manager for Energy Preview #Required; page title is displayed in search results. Include the brand. description: This article provides an overview of Domain data management services #Required; article description that is displayed in search results. |
energy-data-services | Overview Microsoft Energy Data Services | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/overview-microsoft-energy-data-services.md | Title: What is Microsoft Energy Data Services Preview? #Required; page title is displayed in search results. Include the brand. -description: This article provides an overview of Microsoft Energy Data Services Preview #Required; article description that is displayed in search results. + Title: What is Microsoft Azure Data Manager for Energy Preview? #Required; page title is displayed in search results. Include the brand. +description: This article provides an overview of Azure Data Manager for Energy Preview #Required; article description that is displayed in search results. Previously updated : 09/08/2022 #Required; mm/dd/yyyy format. Last updated : 02/08/2023 #Required; mm/dd/yyyy format. -# What is Microsoft Energy Data Services Preview? +# What is Azure Data Manager for Energy Preview? -Microsoft Energy Data Services Preview is a secure, reliable, hyperscale, fully managed cloud-based data platform solution for the energy industry. It is an enterprise-grade data platform that brings together the capabilities of OSDU™ Data Platform, Microsoft's secure and trusted Azure cloud platform, and SLB's extensive domain expertise. It allows customers to free data from silos, provides strong data management, storage, and federation strategy. Microsoft Energy Data Services ensures compatibility with evolving community standards like OSDU™ and enables value addition through interoperability with both first-party and third-party solutions. +Azure Data Manager for Energy Preview is a secure, reliable, hyperscale, fully managed cloud-based data platform solution for the energy industry. It is an enterprise-grade data platform that brings together the capabilities of OSDU™ Data Platform, Microsoft's secure and trusted Azure cloud platform, and SLB's extensive domain expertise. It allows customers to free data from silos, provides strong data management, storage, and federation strategy. Azure Data Manager for Energy Preview ensures compatibility with evolving community standards like OSDU™ and enables value addition through interoperability with both first-party and third-party solutions. [!INCLUDE [preview features callout](./includes/preview/preview-callout.md)] ## Principles -Microsoft Energy Data Services conforms to the following principles: +Azure Data Manager for Energy Preview conforms to the following principles: ### Fully managed OSDU™ platform -Microsoft Energy Data Services Preview is a first-party PaaS (Platform as a Service) offering where Microsoft manages the deployment, monitoring, management, scale, security, updates, and upgrades of the service so that the customers can focus on the value from the platform. Microsoft offers seamless upgrades to the latest OSDU™ milestone versions after testing and validation. +Azure Data Manager for Energy Preview is a first-party PaaS (Platform as a Service) offering where Microsoft manages the deployment, monitoring, management, scale, security, updates, and upgrades of the service so that the customers can focus on the value from the platform. Microsoft offers seamless upgrades to the latest OSDU™ milestone versions after testing and validation. -Furthermore, Microsoft Energy Data Services Preview provides security capabilities like encryption for data-in-transit and data-at-rest. 
The authentication and authorization are provided by Azure Active Directory. Microsoft also assumes the responsibility of providing regular security patches and updates. +Furthermore, Azure Data Manager for Energy Preview provides security capabilities like encryption for data-in-transit and data-at-rest. The authentication and authorization are provided by Azure Active Directory. Microsoft also assumes the responsibility of providing regular security patches and updates. -Microsoft Energy Data Services Preview also supports multiple data partitions for every platform instance. More data partitions can also be created after creating an instance, as needed. +Azure Data Manager for Energy Preview also supports multiple data partitions for every platform instance. More data partitions can also be created after creating an instance, as needed. As an Azure-based service, it also provides elasticity with auto-scaling to handle dynamically varying workload requirements. The service provides out-of-the-box compatibility and built-in integration with industry-leading applications from SLB, including Petrel to provide quick time to value. Microsoft will provide support for the platform to enable our customers' use cas ### Accelerated innovation with openness in mind -Microsoft Energy Data Services Preview is compatible with the OSDU™ Technical Standard enables seamless integration of existing applications that have been developed in alignment with the emerging requirements of the OSDU™ Standard. +Azure Data Manager for Energy Preview is compatible with the OSDU™ Technical Standard enables seamless integration of existing applications that have been developed in alignment with the emerging requirements of the OSDU™ Standard. The platform's openness and integration with Microsoft Azure Marketplace brings industry-leading applications, solutions, and integration services offered by our extensive partner ecosystem to our customers. ### Extensibility with the Microsoft ecosystem -Most of our customers rely on ubiquitous tools and applications from Microsoft. The Microsoft Energy Data Services Preview platform is piloting how it can seamlessly work with deeply used Microsoft apps like SharePoint for data ingestion, Synapse for data transformations and pipelines, Power BI for data visualization, and other possibilities. A Power BI connector has already been released in the community, and partners are leveraging these tools and connectors to enhance their integrations with Microsoft apps and services. +Most of our customers rely on ubiquitous tools and applications from Microsoft. The Azure Data Manager for Energy Preview platform is piloting how it can seamlessly work with deeply used Microsoft apps like SharePoint for data ingestion, Synapse for data transformations and pipelines, Power BI for data visualization, and other possibilities. A Power BI connector has already been released in the community, and partners are leveraging these tools and connectors to enhance their integrations with Microsoft apps and services. OSDU™ is a trademark of The Open Group. ## Next steps Follow the quickstart guide to quickly deploy Microsoft Energy Data Service in your Azure subscription > [!div class="nextstepaction"]-> [Quickstart: Create Microsoft Energy Data Services Preview instance](quickstart-create-microsoft-energy-data-services-instance.md) +> [Quickstart: Create Azure Data Manager for Energy Preview instance](quickstart-create-microsoft-energy-data-services-instance.md) |
energy-data-services | Quickstart Create Microsoft Energy Data Services Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/quickstart-create-microsoft-energy-data-services-instance.md | Title: Create a Microsoft Energy Data Services Preview instance #Required; page title is displayed in search results. Include the brand. -description: Quickly create a Microsoft Energy Data Services Preview instance #Required; article description that is displayed in search results. + Title: Create a Microsoft Azure Data Manager for Energy Preview instance #Required; page title is displayed in search results. Include the brand. +description: Quickly create an Azure Data Manager for Energy Preview instance #Required; article description that is displayed in search results. Last updated 08/18/2022 -# Quickstart: Create a Microsoft Energy Data Services Preview instance +# Quickstart: Create an Azure Data Manager for Energy Preview Preview instance [!INCLUDE [preview features callout](./includes/preview/preview-callout.md)] -Get started by creating a Microsoft Energy Data Services Preview instance on Azure portal on a web browser. You first register an Azure application on Active Directory and then use the application ID to create a Microsoft Energy Data Services instance in your chosen Azure Subscription and region. +Get started by creating an Azure Data Manager for Energy Preview instance on Azure portal on a web browser. You first register an Azure application on Active Directory and then use the application ID to create an Azure Data Manager for Energy Preview instance in your chosen Azure Subscription and region. -The setup of Microsoft Energy Data Services Preview instance can be triggered using a simple interface on Azure portal and takes about 50 minutes to complete. +The setup of Azure Data Manager for Energy Preview instance can be triggered using a simple interface on Azure portal and takes about 50 minutes to complete. -Microsoft Energy Data Services Preview is a managed "Platform as a service (PaaS)" offering from Microsoft that builds on top of the [OSDU™](https://osduforum.org/) Data Platform. Microsoft Energy Data Services Preview lets you ingest, transform, and export subsurface data by letting you connect your consuming in-house or third-party applications. +Azure Data Manager for Energy Preview is a managed "Platform as a service (PaaS)" offering from Microsoft that builds on top of the [OSDU™](https://osduforum.org/) Data Platform. Azure Data Manager for Energy Preview lets you ingest, transform, and export subsurface data by letting you connect your consuming in-house or third-party applications. ## Prerequisites | Prerequisite | Details | | | - |-Active Azure Subscription | You'll need the Azure subscription ID in which you want to install Microsoft Energy Data Services. You need to have appropriate permissions to create Azure resources in this subscription. -Application ID | You'll need an [application ID](../active-directory/develop/application-model.md) (often referred to as "App ID" or a "client ID"). This application ID will be used for authentication to Azure Active Directory and will be associated with your Microsoft Energy Data Services instance. You can [create an application ID](../active-directory/develop/quickstart-register-app.md) by navigating to Active directory and selecting *App registrations* > *New registration*. 
+Active Azure Subscription | You'll need the Azure subscription ID in which you want to install Azure Data Manager for Energy Preview. You need to have appropriate permissions to create Azure resources in this subscription. +Application ID | You'll need an [application ID](../active-directory/develop/application-model.md) (often referred to as "App ID" or a "client ID"). This application ID will be used for authentication to Azure Active Directory and will be associated with your Azure Data Manager for Energy Preview instance. You can [create an application ID](../active-directory/develop/quickstart-register-app.md) by navigating to Active directory and selecting *App registrations* > *New registration*. Client Secret | Sometimes called an application password, a client secret is a string value that your app can use in place of a certificate to identity itself. You can [create a client secret](../active-directory/develop/quickstart-register-app.md#add-a-client-secret) by selecting *Certificates & secrets* > *Client secrets* > *New client secret*. Record the secret's value for use in your client application code. This secret value is never displayed again after you leave this page. -## Create a Microsoft Energy Data Services Preview instance +## Create an Azure Data Manager for Energy Preview instance 1. Save your **Application (client) ID** and **client secret** from Azure Active Directory to refer to them later in this quickstart. Client Secret | Sometimes called an application password, a client secret is a s 1. Sign in to [Microsoft Azure Marketplace](https://portal.azure.com/?microsoft_azure_marketplace_ItemHideKey=Microsoft_Azure_OpenEnergyPlatformHidden) > [!IMPORTANT]- > *Microsoft Energy Data Services* is accessible on the Azure Marketplace only if you use the above Azure portal link. + > *Azure Data Manager for Energy Preview* is accessible on the Azure Marketplace only if you use the above Azure portal link. -1. If you have access to multiple tenants, use the *Directories + subscriptions* filter in the top menu to switch to the tenant in which you want to install Microsoft Energy Data Services. +1. If you have access to multiple tenants, use the *Directories + subscriptions* filter in the top menu to switch to the tenant in which you want to install Azure Data Manager for Energy Preview. -1. Use the search bar in the Azure Marketplace (not the global Azure search bar on top of the screen) to search for *Microsoft Energy Data Services*. +1. Use the search bar in the Azure Marketplace (not the global Azure search bar on top of the screen) to search for *Azure Data Manager for Energy Preview*. - [](media/quickstart-create-microsoft-energy-data-services-instance/search-meds-on-azure-marketplace.png#lightbox) + [](media/quickstart-create-microsoft-energy-data-services-instance/search-meds-on-azure-marketplace.png#lightbox) -1. In the search page, select *Create* on the card titled "Microsoft Energy Data Services (Preview)". +1. In the search page, select *Create* on the card titled "Azure Data Manager for Energy Preview(Preview)". -1. A new window appears. Complete the *Basics* tab by choosing the *subscription*, *resource group*, and the *region* in which you want to create your instance of Microsoft Energy Data Services. Enter the *App ID* that you created during the prerequisite steps. +1. A new window appears. 
Complete the *Basics* tab by choosing the *subscription*, *resource group*, and the *region* in which you want to create your instance of Azure Data Manager for Energy Preview. Enter the *App ID* that you created during the prerequisite steps. - [](media/quickstart-create-microsoft-energy-data-services-instance/input-basic-details.png#lightbox) + [](media/quickstart-create-microsoft-energy-data-services-instance/input-basic-details.png#lightbox) Some naming conventions to guide you at this step: Client Secret | Sometimes called an application password, a client secret is a s | -- | | Instance name | Only alphanumeric characters are allowed, and the value must be 1-15 characters long. The name is **not** case-sensitive. One resource group can't have two instances with the same name. Application ID | Enter the valid Application ID that you generated and saved in the last section.- Data Partition name | Name should be 1-10 char long consisting of lowercase alphanumeric characters and hyphens. It should start with an alphanumeric character and not contain consecutive hyphens. The data partition names that you chose are automatically prefixed with your Microsoft Energy Data Services instance name. This compound name will be used to refer to your data partition in application and API calls. + Data Partition name | Name should be 1-10 char long consisting of lowercase alphanumeric characters and hyphens. It should start with an alphanumeric character and not contain consecutive hyphens. The data partition names that you chose are automatically prefixed with your Azure Data Manager for Energy Preview instance name. This compound name will be used to refer to your data partition in application and API calls. > [!NOTE]- > Microsoft Energy Data Services instance and data partition names, once created, cannot be changed later. + > Azure Data Manager for Energy Preview instance and data partition names, once created, cannot be changed later. 1. Select **Next: Tags** and enter any tags that you would want to specify. If nothing, this field can be left blank. Client Secret | Sometimes called an application password, a client secret is a s [](media/quickstart-create-microsoft-energy-data-services-instance/validation-check-after-entering-details.png#lightbox) -1. This step is optional. You can download an Azure Resource Manager (ARM) template and use it for automated deployments of Microsoft Energy Data Services in future. Select *Download a template for automation* located on the bottom-right of the screen. +1. This step is optional. You can download an Azure Resource Manager (ARM) template and use it for automated deployments of Azure Data Manager for Energy Preview in future. Select *Download a template for automation* located on the bottom-right of the screen. [](media/quickstart-create-microsoft-energy-data-services-instance/download-template-automation.png#lightbox) Client Secret | Sometimes called an application password, a client secret is a s [](media/quickstart-create-microsoft-energy-data-services-instance/deployment-complete.png#lightbox) - [](media/quickstart-create-microsoft-energy-data-services-instance/overview-energy-data-services.png#lightbox) + [](media/quickstart-create-microsoft-energy-data-services-instance/overview-energy-data-services.png#lightbox) -## Delete a Microsoft Energy Data Services Preview instance +## Delete an Azure Data Manager for Energy Preview instance -Deleting a Microsoft Energy Data instance also deletes any data that you've ingested. 
This action is permanent and the ingested data can't be recovered. To delete a Microsoft Energy Data Services instance, complete the following steps: +Deleting a Microsoft Energy Data instance also deletes any data that you've ingested. This action is permanent and the ingested data can't be recovered. To delete an Azure Data Manager for Energy Preview instance, complete the following steps: 1. Sign in to the Azure portal and delete the *resource group* in which these components are installed. -2. This step is optional. Go to Azure Active Directory and delete the *app registration* that you linked to your Microsoft Energy Data Services instance. +2. This step is optional. Go to Azure Active Directory and delete the *app registration* that you linked to your Azure Data Manager for Energy Preview instance. OSDU™ is a trademark of The Open Group. ## Next steps-After provisioning a Microsoft Energy Data Services instance, you can learn about user management on this instance. +After provisioning an Azure Data Manager for Energy Preview instance, you can learn about user management on this instance. > [!div class="nextstepaction"] > [How to manage users](how-to-manage-users.md) |
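For the optional automation step mentioned above, the downloaded Azure Resource Manager template can be redeployed with the Azure CLI. A minimal sketch: the resource group is assumed to exist already, and the file names are placeholders for whatever the portal download produced.

```bash
# Illustrative redeployment of the ARM template downloaded from the portal.
# The target resource group must already exist; file names are placeholders.
az deployment group create \
  --resource-group "my-resource-group" \
  --template-file "template.json" \
  --parameters @parameters.json
```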
energy-data-services | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/release-notes.md | Title: Release notes for Microsoft Energy Data Services Preview #Required; page title is displayed in search results. Include the brand. -description: This topic provides release notes of Microsoft Energy Data Services Preview releases, improvements, bug fixes, and known issues. #Required; article description that is displayed in search results. + Title: Release notes for Microsoft Azure Data Manager for Energy Preview #Required; page title is displayed in search results. Include the brand. +description: This topic provides release notes of Azure Data Manager for Energy Preview releases, improvements, bug fixes, and known issues. #Required; article description that is displayed in search results. Last updated 09/20/2022 #Required; mm/dd/yyyy format. -# Release Notes for Microsoft Energy Data Services Preview +# Release Notes for Azure Data Manager for Energy Preview [!INCLUDE [preview features callout](./includes/preview/preview-callout.md)] -Microsoft Energy Data Services is updated on an ongoing basis. To stay up to date with the most recent developments, this article provides you with information about: +Azure Data Manager for Energy Preview is updated on an ongoing basis. To stay up to date with the most recent developments, this article provides you with information about: - The latest releases - Known issues Microsoft Energy Data Services will begin billing February 15, 2023. Prices will ### Managed Identity Support -You can use a managed identity to authenticate to any [service that supports Azure AD (Active Directory) authentication](../active-directory/managed-identities-azure-resources/services-azure-active-directory-support.md) with Microsoft Energy Data Services. For example, you can write a script in Azure Function to ingest data in Microsoft Energy Data Services. Now, you can use managed identity to connect to Microsoft Energy Data Services using system or user assigned managed identity from other Azure services. [Learn more.]( ../energy-data-services/how-to-use-managed-identity.md) +You can use a managed identity to authenticate to any [service that supports Azure AD (Active Directory) authentication](../active-directory/managed-identities-azure-resources/services-azure-active-directory-support.md) with Azure Data Manager for Energy Preview. For example, you can write a script in Azure Function to ingest data in Azure Data Manager for Energy Preview. Now, you can use managed identity to connect to Azure Data Manager for Energy Preview using system or user assigned managed identity from other Azure services. [Learn more.]( ../energy-data-services/how-to-use-managed-identity.md) ### Availability zone support -Availability Zones are physically separate locations within an Azure region made up of one or more datacenters equipped with independent power, cooling, and networking. Availability Zones provide in-region High Availability and protection against local disasters. Microsoft Energy Data Services Preview supports zone-redundant instance by default and there's no setup required by the Customer. [Learn more.](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=energy-data-services®ions=all) +Availability Zones are physically separate locations within an Azure region made up of one or more datacenters equipped with independent power, cooling, and networking. 
Availability Zones provide in-region High Availability and protection against local disasters. Azure Data Manager for Energy Preview supports zone-redundant instance by default and there's no setup required by the Customer. [Learn more.](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=energy-data-services®ions=all) <hr width=100%> Availability Zones are physically separate locations within an Azure region made ### Lockbox -Most operations, support, and troubleshooting performed by Microsoft personnel do not require access to customer data. In those rare circumstances where such access is required, Customer Lockbox for Microsoft Energy Data Services provides you an interface to review and approve or reject data access requests. Microsoft Energy Data Services now supports Lockbox. [Learn more](../security/fundamentals/customer-lockbox-overview.md). +Most operations, support, and troubleshooting performed by Microsoft personnel do not require access to customer data. In those rare circumstances where such access is required, Customer Lockbox for Azure Data Manager for Energy Preview provides you an interface to review and approve or reject data access requests. Azure Data Manager for Energy Preview now supports Lockbox. [Learn more](../security/fundamentals/customer-lockbox-overview.md). <hr width=100%> Most operations, support, and troubleshooting performed by Microsoft personnel d ### Support for Private Links -Azure Private Link on Microsoft Energy Data Services provides private access to the service. This means traffic between your private network and Microsoft Energy Data Services travels over the Microsoft backbone network therefore limiting any exposure over the internet. By using Azure Private Link, you can connect to a Microsoft Energy Data Services instance from your virtual network via a private endpoint, which is a set of private IP addresses in a subnet within the virtual network. You can then limit access to your Microsoft Energy Data Services instance over these private IP addresses. [Create a private endpoint for Microsoft Energy Data Services](how-to-set-up-private-links.md). +Azure Private Link on Azure Data Manager for Energy Preview provides private access to the service. This means traffic between your private network and Azure Data Manager for Energy Preview travels over the Microsoft backbone network therefore limiting any exposure over the internet. By using Azure Private Link, you can connect to an Azure Data Manager for Energy Preview instance from your virtual network via a private endpoint, which is a set of private IP addresses in a subnet within the virtual network. You can then limit access to your Azure Data Manager for Energy Preview instance over these private IP addresses. [Create a private endpoint for Azure Data Manager for Energy Preview](how-to-set-up-private-links.md). ### Encryption at Rest using Customer Managed Keys-Microsoft Energy Data Services Preview supports customer managed encryption keys (CMK). All data in Microsoft Energy Data Services is encrypted with Microsoft-managed keys by default. In addition to Microsoft-managed key, you can use your own encryption key to protect the data in Microsoft Energy Data Services. When you specify a customer-managed key, that key is used to protect and control access to the Microsoft-managed key that encrypts your data. [Data security and encryption in Microsoft Energy Data Services](how-to-manage-data-security-and-encryption.md). 
+Azure Data Manager for Energy Preview supports customer managed encryption keys (CMK). All data in Azure Data Manager for Energy Preview is encrypted with Microsoft-managed keys by default. In addition to Microsoft-managed key, you can use your own encryption key to protect the data in Azure Data Manager for Energy Preview. When you specify a customer-managed key, that key is used to protect and control access to the Microsoft-managed key that encrypts your data. [Data security and encryption in Azure Data Manager for Energy Preview](how-to-manage-data-security-and-encryption.md). <hr width=100%> Microsoft Energy Data Services Preview supports customer managed encryption keys ### Key Announcement: Preview Release -Microsoft Energy Data Services is now available in public preview. Information on latest releases, bug fixes, & deprecated functionality for Microsoft Energy Data Services Preview will be updated monthly. Keep tracking this page. +Azure Data Manager for Energy Preview is now available in public preview. Information on latest releases, bug fixes, & deprecated functionality for Azure Data Manager for Energy Preview will be updated monthly. Keep tracking this page. -Microsoft Energy Data Services is developed in alignment with the emerging requirements of the OSDUΓäó Technical Standard, Version 1.0. and is currently aligned with Mercury Release(R3), [Milestone-12](https://community.opengroup.org/osdu/governance/project-management-committee/-/wikis/M12-Release-Notes). +Azure Data Manager for Energy Preview is developed in alignment with the emerging requirements of the OSDUΓäó Technical Standard, Version 1.0. and is currently aligned with Mercury Release(R3), [Milestone-12](https://community.opengroup.org/osdu/governance/project-management-committee/-/wikis/M12-Release-Notes). ### Partition & User Management Microsoft Energy Data Services is developed in alignment with the emerging requi - Enabled support for user context in ingestion ([ADR: Issue 52](https://community.opengroup.org/osdu/platform/data-flow/ingestion/home/-/issues/52)) - User identity is preserved and passed on to all ingestion workflow related services using the newly introduced _x-on-behalf-of_ header. A user needs to have appropriate service level entitlements on all dependent services involved in the ingestion workflow and only users with appropriate data level entitlements can modify data. - Workflow service payload is restricted to a maximum of 2 MB. If it exceeds, the service will throw an HTTP 413 error. This restriction is placed to prevent workflow requests from overwhelming the server.-- Microsoft Energy Data Services uses Azure Data Factory (ADF) to run large scale ingestion workloads.+- Azure Data Manager for Energy Preview uses Azure Data Factory (ADF) to run large scale ingestion workloads. ### Search Microsoft Energy Data Services is developed in alignment with the emerging requi ### Region Availability -- Currently, Microsoft Energy Data Services is being offered in the following regions - South Central US, East US, West Europe, and North Europe.+- Currently, Azure Data Manager for Energy Preview is being offered in the following regions - South Central US, East US, West Europe, and North Europe. |
energy-data-services | Resources Partner Solutions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/resources-partner-solutions.md | Title: Microsoft Energy Data Services partners -description: Lists of third-party Microsoft Energy Data Services partners solutions. + Title: Microsoft Azure Data Manager for Energy Preview partners +description: Lists of third-party Azure Data Manager for Energy Preview partners solutions. Last updated 09/24/2022-# Microsoft Energy Data Services Preview partners +# Azure Data Manager for Energy Preview partners -Partner community is the growth engine for Microsoft. To help our customers quickly realize the benefits of Microsoft Energy Data Services Preview, we've worked closely with many partners who have tested their software applications and tools on our data platform. +Partner community is the growth engine for Microsoft. To help our customers quickly realize the benefits of Azure Data Manager for Energy Preview, we've worked closely with many partners who have tested their software applications and tools on our data platform. ## Partner solutions-This article highlights Microsoft partners with software solutions officially supporting Microsoft Energy Data Services. +This article highlights Microsoft partners with software solutions officially supporting Azure Data Manager for Energy Preview. | Partner | Description | Website/Product link | | - | -- | -- |-| Bluware | Bluware enables energy companies to explore the full value of seismic data for exploration, carbon capture, wind farms, and geothermal workflows. Bluware technology on Microsoft Energy Data Services is increasing workflow productivity utilizing the power of Azure. Bluware's flagship seismic deep learning solution, InteractivAI™, drastically improves the effectiveness of interpretation workflows. The interactive experience reduces seismic interpretation time by 10 times from weeks to hours and provides full control over interpretation results. | [Bluware technologies on Azure](https://go.bluware.com/bluware-on-azure-markeplace) [Bluware Products and Evaluation Packages](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/bluwarecorp1581537274084.bluwareazurelisting)| +| Bluware | Bluware enables energy companies to explore the full value of seismic data for exploration, carbon capture, wind farms, and geothermal workflows. Bluware technology on Azure Data Manager for Energy Preview is increasing workflow productivity utilizing the power of Azure. Bluware's flagship seismic deep learning solution, InteractivAI™, drastically improves the effectiveness of interpretation workflows. The interactive experience reduces seismic interpretation time by 10 times from weeks to hours and provides full control over interpretation results. | [Bluware technologies on Azure](https://go.bluware.com/bluware-on-azure-markeplace) [Bluware Products and Evaluation Packages](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/bluwarecorp1581537274084.bluwareazurelisting)| | Katalyst | Katalyst Data Management® provides the only integrated, end-to-end subsurface data management solution for the oil and gas industry. Over 160 employees operate in North America, Europe and Asia-Pacific, dedicated to enabling digital transformation and optimizing the value of geotechnical information for exploration, production, and M&A activity. 
|[Katalyst Data Management solution](https://www.katalystdm.com/seismic-news/katalyst-announces-sub-surface-data-management-solution-powered-by-microsoft-energy-data-services/) |-| Interica | Interica OneView™ harnesses the power of application connectors to extract rich metadata from live projects discovered across the organization. IOV scans automatically discover content and extract detailed metadata at the sub-element level. Quickly and easily discover data across multiple file systems and data silos, and clearly determine which projects contain selected data objects to inform business decisions. Live data discovery enables businesses to see a complete holistic view of subsurface project landscapes for improved time to decisions, more efficient data search, and effective storage management. | [Accelerate Microsoft Energy Data Services adoption with Interica OneView™](https://www.petrosys.com.au/interica-oneview-connecting-to-microsoft-data-services/) [Interica OneView™](https://www.petrosys.com.au/assets/Interica_OneView_Accelerate_MEDS_Azure_adoption.pdf) [Interica OneView™ connecting to Microsoft Data Services](https://youtu.be/uPEOo3H01w4)| +| Interica | Interica OneView™ harnesses the power of application connectors to extract rich metadata from live projects discovered across the organization. IOV scans automatically discover content and extract detailed metadata at the sub-element level. Quickly and easily discover data across multiple file systems and data silos, and clearly determine which projects contain selected data objects to inform business decisions. Live data discovery enables businesses to see a complete holistic view of subsurface project landscapes for improved time to decisions, more efficient data search, and effective storage management. | [Accelerate Azure Data Manager for Energy Preview adoption with Interica OneView™](https://www.petrosys.com.au/interica-oneview-connecting-to-microsoft-data-services/) [Interica OneView™](https://www.petrosys.com.au/assets/Interica_OneView_Accelerate_MEDS_Azure_adoption.pdf) [Interica OneView™ connecting to Microsoft Data Services](https://youtu.be/uPEOo3H01w4)| ## Next steps-To learn more about Microsoft Energy Data Services, visit +To learn more about Azure Data Manager for Energy Preview, visit > [!div class="nextstepaction"]-> [What is Microsoft Energy Data Services Preview?](overview-microsoft-energy-data-services.md) +> [What is Azure Data Manager for Energy Preview?](overview-microsoft-energy-data-services.md) |
energy-data-services | Troubleshoot Manifest Ingestion | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/troubleshoot-manifest-ingestion.md | Title: Troubleshoot manifest ingestion in Microsoft Energy Data Services Preview #Required; this page title is displayed in search results; Always include the word "troubleshoot" in this line. + Title: Troubleshoot manifest ingestion in Microsoft Azure Data Manager for Energy Preview #Required; this page title is displayed in search results; Always include the word "troubleshoot" in this line. description: Find out how to troubleshoot manifest ingestion using Airflow task logs #Required; this article description is displayed in search results. Last updated 02/06/2023 # Troubleshoot manifest ingestion issues using Airflow task logs-This article helps you troubleshoot manifest ingestion workflow issues in Microsoft Energy Data Services Preview instance using the Airflow task logs. +This article helps you troubleshoot manifest ingestion workflow issues in Azure Data Manager for Energy Preview instance using the Airflow task logs. ## Manifest ingestion DAG workflow types The Manifest ingestion workflow is of two types: |
energy-data-services | Tutorial Csv Ingestion | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/tutorial-csv-ingestion.md | Title: Microsoft Energy Data Services - Steps to perform a CSV parser ingestion #Required; page title is displayed in search results. Include the brand. + Title: Microsoft Azure Data Manager for Energy Preview - Steps to perform a CSV parser ingestion #Required; page title is displayed in search results. Include the brand. description: This tutorial shows you how to perform CSV parser ingestion #Required; article description that is displayed in search results. -#Customer intent: As a customer, I want to learn how to use CSV parser ingestion so that I can load CSV data into the Microsoft Energy Data Services Preview instance. +#Customer intent: As a customer, I want to learn how to use CSV parser ingestion so that I can load CSV data into the Azure Data Manager for Energy Preview instance. # Tutorial: Sample steps to perform a CSV parser ingestion -CSV Parser ingestion provides the capability to ingest CSV files into the Microsoft Energy Data Services Preview instance. +CSV Parser ingestion provides the capability to ingest CSV files into the Azure Data Manager for Energy Preview instance. In this tutorial, you'll learn how to: > [!div class="checklist"]-> * Ingest a sample wellbore data CSV file into the Microsoft Energy Data Services Preview instance using Postman +> * Ingest a sample wellbore data CSV file into the Azure Data Manager for Energy Preview instance using Postman > * Search for storage metadata records created during the CSV Ingestion using Postman [!INCLUDE [preview features callout](./includes/preview/preview-callout.md)] ## Prerequisites -### Get Microsoft Energy Data Services Preview instance details +### Get Azure Data Manager for Energy Preview instance details -* Microsoft Energy Data Services Preview instance is created already. If not, follow the steps outlined in [Quickstart: Create a Microsoft Energy Data Services Preview instance](quickstart-create-microsoft-energy-data-services-instance.md) +* Azure Data Manager for Energy Preview instance is created already. If not, follow the steps outlined in [Quickstart: Create an Azure Data Manager for Energy Preview instance](quickstart-create-microsoft-energy-data-services-instance.md) * For this tutorial, you will need the following parameters: | Parameter | Value to use | Example | Where to find these values? | In this tutorial, you'll learn how to: | TENANT_ID | Directory (tenant) ID | 72f988bf-86f1-41af-91ab-xxxxxxxxxxxx | Hover over your account name in the Azure portal to get the directory or tenant ID. Alternately, search and select *Azure Active Directory > Properties > Tenant ID* in the Azure portal. | | SCOPE | Application (client) ID | 3dbbbcc2-f28f-44b6-a5ab-xxxxxxxxxxxx | Same as App ID or Client_ID mentioned above | | refresh_token | Refresh Token value | 0.ATcA01-XWHdJ0ES-qDevC6r........... | Follow the [How to Generate a Refresh Token](how-to-generate-refresh-token.md) to create a refresh token and save it. This refresh token is required later to generate a user token. 
|- | DNS | URI | `<instance>`.energy.azure.com | Overview page of Microsoft Energy Data Services instance| - | data-partition-id | Data Partition(s) | `<instance>`-`<data-partition-name>` | Overview page of Microsoft Energy Data Services instance| + | DNS | URI | `<instance>`.energy.Azure.com | Overview page of Azure Data Manager for Energy Preview instance| + | data-partition-id | Data Partition(s) | `<instance>`-`<data-partition-name>` | Overview page of Azure Data Manager for Energy Preview instance| * Follow the [Manage users](how-to-manage-users.md) guide to add appropriate entitlements for the user running this tutorial In this tutorial, you'll learn how to: > [!NOTE] > To import the Postman collection and environment variables, follow the steps outlined in [Importing data into Postman](https://learning.postman.com/docs/getting-started/importing-and-exporting-data/#importing-data-into-postman) -* Update the **CURRENT_VALUE** of the Postman environment with the information obtained in [Microsoft Energy Data Services Preview instance details](#get-microsoft-energy-data-services-preview-instance-details) +* Update the **CURRENT_VALUE** of the Postman environment with the information obtained in [Azure Data Manager for Energy Preview instance details](#get-azure-data-manager-for-energy-preview-instance-details) * The Postman collection for CSV parser ingestion contains a total of 10 requests, which have to be executed in a sequential manner. * Make sure to choose the **Ingestion Workflow Environment** before triggering the Postman collection. :::image type="content" source="media/tutorial-csv-ingestion/tutorial-postman-choose-environment.png" alt-text="Screenshot of the postman environment." lightbox="media/tutorial-csv-ingestion/tutorial-postman-choose-environment.png"::: In this tutorial, you'll learn how to: :::image type="content" source="media/tutorial-csv-ingestion/tutorial-postman-test-failure.png" alt-text="Screenshot of a failure postman call." lightbox="media/tutorial-csv-ingestion/tutorial-postman-test-failure.png"::: -## Ingest a sample wellbore data CSV file into the Microsoft Energy Data Services Preview instance using Postman +## Ingest a sample wellbore data CSV file into the Azure Data Manager for Energy Preview instance using Postman Using the given Postman collection, you could execute the following steps to ingest the wellbore data: 1. **Get a user token** - Generate the User token, which will be used to authenticate further API calls. 2. **Create a schema** - Generate a schema that adheres to the columns present in the CSV file |
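The "Get a user token" step in this collection is a standard Azure AD v2.0 refresh-token exchange, so it can also be sketched outside Postman. In the hedged example below, TENANT_ID, SCOPE, and REFRESH_TOKEN are the parameters from the table above, and the `/.default` scope suffix is an assumption; use whatever the Postman environment defines.

```bash
# Sketch: exchange the saved refresh token for a user access token against the
# Azure AD v2.0 token endpoint. SCOPE holds the application (client) ID per the
# parameter table; the "/.default" suffix is an assumption.
curl --request POST "https://login.microsoftonline.com/${TENANT_ID}/oauth2/v2.0/token" \
  --header "Content-Type: application/x-www-form-urlencoded" \
  --data-urlencode "grant_type=refresh_token" \
  --data-urlencode "client_id=${SCOPE}" \
  --data-urlencode "scope=${SCOPE}/.default" \
  --data-urlencode "refresh_token=${REFRESH_TOKEN}"
```

The `access_token` field of the JSON response is the bearer token the later requests in the collection expect.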
energy-data-services | Tutorial Manifest Ingestion | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/tutorial-manifest-ingestion.md | Title: Microsoft Energy Data Services - Steps to perform a manifest-based file ingestion #Required; page title is displayed in search results. Include the brand. + Title: Microsoft Azure Data Manager for Energy Preview - Steps to perform a manifest-based file ingestion #Required; page title is displayed in search results. Include the brand. description: This tutorial shows you how to perform Manifest ingestion #Required; article description that is displayed in search results. -#Customer intent: As a customer, I want to learn how to use manifest ingestion so that I can load manifest information into the Microsoft Energy Data Services Preview instance. +#Customer intent: As a customer, I want to learn how to use manifest ingestion so that I can load manifest information into the Azure Data Manager for Energy Preview instance. # Tutorial: Sample steps to perform a manifest-based file ingestion -Manifest ingestion provides the capability to ingest manifests into Microsoft Energy Data Services Preview instance +Manifest ingestion provides the capability to ingest manifests into Azure Data Manager for Energy Preview instance In this tutorial, you will learn how to: > [!div class="checklist"]-> * Ingest sample manifests into the Microsoft Energy Data Services Preview instance using Postman +> * Ingest sample manifests into the Azure Data Manager for Energy Preview instance using Postman > * Search for storage metadata records created during the manifest ingestion using Postman [!INCLUDE [preview features callout](./includes/preview/preview-callout.md)] In this tutorial, you will learn how to: ## Prerequisites Before beginning this tutorial, the following prerequisites must be completed:-### Get Microsoft Energy Data Services Preview instance details +### Get Azure Data Manager for Energy Preview instance details -* Microsoft Energy Data Services Preview instance is created already. If not, follow the steps outlined in [Quickstart: Create a Microsoft Energy Data Services Preview instance](quickstart-create-microsoft-energy-data-services-instance.md) +* Azure Data Manager for Energy Preview instance is created already. If not, follow the steps outlined in [Quickstart: Create an Azure Data Manager for Energy Preview instance](quickstart-create-microsoft-energy-data-services-instance.md) * For this tutorial, you will need the following parameters: | Parameter | Value to use | Example | Where to find these values? | Before beginning this tutorial, the following prerequisites must be completed: | TENANT_ID | Directory (tenant) ID | 72f988bf-86f1-41af-91ab-xxxxxxxxxxxx | Hover over your account name in the Azure portal to get the directory or tenant ID. Alternately, search and select *Azure Active Directory > Properties > Tenant ID* in the Azure portal. | | SCOPE | Application (client) ID | 3dbbbcc2-f28f-44b6-a5ab-xxxxxxxxxxxx | Same as App ID or Client_ID mentioned above | | refresh_token | Refresh Token value | 0.ATcA01-XWHdJ0ES-qDevC6r........... | Follow the [How to Generate a Refresh Token](how-to-generate-refresh-token.md) to create a refresh token and save it. This refresh token is required later to generate a user token. 
|- | DNS | URI | `<instance>`.energy.Azure.com | Overview page of Microsoft Energy Data Services instance| - | data-partition-id | Data Partition(s) | `<instance>`-`<data-partition-name>` | Overview page of Microsoft Energy Data Services instance| + | DNS | URI | `<instance>`.energy.Azure.com | Overview page of Azure Data Manager for Energy Preview instance| + | data-partition-id | Data Partition(s) | `<instance>`-`<data-partition-name>` | Overview page of Azure Data Manager for Energy Preview instance| * Follow the [Manage users](how-to-manage-users.md) guide to add appropriate entitlements for the user running this tutorial Before beginning this tutorial, the following prerequisites must be completed: * [Manifest Ingestion postman environment](https://raw.githubusercontent.com/microsoft/meds-samples/main/postman/IngestionWorkflowEnvironment.postman_environment.json) > [!NOTE] > To import the Postman collection and environment variables, follow the steps outlined in [Importing data into Postman](https://learning.postman.com/docs/getting-started/importing-and-exporting-data/#importing-data-into-postman)-* Update the **CURRENT_VALUE** of the postman environment with the information obtained in [Get Microsoft Energy Data Services Preview instance details](#get-microsoft-energy-data-services-preview-instance-details) +* Update the **CURRENT_VALUE** of the postman environment with the information obtained in [Get Azure Data Manager for Energy Preview instance details](#get-azure-data-manager-for-energy-preview-instance-details) * The Postman collection for manifest ingestion contains multiple requests, which will have to be executed in a sequential manner. * Make sure to choose the **Ingestion Workflow Environment** before triggering the Postman collection. :::image type="content" source="media/tutorial-manifest-ingestion/tutorial-postman-choose-environment.png" alt-text="Screenshot of the Postman environment." lightbox="media/tutorial-manifest-ingestion/tutorial-postman-choose-environment.png"::: Before beginning this tutorial, the following prerequisites must be completed: :::image type="content" source="media/tutorial-manifest-ingestion/tutorial-postman-test-failure.png" alt-text="Screenshot of a failure Postman call." lightbox="media/tutorial-manifest-ingestion/tutorial-postman-test-failure.png"::: -## Ingest sample manifests into the Microsoft Energy Data Services Preview instance using Postman +## Ingest sample manifests into the Azure Data Manager for Energy Preview instance using Postman 1. **Get a user token** - Generate the User token, which will be used to authenticate further API calls. 2. **Create a legal tag** - Create a legal tag that will be added to the Manifest data for data compliance purpose |
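The "Create a legal tag" step referenced above goes to the OSDU Legal API. The sketch below is illustrative only and relies on assumptions: the property names and values follow the common OSDU legal-tag shape, but the exact request for your instance is the one in the Postman collection.

```bash
# Illustrative sketch only; the Postman collection contains the exact request.
# DNS and DATA_PARTITION_ID come from the parameter table; property values are
# placeholder assumptions and must match your compliance requirements.
curl --request POST "https://${DNS}/api/legal/v1/legaltags" \
  --header "Authorization: Bearer ${ACCESS_TOKEN}" \
  --header "data-partition-id: ${DATA_PARTITION_ID}" \
  --header "Content-Type: application/json" \
  --data '{
    "name": "demo-manifest-legal-tag",
    "description": "Legal tag for the manifest ingestion tutorial",
    "properties": {
      "countryOfOrigin": ["US"],
      "contractId": "A1234",
      "expirationDate": "2025-12-31",
      "originator": "Contoso",
      "dataType": "Public Domain Data",
      "securityClassification": "Public",
      "personalData": "No Personal Data",
      "exportClassification": "EAR99"
    }
  }'
```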
energy-data-services | Tutorial Seismic Ddms Sdutil | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/tutorial-seismic-ddms-sdutil.md | Title: Microsoft Energy Data Services Preview - Seismic store sdutil tutorial #Required; page title is displayed in search results. Include the brand. + Title: Microsoft Azure Data Manager for Energy Preview - Seismic store sdutil tutorial #Required; page title is displayed in search results. Include the brand. description: Information on setting up and using sdutil, a command-line interface (CLI) tool that allows users to easily interact with seismic store. #Required; article description that is displayed in search results. Windows - [64-bit Python 3.8.3](https://www.python.org/ftp/python/3.8.3/python-3.8.3-amd64.exe) - [Microsoft C++ Build Tools](https://visualstudio.microsoft.com/visual-cpp-build-tools/)-- [Linux Subsystem Ubuntu](https://learn.microsoft.com/windows/wsl/install)+- [Linux Subsystem Ubuntu](/windows/wsl/install) Linux Run the changelog script (`./changelog-generator.sh`) to automatically generate ./scripts/changelog-generator.sh ``` -## Usage for Microsoft Energy Data Services +## Usage for Azure Data Manager for Energy Preview -Microsoft Energy Data Services instance is using OSDU™ M12 Version of sdutil. Follow the below steps if you would like to use SDUTIL to leverage the SDMS API of your MEDS instance. +Azure Data Manager for Energy Preview instance is using OSDU™ M12 Version of sdutil. Follow the below steps if you would like to use SDUTIL to leverage the SDMS API of your Azure Data Manager for Energy instance. 1. Ensure you have followed the [installation](#prerequisites) and [configuration](#configuration) steps from above. This includes downloading the SDUTIL source code, configuring your Python virtual environment, editing the `config.yaml` file and setting your three environment variables. |
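The virtual-environment setup mentioned in the sdutil prerequisites is plain Python tooling. A minimal sketch, assuming the dependency list ships as `requirements.txt` in the sdutil source folder:

```bash
# Minimal sketch of the Python environment setup for sdutil.
# The requirements file name is an assumption; use the file the repo ships with.
python3 -m venv sdutilenv
source sdutilenv/bin/activate
pip install -r requirements.txt
```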
energy-data-services | Tutorial Seismic Ddms | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/tutorial-seismic-ddms.md | Title: Tutorial - Sample steps to interact with Seismic DDMS in Microsoft Energy Data Services #Required; page title is displayed in search results. Include the brand. -description: This tutorial shows you how to interact with Seismic DDMS Microsoft Energy Data Services #Required; article description that is displayed in search results. + Title: Tutorial - Sample steps to interact with Seismic DDMS in Microsoft Azure Data Manager for Energy Preview #Required; page title is displayed in search results. Include the brand. +description: This tutorial shows you how to interact with Seismic DDMS Azure Data Manager for Energy Preview #Required; article description that is displayed in search results. -Seismic DDMS provides the capability to operate on seismic data in the Microsoft Energy Data Services instance. +Seismic DDMS provides the capability to operate on seismic data in the Azure Data Manager for Energy Preview instance. In this tutorial, you will learn how to: In this tutorial, you will learn how to: [!INCLUDE [preview features callout](./includes/preview/preview-callout.md)] ## Prerequisites -### Microsoft Energy Data Services instance details +### Azure Data Manager for Energy Preview instance details -* Once the [Microsoft Energy Data Services instance](./quickstart-create-microsoft-energy-data-services-instance.md) is created, note down the following details: +* Once the [Azure Data Manager for Energy Preview instance](./quickstart-create-microsoft-energy-data-services-instance.md) is created, note down the following details: | Parameter | Value to use | Example | | | |-- | In this tutorial, you will learn how to: * [Smoke test Postman collection](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/raw/master/source/ddms-smoke-tests/Azure%20DDMS%20OSDU%20Smoke%20Tests.postman_collection.json) * [Smoke Test Environment](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/raw/master/source/ddms-smoke-tests/%5BShip%5D%20osdu-glab.msft-osdu-test.org.postman_environment.json) -3. Update the **CURRENT_VALUE** of the Postman Environment with the information obtained in [Microsoft Energy Data Services instance details](#microsoft-energy-data-services-instance-details) +3. Update the **CURRENT_VALUE** of the Postman Environment with the information obtained in [Azure Data Manager for Energy Preview instance details](#azure-data-manager-for-energy-preview-instance-details) ## Register data partition to seismic |
energy-data-services | Tutorial Well Delivery Ddms | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/tutorial-well-delivery-ddms.md | Title: Tutorial - Work with well data records by using Well Delivery DDMS APIs -description: Learn how to work with well data records in your Microsoft Energy Data Services Preview instance by using Well Delivery Domain Data Management Services (Well Delivery DDMS) APIs in Postman. +description: Learn how to work with well data records in your Azure Data Manager for Energy Preview instance by using Well Delivery Domain Data Management Services (Well Delivery DDMS) APIs in Postman. -Use Well Delivery Domain Data Management Services (Well Delivery DDMS) APIs in Postman to work with well data in your instance of Microsoft Energy Data Services Preview. +Use Well Delivery Domain Data Management Services (Well Delivery DDMS) APIs in Postman to work with well data in your instance of Azure Data Manager for Energy Preview. In this tutorial, you'll learn how to: > [!div class="checklist"] For more information about DDMS, see [DDMS concepts](concepts-ddms.md). ## Prerequisites - An Azure subscription-- An instance of [Microsoft Energy Data Services Preview](quickstart-create-microsoft-energy-data-services-instance.md) created in your Azure subscription+- An instance of [Azure Data Manager for Energy Preview](quickstart-create-microsoft-energy-data-services-instance.md) created in your Azure subscription -## Get your Microsoft Energy Data Services instance details +## Get your Azure Data Manager for Energy Preview instance details -The first step is to get the following information from your [Microsoft Energy Data Services Preview instance](quickstart-create-microsoft-energy-data-services-instance.md) in the [Azure portal](https://portal.azure.com/?microsoft_azure_marketplace_ItemHideKey=Microsoft_Azure_OpenEnergyPlatformHidden): +The first step is to get the following information from your [Azure Data Manager for Energy Preview instance](quickstart-create-microsoft-energy-data-services-instance.md) in the [Azure portal](https://portal.azure.com/?microsoft_azure_marketplace_ItemHideKey=Microsoft_Azure_OpenEnergyPlatformHidden): | Parameter | Value | Example | | | |-- | Next, set up Postman: :::image type="content" source="media/tutorial-well-delivery/postman-import-files.png" alt-text="Screenshot that shows importing collection and environment files in Postman." lightbox="media/tutorial-well-delivery/postman-import-files.png"::: -1. In the Postman environment, update **CURRENT VALUE** with the information from your [Microsoft Energy Data Services instance](#get-your-microsoft-energy-data-services-instance-details): +1. In the Postman environment, update **CURRENT VALUE** with the information from your [Azure Data Manager for Energy Preview instance](#get-your-azure-data-manager-for-energy-preview-instance-details): 1. In Postman, in the left menu, select **Environments**, and then select **WellDelivery Environment**. - 1. In the **CURRENT VALUE** column, enter the information that's described in the table in [Get your Microsoft Energy Data Services instance details](#get-your-microsoft-energy-data-services-instance-details). Scroll to see all relevant variables. + 1. In the **CURRENT VALUE** column, enter the information that's described in the table in [Get your Azure Data Manager for Energy Preview instance details](#get-your-azure-data-manager-for-energy-preview-instance-details). Scroll to see all relevant variables. 
:::image type="content" source="media/tutorial-well-delivery/postman-environment-current-values.png" alt-text="Screenshot that shows where to enter current values in the Well Delivery DDMS environment."::: ## Send a Postman request -The Postman collection for Well Delivery DDMS contains requests you can use to interact with data about wells, wellbores, well logs, and well trajectory data in your Microsoft Energy Data Services instance. +The Postman collection for Well Delivery DDMS contains requests you can use to interact with data about wells, wellbores, well logs, and well trajectory data in your Azure Data Manager for Energy Preview instance. For an example of how to send a Postman request, see the [Wellbore DDMS tutorial](tutorial-wellbore-ddms.md#send-an-example-postman-request). In the next sections, generate a token, and then use it to work with Well Delive To generate a token: -1. Import the following cURL command in Postman to generate a bearer token. Use the values from your Microsoft Energy Data Services instance. +1. Import the following cURL command in Postman to generate a bearer token. Use the values from your Azure Data Manager for Energy Preview instance. ```bash curl --location --request POST 'https://login.microsoftonline.com/{{TENANT_ID}}/oauth2/v2.0/token' \ To generate a token: ## Use Well Delivery DDMS APIs to work with well data records -Successfully completing the Postman requests that are described in the following Well Delivery DDMS APIs indicates successful ingestion and retrieval of well records in your Microsoft Energy Data Services instance. +Successfully completing the Postman requests that are described in the following Well Delivery DDMS APIs indicates successful ingestion and retrieval of well records in your Azure Data Manager for Energy Preview instance. ### Create a well Method: GET ### Delete a wellbore record -You can delete a wellbore record in your Microsoft Energy Data Services instance by using Well Delivery DDMS APIs. For example: +You can delete a wellbore record in your Azure Data Manager for Energy Preview instance by using Well Delivery DDMS APIs. For example: :::image type="content" source="media/tutorial-well-delivery/postman-api-delete-well-bore.png" alt-text="Screenshot that shows how to use an API to delete a wellbore record."::: ### Delete a well record -You can delete a well record in your Microsoft Energy Data Services instance by using Well Delivery DDMS APIs. For example: +You can delete a well record in your Azure Data Manager for Energy Preview instance by using Well Delivery DDMS APIs. For example: :::image type="content" source="media/tutorial-well-delivery/postman-api-delete-well.png" alt-text="Screenshot that shows how to use an API to delete a well record."::: |
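The bearer-token step in the Well Delivery tutorial above is a standard Azure AD v2.0 client-credentials request. The sketch below is a hedged example rather than the exact command from the article; the `/.default` scope format is an assumption, so match it to the values in the Postman environment.

```bash
# Sketch: request a service principal (client credentials) bearer token.
# CLIENT_ID, CLIENT_SECRET, and TENANT_ID are the app registration values;
# the scope format is an assumption.
curl --location --request POST "https://login.microsoftonline.com/${TENANT_ID}/oauth2/v2.0/token" \
  --header "Content-Type: application/x-www-form-urlencoded" \
  --data-urlencode "grant_type=client_credentials" \
  --data-urlencode "client_id=${CLIENT_ID}" \
  --data-urlencode "client_secret=${CLIENT_SECRET}" \
  --data-urlencode "scope=${CLIENT_ID}/.default"
```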
energy-data-services | Tutorial Wellbore Ddms | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/tutorial-wellbore-ddms.md | Title: Tutorial - Work with well data records by using Wellbore DDMS APIs -description: Learn how to work with well data records in your Microsoft Energy Data Services Preview instance by using Wellbore Domain Data Management Services (Wellbore DDMS) APIs in Postman. +description: Learn how to work with well data records in your Azure Data Manager for Energy Preview instance by using Wellbore Domain Data Management Services (Wellbore DDMS) APIs in Postman. -Use Wellbore Domain Data Management Services (Wellbore DDMS) APIs in Postman to work with well data in your instance of Microsoft Energy Data Services Preview. +Use Wellbore Domain Data Management Services (Wellbore DDMS) APIs in Postman to work with well data in your instance of Azure Data Manager for Energy Preview. In this tutorial, you'll learn how to: > [!div class="checklist"] For more information about DDMS, see [DDMS concepts](concepts-ddms.md). ## Prerequisites - An Azure subscription-- An instance of [Microsoft Energy Data Services Preview](quickstart-create-microsoft-energy-data-services-instance.md) created in your Azure subscription.+- An instance of [Azure Data Manager for Energy Preview](quickstart-create-microsoft-energy-data-services-instance.md) created in your Azure subscription. -## Get your Microsoft Energy Data Services instance details +## Get your Azure Data Manager for Energy Preview instance details -The first step is to get the following information from your [Microsoft Energy Data Services Preview instance](quickstart-create-microsoft-energy-data-services-instance.md) in the [Azure portal](https://portal.azure.com/?microsoft_azure_marketplace_ItemHideKey=Microsoft_Azure_OpenEnergyPlatformHidden): +The first step is to get the following information from your [Azure Data Manager for Energy Preview instance](quickstart-create-microsoft-energy-data-services-instance.md) in the [Azure portal](https://portal.azure.com/?microsoft_azure_marketplace_ItemHideKey=Microsoft_Azure_OpenEnergyPlatformHidden): | Parameter | Value | Example | | | |-- | Next, set up Postman: :::image type="content" source="media/tutorial-wellbore-ddms/postman-import-files.png" alt-text="Screenshot that shows importing collection and environment files in Postman." lightbox="media/tutorial-wellbore-ddms/postman-import-files.png"::: -1. In the Postman environment, update **CURRENT VALUE** with the information from your [Microsoft Energy Data Services instance](#get-your-microsoft-energy-data-services-instance-details): +1. In the Postman environment, update **CURRENT VALUE** with the information from your [Azure Data Manager for Energy Preview instance details](#get-your-azure-data-manager-for-energy-preview-instance-details). 1. In Postman, in the left menu, select **Environments**, and then select **Wellbore DDMS Environment**. - 1. In the **CURRENT VALUE** column, enter the information that's described in the table in [Get your Microsoft Energy Data Services instance details](#get-your-microsoft-energy-data-services-instance-details). + 1. In the **CURRENT VALUE** column, enter the information that's described in the table in [Get your Azure Data Manager for Energy Preview instance details](#get-your-azure-data-manager-for-energy-preview-instance-details). 
:::image type="content" source="media/tutorial-wellbore-ddms/postman-environment-current-values.png" alt-text="Screenshot that shows where to enter current values in the Wellbore DDMS environment."::: ## Send an example Postman request -The Postman collection for Wellbore DDMS contains requests you can use to interact with data about wells, wellbores, well logs, and well trajectory data in your Microsoft Energy Data Services instance. +The Postman collection for Wellbore DDMS contains requests you can use to interact with data about wells, wellbores, well logs, and well trajectory data in your Azure Data Manager for Energy Preview instance. 1. In Postman, in the left menu, select **Collections**, and then select **Wellbore DDMS**. Under **Setup**, select **Get an SPN Token**. The Postman collection for Wellbore DDMS contains requests you can use to intera To generate a token: -1. Import the following cURL command in Postman to generate a bearer token. Use the values from your Microsoft Energy Data Services instance. +1. Import the following cURL command in Postman to generate a bearer token. Use the values from your Azure Data Manager for Energy Preview instance. ```bash curl --location --request POST 'https://login.microsoftonline.com/{{TENANT_ID}}/oauth2/v2.0/token' \ To generate a token: ## Use Wellbore DDMS APIs to work with well data records -Successfully completing the Postman requests that are described in the following Wellbore DDMS APIs indicates successful ingestion and retrieval of well records in your Microsoft Energy Data Services instance. +Successfully completing the Postman requests that are described in the following Wellbore DDMS APIs indicates successful ingestion and retrieval of well records in your Azure Data Manager for Energy Preview instance. ### Create a legal tag For more information, see [Manage legal tags](how-to-manage-legal-tags.md). ### Create a well -Create a well record in your Microsoft Energy Data Services instance. +Create a well record in your Azure Data Manager for Energy Preview instance. API: **Well** > **Create Well**. Method: POST ### Get a well record -Get the well record data for your Microsoft Energy Data Services instance. +Get the well record data for your Azure Data Manager for Energy Preview instance. API: **Well** > **Well** Method: GET ### Get well versions -Get the versions of each ingested well record in your Microsoft Energy Data Services instance. +Get the versions of each ingested well record in your Azure Data Manager for Energy Preview instance. API: **Well** > **Well versions** Method: GET ### Get a specific well version -Get the details of a specific version for a specific well record in your Microsoft Energy Data Services instance. +Get the details of a specific version for a specific well record in your Azure Data Manager for Energy Preview instance. API: **Well** > **Well Specific version** Method: GET ### Delete a well record -Delete a specific well record from your Microsoft Energy Data Services instance. +Delete a specific well record from your Azure Data Manager for Energy Preview instance. API: **Clean up** > **Well Record** |
event-hubs | Event Hubs Capture Enable Through Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-capture-enable-through-portal.md | Follow [Create a storage account](../storage/common/storage-account-create.md?ta 1. On the **Review + create** page, review settings, and select **Create** to create the event hub. > [!NOTE]- > The container you create in a Azure Data Lake Storage Gen 2 using this user interface (UI) is shown under **File systems** in **Storage Explorer**. Similarly, the file system you create in a Data Lake Storage Gen 2 account shows up as a container in this UI. + > The container you create in an Azure Data Lake Storage Gen 2 using this user interface (UI) is shown under **File systems** in **Storage Explorer**. Similarly, the file system you create in a Data Lake Storage Gen 2 account shows up as a container in this UI. ## Capture data to Azure Data Lake Storage Gen 1 |
event-hubs | Event Hubs Dedicated Cluster Create Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-dedicated-cluster-create-portal.md | Title: Create an Event Hubs dedicated cluster using the Azure portal + Title: Create an Event Hubs Dedicated cluster using the Azure portal description: In this quickstart, you learn how to create an Azure Event Hubs cluster using Azure portal. Last updated 02/07/2023 -# Quickstart: Create a dedicated Azure Event Hubs cluster using Azure portal -Event Hubs clusters offer **single-tenant deployments** for customers with the most demanding streaming needs. This offering has a guaranteed **99.99%** SLA, which is available only in our dedicated pricing tier. An [Event Hubs cluster](event-hubs-dedicated-overview.md) can ingress millions of events per second with guaranteed capacity and subsecond latency. Namespaces and event hubs created within a cluster include all features of the premium offering and more, but without any ingress limits. The dedicated offering also includes the popular [Event Hubs Capture](event-hubs-capture-overview.md) feature at no additional cost, allowing you to automatically batch and log data streams to [Azure Blob Storage](../storage/blobs/storage-blobs-introduction.md) or [Azure Data Lake Storage Gen 1](../data-lake-store/data-lake-store-overview.md). +# Quickstart: Create a Dedicated Azure Event Hubs cluster using Azure portal +Event Hubs clusters offer **single-tenant deployments** for customers with the most demanding streaming needs. This offering has a guaranteed **99.99%** SLA, which is available only in our Dedicated pricing tier. An [Event Hubs cluster](event-hubs-dedicated-overview.md) can ingress millions of events per second with guaranteed capacity and subsecond latency. Namespaces and event hubs created within a cluster include all features of the premium offering and more, but without any ingress limits. The Dedicated offering also includes the popular [Event Hubs Capture](event-hubs-capture-overview.md) feature at no additional cost, allowing you to automatically batch and log data streams to [Azure Blob Storage](../storage/blobs/storage-blobs-introduction.md) or [Azure Data Lake Storage Gen 1](../data-lake-store/data-lake-store-overview.md). Dedicated clusters are provisioned and billed by **capacity units (CUs)**, a pre-allocated amount of CPU and memory resources. You can purchase up to 10 CUs for a cluster in the Azure portal. If you need a cluster larger than 10 CU, you can submit an Azure support request to scale up your cluster after its creation. In this quickstart, we'll walk you through creating a 1 CU Event Hubs cluster through the Azure portal. > [!NOTE]-> - The dedicated tier isn't available in all regions. Try to create a dedicated cluster in the Azure portal and see supported regions in the **Location** drop-down list on the **Create Event Hubs Cluster** page. -> - This [Azure Portal](https://aka.ms/eventhubsclusterquickstart) self-serve experience is currently in **preview**. If you have any questions about the dedicated offering, reach out to the [Event Hubs team](mailto:askeventhubs@microsoft.com). +> - The Dedicated tier isn't available in all regions. Try to create a Dedicated cluster in the Azure portal and see supported regions in the **Location** drop-down list on the **Create Event Hubs Cluster** page. +> - This [Azure Portal](https://aka.ms/eventhubsclusterquickstart) self-serve experience is currently in **preview**. 
If you have any questions about the Dedicated offering, reach out to the [Event Hubs team](mailto:askeventhubs@microsoft.com). ## Prerequisites To create a cluster in your resource group using the Azure portal, complete the :::image type="content" source="./media/event-hubs-dedicated-cluster-create-portal/create-namespace-cluster-page.png" alt-text="Image showing the Create namespace in the cluster page."::: 3. Once your namespace is created, you can [create an event hub](event-hubs-create.md#create-an-event-hub) as you would normally create one within a namespace. -## Scale Event Hubs dedicated cluster +## Scale a Dedicated cluster For clusters created with the **Support Scaling** option set, use the following steps to scale out or scale in your cluster. -1. On the **Event Hubs Cluster** page for your dedicated cluster, select **Scale** on the left menu. +1. On the **Event Hubs Cluster** page for your Dedicated cluster, select **Scale** on the left menu. :::image type="content" source="./media/event-hubs-dedicated-cluster-create-portal/scale-page.png" alt-text="Screenshot showing the Scale tab of the Event Hubs Cluster page."::: 1. Use the slider to increase (scale out) or decrease (scale in) capacity units assigned to the cluster. For clusters created with the **Support Scaling** option set, use the following 5. For **Problem type**, select **Quota or Configuration changes**. 6. For **Problem subtype**, select one of the following values from the drop-down list: 1. Select **Dedicated Cluster SKU requests** to request for the feature to be supported in your region.- 2. Select **Scale up or down a dedicated Cluster** if you want to scale up or scale down your dedicated cluster. + 2. Select **Scale up or down a Dedicated Cluster** if you want to scale up or scale down your Dedicated cluster. 7. For **Subject**, describe the issue.  - ## Delete a dedicated cluster + ## Delete a Dedicated cluster 1. To delete the cluster, select **Delete** from the toolbar on the **Event Hubs Cluster** page for your cluster. |
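Cluster creation can also be scripted instead of using the portal flow described above. The following Azure CLI sketch assumes the `az eventhubs cluster` command group is available in your CLI version; verify the parameter names with `az eventhubs cluster create --help` before relying on them.

```bash
# Sketch: create a 1 CU Event Hubs Dedicated cluster with the Azure CLI.
# The --capacity flag is an assumption to verify with --help.
az eventhubs cluster create \
  --resource-group my-rg \
  --name my-dedicated-cluster \
  --location eastus \
  --capacity 1
```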
external-attack-surface-management | Deploying The Defender Easm Azure Resource | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/deploying-the-defender-easm-azure-resource.md | Before you create a Defender EASM resource group, we recommend that you are fami - swedencentral - eastasia - japaneast+ - westeurope  |
firewall | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/overview.md | Azure Firewall Standard has the following known issues: |FQDN tags require a protocol: port to be set|Application rules with FQDN tags require port: protocol definition.|You can use **https** as the port: protocol value. We're working to make this field optional when FQDN tags are used.| |Moving a firewall to a different resource group or subscription isn't supported|Moving a firewall to a different resource group or subscription isn't supported.|Supporting this functionality is on our road map. To move a firewall to a different resource group or subscription, you must delete the current instance and recreate it in the new resource group or subscription.| |Threat intelligence alerts may get masked|Network rules with destination 80/443 for outbound filtering masks threat intelligence alerts when configured to alert only mode.|Create outbound filtering for 80/443 using application rules. Or, change the threat intelligence mode to **Alert and Deny**.|-|Azure Firewall DNAT doesn't work for private IP destinations|Azure Firewall DNAT support is limited to Internet egress/ingress. DNAT doesn't currently work for private IP destinations. For example, spoke to spoke.|This is a current limitation.| +|Azure Firewall DNAT doesn't work for private IP destinations|Azure Firewall DNAT support is limited to Internet egress/ingress. DNAT doesn't currently work for private IP destinations. For example, spoke to spoke.|A fix is being investigated.| |Availability zones can only be configured during deployment.|Availability zones can only be configured during deployment. You can't configure Availability Zones after a firewall has been deployed.|This is by design.| |SNAT on inbound connections|In addition to DNAT, connections via the firewall public IP address (inbound) are SNATed to one of the firewall private IPs. This requirement today (also for Active/Active NVAs) to ensure symmetric routing.|To preserve the original source for HTTP/S, consider using [XFF](https://en.wikipedia.org/wiki/X-Forwarded-For) headers. For example, use a service such as [Azure Front Door](../frontdoor/front-door-http-headers-protocol.md#from-the-front-door-to-the-backend) or [Azure Application Gateway](../application-gateway/rewrite-http-headers-url.md) in front of the firewall. You can also add WAF as part of Azure Front Door and chain to the firewall. |SQL FQDN filtering support only in proxy mode (port 1433)|For Azure SQL Database, Azure Synapse Analytics, and Azure SQL Managed Instance:<br><br>SQL FQDN filtering is supported in proxy-mode only (port 1433).<br><br>For Azure SQL IaaS:<br><br>If you're using non-standard ports, you can specify those ports in the application rules.|For SQL in redirect mode (the default if connecting from within Azure), you can instead filter access using the SQL service tag as part of Azure Firewall network rules. |
firewall | Threat Intel | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/threat-intel.md | Title: Azure Firewall threat intelligence based filtering -description: Threat intelligence-based filtering can be enabled for your firewall to alert and deny traffic from/to known malicious IP addresses and domains. +description: You can enable Threat intelligence-based filtering for your firewall to alert and deny traffic from/to known malicious IP addresses and domains. -Threat intelligence-based filtering can be enabled for your firewall to alert and deny traffic from/to known malicious IP addresses, FQDNs, and URLs. The IP addresses, domains and URLs are sourced from the Microsoft Threat Intelligence feed, which includes multiple sources including the Microsoft Cyber Security team. [Intelligent Security Graph](https://www.microsoft.com/security/operations/intelligence) powers Microsoft threat intelligence and is used by multiple services including Microsoft Defender for Cloud.<br> +You can enable Threat intelligence-based filtering for your firewall to alert and deny traffic from/to known malicious IP addresses, FQDNs, and URLs. The IP addresses, domains and URLs are sourced from the Microsoft Threat Intelligence feed, which includes multiple sources including the Microsoft Cyber Security team. [Intelligent Security Graph](https://www.microsoft.com/security/operations/intelligence) powers Microsoft threat intelligence and uses multiple services including Microsoft Defender for Cloud.<br> <br> :::image type="content" source="media/threat-intel/firewall-threat.png" alt-text="Firewall threat intelligence" border="false"::: -If you've enabled threat intelligence-based filtering, the associated rules are processed before any of the NAT rules, network rules, or application rules. +If you've enabled threat intelligence-based filtering, the firewall processes the associated rules before any of the NAT rules, network rules, or application rules. -You can choose to just log an alert when a rule is triggered, or you can choose alert and deny mode. +When a rule triggers, you can choose to just log an alert, or you can choose alert and deny mode. -By default, threat intelligence-based filtering is enabled in alert mode. You can't turn off this feature or change the mode until the portal interface becomes available in your region. +By default, threat intelligence-based filtering is in alert mode. You can't turn off this feature or change the mode until the portal interface becomes available in your region. -You can define allowlists so threat intelligence won't filter traffic to any of the listed FQDNs, IP addresses, ranges, or subnets. +You can define allowlists so threat intelligence doesn't filter traffic to any of the listed FQDNs, IP addresses, ranges, or subnets. For a batch operation, you can upload a CSV file with list of IP addresses, ranges, and subnets. The following log excerpt shows a triggered rule: ## Testing -- **Outbound testing** - Outbound traffic alerts should be a rare occurrence, as it means that your environment has been compromised. To help test outbound alerts are working, a test FQDN has been created that triggers an alert. Use `testmaliciousdomain.eastus.cloudapp.azure.com` for your outbound tests.+- **Outbound testing** - Outbound traffic alerts should be a rare occurrence, as it means that your environment is compromised. To help test outbound alerts are working, a test FQDN exists that triggers an alert. 
Use `testmaliciousdomain.eastus.cloudapp.azure.com` for your outbound tests. -- **Inbound testing** - You can expect to see alerts on incoming traffic if DNAT rules are configured on the firewall. You'll see alerts even if only specific sources are allowed on the DNAT rule and traffic is otherwise denied. Azure Firewall doesn't alert on all known port scanners; only on scanners that are known to also engage in malicious activity.+ To prepare for your tests and to ensure you don't get a DNS resolution failure, configure the following items: ++ - Add a dummy record to the hosts file on your test computer. For example, on a computer running Windows, you could add `1.2.3.4 testmaliciousdomain.eastus.cloudapp.azure.com` to the `C:\Windows\System32\drivers\etc\hosts` file. + - Ensure that the tested HTTP/S request is allowed using an application rule, not a network rule. ++- **Inbound testing** - You can expect to see alerts on incoming traffic if the firewall has DNAT rules configured. You'll see alerts even if the firewall only allows specific sources on the DNAT rule and traffic is otherwise denied. Azure Firewall doesn't alert on all known port scanners; only on scanners that also engage in malicious activity. ## Next steps |
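On a Linux test client, the outbound-test preparation described above might look like the following sketch. The `1.2.3.4` address is the dummy value from the article, and the HTTP request still has to be allowed by an application rule for the alert to fire.

```bash
# Sketch: add the dummy host record, then send an HTTP request that egresses
# through the firewall. No real service needs to answer; the request only has
# to match the test FQDN so the threat-intelligence alert triggers.
echo "1.2.3.4 testmaliciousdomain.eastus.cloudapp.azure.com" | sudo tee -a /etc/hosts
curl --max-time 10 "http://testmaliciousdomain.eastus.cloudapp.azure.com/"
```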
frontdoor | Migrate Tier | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/migrate-tier.md | Azure Front Door Standard and Premium tier bring the latest cloud delivery netwo * Register the service principal for **Microsoft.AzureFrontDoor-Cdn** as an app in your Azure Active Directory using Azure PowerShell. * Grant **Microsoft.AzureFrontDoor-Cdn** access to your Key Vault. * Session affinity gets enabled from the origin group settings in the Azure Front Door Standard or Premium profile. In Azure Front Door (classic), session affinity is managed at the domain level. As part of the migration, session affinity is based on the Classic profile's configuration. If you have two domains in the Classic profile that shares the same backend pool (origin group), session affinity has to be consistent across both domains in order for migration to be compatible.- ++> [!NOTE] +> You don't need to make any DNS changes before or during the migration process. +> +> However, when the migration is completed and traffic flows through your new Azure Front Door Standard or Premium profile, you should update your DNS records. For more information, see [Update DNS records](#update-dns-records). + ## Validate compatibility 1. Go to the Azure Front Door (classic) resource and select **Migration** from under *Settings*. Select **Grant** to add managed identities from the last section to all the Key :::image type="content" source="./media/migrate-tier/classic-profile.png" alt-text="Screenshot of the overview page of a Front Door (classic) in a disabled state."::: +## Update DNS records ++Your old Azure Front Door (classic) instance uses a different fully qualified domain name (FQDN) than Azure Front Door Standard and Premium. For example, an Azure Front Door (classic) endpoint might be `contoso.azurefd.net`, while the Azure Front Door Standard or Premium endpoint might be `contoso-mdjf2jfgjf82mnzx.z01.azurefd.net`. For more information about Azure Front Door Standard and Premium endpoints, see [Endpoints in Azure Front Door](./endpoint.md). ++You don't need to update your DNS records before or during the migration process. Azure Front Door automatically sends traffic that it receives on the Azure Front Door (classic) endpoint to your Azure Front Door Standard or Premium profile without you making any configuration changes. ++However, when your migration is finished, we strongly recommend that you update your DNS records to direct traffic to the new Azure Front Door Standard or Premium endpoint. Changing your DNS records helps to ensure that your profile continues to work in the future. The change in DNS record doesn't cause any downtime. You don't need to plan this update to happen at any specific time, and you can schedule it at your convenience. + ## Next steps * Understand the [mapping between Front Door tiers](tier-mapping.md) settings. |
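If the custom domain's zone is hosted in Azure DNS, the post-migration record update could look like the sketch below. The endpoint hostname is the illustrative value from this article, and the zone, record set, and resource group names are placeholders.

```bash
# Sketch: repoint an existing CNAME at the new Standard/Premium endpoint.
# Zone, record set, resource group, and endpoint names are placeholders.
az network dns record-set cname set-record \
  --resource-group my-dns-rg \
  --zone-name contoso.com \
  --record-set-name www \
  --cname contoso-mdjf2jfgjf82mnzx.z01.azurefd.net
```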
frontdoor | Tier Upgrade | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/tier-upgrade.md | Azure Front Door supports upgrading from Standard to Premium tier for more advan > Upgrading Azure Front Door tier is currently in PREVIEW. > See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. -This article will walk you through how to perform the tier upgrade on the configuration page of a Front Door Standard profile. Once upgraded, you'll be charge for the Azure Front Door Premium monthly base fee at an hourly rate. +This article will walk you through how to perform the tier upgrade on the configuration page of a Front Door Standard profile. Once upgraded, you'll be charged for the Azure Front Door Premium monthly base fee at an hourly rate. > [!IMPORTANT] > Downgrading from Premium to Standard tier is not supported. |
governance | Effects | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/effects.md | Title: Understand how effects work description: Azure Policy definitions have various effects that determine how compliance is managed and reported. Previously updated : 10/20/2022 Last updated : 02/22/2023 manages the evaluation and outcome and reports the results back to Azure Policy. Resource Manager mode. - **Deny** is then evaluated. By evaluating deny before audit, double logging of an undesired resource is prevented.-- **Audit** is evaluated. -- **Manual** is evaluated. -- **AuditIfNotExists** is evaluated. -- **denyAction** is evaluated last. +- **Audit** is evaluated. +- **Manual** is evaluated. +- **AuditIfNotExists** is evaluated. +- **denyAction** is evaluated last. After the Resource Provider returns a success code on a Resource Manager mode request, **AuditIfNotExists** and **DeployIfNotExists** evaluate to determine whether additional compliance location of the Constraint template to use in Kubernetes to limit the allowed co ### DenyAction evaluation When a request call with an applicable action name and targeted scope is submitted, `denyAction` prevents the request from succeeding. The request is returned as a `403 (Forbidden)`. In the portal, the Forbidden can be viewed as a status on the deployment that was prevented by the policy-assignment. +assignment. -`Microsoft.Authorization/policyAssignments`, `Microsoft.Authorization/denyAssignments`, `Microsoft.Blueprint/blueprintAssignments`, `Microsoft.Resources/deploymentStacks`, and `Microsoft.Authorization/locks` are all exempt from DenyAction enforcement to prevent lockout scenarios. +`Microsoft.Authorization/policyAssignments`, `Microsoft.Authorization/denyAssignments`, `Microsoft.Blueprint/blueprintAssignments`, `Microsoft.Resources/deploymentStacks`, and `Microsoft.Authorization/locks` are all exempt from DenyAction enforcement to prevent lockout scenarios. > [!NOTE] > Under preview, assignments with `denyAction` effect will show a `Not Started` compliance state. #### Subscription deletion-Policy won't block removal of resources that happens during a subscription deletion. +Policy won't block removal of resources that happens during a subscription deletion. -#### Resource group deletion -Policy will evaluate resources that support location and tags against `DenyAction` policies during a resource group deletion. Only policies that have the `cascadeBehaviors` set to `deny` in the policy rule will block a resource group deletion. Policy won't block removal of resources that don't support location and tags nor any policy with `mode:all`. +#### Resource group deletion +Policy will evaluate resources that support location and tags against `DenyAction` policies during a resource group deletion. Only policies that have the `cascadeBehaviors` set to `deny` in the policy rule will block a resource group deletion. Policy won't block removal of resources that don't support location and tags nor any policy with `mode:all`. #### Cascade deletion-Cascade deletion occurs when deleting of a parent resource is implicitly deletes all its child resources. Policy won't block removal of child resources when a delete action targets the parent resources. For example, `Microsoft.Insights/diagnosticSettings` is a child resource of `Microsoft.Storage/storageaccounts`. 
If a `denyAction` policy targets `Microsoft.Insights/diagnosticSettings`, a delete call to the diagnostic setting (child) will fail, but a delete to the storage account (parent) will implicitly delete the diagnostic setting (child). +Cascade deletion occurs when deleting a parent resource implicitly deletes all its child resources. Policy won't block removal of child resources when a delete action targets the parent resources. For example, `Microsoft.Insights/diagnosticSettings` is a child resource of `Microsoft.Storage/storageaccounts`. If a `denyAction` policy targets `Microsoft.Insights/diagnosticSettings`, a delete call to the diagnostic setting (child) will fail, but a delete to the storage account (parent) will implicitly delete the diagnostic setting (child). [!INCLUDE [policy-denyAction](../../../../includes/azure-policy-deny-action.md)] Cascade deletion occurs when deleting of a parent resource is implicitly deletes The **details** property of the DenyAction effect has all the subproperties that define the action and behaviors. - **actionNames** (required)- - An _array_ that specifies what actions to prevent from being executed. - - Supported action names are: `delete`. + - An _array_ that specifies what actions to prevent from being executed. + - Supported action names are: `delete`. - **cascadeBehaviors** (optional)- - An _object_ that defines what behavior will be followed when the resource is being implicitly deleted by the removal of a resource group. + - An _object_ that defines what behavior will be followed when the resource is being implicitly deleted by the removal of a resource group. - Only supported in policy definitions with [mode](./definition-structure.md#resource-manager-modes) set to `indexed`.- - Allowed values are `allow` or `deny`. - - Default value is `deny`. + - Allowed values are `allow` or `deny`. + - Default value is `deny`. ### DenyAction example-Example: Deny any delete calls targeting database accounts that have a tag environment that equals prod. Since cascade behavior is set to deny, block any DELETE call that targets a resource group with an applicable database account. +Example: Deny any delete calls targeting database accounts that have a tag environment that equals prod. Since cascade behavior is set to deny, block any DELETE call that targets a resource group with an applicable database account. ```json { related resources to match and the template deployment to execute. - Allowed values are _Subscription_ and _ResourceGroup_. - Sets the scope of where to fetch the related resource to match from. - Doesn't apply if **type** is a resource that would be underneath the **if** condition resource.- - For _ResourceGroup_, would limit to the **if** condition resource's resource group or the - resource group specified in **ResourceGroupName**. + - For _ResourceGroup_, would limit to the resource group in **ResourceGroupName** if specified. If **ResourceGroupName** isn't specified, would limit to the **if** condition resource's resource group, which is the default behavior. - For _Subscription_, queries the entire subscription for the related resource. Assignment scope should be set at subscription or higher for proper evaluation. - Default is _ResourceGroup_. - **EvaluationDelay** (optional) |
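To make the `denyAction` structure above concrete, the following is a minimal sketch of a policy rule for the database-account example. It assumes the Cosmos DB resource type `Microsoft.DocumentDB/databaseAccounts` and a `resourceGroupDeletion` key under `cascadeBehaviors`; verify both against the Azure Policy effects reference before relying on them (JSON doesn't allow inline comments, so the assumptions are called out here instead).

```json
{
  "if": {
    "allOf": [
      {
        "field": "type",
        "equals": "Microsoft.DocumentDB/databaseAccounts"
      },
      {
        "field": "tags.environment",
        "equals": "prod"
      }
    ]
  },
  "then": {
    "effect": "denyAction",
    "details": {
      "actionNames": [ "delete" ],
      "cascadeBehaviors": {
        "resourceGroupDeletion": "deny"
      }
    }
  }
}
```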
hdinsight | Hdinsight 50 Component Versioning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-50-component-versioning.md | Title: Open-source components and versions - Azure HDInsight 5.0 description: Learn about the open-source components and versions in Azure HDInsight 5.0. Previously updated : 08/25/2022 Last updated : 02/22/2023 # HDInsight 5.0 component versions This table lists certain HDInsight 4.0 cluster types that have retired or will b ||-||--| | HDInsight 4.0 Kafka | 2.1.0 | Sep 30, 2022 | Oct 1, 2022 | +## Spark versions supported in Azure HDInsight ++Apache Spark versions supported in Azure HDInsight: ++|Apache Spark version on HDInsight|Release date|Release stage|End of life announcement date|[End of standard support]()|[End of basic support]()| +|--|--|--|--|--|--| +|2.4|July 8, 2019|End of Life Announced (EOLA)| Feb 10, 2023| Aug 10, 2023|Feb 10, 2024| +|3.1|March 11, 2022|GA |-|-|-| +|3.3|March 22, 2023|Public Preview|-|-|-| ++## Apache Spark 2.4 to Spark 3.x Migration Guides ++For Spark 2.4 to Spark 3.x migration guides, see the [Apache Spark migration guide](https://spark.apache.org/docs/latest/migration-guide.html). + ## Spark :::image type="content" source="./media/hdinsight-release-notes/spark-3-1-for-hdi-5-0.png" alt-text="Screenshot of Spark 3.1 for HDI 5.0"::: HDInsight team is working on upgrading other open-source components. - [Cluster setup for Apache Hadoop, Spark, and more on HDInsight](hdinsight-hadoop-provision-linux-clusters.md) - [Enterprise Security Package](./enterprise-security-package.md) - [Work in Apache Hadoop on HDInsight from a Windows PC](hdinsight-hadoop-windows-tools.md)-- |
hdinsight | Hdinsight Hadoop Use Data Lake Storage Gen1 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-use-data-lake-storage-gen1.md | New-AzResourceGroupDeployment ` ## Use Data Lake Storage Gen1 as additional storage -You can use Data Lake Storage Gen1 as additional storage for the cluster as well. In such cases, the cluster default storage can either be an Azure Blob storage or a Azure Data Lake Storage Gen1 account. When running HDInsight jobs against the data stored in Azure Data Lake Storage Gen1 as additional storage, use the fully qualified path. For example: +You can use Data Lake Storage Gen1 as additional storage for the cluster as well. In such cases, the cluster default storage can either be an Azure Blob storage or an Azure Data Lake Storage Gen1 account. When running HDInsight jobs against the data stored in Azure Data Lake Storage Gen1 as additional storage, use the fully qualified path. For example: `adl://mydatalakestore.azuredatalakestore.net/<file_path>` |
healthcare-apis | Store Profiles In Fhir | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/store-profiles-in-fhir.md | Profiles are also specified by various Implementation Guides (IGs). Some common ### Storing profiles -To store profiles in Azure API for FHIR, you can `PUT` the `StructureDefinition` with the profile content in the body of the request. A standard `PUT` or a conditional update are both good methods to store profiles on the FHIR service. Use the conditional update if you are unsure which to use. +To store profiles in FHIR service, you can `PUT` the `StructureDefinition` with the profile content in the body of the request. A standard `PUT` or a conditional update are both good methods to store profiles on the FHIR service. Use the conditional update if you are unsure which to use. -Standard `PUT`: `PUT http://<your Azure API for FHIR base URL>/StructureDefinition/profile-id` +Standard `PUT`: `PUT http://<your FHIR service base URL>/StructureDefinition/profile-id` **or** -Conditional update: `PUT http://<your Azure API for FHIR base URL>/StructureDefinition?url=http://sample-profile-url` +Conditional update: `PUT http://<your FHIR service base URL>/StructureDefinition?url=http://sample-profile-url` ``` { Conditional update: `PUT http://<your Azure API for FHIR base URL>/StructureDefi For example, if you'd like to store the `us-core-allergyintolerance` profile, you'd use the following rest command with the US Core allergy intolerance profile in the body. We've included a snippet of this profile for the example. ```rest-PUT https://myAzureAPIforFHIR.azurehealthcareapis.com/StructureDefinition?url=http://hl7.org/fhir/us/core/StructureDefinition/us-core-allergyintolerance +PUT https://<your FHIR service base URL>/StructureDefinition?url=http://hl7.org/fhir/us/core/StructureDefinition/us-core-allergyintolerance ``` ```json You can access your existing custom profiles using a `GET` request, ``GET http:/ For example, if you want to view US Core Goal resource profile: -`GET https://myworkspace-myfhirserver.fhir.azurehealthcareapis.com/StructureDefinition?url=http://hl7.org/fhir/us/core/StructureDefinition/us-core-goal` +`GET https://<your FHIR service base URL>/StructureDefinition?url=http://hl7.org/fhir/us/core/StructureDefinition/us-core-goal` This will return the `StructureDefinition` resource for US Core Goal profile, that will start like this: You'll be returned with a `CapabilityStatement` that includes the following info "http://hl7.org/fhir/us/core/StructureDefinition/us-core-patient" ], ```++### Bindings in Profiles +A terminology service is a set of functions that can perform operations on medical "terminologies," such as validating codes, translating codes, expanding value sets, etc. The FHIR service doesn't support terminology service. Information for supported operations ($), resource types and interactions can be found in the service's CapabilityStatement. Resource types ValueSet, StructureDefinition and CodeSystem are supported with basic CRUD operations and Search (as defined in the CapabilityStatement) as well as being leveraged by the system for use in $validate. ++ValueSets can contain a complex set of rules and external references. Today, the service will only consider the pre-expanded inline codes. Customers need to upload supported ValueSets to the FHIR server prior to utilizing the $validate operation. 
The ValueSet resources must be uploaded to the FHIR server, using a PUT or conditional update, as mentioned in the Storing profiles section above. ++ ## Next steps In this article, you've learned about FHIR profiles. Next, you'll learn how you can use $validate to ensure that resources conform to these profiles. |
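As a concrete illustration of that upload step, a conditional update for a ValueSet follows the same pattern as the profile examples above; the `url` value below is a placeholder, not a real canonical URL.

```rest
PUT https://<your FHIR service base URL>/ValueSet?url=http://sample-valueset-url
```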
hpc-cache | Hpc Cache Add Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hpc-cache/hpc-cache-add-storage.md | description: How to define storage targets so that your Azure HPC Cache can use Previously updated : 10/05/2022 Last updated : 2/15/2023 -+ # Add storage targets *Storage targets* are back-end storage for files that are accessed through an Azure HPC Cache. You can add NFS storage (like an on-premises hardware system), or store data in Azure Blob. -You can define 10 different storage targets for any cache, and larger caches can [support up to 20 storage targets](#size-your-cache-correctly-to-support-your-storage-targets). - The cache presents all of the storage targets in one [aggregated namespace](hpc-cache-namespace.md). The namespace paths are configured separately after you add the storage targets. Remember that the storage exports must be accessible from your cache's virtual network. For on-premises hardware storage, you might need to set up a DNS server that can resolve hostnames for NFS storage access. Read more in [DNS access](hpc-cache-prerequisites.md#dns-access). The procedure to add a storage target is slightly different depending on the typ [](https://azure.microsoft.com/resources/videos/set-up-hpc-cache/) --> -## Size your cache correctly to support your storage targets --When you create the cache, make sure you select the type and size that will support the number of storage targets you need. --The number of supported storage targets depends on the cache type and the cache capacity. Cache capacity is a combination of throughput capacity (in GB/s) and storage capacity (in TB). --* Up to 10 storage targets - A standard cache with the smallest or medium cache storage value for your selected throughput can have a maximum of 10 storage targets. -- For example, if you choose 2 GB/second throughput and don't choose the largest cache storage size (12 TB), your cache supports a maximum of 10 storage targets. --* Up to 20 storage targets - -- * All read-only high-throughput caches (which have preconfigured cache storage sizes) can support up to 20 storage targets. - * Standard caches can support up to 20 storage targets if you choose the highest available cache size for your selected throughput value. (If using Azure CLI, choose the highest valid cache size for your cache SKU.) --Read [Choose cache type and capacity](hpc-cache-create.md#choose-cache-type-and-capacity) to learn more about throughput and cache size settings. - ## Choose the correct storage target type You can select from three storage target types: **NFS**, **Blob**, and **ADLS-NFS**. Choose the type that matches the kind of storage system you'll use to store your files during this HPC Cache project. |
hpc-cache | Hpc Cache Create | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hpc-cache/hpc-cache-create.md | description: How to create an Azure HPC Cache instance Previously updated : 10/03/2022- Last updated : 2/15/2023+ ms.devlang: azurecli On the **Cache** page, specify the type and size of cache to create. These value * How many storage targets it can have * The cache's cost -First, choose the type of cache you want. Options include: + -* **Read-write standard caching** - A flexible, general-purpose cache -* **Read-only caching** - A high-throughput cache designed to minimize latency for file access +Before you can choose throughput or storage capacity, you need to choose the cache type. Options include: -Read more about these cache type options below in [Choose the cache type for your needs](#choose-the-cache-type-for-your-needs). +* Read-write standard caching: A flexible general-purpose cache +* Read-only caching: A high-throughput cache designed to minimize file access latency; modifications are handled with synchronous write-through operations +* Read-write premium caching (Preview): An NVMe-optimized cache with the lowest latency and highest throughput +<!-- * Read-only scalable standard caching (Preview): A general-purpose cache that can be made larger or smaller (at predefined sizes) to accommodate variable workloads --> -Second, select the cache's capacity. Cache capacity is a combination of two values: + ++Read more about these cache types below in [Choose the cache type for your needs](#choose-the-cache-type-for-your-needs). ++> [!TIP] +> "Read-write" cache types can be configured with storage targets using either read caching or read-write caching usage models. "Read-only" cache types only support NFS and ADLS-NFS storage target types with read-caching usage models only. Learn more about caching modes in [Understand cache usage models](cache-usage-models.md). ++The "Standard" cache SKU lets you choose the cache's capacity for a given throughput selection, while the "Premium" and "read-only" caches have fixed capacities for each given throughput selection. The cache's capabilities are defined by two deployment choices: * **Maximum throughput** - The data transfer rate for the cache, in GB/second * **Cache size** - The amount of storage allocated for cached data, in TB - + ### Understand throughput and cache size Azure HPC Cache manages which files are cached and pre-loaded to maximize cache Choose a cache storage size that can comfortably hold the active set of working files, plus additional space for metadata and other overhead. -Throughput and cache size also affect how many storage targets are supported for a particular cache. If you want to use more than 10 storage targets with your cache, you must choose the highest available cache storage size value available for your throughput size, or choose the high-throughput read-only configuration. Learn more in [Add storage targets](hpc-cache-add-storage.md#size-your-cache-correctly-to-support-your-storage-targets). - If you need help sizing your cache correctly, contact Microsoft Service and Support. + ### Choose the cache type for your needs -When you choose your cache capacity, you might notice that some cache types have one fixed cache size, and others let you select from multiple cache size options for each throughput value. This is because they use different styles of cache infrastructure. +There are two basic cache types: "standard" and "premium". 
++**Standard** caches are general-use HPC Cache systems. You can select from multiple storage sizes after choosing your throughput value, and you can attach any of the HPC Cache supported storage target types. ++**Premium** caches are designed for the highest-performance file service. They use high-throughput NVMe storage devices, which means that premium caches have a different pricing structure, static cache capacities, and cannot be temporarily stopped. ++Cache options include: ++* **Read-write standard caching** ++ With standard caches, you can choose from several cache size values. These caches can be configured with storage target usage models for both read (write-through) and read-write caching. ++* **Read-only caching** ++ This type of cache provides higher throughput and lower latency than a standard cache, but is designed to optimize file and directory read access only. You cannot configure a read-only cache to use read-write cache usage models, but a read-after-write workload will result in a cache-hit, as the writes are persisted synchronously to the storage target. This type of cache has only one cache size option for each throughput choice. ++* **Read-write premium caching (Preview)** + + A high-throughput cache that can be configured for either read-only or read-write caching. These caches have only one cache size option for each throughput option. -* Standard caches - Cache type **Read-write caching** +<!-- * **Read-only scalable caching (Preview)** - With standard caches, you can choose from several cache size values. These caches can be configured for read-only or for read and write caching. + A standard throughput cache that can be made larger or smaller to accommodate variable workflows. You can choose from a variety of storage sizes for each throughput size. -* High-throughput caches - Cache type **Read-only caching** + > [!NOTE] + > For a scalable cache, the values you choose at create time determine the size options you will have when scaling the cache up or down later. Choose the highest throughput and largest storage size if you want to be able to maximize these values later. - The high-throughput read-only caches are preconfigured with only one cache size option per throughput value. They're designed to optimize file read access only. + Read [Use scalable caches](scale-cache.md) to learn more about creating and using scalable caches. --> - +This table explains important differences among the three cache types. -This table explains some important differences between the two options. 
+| Attribute | Read-Write Standard Caching | Read-Only Caching | Read-Write Premium Caching | +|--|--|--|--| +| Throughput sizes | 2, 4, or 8 GB/sec | 4.5, 9, or 16 GB/sec | 5, 10, or 20 GB/sec | +| Cache sizes | 3, 6, or 12 TB for 2 GB/sec<br/> 6, 12, or 24 TB for 4 GB/sec<br/> 12, 24, or 48 TB for 8 GB/sec| 21 TB for 4.5 GB/sec <br/> 42 TB for 9 GB/sec <br/> 84 TB for 16 GB/sec | 21 TB for 5 GB/sec <br/> 42 TB for 10 GB/sec <br/> 84 TB for 20 GB/sec | +| Compatible storage target types | Azure Blob <br/> NFS (on-premises)<br />ADLS-NFS (NFSv3-enabled Azure Blob) | NFS (on-premises)<br />ADLS-NFS (NFSv3-enabled Azure Blob) | Azure Blob <br/> NFS (on-premises)<br />ADLS-NFS (NFSv3-enabled Azure Blob) | +| Caching styles | Read-write caching | Read caching only | Read-write caching | +| Cache can be stopped to save cost when not needed | Yes | No | No | -| Attribute | Standard cache | Read-only high-throughput cache | -|--|--|--| -| Cache type |"Read-write standard caching"| "Read-only caching"| -| Throughput sizes | 2, 4, or 8 GB/sec | 4.5, 9, or 16 GB/sec | -| Cache sizes | 3, 6, or 12 TB for 2 GB/sec<br/> 6, 12, or 24 TB for 4 GB/sec<br/> 12, 24, or 48 TB for 8 GB/sec| 21 TB for 4.5 GB/sec <br/> 42 TB for 9 GB/sec <br/> 84 TB for 16 GB/sec | -| Maximum number of storage targets | [10 or 20](hpc-cache-add-storage.md#size-your-cache-correctly-to-support-your-storage-targets) depending on cache size selection | 20 | -| Compatible storage target types | Azure Blob, on-premises NFS storage, NFS-enabled blob | on-premises NFS storage <br/>NFS-enabled blob storage is in preview for this combination | -| Caching styles | Read caching or read-write caching | Read caching only | -| Cache can be stopped to save cost when not needed | Yes | No | +All three caching options have a maximum storage target count of 20. Learn more about these options: -* [Maximum number of storage targets](hpc-cache-add-storage.md#size-your-cache-correctly-to-support-your-storage-targets) * [Read and write caching modes](cache-usage-models.md#basic-file-caching-concepts) ## Enable Azure Key Vault encryption (optional) Supply these values: | 24576 GB | no | yes | yes | | 49152 GB | no | no | yes | - If you want to use more than 10 storage targets with your cache, choose the highest available cache size value for your SKU. These configurations support up to 20 storage targets. + <!-- If you want to use more than 10 storage targets with your cache, choose the highest available cache size value for your SKU. These configurations support up to 20 storage targets. --> Read the **Set cache capacity** section in the portal instructions tab for important information about pricing, throughput, and how to size your cache appropriately for your workflow. |
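For reference, creating a standard cache with a specific throughput SKU and cache size from the Azure CLI looks roughly like the following sketch. It assumes the `hpc-cache` CLI extension is installed; the resource names and subnet ID are placeholders, and the SKU name and size values should be checked against the CLI reference.

```azurecli
# Minimal sketch: create a 2 GB/sec standard cache with 3 TB (3072 GB) of cache storage
az hpc-cache create \
  --resource-group myResourceGroup \
  --name myCache \
  --location eastus \
  --subnet "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Network/virtualNetworks/myVNet/subnets/mySubnet" \
  --sku-name "Standard_2G" \
  --cache-size-gb 3072
```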
hpc-cache | Hpc Cache Netapp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hpc-cache/hpc-cache-netapp.md | Use the minimum size for the delegated subnet when creating an Azure NetApp File The minimum size, which is specified with the netmask /28, provides 16 IP addresses. In practice, Azure NetApp Files uses only three of those available IP addresses for volume access. This means that you only need to create three storage targets in your Azure HPC Cache to cover all of the volumes. -If the delegated subnet is too large, it's possible for the Azure NetApp Files volumes to use more IP addresses than a single Azure HPC Cache instance can handle. A single cache has a [limit of 10 storage targets](hpc-cache-add-storage.md#size-your-cache-correctly-to-support-your-storage-targets) for most cache throughput/storage size combinations, or 20 storage targets for the largest configurations. +If the delegated subnet is too large, it's possible for the Azure NetApp Files volumes to use more IP addresses than a single Azure HPC Cache instance can handle. The quickstart example in Azure NetApp Files documentation uses 10.7.0.0/16 for the delegated subnet, which gives a subnet that's too large. |
hpc-cache | Hpc Cache Prerequisites | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hpc-cache/hpc-cache-prerequisites.md | description: Prerequisites for using Azure HPC Cache Previously updated : 12/30/2022- Last updated : 2/15/2023+ # Prerequisites for Azure HPC Cache Check these permission-related prerequisites before starting to create your cach The cache supports Azure Blob containers, NFS hardware storage exports, and NFS-mounted ADLS blob containers. Add storage targets after you create the cache. -The size of your cache determines the number of storage targets it can support - up to 10 storage targets for most caches, or up to 20 for the largest sizes. Read [Size your cache correctly to support your storage targets](hpc-cache-add-storage.md#size-your-cache-correctly-to-support-your-storage-targets) for details. - Each storage type has specific prerequisites. ### Blob storage requirements |
iot-central | Concepts Architecture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-architecture.md | IoT Central can also control devices by calling commands on the device. For exam The [telemetry, properties, and commands](concepts-telemetry-properties-commands.md) that a device implements are collectively known as the device capabilities. You define these capabilities in a model that's shared between the device and the IoT Central application. In IoT Central, this model is part of the device template that defines a specific type of device. To learn more, see [Assign a device to a device template](concepts-device-templates.md#assign-a-device-to-a-device-template). -The [device implementation](tutorial-connect-device.md) should follow the [IoT Plug and Play conventions](../../iot-develop/concepts-convention.md) to ensure that it can communicate with IoT Central. For more information, see the various language [SDKs and samples](../../iot-develop/libraries-sdks.md). +The [device implementation](tutorial-connect-device.md) should follow the [IoT Plug and Play conventions](../../iot-develop/concepts-convention.md) to ensure that it can communicate with IoT Central. For more information, see the various language [SDKs and samples](../../iot-develop/about-iot-sdks.md). Devices connect to IoT Central using one the supported protocols: [MQTT, AMQP, or HTTP](../../iot-hub/iot-hub-devguide-protocols.md). |
iot-develop | About Iot Sdks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/about-iot-sdks.md | The main consideration in choosing an SDK is the device's own hardware. General ## Device SDKs -These SDKs can run on a general MPU-based computing device such as a PC, tablet, smartphone, or Raspberry Pi. The SDKs support development in C and in modern managed languages including in C#, Node.JS, Python, and Java. --The SDKs are available in **multiple languages** providing the flexibility to choose which best suits your team and scenario. --| Language | Package | Source | Quickstarts | Samples | Reference | -| :-- | :-- | :-- | :-- | :-- | :-- | -| **.NET** | [NuGet](https://www.nuget.org/packages/Microsoft.Azure.Devices.Client) | [GitHub](https://github.com/Azure/azure-iot-sdk-csharp) | [IoT Hub](quickstart-send-telemetry-iot-hub.md?pivots=programming-language-csharp) / [IoT Central](quickstart-send-telemetry-central.md?pivots=programming-language-csharp) | [Samples](https://github.com/Azure/azure-iot-sdk-csharp/tree/main/iothub/device/samples) | [Reference](/dotnet/api/microsoft.azure.devices.client) | -| **Python** | [pip](https://pypi.org/project/azure-iot-device/) | [GitHub](https://github.com/Azure/azure-iot-sdk-python) | [IoT Hub](quickstart-send-telemetry-iot-hub.md?pivots=programming-language-python) / [IoT Central](quickstart-send-telemetry-central.md?pivots=programming-language-python) | [Samples](https://github.com/Azure/azure-iot-sdk-python/tree/main/samples) | [Reference](/python/api/azure-iot-device) | -| **Node.js** | [npm](https://www.npmjs.com/package/azure-iot-device) | [GitHub](https://github.com/Azure/azure-iot-sdk-node) | [IoT Hub](quickstart-send-telemetry-iot-hub.md?pivots=programming-language-nodejs) / [IoT Central](quickstart-send-telemetry-central.md?pivots=programming-language-nodejs) | [Samples](https://github.com/Azure/azure-iot-sdk-node/tree/main/device/samples) | [Reference](/javascript/api/azure-iot-device/) | -| **Java** | [Maven](https://mvnrepository.com/artifact/com.microsoft.azure.sdk.iot/iot-device-client) | [GitHub](https://github.com/Azure/azure-iot-sdk-java) | [IoT Hub](quickstart-send-telemetry-iot-hub.md?pivots=programming-language-java) / [IoT Central](quickstart-send-telemetry-central.md?pivots=programming-language-java) | [Samples](https://github.com/Azure/azure-iot-sdk-java/tree/main/iothub/device/iot-device-samples) | [Reference](/java/api/com.microsoft.azure.sdk.iot.device) | -| **C** | [packages](https://github.com/Azure/azure-iot-sdk-c/blob/master/readme.md#getting-the-sdk) | [GitHub](https://github.com/Azure/azure-iot-sdk-c) | [IoT Hub](quickstart-send-telemetry-iot-hub.md?pivots=programming-language-ansi-c) / [IoT Central](quickstart-send-telemetry-central.md?pivots=programming-language-ansi-c) | [Samples](https://github.com/Azure/azure-iot-sdk-c/tree/master/iothub_client/samples) | [Reference](https://github.com/Azure/azure-iot-sdk-c/) | --> [!WARNING] -> The **Azure IoT C SDK** is **not** suitable for embedded applications due to its memory management and threading model. For embedded device SDK options, refer to the [Embedded device SDKs](#embedded-device-sdks). ## Embedded device SDKs -These SDKs were designed and created to run on devices with limited compute and memory resources and are implemented using the C language. --The embedded device SDKs are available for **multiple operating systems** providing the flexibility to choose which best suits your scenario. 
--| RTOS | SDK | Source | Samples | Reference | -| :-- | :-- | :-- | :-- | :-- | -| **Azure RTOS** | Azure RTOS Middleware | [GitHub](https://github.com/azure-rtos/netxduo) | [Quickstarts](quickstart-devkit-mxchip-az3166.md) | [Reference](https://github.com/azure-rtos/netxduo/tree/master/addons/azure_iot) | -| **FreeRTOS** | FreeRTOS Middleware | [GitHub](https://github.com/Azure/azure-iot-middleware-freertos) | [Samples](https://github.com/Azure-Samples/iot-middleware-freertos-samples) | [Reference](https://azure.github.io/azure-iot-middleware-freertos) | -| **Bare Metal** | Azure SDK for Embedded C | [GitHub](https://github.com/Azure/azure-sdk-for-c/tree/master/sdk/docs/iot) | [Samples](https://github.com/Azure/azure-sdk-for-c/blob/master/sdk/samples/iot/README.md) | [Reference](https://azure.github.io/azure-sdk-for-c) | ## Next Steps To start using the device SDKs to connect devices to Azure IoT, see the following article that provides a set of quickstarts. |
iot-develop | Concepts Architecture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-architecture.md | The model repository has built-in role-based access controls that let you manage ## Devices -A device builder implements the code to run on an IoT device using one of the [Azure IoT device SDKs](./libraries-sdks.md). The device SDKs help the device builder to: +A device builder implements the code to run on an IoT device using one of the [Azure IoT device SDKs](./about-iot-sdks.md). The device SDKs help the device builder to: - Connect securely to an IoT hub. - Register the device with your IoT hub and announce the model ID that identifies the collection of DTDL interfaces the device implements. |
iot-develop | Concepts Developer Guide Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-developer-guide-service.md | IoT Plug and Play lets you build IoT devices that advertise their capabilities t IoT Plug and Play lets you use devices that have announced their model ID with your IoT hub. For example, you can access the properties and commands of a device directly. -To use an IoT Plug and Play device that's connected to your IoT hub, one of the Azure IoT service SDKs: - ## Service SDKs Use the Azure IoT service SDKs in your solution to interact with devices and modules. For example, you can use the service SDKs to read and update twin properties and invoke commands. Supported languages include C#, Java, Node.js, and Python. + The service SDKs let you access device information from a solution component such as a desktop or web application. The service SDKs include two namespaces and object models that you can use to retrieve the model ID: - Iot Hub service client. This service exposes the model ID as a device twin property. |
iot-develop | Concepts Digital Twin | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-digital-twin.md | The following snippets show the side-by-side JSON representation of the `thermos ## Digital twin APIs -The digital twin APIs include **Get Digital Twin**, **Update Digital Twin**, **Invoke Component Command** and **Invoke Command** operations more managing a digital twin. You can either use the [REST APIs](/rest/api/iothub/service/digitaltwin) directly or through a [Service SDK](../iot-develop/libraries-sdks.md). +The digital twin APIs include **Get Digital Twin**, **Update Digital Twin**, **Invoke Component Command** and **Invoke Command** operations for managing a digital twin. You can either use the [REST APIs](/rest/api/iothub/service/digitaltwin) directly or through one of the [service SDKs](concepts-developer-guide-service.md#service-sdks). ## Digital twin change events |
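For example, a **Get Digital Twin** call made directly against the REST API takes roughly the form below; the host name, device ID, and API version are placeholders to fill in from your own IoT hub and the REST reference.

```rest
GET https://<your IoT hub name>.azure-devices.net/digitaltwins/<device-id>?api-version=<api-version>
```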
iot-develop | How To Use Reliability Features In Sdks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/how-to-use-reliability-features-in-sdks.md | + + Title: Manage connectivity and reliable messaging ++description: How to manage device connectivity and ensure reliable messaging when you use the Azure IoT Hub device SDKs +++ Last updated : 02/20/2023++++++# Manage connectivity and reliable messaging by using Azure IoT Hub device SDKs ++This article provides high-level guidance to help you design device applications that are more resilient. It shows you how to take advantage of the connectivity and reliable messaging features in Azure IoT device SDKs. The goal of this guide is to help you manage the following scenarios: ++* Fixing a dropped network connection ++* Switching between different network connections ++* Reconnecting because of transient service connection errors ++Implementation details vary by language. For more information, see the API documentation or specific SDK: ++* [C SDK](https://github.com/Azure/azure-iot-sdk-c/blob/main/doc/connection_and_messaging_reliability.md) ++* [.NET SDK](https://github.com/Azure/azure-iot-sdk-csharp/blob/main/iothub/device/devdoc/retrypolicy.md) ++* [Java SDK](https://github.com/Azure/azure-iot-sdk-jav) ++* [Node SDK](https://github.com/Azure/azure-iot-sdk-node/wiki/Connectivity-and-Retries) ++* [Python SDK](https://github.com/Azure/azure-iot-sdk-python) ++## Design for resiliency ++IoT devices often rely on non-continuous or unstable network connections (for example, GSM or satellite). Errors can occur when devices interact with cloud-based services because of intermittent service availability and infrastructure-level or transient faults. An application that runs on a device has to manage the mechanisms for connection, re-connection, and the retry logic for sending and receiving messages. Also, the retry strategy requirements depend heavily on the device's IoT scenario, context, and capabilities. ++The Azure IoT Hub device SDKs aim to simplify connecting and communicating from cloud-to-device and device-to-cloud. These SDKs provide a robust way to connect to Azure IoT Hub and a comprehensive set of options for sending and receiving messages. Developers can also modify the existing implementation to customize a better retry strategy for a given scenario. ++The relevant SDK features that support connectivity and reliable messaging are covered in the following sections. ++## Connection and retry ++This section gives an overview of the re-connection and retry patterns available when managing connections. It details implementation guidance for using a different retry policy in your device application and lists relevant APIs from the device SDKs. ++### Error patterns ++Connection failures can happen at many levels: ++* Network errors: disconnected socket and name resolution errors ++* Protocol-level errors for HTTP, AMQP, and MQTT transport: detached links or expired sessions ++* Application-level errors that result from either local mistakes (such as invalid credentials) or service behavior (for example, exceeding the quota or throttling) ++The device SDKs detect errors at all three levels. OS-related errors and hardware errors aren't detected or handled by the device SDKs. The SDK design is based on [The Transient Fault Handling Guidance](/azure/architecture/best-practices/transient-faults#general-guidelines) from the Azure Architecture Center. 
++### Retry patterns ++The following steps describe the retry process when connection errors are detected: ++1. The SDK detects the error and the associated error level: network, protocol, or application. ++1. The SDK uses the error filter to determine the error type and decide if a retry is needed. ++1. If the SDK identifies an **unrecoverable error**, operations like connection, send, and receive are stopped. The SDK notifies the user. Examples of unrecoverable errors include an authentication error and a bad endpoint error. ++1. If the SDK identifies a **recoverable error**, it retries according to the specified retry policy until the defined timeout elapses. Note that the SDK uses the **Exponential back-off with jitter** retry policy by default. ++1. When the defined timeout expires, the SDK stops trying to connect or send. It notifies the user. ++1. The SDK allows the user to attach a callback to receive connection status changes. ++The SDKs typically provide three retry policies: ++* **Exponential back-off with jitter**: This default retry policy tends to be aggressive at the start and slows down over time until it reaches a maximum delay. The design is based on [Retry guidance from Azure Architecture Center](/azure/architecture/best-practices/retry-service-specific). ++* **Custom retry**: For some SDK languages, you can design a custom retry policy that is better suited for your scenario and then inject it into the RetryPolicy. Custom retry isn't available on the C SDK, and it isn't currently supported on the Python SDK. The Python SDK reconnects as needed. ++* **No retry**: You can set the retry policy to "no retry", which disables the retry logic. The SDK tries to connect once and send a message once, assuming the connection is established. This policy is typically used in scenarios with bandwidth or cost concerns. If you choose this option, messages that fail to send are lost and can't be recovered. 
++### Retry policy APIs ++| SDK | SetRetryPolicy method | Policy implementations | Implementation guidance | +||||| +| C | [IOTHUB_CLIENT_RESULT IoTHubDeviceClient_SetRetryPolicy](https://azure.github.io/azure-iot-sdk-c/iothub__device__client_8h.html#a53604d8d75556ded769b7947268beec8) | See: [IOTHUB_CLIENT_RETRY_POLICY](https://azure.github.io/azure-iot-sdk-c/iothub__client__core__common_8h.html#a361221e523247855ff0a05c2e2870e4a) | [C implementation](https://github.com/Azure/azure-iot-sdk-c/blob/master/doc/connection_and_messaging_reliability.md) | +| Java | [SetRetryPolicy](/jav) | +| .NET | [DeviceClient.SetRetryPolicy](/dotnet/api/microsoft.azure.devices.client.deviceclient.setretrypolicy) | **Default**: [ExponentialBackoff class](/dotnet/api/microsoft.azure.devices.client.exponentialbackoff)<BR>**Custom:** implement [IRetryPolicy interface](/dotnet/api/microsoft.azure.devices.client.iretrypolicy)<BR>**No retry:** [NoRetry class](/dotnet/api/microsoft.azure.devices.client.noretry) | [C# implementation](https://github.com/Azure/azure-iot-sdk-csharp/blob/main/iothub/device/devdoc/retrypolicy.md) | +| Node | [setRetryPolicy](/javascript/api/azure-iot-device/client#azure-iot-device-client-setretrypolicy) | **Default**: [ExponentialBackoffWithJitter class](/javascript/api/azure-iot-common/exponentialbackoffwithjitter)<BR>**Custom:** implement [RetryPolicy interface](/javascript/api/azure-iot-common/retrypolicy)<BR>**No retry:** [NoRetry class](/javascript/api/azure-iot-common/noretry) | [Node implementation](https://github.com/Azure/azure-iot-sdk-node/wiki/Connectivity-and-Retries) | +| Python | Not currently supported | Not currently supported | Built-in connection retries: Dropped connections will be retried with a fixed 10 second interval by default. This functionality can be disabled if desired, and the interval can be configured. | ++## Next steps ++Suggested next steps include: ++* [Troubleshoot device disconnects](../iot-hub/iot-hub-troubleshoot-connectivity.md) ++* [Use the Azure IoT Device SDKs](./about-iot-sdks.md) |
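As a minimal sketch of the .NET entries in the table above (the connection string and retry values are placeholders, and the `ExponentialBackoff` constructor arguments should be checked against the SDK reference):

```csharp
using System;
using Microsoft.Azure.Devices.Client;

// Create the device client with a placeholder connection string, using MQTT transport.
DeviceClient deviceClient = DeviceClient.CreateFromConnectionString(
    "<device connection string>", TransportType.Mqtt);

// Retry up to 5 times, backing off exponentially between 100 ms and 10 seconds.
deviceClient.SetRetryPolicy(new ExponentialBackoff(
    5, TimeSpan.FromMilliseconds(100), TimeSpan.FromSeconds(10), TimeSpan.FromMilliseconds(100)));

// To disable retries instead, use the NoRetry policy:
// deviceClient.SetRetryPolicy(new NoRetry());
```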
iot-develop | Howto Manage Digital Twin | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/howto-manage-digital-twin.md | -IoT Plug and Play supports **Get digital twin** and **Update digital twin** operations to manage digital twins. You can use either the [REST APIs](/rest/api/iothub/service/digitaltwin) or one of the [service SDKs](libraries-sdks.md). +IoT Plug and Play supports **Get digital twin** and **Update digital twin** operations to manage digital twins. You can use either the [REST APIs](/rest/api/iothub/service/digitaltwin) or one of the [service SDKs](concepts-developer-guide-service.md#service-sdks). ## Update a digital twin |
iot-develop | Libraries Sdks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/libraries-sdks.md | - Title: IoT Plug and Play libraries and SDKs -description: Information about the device and service libraries available for developing IoT Plug and Play enabled solutions. -- Previously updated : 07/22/2020-------# Microsoft SDKs for IoT Plug and Play --The IoT Plug and Play libraries and SDKs enable developers to build IoT solutions using various programming languages on multiple platforms. The following table includes links to samples and quickstarts to help you get started: --## Device SDKs --| Language | Package | Code Repository | Samples | Quickstart | Reference | -||||||| -| C - Device | [vcpkg 1.3.9](https://github.com/Azure/azure-iot-sdk-c/blob/master/doc/setting_up_vcpkg.md) | [GitHub](https://github.com/Azure/azure-iot-sdk-c) | [Samples](https://github.com/Azure/azure-iot-sdk-c/tree/master/iothub_client/samples/pnp) | [Connect to IoT Hub](tutorial-connect-device.md) | [Reference](https://github.com/Azure/azure-iot-sdk-c/) | -| .NET - Device | [NuGet 1.41.2](https://www.nuget.org/packages/Microsoft.Azure.Devices.Client) | [GitHub](https://github.com/Azure/azure-iot-sdk-csharp/tree/main/) | [Samples](https://github.com/Azure/azure-iot-sdk-csharp/tree/main/iothub/device/samples/solutions/PnpDeviceSamples) | [Connect to IoT Hub](tutorial-connect-device.md) | [Reference](/dotnet/api/microsoft.azure.devices.client) | -| Java - Device | [Maven 1.26.0](https://mvnrepository.com/artifact/com.microsoft.azure.sdk.iot/iot-device-client) | [GitHub](https://github.com/Azure/azure-iot-sdk-jav) | [Reference](/java/api/com.microsoft.azure.sdk.iot.device) | -| Python - Device | [pip 2.3.0](https://pypi.org/project/azure-iot-device/) | [GitHub](https://github.com/Azure/azure-iot-sdk-python) | [Samples](https://github.com/Azure/azure-iot-sdk-python/tree/main/samples/pnp) | [Connect to IoT Hub](tutorial-connect-device.md) | [Reference](/python/api/azure-iot-device/azure.iot.device) | -| Node - Device | [npm 1.17.2](https://www.npmjs.com/package/azure-iot-device)  | [GitHub](https://github.com/Azure/azure-iot-sdk-node) | [Samples](https://github.com/Azure/azure-iot-sdk-node/tree/main/device/samples/javascript/) | [Connect to IoT Hub](tutorial-connect-device.md) | [Reference](/javascript/api/azure-iot-device/) | -| Embedded C - Device | N/A | [GitHub](https://github.com/Azure/azure-sdk-for-c/)| [Samples](tutorial-connect-device.md?pivots=programming-language-embedded-c#samples) | [How to use Embedded C](tutorial-connect-device.md?pivots=programming-language-embedded-c) | N/A --## Service SDKs --| Platform | Package | Code Repository | Samples | Quickstart | Reference | -||||||| -| .NET - IoT Hub service | [NuGet 1.38.1](https://www.nuget.org/packages/Microsoft.Azure.Devices ) | [GitHub](https://github.com/Azure/azure-iot-sdk-csharp) | [Samples](https://github.com/Azure/azure-iot-sdk-csharp/tree/main/iothub/service/samples/solutions/PnpServiceSamples) | N/A | [Reference](/dotnet/api/microsoft.azure.devices) | -| Java - IoT Hub service | [Maven 1.26.0](https://mvnrepository.com/artifact/com.microsoft.azure.sdk.iot/iot-service-client/1.26.0) | [GitHub](https://github.com/Azure/azure-iot-sdk-java) | [Samples](https://github.com/Azure/azure-iot-sdk-java/tree/main/service/iot-service-samples/pnp-service-sample) | N/A | [Reference](/java/api/com.microsoft.azure.sdk.iot.service) | -| Node - IoT Hub service | [npm 1.13.0](https://www.npmjs.com/package/azure-iothub) | 
[GitHub](https://github.com/Azure/azure-iot-sdk-node) | [Samples](https://github.com/Azure/azure-iot-sdk-node/tree/main/device/samples) | N/A | [Reference](/javascript/api/azure-iothub/) | -| Python - IoT Hub service | [pip 2.2.3](https://pypi.org/project/azure-iot-hub) | [GitHub](https://github.com/Azure/azure-iot-hub-python) | [Samples](https://github.com/Azure/azure-iot-hub-python/tree/main/samples) | N/A | [Reference](/python/api/azure-iot-hub/) | --## Next steps --To try out the SDKs and libraries, see the [Developer Guide](concepts-developer-guide-device.md) and the [device tutorials](tutorial-connect-device.md) and [service tutorials](tutorial-service.md). |
iot-edge | Tutorial Develop For Linux | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-develop-for-linux.md | Cloud resources: This tutorial walks through the development of an IoT Edge module. An *IoT Edge module*, or sometimes just *module* for short, is a container with executable code. You can deploy one or more modules to an IoT Edge device. Modules perform specific tasks like ingesting data from sensors, cleaning and analyzing data, or sending messages to an IoT hub. For more information, see [Understand Azure IoT Edge modules](iot-edge-modules.md). -When developing IoT Edge modules, it's important to understand the difference between the development machine and the target IoT Edge device where the module will eventually be deployed. The container that you build to hold your module code must match the operating system (OS) of the *target device*. For example, the most common scenario is someone developing a module on a Windows computer intending to target a Linux device running IoT Edge. In that case, the container operating system would be Linux. As you go through this tutorial, keep in mind the difference between the *development machine OS* and the *container OS*. +When developing IoT Edge modules, it's important to understand the difference between the development machine and the target IoT Edge device where the module eventually deploys. The container that you build to hold your module code must match the operating system (OS) of the *target device*. For example, the most common scenario is someone developing a module on a Windows computer intending to target a Linux device running IoT Edge. In that case, the container operating system would be Linux. As you go through this tutorial, keep in mind the difference between the *development machine OS* and the *container OS*. >[!TIP] >If you're using [IoT Edge for Linux on Windows](iot-edge-for-linux-on-windows.md), then the *target device* in your scenario is the Linux virtual machine, not the Windows host. -This tutorial targets devices running IoT Edge with Linux containers. You can use your preferred operating system as long as your development machine runs Linux containers. We recommend using Visual Studio Code to develop with Linux containers, so that's what this tutorial will use. You can use Visual Studio as well, although there are differences in support between the two tools. +This tutorial targets devices running IoT Edge with Linux containers. You can use your preferred operating system as long as your development machine runs Linux containers. We recommend using Visual Studio Code to develop with Linux containers, so that's what this tutorial uses. You can use Visual Studio as well, although there are differences in support between the two tools. The following table lists the supported development scenarios for **Linux containers** in Visual Studio Code and Visual Studio. Use the Docker documentation to install on your development machine: * [Install Docker Desktop for Windows](https://docs.docker.com/docker-for-windows/install/) - * When you install Docker Desktop for Windows, you're asked whether you want to use Linux or Windows containers. This decision can be changed at any time using an easy switch. For this tutorial, we use Linux containers because our modules are targeting Linux devices. For more information, see [Switch between Windows and Linux containers](https://docs.docker.com/docker-for-windows/#switch-between-windows-and-linux-containers). 
+ * When you install Docker Desktop for Windows, you're asked whether you want to use Linux or Windows containers. You can change this decision at any time, using an easy switch. For this tutorial, we use Linux containers because our modules are targeting Linux devices. For more information, see [Switch between Windows and Linux containers](https://docs.docker.com/docker-for-windows/#switch-between-windows-and-linux-containers). * [Install Docker Desktop for Mac](https://docs.docker.com/docker-for-mac/install/) Use the IoT extensions for Visual Studio Code to develop IoT Edge modules. These 1. Install [Visual Studio Code](https://code.visualstudio.com/) on your development machine. -2. Once the installation is finished, open Visual Studio Code and select **View** > **Extensions**. +2. Once the installation finishes, open Visual Studio Code and select **View** > **Extensions**. 3. Search for **Azure IoT Edge** and **Azure IoT Hub**, which are extensions that help you interact with IoT Hub and IoT devices, as well as developing IoT Edge modules. 4. On each extension, select **Install**. -5. When the extensions are done installing, open the command palette by selecting **View** > **Command Palette**. +5. After you install extensions, open the command palette by selecting **View** > **Command Palette**. 6. In the command palette, search for and select **Azure: Sign in**. Follow the prompts to sign in to your Azure account. Use the IoT extensions for Visual Studio Code to develop IoT Edge modules. These 9. At the bottom of the explorer section, expand the collapsed **Azure IoT Hub / Devices** menu. You should see the devices and IoT Edge devices associated with the IoT hub that you selected through the command palette. [!INCLUDE [iot-edge-create-container-registry](includes/iot-edge-create-container-registry.md)] Use the IoT extensions for Visual Studio Code to develop IoT Edge modules. These The Azure IoT Edge extension provides project templates for all supported IoT Edge module languages in Visual Studio Code. These templates have all the files and code that you need to deploy a working module to test IoT Edge, or give you a starting point to customize the template with your own business logic. -For this tutorial, we use the C# module template because it is the most commonly used template. +For this tutorial, we use the C# module template because it's the most commonly used template. ### Create a project template In the Visual Studio Code command palette, search for and select **Azure IoT Edg 1. Provide a solution name: enter a descriptive name for your solution or accept the default **EdgeSolution**. 1. Select a module template: choose **C# Module**. 1. Provide a module name: accept the default **SampleModule**.-1. Provide Docker image repository for the module: an image repository includes the name of your container registry and the name of your container image. Your container image is prepopulated from the name you provided in the last step. Replace **localhost:5000** with the **Login server** value from your Azure container registry. You can retrieve the Login server value from the Overview page of your container registry in the Azure portal. +1. Provide Docker image repository for the module: an image repository includes the name of your container registry and the name of your container image. Your container image is prepopulated from the name you provided in the last step. Replace **localhost:5000** with the **Login server** value from your Azure container registry. 
You can retrieve the **Login server** value from the Overview page of your container registry in the Azure portal. The final image repository looks like: In the Visual Studio Code command palette, search for and select **Azure IoT Edg Once your new solution loads in the Visual Studio Code window, take a moment to familiarize yourself with the files that it created: -* The **.vscode** folder contains a file called **launch.json**, which is used for debugging modules. +* The **.vscode** folder contains a file called **launch.json**, which you use for debugging modules. * The **modules** folder contains a folder for each module in your solution. Right now, that should only be **SampleModule**, or whatever name you gave to the module. The SampleModule folder contains the main program code, the module metadata, and several Docker files.-* The **.env** file holds the credentials to your container registry. These credentials are shared with your IoT Edge device so that it has access to pull the container images. -* The **deployment.debug.template.json** file and **deployment.template.json** file are templates that help you create a deployment manifest. A *deployment manifest* is a file that defines exactly which modules you want deployed on a device, how they should be configured, and how they can communicate with each other and the cloud. The template files use pointers for some values. When you transform the template into a true deployment manifest, the pointers are replaced with values taken from other solution files. +* The **.env** file holds the credentials to your container registry. Your IoT Edge device also has access to these credentials so it can pull the container images. +* The **deployment.debug.template.json** file and **deployment.template.json** file are templates that help you create a deployment manifest. A *deployment manifest* is a file that defines exactly which modules you want deployed on a device. The manifest also determines how you want to configure the modules and how they communicate with each other and the cloud. The template files use pointers for some values. When you transform the template into a true deployment manifest, the pointers are replaced with values taken from other solution files. * Open the **deployment.template.json** file and locate two common placeholders:- * In the `registryCredentials` section, the address is auto-filled from the information you provided when you created the solution. However, the username and password reference the variables stored in the .env file. This configuration is for security, as the .env file is git ignored, but the deployment template is not. - * In the `SampleModule` section, the container image isn't filled in even though you provided the image repository when you created the solution. This placeholder points to the **module.json** file inside the SampleModule folder. If you go to that file, you'll see that the image field does contain the repository, but also a tag value that is made up of the version and the platform of the container. You can iterate the version manually as part of your development cycle, and you select the container platform using a switcher that we introduce later in this section. + * In the `registryCredentials` section, the auto-filled address has information you provided when you created the solution. However, the username and password reference the variables stored in the .env file. This configuration is for security, as the .env file is git ignored, but the deployment template isn't. 
+ * In the `SampleModule` section, the container image isn't auto-filled even though you provided the image repository when you created the solution. This placeholder points to the **module.json** file inside the SampleModule folder. If you go to that file, you see that the image field does contain the repository, but also a tag value that contains the version and the platform of the container. You can iterate the version manually as part of your development cycle, and you select the container platform using a switcher that we introduce later in this section. ### Set IoT Edge runtime version The IoT Edge extension defaults to the latest stable version of the IoT Edge run 1. Choose the runtime version that your IoT Edge devices are running from the list. -After selecting a new runtime version, your deployment manifest is dynamically updated to reflect the change to the runtime module images. +After you select a new runtime version, your deployment manifest becomes dynamically updated to reflect the change to the runtime module images. ### Provide your registry credentials to the IoT Edge agent The environment file stores the credentials for your container registry and shar >[!NOTE] >If you didn't replace the **localhost:5000** value with the login server value from your Azure container registry, in the [**Create a project template**](#create-a-project-template) step, the **.env** file and the `registryCredentials` section of the deployment manifest will be missing. If that section is missing, return to the **Provide Docker image repository for the module** step in the **Create a project template** section to see how to replace the **localhost:5000** value. -The IoT Edge extension tries to pull your container registry credentials from Azure and populate them in the environment file. Check to see if your credentials are already included. If not, add them now: +The IoT Edge extension tries to pull your container registry credentials from Azure and populate them in the environment file. Check to see if your credentials exist. If not, add them now: 1. Open the **.env** file in your module solution. 2. Add the **username** and **password** values that you copied from your Azure container registry. The IoT Edge extension tries to pull your container registry credentials from Az ### Select your target architecture -Currently, Visual Studio Code can develop C# modules for Linux AMD64 and ARM32v7 devices. You need to select which architecture you're targeting with each solution, because that affects how the container is built and runs. The default is Linux AMD64. +Currently, Visual Studio Code can develop C# modules for Linux AMD64 and ARM32v7 devices. You need to select which architecture you're targeting with each solution, because that affects how the container gets built and runs. The default is Linux AMD64. 1. Open the command palette and search for **Azure IoT Edge: Set Default Target Platform for Edge Solution**, or select the shortcut icon at the bottom of the window. :::image type="content" source="./media/tutorial-develop-for-linux/select-architecture.png" alt-text="Screenshot showing the location of the architecture icon at the bottom of the Visual Studio Code window." lightbox="./media/tutorial-develop-for-linux/select-architecture.png"::: -2. In the command palette, select the target architecture from the list of options. For this tutorial, we're using an Ubuntu virtual machine as the IoT Edge device, so will keep the default **amd64**. +2. 
In the command palette, select the target architecture from the list of options. For this tutorial, we're using an Ubuntu virtual machine as the IoT Edge device, so we keep the default **amd64**. ### Review the sample code Each module can have multiple *input* and *output* queues declared in their code The sample C# code that comes with the project template uses the [ModuleClient Class](/dotnet/api/microsoft.azure.devices.client.moduleclient) from the IoT Hub SDK for .NET. -1. Open the **Program.cs** file, which is inside the **modules/SampleModule/** folder. +1. Open the **ModuleBackgroundService.cs** file, which is inside the **modules/SampleModule/** folder. -2. In program.cs, find the **SetInputMessageHandlerAsync** method. +2. In **ModuleBackgroundService.cs**, find the **SetInputMessageHandlerAsync** method. -3. The [SetInputMessageHandlerAsync](/dotnet/api/microsoft.azure.devices.client.moduleclient.setinputmessagehandlerasync) method sets up an input queue to receive incoming messages. Review this method and see how it initializes an input queue called **input1**. + The [SetInputMessageHandlerAsync](/dotnet/api/microsoft.azure.devices.client.moduleclient.setinputmessagehandlerasync) method sets up an input queue to receive incoming messages. Review this method and see how it initializes an input queue called **input1**. :::image type="content" source="./media/tutorial-develop-for-linux/declare-input-queue.png" alt-text="Screenshot showing where to find the input name in the SetInputMessageCallback constructor." lightbox="./media/tutorial-develop-for-linux/declare-input-queue.png":::
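If you want to see the input/output queue pattern outside the generated project, the following is a simplified, self-contained sketch built on the same [ModuleClient Class](/dotnet/api/microsoft.azure.devices.client.moduleclient). It isn't the template's **ModuleBackgroundService.cs** code: the input name `input1` matches the sample, while the output name `output1` and the rest of the structure are illustrative.

```csharp
// A simplified, self-contained sketch of the input/output queue pattern.
// The generated ModuleBackgroundService.cs does the same work inside a
// .NET hosted service; treat this as an illustration, not the template code.
using System;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.Devices.Client;
using Microsoft.Azure.Devices.Client.Transport.Mqtt;

class ModuleSketch
{
    static async Task Main()
    {
        // Create the module client from the environment that the IoT Edge
        // runtime injects into the module's container.
        ITransportSettings[] settings = { new MqttTransportSettings(TransportType.Mqtt_Tcp_Only) };
        ModuleClient moduleClient = await ModuleClient.CreateFromEnvironmentAsync(settings);
        await moduleClient.OpenAsync();

        // Register a handler for the "input1" queue. Messages that a route in
        // the deployment manifest sends to input1 arrive here.
        await moduleClient.SetInputMessageHandlerAsync("input1", OnInput1Async, moduleClient);

        // Keep the module running until the container is stopped.
        await Task.Delay(Timeout.Infinite);
    }

    static async Task<MessageResponse> OnInput1Async(Message message, object userContext)
    {
        var moduleClient = (ModuleClient)userContext;
        string payload = Encoding.UTF8.GetString(message.GetBytes());
        Console.WriteLine($"Received on input1: {payload}");

        // Forward the payload to the "output1" queue so another route can send
        // it on to IoT Hub or to the next module in the pipeline.
        using var forwarded = new Message(Encoding.UTF8.GetBytes(payload));
        await moduleClient.SendEventAsync("output1", forwarded);

        return MessageResponse.Completed;
    }
}
```

The routes in your deployment manifest are what connect one module's output queue to another module's input queue (or to IoT Hub), so the queue names you register in code need to match the names used in those routes.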
You've reviewed the module code and the deployment template to understand some k ### Sign in to Docker -Provide your container registry credentials to Docker so that it can push your container image to be stored in the registry. +Provide your container registry credentials to Docker so that it can push your container image to your registry. -1. Open the Visual Studio Code integrated terminal by selecting **View** > **Terminal**. +1. Open the Visual Studio Code integrated terminal by selecting **Terminal** > **New Terminal** or `Ctrl` + `Shift` + **`** (backtick). 2. Sign in to Docker with the Azure Container Registry (ACR) credentials that you saved after creating the registry. Provide your container registry credentials to Docker so that it can push your c docker login -u <ACR username> -p <ACR password> <ACR login server> ``` - You may receive a security warning recommending the use of `--password-stdin`. While that best practice is recommended for production scenarios, it's outside the scope of this tutorial. For more information, see the [docker login](https://docs.docker.com/engine/reference/commandline/login/#provide-a-password-using-stdin) reference. + You may receive a security warning recommending the use of `--password-stdin`. While that is a recommended best practice for production scenarios, it's outside the scope of this tutorial. For more information, see the [docker login](https://docs.docker.com/engine/reference/commandline/login/#provide-a-password-using-stdin) reference. -3. Log in to Azure Container Registry. [Install Azure CLI](/cli/azure/install-azure-cli) to use the `az` command. +3. Sign in to your Azure container registry. You may need to [install the Azure CLI](/cli/azure/install-azure-cli) to use the `az` command. This command asks for the user name and password for your container registry, which you can find in **Settings** > **Access keys**. ```azurecli az acr login -n <ACR registry name> ``` >[!TIP]->If you get logged out at any point in this tutorial, repeat the Docker and Azure Container Registry sign in steps above to continue. +>If you get logged out at any point in this tutorial, repeat the Docker and Azure Container Registry sign in steps to continue. ### Build and push Visual Studio Code now has access to your container registry, so it's time to tu 1. In the Visual Studio Code explorer, right-click the **deployment.template.json** file and select **Build and Push IoT Edge Solution**. - :::image type="content" source="./media/tutorial-develop-for-linux/build-and-push-modules.png" alt-text="Screenshot showing the right-click menu option Build and Push IoT Edge Solution." lightbox="./media/tutorial-develop-for-linux/build-and-push-modules.png"::: + :::image type="content" source="./media/tutorial-develop-for-linux/build-and-push-modules.png" alt-text="Screenshot showing the right-click menu option Build and Push I o T Edge Solution." lightbox="./media/tutorial-develop-for-linux/build-and-push-modules.png"::: The build and push command starts three operations. First, it creates a new folder in the solution called **config** that hold |