Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
active-directory | On Premises Migrate Microsoft Identity Manager | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/on-premises-migrate-microsoft-identity-manager.md | At this point, the MIM Sync server is no longer needed. ## Import a connector configuration - 1. Install the ECMA Connector host and provisioning agent on a Windows Server, using the [provisioning users into SQL based applications](on-premises-sql-connector-configure.md#3-install-and-configure-the-azure-ad-connect-provisioning-agent) or [provisioning users into LDAP directories](on-premises-ldap-connector-configure.md#download-install-and-configure-the-azure-ad-connect-provisioning-agent-package) articles. + 1. Install the ECMA Connector host and provisioning agent on a Windows Server, using the [provisioning users into SQL based applications](on-premises-sql-connector-configure.md#3-install-and-configure-the-azure-ad-connect-provisioning-agent) or [provisioning users into LDAP directories](on-premises-ldap-connector-configure.md#install-and-configure-the-azure-ad-connect-provisioning-agent) articles. 1. Sign in to the Windows server as the account that the Azure AD ECMA Connector Host runs as. 1. Change to the directory C:\Program Files\Microsoft ECMA2host\Service\ECMA. Ensure there are one or more DLLs already present in that directory. Those DLLs correspond to Microsoft-delivered connectors. 1. Copy the MA DLL for your connector, and any of its prerequisite DLLs, to that same ECMA subdirectory of the Service directory. (A PowerShell sketch of this copy step appears after this table.) |
active-directory | Concept Continuous Access Evaluation Strict Enforcement | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-continuous-access-evaluation-strict-enforcement.md | + + Title: Continuous access evaluation strict location enforcement in Azure AD +description: Responding to changes in user state faster with continuous access evaluation strict location enforcement in Azure AD +++++ Last updated : 07/10/2023+++++++++# Strictly enforce location policies using continuous access evaluation (preview) ++Strictly enforce location policies is a new enforcement mode for continuous access evaluation (CAE), used in Conditional Access policies. This new mode provides protection for resources, immediately stopping access if the IP address detected by the resource provider isn't allowed by Conditional Access policy. This option is the most secure mode of CAE location enforcement, and requires that administrators understand the routing of authentication and access requests in their network environment. See our [Introduction to continuous access evaluation](concept-continuous-access-evaluation.md) for a review of how CAE-capable clients and resource providers, like the Outlook email client and Exchange Online, evaluate location changes. ++| Location enforcement mode | Recommended network topology | If the IP address detected by the Resource isn't in the allowed list | Benefits | Configuration | +| | | | | | +| Standard (Default) | Suitable for all topologies | A short-lived token is issued only if Azure AD detects an allowed IP address. Otherwise, access is blocked | Falls back to the pre-CAE location detection mode in split tunnel network deployments where CAE enforcement would affect productivity. CAE still enforces other events and policies. | None (Default Setting) | +| Strictly enforced location policies | Egress IP addresses are dedicated and enumerable for both Azure AD and all resource provider traffic | Access blocked | Most secure, but requires well understood network paths | 1. Test IP address assumptions with a small population <br><br> 2. Enable "Strictly enforce" under Session controls | ++> [!NOTE] +> The **IP address (seen by resource)** is blank when that IP matches the IP address seen by Azure AD. ++## Configure strictly enforced location policies ++### Step 1 - Configure a Conditional Access location based policy for your target users ++Before administrators create a Conditional Access policy requiring strict location enforcement, they must be comfortable using policies like the one described in [Conditional Access location based policies](howto-conditional-access-policy-location.md). Policies like this one should be tested with a subset of users before proceeding to the next step. Administrators can avoid discrepancies between the allowed and actual IP addresses seen by Azure AD during authentication by testing before enabling strict enforcement. ++### Step 2 - Test policy on a small subset of users ++After enabling policies requiring strict location enforcement on a subset of test users, validate your testing experience using the filter **IP address (seen by resource)** in the Azure AD Sign-in logs. This validation allows administrators to find scenarios where strict location enforcement may block users with an unallowed IP seen by the CAE-enabled resource provider.
++ - Admins must ensure all authentication traffic towards Azure AD and access traffic to resource providers comes from known, dedicated egress IPs. + - For example, Exchange Online, Teams, SharePoint Online, and Microsoft Graph + - Before administrators turn on Conditional Access policies requiring strict location enforcement, they should ensure that all IP addresses from which their users can access Azure AD and resource providers are included in their [IP-based named locations](location-condition.md#ipv4-and-ipv6-address-ranges). ++If administrators don't perform this validation, their users may be negatively impacted. If traffic to Azure AD or a CAE supported resource is through a shared or undefinable egress IP, don't enable strict location enforcement in your Conditional Access policies. ++### Step 3 - Identify IP addresses that should be added to your named locations ++If the filter search of **IP address (seen by resource)** in the Azure AD Sign-in logs isn't empty, you might have a split-tunnel network configuration. To ensure your users aren't accidentally locked out by policies requiring strict location enforcement, administrators should: ++- Investigate any IP addresses identified in the Sign-in logs. +- Add public IP addresses associated with known organizational egress points to their defined [named locations](location-condition.md#named-locations). ++ [Screenshot of the Sign-in logs showing the IP address (seen by resource) filter](./media/concept-continuous-access-evaluation-strict-enforcement/sign-in-logs-ip-address-seen-by-resource.png#lightbox) ++The following screenshot shows an example of a client's access to a resource being blocked. This block is due to a policy requiring CAE strict location enforcement being triggered, revoking the client's session. ++This behavior can be verified in the sign-in logs. Look for **IP address (seen by resource)** and investigate adding this IP to [named locations](location-condition.md#named-locations) if users experience unexpected blocks from Conditional Access. ++The **Conditional Access Policy details** tab provides more details about blocked sign-in events. ++### Step 4 - Continue deployment ++Repeat steps 2 and 3 with expanding groups of users until policies requiring strict location enforcement are applied across your target user base. Roll out carefully to avoid impacting user experience. ++## Troubleshooting with Sign-in logs ++Administrators can investigate the Sign-in logs to find events where **IP address (seen by resource)** is populated. ++1. Sign in to the **Azure portal** as at least a Global Reader. +1. Browse to **Azure Active Directory** > **Sign-ins**. +1. Find events to review by adding filters and columns to filter out unnecessary information. + 1. Add the **IP address (seen by resource)** column and filter out any blank items to narrow the scope. ++ [Screenshot of the Sign-in logs showing the IP address (seen by resource) filter](./media/concept-continuous-access-evaluation-strict-enforcement/sign-in-logs-ip-address-seen-by-resource.png#lightbox) ++In the following examples, the **IP address (seen by resource)** filter isn't empty: ++### Initial authentication ++1. Authentication succeeds using a CAE token. ++1. The **IP address (seen by resource)** is different from the IP address seen by Azure AD. Although the IP address seen by the resource is known, there's no enforcement until the resource redirects the user for reevaluation of the IP address seen by the resource. ++1. Azure AD authentication is successful because strict location enforcement isn't applied at the resource level. ++### Resource redirect for reevaluation ++1. Authentication fails and a CAE token isn't issued.
++1. **IP address (seen by resource)** is different from the IP seen by Azure AD. ++1. Authentication isn't successful because **IP address (seen by resource)** isn't a known [named location](location-condition.md#named-locations) in Conditional Access. ++## Next steps ++- [Continuous access evaluation in Azure AD](concept-continuous-access-evaluation.md) +- [Claims challenges, claims requests, and client capabilities](../develop/claims-challenge.md) +- [How to use continuous access evaluation enabled APIs in your applications](../develop/app-resilience-continuous-access-evaluation.md) +- [Monitor and troubleshoot sign-ins with continuous access evaluation](howto-continuous-access-evaluation-troubleshoot.md#potential-ip-address-mismatch-between-azure-ad-and-resource-provider) |
active-directory | Reference Claims Mapping Policy Type | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-claims-mapping-policy-type.md | The following claims are in the restricted claim set for a JWT. - `acr` - `acrs` - `actor`+- `actortoken` - `ageGroup` - `aio` - `altsecid` The following claims are in the restricted claim set for a JWT. - `appctxsender` - `appid` - `appidacr`+- `assertion` - `at_hash`+- `aud` +- `auth_data` - `auth_time`+- `authorization_code` - `azp` - `azpacr`+- `bk_claim` +- `bk_enclave` +- `bk_pub` +- `brk_client_id` +- `brk_redirect_uri` - `c_hash` - `ca_enf` - `ca_policy_result`-- `capolids_latebind` - `capolids`+- `capolids_latebind` - `cc`+- `cert_token_use` +- `child_client_id` +- `child_redirect_uri` +- `client_id` +- `client_ip` +- `cloud_graph_host_name` +- `cloud_instance_host_name` +- `cloud_instance_name` +- `CloudAssignedMdmId` - `cnf` - `code`-- `controls_auds` - `controls`+- `controls_auds` - `credential_keys`+- `csr` +- `csr_type` - `ctry` - `deviceid`+- `dns_names` - `domain_dns_name` - `domain_netbios_name` - `e_exp` - `email` - `endpoint` - `enfpolids`+- `exp` - `expires_on`+- `extn. as prefix` - `fido_auth_data`-- `fwd_appidacr`+- `fido_ver` - `fwd`+- `fwd_appidacr` +- `grant_type` - `graph` - `group_sids` - `groups` - `hasgroups`+- `hash_alg` - `haswids` - `home_oid` - `home_puid` - `home_tid`+- `iat` - `identityprovider` - `idp` - `idtyp` The following claims are in the restricted claim set for a JWT. - `inviteTicket` - `ipaddr` - `isbrowserhostedapp`+- `iss` - `isViral`+- `jwk` +- `key_id` +- `key_type` - `login_hint` - `mam_compliance_url` - `mam_enrollment_url` The following claims are in the restricted claim set for a JWT. - `mdm_compliance_url` - `mdm_enrollment_url` - `mdm_terms_of_use_url`+- `msgraph_host` - `msproxy` - `nameid`+- `nbf` +- `netbios_name` - `nickname` - `nonce` - `oid` The following claims are in the restricted claim set for a JWT. - `onprem_sid` - `openid2_id` - `origin_header`+- `password` - `platf` - `polids` - `pop_jwk` - `preferred_username`+- `previous_refresh_token` - `primary_sid` - `prov_data` - `puid` - `pwd_exp` - `pwd_url` - `rdp_bt`+- `redirect_uri` +- `refresh_token` - `refresh_token_issued_on` - `refreshtoken`+- `request_nonce` +- `resource` - `rh`+- `role` - `roles`+- `rp_id` - `rt_type`+- `scope` - `scp` - `secaud` - `sid` - `sid`+- `signature` - `signin_state` - `source_anchor` - `src1` The following claims are in the restricted claim set for a JWT. - `tbidv2` - `tenant_ctry` - `tenant_display_name`+- `tenant_id` - `tenant_region_scope` - `tenant_region_sub_scope` - `thumbnail_photo` The following claims are in the restricted claim set for a JWT. - `ttr` - `unique_name` - `upn`+- `user_agent` - `user_setting_sync_url`+- `username` - `uti` - `ver` - `verified_primary_email` - `verified_secondary_email` - `vnet`+- `vsm_binding_key` - `wamcompat_client_info` - `wamcompat_id_token` - `wamcompat_scopes` - `wids`+- `win_ver` +- `x5c_ca` - `xcb2b_rclient` - `xcb2b_rcloud` - `xcb2b_rtenant` - `ztdid` + > [!NOTE] > Any claim starting with `xms_` is restricted. The following claims are in the restricted claim set for a JWT. The following table lists the SAML claims that are in the restricted claim set. 
-| Claim type (URI) | -| -- | -|`http://schemas.microsoft.com/2012/01/devicecontext/claims/ismanaged`| -|`http://schemas.microsoft.com/2014/02/devicecontext/claims/isknown`| -|`http://schemas.microsoft.com/2014/03/psso`| -|`http://schemas.microsoft.com/2014/09/devicecontext/claims/iscompliant`| -|`http://schemas.microsoft.com/claims/authnmethodsreferences`| -|`http://schemas.microsoft.com/claims/groups.link`| -|`http://schemas.microsoft.com/identity/claims/accesstoken`| -|`http://schemas.microsoft.com/identity/claims/acct`| -|`http://schemas.microsoft.com/identity/claims/agegroup`| -|`http://schemas.microsoft.com/identity/claims/aio`| -|`http://schemas.microsoft.com/identity/claims/identityprovider`| -|`http://schemas.microsoft.com/identity/claims/objectidentifier`| -|`http://schemas.microsoft.com/identity/claims/openid2_id`| -|`http://schemas.microsoft.com/identity/claims/puid`| -|`http://schemas.microsoft.com/identity/claims/tenantid`| -|`http://schemas.microsoft.com/identity/claims/xms_et`| -|`http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationinstant`| -|`http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationmethod`| -|`http://schemas.microsoft.com/ws/2008/06/identity/claims/expiration`| -|`http://schemas.microsoft.com/ws/2008/06/identity/claims/groups`| -|`http://schemas.microsoft.com/ws/2008/06/identity/claims/role`| -|`http://schemas.microsoft.com/ws/2008/06/identity/claims/wids`| -|`http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier`| -| `http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname` | -| `http://schemas.microsoft.com/ws/2008/06/identity/claims/primarysid` | -| `http://schemas.microsoft.com/ws/2008/06/identity/claims/primarygroupsid` | -| `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/sid` | -| `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/x500distinguishedname` | -| `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn` | -| `http://schemas.microsoft.com/ws/2008/06/identity/claims/role` | +Restricted Claim type (URI): +- `http://schemas.microsoft.com/2012/01/devicecontext/claims/ismanaged` +- `http://schemas.microsoft.com/2014/02/devicecontext/claims/isknown` +- `http://schemas.microsoft.com/2014/03/psso` +- `http://schemas.microsoft.com/2014/09/devicecontext/claims/iscompliant` +- `http://schemas.microsoft.com/claims/authnmethodsreferences` +- `http://schemas.microsoft.com/claims/groups.link` +- `http://schemas.microsoft.com/identity/claims/accesstoken` +- `http://schemas.microsoft.com/identity/claims/acct` +- `http://schemas.microsoft.com/identity/claims/agegroup` +- `http://schemas.microsoft.com/identity/claims/aio` +- `http://schemas.microsoft.com/identity/claims/identityprovider` +- `http://schemas.microsoft.com/identity/claims/objectidentifier` +- `http://schemas.microsoft.com/identity/claims/openid2_id` +- `http://schemas.microsoft.com/identity/claims/puid` +- `http://schemas.microsoft.com/identity/claims/scope` +- `http://schemas.microsoft.com/identity/claims/tenantid` +- `http://schemas.microsoft.com/identity/claims/xms_et` +- `http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationinstant` +- `http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationmethod` +- `http://schemas.microsoft.com/ws/2008/06/identity/claims/confirmationkey` +- `http://schemas.microsoft.com/ws/2008/06/identity/claims/denyonlyprimarygroupsid` +- `http://schemas.microsoft.com/ws/2008/06/identity/claims/denyonlyprimarysid` +- 
`http://schemas.microsoft.com/ws/2008/06/identity/claims/denyonlywindowsdevicegroup` +- `http://schemas.microsoft.com/ws/2008/06/identity/claims/expiration` +- `http://schemas.microsoft.com/ws/2008/06/identity/claims/expired` +- `http://schemas.microsoft.com/ws/2008/06/identity/claims/groups` +- `http://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid` +- `http://schemas.microsoft.com/ws/2008/06/identity/claims/ispersistent` +- `http://schemas.microsoft.com/ws/2008/06/identity/claims/primarygroupsid` +- `http://schemas.microsoft.com/ws/2008/06/identity/claims/primarysid` +- `http://schemas.microsoft.com/ws/2008/06/identity/claims/role` +- `http://schemas.microsoft.com/ws/2008/06/identity/claims/role` +- `http://schemas.microsoft.com/ws/2008/06/identity/claims/samlissuername` +- `http://schemas.microsoft.com/ws/2008/06/identity/claims/wids` +- `http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname` +- `http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsdeviceclaim` +- `http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsdevicegroup` +- `http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsfqbnversion` +- `http://schemas.microsoft.com/ws/2008/06/identity/claims/windowssubauthority` +- `http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsuserclaim` +- `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/authentication` +- `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/authorizationdecision` +- `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/denyonlysid` +- `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress` +- `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name` +- `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier` +- `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/privatepersonalidentifier` +- `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/sid` +- `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/spn` +- `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn` +- `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/x500distinguishedname` +- `http://schemas.xmlsoap.org/ws/2009/09/identity/claims/actor` + These claims are restricted by default, but aren't restricted if you [set the AcceptMappedClaims property](saml-claims-customization.md) to `true` in your app manifest *or* have a [custom signing key](saml-claims-customization.md): (A sketch of creating a claims mapping policy with Microsoft Graph PowerShell appears after this table.) |
active-directory | Concept Azure Ad Register | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/concept-azure-ad-register.md | The goal of Azure AD registered - also known as Workplace joined - devices is to Azure AD registered devices are signed in to using a local account like a Microsoft account on a Windows 10 or newer device. These devices have an Azure AD account for access to organizational resources. Access to resources in the organization can be limited based on that Azure AD account and Conditional Access policies applied to the device identity. -Administrators can further control these Azure AD registered devices by enrolling the device(s) into Mobile Device Management (MDM) tools like Microsoft Intune. MDM provides a means to enforce organization-required configurations like requiring storage to be encrypted, password complexity, and security software kept updated. +Azure AD registration is not the same as device enrollment. If administrators permit users to enroll their devices, organizations can further control these Azure AD registered devices by enrolling the device(s) into Mobile Device Management (MDM) tools like Microsoft Intune. MDM provides a means to enforce organization-required configurations like requiring storage to be encrypted, password complexity, and security software kept updated. Azure AD registration can be accomplished when accessing a work application for the first time or manually using the Windows 10 or Windows 11 Settings menu. |
active-directory | How To Create Customer Tenant Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-create-customer-tenant-portal.md | In this article, you learn how to: If you're not sure which directory contains your customer tenant, you can find the tenant name and ID both in the Microsoft Entra admin center and in the Azure portal. -1. To make sure you're using the directory that contains your customer tenant, select the **Directories + subscriptions** icon in the toolbar. +1. To make sure you're using the directory that contains your customer tenant, select the **Directories + subscriptions** icon :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the toolbar. :::image type="content" source="media/how-to-create-customer-tenant-portal/directories-subscription.png" alt-text="Screenshot of the Directories + subscriptions icon."::: |
active-directory | How To Customize Branding Customers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-customize-branding-customers.md | The following image displays the neutral default branding of the customer tenant Before you customize any settings, the neutral default branding will appear on your sign-in and sign-up pages. You can customize this default experience with a custom background image or color, favicon, layout, header, and footer. You can also upload a [custom CSS](/azure/active-directory/fundamentals/reference-company-branding-css-template). 1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/).-1. If you have access to multiple tenants, use the **Directories + subscriptions** filter in the top menu to switch to the customer tenant you created earlier. +1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the customer tenant you created earlier. 1. In the search bar, type and select **Company branding**. 1. Under **Default sign-in**, select **Edit**. Your customer tenant name replaces the Microsoft banner logo in the neutral defa :::image type="content" source="media/how-to-customize-branding-customers/tenant-name.png" alt-text="Screenshot of the tenant name." lightbox="media/how-to-customize-branding-customers/tenant-name.png"::: 1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/).-1. If you have access to multiple tenants, use the **Directories + subscriptions** filter in the top menu to switch to the customer tenant you created earlier. +1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the customer tenant you created earlier. 1. In the search bar, type and select **Properties**. 1. Edit the **Name** field. Your customer tenant name replaces the Microsoft banner logo in the neutral defa When no longer needed, you can remove the sign-in customization from your customer tenant via the Azure portal. 1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/).-1. If you have access to multiple tenants, use the **Directories + subscriptions** filter in the top menu to switch to the customer tenant you created earlier. +1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the customer tenant you created earlier. 1. In the search bar, type and select **Company branding**. 1. Under **Default sign-in experience**, select **Edit**. 1. Remove the elements you no longer need. |
active-directory | How To Customize Languages Customers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-customize-languages-customers.md | You can create a personalized sign-in experience for users who sign in using a s ## Add browser language under Company branding 1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/).-1. If you have access to multiple tenants, use the **Directories + subscriptions** filter in the top menu to switch to the customer tenant you created earlier. +1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the customer tenant you created earlier. 1. In the search bar, type and select **Company branding**. 1. Under **Browser language customizations**, select **Add browser language**. The following languages are supported in the customer tenant: Language customization in the customer tenant allows your user flow to accommodate different languages to suit your customer's needs. You can use languages to modify the strings displayed to your customers as part of the attribute collection process during sign-up. 1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/).-2. If you have access to multiple tenants, use the **Directories + subscriptions** filter in the top menu to switch to the customer tenant you created earlier. +2. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the customer tenant you created earlier. 3. In the left menu, select **Azure Active Directory** > **External Identities**. 4. Select **User flows**. 5. Select the user flow that you want to enable for translations. |
active-directory | How To Define Custom Attributes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-define-custom-attributes.md | Follow these steps to add sign-up attributes to a user flow you've already creat 1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/). -1. If you have access to multiple tenants, use the **Directories + subscriptions** filter in the top menu to switch to your customer tenant. +1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to your customer tenant. 1. In the left pane, select **Azure Active Directory** > **External Identities** > **User flows**. |
active-directory | How To Enable Password Reset Customers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-enable-password-reset-customers.md | The following screenshots show the self-service password reset flow. From the app ## Enable self-service password reset for customers 1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/).-1. If you have access to multiple tenants, use the **Directories + subscriptions** filter in the top menu to switch to the customer tenant you created earlier. +1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the customer tenant you created earlier. 1. In the navigation pane, select **Azure Active Directory**. 1. Select **External Identities** > **User flows**. 1. From the list of **User flows**, select the user flow you want to enable SSPR for. |
active-directory | How To Identity Protection Customers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-identity-protection-customers.md | An administrator can choose to dismiss a user's risk in the Microsoft Entra admi 1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com). -1. Make sure you're using the directory that contains your Azure AD customer tenant: Select the Directories + subscriptions icon  in the toolbar and find your customer tenant in the list. If it's not the current directory, select **Switch**. +1. Make sure you're using the directory that contains your Azure AD customer tenant: Select the Directories + subscriptions icon :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the toolbar and find your customer tenant in the list. If it's not the current directory, select **Switch**. 1. Browse to **Azure Active Directory** > **Protect & secure** > **Security Center**. |
active-directory | How To Manage Admin Accounts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-manage-admin-accounts.md | In Azure Active Directory (Azure AD) for customers, a customer tenant represents To create a new admin account, follow these steps: 1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/) with Global Administrator or Privileged Role Administrator permissions.-1. Make sure you're using your customer tenant. Select the **Directories + subscriptions** icon in the toolbar. +1. Make sure you're using your customer tenant. Select the **Directories + subscriptions** icon :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the toolbar. 1. On the **Portal settings | Directories + subscriptions** page, find your customer tenant in the **Directory name** list, and then select **Switch**. 1. Under **Azure Active Directory**, select **Users** > **All users**. 1. Select **New user** > **Create new user**. The admin is created and added to your customer tenant. It's preferable to have You can also invite a new guest user to manage your tenant. To invite an admin, follow these steps: 1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/) with Global Administrator or Privileged Role Administrator permissions.-1. Make sure you're using your customer tenant. Select the **Directories + subscriptions** icon in the toolbar. +1. Make sure you're using your customer tenant. Select the **Directories + subscriptions** icon :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the toolbar. 1. On the **Portal settings | Directories + subscriptions** page, find your customer tenant in the **Directory name** list, and then select **Switch**. 1. Under **Azure Active Directory**, select **Users** > **All users**. 1. Select **New user** > **Invite external user**. An invitation email is sent to the user. The user needs to accept the invitation You can assign a role when you create a user or invite a guest user. You can add a role, change the role, or remove a role for a user: 1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/) with Global Administrator or Privileged Role Administrator permissions.-1. Make sure you're using your customer tenant. Select the **Directories + subscriptions** icon in the toolbar. +1. Make sure you're using your customer tenant. Select the **Directories + subscriptions** icon :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the toolbar. 1. On the **Portal settings | Directories + subscriptions** page, find your customer tenant in the **Directory name** list, and then select **Switch**. 1. Under **Azure Active Directory**, select **Users** > **All users**. 1. Select the user you want to change the roles for. Then select **Assigned roles**. You can assign a role when you create a user or invite a guest user. You can add If you need to remove a role assignment from a user, follow these steps: 1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/) with Global Administrator or Privileged Role Administrator permissions.-1. Make sure you're using your customer tenant. Select the **Directories + subscriptions** icon in the toolbar. +1. Make sure you're using your customer tenant. 
Select the **Directories + subscriptions** icon :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the toolbar. 1. On the **Portal settings | Directories + subscriptions** page, find your customer tenant in the **Directory name** list, and then select **Switch**. 1. Under **Azure Active Directory**, select **Users** > **All users**. 1. Select the user you want to change the roles for. Then select **Assigned roles**. If you need to remove a role assignment from a user, follow these steps: As part of an auditing process, you typically review which users are assigned to specific roles in your customer directory. Use the following steps to audit which users are currently assigned privileged roles. 1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/) with Global Administrator or Privileged Role Administrator permissions.-1. Make sure you're using your customer tenant. Select the **Directories + subscriptions** icon in the toolbar. +1. Make sure you're using your customer tenant. Select the **Directories + subscriptions** icon :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the toolbar. 1. On the **Portal settings | Directories + subscriptions** page, find your customer tenant in the **Directory name** list, and then select **Switch**. 1. Under **Azure Active Directory**, select **Roles & admins** > **Roles & admins**. 2. Select a role, such as **Global administrator**. The **Assignments** page lists the users with that role. As part of an auditing process, you typically review which users are assigned to To delete an existing user, you must have a *Global administrator* role assignment. Global admins can delete any user, including other admins. *User administrators* can delete any non-admin user. 1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/) with Global Administrator or Privileged Role Administrator permissions.-1. Make sure you're using your customer tenant. Select the **Directories + subscriptions** icon in the toolbar. +1. Make sure you're using your customer tenant. Select the **Directories + subscriptions** icon :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the toolbar. 1. On the **Portal settings | Directories + subscriptions** page, find your customer tenant in the **Directory name** list, and then select **Switch**. 1. Under **Azure Active Directory**, select **Users** > **All users**. 1. Select the user you want to delete. |
active-directory | How To Manage Customer Accounts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-manage-customer-accounts.md | To add or delete users, your account must be assigned the *User administrator* o ## Create a customer account 1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/) with Global Administrator or Privileged Role Administrator permissions.-1. Make sure you're using your customer tenant. Select the **Directories + subscriptions** icon in the toolbar. +1. Make sure you're using your customer tenant. Select the **Directories + subscriptions** icon :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the toolbar. 1. On the **Portal settings | Directories + subscriptions** page, find your customer tenant in the **Directory name** list, and then select **Switch**. 1. Under **Azure Active Directory**, select **Users** > **All users**. 1. Select **New user** > **Create new user**. As an administrator, you can reset a user's password, if the user forgets their To reset a customer's password: 1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/) with Global Administrator or Privileged Role Administrator permissions.-1. Make sure you're using your customer tenant. Select the **Directories + subscriptions** icon in the toolbar. +1. Make sure you're using your customer tenant. Select the **Directories + subscriptions** icon :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the toolbar. 1. On the **Portal settings | Directories + subscriptions** page, find your customer tenant in the **Directory name** list, and then select **Switch**. 1. Under **Azure Active Directory**, select **Users** > **All users**. 1. Search for and select the user that needs the reset, and then select **Reset Password**. To reset a customer's password: ## Delete a customer account 1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/) with Global Administrator or Privileged Role Administrator permissions.-1. Make sure you're using your customer tenant. Select the **Directories + subscriptions** icon in the toolbar. +1. Make sure you're using your customer tenant. Select the **Directories + subscriptions** icon :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the toolbar. 1. On the **Portal settings | Directories + subscriptions** page, find your customer tenant in the **Directory name** list, and then select **Switch**. 1. Under **Azure Active Directory**, select **Users** > **All users**. 1. Search for and select the user to delete. |
active-directory | How To Multifactor Authentication Customers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-multifactor-authentication-customers.md | Create a Conditional Access policy in your customer tenant that prompts users fo 1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as a Conditional Access Administrator, Security Administrator, or Global Administrator. -1. Make sure you're using the directory that contains your Azure AD customer tenant: Select the Directories + subscriptions icon in the toolbar and find your customer tenant in the list. If it's not the current directory, select **Switch**. +1. Make sure you're using the directory that contains your Azure AD customer tenant: Select the **Directories + subscriptions** icon :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the toolbar and find your customer tenant in the list. If it's not the current directory, select **Switch**. 1. Browse to **Azure Active Directory** > **Protect & secure** > **Security Center**. Enable the email one-time passcode authentication method in your customer tenant ## Test the sign-in In a private browser, open your application and select **Sign-in**. You should be prompted for another authentication method. |
active-directory | How To Register Ciam App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-register-ciam-app.md | description: Learn about how to register an app in the customer tenant. -+ Previously updated : 05/09/2023 Last updated : 07/12/2023 The following steps show you how to register your app in the admin center: 1. If you have access to multiple tenants, make sure you use the directory that contains your Azure AD for customers tenant: - 1. Select the **Directories + subscriptions** icon in the portal toolbar. + 1. Select the **Directories + subscriptions** icon :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the portal toolbar. 1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD for customers directory in the **Directory name** list, and then select **Switch**. This app signs in users. You can add delegated permissions to it, by following t [!INCLUDE [grant permission for signing in users](../customers/includes/register-app/grant-api-permission-sign-in.md)] -### If you want to call an API follow the steps below (optional): +### To call an API, follow the steps below (optional): [!INCLUDE [grant permissions for calling an API](../customers/includes/register-app/grant-api-permission-call-api.md)] If you'd like to learn how to expose the permissions by adding a link, go to the [Web API](how-to-register-ciam-app.md?tabs=webapi) section. The following steps show you how to register your app in the admin center: 1. If you have access to multiple tenants, make sure you use the directory that contains your Azure AD for customers tenant: - 1. Select the **Directories + subscriptions** icon in the portal toolbar. + 1. Select the **Directories + subscriptions** icon :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the portal toolbar. 1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD for customers directory in the **Directory name** list, and then select **Switch**. This app signs in users. You can add delegated permissions to it, by following t ### Create a client secret [!INCLUDE [add a client secret](../customers/includes/register-app/add-app-client-secret.md)] -### If you want to call an API follow the steps below (optional): +### To call an API, follow the steps below (optional): [!INCLUDE [grant permissions for calling an API](../customers/includes/register-app/grant-api-permission-call-api.md)] ## Next steps This app signs in users. You can add delegated permissions to it, by following t [!INCLUDE [expose permissions](../customers/includes/register-app/add-api-scopes.md)] -### If you want to add app roles follow the steps below (optional): +### To add app roles, follow the steps below (optional): [!INCLUDE [configure app roles](../customers/includes/register-app/add-app-role.md)] The following steps show you how to register your app in the admin center: 1. If you have access to multiple tenants, make sure you use the directory that contains your Azure AD for customers tenant: - 1. Select the **Directories + subscriptions** icon in the portal toolbar. + 1.
On the **Portal settings | Directories + subscriptions** page, find your Azure AD for customers directory in the **Directory name** list, and then select **Switch**. The following steps show you how to register your app in the admin center: ### Add delegated permissions [!INCLUDE [grant permission for signing in users](../customers/includes/register-app/grant-api-permission-sign-in.md)] -### If you want to call an API follow the steps below (optional): +### To call an API, follow the steps below (optional): [!INCLUDE [grant permissions for calling an API](../customers/includes/register-app/grant-api-permission-call-api.md)] ## Next steps The following steps show you how to register your app in the admin center: [!INCLUDE [register daemon app](../customers/includes/register-app/register-daemon-app.md)] -### If you want to call an API follow the steps below (optional) +### To call an API, follow the steps below (optional) A daemon app signs in as itself using the [OAuth 2.0 client credentials flow](/azure/active-directory/develop/v2-oauth2-client-creds-grant-flow). You add application permissions, which are required by apps that authenticate as themselves: [!INCLUDE [register daemon app](../customers/includes/register-app/grant-api-permissions-app-permissions.md)] (A sketch of this client credentials token request appears after this table.) A daemon app signs in as itself using the [OAuth 2.0 client credentials flow](/a - Learn more about a [daemon app that calls a web API in the daemon's name](/azure/active-directory/develop/authentication-flows-app-scenarios#daemon-app-that-calls-a-web-api-in-the-daemons-name) - [Create a sign-up and sign-in user flow](how-to-user-flow-sign-up-sign-in-customers.md)++# [Microsoft Graph API](#tab/graphapi) +## How to register a Microsoft Graph API application ++### Grant API access to your application ++### Create a client secret ++## Next steps +- Learn how to manage [Azure Active Directory for customers resources with Microsoft Graph](microsoft-graph-operations.md) |
active-directory | How To User Flow Sign Up Sign In Customers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-user-flow-sign-up-sign-in-customers.md | Follow these steps to create a user flow a customer can use to sign in or sign u 1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/). -1. If you have access to multiple tenants, use the **Directories + subscriptions** filter in the top menu to switch to your customer tenant. +1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to your customer tenant. 1. In the left pane, select **Azure Active Directory** > **External Identities** > **User flows**. |
active-directory | Microsoft Graph Operations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/microsoft-graph-operations.md | The following steps show you how to register your app in the Microsoft Entra adm 1. If you have access to multiple tenants, make sure you use the directory that contains your Azure AD for customers tenant: - 1. Select the **Directories + subscriptions** icon in the portal toolbar. + 1. Select the **Directories + subscriptions** icon :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the portal toolbar. 1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD for customers directory in the **Directory name** list, and then select **Switch**. |
active-directory | Quickstart Tenant Setup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/quickstart-tenant-setup.md | In this quickstart, you'll learn how to create a tenant with customer configurat If you're not going to continue to use this tenant, you can delete it using the following steps: -1. Ensure that you're signed in to the directory that you want to delete through the **Directory + subscription** filter in the Azure portal. Switch to the target directory if needed. +1. Ensure that you're signed in to the directory that you want to delete through the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the Azure portal. Switch to the target directory if needed. 1. From the left menu, select **Azure Active Directory** > **Overview**. 1. Select **Manage tenants** at the top of the page. 1. Select the tenant you want to delete, and then select **Delete**. |
active-directory | Whats New Docs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/whats-new-docs.md | + + Title: "What's new in Azure Active Directory for customers" +description: "New and updated documentation for the Azure Active Directory for customers documentation." Last updated : 07/12/2023++++++++++# Azure Active Directory for customers: What's new ++Welcome to what's new in Azure Active Directory for customers documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. ++## June 2023 ++### New articles ++- [Quickstart: Create a tenant (preview)](quickstart-tenant-setup.md) +- [Tutorial: Create a .NET MAUI shell app](tutorial-mobile-app-maui-sign-in-prepare-app.md) +- [Tutorial: Register and configure .NET MAUI mobile app in a customer tenant](tutorial-mobile-app-maui-sign-in-prepare-tenant.md) +- [Tutorial: Sign in users in .NET MAUI shell app](tutorial-mobile-app-maui-sign-in-sign-out.md) +- [Use role-based access control in your Node.js web application](how-to-web-app-role-based-access-control.md) +- [Tutorial: Handle authentication flows in a React single-page app](how-to-single-page-application-react-configure-authentication.md) +- [Tutorial: Create a .NET MAUI app](tutorial-desktop-app-maui-sign-in-prepare-app.md) +- [Tutorial: Register and configure .NET MAUI app in a customer tenant](tutorial-desktop-app-maui-sign-in-prepare-tenant.md) +- [Tutorial: Sign in users in .NET MAUI app](tutorial-desktop-app-maui-sign-in-sign-out.md) ++### Updated articles ++- [What is Microsoft Entra External ID for customers?](overview-customers-ciam.md) - Added a section regarding Azure AD B2C to the overview and emphasized tenant creation when getting started. +- [Add user attributes to token claims](how-to-add-attributes-to-token.md) - Added attributes to token claims: fixed steps for updating the app manifest. +- [Tutorial: Prepare a React single-page app (SPA) for authentication in a customer tenant](how-to-single-page-application-react-prepare-app.md) - JavaScript tutorial edits, code sample updates and fixed SPA aligning content styling. +- [Tutorial: Add sign-in and sign-out to a React single-page app (SPA) for a customer tenant](how-to-single-page-application-react-sign-in-out.md) - JavaScript tutorial edits and fixed SPA aligning content styling. +- [Tutorial: Handle authentication flows in a vanilla JavaScript single-page app](how-to-single-page-app-vanillajs-configure-authentication.md) - Fixed SPA aligning content styling. +- [Tutorial: Prepare a vanilla JavaScript single-page app for authentication in a customer tenant](how-to-single-page-app-vanillajs-prepare-app.md) - Fixed SPA aligning content styling. +- [Tutorial: Prepare your customer tenant to authenticate a vanilla JavaScript single-page app](how-to-single-page-app-vanillajs-prepare-tenant.md) - Fixed SPA aligning content styling. +- [Tutorial: Add sign-in and sign-out to a vanilla JavaScript single-page app for a customer tenant](how-to-single-page-app-vanillajs-sign-in-sign-out.md) - Fixed SPA aligning content styling. +- [Tutorial: Prepare your customer tenant to authenticate users in a React single-page app (SPA)](how-to-single-page-application-react-prepare-tenant.md) - Fixed SPA aligning content styling. +- [Tutorial: Prepare an ASP.NET web app for authentication in a customer tenant](how-to-web-app-dotnet-sign-in-prepare-app.md) - ASP.NET web app fixes.
+- [Tutorial: Prepare your customer tenant to authenticate users in an ASP.NET web app](how-to-web-app-dotnet-sign-in-prepare-tenant.md) - ASP.NET web app fixes. +- [Tutorial: Add sign-in and sign-out to an ASP.NET web application for a customer tenant](how-to-web-app-dotnet-sign-in-sign-out.md) - ASP.NET web app fixes. +- [Collect user attributes during sign-up](how-to-define-custom-attributes.md) - Added a step for the Show more attributes pane and custom attributes. +- [Manage Azure Active Directory for customers resources with Microsoft Graph](microsoft-graph-operations.md) - Combined Graph API references into one doc. |
active-directory | Concept Identity Protection Risks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/concept-identity-protection-risks.md | Real-time detections may not show up in reporting for 5 to 10 minutes. Offline d | [Additional risk detected](#additional-risk-detected-sign-in) | Real-time or Offline | Nonpremium | | [Anonymous IP address](#anonymous-ip-address) | Real-time | Nonpremium | | [Admin confirmed user compromised](#admin-confirmed-user-compromised) | Offline | Nonpremium |-| [Azure AD threat intelligence](#azure-ad-threat-intelligence-sign-in) | Offline | Nonpremium | +| [Azure AD threat intelligence](#azure-ad-threat-intelligence-sign-in) | Real-time or Offline | Nonpremium | ### User risk detections Customers without Azure AD Premium P2 licenses receive detections titled "additi #### Azure AD threat intelligence (sign-in) -**Calculated offline**. This risk detection type indicates user activity that is unusual for the user or consistent with known attack patterns. This detection is based on Microsoft's internal and external threat intelligence sources. +**Calculated in real-time or offline**. This risk detection type indicates user activity that is unusual for the user or consistent with known attack patterns. This detection is based on Microsoft's internal and external threat intelligence sources. ### Nonpremium user risk detections Location in risk detections is determined using IP address lookup. - [Policies available to mitigate risks](concept-identity-protection-policies.md) - [Investigate risk](howto-identity-protection-investigate-risk.md) - [Remediate and unblock users](howto-identity-protection-remediate-unblock.md)-- [Security overview](concept-identity-protection-security-overview.md)+- [Security overview](concept-identity-protection-security-overview.md) |
active-directory | Howto Identity Protection Investigate Risk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-identity-protection-investigate-risk.md | If more information is shown for the detection: 1. Protocol 1. Ranges of IPs/ASNs 1. Time and frequency of sign-ins+ 1. This detection was triggered by a real-time rule + 1. Validate that no other users in your directory are targets of the same attack. You can identify the attack by the TI_RI_#### number assigned to the rule. + 1. Real-time rules protect against novel attacks identified by Microsoft's threat intelligence. If multiple users in your directory were targets of the same attack, investigate unusual patterns in other attributes of the sign-in. ## Investigate risk with Microsoft 365 Defender |
active-directory | App Management Powershell Samples | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/app-management-powershell-samples.md | -The following table includes links to PowerShell script examples for Azure AD Application Management. These samples require either: +The following table includes links to PowerShell script examples for Azure AD Application Management. -- The [AzureAD V2 PowerShell for Graph module](/powershell/azure/active-directory/install-adv2) or,-- The [AzureAD V2 PowerShell for Graph module preview version](/powershell/azure/active-directory/install-adv2?view=azureadps-2.0-preview&preserve-view=true), unless otherwise noted.+These samples require the [Microsoft Graph PowerShell](/powershell/microsoftgraph/installation) SDK module. -For more information about the cmdlets used in these samples, see [Applications](/powershell/module/azuread/#applications). | Link | Description | ||| |**Application Management scripts**|| | [Export secrets and certs (app registrations)](scripts/powershell-export-all-app-registrations-secrets-and-certs.md) | Export secrets and certificates for app registrations in Azure Active Directory tenant. | | [Export secrets and certs (enterprise apps)](scripts/powershell-export-all-enterprise-apps-secrets-and-certs.md) | Export secrets and certificates for enterprise apps in Azure Active Directory tenant. |-| [Export expiring secrets and certs](scripts/powershell-export-apps-with-expiring-secrets.md) | Export App Registrations with expiring secrets and certificates and their Owners in Azure Active Directory tenant. | +| [Export expiring secrets and certs (app registrations)](scripts/powershell-export-apps-with-expiring-secrets.md) | Export app registrations with expiring secrets and certificates and their owners in Azure Active Directory tenant. | +| [Export expiring secrets and certs (enterprise apps)](scripts/powershell-export-enterprise-apps-with-expiring-secrets.md) | Export enterprise apps with expiring secrets and certificates and their owners in Azure Active Directory tenant. | | [Export secrets and certs expiring beyond required date](scripts/powershell-export-apps-with-secrets-beyond-required.md) | Export App Registrations with secrets and certificates expiring beyond the required date in Azure Active Directory tenant. This uses the non-interactive Client_Credentials OAuth flow. | |
active-directory | Powershell Export All App Registrations Secrets And Certs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/scripts/powershell-export-all-app-registrations-secrets-and-certs.md | This PowerShell script example exports all secrets and certificates for the spec [!INCLUDE [quickstarts-free-trial-note](../../../../includes/quickstarts-free-trial-note.md)] -This sample requires the [AzureAD V2 PowerShell for Graph module](/powershell/azure/active-directory/install-adv2) (AzureAD) or the [AzureAD V2 PowerShell for Graph module preview version](/powershell/azure/active-directory/install-adv2?view=azureadps-2.0-preview&preserve-view=true) (AzureADPreview). +This sample requires the [Microsoft Graph PowerShell](/powershell/microsoftgraph/installation) SDK module. ## Sample script This sample requires the [AzureAD V2 PowerShell for Graph module](/powershell/az ## Script explanation +The script can be used directly without any modifications. The admin is asked for the expiration date and whether to include already expired secrets and certificates. + The "Add-Member" command is responsible for creating the columns in the CSV file. You can set the "$Path" variable directly in PowerShell to a CSV file path if you'd prefer the export to be non-interactive. | Command | Notes | |||-| [Get-AzureADApplication](/powershell/module/azuread/get-azureadapplication) | Retrieves an application from your directory. | -| [Get-AzureADApplicationOwner](/powershell/module/azuread/Get-AzureADApplicationOwner) | Retrieves the owners of an application from your directory. | +| [Get-MgApplication](/powershell/module/microsoft.graph.applications/get-mgapplication?view=graph-powershell-1.0&preserve-view=true) | Retrieves an application from your directory. | +| [Get-MgApplicationOwner](/powershell/module/microsoft.graph.applications/get-mgapplicationowner?view=graph-powershell-1.0&preserve-view=true) | Retrieves the owners of an application from your directory. | ## Next steps -For more information on the Azure AD PowerShell module, see [Azure AD PowerShell module overview](/powershell/azure/active-directory/overview). +For more information on the Microsoft Graph PowerShell module, see [Microsoft Graph PowerShell module overview](/powershell/microsoftgraph/installation). -For other PowerShell examples for Application Management, see [Azure AD PowerShell examples for Application Management](../app-management-powershell-samples.md). +For other PowerShell examples for Application Management, see [Microsoft Graph PowerShell examples for Application Management](../app-management-powershell-samples.md). |
active-directory | Powershell Export All Enterprise Apps Secrets And Certs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/scripts/powershell-export-all-enterprise-apps-secrets-and-certs.md | This PowerShell script example exports all secrets, certificates and owners for [!INCLUDE [quickstarts-free-trial-note](../../../../includes/quickstarts-free-trial-note.md)] -This sample requires the [AzureAD V2 PowerShell for Graph module](/powershell/azure/active-directory/install-adv2) (AzureAD) or the [AzureAD V2 PowerShell for Graph module preview version](/powershell/azure/active-directory/install-adv2?view=azureadps-2.0-preview&preserve-view=true) (AzureADPreview). +This sample requires the [Microsoft Graph PowerShell](/powershell/microsoftgraph/installation) SDK module. + ## Sample script This sample requires the [AzureAD V2 PowerShell for Graph module](/powershell/az ## Script explanation +The script can be used directly without any modifications. The admin is asked for the expiration date and whether to include already expired secrets or certificates. + The "Add-Member" command is responsible for creating the columns in the CSV file. You can modify the "$Path" variable directly in PowerShell with a CSV file path in case you'd prefer the export to be non-interactive. | Command | Notes | |||-| [Get-AzureADServicePrincipal](/powershell/module/azuread/Get-azureADServicePrincipal?view=azureadps-2.0&preserve-view=true) | Retrieves an enterprise application from your directory. | -| [Get-AzureADServicePrincipalOwner](/powershell/module/azuread/Get-AzureADServicePrincipalOwner?view=azureadps-2.0&preserve-view=true) | Retrieves the owners of an enterprise application from your directory. | -+| [Get-MgServicePrincipal](/powershell/module/microsoft.graph.applications/get-mgserviceprincipal?view=graph-powershell-1.0&preserve-view=true) | Retrieves an enterprise application from your directory. | +| [Get-MgServicePrincipalOwner](/powershell/module/microsoft.graph.applications/get-mgserviceprincipalowner?view=graph-powershell-1.0&preserve-view=true) | Retrieves the owners of an enterprise application from your directory. | ## Next steps -For more information on the Azure AD PowerShell module, see [Azure AD PowerShell module overview](/powershell/azure/active-directory/overview). +For more information on the Microsoft Graph PowerShell module, see [Microsoft Graph PowerShell module overview](/powershell/microsoftgraph/installation). -For other PowerShell examples for Application Management, see [Azure AD PowerShell examples for Application Management](../app-management-powershell-samples.md). +For other PowerShell examples for Application Management, see [Microsoft Graph PowerShell examples for Application Management](../app-management-powershell-samples.md). |
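For the enterprise-app variant, the same pattern runs over service principals. A hedged sketch (the tag filter is a common convention for Azure AD-integrated enterprise apps, not something this article states; remove it if your tenant differs):

```powershell
# Sketch: list enterprise applications (service principals) with credential counts and owners.
Get-MgServicePrincipal -All |
    Where-Object { $_.Tags -contains "WindowsAzureActiveDirectoryIntegratedApp" } |
    ForEach-Object {
        $sp = $_
        $ownerNames = (Get-MgServicePrincipalOwner -ServicePrincipalId $sp.Id -All |
            ForEach-Object { $_.AdditionalProperties.userPrincipalName } |
            Where-Object { $_ }) -join ";"
        [PSCustomObject]@{
            EnterpriseApp = $sp.DisplayName
            SecretCount   = @($sp.PasswordCredentials).Count
            CertCount     = @($sp.KeyCredentials).Count
            Owners        = $ownerNames
        }
    }
```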
active-directory | Powershell Export Apps With Expiring Secrets | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/scripts/powershell-export-apps-with-expiring-secrets.md | Title: PowerShell sample - Export apps with expiring secrets and certificates in Azure Active Directory tenant. -description: PowerShell example that exports all apps with expiring secrets and certificates for the specified apps in your Azure Active Directory tenant. + Title: PowerShell sample - Export app registrations with expiring secrets and certificates in Azure Active Directory tenant. +description: PowerShell example that exports all app registrations with expiring secrets and certificates for the specified apps in your Azure Active Directory tenant. -# Export apps with expiring secrets and certificates +# Export app registrations with expiring secrets and certificates -This PowerShell script example exports all app registrations with expiring secrets, certificates and their owners for the specified apps from your directory in a CSV file. +This PowerShell script example exports all app registrations with secrets and certificates expiring in the next X days (and, if you choose, those that have already expired), with their owners, for the specified apps from your directory in a CSV file. [!INCLUDE [quickstarts-free-trial-note](../../../../includes/quickstarts-free-trial-note.md)] -This sample requires the [AzureAD V2 PowerShell for Graph module](/powershell/azure/active-directory/install-adv2) (AzureAD) or the [AzureAD V2 PowerShell for Graph module preview version](/powershell/azure/active-directory/install-adv2?view=azureadps-2.0-preview&preserve-view=true) (AzureADPreview). +This sample requires the [Microsoft Graph PowerShell](/powershell/microsoftgraph/installation) SDK module. ## Sample script You can modify the "$Path" variable directly in PowerShell, with a CSV file path | Command | Notes | |||-| [Get-AzureADApplication](/powershell/module/azuread/get-azureadapplication?view=azureadps-2.0&preserve-view=true) | Retrieves an application from your directory. | -| [Get-AzureADApplicationOwner](/powershell/module/azuread/Get-AzureADApplicationOwner?view=azureadps-2.0&preserve-view=true) | Retrieves the owners of an application from your directory. | +| [Get-MgApplication](/powershell/module/microsoft.graph.applications/get-mgapplication?view=graph-powershell-1.0&preserve-view=true) | Retrieves an application from your directory. | +| [Get-MgApplicationOwner](/powershell/module/microsoft.graph.applications/get-mgapplicationowner?view=graph-powershell-1.0&preserve-view=true) | Retrieves the owners of an application from your directory. | ## Next steps -For more information on the Azure AD PowerShell module, see [Azure AD PowerShell module overview](/powershell/azure/active-directory/overview). +For more information on the Microsoft Graph PowerShell module, see [Microsoft Graph PowerShell module overview](/powershell/microsoftgraph/installation). -For other PowerShell examples for Application Management, see [Azure AD PowerShell examples for Application Management](../app-management-powershell-samples.md). +For other PowerShell examples for Application Management, see [Microsoft Graph PowerShell examples for Application Management](../app-management-powershell-samples.md). |
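The "expiring in the next X days" behavior reduces to a date-window comparison per credential. A sketch of just that test, under the assumption that $days and $includeExpired stand in for the values the script prompts the admin for:

```powershell
# Sketch of the per-credential expiry-window test (secrets shown; certificates are analogous).
$days           = 30       # "X days" value an admin might enter
$includeExpired = $true    # whether to also report expired credentials
$now            = Get-Date
$cutoff         = $now.AddDays($days)

foreach ($app in Get-MgApplication -All) {
    foreach ($secret in $app.PasswordCredentials) {
        if (-not $secret.EndDateTime) { continue }   # no expiry recorded
        $expiringSoon = ($secret.EndDateTime -ge $now) -and ($secret.EndDateTime -le $cutoff)
        $expired      = $secret.EndDateTime -lt $now
        if ($expiringSoon -or ($includeExpired -and $expired)) {
            "{0}: secret {1} expires {2}" -f $app.DisplayName, $secret.KeyId, $secret.EndDateTime
        }
    }
}
```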
active-directory | Powershell Export Apps With Secrets Beyond Required | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/scripts/powershell-export-apps-with-secrets-beyond-required.md | |
active-directory | Powershell Export Enterprise Apps With Expiring Secrets | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/scripts/powershell-export-enterprise-apps-with-expiring-secrets.md | + + Title: PowerShell sample - Export enterprise apps with expiring secrets and certificates in Azure Active Directory tenant. +description: PowerShell example that exports all enterprise apps with expiring secrets and certificates for the specified enterprise apps in your Azure Active Directory tenant. +++++++ Last updated : 07/11/2023+++++# Export enterprise apps with expiring secrets and certificates ++This PowerShell script example exports all enterprise applications with secrets and certificates expiring in the next X days (and, if you choose, those that have already expired), with their owners, for the specified enterprise apps from your directory in a CSV file. +++This sample requires the [Microsoft Graph PowerShell](/powershell/microsoftgraph/installation) SDK module. ++## Sample script ++[!code-azurepowershell[main](~/powershell_scripts/application-management/export-enterprise-apps-with-expiring-secrets.ps1 "Exports all apps with expiring secrets and certificates for the specified apps in your directory.")] ++## Script explanation ++The script can be used directly without any modifications. The admin is asked for the expiration date and whether to include already expired secrets or certificates. ++The "Add-Member" command is responsible for creating the columns in the CSV file. +The "New-Object" command creates an object to be used for the columns in the CSV file export. +You can modify the "$Path" variable directly in PowerShell with a CSV file path in case you'd prefer the export to be non-interactive. ++| Command | Notes | +||| +| [Get-MgServicePrincipal](/powershell/module/microsoft.graph.applications/get-mgserviceprincipal?view=graph-powershell-1.0&preserve-view=true) | Retrieves an enterprise application from your directory. | +| [Get-MgServicePrincipalOwner](/powershell/module/microsoft.graph.applications/get-mgserviceprincipalowner?view=graph-powershell-1.0&preserve-view=true) | Retrieves the owners of an enterprise application from your directory. | ++## Next steps ++For more information on the Microsoft Graph PowerShell module, see [Microsoft Graph PowerShell module overview](/powershell/microsoftgraph/installation). ++For other PowerShell examples for Application Management, see [Microsoft Graph PowerShell examples for Application Management](../app-management-powershell-samples.md). |
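The New-Object and Add-Member pattern the explanation mentions looks roughly like this (column names and the $Path value here are illustrative, not the script's exact ones):

```powershell
# Sketch of the CSV row-building pattern: New-Object creates an empty row object,
# and each Add-Member call defines one column. $Path is a placeholder.
$Path = "C:\temp\enterprise-app-expiring-creds.csv"

$row = New-Object -TypeName PSObject
$row | Add-Member -MemberType NoteProperty -Name "ApplicationName" -Value "Contoso API"
$row | Add-Member -MemberType NoteProperty -Name "ExpiryDate"      -Value (Get-Date).AddDays(14)
$row | Add-Member -MemberType NoteProperty -Name "Owners"          -Value "admin@contoso.com"

# Appending rows as they're built keeps memory use flat in large tenants.
$row | Export-Csv -Path $Path -NoTypeInformation -Append
```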
active-directory | Overview Recommendations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/overview-recommendations.md | The recommendations listed in the following table are currently available in pub |- |- |- |- | | [Convert per-user MFA to Conditional Access MFA](recommendation-turn-off-per-user-mfa.md) | Users | All licenses | Generally available | | [Migrate applications from AD FS to Azure AD](recommendation-migrate-apps-from-adfs-to-azure-ad.md) | Applications | All licenses | Generally available |-| [Migrate from ADAL to MSAL](recommendation-migrate-from-adal-to-msal.md) | Applications | All licenses | Generally available* | +| [Migrate from ADAL to MSAL](recommendation-migrate-from-adal-to-msal.md) | Applications | All licenses | Generally available | | [Migrate to Microsoft Authenticator](recommendation-migrate-to-authenticator.md) | Users | All licenses | Preview | | [Minimize MFA prompts from known devices](recommendation-mfa-from-known-devices.md) | Users | All licenses | Generally available | | [Remove unused applications](recommendation-remove-unused-apps.md) | Applications | Azure AD Premium P2 | Preview | The recommendations listed in the following table are currently available in pub | [Renew expiring application credentials](recommendation-renew-expiring-application-credential.md) | Applications | Azure AD Premium P2 | Preview | | [Renew expiring service principal credentials](recommendation-renew-expiring-service-principal-credential.md) | Applications | Azure AD Premium P2 | Preview | -*The Migrate from ADAL to MSAL recommendation is generally available, but rolling out in phases. If you don't see this recommendation in your tenant, check back later. - Azure AD only displays the recommendations that apply to your tenant, so you may not see all supported recommendations listed. ## Next steps |
active-directory | Headspace Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/headspace-tutorial.md | Follow these steps to enable Azure AD SSO in the Azure portal. `https://headspace.com/sso-login` > [!Note]- > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [Headspace Client support team](mailto:ecosystem-integration-squad@headspace.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. + > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [Headspace Client support team](mailto:employer-solution-squad@headspace.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. 1. The Headspace application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes. In this section, you'll enable B.Simon to use Azure single sign-on by granting a ## Configure Headspace SSO -To configure single sign-on on **Headspace** side, you need to send the downloaded **Certificate (PEM)** and appropriate copied URLs from Azure portal to [Headspace support team](mailto:ecosystem-integration-squad@headspace.com). They set this setting to have the SAML SSO connection set properly on both sides. +To configure single sign-on on the **Headspace** side, you need to send the downloaded **Certificate (PEM)** and the appropriate copied URLs from the Azure portal to the [Headspace support team](mailto:employer-solution-squad@headspace.com). They use these values to set up the SAML SSO connection properly on both sides. ### Create Headspace test user In this section, a user called B.Simon is created in Headspace. Headspace suppor In this section, you test your Azure AD single sign-on configuration with the following options. -* Click on **Test this application** in Azure portal. This will redirect to Headspace Sign-on URL where you can initiate the login flow. +* Click on **Test this application** in the Azure portal. This redirects to the Headspace Sign on URL, where you can initiate the login flow. -* Go to Headspace Sign-on URL directly and initiate the login flow from there. +* Go to the Headspace Sign on URL directly and initiate the login flow from there. -* You can use Microsoft My Apps. When you click the Headspace tile in the My Apps, this will redirect to Headspace Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md). +* You can use Microsoft My Apps. When you click the Headspace tile in My Apps, you're redirected to the Headspace Sign on URL. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md). ## Next steps |
aks | Azure Cni Overlay | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-cni-overlay.md | az aks create -n $clusterName -g $resourceGroup --location $location --network-p > - Doesn't use the dynamic pod IP allocation feature. > - Doesn't have network policies enabled. > - Doesn't use any Windows node pools with docker as the container runtime.+ +> [!WARNING] +> Prior to Windows OS Build 20348.1668, there was a limitation around Windows Overlay pods incorrectly SNATing packets from host network pods, which had a more detrimental effect for clusters upgrading to Overlay. To avoid this issue, **use Windows OS Build greater than or equal to 20348.1668**. ++> [!WARNING] +> If you use a custom azure-ip-masq-agent config to include additional IP ranges that shouldn't SNAT packets from pods, upgrading to Azure CNI Overlay may break connectivity to these ranges. Pod IPs from the overlay space won't be reachable by anything outside the cluster nodes. +> Additionally, for sufficiently old clusters there may be a ConfigMap left over from a previous version of azure-ip-masq-agent. If this ConfigMap, named `azure-ip-masq-agent-config`, exists and isn't intentionally in place, it should be deleted before running the update command. +> If you're not using a custom ip-masq-agent config, only the `azure-ip-masq-agent-config-reconciled` ConfigMap should exist with respect to Azure ip-masq-agent ConfigMaps, and it's updated automatically during the upgrade process. The upgrade process triggers each node pool to be re-imaged simultaneously. Upgrading each node pool separately to Overlay isn't supported. Any disruptions to cluster networking are similar to a node image upgrade or Kubernetes version upgrade where each node in a node pool is re-imaged. az aks update --name $clusterName \ The `--pod-cidr` parameter is required when upgrading from legacy CNI because the pods need to get IPs from a new overlay space, which doesn't overlap with the existing node subnet. The pod CIDR also can't overlap with any VNet address of the node pools. For example, if your VNet address is *10.0.0.0/8*, and your nodes are in the subnet *10.240.0.0/16*, the `--pod-cidr` can't overlap with *10.0.0.0/8* or the existing service CIDR on the cluster. -> [!WARNING] -> Prior to Windows OS Build 20348.1668, there was a limitation around Windows Overlay pods incorrectly SNATing packets from host network pods, which had a more detrimental effect for clusters upgrading to Overlay. To avoid this issue, **use Windows OS Build 20348.1668**. - ## Install the aks-preview Azure CLI extension - Windows only [!INCLUDE [preview features callout](includes/preview/preview-callout.md)] |
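The ConfigMap check described in the warnings can be scripted up front. A sketch, assuming the agent's usual kube-system namespace:

```powershell
# Check for a leftover custom ip-masq-agent ConfigMap before upgrading to Overlay
# (kube-system is the agent's usual namespace; adjust if your cluster differs).
kubectl get configmap azure-ip-masq-agent-config --namespace kube-system

# If it exists and isn't intentionally in place, remove it so that only the
# reconciled ConfigMap (azure-ip-masq-agent-config-reconciled) remains.
kubectl delete configmap azure-ip-masq-agent-config --namespace kube-system
```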
aks | Concepts Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-storage.md | For clusters using the [Container Storage Interface (CSI) drivers][csi-storage-d ||| | `managed-csi` | Uses Azure StandardSSD locally redundant storage (LRS) to create a Managed Disk. The reclaim policy ensures that the underlying Azure Disk is deleted when the persistent volume that used it is deleted. The storage class also configures the persistent volumes to be expandable; you just need to edit the persistent volume claim with the new size. | | `managed-csi-premium` | Uses Azure Premium locally redundant storage (LRS) to create a Managed Disk. The reclaim policy again ensures that the underlying Azure Disk is deleted when the persistent volume that used it is deleted. Similarly, this storage class allows for persistent volumes to be expanded. |-| `azurefile-csi` | Uses Azure Standard storage to create an Azure file share. The reclaim policy ensures that the underlying Azure file share is deleted when the persistent volume that used it -s deleted. | +| `azurefile-csi` | Uses Azure Standard storage to create an Azure file share. The reclaim policy ensures that the underlying Azure file share is deleted when the persistent volume that used it is deleted. | | `azurefile-csi-premium` | Uses Azure Premium storage to create an Azure file share. The reclaim policy ensures that the underlying Azure file share is deleted when the persistent volume that used it is deleted.| | `azureblob-nfs-premium` | Uses Azure Premium storage to create an Azure Blob storage container and connect using the NFS v3 protocol. The reclaim policy ensures that the underlying Azure Blob storage container is deleted when the persistent volume that used it is deleted. | | `azureblob-fuse-premium` | Uses Azure Premium storage to create an Azure Blob storage container and connect using BlobFuse. The reclaim policy ensures that the underlying Azure Blob storage container is deleted when the persistent volume that used it is deleted. | For more information on core Kubernetes and AKS concepts, see the following arti [azure-blob-csi]: azure-blob-csi.md [general-purpose-machine-sizes]: ../virtual-machines/sizes-general.md [azure-files-azure-netapp-comparison]: ../storage/files/storage-files-netapp-comparison.md-[azure-disk-customer-managed-key]: azure-disk-customer-managed-keys.md +[azure-disk-customer-managed-key]: azure-disk-customer-managed-keys.md |
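To illustrate the built-in classes in the table, a claim against `managed-csi` might look like the following sketch (claim name and sizes are placeholders), applied here from a PowerShell here-string:

```powershell
# Sketch: create a PVC backed by the built-in managed-csi storage class.
@"
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-managed-disk
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: managed-csi
  resources:
    requests:
      storage: 10Gi
"@ | kubectl apply -f -

# Because the class is expandable, growing the volume later is just an edit to the
# claim's storage request (for example, kubectl edit pvc demo-managed-disk).
```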
api-management | Api Management Howto Policies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-policies.md | The policy XML configuration is divided into `inbound`, `backend`, `outbound`, a </policies> ``` -For policy XML examples, see [API Management policy samples](./policies/index.md). +For policy XML examples, see [API Management policy snippets repo](https://github.com/Azure/api-management-policy-snippets). ### Error handling |
api-management | Api Management Policies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-policies.md | For more information about working with policies, see: + [Tutorial: Transform and protect your API](transform-api.md) + [Set or edit policies](set-edit-policies.md)-+ [Policy samples](./policies/index.md) ++ [Policy snippets repo](https://github.com/Azure/api-management-policy-snippets) |
api-management | Add Correlation Id | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policies/add-correlation-id.md | - Title: Sample API management policy - Add a header containing correlation id- -description: Azure API management policy sample - Demonstrates how to add a header containing a correlation id to the inbound request. ------- Previously updated : 10/13/2017----# Add a header containing a correlation id --This article shows an Azure API management policy sample that demonstrates how to add a header containing a correlation id to the inbound request. To set or edit a policy code, follow the steps described in [Set or edit a policy](../set-edit-policies.md). To see other examples, see [policy samples](/azure/api-management/policies). --## Policy --Paste the code into the **inbound** block. --[!code-xml[Main](../../../api-management-policy-samples/examples/Add correlation id to inbound request.policy.xml)] --## Next steps --Learn more about API Management policies: --+ [Transformation policies](../api-management-transformation-policies.md) -+ [Policy samples](/azure/api-management/policies) |
api-management | Authorize Request Based On Jwt Claims | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policies/authorize-request-based-on-jwt-claims.md | - Title: Sample API management policy - Authorize access based on JWT claims- -description: Azure API management policy sample - Demonstrates how to authorize access to specific HTTP methods on an API based on JWT claims. ------- Previously updated : 10/13/2017----# Authorize access based on JWT claims --This article shows an Azure API management policy sample that demonstrates how to authorize access to specific HTTP methods on an API based on JWT claims. To set or edit a policy code, follow the steps described in [Set or edit a policy](../set-edit-policies.md). To see other examples, see [policy samples](index.md). --## Policy --Paste the code into the **inbound** block. --[!code-xml[Main](../../../api-management-policy-samples/examples/Pre-authorize requests based on HTTP method with validate-jwt.policy.xml)] --## Next steps --Learn more about API Management policies: --+ [Transformation policies](../api-management-transformation-policies.md) -+ [Policy samples](/azure/api-management/policies) |
api-management | Authorize Request Using External Authorizer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policies/authorize-request-using-external-authorizer.md | - Title: Sample API management policy - Authorize request using external authorizer- -description: Azure API management policy sample - Demonstrates how authorize requests using external authorizer encapsulating a custom or legacy authentication/authorization logic. ------- Previously updated : 06/06/2018----# Authorize requests using external authorizer --This article shows an Azure API management policy sample that demonstrates how to secure API access by using an external authorizer encapsulating custom authentication/authorization logic. To set or edit a policy code, follow the steps described in [Set or edit a policy](../set-edit-policies.md). To see other examples, see [policy samples](/azure/api-management/policies). --## Policy --Paste the code into the **inbound** block. --[!code-xml[Main](../../../api-management-policy-samples/examples/Authorize requests using external authorizer.policy.xml)] --## Next steps --Learn more about API Management policies: --+ [Access restrictions policies](../api-management-access-restriction-policies.md) -+ [Policy samples](/azure/api-management/policies) |
api-management | Cache Response | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policies/cache-response.md | - Title: Sample API management policy - Add capabilities to backend service- -description: Azure API management policy sample - Demonstrates how to add capabilities to a backend service. For example, accept a name of the place instead of latitude and longitude in a weather forecast API. ------- Previously updated : 10/13/2017----# Add capabilities to a backend service --This article shows an Azure API management policy sample that demonstrates how to add capabilities to a backend service. For example, accept a name of the place instead of latitude and longitude in a weather forecast API. To set or edit a policy code, follow the steps described in [Set or edit a policy](../set-edit-policies.md). To see other examples, see [policy samples](/azure/api-management/policies). --## Policy --Paste the code into the **inbound** block. --[!code-xml[Main](../../../api-management-policy-samples/examples/Call out to an HTTP endpoint and cache the response.policy.xml)] --## Next steps --Learn more about API Management policies: --+ [Transformation policies](../api-management-transformation-policies.md) -+ [Policy samples](/azure/api-management/policies) |
api-management | Filter Ip Addresses When Using Appgw | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policies/filter-ip-addresses-when-using-appgw.md | - Title: Sample API management policy - Filter on IP Address when using Application Gateway- -description: Azure API management policy sample - Demonstrates how to filter on request IP address when using an Application Gateway. ------ Previously updated : 01/13/2020-----# Filter on request IP Address when using an Application Gateway --This article shows an Azure API management policy sample that demonstrates how filter on the request IP address when the API Management instance is accessed through an Application Gateway or other intermediary. To set or edit a policy code, follow the steps described in [Set or edit a policy](../set-edit-policies.md). To see other examples, see [policy samples](/azure/api-management/policies). --## Policy --Paste the code into the **inbound** block. --[!code-xml[Main](../../../api-management-policy-samples/examples/Filter on IP Address when using Application Gateway.policy.xml)] --## Next steps --Learn more about API Management policies: --+ [Access restrictions policies](../api-management-access-restriction-policies.md) -+ [Policy samples](/azure/api-management/policies) |
api-management | Filter Response Content | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policies/filter-response-content.md | - Title: Azure API management policy sample - Filter response content | Microsoft Docs -description: Azure API management policy sample - Demonstrates how to filter data elements from the response payload based on the product associated with the request. ------- Previously updated : 10/13/2017----# Filter response content --This article shows an Azure API management policy sample that demonstrates how to filter data elements from the response payload based on the product associated with the request. To set or edit a policy code, follow the steps described in [Set or edit a policy](../set-edit-policies.md). To see other examples, see [policy samples](/azure/api-management/policies). --## Policy --Paste the code into the **outbound** block. --[!code-xml[Main](../../../api-management-policy-samples/examples/Filter response content based on product name.policy.xml)] --## Next steps --Learn more about API Management policies: --+ [Transformation policies](../api-management-transformation-policies.md) -+ [Policy samples](/azure/api-management/policies) |
api-management | Generate Shared Access Signature | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policies/generate-shared-access-signature.md | - Title: Sample API management policy - Generate Shared Access Signature- -description: Azure API management policy sample - Demonstrates how to generate Shared Access Signature using expressions and forward the request to Azure storage with rewrite-uri policy.. ------- Previously updated : 10/13/2017----# Generate Shared Access Signature --This article shows an Azure API management policy sample that demonstrates how to generate [Shared Access Signature](../../storage/common/storage-sas-overview.md) using expressions and forward the request to Azure storage with rewrite-uri policy. To set or edit a policy code, follow the steps described in [Set or edit a policy](../set-edit-policies.md). To see other examples, see [policy samples](/azure/api-management/policies). --## Policy --Paste the code into the **inbound** block. --[!code-xml[Main](../../../api-management-policy-samples/examples/Generate Shared Access Signature and forward request to Azure storage.policy.xml)] --## Next steps --Learn more about API Management policies: --+ [Transformation policies](../api-management-transformation-policies.md) -+ [Policy samples](/azure/api-management/policies) |
api-management | Get X Csrf Token From Sap Gateway | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policies/get-x-csrf-token-from-sap-gateway.md | - Title: Azure API management policy sample - Implement X-CSRF pattern | Microsoft Docs -description: Azure API management policy sample - Demonstrates how to implement X-CSRF pattern used by many APIs. This example is specific to SAP Gateway. ------- Previously updated : 10/13/2017----# Implement X-CSRF pattern --This article shows an Azure API management policy sample that demonstrates how to implement X-CSRF pattern used by many APIs. This example is specific to SAP Gateway. To set or edit a policy code, follow the steps described in [Set or edit a policy](../set-edit-policies.md). To see other examples, see [policy samples](/azure/api-management/policies). --## Policy --Paste the code into the **inbound** block. --[!code-xml[Main](../../../api-management-policy-samples/examples/Get X-CSRF token from SAP gateway using send request.policy.xml)] --## Next steps --Learn more about API Management policies: --+ [Transformation policies](../api-management-transformation-policies.md) -+ [Policy samples](/azure/api-management/policies) |
api-management | Index | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policies/index.md | - Title: Azure API Management policy samples | Microsoft Docs -description: Learn about the policies available for use in Azure API Management. ------- Previously updated : 05/15/2023-----# API Management policy samples --[Policies](../api-management-howto-policies.md) are a powerful capability of the system that allows the publisher to change the behavior of the API through configuration. Policies are a collection of statements that are executed sequentially on the request or response of an API. The following table includes links to samples and gives a brief description of each sample. --| Inbound policies | Description | -| - | -- | -| [Add a Forwarded header to allow the backend API to construct proper URLs](./set-header-to-enable-backend-to-construct-urls.md) | Demonstrates how to add a Forwarded header in the inbound request to allow the backend API to construct proper URLs. | -| [Add a header containing a correlation id](./add-correlation-id.md) | Demonstrates how to add a header containing a correlation ID to the inbound request. | -| [Add capabilities to a backend service and cache the response](./cache-response.md) | Shows how to add capabilities to a backend service. For example, accept a name of the place instead of latitude and longitude in a weather forecast API. | -| [Authorize access based on JWT claims](./authorize-request-based-on-jwt-claims.md) | Shows how to authorize access to specific HTTP methods on an API based on JWT claims. | -| [Authorize requests using external authorizer](./authorize-request-using-external-authorizer.md) | Shows how to use external authorizer for securing API access. | -| [Filter IP Addresses when using an Application Gateway](./filter-ip-addresses-when-using-appgw.md) | Shows how to IP filter in policies when the API Management instance is accessed via an Application Gateway -| [Generate Shared Access Signature and forward request to Azure storage](./generate-shared-access-signature.md) | Shows how to generate [Shared Access Signature](../../storage/common/storage-sas-overview.md) using expressions and forward the request to Azure storage with rewrite-uri policy. | -| [Get OAuth2 access token from Azure AD and forward it to the backend](./use-oauth2-for-authorization.md) | Provides an example of using OAuth2 for authorization between the gateway and a backend. It shows how to obtain an access token from Azure AD and forward it to the backend. | -| [Get X-CSRF token from SAP gateway using send request policy](./get-x-csrf-token-from-sap-gateway.md) | Shows how to implement X-CSRF pattern used by many APIs. This example is specific to SAP Gateway. | -| [Route the request based on the size of its body](./route-requests-based-on-size.md) | Demonstrates how to route requests based on the size of their bodies. | -| [Send request context information to the backend service](./send-request-context-info-to-backend-service.md) | Shows how to send some context information to the backend service for logging or processing. | -| **Outbound policies** | **Description** | -| [Filter response content](./filter-response-content.md) | Demonstrates how to filter data elements from the response payload based on the product associated with the request. | -| [Set response cache duration](./set-cache-duration.md) | Demonstrates how to set response cache duration using maxAge value in Cache-Control header sent by the backend. 
| -| **On-error policies** | **Description** | -| [Log errors to Stackify](./log-errors-to-stackify.md) | Shows how to add an error logging policy to send errors to Stackify for logging. | |
api-management | Log Errors To Stackify | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policies/log-errors-to-stackify.md | - Title: Sample API management policy - Send errors to Stackify for logging- -description: Azure API management policy sample - Demonstrates how to add an error logging policy to send errors to Stackify for logging.. ------- Previously updated : 10/13/2017----# Send errors to Stackify for logging --This article shows an Azure API management policy sample that demonstrates how to add an error logging policy to send errors to Stackify for logging. To set or edit a policy code, follow the steps described in [Set or edit a policy](../set-edit-policies.md). To see other examples, see [policy samples](/azure/api-management/policies). --## Policy --Paste the code into the **on-error** block. --[!code-xml[Main](../../../api-management-policy-samples/examples/Log errors to Stackify.policy.xml)] --## Next steps --Learn more about API Management policies: --+ [Transformation policies](../api-management-transformation-policies.md) -+ [Policy samples](/azure/api-management/policies) |
api-management | Route Requests Based On Size | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policies/route-requests-based-on-size.md | - Title: Sample API management policy - Route request based on size of message body- -description: Azure API management policy sample - Demonstrates how to route requests based on the size of their bodies. ------- Previously updated : 10/13/2017----# Route the request based on the size of its body --This article shows an Azure API management policy sample that demonstrates how to route requests based on the size of their bodies. To set or edit a policy code, follow the steps described in [Set or edit a policy](../set-edit-policies.md). To see other examples, see [policy samples](/azure/api-management/policies). --## Policy --Paste the code into the **inbound** block. --[!code-xml[Main](../../../api-management-policy-samples/examples/Route requests based on size.policy.xml)] --## Next steps --Learn more about API Management policies: --+ [Transformation policies](../api-management-transformation-policies.md) -+ [Policy samples](/azure/api-management/policies) |
api-management | Send Request Context Info To Backend Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policies/send-request-context-info-to-backend-service.md | - Title: Sample API management policy - Send request context information to backend service- -description: Azure API management policy sample - Demonstrates how to send request context information to the backend service. ------- Previously updated : 10/13/2017----# Send request context information to the backend service --This article shows an Azure API management policy sample that demonstrates how to send request context information to the backend service. To set or edit a policy code, follow the steps described in [Set or edit a policy](../set-edit-policies.md). To see other examples, see [policy samples](/azure/api-management/policies). --## Policy --Paste the code into the **inbound** block. --[!code-xml[Main](../../../api-management-policy-samples/examples/Send request context information to the backend service.policy.xml)] --## Next steps --Learn more about API Management policies: --+ [Transformation policies](../api-management-transformation-policies.md) -+ [Policy samples](/azure/api-management/policies) |
api-management | Set Cache Duration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policies/set-cache-duration.md | - Title: Sample API management policy - Set response cache duration- -description: Azure API management policy sample - Demonstrates how to set response cache duration using maxAge value in Cache-Control header sent by the backend.. ------- Previously updated : 10/13/2017----# Set response cache duration --This article shows an Azure API management policy sample that demonstrates how to set response cache duration using maxAge value in Cache-Control header sent by the backend. To set or edit a policy code, follow the steps described in [Set or edit a policy](../set-edit-policies.md). To see other examples, see [policy samples](/azure/api-management/policies). --## Policy --Paste the code into the **inbound** block. --[!code-xml[Main](../../../api-management-policy-samples/examples/Set cache duration using response cache control header.policy.xml)] --## Next steps --Learn more about API Management policies: --+ [Transformation policies](../api-management-transformation-policies.md) -+ [Policy samples](/azure/api-management/policies) |
api-management | Set Header To Enable Backend To Construct Urls | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policies/set-header-to-enable-backend-to-construct-urls.md | - Title: Azure API management policy sample - Add a Forwarded header | Microsoft Docs -description: Azure API management policy sample - Demonstrates how to add a Forwarded header in the inbound request to allow the backend API to construct proper URLs. ------- Previously updated : 10/13/2017----# Add a Forwarded header --This article shows an Azure API management policy sample that demonstrates how to add a Forwarded header in the inbound request to allow the backend API to construct proper URLs. To set or edit a policy code, follow the steps described in [Set or edit a policy](../set-edit-policies.md). To see other examples, see [policy samples](/azure/api-management/policies). --## Code --Paste the code into the **inbound** block. --[!code-xml[Main](../../../api-management-policy-samples/examples/Forward gateway hostname to backend for generating correct urls in responses.policy.xml)] --## Next steps --Learn more about API Management policies: --+ [Transformation policies](../api-management-transformation-policies.md) -+ [Policy samples](/azure/api-management/policies) |
api-management | Use Oauth2 For Authorization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policies/use-oauth2-for-authorization.md | - Title: Sample Azure API management policy - Use OAuth2 for authorization between gateway and backend- -description: Azure API management policy sample - Demonstrates how to use OAuth2 for authorization between the gateway and a backend. It shows how to obtain an access token from Azure AD and forward it to the backend. ------- Previously updated : 03/14/2023----# Use OAuth2 for authorization between the gateway and a backend - -This article shows an Azure API management policy sample that demonstrates how to use OAuth2 for authorization between the gateway and a backend. It shows how to obtain an access token from Azure Active Directory and forward it to the backend. --* For a more detailed example policy that not only acquires an access token, but also caches and renews it upon expiration, see [this blog](https://techcommunity.microsoft.com/t5/azure-paas-blog/api-management-policy-for-access-token-acquisition-caching-and/ba-p/2191623). -* API Management [authorizations](../authorizations-overview.md) can also be used to simplify the process of managing authorization tokens to OAuth 2.0 backend services. --To set or edit a policy code, follow the steps described in [Set or edit a policy](../set-edit-policies.md). To see other examples, see [policy samples](/azure/api-management/policies). --The following script uses named values that appear in {{property_name}}. To learn about named values and how to use them in API Management policies, see [this](../api-management-howto-properties.md) topic. - -## Policy --Paste the code into the **inbound** block. --[!code-xml[Main](../../../api-management-policy-samples/examples/Get OAuth2 access token from AAD and forward it to the backend.policy.xml)] - -## Next steps --Learn more about API Management policies: --+ [Transformation policies](../api-management-transformation-policies.md) -+ [Policy samples](/azure/api-management/policies) |
api-management | Policy Fragments | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policy-fragments.md | For more information about working with policies, see: + [Tutorial: Transform and protect APIs](transform-api.md) + [Set or edit policies](set-edit-policies.md) + [Policy reference](./api-management-policies.md) for a full list of policy statements-+ [Policy samples](./policies/index.md) ++ [Policy snippets repo](https://github.com/Azure/api-management-policy-snippets) |
api-management | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policy-reference.md | -[Azure Policy built-in definitions](../governance/policy/samples/built-in-policies.md). For API -Management policy samples, see [API Management - Policy index](./policies/index.md). +[Azure Policy built-in definitions](../governance/policy/samples/built-in-policies.md). If you're looking for policies you can use to modify API behavior in API Management, see [API Management policy reference](api-management-policies.md). The name of each built-in policy definition links to the policy definition in the Azure portal. Use the link in the **Version** column to view the source on the |
api-management | Set Edit Policies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-edit-policies.md | More information about policies: * [Policy overview](api-management-howto-policies.md) * [Policy reference](api-management-policies.md) for a full list of policy statements and their settings-* [Policy samples](./policies/index.md) +* [Policy snippets repo](https://github.com/Azure/api-management-policy-snippets) ## Prerequisites For more information about working with policies, see: + [Tutorial: Transform and protect APIs](transform-api.md) + [Set or edit policies](set-edit-policies.md) + [Policy reference](./api-management-policies.md) for a full list of policy statements and their settings-+ [Policy samples](./policies/index.md) ++ [Policy snippets repo](https://github.com/Azure/api-management-policy-snippets) |
app-service | Configure Ssl Certificate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-ssl-certificate.md | Public certificates are supported in the *.cer* format. | Setting | Description | |-|-|- | **CER certificate file** | Select your .pfx file. | + | **CER certificate file** | Select your .cer file. | | **Certificate friendly name** | The certificate name that will be shown in your web app. | 1. When you're done, select **Add**. |
app-service | Webjobs Create | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/webjobs-create.md | Title: Run background tasks with WebJobs -description: Learn how to use WebJobs to run background tasks in Azure App Service. Choose from a variety of script formats and run them with CRON expressions. -+description: Learn how to use WebJobs to run background tasks in Azure App Service. Choose from various script formats and run them with CRON expressions. ms.assetid: af01771e-54eb-4aea-af5f-f883ff39572b Previously updated : 6/25/2021-- Last updated : 7/30/2023++ #Customer intent: As a web developer, I want to leverage background tasks to keep my application running smoothly. adobe-target: true adobe-target-content: ./webjobs-create-ieux Deploy WebJobs by using the [Azure portal](https://portal.azure.com) to upload an executable or script. You can run background tasks in the Azure App Service. -If instead of the Azure App Service you are using Visual Studio 2019 to develop and deploy WebJobs, see [Deploy WebJobs using Visual Studio](webjobs-dotnet-deploy-vs.md). +If you're using Visual Studio instead of the Azure App Service to develop and deploy WebJobs, see [Deploy WebJobs using Visual Studio](webjobs-dotnet-deploy-vs.md). ## Overview-WebJobs is a feature of [Azure App Service](index.yml) that enables you to run a program or script in the same instance as a web app, API app, or mobile app. There is no additional cost to use WebJobs. +WebJobs is a feature of [Azure App Service](index.yml) that enables you to run a program or script in the same instance as a web app, API app, or mobile app. There's no extra cost to use WebJobs. -You can use the Azure WebJobs SDK with WebJobs to simplify many programming tasks. WebJobs is not yet supported for App Service on Linux. For more information, see [What is the WebJobs SDK](https://github.com/Azure/azure-webjobs-sdk/wiki). +You can use the Azure WebJobs SDK with WebJobs to simplify many programming tasks. WebJobs aren't supported for App Service on Linux yet. For more information, see [What is the WebJobs SDK](https://github.com/Azure/azure-webjobs-sdk/wiki). Azure Functions provides another way to run programs and scripts. For a comparison between WebJobs and Functions, see [Choose between Flow, Logic Apps, Functions, and WebJobs](../azure-functions/functions-compare-logic-apps-ms-flow-webjobs.md). when making changes in one don't forget the other two. 1. In the [Azure portal](https://portal.azure.com), go to the **App Service** page of your App Service web app, API app, or mobile app. -1. In the left pane of your app's **App Service** page, search for and select **WebJobs**. +1. From the left pane, select **WebJobs**, then select **Add**. -  + :::image type="content" source="media/webjobs-create/add-webjob.png" alt-text="Screenshot that shows how to add a WebJob in an App Service app in the portal."::: -1. On the **WebJobs** page, select **Add**. -  -1. Fill in the **Add WebJob** settings as specified in the table. +1. Fill in the **Add WebJob** settings as specified in the table, then select **Create Webjob**. -  + :::image type="content" source="media/webjobs-create/configure-new-continuous-webjob.png" alt-text="Screenshot that shows how to configure a multi-instance continuous WebJob for an App Service app."::: | Setting | Sample value | Description | | | -- | |- | **Name** | myContinuousWebJob | A name that is unique within an App Service app. 
Must start with a letter or a number and cannot contain special characters other than "-" and "_". | - | **File Upload** | ConsoleApp.zip | A *.zip* file that contains your executable or script file as well as any supporting files needed to run the program or script. The supported executable or script file types are listed in the [Supported file types](#acceptablefiles) section. | + | **Name** | myContinuousWebJob | A name that is unique within an App Service app. Must start with a letter or a number and must not contain special characters other than "-" and "_". | + | **File Upload** | ConsoleApp.zip | A *.zip* file that contains your executable or script file and any supporting files needed to run the program or script. The supported executable or script file types are listed in the [Supported file types](#acceptablefiles) section. | | **Type** | Continuous | The [WebJob types](#webjob-types) are described earlier in this article. |- | **Scale** | Multi instance | Available only for Continuous WebJobs. Determines whether the program or script runs on all instances or just one instance. The option to run on multiple instances doesn't apply to the Free or Shared [pricing tiers](https://azure.microsoft.com/pricing/details/app-service/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). | --1. Select **OK**. + | **Scale** | Multi Instance | Available only for Continuous WebJobs. Determines whether the program or script runs on all instances or just one instance. The option to run on multiple instances doesn't apply to the Free or Shared [pricing tiers](https://azure.microsoft.com/pricing/details/app-service/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). | - The new WebJob appears on the **WebJobs** page. If you see a message that says the WebJob was added, but you don't see it, select **Refresh**. +1. The new WebJob appears on the **WebJobs** page. If you see a message that says the WebJob was added, but you don't see it, select **Refresh**. -  +1. To stop or restart a continuous WebJob, right-click the WebJob in the list and select the **Stop** or **Run** button, then confirm your selection. -1. To stop or restart a continuous WebJob, right-click the WebJob in the list and select **Stop** or **Start**. --  + :::image type="content" source="media/webjobs-create/continuous-webjob-stop.png" alt-text="Screenshot that shows how to stop a continuous WebJob in the Azure portal."::: ## <a name="CreateOnDemand"></a> Create a manually triggered WebJob Several steps in the three "Create..." sections are identical; when making changes in one don't forget the other two. --> -1. In the [Azure portal](https://portal.azure.com), search for and select **App Services**. --1. Select your web app, API app, or mobile app from the list. --1. In the left pane of your app's **App Service** page, select **WebJobs**. --  +1. In the [Azure portal](https://portal.azure.com), go to the **App Service** page of your App Service web app, API app, or mobile app. -2. On the **WebJobs** page, select **Add**. +1. From the left pane, select **WebJobs**, then select **Add**. -  + :::image type="content" source="media/webjobs-create/add-webjob.png" alt-text="Screenshot that shows how to add a WebJob in an App Service app in the portal (manually triggered WebJob)."::: -1. Fill in the **Add WebJob** settings as specified in the table. +1. Fill in the **Add WebJob** settings as specified in the table, then select **Create Webjob**. 
-  + :::image type="content" source="media/webjobs-create/configure-new-triggered-webjob.png" alt-text="Screenshot that shows how to configure a manually triggered WebJob for an App Service app."::: | Setting | Sample value | Description  | | | -- | |- | **Name** | myTriggeredWebJob | A name that is unique within an App Service app. Must start with a letter or a number and cannot contain special characters other than "-" and "_".| - | **File Upload** | ConsoleApp.zip | A *.zip* file that contains your executable or script file as well as any supporting files needed to run the program or script. The supported executable or script file types are listed in the [Supported file types](#acceptablefiles) section. | + | **Name** | myTriggeredWebJob | A name that is unique within an App Service app. Must start with a letter or a number and must not contain special characters other than "-" and "_".| + | **File Upload** | ConsoleApp1.zip | A *.zip* file that contains your executable or script file and any supporting files needed to run the program or script. The supported executable or script file types are listed in the [Supported file types](#acceptablefiles) section. | | **Type** | Triggered | The [WebJob types](#webjob-types) are described previously in this article. | | **Triggers** | Manual | | -4. Select **OK**. +1. The new WebJob appears on the **WebJobs** page. If you see a message that says the WebJob was added, but you don't see it, select **Refresh**. - The new WebJob appears on the **WebJobs** page. If you see a message that says the WebJob was added, but you don't see it, select **Refresh**. +1. To run a manually triggered WebJob, right-click the WebJob in the list and select the **Run** button, then confirm your selection. -  --7. To run the WebJob, right-click its name in the list and select **Run**. - -  + :::image type="content" source="media/webjobs-create/triggered-webjob-run.png" alt-text="Screenshot that shows how to run a manually triggered WebJob in the Azure portal."::: ## <a name="CreateScheduledCRON"></a> Create a scheduled WebJob Several steps in the three "Create..." sections are identical; when making changes in one don't forget the other two. --> -1. In the [Azure portal](https://portal.azure.com), search for and select **App Services**. --1. Select your web app, API app, or mobile app from the list. --1. In the left pane of your app's **App Service** page, select **WebJobs**. --  +1. In the [Azure portal](https://portal.azure.com), go to the **App Service** page of your App Service web app, API app, or mobile app. -1. On the **WebJobs** page, select **Add**. +1. From the left pane, select **WebJobs**, then select **Add**. -  + :::image type="content" source="media/webjobs-create/add-webjob.png" alt-text="Screenshot that shows how to add a WebJob in an App Service app in the portal (scheduled WebJob)."::: -3. Fill in the **Add WebJob** settings as specified in the table. +1. Fill in the **Add WebJob** settings as specified in the table, then select **Create Webjob**. -  + :::image type="content" source="media/webjobs-create/configure-new-scheduled-webjob.png" alt-text="Screenshot that shows how to configure a scheduled WebJob in an App Service app."::: | Setting | Sample value | Description  | | | -- | |- | **Name** | myScheduledWebJob | A name that is unique within an App Service app. Must start with a letter or a number and cannot contain special characters other than "-" and "_". 
| - | **File Upload** | ConsoleApp.zip | A *.zip* file that contains your executable or script file as well as any supporting files needed to run the program or script. The supported executable or script file types are listed in the [Supported file types](#acceptablefiles) section. | + | **Name** | myScheduledWebJob | A name that is unique within an App Service app. Must start with a letter or a number and must not contain special characters other than "-" and "_". | + | **File Upload** | ConsoleApp.zip | A *.zip* file that contains your executable or script file and any supporting files needed to run the program or script. The supported executable or script file types are listed in the [Supported file types](#acceptablefiles) section. | | **Type** | Triggered | The [WebJob types](#webjob-types) are described earlier in this article. | | **Triggers** | Scheduled | For the scheduling to work reliably, enable the Always On feature. Always On is available only in the Basic, Standard, and Premium pricing tiers.| | **CRON Expression** | 0 0/20 * * * * | [CRON expressions](#ncrontab-expressions) are described in the following section. | -4. Select **OK**. +1. The new WebJob appears on the **WebJobs** page. If you see a message that says the WebJob was added, but you don't see it, select **Refresh**. - The new WebJob appears on the **WebJobs** page. If you see a message that says the WebJob was added, but you don't see it, select **Refresh**. +1. The scheduled WebJob runs on the schedule defined by the CRON expression. To run it manually at any time, right-click the WebJob in the list and select the **Run** button, then confirm your selection. -  + :::image type="content" source="media/webjobs-create/scheduled-webjob-run.png" alt-text="Screenshot that shows how to run a scheduled WebJob manually in the Azure portal."::: ## NCRONTAB expressions To learn more, see [Scheduling a triggered WebJob](webjobs-dotnet-deploy-vs.md#s You can manage the running state of individual WebJobs running in your site in the [Azure portal](https://portal.azure.com). Just go to **Settings** > **WebJobs**, choose the WebJob, and you can start and stop the WebJob. You can also view and modify the password of the webhook that runs the WebJob. -You can also [add an application setting](configure-common.md#configure-app-settings) named `WEBJOBS_STOPPED` with a value of `1` to stop all WebJobs running on your site. This can be handy as a way to prevent conflicting WebJobs from running both in staging and production slots. You can similarly use a value of `1` for the `WEBJOBS_DISABLE_SCHEDULE` setting to disable triggered WebJobs in the site or a staging slot. For slots, remember to enable the **Deployment slot setting** option so that the setting itself doesn't get swapped. +You can also [add an application setting](configure-common.md#configure-app-settings) named `WEBJOBS_STOPPED` with a value of `1` to stop all WebJobs running on your site. You can use this method to prevent conflicting WebJobs from running both in staging and production slots. You can similarly use a value of `1` for the `WEBJOBS_DISABLE_SCHEDULE` setting to disable triggered WebJobs in the site or a staging slot. For slots, remember to enable the **Deployment slot setting** option so that the setting itself doesn't get swapped. ## <a name="ViewJobHistory"></a> View the job history -1. Select the WebJob and then to see the history, select **Logs**. -  +1. For the WebJob you want to see, select **Logs**. 
+ :::image type="content" source="media/webjobs-create/open-logs.png" alt-text="Screenshot that shows how to access logs for a WebJob."::: + 2. In the **WebJob Details** page, select a time to see details for one run. -  + :::image type="content" source="media/webjobs-create/webjob-details-page.png" alt-text="Screenshot that shows how to choose a WebJob run to see its detailed logs."::: -3. In the **WebJob Run Details** page, select **Toggle Output** to see the text of the log contents. - -  +3. In the **WebJob Run Details** page, you can select **download** to get a text file of the logs, or select the **WebJobs** breadcrumb link at the top of the page to see logs for a different WebJob. - To see the output text in a separate browser window, select **download**. To download the text itself, right-click **download** and use your browser options to save the file contents. - -5. Select the **WebJobs** breadcrumb link at the top of the page to go to a list of WebJobs. +## WebJob statuses ++Below is a list of common WebJob statuses: ++- **Initializing** The app has just started and the WebJob is going through its initialization process. +- **Starting** The WebJob is starting up. +- **Running** The WebJob is running. +- **PendingRestart** A continuous WebJob exits in less than two minutes since it started for any reason, and App Service waits 60 seconds before restarting the WebJob. If the continuous WebJob exits after the two-minute mark, App Service doesn't wait the 60 seconds and restarts the WebJob immediately. +- **Stopped** The WebJob was stopped (usually from the Azure portal) and is currently not running and won't run until you start it again manually, even for a continuous or scheduled WebJob. +- **Aborted** This can occur for a number of reasons, such as when a long-running WebJob reaches the timeout marker. -  - -  - ## <a name="NextSteps"></a> Next steps The Azure WebJobs SDK can be used with WebJobs to simplify many programming tasks. For more information, see [What is the WebJobs SDK](https://github.com/Azure/azure-webjobs-sdk/wiki). |
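The `WEBJOBS_STOPPED` and `WEBJOBS_DISABLE_SCHEDULE` settings described above can also be applied from the command line. The following Azure CLI sketch assumes placeholder resource group, app, and slot names; the `--slot-settings` flag keeps the values pinned to the slot, matching the **Deployment slot setting** guidance.

```azurecli
# Stop all WebJobs in the staging slot and keep the settings from swapping
# (resource group, app, and slot names below are placeholders)
az webapp config appsettings set \
    --resource-group <resource-group> \
    --name <app-name> \
    --slot staging \
    --slot-settings WEBJOBS_STOPPED=1 WEBJOBS_DISABLE_SCHEDULE=1
```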
application-gateway | Ingress Controller Install Existing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-install-existing.md | -The Application Gateway Ingress Controller (AGIC) is a pod within your Kubernetes cluster. +The Application Gateway Ingress Controller (AGIC) is a pod within your Azure Kubernetes Service (AKS) cluster. AGIC monitors the Kubernetes [Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/) resources, and creates and applies Application Gateway config based on the status of the Kubernetes cluster. -## Outline: +## Outline + - [Prerequisites](#prerequisites) - [Azure Resource Manager Authentication (ARM)](#azure-resource-manager-authentication)- - Option 1: [Set up aad-pod-identity](#set-up-aad-pod-identity) and create Azure Identity on ARMs - - Option 2: [Using a Service Principal](#using-a-service-principal) + - Option 1: [Set up Azure AD workload identity](#set-up-azure-ad-workload-identity) and create Azure Identity on ARMs + - Option 2: [Set up a Service Principal](#using-a-service-principal) - [Install Ingress Controller using Helm](#install-ingress-controller-as-a-helm-chart) - [Shared Application Gateway](#shared-application-gateway): Install AGIC in an environment, where Application Gateway is-shared between one AKS clusters and/or other Azure components. +shared between one AKS cluster and/or other Azure components. ## Prerequisites+ This document assumes you already have the following tools and infrastructure installed:-- [AKS](https://azure.microsoft.com/services/kubernetes-service/) with [Azure Container Networking Interface (CNI)](../aks/configure-azure-cni.md)-- [Application Gateway v2](./tutorial-autoscale-ps.md) in the same virtual network as AKS-- [AAD Pod Identity](https://github.com/Azure/aad-pod-identity) installed on your AKS cluster-- [Cloud Shell](https://shell.azure.com/) is the Azure shell environment, which has `az` CLI, `kubectl`, and `helm` installed. These tools are required for the following commands:++- [An AKS cluster](../aks/intro-kubernetes.md) with [Azure Container Networking Interface (CNI)](../aks/configure-azure-cni.md) +- [Application Gateway v2](./tutorial-autoscale-ps.md) in the same virtual network as the AKS cluster +- [Azure AD workload identity](../aks/workload-identity-overview.md) configured for your AKS cluster +- [Cloud Shell](https://shell.azure.com/) is the Azure shell environment, which has `az` CLI, `kubectl`, and `helm` installed. These tools are required for commands used to support configuring this deployment. **Backup your Application Gateway's configuration** before installing AGIC:- 1. using [Azure portal](https://portal.azure.com/) navigate to your `Application Gateway` instance - 2. from `Export template` click `Download` ++ 1. From the [Azure portal](https://portal.azure.com/), navigate to your Application Gateway instance. + 2. Under the **Automation** section, select **Export template** and then select **Download**. The zip file you downloaded contains JSON templates, bash, and PowerShell scripts you could use to restore App Gateway should that become necessary ## Install Helm+ [Helm](../aks/kubernetes-helm.md) is a package manager for Kubernetes, used to install the `application-gateway-kubernetes-ingress` package. Use [Cloud Shell](https://shell.azure.com/) to install Helm: Use [Cloud Shell](https://shell.azure.com/) to install Helm: ``` 1. 
Add the AGIC Helm repository:+ ```bash helm repo add application-gateway-kubernetes-ingress https://appgwingress.blob.core.windows.net/ingress-azure-helm-package/ helm repo update Use [Cloud Shell](https://shell.azure.com/) to install Helm: AGIC communicates with the Kubernetes API server and the Azure Resource Manager. It requires an identity to access these APIs. -## Set up AAD Pod Identity +## Set up Azure AD workload identity -[AAD Pod Identity](https://github.com/Azure/aad-pod-identity) is a controller, similar to AGIC, which also runs on your -AKS. It binds Azure Active Directory identities to your Kubernetes pods. Identity is required for an application in a -Kubernetes pod to be able to communicate with other Azure components. In the particular case here, we need authorization +[Azure AD workload identity](../aks/workload-identity-overview.md) is an identity you assign to a software workload, to authenticate and access other services and resources. This identity enables your AKS pod to use this identity and authenticate with other Azure resources. For this configuration, we need authorization for the AGIC pod to make HTTP requests to [ARM](../azure-resource-manager/management/overview.md). -Follow the [AAD Pod Identity installation instructions](https://github.com/Azure/aad-pod-identity#deploy-the-azure-aad-identity-infra) to add this component to your AKS. +1. Use the Azure CLI [az account set](/cli/azure/account#az-account-set) command to set a specific subscription to be the current active subscription. Then use the [az identity create](/cli/azure/identity#az-identity-create) command to create a managed identity. The identity needs to be created in the [node resource group](../aks/concepts-clusters-workloads.md#node-resource-group). The node resource group is assigned a name by default, such as *MC_myResourceGroup_myAKSCluster_eastus*. -Next we need to create an Azure identity and give it permissions ARM. -Use [Cloud Shell](https://shell.azure.com/) to run all of the following commands and create an identity: --1. Create an Azure identity **in the same resource group as the AKS nodes**. Picking the correct resource group is -important. The resource group required in the following commands is *not* the one referenced on the AKS portal pane. This is -the resource group of the `aks-agentpool` virtual machines. Typically that resource group starts with `MC_` and contains - the name of your AKS. For instance: `MC_resourceGroup_aksABCD_westus` + ```azurecli-interactive + az account set --subscription "subscriptionID" + ``` - ```azurecli - az identity create -g <agent-pool-resource-group> -n <identity-name> + ```azurecli-interactive + az identity create --name "userAssignedIdentityName" --resource-group "resourceGroupName" --location "location" --subscription "subscriptionID" ``` -1. For the role assignment, commands we need to obtain `principalId` for the newly created identity: +1. For the role assignment, run the following command to identify the `principalId` for the newly created identity: ```azurecli az identity show -g <resourcegroup> -n <identity-name> ``` -1. Give the identity `Contributor` access to your Application Gateway. For this you need the ID of the Application Gateway, which -looks something like this: `/subscriptions/A/resourceGroups/B/providers/Microsoft.Network/applicationGateways/C` +1. Grant the identity **Contributor** access to your Application Gateway. 
You need the ID of the Application Gateway, which +looks like: `/subscriptions/A/resourceGroups/B/providers/Microsoft.Network/applicationGateways/C`. First, get the list of Application Gateway IDs in your subscription by running the following command: ++ ```azurecli + az network application-gateway list --query '[].id' + ``` - Get the list of Application Gateway IDs in your subscription with: `az network application-gateway list --query '[].id'` + To assign the identity **Contributor** access, run the following command: ```azurecli az role assignment create \ looks something like this: `/subscriptions/A/resourceGroups/B/providers/Microsof --scope <App-Gateway-ID> ``` -1. Give the identity `Reader` access to the Application Gateway resource group. The resource group ID would look like: +1. Grant the identity **Reader** access to the Application Gateway resource group. The resource group ID looks like: `/subscriptions/A/resourceGroups/B`. You can get all resource groups with: `az group list --query '[].id'` ```azurecli looks something like this: `/subscriptions/A/resourceGroups/B/providers/Microsof --scope <App-Gateway-Resource-Group-ID> ``` ->[!Note] -> If the virtual network Application Gateway is deployed into doesn't reside in the same resource group as the AKS nodes, please ensure the identity used by AGIC has the **Microsoft.Network/virtualNetworks/subnets/join/action** permission delegated to the subnet Application Gateway is deployed into. If a custom role is not defined with this permission, you may use the built-in _Network Contributor_ role, which contains the _Microsoft.Network/virtualNetworks/subnets/join/action_ permission. +>[!NOTE] +> If the virtual network Application Gateway is deployed into doesn't reside in the same resource group as the AKS nodes, please ensure the identity used by AGIC has the **Microsoft.Network/virtualNetworks/subnets/join/action** permission delegated to the subnet Application Gateway is deployed into. If a custom role is not defined with this permission, you may use the built-in **Network Contributor** role, which contains the **Microsoft.Network/virtualNetworks/subnets/join/action** permission. ## Using a Service Principal-It's also possible to provide AGIC access to ARM via a Kubernetes secret. ++It's also possible to provide AGIC access to ARM using a Kubernetes secret. 1. Create an Active Directory Service Principal and encode with base64. The base64 encoding is required for the JSON blob to be saved to Kubernetes. -```azurecli -az ad sp create-for-rbac --role Contributor --sdk-auth | base64 -w0 -``` + ```azurecli + az ad sp create-for-rbac --role Contributor --sdk-auth | base64 -w0 + ``` 2. Add the base64 encoded JSON blob to the `helm-config.yaml` file. More information on `helm-config.yaml` is in the next section.-```yaml -armAuth: - type: servicePrincipal - secretJSON: <Base64-Encoded-Credentials> -``` ++ ```yaml + armAuth: + type: servicePrincipal + secretJSON: <Base64-Encoded-Credentials> + ``` ## Install Ingress Controller as a Helm Chart+ In the first few steps, we install Helm's Tiller on your Kubernetes cluster. Use [Cloud Shell](https://shell.azure.com/) to install the AGIC Helm package: 1. Add the `application-gateway-kubernetes-ingress` helm repo and perform a helm update In the first few steps, we install Helm's Tiller on your Kubernetes cluster. Use ``` 1. 
Download helm-config.yaml, which configures AGIC:+ ```bash wget https://raw.githubusercontent.com/Azure/application-gateway-kubernetes-ingress/master/docs/examples/sample-helm-config.yaml -O helm-config.yaml ```- Or copy the following YAML file: - ++ Or copy the following YAML file: + ```yaml # This file contains the essential configs for the ingress controller helm chart In the first few steps, we install Helm's Tiller on your Kubernetes cluster. Use # Specify which kubernetes namespace the ingress controller must watch # Default value is "default" # Leaving this variable out or setting it to blank or empty string would- # result in Ingress Controller observing all acessible namespaces. + # result in Ingress Controller observing all accessible namespaces. # # kubernetes: # watchNamespace: <namespace> In the first few steps, we install Helm's Tiller on your Kubernetes cluster. Use # Specify the authentication with Azure Resource Manager # # Two authentication methods are available:- # - Option 1: AAD-Pod-Identity (https://github.com/Azure/aad-pod-identity) + # - Option 1: Azure-AD-workload-identity armAuth:- type: aadPodIdentity - identityResourceID: <identityResourceId> + type: workloadIdentity identityClientID: <identityClientId> ## Alternatively you can use Service Principal credentials In the first few steps, we install Helm's Tiller on your Kubernetes cluster. Use 1. Edit helm-config.yaml and fill in the values for `appgw` and `armAuth`. - > [!NOTE] - > The `<identity-resource-id>` and `<identity-client-id>` are the properties of the Azure AD Identity you setup in the previous section. You can retrieve this information by running the following command: `az identity show -g <resourcegroup> -n <identity-name>`, where `<resourcegroup>` is the resource group in which the top level AKS cluster object, Application Gateway and Managed Identify are deployed. + > [!NOTE] + > The `<identity-client-id>` is a property of the Azure AD workload identity you setup in the previous section. You can retrieve this information by running the following command: `az identity show -g <resourcegroup> -n <identity-name>`, where `<resourcegroup>` is the resource group hosting the infrastructure resources related to the AKS cluster, Application Gateway and managed identity. 1. Install Helm chart `application-gateway-kubernetes-ingress` with the `helm-config.yaml` configuration from the previous step In the first few steps, we install Helm's Tiller on your Kubernetes cluster. Use ``` Alternatively you can combine the `helm-config.yaml` and the Helm command in one step:+ ```bash helm install ./helm/ingress-azure \ --name ingress-azure \ In the first few steps, we install Helm's Tiller on your Kubernetes cluster. Use Refer to [this how-to guide](ingress-controller-expose-service-over-http-https.md) to understand how you can expose an AKS service over HTTP or HTTPS, to the internet, using an Azure Application Gateway. -- ## Shared Application Gateway+ By default AGIC assumes full ownership of the Application Gateway it's linked to. AGIC version 0.8.0 and later can share a single Application Gateway with other Azure components. For instance, we could use the same Application Gateway for an app hosted on Virtual Machine Scale Set and an AKS cluster. **Backup your Application Gateway's configuration** before enabling this setting:- 1. using [Azure portal](https://portal.azure.com/) navigate to your `Application Gateway` instance - 2. from `Export template` click `Download` ++ 1. 
From the [Azure portal](https://portal.azure.com/), navigate to your `Application Gateway` instance + 2. Under the **Automation** section, select **Export template** and then select **Download**. The zip file you downloaded contains JSON templates, bash, and PowerShell scripts you could use to restore Application Gateway ### Example Scenario+ Let's look at an imaginary Application Gateway, which manages traffic for two web sites:- - `dev.contoso.com` - hosted on a new AKS, using Application Gateway and AGIC ++ - `dev.contoso.com` - hosted on a new AKS cluster, using Application Gateway and AGIC - `prod.contoso.com` - hosted on an [Azure Virtual Machine Scale Set](https://azure.microsoft.com/services/virtual-machine-scale-sets/) With default settings, AGIC assumes 100% ownership of the Application Gateway it's pointed to. AGIC overwrites all of App-Gateway's configuration. If we were to manually create a listener for `prod.contoso.com` (on Application Gateway), without +Gateway's configuration. If you manually create a listener for `prod.contoso.com` (on Application Gateway) without defining it in the Kubernetes Ingress, AGIC deletes the `prod.contoso.com` config within seconds. To install AGIC and also serve `prod.contoso.com` from our Virtual Machine Scale Set machines, we must constrain AGIC to configuring The command above creates an `AzureIngressProhibitedTarget` object. This makes A Application Gateway config for `prod.contoso.com` and explicitly instructs it to avoid changing any configuration related to that hostname. - ### Enable with new AGIC installation-To limit AGIC (version 0.8.0 and later) to a subset of the Application Gateway configuration modify the `helm-config.yaml` template. ++To limit AGIC (version 0.8.0 and later) to a subset of the Application Gateway configuration, modify the `helm-config.yaml` template. Under the `appgw:` section, add `shared` key and set it to `true`. ```yaml appgw: ``` Apply the Helm changes:+ 1. Ensure the `AzureIngressProhibitedTarget` CRD is installed with:+ ```bash kubectl apply -f https://raw.githubusercontent.com/Azure/application-gateway-kubernetes-ingress/7b55ad194e7582c47589eb9e78615042e00babf3/crds/AzureIngressProhibitedTarget-v1-CRD-v1.yaml ```+ 2. Update Helm:+ ```bash helm upgrade \ --recreate-pods \ Apply the Helm changes: ingress-azure application-gateway-kubernetes-ingress/ingress-azure ``` -As a result your AKS has a new instance of `AzureIngressProhibitedTarget` called `prohibit-all-targets`: +As a result, your AKS cluster has a new instance of `AzureIngressProhibitedTarget` called `prohibit-all-targets`: + ```bash kubectl get AzureIngressProhibitedTargets prohibit-all-targets -o yaml ``` As a result your AKS has a new instance of `AzureIngressProhibitedTarget` called The object `prohibit-all-targets`, as the name implies, prohibits AGIC from changing config for *any* host and path. Helm install with `appgw.shared=true` deploys AGIC, but doesn't make any changes to Application Gateway. - ### Broaden permissions+ Since Helm with `appgw.shared=true` and the default `prohibit-all-targets` blocks AGIC from applying a config, broaden AGIC permissions: -1. Create a new `AzureIngressProhibitedTarget` with your specific setup: +1. 
Create a new YAML file named `AzureIngressProhibitedTarget` with the following snippet containing your specific setup: + ```bash cat <<EOF | kubectl apply -f - apiVersion: "appgw.ingress.k8s.io/v1" Since Helm with `appgw.shared=true` and the default `prohibit-all-targets` block ``` ### Enable for an existing AGIC installation-Let's assume that we already have a working AKS, Application Gateway, and configured AGIC in our cluster. We have an Ingress for -`prod.contoso.com` and are successfully serving traffic for it from AKS. We want to add `staging.contoso.com` to our ++Let's assume that we already have a working AKS cluster, Application Gateway, and configured AGIC in our cluster. We have an Ingress for +`prod.contoso.com` and are successfully serving traffic for it from the cluster. We want to add `staging.contoso.com` to our existing Application Gateway, but need to host it on a [VM](https://azure.microsoft.com/services/virtual-machines/). We are going to reuse the existing Application Gateway and manually configure a listener and backend pools for-`staging.contoso.com`. But manually tweaking Application Gateway config (via +`staging.contoso.com`. But manually tweaking Application Gateway config (using [portal](https://portal.azure.com), [ARM APIs](/rest/api/resources/) or [Terraform](https://www.terraform.io/)) would conflict with AGIC's assumptions of full ownership. Shortly after we apply changes, AGIC overwrites or deletes them. We can prohibit AGIC from making changes to a subset of configuration. -1. Create an `AzureIngressProhibitedTarget` object: +1. Create a new YAML file named `AzureIngressProhibitedTarget` with the following snippet: + ```bash cat <<EOF | kubectl apply -f - apiVersion: "appgw.ingress.k8s.io/v1" We can prohibit AGIC from making changes to a subset of configuration. kubectl get AzureIngressProhibitedTargets ``` -3. Modify Application Gateway config via portal - add listeners, routing rules, backends etc. The new object we created +3. Modify Application Gateway config from the Azure portal - add listeners, routing rules, backends etc. The new object we created (`manually-configured-staging-environment`) prohibits AGIC from overwriting Application Gateway configuration related to `staging.contoso.com`. |
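Relatedly, the earlier note about the **Microsoft.Network/virtualNetworks/subnets/join/action** permission can be addressed with a role assignment scoped to the Application Gateway subnet. A minimal sketch, assuming the built-in **Network Contributor** role; the identity client ID and subnet resource ID are placeholders:

```azurecli
# Grant the AGIC identity join access on the Application Gateway subnet
# (the identity client ID and subnet resource ID are placeholders)
az role assignment create \
    --assignee <agic-identity-client-id> \
    --role "Network Contributor" \
    --scope "/subscriptions/A/resourceGroups/B/providers/Microsoft.Network/virtualNetworks/C/subnets/D"
```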
applied-ai-services | V3 Migration Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/v3-migration-guide.md | monikerRange: '>=form-recog-2.1.0' ## Migrating from a v3.0 preview API version -Preview APIs are periodically deprecated. If you're using a preview API version, plan on updating your application to target the GA API version once available. To migrate from the 2021-09-30-preview or the 2022-01-30-preview API versions to the `2022-08-31` (GA) API version using the SDK, update to the [current version of the language specific SDK](sdk-overview.md). +Preview APIs are periodically deprecated. If you're using a preview API version, please update your application to target the GA API version. To migrate from the 2021-09-30-preview, 2022-01-30-preview or the 2022-06-30-preview API versions to the `2022-08-31` (GA) API version using the SDK, update to the [current version of the language specific SDK](sdk-overview.md). -The `2022-08-31` API has a few updates from the preview API versions: +> [!IMPORTANT] +> +> Preview API versions 2021-09-30-preview, 2022-01-30-preview and 2022-06-30-preview are being retired July 31st 2023. All analyze requests that use these API versions will fail. Custom neural models trained with any of these API versions will no longer be usable once the API versions are deprecated. All custom neural models trained with preview API versions will need to be retrained with the GA API version. ++The `2022-08-31` (GA) API has a few updates from the preview API versions: * Field rename: boundingBox to polygon to support non-quadrilateral polygon regions. * Field deleted: entities removed from the result of the general document model. |
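To confirm an application no longer depends on the retiring preview versions, it can help to pin requests explicitly to `2022-08-31`. A minimal sketch using the prebuilt layout model; the endpoint, key, and document URL are placeholders:

```bash
# Analyze a document while explicitly targeting the 2022-08-31 (GA) API version
# (endpoint, key, and document URL are placeholders)
curl -X POST "https://<resource-name>.cognitiveservices.azure.com/formrecognizer/documentModels/prebuilt-layout:analyze?api-version=2022-08-31" \
    -H "Ocp-Apim-Subscription-Key: <your-key>" \
    -H "Content-Type: application/json" \
    -d '{"urlSource": "https://<your-storage>/sample.pdf"}'
```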
automation | Automation Managed Identity Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-managed-identity-faq.md | From 1 April 2023, creation of new Run As accounts won't be possible. We stro ## Will runbooks that still use the Run As account be able to authenticate after September 30, 2023? Yes, the runbooks will be able to authenticate until the Run As account certificate expires. After 30 September 2023, all runbook executions using RunAs accounts won't be supported. +## Are Connections and Credentials assets retiring on 30th Sep 2023? ++Automation Run As accounts will not be supported after **30 September 2023**. Connections and Credentials assets don't come under the purview of this retirement. For a more secure way of authentication, we recommend that you use [Managed Identities](automation-security-overview.md#managed-identities). ++ ## What is a managed identity? Applications use managed identities in Azure AD when they're connecting to resources that support Azure AD authentication. Applications can use managed identities to obtain Azure AD tokens without managing credentials, secrets, certificates, or keys. |
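As an illustration of that last point, on compute that exposes the Azure Instance Metadata Service a workload can exchange its managed identity for a token without any stored secret. A minimal sketch; the resource URI shown is just an example target:

```bash
# Request an Azure AD access token for the managed identity via the
# instance metadata endpoint; no credential is stored in the workload
curl -s -H "Metadata: true" \
    "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://management.azure.com/"
```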
automation | Migrate Run As Accounts Managed Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/migrate-run-as-accounts-managed-identity.md | -Run As accounts in Azure Automation provide authentication for managing resources deployed through Azure Resource Manager or the classic deployment model. Whenever a Run As account is created, an Azure AD application is registered, and a self-signed certificate is generated. The certificate is valid for one year. Renewing the certificate every year before it expires keeps the Automation account working but adds overhead. +Run As accounts in Azure Automation provide authentication for managing resources deployed through Azure Resource Manager or the classic deployment model. Whenever a Run As account is created, an Azure AD application is registered, and a self-signed certificate is generated. The certificate is valid for one month. Renewing the certificate every month before it expires keeps the Automation account working but adds overhead. You can now configure Automation accounts to use a [managed identity](automation-security-overview.md#managed-identities), which is the default option when you create an Automation account. With this feature, an Automation account can authenticate to Azure resources without the need to exchange any credentials. A managed identity removes the overhead of renewing the certificate or managing the service principal. |
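After enabling a managed identity, the identity typically needs the role the Run As account previously held before runbooks can authenticate with it. A hedged sketch, assuming a subscription-scoped **Contributor** assignment like the Run As default; the principal ID and subscription ID are placeholders:

```azurecli
# Grant the Automation account's managed identity the role the runbooks need
# (principal ID and subscription ID are placeholders)
az role assignment create \
    --assignee <automation-account-principal-id> \
    --role Contributor \
    --scope "/subscriptions/<subscription-id>"
```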
azure-app-configuration | Enable Dynamic Configuration Aspnet Core | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/enable-dynamic-configuration-aspnet-core.md | A *sentinel key* is a key that you update after you complete the change of all o 1. Open *Program.cs*, and update the `AddAzureAppConfiguration` method you added previously during the quickstart. - #### [.NET 6.x](#tab/core6x) + #### [.NET 6.0+](#tab/core6x) ```csharp // Load configuration from Azure App Configuration builder.Configuration.AddAzureAppConfiguration(options => A *sentinel key* is a key that you update after you complete the change of all o 1. Add Azure App Configuration middleware to the service collection of your app. - #### [.NET 6.x](#tab/core6x) + #### [.NET 6.0+](#tab/core6x) Update *Program.cs* with the following code. ```csharp A *sentinel key* is a key that you update after you complete the change of all o 1. Call the `UseAzureAppConfiguration` method. It enables your app to use the App Configuration middleware to update the configuration for you automatically. - #### [.NET 6.x](#tab/core6x) + #### [.NET 6.0+](#tab/core6x) Update *Program.cs* with the following code. ```csharp |
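When rolling out a configuration change with this pattern, the sentinel key is updated only after every other key-value is in place. One way to do that final update is from the CLI; the store name is a placeholder, and the key name shown is an assumed example following the `TestApp` convention:

```azurecli
# Update application keys first, then touch the sentinel key last so the
# app refreshes all values in one consistent pass (names are placeholders)
az appconfig kv set --name <store-name> --key TestApp:Settings:Sentinel --value 2 --yes
```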
azure-app-configuration | Enable Dynamic Configuration Dotnet Core | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/enable-dynamic-configuration-dotnet-core.md | Title: "Tutorial: Use dynamic configuration in a .NET Core app" + Title: "Tutorial: Use dynamic configuration in a .NET app" -description: In this tutorial, you learn how to dynamically update the configuration data for .NET Core apps +description: In this tutorial, you learn how to dynamically update the configuration data for .NET apps documentationcenter: '' -# Tutorial: Use dynamic configuration in a .NET Core app +# Tutorial: Use dynamic configuration in a .NET app -The App Configuration .NET provider library supports updating configuration on demand without causing an application to restart. This tutorial shows how you can implement dynamic configuration updates in your code. It builds on the app introduced in the quickstart. You should finish [Create a .NET Core app with App Configuration](./quickstart-dotnet-core-app.md) before continuing. +The App Configuration .NET provider library supports updating configuration on demand without causing an application to restart. This tutorial shows how you can implement dynamic configuration updates in your code. It builds on the app introduced in the quickstart. You should finish [Create a .NET app with App Configuration](./quickstart-dotnet-core-app.md) before continuing. You can use any code editor to do the steps in this tutorial. [Visual Studio Code](https://code.visualstudio.com/) is an excellent option that's available on the Windows, macOS, and Linux platforms. In this tutorial, you learn how to: > [!div class="checklist"]-> * Set up your .NET Core app to update its configuration in response to changes in an App Configuration store. +> * Set up your .NET app to update its configuration in response to changes in an App Configuration store. > * Consume the latest configuration in your application. ## Prerequisites [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] -Finish the quickstart [Create a .NET Core app with App Configuration](./quickstart-dotnet-core-app.md). +Finish the quickstart [Create a .NET app with App Configuration](./quickstart-dotnet-core-app.md). ## Activity-driven configuration refresh -Open *Program.cs* and update the code as following. +Open the `Program.cs` file and update the code configurations to match the following: ++### [ASP.NET Core 6.0+](#tab/core6x) ++```csharp +using Microsoft.Extensions.Configuration; +using Microsoft.Extensions.Configuration.AzureAppConfiguration; ++IConfiguration _configuration = null; +IConfigurationRefresher _refresher = null; ++var builder = new ConfigurationBuilder(); +builder.AddAzureAppConfiguration(options => +{ + options.Connect(Environment.GetEnvironmentVariable("ConnectionString")) + .ConfigureRefresh(refresh => + { + refresh.Register("TestApp:Settings:Message") + .SetCacheExpiration(TimeSpan.FromSeconds(10)); + }); ++ _refresher = options.GetRefresher(); +}); ++_configuration = builder.Build(); ++Console.WriteLine(_configuration["TestApp:Settings:Message"] ?? "Hello world!"); ++// Wait for the user to press Enter +Console.ReadLine(); ++if (_refresher != null) +{ + await _refresher.TryRefreshAsync(); + Console.WriteLine(_configuration["TestApp:Settings:Message"] ?? 
"Hello world!"); ++} +``` ++### [ASP.NET Core 3.x](#tab/core3x) ```csharp using Microsoft.Extensions.Configuration; namespace TestConsole } } ```+ In the `ConfigureRefresh` method, a key within your App Configuration store is registered for change monitoring. The `Register` method has an optional boolean parameter `refreshAll` that can be used to indicate whether all configuration values should be refreshed if the registered key changes. In this example, only the key *TestApp:Settings:Message* will be refreshed. The `SetCacheExpiration` method specifies the minimum time that must elapse before a new request is made to App Configuration to check for any configuration changes. In this example, you override the default expiration time of 30 seconds, specifying a time of 10 seconds instead for demonstration purposes. -Calling the `ConfigureRefresh` method alone won't cause the configuration to refresh automatically. You call the `TryRefreshAsync` method from the interface `IConfigurationRefresher` to trigger a refresh. This design is to avoid phantom requests sent to App Configuration even when your application is idle. You will want to include the `TryRefreshAsync` call where you consider your application active. For example, it can be when you process an incoming message, an order, or an iteration of a complex task. It can also be in a timer if your application is active all the time. In this example, you call `TryRefreshAsync` every time you press the Enter key. Note that, even if the call `TryRefreshAsync` fails for any reason, your application will continue to use the cached configuration. Another attempt will be made when the configured cache expiration time has passed and the `TryRefreshAsync` call is triggered by your application activity again. Calling `TryRefreshAsync` is a no-op before the configured cache expiration time elapses, so its performance impact is minimal, even if it's called frequently. +Calling the `ConfigureRefresh` method alone won't cause the configuration to refresh automatically. You call the `TryRefreshAsync` method from the interface `IConfigurationRefresher` to trigger a refresh. This design is to avoid phantom requests sent to App Configuration even when your application is idle. You'll want to include the `TryRefreshAsync` call where you consider your application active. For example, it can be when you process an incoming message, an order, or an iteration of a complex task. It can also be in a timer if your application is active all the time. In this example, you call `TryRefreshAsync` every time you press the Enter key. Even if the call `TryRefreshAsync` fails for any reason, your application continues to use the cached configuration. Another attempt is made when the configured cache expiration time has passed and the `TryRefreshAsync` call is triggered by your application activity again. Calling `TryRefreshAsync` is a no-op before the configured cache expiration time elapses, so its performance impact is minimal, even if it's called frequently. ## Build and run the app locally Logs are output upon configuration refresh and contain detailed information on k ## Next steps -In this tutorial, you enabled your .NET Core app to dynamically refresh configuration settings from App Configuration. To learn how to use an Azure managed identity to streamline the access to App Configuration, continue to the next tutorial. +In this tutorial, you enabled your .NET app to dynamically refresh configuration settings from App Configuration. 
To learn how to use an Azure managed identity to streamline access to App Configuration, continue to the next tutorial. > [!div class="nextstepaction"] > [Managed identity integration](./howto-integrate-azure-managed-service-identity.md) |
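To observe the refresh end to end, one option is to change the registered key from a second terminal while the app waits for input, then press Enter in the app. A sketch with a placeholder store name:

```azurecli
# Change the watched key; the running app picks up the new value the next
# time TryRefreshAsync runs after the cache expiration (store name is a placeholder)
az appconfig kv set --name <store-name> --key TestApp:Settings:Message --value "Updated value" --yes
```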
azure-app-configuration | Howto Integrate Azure Managed Service Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-integrate-azure-managed-service-identity.md | -Azure App Configuration and its .NET Core, .NET Framework, and Java Spring client libraries have managed identity support built into them. Although you aren't required to use it, the managed identity eliminates the need for an access token that contains secrets. Your code can access the App Configuration store using only the service endpoint. You can embed this URL in your code directly without exposing any secret. +Azure App Configuration and its .NET, .NET Framework, and Java Spring client libraries have managed identity support built into them. Although you aren't required to use it, the managed identity eliminates the need for an access token that contains secrets. Your code can access the App Configuration store using only the service endpoint. You can embed this URL in your code directly without exposing any secret. :::zone target="docs" pivot="framework-dotnet" To complete this tutorial, you must have: :::zone target="docs" pivot="framework-dotnet" -* [.NET Core SDK](https://dotnet.microsoft.com/download). +* [.NET SDK](https://dotnet.microsoft.com/download). * [Azure Cloud Shell configured](../cloud-shell/quickstart.md). :::zone-end The following steps describe how to assign the App Configuration Data Reader rol using Azure.Identity; ``` -1. If you wish to access only values stored directly in App Configuration, update the `CreateWebHostBuilder` method by replacing the `config.AddAzureAppConfiguration()` method (this method is found in the `Microsoft.Azure.AppConfiguration.AspNetCore` package). +1. To access values stored in App Configuration, update the `Builder` configuration to use the the `AddAzureAppConfiguration()` method. - > [!IMPORTANT] - > `CreateHostBuilder` replaces `CreateWebHostBuilder` in .NET Core 3.0. Select the correct syntax based on your environment. -- ### [.NET Core 5.x](#tab/core5x) + ### [.NET 6.0+](#tab/core6x) ```csharp- public static IHostBuilder CreateHostBuilder(string[] args) => - Host.CreateDefaultBuilder(args) - .ConfigureWebHostDefaults(webBuilder => - webBuilder.ConfigureAppConfiguration((hostingContext, config) => - { - var settings = config.Build(); - config.AddAzureAppConfiguration(options => - options.Connect(new Uri(settings["AppConfig:Endpoint"]), new ManagedIdentityCredential())); - }) - .UseStartup<Startup>()); + var builder = WebApplication.CreateBuilder(args); ++ builder.Configuration.AddAzureAppConfiguration(options => + options.Connect( + new Uri(builder.Configuration["AppConfig:Endpoint"]), + new ManagedIdentityCredential())); ``` ### [.NET Core 3.x](#tab/core3x) The following steps describe how to assign the App Configuration Data Reader rol .UseStartup<Startup>()); ``` - ### [.NET Core 2.x](#tab/core2x) -- ```csharp - public static IWebHostBuilder CreateWebHostBuilder(string[] args) => - WebHost.CreateDefaultBuilder(args) - .ConfigureAppConfiguration((hostingContext, config) => - { - var settings = config.Build(); - config.AddAzureAppConfiguration(options => - options.Connect(new Uri(settings["AppConfig:Endpoint"]), new ManagedIdentityCredential())); - }) - .UseStartup<Startup>(); - ``` - > [!NOTE] > If you want to use a **user-assigned managed identity**, be sure to specify the `clientId` when creating the [ManagedIdentityCredential](/dotnet/api/azure.identity.managedidentitycredential). 
>```csharp- >config.AddAzureAppConfiguration(options => - > { - > options.Connect(new Uri(settings["AppConfig:Endpoint"]), new ManagedIdentityCredential("<your_clientId>")) - > }); + >new ManagedIdentityCredential("<your_clientId>") >``` >As explained in the [Managed Identities for Azure resources FAQs](../active-directory/managed-identities-azure-resources/known-issues.md), there is a default way to resolve which managed identity is used. In this case, the Azure Identity library enforces you to specify the desired identity to avoid possible runtime issues in the future. For instance, if a new user-assigned managed identity is added or if the system-assigned managed identity is enabled. So, you will need to specify the `clientId` even if only one user-assigned managed identity is defined, and there is no system-assigned managed identity. |
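For reference, the role assignment this article depends on can be scripted as well. A sketch with placeholder names; the store's resource ID is resolved inline so the assignment is scoped to that store only:

```azurecli
# Assign App Configuration Data Reader to the managed identity, scoped to the store
# (identity principal ID, store name, and resource group are placeholders)
az role assignment create \
    --assignee <managed-identity-principal-id> \
    --role "App Configuration Data Reader" \
    --scope $(az appconfig show --name <store-name> --resource-group <resource-group> --query id --output tsv)
```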
azure-app-configuration | Howto Labels Aspnet Core | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-labels-aspnet-core.md | ms.devlang: csharp Previously updated : 3/12/2020 Last updated : 07/11/2023 using Microsoft.Extensions.Configuration.AzureAppConfiguration; Load configuration values with the label corresponding to the current environment by passing the environment name into the `Select` method: -### [.NET Core 5.x](#tab/core5x) +### [ASP.NET Core 6.0+](#tab/core6x) ```csharp- public static IHostBuilder CreateHostBuilder(string[] args) => - Host.CreateDefaultBuilder(args) - .ConfigureWebHostDefaults(webBuilder => - webBuilder.ConfigureAppConfiguration((hostingContext, config) => - { - var settings = config.Build(); - config.AddAzureAppConfiguration(options => - options - .Connect(settings.GetConnectionString("AppConfig")) - // Load configuration values with no label - .Select(KeyFilter.Any, LabelFilter.Null) - // Override with any configuration values specific to current hosting env - .Select(KeyFilter.Any, hostingContext.HostingEnvironment.EnvironmentName) - ); - }) - .UseStartup<Startup>()); +var builder = WebApplication.CreateBuilder(args); ++builder.Configuration.AddAzureAppConfiguration(options => + { + options.Connect(builder.Configuration.GetConnectionString("AppConfig")) + // Load configuration values with no label + .Select(KeyFilter.Any, LabelFilter.Null) + // Override with any configuration values specific to current hosting env + .Select(KeyFilter.Any, builder.Environment.EnvironmentName); + }); ``` -### [.NET Core 3.x](#tab/core3x) +### [ASP.NET Core 3.x](#tab/core3x) ```csharp public static IHostBuilder CreateHostBuilder(string[] args) => Load configuration values with the label corresponding to the current environmen .UseStartup<Startup>()); ``` -### [.NET Core 2.x](#tab/core2x) --```csharp -public static IWebHostBuilder CreateWebHostBuilder(string[] args) => - WebHost.CreateDefaultBuilder(args) - .ConfigureAppConfiguration((hostingContext, config) => - { - var settings = config.Build(); - config.AddAzureAppConfiguration(options => - options - .Connect(settings.GetConnectionString("AppConfig")) - // Load configuration values with no label - .Select(KeyFilter.Any, LabelFilter.Null) - // Override with any configuration values specific to current hosting env - .Select(KeyFilter.Any, hostingContext.HostingEnvironment.EnvironmentName) - ); - }) - .UseStartup<Startup>(); -``` - > [!IMPORTANT] > The preceding code snippet uses the Secret Manager tool to load App Configuration connection string. For information storing the connection string using the Secret Manager, see [Quickstart for Azure App Configuration with ASP.NET Core](quickstart-aspnet-core-app.md). |
azure-app-configuration | Quickstart Dotnet Core App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-dotnet-core-app.md | Title: Quickstart for Azure App Configuration with .NET Core | Microsoft Docs -description: In this quickstart, create a .NET Core app with Azure App Configuration to centralize storage and management of application settings separate from your code. + Title: Quickstart for Azure App Configuration with .NET | Microsoft Docs +description: In this quickstart, create a .NET app with Azure App Configuration to centralize storage and management of application settings separate from your code. ms.devlang: csharp Previously updated : 03/20/2023 Last updated : 07/11/2023 -#Customer intent: As a .NET Core developer, I want to manage all my app settings in one place. +#Customer intent: As a .NET developer, I want to manage all my app settings in one place. -# Quickstart: Create a .NET Core app with App Configuration +# Quickstart: Create a .NET app with App Configuration -In this quickstart, you incorporate Azure App Configuration into a .NET Core console app to centralize storage and management of application settings separate from your code. +In this quickstart, you incorporate Azure App Configuration into a .NET console app to centralize storage and management of application settings separate from your code. ## Prerequisites - An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/). - An App Configuration store. [Create a store](./quickstart-azure-app-configuration-create.md#create-an-app-configuration-store).-- [.NET Core SDK](https://dotnet.microsoft.com/download) - also available in the [Azure Cloud Shell](https://shell.azure.com).+- [.NET SDK](https://dotnet.microsoft.com/download) - also available in the [Azure Cloud Shell](https://shell.azure.com). ## Add a key-value Add the following key-value to the App Configuration store and leave **Label** a |-|-| | *TestApp:Settings:Message* | *Data from Azure App Configuration* | -## Create a .NET Core console app +## Create a .NET console app -You use the [.NET Core command-line interface (CLI)](/dotnet/core/tools/) to create a new .NET Core console app project. The advantage of using the .NET Core CLI over Visual Studio is that it's available across the Windows, macOS, and Linux platforms. Alternatively, use the preinstalled tools available in the [Azure Cloud Shell](https://shell.azure.com). +You use the [.NET command-line interface (CLI)](/dotnet/core/tools/) to create a new .NET console app project. The advantage of using the .NET CLI over Visual Studio is that it's available across the Windows, macOS, and Linux platforms. Alternatively, use the preinstalled tools available in the [Azure Cloud Shell](https://shell.azure.com). 1. Create a new folder for your project. -2. In the new folder, run the following command to create a new .NET Core console app project: +2. In the new folder, run the following command to create a new .NET console app project: ```dotnetcli dotnet new console You use the [.NET Core command-line interface (CLI)](/dotnet/core/tools/) to cre dotnet restore ``` -3. Open *Program.cs*, and add a reference to the .NET Core App Configuration provider. +3. Open *Program.cs*, and add a reference to the .NET App Configuration provider. ```csharp using Microsoft.Extensions.Configuration; using Microsoft.Extensions.Configuration.AzureAppConfiguration; ``` -4. 
Update the `Main` method to use App Configuration by calling the `builder.AddAzureAppConfiguration()` method. +4. Use App Configuration by calling the `builder.AddAzureAppConfiguration()` method in the `Program.cs` file. + ### [ASP.NET Core 6.0+](#tab/core6x) ++ ```csharp + var builder = new ConfigurationBuilder(); + builder.AddAzureAppConfiguration(Environment.GetEnvironmentVariable("ConnectionString")); + + var config = builder.Build(); + Console.WriteLine(config["TestApp:Settings:Message"] ?? "Hello world!"); + ``` ++ ### [ASP.NET Core 3.x](#tab/core3x) + ```csharp static void Main(string[] args) { var builder = new ConfigurationBuilder(); builder.AddAzureAppConfiguration(Environment.GetEnvironmentVariable("ConnectionString"));-+ var config = builder.Build(); Console.WriteLine(config["TestApp:Settings:Message"] ?? "Hello world!"); } ``` + + ## Build and run the app locally 1. Set an environment variable named **ConnectionString**, and set it to the access key to your App Configuration store. At the command line, run the following command: You use the [.NET Core command-line interface (CLI)](/dotnet/core/tools/) to cre setx ConnectionString "connection-string-of-your-app-configuration-store" ``` - Restart the command prompt to allow the change to take effect. Print the value of the environment variable to validate that it is set properly. + Restart the command prompt to allow the change to take effect. Print the value of the environment variable to validate that it's set properly. ### [PowerShell](#tab/powershell) You use the [.NET Core command-line interface (CLI)](/dotnet/core/tools/) to cre export ConnectionString='connection-string-of-your-app-configuration-store' ``` - Restart the command prompt to allow the change to take effect. Print the value of the environment variable to validate that it is set properly. + Restart the command prompt to allow the change to take effect. Print the value of the environment variable to validate that it's set properly. ### [Linux](#tab/linux) You use the [.NET Core command-line interface (CLI)](/dotnet/core/tools/) to cre export ConnectionString='connection-string-of-your-app-configuration-store' ``` - Restart the command prompt to allow the change to take effect. Print the value of the environment variable to validate that it is set properly. + Restart the command prompt to allow the change to take effect. Print the value of the environment variable to validate that it's set properly. You use the [.NET Core command-line interface (CLI)](/dotnet/core/tools/) to cre ## Next steps -In this quickstart, you created a new App Configuration store and used it with a .NET Core console app via the [App Configuration provider](/dotnet/api/Microsoft.Extensions.Configuration.AzureAppConfiguration). To learn how to configure your .NET Core app to dynamically refresh configuration settings, continue to the next tutorial. +In this quickstart, you created a new App Configuration store and used it with a .NET console app via the [App Configuration provider](/dotnet/api/Microsoft.Extensions.Configuration.AzureAppConfiguration). To learn how to configure your .NET app to dynamically refresh configuration settings, continue to the next tutorial. > [!div class="nextstepaction"] > [Enable dynamic configuration](./enable-dynamic-configuration-dotnet-core.md) |
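The **ConnectionString** value used in the steps above can also be read from the store with the CLI instead of the portal. A sketch with placeholder names; the query picks the first access key's connection string:

```azurecli
# Read a connection string for the store (names are placeholders)
az appconfig credential list \
    --name <store-name> \
    --resource-group <resource-group> \
    --query "[0].connectionString" \
    --output tsv
```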
azure-app-configuration | Rest Api Authorization Azure Ad | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-authorization-azure-ad.md | When you use Azure Active Directory (Azure AD) authentication, authorization is The following roles are available in Azure subscriptions by default: -- **Azure App Configuration Data Owner**: This role provides full access to all operations.-- **Azure App Configuration Data Reader**: This role enables read operations.+- **Azure App Configuration Data Owner**: This role provides full access to all operations. It permits the following actions: + * Microsoft.AppConfiguration/configurationStores/*/read + * Microsoft.AppConfiguration/configurationStores/*/write + * Microsoft.AppConfiguration/configurationStores/*/delete + * Microsoft.AppConfiguration/configurationStores/*/action +- **Azure App Configuration Data Reader**: This role enables read operations. It permits the following actions: + * Microsoft.AppConfiguration/configurationStores/*/read ## Actions Roles contain a list of actions that users assigned to that role can perform. Az - `Microsoft.AppConfiguration/configurationStores/keyValues/read`: This action allows read access to App Configuration key-value resources, such as /kv and /labels. - `Microsoft.AppConfiguration/configurationStores/keyValues/write`: This action allows write access to App Configuration key-value resources. - `Microsoft.AppConfiguration/configurationStores/keyValues/delete`: This action allows App Configuration key-value resources to be deleted. Note that deleting a resource returns the key-value that was deleted.+- `Microsoft.AppConfiguration/configurationStores/snapshots/read`: This action allows read access to App Configuration snapshot resources, as well as any key-values contained within snapshots. +- `Microsoft.AppConfiguration/configurationStores/snapshots/write`: This action allows write access to App Configuration snapshot resources. +- `Microsoft.AppConfiguration/configurationStores/snapshots/archive/action`: This action allows access to archive and recover App Configuration snapshot resources. ## Error |
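One way to double-check the data actions listed above is to inspect the built-in role definitions directly. A sketch; swap in **App Configuration Data Owner** to compare the two roles:

```azurecli
# List the data actions granted by the built-in Data Reader role
az role definition list \
    --name "App Configuration Data Reader" \
    --query "[].permissions[].dataActions"
```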
azure-app-configuration | Rest Api Key Value | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-key-value.md | Content-Type: application/problem+json; charset="utf-8" "title": "Modifing key '{key}' is not allowed", "name": "{key}", "detail": "The key is read-only. To allow modification unlock it first.",- "status": "409" + "status": 409 } ``` |
azure-app-configuration | Rest Api Snapshot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-snapshot.md | + + Title: Azure App Configuration REST API - snapshot +description: Reference pages for working with snapshots by using the Azure App Configuration REST API ++++ Last updated : 03/21/2023+++# Snapshots ++A snapshot is a resource identified uniquely by its name. See details for each operation. ++This article applies to API version 2022-11-01-preview. ++## Operations ++- Get +- List multiple +- Create +- Archive/Recover +- List key-values ++## Prerequisites +++## Syntax ++`Snapshot` ++```json +{ + "etag": [string], + "name": [string], + "status": [string, enum("provisioning", "ready", "archived", "failed")], + "filters": [array<SnapshotFilter>], + "composition_type": [string, enum("key", "key_label")], + "created": [datetime ISO 8601], + "size": [number, bytes], + "items_count": [number], + "tags": [object with string properties], + "retention_period": [number, timespan in seconds], + "expires": [datetime ISO 8601] +} +``` ++`SnapshotFilter` ++```json +{ + "key": [string], + "label": [string] +} +``` ++## Get snapshot ++Required: ``{name}``, ``{api-version}`` ++```http +GET /snapshots/{name}?api-version={api-version} +``` ++**Responses:** ++```http +HTTP/1.1 200 OK +Content-Type: application/vnd.microsoft.appconfig.snapshot+json; charset=utf-8 +Last-Modified: Mon, 03 Mar 2023 9:00:03 GMT +ETag: "4f6dd610dd5e4deebc7fbaef685fb903" +Link: </kv?snapshot=prod-2023-03-20&api-version={api-version}>; rel="items" +``` ++```json +{ + "etag": "4f6dd610dd5e4deebc7fbaef685fb903", + "name": "prod-2023-03-20", + "status": "ready", + "filters": [ + { + "key": "*", + "label": null + } + ], + "composition_type": "key", + "created": "2023-03-20T21:00:03+00:00", + "size": 2000, + "items_count": 4, + "tags": { + "t1": "value1", + "t2": "value2" + }, + "retention_period": 7776000 +} +``` ++If a snapshot with the provided name doesn't exist, the following response is returned: ++```http +HTTP/1.1 404 Not Found +``` ++## Get (conditionally) ++To improve client caching, use `If-Match` or `If-None-Match` request headers. The `etag` argument is part of the snapshot representation. For more information, see [sections 14.24 and 14.26](https://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html). ++The following request retrieves the snapshot only if the current representation doesn't match the specified `etag`: ++```http +GET /snapshot/{name}?api-version={api-version} HTTP/1.1 +Accept: application/vnd.microsoft.appconfig.snapshot+json; +If-None-Match: "{etag}" +``` ++**Responses:** ++```http +HTTP/1.1 304 NotModified +``` ++or ++```http +HTTP/1.1 200 OK +``` ++## List snapshots ++Optional: ``name`` (If not specified, it implies any name.) +Optional: ``status`` (If not specified, it implies any status.) ++```http +GET /snapshots?name=prod-*&api-version={api-version} HTTP/1.1 +``` ++**Response:** ++```http +HTTP/1.1 200 OK +Content-Type: application/vnd.microsoft.appconfig.snapshotset+json; charset=utf-8 +``` ++For additional options, see the "Filtering" section later in this article. ++## Pagination ++The result is paginated if the number of items returned exceeds the response limit. Follow the optional `Link` response headers, and use `rel="next"` for navigation. +Alternatively, the content provides a next link in form of the `@nextLink` property. The linked URI includes the `api-version` argument. 
++```http +GET /snapshots?api-version={api-version} HTTP/1.1 +``` ++**Response:** ++```http +HTTP/1.1 200 OK +Content-Type: application/vnd.microsoft.appconfig.snapshotset+json; charset=utf-8 +Link: <{relative uri}>; rel="next" +``` ++```json +{ + "items": [ + ... + ], + "@nextLink": "{relative uri}" +} +``` ++## Filtering ++A combination of `name` and `status` filtering is supported. +Use the optional `name` and `status` query string parameters. ++```http +GET /snapshots?name={name}&status={status}&api-version={api-version} +``` ++### Supported filters ++|Name filter|Effect| +|--|--| +|`name` is omitted or `name=*`|Matches snapshots with **any** name| +|`name=abc`|Matches a snapshot named **abc**| +|`name=abc*`|Matches snapshots with names that start with **abc**| +|`name=abc,xyz`|Matches snapshots with names **abc** or **xyz** (limited to 5 CSV)| ++|Status filter|Effect| +|--|--| +|`status` is omitted or `status=*`|Matches snapshots with **any** status| +|`status=ready`|Matches snapshots with a **ready** status| +|`status=ready,archived`|Matches snapshots with **ready** or **archived** status (limited to 5 CSV)| ++***Reserved characters*** ++`*`, `\`, `,` ++If a reserved character is part of the value, then it must be escaped by using `\{Reserved Character}`. Non-reserved characters can also be escaped. ++***Filter validation*** ++In the case of a filter validation error, the response is HTTP `400` with error details: ++```http +HTTP/1.1 400 Bad Request +Content-Type: application/problem+json; charset=utf-8 +``` ++```json +{ + "type": "https://azconfig.io/errors/invalid-argument", + "title": "Invalid request parameter '{filter}'", + "name": "{filter}", + "detail": "{filter}(2): Invalid character", + "status": 400 +} +``` ++**Examples** ++- All ++ ```http + GET /snapshots?api-version={api-version} + ``` ++- Snapshot name starts with **abc** ++ ```http + GET /snapshot?name=abc*&api-version={api-version} + ``` ++- Snapshot name starts with **abc** and status equals **ready** or **archived** ++ ```http + GET /snapshot?name=abc*&status=ready,archived&api-version={api-version} + ``` ++## Request specific fields ++Use the optional `$select` query string parameter and provide a comma-separated list of requested fields. If the `$select` parameter is omitted, the response contains the default set. ++```http +GET /snapshot?$select=name,status&api-version={api-version} HTTP/1.1 +``` ++## Create snapshot ++**parameters** ++| Property Name | Required | Default value | Validation | +|-|-|-|-| +| name | yes | n/a | Length <br/> maximum: 256 | +| filters | yes | n/a | Count <br/> minimum: 1<br/> maximum: 3 | +| filters[\<index\>].key | yes | n/a | | +| tags | no | {} | | +| filters[\<index\>].label | no | null | Multi-match label filters (E.g.: "*", "comma,separated") aren't supported with 'key' composition type. 
| +| composition_type | no | key | | +| retention_period | no | Standard tier <br/> 2592000 (30 days) <br/> Free tier <br/> 604800 (7 days) | Standard tier <br/> minimum: 3600 (1 hour) <br/> maximum: 7776000 (90 days) <br/> Free tier <br/> minimum: 3600 (1 hour) <br/> maximum: 604800 (7 days) | ++```http +PUT /snapshots/{name}?api-version={api-version} HTTP/1.1 +Content-Type: application/vnd.microsoft.appconfig.snapshot+json +``` ++```json +{ + "filters": [ // required + { + "key": "app1/*", // required + "label": "prod" // optional + } + ], + "tags": { // optional + "tag1": "value1", + "tag2": "value2" + }, + "composition_type": "key", // optional + "retention_period": 2592000 // optional +} +``` ++**Responses:** ++```http +HTTP/1.1 201 Created +Content-Type: application/vnd.microsoft.appconfig.snapshot+json; charset=utf-8 +Last-Modified: Mon, 20 Mar 2023 21:00:03 GMT +ETag: "4f6dd610dd5e4deebc7fbaef685fb903" +Operation-Location: {appConfigurationEndpoint}/operations?snapshot={name}&api-version={api-version} +``` ++```json +{ + "etag": "4f6dd610dd5e4deebc7fbaef685fb903", + "name": "{name}", + "status": "provisioning", + "filters": [ + { + "key": "app1/*", + "label": "prod" + } + ], + "composition_type": "key", + "created": "2023-03-20T21:00:03+00:00", + "size": 2000, + "items_count": 4, + "tags": { + "tag1": "value1", + "tag2": "value2" + }, + "retention_period": 2592000 +} +``` ++The status of the newly created snapshot will be "provisioning". +Once the snapshot is fully provisioned, the status will update to "ready". +Clients can poll the snapshot to wait for it to be ready before listing its associated key-values. +To query additional information about the operation, see the [polling snapshot creation](#polling-snapshot-creation) section. ++If the snapshot already exists, you'll receive the following response: ++```http +HTTP/1.1 409 Conflict +Content-Type: application/problem+json; charset=utf-8 +``` ++```json +{ + "type": "https://azconfig.io/errors/already-exists", + "title": "The resource already exists.", + "status": 409, + "detail": "" +} +``` ++### Polling snapshot creation ++The response to a snapshot creation request includes an `Operation-Location` header. ++**Responses:** ++```http +HTTP/1.1 201 Created +... +Operation-Location: {appConfigurationEndpoint}/operations?snapshot={name}&api-version={api-version} +``` ++The status of the snapshot provisioning operation can be found at the URI contained in `Operation-Location`. +Clients can poll this status object to ensure a snapshot is provisioned before listing its associated key-values. ++```http +GET {appConfigurationEndpoint}/operations?snapshot={name}&api-version={api-version} +``` ++**Response:** ++```http +HTTP/1.1 200 OK +Content-Type: application/json; charset=utf-8 +``` ++```json +{ + "id": "{id}", + "status": "Succeeded", + "error": null +} +``` ++If any error occurs during the provisioning of the snapshot, the `error` property will contain details describing the error. ++```json +{ + "id": "{name}", + "status": "Failed", + "error": { + "code": "QuotaExceeded", + "message": "The allotted quota for snapshot creation has been surpassed." + } +} +``` ++## Archive (Patch) ++A snapshot in the `ready` state can be archived. +An archived snapshot will be assigned an expiration date, based on the retention period established at the time of its creation. +After the expiration date passes, the snapshot will be permanently deleted. +At any time before the expiration date, the snapshot's items can still be listed. 
++Archiving a snapshot that is already `archived` doesn't affect the snapshot. ++- Required: `{name}`, `{status}`, `{api-version}` ++```http +PATCH /snapshots/{name}?api-version={api-version} HTTP/1.1 +Content-Type: application/vnd.microsoft.appconfig.snapshot+json +``` ++```json +{ + "status": "archived" +} +``` ++**Response:** +The archived snapshot is returned: ++```http +HTTP/1.1 200 OK +Content-Type: application/vnd.microsoft.appconfig.snapshot+json; charset=utf-8 +... +``` ++```json +{ + "etag": "33a0c9cdb43a4c2cb5fc4c1feede1c68", + "name": "{name}", + "status": "archived", + ... + "expires": "2023-08-11T21:00:03+00:00" +} +``` ++Archiving a snapshot that is currently in the `provisioning` or `failed` state is an invalid operation. ++**Response:** ++```http +HTTP/1.1 409 Conflict +Content-Type: application/problem+json; charset=utf-8 +``` ++```json +{ + "type": "https://azconfig.io/errors/invalid-state", + "title": "Target resource state invalid.", + "detail": "The target resource is not in a valid state to perform the requested operation.", + "status": 409 +} +``` ++## Recover (Patch) ++A snapshot in the `archived` state can be recovered. +Once the snapshot is recovered, its expiration date is removed. ++Recovering a snapshot that is already `ready` doesn't affect the snapshot. ++- Required: `{name}`, `{status}`, `{api-version}` ++```http +PATCH /snapshots/{name}?api-version={api-version} HTTP/1.1 +Content-Type: application/vnd.microsoft.appconfig.snapshot+json +``` ++```json +{ + "status": "ready" +} +``` ++**Response:** +The recovered snapshot is returned: ++```http +HTTP/1.1 200 OK +Content-Type: application/vnd.microsoft.appconfig.snapshot+json; charset=utf-8 +... +``` ++```json +{ + "etag": "90dd86e2885440f3af9398ca392095b9", + "name": "{name}", + "status": "ready", + ... +} +``` ++Recovering a snapshot that is currently in the `provisioning` or `failed` state is an invalid operation. ++**Response:** ++```http +HTTP/1.1 409 Conflict +Content-Type: application/problem+json; charset=utf-8 +``` ++```json +{ + "type": "https://azconfig.io/errors/invalid-state", + "title": "Target resource state invalid.", + "detail": "The target resource is not in a valid state to perform the requested operation.", + "status": 409 +} +``` ++## Archive/recover snapshot (conditionally) ++To prevent race conditions, use `If-Match` or `If-None-Match` request headers. The `etag` argument is part of the snapshot representation. +If `If-Match` or `If-None-Match` is omitted, the operation is unconditional. ++The following request updates the resource only if the current representation matches the specified `etag`: ++```http +PATCH /snapshots/{name}?api-version={api-version} HTTP/1.1 +Content-Type: application/vnd.microsoft.appconfig.snapshot+json +If-Match: "4f6dd610dd5e4deebc7fbaef685fb903" +``` ++The following request updates the resource only if the current representation doesn't match the specified `etag`: ++```http +PATCH /snapshots/{name}?api-version={api-version} HTTP/1.1 +Content-Type: application/vnd.microsoft.appconfig.snapshot+json +If-None-Match: "4f6dd610dd5e4deebc7fbaef685fb903" +``` ++**Responses** ++```http +HTTP/1.1 200 OK +Content-Type: application/vnd.microsoft.appconfig.snapshot+json; charset=utf-8 +... 
+``` ++or ++```http +HTTP/1.1 412 Precondition Failed +``` ++## List snapshot key-values ++Required: ``{name}``, ``{api-version}`` ++```http +GET /kv?snapshot={name}&api-version={api-version} +``` ++>[!Note] +>Attempting to list the items of a snapshot that isn't in the `ready` or `archived` state will result in an empty list response. ++### Request specific fields ++Use the optional `$select` query string parameter and provide a comma-separated list of requested fields. If the `$select` parameter is omitted, the response contains the default set. ++```http +GET /kv?snapshot={name}&$select=key,value&api-version={api-version} HTTP/1.1 +``` |
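The create-and-poll flow documented in this article can be exercised from any HTTP client. The following C# sketch creates a snapshot and then polls the `Operation-Location` status object until provisioning reaches a terminal status. The store endpoint, snapshot name, and bearer token are placeholder values, and authentication (HMAC or Azure AD) is assumed to be handled separately.

```csharp
using System.Linq;
using System.Net.Http.Headers;
using System.Text;
using System.Text.Json;

// Placeholder values: substitute your own store endpoint and token.
var endpoint = "https://mystore.azconfig.io";
var apiVersion = "2022-11-01-preview";

using var client = new HttpClient { BaseAddress = new Uri(endpoint) };
client.DefaultRequestHeaders.Authorization =
    new AuthenticationHeaderValue("Bearer", "<access-token>");

// PUT /snapshots/{name}: one filter selecting all keys labeled "prod".
var body = new StringContent(
    "{ \"filters\": [ { \"key\": \"*\", \"label\": \"prod\" } ] }",
    Encoding.UTF8, "application/vnd.microsoft.appconfig.snapshot+json");
var create = await client.PutAsync(
    $"/snapshots/prod-2023-03-20?api-version={apiVersion}", body);
create.EnsureSuccessStatusCode();

// Poll the status object referenced by Operation-Location until the
// provisioning operation reports a terminal status.
var operationUri = create.Headers.GetValues("Operation-Location").First();
string status;
do
{
    await Task.Delay(TimeSpan.FromSeconds(2));
    var poll = await client.GetStringAsync(operationUri);
    status = JsonDocument.Parse(poll).RootElement
        .GetProperty("status").GetString()!;
} while (status != "Succeeded" && status != "Failed");
```

Once the status is `Succeeded`, the snapshot's key-values can be listed with `GET /kv?snapshot={name}`, as shown above.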
azure-app-configuration | Use Feature Flags Dotnet Core | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/use-feature-flags-dotnet-core.md | Title: Tutorial for using feature flags in a .NET Core app | Microsoft Docs + Title: Tutorial for using feature flags in a .NET app | Microsoft Docs description: In this tutorial, you learn how to implement feature flags in .NET Core apps. documentationcenter: ''-The .NET Core Feature Management libraries provide idiomatic support for implementing feature flags in a .NET or ASP.NET Core application. These libraries allow you to declaratively add feature flags to your code so that you don't have to manually write code to enable or disable features with `if` statements. +The .NET Feature Management libraries provide idiomatic support for implementing feature flags in a .NET or ASP.NET Core application. These libraries allow you to declaratively add feature flags to your code so that you don't have to manually write code to enable or disable features with `if` statements. The Feature Management libraries also manage feature flag lifecycles behind the scenes. For example, the libraries refresh and cache flag states, or guarantee a flag state to be immutable during a request call. In addition, the ASP.NET Core library offers out-of-the-box integrations, including MVC controller actions, views, routes, and middleware. In this tutorial, you will learn how to: ## Set up feature management -To access the .NET Core feature manager, your app must have references to the `Microsoft.FeatureManagement.AspNetCore` NuGet package. +To access the .NET feature manager, your app must have references to the `Microsoft.FeatureManagement.AspNetCore` NuGet package. -The .NET Core feature manager is configured from the framework's native configuration system. As a result, you can define your application's feature flag settings by using any configuration source that .NET Core supports, including the local *appsettings.json* file or environment variables. +The .NET feature manager is configured from the framework's native configuration system. As a result, you can define your application's feature flag settings by using any configuration source that .NET supports, including the local `appsettings.json` file or environment variables. By default, the feature manager retrieves feature flag configuration from the `"FeatureManagement"` section of the .NET Core configuration data. To use the default configuration location, call the [AddFeatureManagement](/dotnet/api/microsoft.featuremanagement.servicecollectionextensions.addfeaturemanagement) method of the **IServiceCollection** passed into the **ConfigureServices** method of the **Startup** class. +### [.NET 6.0+](#tab/core6x) ++```csharp +using Microsoft.FeatureManagement; ++builder.Services.AddFeatureManagement(); +``` ++### [.NET Core 3.x](#tab/core3x) ```csharp using Microsoft.FeatureManagement; public class Startup { public void ConfigureServices(IServiceCollection services) {- ... services.AddFeatureManagement(); } } ``` ++ You can specify that feature management configuration should be retrieved from a different configuration section by calling [Configuration.GetSection](/dotnet/api/microsoft.web.administration.configuration.getsection) and passing in the name of the desired section. 
The following example tells the feature manager to read from a different section called `"MyFeatureFlags"` instead: +### [.NET 6.0+](#tab/core6x) ++```csharp +using Microsoft.FeatureManagement; ++builder.Services.AddFeatureManagement(builder.Configuration.GetSection("MyFeatureFlags")); +``` ++### [.NET Core 3.x](#tab/core3x) + ```csharp using Microsoft.FeatureManagement; public class Startup } ``` + If you use filters in your feature flags, you must include the [Microsoft.FeatureManagement.FeatureFilters](/dotnet/api/microsoft.featuremanagement.featurefilters) namespace and add a call to [AddFeatureFilter](/dotnet/api/microsoft.featuremanagement.ifeaturemanagementbuilder.addfeaturefilter) specifying the type name of the filter you want to use as the generic type of the method. For more information on using feature filters to dynamically enable and disable functionality, see [Enable staged rollout of features for targeted audiences](./howto-targetingfilter-aspnet-core.md). The following example shows how to use a built-in feature filter called `PercentageFilter`: +### [.NET 6.0+](#tab/core6x) +```csharp +using Microsoft.FeatureManagement; ++builder.Services.AddFeatureManagement() + .AddFeatureFilter<PercentageFilter>(); +``` ++### [.NET Core 3.x](#tab/core3x) ```csharp using Microsoft.FeatureManagement; public class Startup { public void ConfigureServices(IServiceCollection services) {- ... services.AddFeatureManagement() .AddFeatureFilter<PercentageFilter>(); } } ``` ++ Rather than hard-coding your feature flags into your application, we recommend that you keep feature flags outside the application and manage them separately. Doing so allows you to modify flag states at any time and have those changes take effect in the application right away. The Azure App Configuration service provides a dedicated portal UI for managing all of your feature flags. The Azure App Configuration service also delivers the feature flags to your application directly through its .NET client libraries. The easiest way to connect your ASP.NET Core application to App Configuration is through the configuration provider included in the `Microsoft.Azure.AppConfiguration.AspNetCore` NuGet package. After including a reference to the package, follow these steps to use it. 1. Open the *Program.cs* file and add the following code.- > [!IMPORTANT] - > `CreateHostBuilder` replaces `CreateWebHostBuilder` in .NET Core 3.x. Select the correct syntax based on your environment. - ### [.NET 5.x](#tab/core5x) + ### [.NET 6.0+](#tab/core6x) ```csharp using Microsoft.Extensions.Configuration.AzureAppConfiguration; - public static IHostBuilder CreateHostBuilder(string[] args) => - Host.CreateDefaultBuilder(args) - .ConfigureWebHostDefaults(webBuilder => - webBuilder.ConfigureAppConfiguration(config => - { - var settings = config.Build(); - config.AddAzureAppConfiguration(options => - options.Connect(settings["ConnectionStrings:AppConfig"]).UseFeatureFlags()); - }).UseStartup<Startup>()); + var builder = WebApplication.CreateBuilder(args); + + builder.Configuration.AddAzureAppConfiguration(options => + options.Connect( + builder.Configuration["ConnectionStrings:AppConfig"]) + .UseFeatureFlags()); ``` ### [.NET Core 3.x](#tab/core3x) The easiest way to connect your ASP.NET Core application to App Configuration is options.Connect(settings["ConnectionStrings:AppConfig"]).UseFeatureFlags()); }).UseStartup<Startup>()); ```- - ### [.NET Core 2.x](#tab/core2x) - + ++2. 
Update the middleware and service configurations for your app using the following code. ++ ### [.NET 6.0+](#tab/core6x) ++ Inside the *Program.cs* file, register the Azure App Configuration services and middleware on the `builder` and `app` objects: + ```csharp- builder.Services.AddAzureAppConfiguration(); - public static IWebHostBuilder CreateWebHostBuilder(string[] args) => - WebHost.CreateDefaultBuilder(args) - .ConfigureAppConfiguration(config => - { - var settings = config.Build(); - config.AddAzureAppConfiguration(options => - options.Connect(settings["ConnectionStrings:AppConfig"]).UseFeatureFlags()); - }).UseStartup<Startup>(); + app.UseAzureAppConfiguration(); ```- --2. Open *Startup.cs* and update the `Configure` and `ConfigureServices` method to add the built-in middleware called `UseAzureAppConfiguration`. This middleware allows the feature flag values to be refreshed at a recurring interval while the ASP.NET Core web app continues to receive requests. + ### [.NET Core 3.x](#tab/core3x) + Open `Startup.cs` and update the `Configure` and `ConfigureServices` methods to add the built-in middleware called `UseAzureAppConfiguration`. This middleware allows the feature flag values to be refreshed at a recurring interval while the ASP.NET Core web app continues to receive requests. ```csharp public void Configure(IApplicationBuilder app, IWebHostEnvironment env) The easiest way to connect your ASP.NET Core application to App Configuration is } ``` - ```csharp - public void ConfigureServices(IServiceCollection services) - { - services.AddAzureAppConfiguration(); - } - ``` + ```csharp + public void ConfigureServices(IServiceCollection services) + { + services.AddAzureAppConfiguration(); + } + ``` ++ In a typical scenario, you will update your feature flag values periodically as you deploy and enable different features of your application. By default, the feature flag values are cached for a period of 30 seconds, so a refresh operation triggered when the middleware receives a request would not update the value until the cached value expires. The following code shows how to change the cache expiration time or polling interval to 5 minutes by setting the [CacheExpirationInterval](/dotnet/api/microsoft.extensions.configuration.azureappconfiguration.featuremanagement.featureflagoptions.cacheexpirationinterval) in the call to **UseFeatureFlags**. +### [.NET 6.0+](#tab/core6x) ++```csharp +builder.Configuration.AddAzureAppConfiguration(options => + options.Connect( + builder.Configuration["ConnectionStrings:AppConfig"]) + .UseFeatureFlags(featureFlagOptions => { + featureFlagOptions.CacheExpirationInterval = TimeSpan.FromMinutes(5); + })); +``` ++### [.NET Core 3.x](#tab/core3x) - ```csharp config.AddAzureAppConfiguration(options => options.Connect(settings["ConnectionStrings:AppConfig"]).UseFeatureFlags(featureFlagOptions => { featureFlagOptions.CacheExpirationInterval = TimeSpan.FromMinutes(5); }));-}); ``` + ## Feature flag declaration By convention, the `FeatureManagement` section of this JSON document is used for * `FeatureB` is *off*. * `FeatureC` specifies a filter named `Percentage` with a `Parameters` property. `Percentage` is a configurable filter. In this example, `Percentage` specifies a 50-percent probability for the `FeatureC` flag to be *on*. For a how-to guide on using feature filters, see [Use feature filters to enable conditional feature flags](./howto-feature-filters-aspnet-core.md). 
--- ## Use dependency injection to access IFeatureManager -For some operations, such as manually checking feature flag values, you need to get an instance of [IFeatureManager](/dotnet/api/microsoft.featuremanagement.ifeaturemanager). In ASP.NET Core MVC, you can access the feature manager `IFeatureManager` through dependency injection. In the following example, an argument of type `IFeatureManager` is added to the signature of the constructor for a controller. The runtime automatically resolves the reference and provides an of the interface when calling the constructor. If you're using an application template in which the controller already has one or more dependency injection arguments in the constructor, such as `ILogger`, you can just add `IFeatureManager` as an additional argument: +For some operations, such as manually checking feature flag values, you need to get an instance of [IFeatureManager](/dotnet/api/microsoft.featuremanagement.ifeaturemanager). In ASP.NET Core MVC, you can access the feature manager `IFeatureManager` through dependency injection. In the following example, an argument of type `IFeatureManager` is added to the signature of the constructor for a controller. The runtime automatically resolves the reference and provides an implementation of the interface when calling the constructor. If you're using an application template in which the controller already has one or more dependency injection arguments in the constructor, such as `ILogger`, you can just add `IFeatureManager` as an additional argument: -### [.NET 5.x](#tab/core5x) +### [.NET 6.0+](#tab/core6x) ```csharp using Microsoft.FeatureManagement; public class HomeController : Controller } } ```- -### [.NET Core 2.x](#tab/core2x) --```csharp -using Microsoft.FeatureManagement; --public class HomeController : Controller -{ - private readonly IFeatureManager _featureManager; -- public HomeController(IFeatureManager featureManager) - { - _featureManager = featureManager; - } -} -``` |
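As a minimal sketch of the dependency-injection pattern above: once `IFeatureManager` is injected, flags are checked with `IsEnabledAsync`. The flag name `FeatureA` and the view names here are stand-ins for whatever your `FeatureManagement` section defines.

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.FeatureManagement;

public class HomeController : Controller
{
    private readonly IFeatureManager _featureManager;

    public HomeController(IFeatureManager featureManager)
    {
        _featureManager = featureManager;
    }

    public async Task<IActionResult> Index()
    {
        // Evaluate the flag at request time; any configured filters
        // (such as PercentageFilter) run as part of this call.
        if (await _featureManager.IsEnabledAsync("FeatureA"))
        {
            return View("IndexWithNewFeature"); // hypothetical view name
        }

        return View();
    }
}
```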
azure-app-configuration | Use Key Vault References Dotnet Core | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/use-key-vault-references-dotnet-core.md | In this tutorial, you learn how to: ## Prerequisites -Before you start this tutorial, install the [.NET Core SDK](https://dotnet.microsoft.com/download). +Before you start this tutorial, install the [.NET SDK](https://dotnet.microsoft.com/download). [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] To add a secret to the vault, you need to take just a few additional steps. In t 1. Update the `CreateWebHostBuilder` method to use App Configuration by calling the `config.AddAzureAppConfiguration` method. Include the `ConfigureKeyVault` option, and pass the correct credential to your Key Vault using the `SetCredential` method. If you have multiple Key Vaults, the same credential will be used for all of them. If your Key Vaults require different credentials, you can set them using `Register` or `SetSecretResolver` methods from the [`AzureAppConfigurationKeyVaultOptions`](/dotnet/api/microsoft.extensions.configuration.azureappconfiguration.azureappconfigurationkeyvaultoptions) class. - #### [.NET Core 5.x](#tab/core5x) + #### [.NET 6.0+](#tab/core6x) ```csharp- public static IHostBuilder CreateHostBuilder(string[] args) => - Host.CreateDefaultBuilder(args) - .ConfigureWebHostDefaults(webBuilder => - webBuilder.ConfigureAppConfiguration((hostingContext, config) => - { - var settings = config.Build(); -- config.AddAzureAppConfiguration(options => - { - options.Connect(settings["ConnectionStrings:AppConfig"]) - .ConfigureKeyVault(kv => - { - kv.SetCredential(new DefaultAzureCredential()); - }); - }); - }) - .UseStartup<Startup>()); + var builder = WebApplication.CreateBuilder(args); ++ builder.Configuration.AddAzureAppConfiguration(options => + { + options.Connect( + builder.Configuration["ConnectionStrings:AppConfig"]) + .ConfigureKeyVault(kv => + { + kv.SetCredential(new DefaultAzureCredential()); + }); + }); ``` #### [.NET Core 3.x](#tab/core3x) To add a secret to the vault, you need to take just a few additional steps. In t }) .UseStartup<Startup>()); ```- - #### [.NET Core 2.x](#tab/core2x) -- ```csharp - public static IWebHostBuilder CreateWebHostBuilder(string[] args) => - WebHost.CreateDefaultBuilder(args) - .ConfigureAppConfiguration((hostingContext, config) => - { - var settings = config.Build(); -- config.AddAzureAppConfiguration(options => - { - options.Connect(settings["ConnectionStrings:AppConfig"]) - .ConfigureKeyVault(kv => - { - kv.SetCredential(new DefaultAzureCredential()); - }); - }); - }) - .UseStartup<Startup>(); - ``` + 1. When you initialized the connection to App Configuration, you set up the connection to Key Vault by calling the `ConfigureKeyVault` method. After the initialization, you can access the values of Key Vault references in the same way you access the values of regular App Configuration keys. Alternatively, you can set the AZURE_TENANT_ID, AZURE_CLIENT_ID, and AZURE_CLIEN ## Build and run the app locally -1. To build the app by using the .NET Core CLI, run the following command in the command shell: +1. To build the app by using the .NET CLI, run the following command in the command shell: ```dotnetcli dotnet build |
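Putting the .NET 6.0+ pieces above together, a minimal *Program.cs* sketch might look like the following. `Settings:Message` is a hypothetical App Configuration key whose value is a Key Vault reference; once `ConfigureKeyVault` is wired up, it reads like any other configuration value.

```csharp
using Azure.Identity;
using Microsoft.Extensions.Configuration.AzureAppConfiguration;

var builder = WebApplication.CreateBuilder(args);

builder.Configuration.AddAzureAppConfiguration(options =>
{
    options.Connect(builder.Configuration["ConnectionStrings:AppConfig"])
           .ConfigureKeyVault(kv =>
           {
               kv.SetCredential(new DefaultAzureCredential());
           });
});

var app = builder.Build();

// The Key Vault reference resolves transparently: the secret's value
// comes back as a plain string.
app.MapGet("/", (IConfiguration config) => $"Message: {config["Settings:Message"]}");

app.Run();
```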
azure-arc | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/release-notes.md | +## July 11, 2023 ++### Image tag ++`v1.21.0_2023-07-11` ++For complete release version information, review [Version log](version-log.md#july-11-2023). ++### Release notes ++- Proxy bypass is now supported for the Arc SQL Server extension. Starting with this release, you can also specify services that should not use the specified proxy server. + ## June 13, 2023 ### Image tag |
azure-arc | Version Log | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/version-log.md | +## July 11, 2023 ++|Component|Value| +|--|--| +|Container images tag |`v1.21.0_2023-07-11`| +|**CRD names and version:**| | +|`activedirectoryconnectors.arcdata.microsoft.com`| v1beta1, v1beta2, v1| +|`datacontrollers.arcdata.microsoft.com`| v1beta1, v1 through v5| +|`exporttasks.tasks.arcdata.microsoft.com`| v1beta1, v1, v2| +|`failovergroups.sql.arcdata.microsoft.com`| v1beta1, v1beta2, v1, v2| +|`kafkas.arcdata.microsoft.com`| v1beta1 through v1beta4| +|`monitors.arcdata.microsoft.com`| v1beta1, v1, v3| +|`postgresqls.arcdata.microsoft.com`| v1beta1 through v1beta6| +|`postgresqlrestoretasks.tasks.postgresql.arcdata.microsoft.com`| v1beta1| +|`sqlmanagedinstances.sql.arcdata.microsoft.com`| v1beta1, v1 through v13| +|`sqlmanagedinstancemonitoringprofiles.arcdata.microsoft.com`| v1beta1, v1beta2| +|`sqlmanagedinstancereprovisionreplicatasks.tasks.sql.arcdata.microsoft.com`| v1beta1| +|`sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com`| v1beta1, v1| +|`telemetrycollectors.arcdata.microsoft.com`| v1beta1 through v1beta5| +|`telemetryrouters.arcdata.microsoft.com`| v1beta1 through v1beta5| +|Azure Resource Manager (ARM) API version|2023-01-15-preview| +|`arcdata` Azure CLI extension version|1.5.3 ([Download](https://aka.ms/az-cli-arcdata-ext))| +|Arc-enabled Kubernetes helm chart extension version|1.21.0| +|Azure Arc Extension for Azure Data Studio<br/>`arc`<br/>`azcli`|<br/>1.8.0 ([Download](https://aka.ms/ads-arcdata-ext))</br>1.8.0 ([Download](https://aka.ms/ads-azcli-ext))| +|SQL Database version | 957 | + ## June 13, 2023 |Component|Value| |
azure-arc | Tutorial Gitops Flux2 Ci Cd | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-gitops-flux2-ci-cd.md | For details on installation, refer to the [GitOps Connector](https://github. | -- | -- | | AZ_ACR_NAME | (your Azure Container Registry instance, for example, azurearctest.azurecr.io) | | AZURE_SUBSCRIPTION | (your Azure Service Connection, which should be **arc-demo-acr** from earlier in the tutorial) |-| AZURE_VOTE_IMAGE_REPO | The full path to the Azure Vote App repository, for example azurearctest.azurecr.io/azvote | +| AZ_ACR_NAME | Azure Container Registry name, for example arc-demo-acr | | ENVIRONMENT_NAME | Dev | | MANIFESTS_BRANCH | `master` | | MANIFESTS_REPO | `arc-cicd-demo-gitops` | For details on installation, refer to the [GitOps Connector](https://github. | Secret | Value | | -- | -- | | AZURE_CREDENTIALS | Credentials for Azure in the following format {"clientId":"GUID","clientSecret":"GUID","subscriptionId":"GUID","tenantId":"GUID"} |-| AZURE_VOTE_IMAGE_REPO | The full path to the Azure Vote App repository, for example azurearctest.azurecr.io/azvote | +| AZ_ACR_NAME | Azure Container Registry name, for example arc-demo-acr | | MANIFESTS_BRANCH | `master` | | MANIFESTS_FOLDER | `arc-cicd-cluster` | | MANIFESTS_REPO | `https://github.com/your-organization/arc-cicd-demo-gitops` | |
azure-arc | Agent Release Notes Archive | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-release-notes-archive.md | Title: Archive for What's new with Azure Connected Machine agent description: Release notes for Azure Connected Machine agent versions older than six months Previously updated : 06/02/2023 Last updated : 07/11/2023 The Azure Connected Machine agent receives improvements on an ongoing basis. Thi - Known issues - Bug fixes +## Version 1.28 - March 2023 ++Download for [Windows](https://download.microsoft.com/download/5/9/7/59789af8-5833-4c91-8dc5-91c46ad4b54f/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent) ++### Fixed ++- Improved reliability of delete requests for extensions +- More frequent reporting of VM UUID (system firmware identifier) changes +- Improved reliability when writing changes to agent configuration files +- JSON output for `azcmagent connect` now includes Azure portal URL for the server +- Linux installation script now installs the `gnupg` package if it's missing on Debian operating systems +- Removed weekly restarts for the extension and guest configuration services + ## Version 1.27 - February 2023 Download for [Windows](https://download.microsoft.com/download/8/4/5/845d5e04-bb09-4ed2-9ca8-bb51184cddc9/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent) |
azure-arc | Agent Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-release-notes.md | Title: What's new with Azure Connected Machine agent description: This article has release notes for Azure Connected Machine agent. For many of the summarized issues, there are links to more details. Previously updated : 06/20/2023 Last updated : 07/11/2023 The Azure Connected Machine agent receives improvements on an ongoing basis. To This page is updated monthly, so revisit it regularly. If you're looking for items older than six months, you can find them in [archive for What's new with Azure Connected Machine agent](agent-release-notes-archive.md). +## Version 1.32 - July 2023 ++Download for [Windows](https://download.microsoft.com/download/7/e/5/7e51205f-a02e-4fbe-94fe-f36219be048c/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent) ++### New features ++- Added support for the Debian 12 operating system +- [azcmagent show](azcmagent-show.md) now reflects the "Expired" status when a machine has been disconnected long enough for the managed identity to expire. Previously, the agent only showed "Disconnected" while the Azure portal and API showed the correct state, "Expired." ++### Fixed ++- Fixed an issue that could result in high CPU usage if the agent was unable to send telemetry to Azure. +- Improved local logging when there are network communication errors + ## Version 1.31 - June 2023 Download for [Windows](https://download.microsoft.com/download/2/6/e/26e2b001-1364-41ed-90b0-1340a44ba409/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent) Download for [Windows](https://download.microsoft.com/download/2/7/0/27063536-94 - Reduced how long network checks wait before determining a network endpoint is unreachable - Stopped writing error messages in "himds.log" referring to a missing certificate key file for the ATS agent, an inactive component reserved for future use. -## Version 1.28 - March 2023 --Download for [Windows](https://download.microsoft.com/download/5/9/7/59789af8-5833-4c91-8dc5-91c46ad4b54f/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent) --### Fixed --- Improved reliability of delete requests for extensions-- More frequent reporting of VM UUID (system firmware identifier) changes-- Improved reliability when writing changes to agent configuration files-- JSON output for `azcmagent connect` now includes Azure portal URL for the server-- Linux installation script now installs the `gnupg` package if it's missing on Debian operating systems-- Removed weekly restarts for the extension and guest configuration services- ## Next steps - Before evaluating or enabling Azure Arc-enabled servers across multiple hybrid machines, review [Connected Machine agent overview](agent-overview.md) to understand requirements, technical details about the agent, and deployment methods. |
azure-arc | Manage Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-agent.md | Links to the current and previous releases of the Windows agents are available b sudo zypper install -f azcmagent-1.28.02260-755 ``` + ## Upgrade the agent You do not need to restart any services when reconfiguring the proxy settings wi Starting with agent version 1.15, you can also specify services that should **not** use the specified proxy server. This can help with split-network designs and private endpoint scenarios where you want Azure Active Directory and Azure Resource Manager traffic to go through your proxy server to public endpoints but want Azure Arc traffic to skip the proxy and communicate with a private IP address on your network. -The proxy bypass feature does not require you to enter specific URLs to bypass. Instead, you provide the name of the service(s) that should not use the proxy server. +The proxy bypass feature does not require you to enter specific URLs to bypass. Instead, you provide the name of the service(s) that should not use the proxy server. The `<location>` segment of the `san-af-<location>-prod.azurewebsites.net` endpoint refers to the Azure region of the Arc-enabled server(s). | Proxy bypass value | Affected endpoints | | | | | `AAD` | `login.windows.net`, `login.microsoftonline.com`, `pas.windows.net` | | `ARM` | `management.azure.com` |-| `Arc` | `his.arc.azure.com`, `guestconfiguration.azure.com` | +| `Arc` | `his.arc.azure.com`, `guestconfiguration.azure.com`, `san-af-<location>-prod.azurewebsites.net` | To send Azure Active Directory and Azure Resource Manager traffic through a proxy server but skip the proxy for Azure Arc traffic, run the following command: If you're already using environment variables to configure the proxy server for * Review the [Planning and deployment guide](plan-at-scale-deployment.md) to plan for deploying Azure Arc-enabled servers at any scale and implement centralized management and monitoring. * Learn how to manage your machine using [Azure Policy](../../governance/policy/overview.md), for such things as VM [guest configuration](../../governance/machine-configuration/overview.md), verifying the machine is reporting to the expected Log Analytics workspace, enabling monitoring with [VM insights](../../azure-monitor/vm/vminsights-enable-policy.md), and much more.++ |
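As a sketch of what that configuration looks like in practice with the `azcmagent config` commands (the proxy URL below is a placeholder):

```bash
# Route agent traffic through the proxy by default.
azcmagent config set proxy.url "http://proxy.contoso.com:8080"

# Let Azure Arc service traffic, including the
# san-af-<location>-prod.azurewebsites.net endpoint, bypass the proxy.
azcmagent config set proxy.bypass "Arc"
```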
azure-arc | Prerequisites | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/prerequisites.md | Title: Connected Machine agent prerequisites description: Learn about the prerequisites for installing the Connected Machine agent for Azure Arc-enabled servers. Previously updated : 06/02/2023 Last updated : 07/11/2023 Azure Arc supports the following Windows and Linux operating systems. Only x86-6 * Azure Stack HCI * Azure Linux 1.0, 2.0 * Ubuntu 16.04, 18.04, 20.04, and 22.04 LTS-* Debian 10 and 11 +* Debian 10, 11, and 12 * CentOS Linux 7 and 8 * Rocky Linux 8 * SUSE Linux Enterprise Server (SLES) 12 SP3-SP5 and 15 |
azure-functions | Dotnet Isolated Process Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/dotnet-isolated-process-guide.md | The response from an HTTP trigger is always considered an output, so a return va For some service-specific binding types, binding data can be provided using types from service SDKs and frameworks. These provide additional capability beyond what a serialized string or plain-old CLR object (POCO) may offer. Support for SDK types is currently in preview with limited scenario coverage. -To use SDK type bindings, your project must reference [Microsoft.Azure.Functions.Worker 1.12.1-preview1 or later][sdk-types-worker-version] and [Microsoft.Azure.Functions.Worker.Sdk 1.9.0-preview1 or later][sdk-types-worker-sdk-version]. Specific package versions will be needed for each of the service extensions as well. When testing SDK types locally on your machine, you will also need to use [Azure Functions Core Tools version 4.0.5000 or later](./functions-run-local.md). You can check your current version using the command `func version`. +To use SDK type bindings, your project must reference [Microsoft.Azure.Functions.Worker 1.15.0-preview1 or later](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker/1.15.0-preview1) and [Microsoft.Azure.Functions.Worker.Sdk 1.11.0-preview1 or later](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Sdk/1.11.0-preview1). Specific package versions will be needed for each of the service extensions as well. When testing SDK types locally on your machine, you will also need to use [Azure Functions Core Tools version 4.0.5000 or later](./functions-run-local.md). You can check your current version using the command `func version`. The following service-specific bindings are currently included in the preview: | Service | Trigger | Input binding | Output binding | |-|-|-|-|-| [Azure Blobs][blob-sdk-types] | Preview support | Preview support | Not yet supported<sup>1</sup> | -| [Azure Cosmos DB][cosmos-sdk-types] | SDK types not used<sup>2</sup> | Preview support | Not yet supported<sup>1</sup> | +| [Azure Blobs][blob-sdk-types] | **Preview support** | **Preview support** | _SDK types not recommended<sup>1</sup>_ | +| [Azure Queues][queue-sdk-types] | **Preview support** | _Input binding does not exist_ | _SDK types not recommended<sup>1</sup>_ | +| [Azure Service Bus][servicebus-sdk-types] | **Preview support<sup>2</sup>** | _Input binding does not exist_ | _SDK types not recommended<sup>1</sup>_ | +| [Azure Event Hubs][eventhub-sdk-types] | **Preview support** | _Input binding does not exist_ | _SDK types not recommended<sup>1</sup>_ | +| [Azure Cosmos DB][cosmos-sdk-types] | _SDK types not used<sup>3</sup>_ | **Preview support** | _SDK types not recommended<sup>1</sup>_ | +| [Azure Tables][tables-sdk-types] | _Trigger does not exist_ | **Preview support** | _SDK types not recommended<sup>1</sup>_ | +| [Azure Event Grid][eventgrid-sdk-types] | **Preview support** | _Input binding does not exist_ | _SDK types not recommended<sup>1</sup>_ | [blob-sdk-types]: ./functions-bindings-storage-blob.md?tabs=isolated-process%2Cextensionv5&pivots=programming-language-csharp#binding-types [cosmos-sdk-types]: ./functions-bindings-cosmosdb-v2.md?tabs=isolated-process%2Cextensionv4&pivots=programming-language-csharp#binding-types+[tables-sdk-types]: ./functions-bindings-storage-table.md?tabs=isolated-process%2Ctable-api&pivots=programming-language-csharp#binding-types +[eventgrid-sdk-types]: 
./functions-bindings-event-grid.md?tabs=isolated-process%2Cextensionv3&pivots=programming-language-csharp#binding-types +[queue-sdk-types]: ./functions-bindings-storage-queue.md?tabs=isolated-process%2Cextensionv5&pivots=programming-language-csharp#binding-types +[eventhub-sdk-types]: ./functions-bindings-event-hubs.md?tabs=isolated-process%2Cextensionv5&pivots=programming-language-csharp#binding-types +[servicebus-sdk-types]: ./functions-bindings-service-bus.md?tabs=isolated-process%2Cextensionv5&pivots=programming-language-csharp#binding-types -<sup>1</sup> Support for SDK type bindings does not presently extend to output bindings. +<sup>1</sup> For output scenarios in which you would use an SDK type, you should create and work with SDK clients directly instead of using an output binding. -<sup>2</sup> The Cosmos DB trigger uses the [Azure Cosmos DB change feed](../cosmos-db/change-feed.md) and exposes change feed items as JSON-serializable types. The absence of SDK types is by-design for this scenario. +<sup>2</sup> The preview for the Service Bus trigger does not yet support message settlement scenarios. ++<sup>3</sup> The Cosmos DB trigger uses the [Azure Cosmos DB change feed](../cosmos-db/change-feed.md) and exposes change feed items as JSON-serializable types. The absence of SDK types is by-design for this scenario. The [SDK type binding samples](https://github.com/Azure/azure-functions-dotnet-worker/tree/main/samples/WorkerBindingSamples) show examples of working with the various supported types. > [!NOTE]-> When using [binding expressions](./functions-bindings-expressions-patterns.md) that rely on trigger data, SDK types for the trigger itself are not supported. --[sdk-types-worker-version]: https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker/1.12.1-preview1 -[sdk-types-worker-sdk-version]: https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Sdk/1.9.0-preview1 +> When using [binding expressions](./functions-bindings-expressions-patterns.md) that rely on trigger data, SDK types for the trigger itself cannot be used. ### HTTP trigger |
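To make the support matrix above concrete, here's a minimal sketch of an isolated worker function that binds to an SDK type. It assumes the preview package versions listed above plus the Blob storage extension; the container name, blob path, and the storage connection are placeholders.

```csharp
using System.Threading.Tasks;
using Azure.Storage.Blobs;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public class BlobSdkTypeFunction
{
    private readonly ILogger<BlobSdkTypeFunction> _logger;

    public BlobSdkTypeFunction(ILogger<BlobSdkTypeFunction> logger)
        => _logger = logger;

    [Function(nameof(BlobSdkTypeFunction))]
    public async Task Run(
        // The trigger still delivers simple content...
        [BlobTrigger("sample-container/{name}")] string content,
        // ...while the input binding hands back a full SDK client.
        [BlobInput("sample-container/sample-data.json")] BlobClient blobClient)
    {
        var properties = await blobClient.GetPropertiesAsync();
        _logger.LogInformation("Blob is {Length} bytes", properties.Value.ContentLength);
    }
}
```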
azure-functions | Functions Bindings Cosmosdb V2 Input | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb-v2-input.md | Here's the binding data in the *function.json* file: Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the function. C# script instead uses a function.json configuration file. -# [Functions 2.x+](#tab/functionsv2/in-process) -- # [Extension 4.x+](#tab/extensionv4/in-process) [!INCLUDE [functions-cosmosdb-input-attributes-v4](../../includes/functions-cosmosdb-input-attributes-v4.md)] -# [Functions 2.x+](#tab/functionsv2/isolated-process) +# [Functions 2.x+](#tab/functionsv2/in-process) [!INCLUDE [functions-cosmosdb-input-attributes-v3](../../includes/functions-cosmosdb-input-attributes-v3.md)] Both [in-process](functions-dotnet-class-library.md) and [isolated worker proces [!INCLUDE [functions-cosmosdb-input-attributes-v4](../../includes/functions-cosmosdb-input-attributes-v4.md)] -# [Functions 2.x+](#tab/functionsv2/csharp-script) +# [Functions 2.x+](#tab/functionsv2/isolated-process) # [Extension 4.x+](#tab/extensionv4/csharp-script) [!INCLUDE [functions-cosmosdb-input-settings-v4](../../includes/functions-cosmosdb-input-settings-v4.md)] +# [Functions 2.x+](#tab/functionsv2/csharp-script) ++ ::: zone-end For Python v2 functions defined using a decorator, the following properties on t |-|--| |`arg_name` | The variable name used in function code that represents the list of documents with changes. | |`database_name` | The name of the Azure Cosmos DB database with the collection being monitored. |-|`collection_name` | The name of the Azure CosmosDB collection being monitored. | +|`collection_name` | The name of the Azure Cosmos DB collection being monitored. | |`connection_string_setting` | The connection string of the Azure Cosmos DB being monitored. | |`partition_key` | The partition key of the Azure Cosmos DB being monitored. | |`id` | The ID of the document to retrieve. | _Applies only to the Python v1 programming model._ The following table explains the binding configuration properties that you set in the *function.json* file, where properties differ by extension version: -# [Functions 2.x+](#tab/functionsv2) -- # [Extension 4.x+](#tab/extensionv4) [!INCLUDE [functions-cosmosdb-settings-v4](../../includes/functions-cosmosdb-input-settings-v4.md)] +# [Functions 2.x+](#tab/functionsv2) ++ ::: zone-end See the [Example section](#example) for complete examples. ::: zone pivot="programming-language-csharp" -The parameter type supported by the Cosmos DB input binding depends on the Functions runtime version, the extension package version, and the C# modality used. 
+# [Extension 4.x+](#tab/extensionv4/in-process) # [Functions 2.x+](#tab/functionsv2/in-process) [!INCLUDE [functions-cosmosdb-usage](../../includes/functions-cosmosdb-usage.md)] -# [Extension 4.x+](#tab/extensionv4/in-process) +# [Extension 4.x+](#tab/extensionv4/isolated-process) [!INCLUDE [functions-cosmosdb-usage](../../includes/functions-cosmosdb-usage.md)] The parameter type supported by the Cosmos DB input binding depends on the Funct [!INCLUDE [functions-cosmosdb-usage](../../includes/functions-cosmosdb-usage.md)] -# [Extension 4.x+](#tab/extensionv4/isolated-process) +# [Extension 4.x+](#tab/extensionv4/csharp-script) # [Functions 2.x+](#tab/functionsv2/csharp-script) [!INCLUDE [functions-cosmosdb-settings-v3](../../includes/functions-cosmosdb-input-settings-v3.md)] +++The parameter type supported by the Cosmos DB input binding depends on the Functions runtime version, the extension package version, and the C# modality used. ++# [Extension 4.x+](#tab/extensionv4/in-process) ++See [Binding types](./functions-bindings-cosmosdb-v2.md?tabs=in-process%2Cextensionv4&pivots=programming-language-csharp#binding-types) for a list of supported types. ++# [Functions 2.x+](#tab/functionsv2/in-process) ++See [Binding types](./functions-bindings-cosmosdb-v2.md?tabs=in-process%2Cfunctionsv2&pivots=programming-language-csharp#binding-types) for a list of supported types. ++# [Extension 4.x+](#tab/extensionv4/isolated-process) +++# [Functions 2.x+](#tab/functionsv2/isolated-process) ++See [Binding types](./functions-bindings-cosmosdb-v2.md?tabs=isolated-process%2Cfunctionsv2&pivots=programming-language-csharp#binding-types) for a list of supported types. + # [Extension 4.x+](#tab/extensionv4/csharp-script) +See [Binding types](./functions-bindings-cosmosdb-v2.md?tabs=in-process%2Cextensionv4&pivots=programming-language-csharp#binding-types) for a list of supported types. ++# [Functions 2.x+](#tab/functionsv2/csharp-script) ++See [Binding types](./functions-bindings-cosmosdb-v2.md?tabs=in-process%2Cfunctionsv2&pivots=programming-language-csharp#binding-types) for a list of supported types. |
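As a worked example of the extension 4.x+ in-process configuration described above, the following sketch reads a single document whose `Id` and `PartitionKey` come from binding expressions on the HTTP route. The database, container, and connection-setting names are placeholders.

```csharp
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

public class ToDoItem
{
    public string id { get; set; }
    public string Description { get; set; }
}

public static class DocByIdFromRoute
{
    [FunctionName("DocByIdFromRoute")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get",
            Route = "todoitems/{partitionKey}/{id}")] HttpRequest req,
        // {id} and {partitionKey} are binding expressions resolved from
        // the HTTP route at invocation time.
        [CosmosDB(
            databaseName: "ToDoItems",
            containerName: "Items",
            Connection = "CosmosDBConnection",
            Id = "{id}",
            PartitionKey = "{partitionKey}")] ToDoItem toDoItem)
    {
        return toDoItem is null
            ? new NotFoundResult()
            : new OkObjectResult(toDoItem);
    }
}
```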
azure-functions | Functions Bindings Cosmosdb V2 Output | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb-v2-output.md | def main(req: func.HttpRequest, doc: func.Out[func.Document]) -> func.HttpRespon Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the function. C# script instead uses a function.json configuration file. -# [Functions 2.x+](#tab/functionsv2/in-process) -- # [Extension 4.x+](#tab/extensionv4/in-process) [!INCLUDE [functions-cosmosdb-output-attributes-v4](../../includes/functions-cosmosdb-output-attributes-v4.md)] -# [Functions 2.x+](#tab/functionsv2/isolated-process) +# [Functions 2.x+](#tab/functionsv2/in-process) [!INCLUDE [functions-cosmosdb-output-attributes-v3](../../includes/functions-cosmosdb-output-attributes-v3.md)] Both [in-process](functions-dotnet-class-library.md) and [isolated worker proces [!INCLUDE [functions-cosmosdb-output-attributes-v4](../../includes/functions-cosmosdb-output-attributes-v4.md)] -# [Functions 2.x+](#tab/functionsv2/csharp-script) +# [Functions 2.x+](#tab/functionsv2/isolated-process) # [Extension 4.x+](#tab/extensionv4/csharp-script) [!INCLUDE [functions-cosmosdb-output-settings-v4](../../includes/functions-cosmosdb-output-settings-v4.md)] +# [Functions 2.x+](#tab/functionsv2/csharp-script) ++ ::: zone-end For Python v2 functions defined using a decorator, the following properties on t |-|--| |`arg_name` | The variable name used in function code that represents the list of documents with changes. | |`database_name` | The name of the Azure Cosmos DB database with the collection being monitored. |-|`collection_name` | The name of the Azure CosmosDB collection being monitored. | +|`collection_name` | The name of the Azure Cosmos DB collection being monitored. | |`create_if_not_exists` | A Boolean value that indicates whether the database and collection should be created if they do not exist. | |`connection_string_setting` | The connection string of the Azure Cosmos DB being monitored. | _Applies only to the Python v1 programming model._ The following table explains the binding configuration properties that you set in the *function.json* file, where properties differ by extension version: -# [Functions 2.x+](#tab/functionsv2) -- # [Extension 4.x+](#tab/extensionv4) [!INCLUDE [functions-cosmosdb-settings-v4](../../includes/functions-cosmosdb-output-settings-v4.md)] +# [Functions 2.x+](#tab/functionsv2) ++ + ::: zone-end + See the [Example section](#example) for complete examples. ## Usage -By default, when you write to the output parameter in your function, a document is created in your database. This document has an automatically generated GUID as the document ID. You can specify the document ID of the output document by specifying the id property in the JSON object passed to the output parameter. +By default, when you write to the output parameter in your function, a document is created in your database. This document has an automatically generated GUID as the document ID. You can specify the document ID of the output document by specifying the `id` property in the JSON object passed to the output parameter. > [!NOTE] > When you specify the ID of an existing document, it gets overwritten by the new output document. ++The parameter type supported by the Cosmos DB output binding depends on the Functions runtime version, the extension package version, and the C# modality used. 
++# [Extension 4.x+](#tab/extensionv4/in-process) ++See [Binding types](./functions-bindings-cosmosdb-v2.md?tabs=in-process%2Cextensionv4&pivots=programming-language-csharp#binding-types) for a list of supported types. ++# [Functions 2.x+](#tab/functionsv2/in-process) ++See [Binding types](./functions-bindings-cosmosdb-v2.md?tabs=in-process%2Cfunctionsv2&pivots=programming-language-csharp#binding-types) for a list of supported types. ++# [Extension 4.x+](#tab/extensionv4/isolated-process) +++# [Functions 2.x+](#tab/functionsv2/isolated-process) ++See [Binding types](./functions-bindings-cosmosdb-v2.md?tabs=isolated-process%2Cfunctionsv2&pivots=programming-language-csharp#binding-types) for a list of supported types. ++# [Extension 4.x+](#tab/extensionv4/csharp-script) ++See [Binding types](./functions-bindings-cosmosdb-v2.md?tabs=in-process%2Cextensionv4&pivots=programming-language-csharp#binding-types) for a list of supported types. ++# [Functions 2.x+](#tab/functionsv2/csharp-script) ++See [Binding types](./functions-bindings-cosmosdb-v2.md?tabs=in-process%2Cfunctionsv2&pivots=programming-language-csharp#binding-types) for a list of supported types. ++++ ## Exceptions and return codes |
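A short in-process sketch of the `id` behavior described above, using placeholder database, container, and connection names: because the document sets `id` explicitly, rerunning the function overwrites the same document instead of creating a new GUID-named one.

```csharp
using Microsoft.AspNetCore.Http;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

public class ToDoItem
{
    public string id { get; set; }
    public string Description { get; set; }
}

public static class WriteDocWithExplicitId
{
    [FunctionName("WriteDocWithExplicitId")]
    public static void Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
        [CosmosDB(
            databaseName: "ToDoItems",
            containerName: "Items",
            Connection = "CosmosDBConnection")] out ToDoItem document)
    {
        // Because `id` is set, a rerun replaces the existing "todo-001"
        // document rather than creating a new one.
        document = new ToDoItem { id = "todo-001", Description = "Buy milk" };
    }
}
```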
azure-functions | Functions Bindings Cosmosdb V2 Trigger | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb-v2-trigger.md | C# script is used primarily when creating C# functions in the Azure portal. The following examples depend on the extension version for the given C# mode. +# [Extension 4.x+](#tab/extensionv4/in-process) ++Apps using [Azure Cosmos DB extension version 4.x](./functions-bindings-cosmosdb-v2.md?tabs=extensionv4) or higher will have different attribute properties, which are shown below. This example refers to a simple `ToDoItem` type. ++```cs +namespace CosmosDBSamplesV2 +{ + // Customize the model with your own desired properties + public class ToDoItem + { + public string id { get; set; } + public string Description { get; set; } + } +} +``` ++```cs +using System.Collections.Generic; +using Microsoft.Azure.WebJobs; +using Microsoft.Azure.WebJobs.Host; +using Microsoft.Extensions.Logging; ++namespace CosmosDBSamplesV2 +{ + public static class CosmosTrigger + { + [FunctionName("CosmosTrigger")] + public static void Run([CosmosDBTrigger( + databaseName: "databaseName", + containerName: "containerName", + Connection = "CosmosDBConnectionSetting", + LeaseContainerName = "leases", + CreateLeaseContainerIfNotExists = true)]IReadOnlyList<ToDoItem> input, ILogger log) + { + if (input != null && input.Count > 0) + { + log.LogInformation("Documents modified " + input.Count); + log.LogInformation("First document Id " + input[0].id); + } + } + } +} +``` + # [Functions 2.x+](#tab/functionsv2/in-process) The following example shows a [C# function](functions-dotnet-class-library.md) that is invoked when there are inserts or updates in the specified database and collection. namespace CosmosDBSamplesV2 } ``` -# [Extension 4.x+](#tab/extensionv4/in-process) +# [Extension 4.x+](#tab/extensionv4/isolated-process) -Apps using [Azure Cosmos DB extension version 4.x](./functions-bindings-cosmosdb-v2.md?tabs=extensionv4) or higher will have different attribute properties, which are shown below. This example refers to a simple `ToDoItem` type. +This example refers to a simple `ToDoItem` type: -```cs -namespace CosmosDBSamplesV2 +```csharp +public class ToDoItem {- // Customize the model with your own desired properties - public class ToDoItem - { - public string id { get; set; } - public string Description { get; set; } - } + public string? Id { get; set; } + public string? Description { get; set; } } ``` -```cs -using System.Collections.Generic; -using Microsoft.Azure.WebJobs; -using Microsoft.Azure.WebJobs.Host; -using Microsoft.Extensions.Logging; --namespace CosmosDBSamplesV2 +The following function is invoked when there are inserts or updates in the specified database and collection. 
++```csharp +[Function("CosmosTrigger")] +public void Run([CosmosDBTrigger( + databaseName: "ToDoItems", + containerName:"TriggerItems", + Connection = "CosmosDBConnection", + LeaseContainerName = "leases", + CreateLeaseContainerIfNotExists = true)] IReadOnlyList<ToDoItem> todoItems, + FunctionContext context) {- public static class CosmosTrigger + if (todoItems is not null && todoItems.Any()) {- [FunctionName("CosmosTrigger")] - public static void Run([CosmosDBTrigger( - databaseName: "databaseName", - containerName: "containerName", - Connection = "CosmosDBConnectionSetting", - LeaseContainerName = "leases", - CreateLeaseContainerIfNotExists = true)]IReadOnlyList<ToDoItem> input, ILogger log) + foreach (var doc in todoItems) {- if (input != null && input.Count > 0) - { - log.LogInformation("Documents modified " + input.Count); - log.LogInformation("First document Id " + input[0].id); - } + _logger.LogInformation("ToDoItem: {desc}", doc.Description); } } } This example requires the following `using` statements: :::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/CosmosDB/CosmosDBFunction.cs" range="4-7"::: -# [Extension 4.x+](#tab/extensionv4/isolated-process) --Example pending. --# [Functions 2.x+](#tab/functionsv2/csharp-script) +# [Extension 4.x+](#tab/extensionv4/csharp-script) The following example shows an Azure Cosmos DB trigger binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function writes log messages when Azure Cosmos DB records are added or modified. Here's the binding data in the *function.json* file: "type": "cosmosDBTrigger", "name": "documents", "direction": "in",- "leaseCollectionName": "leases", - "connectionStringSetting": "<connection-app-setting>", + "leaseContainerName": "leases", + "connection": "<connection-app-setting>", "databaseName": "Tasks",- "collectionName": "Items", - "createLeaseCollectionIfNotExists": true + "containerName": "Items", + "createLeaseContainerIfNotExists": true } ``` Here's the C# script code: ```cs- #r "Microsoft.Azure.DocumentDB.Core" - using System;- using Microsoft.Azure.Documents; using System.Collections.Generic; using Microsoft.Extensions.Logging; - public static void Run(IReadOnlyList<Document> documents, ILogger log) + // Customize the model with your own desired properties + public class ToDoItem + { + public string id { get; set; } + public string Description { get; set; } + } ++ public static void Run(IReadOnlyList<ToDoItem> documents, ILogger log) { log.LogInformation("Documents modified " + documents.Count);- log.LogInformation("First document Id " + documents[0].Id); + log.LogInformation("First document Id " + documents[0].id); } ``` -# [Extension 4.x+](#tab/extensionv4/csharp-script) +# [Functions 2.x+](#tab/functionsv2/csharp-script) The following example shows an Azure Cosmos DB trigger binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function writes log messages when Azure Cosmos DB records are added or modified. 
Here's the binding data in the *function.json* file: "type": "cosmosDBTrigger", "name": "documents", "direction": "in",- "leaseContainerName": "leases", - "connection": "<connection-app-setting>", + "leaseCollectionName": "leases", + "connectionStringSetting": "<connection-app-setting>", "databaseName": "Tasks",- "containerName": "Items", - "createLeaseContainerIfNotExists": true + "collectionName": "Items", + "createLeaseCollectionIfNotExists": true } ``` Here's the C# script code: ```cs+ #r "Microsoft.Azure.DocumentDB.Core" + using System;+ using Microsoft.Azure.Documents; using System.Collections.Generic; using Microsoft.Extensions.Logging; - // Customize the model with your own desired properties - public class ToDoItem - { - public string id { get; set; } - public string Description { get; set; } - } -- public static void Run(IReadOnlyList<ToDoItem> documents, ILogger log) + public static void Run(IReadOnlyList<Document> documents, ILogger log) { log.LogInformation("Documents modified " + documents.Count);- log.LogInformation("First document Id " + documents[0].id); + log.LogInformation("First document Id " + documents[0].Id); } ``` Here's the Python code: Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use the [CosmosDBTriggerAttribute](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions.CosmosDB/Trigger/CosmosDBTriggerAttribute.cs) to define the function. C# script instead uses a function.json configuration file. +# [Extension 4.x+](#tab/extensionv4/in-process) ++ # [Functions 2.x+](#tab/functionsv2/in-process) [!INCLUDE [functions-cosmosdb-attributes-v3](../../includes/functions-cosmosdb-attributes-v3.md)] -# [Extension 4.x+](#tab/extensionv4/in-process) +# [Extension 4.x+](#tab/extensionv4/isolated-process) [!INCLUDE [functions-cosmosdb-attributes-v4](../../includes/functions-cosmosdb-attributes-v4.md)] Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotn [!INCLUDE [functions-cosmosdb-attributes-v3](../../includes/functions-cosmosdb-attributes-v3.md)] -# [Extension 4.x+](#tab/extensionv4/isolated-process) +# [Extension 4.x+](#tab/extensionv4/csharp-script) # [Functions 2.x+](#tab/functionsv2/csharp-script) [!INCLUDE [functions-cosmosdb-settings-v3](../../includes/functions-cosmosdb-settings-v3.md)] -# [Extension 4.x+](#tab/extensionv4/csharp-script) -- ::: zone-end See the [Example section](#example) for complete examples. ## Usage -The parameter type supported by the Azure Cosmos DB trigger depends on the Functions runtime version, the extension package version, and the C# modality used. - The trigger requires a second collection that it uses to store _leases_ over the partitions. Both the collection being monitored and the collection that contains the leases must be available for the trigger to work. ::: zone pivot="programming-language-csharp" The trigger requires a second collection that it uses to store _leases_ over the The trigger doesn't indicate whether a document was updated or inserted; it just provides the document itself. If you need to handle updates and inserts differently, you could do that by implementing timestamp fields for insertion or update. ++The parameter type supported by the Azure Cosmos DB trigger depends on the Functions runtime version, the extension package version, and the C# modality used. 
++# [Extension 4.x+](#tab/extensionv4/in-process) ++See [Binding types](./functions-bindings-cosmosdb-v2.md?tabs=in-process%2Cextensionv4&pivots=programming-language-csharp#binding-types) for a list of supported types. ++# [Functions 2.x+](#tab/functionsv2/in-process) ++See [Binding types](./functions-bindings-cosmosdb-v2.md?tabs=in-process%2Cfunctionsv2&pivots=programming-language-csharp#binding-types) for a list of supported types. ++# [Extension 4.x+](#tab/extensionv4/isolated-process) +++# [Functions 2.x+](#tab/functionsv2/isolated-process) ++See [Binding types](./functions-bindings-cosmosdb-v2.md?tabs=isolated-process%2Cfunctionsv2&pivots=programming-language-csharp#binding-types) for a list of supported types. ++# [Extension 4.x+](#tab/extensionv4/csharp-script) ++See [Binding types](./functions-bindings-cosmosdb-v2.md?tabs=in-process%2Cextensionv4&pivots=programming-language-csharp#binding-types) for a list of supported types. ++# [Functions 2.x+](#tab/functionsv2/csharp-script) ++See [Binding types](./functions-bindings-cosmosdb-v2.md?tabs=in-process%2Cfunctionsv2&pivots=programming-language-csharp#binding-types) for a list of supported types. ++++ [!INCLUDE [functions-cosmosdb-connections](../../includes/functions-cosmosdb-connections.md)] ## Next steps |
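To make the insert-versus-update caveat above concrete, here's a minimal sketch of the timestamp approach the Usage section suggests. It assumes the writing application maintains its own `CreatedAt` and `UpdatedAt` fields on every document; both field names, and the `ToDoItem` model, are illustrative and not part of the binding:

```csharp
using System;
using System.Collections.Generic;
using Microsoft.Extensions.Logging;

// Hypothetical model; the application itself must stamp these fields on every write.
public class ToDoItem
{
    public string id { get; set; }
    public string Description { get; set; }
    public DateTime CreatedAt { get; set; }
    public DateTime UpdatedAt { get; set; }
}

public static class ChangeClassifier
{
    // Classifies each document in a change feed batch as an insert or an update.
    public static void Classify(IReadOnlyList<ToDoItem> documents, ILogger log)
    {
        foreach (var doc in documents)
        {
            var kind = doc.UpdatedAt > doc.CreatedAt ? "update" : "insert";
            log.LogInformation("Document {Id} was an {Kind}", doc.id, kind);
        }
    }
}
```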
azure-functions | Functions Bindings Cosmosdb V2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb-v2.md | The extension NuGet package you install depends on the C# mode you're using in y Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md). +In a variation of this model, Functions can be run using [C# scripting], which is supported primarily for C# portal editing. To update existing binding extensions for C# script apps running in the portal without having to republish your function app, see [Update your extensions]. + # [Isolated process](#tab/isolated-process) Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md). -# [C# script](#tab/csharp-script) --Functions run as C# script, which is supported primarily for C# portal editing. To update existing binding extensions for C# script apps running in the portal without having to republish your function app, see [Update your extensions]. - The process for installing the extension varies depending on the extension version: -# [Functions 2.x+](#tab/functionsv2/in-process) --Working with the trigger and bindings requires that you reference the appropriate NuGet package. Install the [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.CosmosDB/3.0.10), version 3.x. - # [Extension 4.x+](#tab/extensionv4/in-process) +_This section describes using a [class library](./functions-dotnet-class-library.md). For [C# scripting], you would need to instead [install the extension bundle][Update your extensions], version 4.x._ + This version of the Azure Cosmos DB bindings extension introduces the ability to [connect using an identity instead of a secret](./functions-reference.md#configure-an-identity-based-connection). For a tutorial on configuring your function apps with managed identities, see the [creating a function app with identity-based connections tutorial](./functions-identity-based-connections-tutorial.md). This version also changes the types that you can bind to, replacing the types from the v2 SDK `Microsoft.Azure.DocumentDB` with newer types from the v3 SDK [Microsoft.Azure.Cosmos](../cosmos-db/sql/sql-api-sdk-dotnet-standard.md). Learn more about how these new types are different and how to migrate to them from the [SDK migration guide](../cosmos-db/sql/migrate-dotnet-v3.md), [trigger](./functions-bindings-cosmosdb-v2-trigger.md), [input binding](./functions-bindings-cosmosdb-v2-input.md), and [output binding](./functions-bindings-cosmosdb-v2-output.md) examples. This extension version is available as a [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.CosmosDB), version 4.x. -# [Functions 2.x+](#tab/functionsv2/isolated-process) +# [Functions 2.x+](#tab/functionsv2/in-process) -Add the extension to your project by installing the [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.CosmosDB/), version 3.x. +_This section describes using a [class library](./functions-dotnet-class-library.md). For [C# scripting], you would need to instead [install the extension bundle][Update your extensions], version 2.x or 3.x._ ++Working with the trigger and bindings requires that you reference the appropriate NuGet package. 
Install the [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.CosmosDB/3.0.10), version 3.x. # [Extension 4.x+](#tab/extensionv4/isolated-process) This version of the Azure Cosmos DB bindings extension introduces the ability to Add the extension to your project by installing the [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.CosmosDB/), version 4.x. -# [Functions 2.x+](#tab/functionsv2/csharp-script) --You can install this version of the extension in your function app by registering the [extension bundle], version 2.x. --# [Extension 4.x+](#tab/extensionv4/csharp-script) --This extension version is available from the extension bundle v4 by adding the following lines in your `host.json` file: +# [Functions 2.x+](#tab/functionsv2/isolated-process) -```json -{ - "version": "2.0", - "extensionBundle": { - "id": "Microsoft.Azure.Functions.ExtensionBundle", - "version": "[4.0.0, 5.0.0)" - } -} -``` +Add the extension to your project by installing the [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.CosmosDB/), version 3.x. This extension version is available from the extension bundle v4 by adding the f The Azure Cosmos DB bindings extension is part of an [extension bundle], which is specified in your *host.json* project file. You may need to modify this bundle to change the version of the binding, or if bundles aren't already installed. To learn more, see [extension bundle]. -# [Bundle v2.x and v3.x](#tab/functionsv2) --You can install this version of the extension in your function app by registering the [extension bundle], version 2.x or 3.x. ---# [Bundle v4.x](#tab/extensionv4) --This version of the bundle contains version 4.x of the Azure Cosmos DB bindings extension that introduces the ability to [connect using an identity instead of a secret](./functions-reference.md#configure-an-identity-based-connection). For a tutorial on configuring your function apps with managed identities, see the [creating a function app with identity-based connections tutorial](./functions-identity-based-connections-tutorial.md). - ::: zone-end ::: zone pivot="programming-language-java" [!INCLUDE [functions-cosmosdb-extension-java-note](../../includes/functions-cosmosdb-extension-java-note.md)] ::: zone-end ::: zone pivot="programming-language-javascript,programming-language-python,programming-language-java,programming-language-powershell" +# [Bundle v4.x](#tab/extensionv4) ++This version of the bundle contains version 4.x of the Azure Cosmos DB bindings extension that introduces the ability to [connect using an identity instead of a secret](./functions-reference.md#configure-an-identity-based-connection). For a tutorial on configuring your function apps with managed identities, see the [creating a function app with identity-based connections tutorial](./functions-identity-based-connections-tutorial.md). + You can add this version of the extension from the preview extension bundle v4 by adding or replacing the following code in your `host.json` file: ```json You can add this version of the extension from the preview extension bundle v4 b To learn more, see [Update your extensions]. +# [Bundle v2.x and v3.x](#tab/functionsv2) ++You can install this version of the extension in your function app by registering the [extension bundle], version 2.x or 3.x. ++ ::: zone-end To learn more, see [Update your extensions]. 
The binding types supported for .NET depend on both the extension version and C# execution mode, which can be one of the following: -# [In-process class library](#tab/in-process) +# [In-process](#tab/in-process) An in-process class library is a compiled C# function that runs in the same process as the Functions runtime. # [Isolated process](#tab/isolated-process) An isolated worker process class library is a compiled C# function that runs in a process isolated from the runtime. - -# [C# script](#tab/csharp-script) --C# script is used primarily when creating C# functions in the Azure portal. Choose a version to see binding type details for the mode and version. -# [Extension 4.x and higher](#tab/extensionv4/in-process) +# [Extension 4.x+](#tab/extensionv4/in-process) The Azure Cosmos DB extension supports parameter types according to the table below. -| Binding | Parameter types | |-|-|-| -| Cosmos DB trigger | JSON serializable types<sup>1</sup><br/>`IEnumerable<T>`<sup>2</sup> | -| Cosmos DB input | JSON serializable types<sup>1</sup><br/>`IEnumerable<T>`<sup>2</sup><br/>[CosmosClient] | -| Cosmos DB output | JSON serializable types<sup>1</sup> | +| Cosmos DB trigger (single document) | JSON serializable types<sup>1</sup> | +| Cosmos DB trigger (batch of documents) | `IEnumerable<T>` where `T` is a JSON serializable type<sup>1</sup> | +| Cosmos DB input (single document) | JSON serializable types<sup>1</sup> | +| Cosmos DB input (query returning multiple documents) | [CosmosClient]<br/>`IEnumerable<T>` where `T` is a JSON serializable type<sup>1</sup> | +| Cosmos DB output (single document) | JSON serializable types<sup>1</sup> | +| Cosmos DB output (multiple documents) | `ICollector<T>` or `IAsyncCollector<T>` where `T` is a JSON serializable type<sup>1</sup> | <sup>1</sup> Documents containing JSON data can be deserialized into known plain-old CLR object (POCO) types. -<sup>2</sup> `IEnumerable<T>` provides a collection of documents. Here, `T` is a JSON serializable type. When specified for a trigger, it allows a single invocation to process a batch of documents. When used for an input binding, this allows multiple documents to be returned by the query. --# [Functions 2.x and higher](#tab/functionsv2/in-process) +# [Functions 2.x+](#tab/functionsv2/in-process) Earlier versions of the extension exposed types from the now deprecated [Microsoft.Azure.Documents] namespace. Newer types from [Microsoft.Azure.Cosmos] are exclusive to **extension 4.x and higher**. -# [Extension 4.x and higher](#tab/extensionv4/isolated-process) +# [Extension 4.x+](#tab/extensionv4/isolated-process) -The isolated worker process supports parameter types according to the table below. Binding to JSON serializeable types is currently the only option that is generally available. Support for binding to types from [Microsoft.Azure.Cosmos] is in preview. +The isolated worker process supports parameter types according to the tables below. Support for binding to types from [Microsoft.Azure.Cosmos] is in preview. 
-| Binding | Parameter types | Preview parameter types<sup>1</sup> | -|-|-|-| -| Cosmos DB trigger | JSON serializable types<sup>2</sup><br/>`IEnumerable<T>`<sup>3</sup> | *No preview types* | -| Cosmos DB input | JSON serializable types<sup>2</sup><br/>`IEnumerable<T>`<sup>3</sup> | [CosmosClient]<br/>[Database]<br/>[Container] | -| Cosmos DB output | JSON serializable types<sup>2</sup> | *No preview types*<sup>4</sup> | +**Cosmos DB trigger** -<sup>1</sup> Preview types require use of [Microsoft.Azure.Functions.Worker.Extensions.CosmosDB 4.1.0-preview1 or later][sdk-types-extension-version], [Microsoft.Azure.Functions.Worker 1.12.1-preview1 or later][sdk-types-worker-version], and [Microsoft.Azure.Functions.Worker.Sdk 1.9.0-preview1 or later][sdk-types-worker-sdk-version]. When developing on your local machine, you will need [Azure Functions Core Tools version 4.0.5000 or later](./functions-run-local.md). When using a preview type, [binding expressions](./functions-bindings-expressions-patterns.md) that rely on trigger data are not supported. -[sdk-types-extension-version]: https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.CosmosDB/4.1.0-preview1 -[sdk-types-worker-version]: https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker/1.12.1-preview1 -[sdk-types-worker-sdk-version]: https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Sdk/1.9.0-preview1 +**Cosmos DB input binding** -<sup>2</sup> Documents containing JSON data can be deserialized into known plain-old CLR object (POCO) types. -<sup>3</sup> `IEnumerable<T>` provides a collection of documents. Here, `T` is a JSON serializable type. When specified for a trigger, it allows a single invocation to process a batch of documents. When used for an input binding, this allows multiple documents to be returned by the query. +**Cosmos DB output binding** -<sup>4</sup> Support for SDK type bindings does not presently extend to output bindings. -# [Functions 2.x and higher](#tab/functionsv2/isolated-process) +# [Functions 2.x+](#tab/functionsv2/isolated-process) Earlier versions of extensions in the isolated worker process only support binding to JSON serializable types. Additional options are available to **extension 4.x and higher**. -# [Extension 4.x and higher](#tab/extensionv4/csharp-script) --The Azure Cosmos DB extension supports parameter types according to the table below. --| Binding | Parameter types | -|-|-|-| -| Cosmos DB trigger | JSON serializable types<sup>1</sup><br/>`IEnumerable<T>`<sup>2</sup> | -| Cosmos DB input | JSON serializable types<sup>1</sup><br/>`IEnumerable<T>`<sup>2</sup><br/>[CosmosClient] | -| Cosmos DB output | JSON serializable types<sup>1</sup> | --<sup>1</sup> Documents containing JSON data can be deserialized into known plain-old CLR object (POCO) types. --<sup>2</sup> `IEnumerable<T>` provides a collection of documents. Here, `T` is a JSON serializable type. When specified for a trigger, it allows a single invocation to process a batch of documents. When used for an input binding, this allows multiple documents to be returned by the query. --# [Functions 2.x and higher](#tab/functionsv2/csharp-script) --Earlier versions of the extension exposed types from the now deprecated [Microsoft.Azure.Documents] namespace. Newer types from [Microsoft.Azure.Cosmos] are exclusive to **extension 4.x and higher**. 
- [Microsoft.Azure.Cosmos]: /dotnet/api/microsoft.azure.cosmos Earlier versions of the extension exposed types from the now deprecated [Microso [!INCLUDE [functions-host-json-section-intro](../../includes/functions-host-json-section-intro.md)] -# [Functions 2.x+](#tab/functionsv2) +# [Extension 4.x+](#tab/extensionv4) ```json { Earlier versions of the extension exposed types from the now deprecated [Microso "extensions": { "cosmosDB": { "connectionMode": "Gateway",- "protocol": "Https", - "leaseOptions": { - "leasePrefix": "prefix1" - } + "userAgentSuffix": "MyDesiredUserAgentStamp" } } } Earlier versions of the extension exposed types from the now deprecated [Microso |Property |Default |Description | |-|--|| |**connectionMode**|`Gateway`|The connection mode used by the function when connecting to the Azure Cosmos DB service. Options are `Direct` and `Gateway`|-|**protocol**|`Https`|The connection protocol used by the function when connection to the Azure Cosmos DB service. Read [here for an explanation of both modes](../cosmos-db/performance-tips.md#networking). | -|**leasePrefix**|n/a|Lease prefix to use across all functions in an app. | +|**userAgentSuffix**| n/a | Adds the specified string value to all requests made by the trigger or binding to the service. This makes it easier for you to track the activity in Azure Monitor, based on a specific function app and filtering by `User Agent`. | ++# [Functions 2.x+](#tab/functionsv2) ```json { Earlier versions of the extension exposed types from the now deprecated [Microso "extensions": { "cosmosDB": { "connectionMode": "Gateway",- "userAgentSuffix": "MyDesiredUserAgentStamp" + "protocol": "Https", + "leaseOptions": { + "leasePrefix": "prefix1" + } } } } Earlier versions of the extension exposed types from the now deprecated [Microso |Property |Default |Description | |-|--|| |**connectionMode**|`Gateway`|The connection mode used by the function when connecting to the Azure Cosmos DB service. Options are `Direct` and `Gateway`|-|**userAgentSuffix**| n/a | Adds the specified string value to all requests made by the trigger or binding to the service. This makes it easier for you to track the activity in Azure Monitor, based on a specific function app and filtering by `User Agent`. +|**protocol**|`Https`|The connection protocol used by the function when connecting to the Azure Cosmos DB service. Read [here for an explanation of both modes](../cosmos-db/performance-tips.md#networking). | +|**leasePrefix**|n/a|Lease prefix to use across all functions in an app. | + Earlier versions of the extension exposed types from the now deprecated [Microso [extension bundle]: ./functions-bindings-register.md#extension-bundles [Update your extensions]: ./functions-bindings-register.md++[C# scripting]: ./functions-reference-csharp.md |
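As an illustration of the in-process extension 4.x parameter types tabled above, a trigger can receive a batch through `IEnumerable<T>` while an output binding collects documents through `IAsyncCollector<T>`. A minimal sketch, assuming the 4.x attribute properties shown earlier in this article; database, container, and connection-setting names are placeholders:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;

// Hypothetical model for the JSON documents in the monitored container.
public class ToDoItem
{
    public string id { get; set; }
    public string Description { get; set; }
}

public static class CopyItems
{
    [FunctionName("CopyItems")]
    public static async Task Run(
        [CosmosDBTrigger("SourceDb", "SourceContainer",
            Connection = "CosmosDBConnection",
            LeaseContainerName = "leases")] IEnumerable<ToDoItem> input,
        [CosmosDB("TargetDb", "TargetContainer",
            Connection = "CosmosDBConnection")] IAsyncCollector<ToDoItem> output)
    {
        // Each changed document from the batch is queued for the output binding to write.
        foreach (var item in input)
        {
            await output.AddAsync(item);
        }
    }
}
```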
azure-functions | Functions Bindings Event Grid Output | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-grid-output.md | In-process C# class library functions support the following types: # [Extension v3.x](#tab/extensionv3/isolated-process) -Requires you to define a custom type, or use a string. See the [Example section](#example) for examples of using a custom parameter type. # [Extension v2.x](#tab/extensionv2/isolated-process) |
azure-functions | Functions Bindings Event Grid Trigger | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-grid-trigger.md | In-process C# class library functions support the following types: # [Extension v3.x](#tab/extensionv3/isolated-process) -Requires you to define a custom type, or use a string. See the [Example section](#example) for examples of using a custom parameter type. # [Extension v2.x](#tab/extensionv2/isolated-process) |
azure-functions | Functions Bindings Event Grid | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-grid.md | The extension NuGet package you install depends on the C# mode you're using in y Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md). +In a variation of this model, Functions can be run using [C# scripting], which is supported primarily for C# portal editing. To update existing binding extensions for C# script apps running in the portal without having to republish your function app, see [Update your extensions]. + # [Isolated process](#tab/isolated-process) Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md). -# [C# script](#tab/csharp-script) --Functions run as C# script, which is supported primarily for C# portal editing. To update existing binding extensions for C# script apps running in the portal without having to republish your function app, see [Update your extensions]. - The functionality of the extension varies depending on the extension version: # [Extension v3.x](#tab/extensionv3/in-process) -This version of the extension supports updated Event Grid binding parameter types of [Azure.Messaging.CloudEvent](/dotnet/api/azure.messaging.cloudevent) and [Azure.Messaging.EventGrid.EventGridEvent](/dotnet/api/azure.messaging.eventgrid.eventgridevent). +_This section describes using a [class library](./functions-dotnet-class-library.md). For [C# scripting], you would need to instead [install the extension bundle][Update your extensions], version 3.x._ ++This version of the extension supports updated Event Grid binding parameter types of [Azure.Messaging.CloudEvent][CloudEvent] and [Azure.Messaging.EventGrid.EventGridEvent][EventGridEvent]. Add this version of the extension to your project by installing the [NuGet package], version 3.x. # [Extension v2.x](#tab/extensionv2/in-process) -Supports the default Event Grid binding parameter type of [Microsoft.Azure.EventGrid.Models.EventGridEvent](/dotnet/api/microsoft.azure.eventgrid.models.eventgridevent). Event Grid extension versions earlier than 3.x don't support [CloudEvents schema](../event-grid/cloudevents-schema.md#azure-functions). To consume this schema, instead use an HTTP trigger. +_This section describes using a [class library](./functions-dotnet-class-library.md). For [C# scripting], you would need to instead [install the extension bundle][Update your extensions], version 2.x._ ++Supports the default Event Grid binding parameter type of [Microsoft.Azure.EventGrid.Models.EventGridEvent](/dotnet/api/microsoft.azure.eventgrid.models.eventgridevent). Event Grid extension versions earlier than 3.x don't support [CloudEvents schema](../event-grid/cloudevents-schema.md#azure-functions). To consume this schema, instead use an HTTP trigger, or switch to **Extension v3.x**. Add the extension to your project by installing the [NuGet package], version 2.x. # [Functions 1.x](#tab/functionsv1/in-process) -Functions 1.x apps automatically have a reference to the [Microsoft.Azure.WebJobs](https://www.nuget.org/packages/Microsoft.Azure.WebJobs) NuGet package, version 2.x. Event Grid extension versions earlier than 3.x don't support [CloudEvents schema](../event-grid/cloudevents-schema.md#azure-functions). 
To consume this schema, instead use an HTTP trigger. +Functions 1.x apps automatically have a reference to the [Microsoft.Azure.WebJobs](https://www.nuget.org/packages/Microsoft.Azure.WebJobs) NuGet package, version 2.x. Event Grid extension versions earlier than 3.x don't support [CloudEvents schema](../event-grid/cloudevents-schema.md#azure-functions). To consume this schema, instead use an HTTP trigger, or switch to **Extension v3.x**. To do so, you will need to [upgrade your application to Functions 4.x]. + The Event Grid output binding is only available for Functions 2.x and higher. Functions version 1.x doesn't support the isolated worker process. The Event Grid output binding is only available for Functions 2.x and higher. -# [Extension v3.x](#tab/extensionv3/csharp-script) --This version of the extension supports updated Event Grid binding parameter types of [Azure.Messaging.CloudEvent](/dotnet/api/azure.messaging.cloudevent) and [Microsoft.Azure.EventGrid.Models.EventGridEvent](/dotnet/api/microsoft.azure.eventgrid.models.eventgridevent). --You can install this version of the extension in your function app by registering the [extension bundle], version 3.x. --# [Extension v2.x](#tab/extensionv2/csharp-script) --Supports the default Event Grid binding parameter type of [Microsoft.Azure.EventGrid.Models.EventGridEvent](/dotnet/api/microsoft.azure.eventgrid.models.eventgridevent). Event Grid extension versions earlier than 3.x don't support [CloudEvents schema](../event-grid/cloudevents-schema.md#azure-functions). To consume this schema, instead use an HTTP trigger. --You can install this version of the extension in your function app by registering the [extension bundle], version 2.x. --# [Functions 1.x](#tab/functionsv1/csharp-script) --Functions 1.x apps automatically have a reference to the [Microsoft.Azure.WebJobs](https://www.nuget.org/packages/Microsoft.Azure.WebJobs) NuGet package, version 2.x. Event Grid extension versions earlier than 3.x don't support [CloudEvents schema](../event-grid/cloudevents-schema.md#azure-functions). To consume this schema, instead use an HTTP trigger. --The Event Grid output binding is only available for Functions 2.x and higher. - ::: zone-end The Event Grid output binding is only available for Functions 2.x and higher. Ev ::: zone-end ++## Binding types ++The binding types supported for .NET depend on both the extension version and C# execution mode, which can be one of the following: + +# [In-process](#tab/in-process) ++An in-process class library is a compiled C# function that runs in the same process as the Functions runtime. + +# [Isolated process](#tab/isolated-process) ++An isolated worker process class library is a compiled C# function that runs in a process isolated from the runtime. ++++Choose a version to see binding type details for the mode and version. ++# [Extension v3.x](#tab/extensionv3/in-process) ++The Event Grid extension supports parameter types according to the table below. 
++| Binding | Parameter types | +|-|-| +| Event Grid trigger | [CloudEvent]<br/>[EventGridEvent]<br/>[BinaryData]<br/>[Newtonsoft.Json.Linq.JObject][JObject]<br/>`string` | +| Event Grid output (single event) | [CloudEvent]<br/>[EventGridEvent]<br/>[BinaryData]<br/>[Newtonsoft.Json.Linq.JObject][JObject]<br/>`string` | +| Event Grid output (multiple events) | `ICollector<T>` or `IAsyncCollector<T>` where `T` is one of the single event types | ++# [Extension v2.x](#tab/extensionv2/in-process) ++This version of the extension supports parameter types according to the table below. It doesn't support the [CloudEvents schema], which is exclusive to **Extension v3.x**. ++| Binding | Parameter types | +|-|-| +| Event Grid trigger | [Microsoft.Azure.EventGrid.Models.EventGridEvent]<br/>[Newtonsoft.Json.Linq.JObject][JObject]<br/>`string` | +| Event Grid output | [Microsoft.Azure.EventGrid.Models.EventGridEvent]<br/>[Newtonsoft.Json.Linq.JObject][JObject]<br/>`string` | ++# [Functions 1.x](#tab/functionsv1/in-process) ++This version of the extension supports parameter types according to the table below. It doesn't support the [CloudEvents schema], which is exclusive to **Extension v3.x**. ++| Binding | Parameter types | +|-|-| +| Event Grid trigger | [Newtonsoft.Json.Linq.JObject][JObject]<br/>`string` | +| Event Grid output | [Newtonsoft.Json.Linq.JObject][JObject]<br/>`string` | ++# [Extension v3.x](#tab/extensionv3/isolated-process) ++The isolated worker process supports parameter types according to the tables below. Support for binding to `Stream`, and to types from [Azure.Messaging] is in preview. ++**Event Grid trigger** +++**Event Grid output binding** +++# [Extension v2.x](#tab/extensionv2/isolated-process) ++Earlier versions of this extension in the isolated worker process only support binding to strings and plain-old CLR object (POCO) types. Additional options are available to **Extension v3.x**. ++# [Functions 1.x](#tab/functionsv1/isolated-process) ++Functions version 1.x doesn't support the isolated worker process. To use the isolated worker model, [upgrade your application to Functions 4.x]. ++++[CloudEvent]: /dotnet/api/azure.messaging.cloudevent +[EventGridEvent]: /dotnet/api/azure.messaging.eventgrid.eventgridevent +[BinaryData]: /dotnet/api/system.binarydata ++[JObject]: https://www.newtonsoft.com/json/help/html/t_newtonsoft_json_linq_jobject.htm +[Microsoft.Azure.EventGrid.Models.EventGridEvent]: /dotnet/api/microsoft.azure.eventgrid.models.eventgridevent +[upgrade your application to Functions 4.x]: ./migrate-version-1-version-4.md ++ ## Next steps * If you have questions, submit an issue to the team [here](https://github.com/Azure/azure-sdk-for-net/issues) The Event Grid output binding is only available for Functions 2.x and higher. Ev * [Run a function when an Event Grid event is dispatched](./functions-bindings-event-grid-trigger.md) * [Dispatch an Event Grid event](./functions-bindings-event-grid-output.md) +[Azure.Messaging]: /dotnet/api/azure.messaging +[Azure.Messaging.EventGrid]: /dotnet/api/azure.messaging.eventgrid + [binding]: functions-bindings-event-grid-output.md [trigger]: functions-bindings-event-grid-trigger.md [extension bundle]: ./functions-bindings-register.md#extension-bundles [NuGet package]: https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.EventGrid [Update your extensions]: ./functions-bindings-register.md++[CloudEvents schema]: ../event-grid/cloudevents-schema.md#azure-functions ++[C# scripting]: ./functions-reference-csharp.md |
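To make the in-process Extension v3.x table above concrete, here's a minimal sketch of an Event Grid trigger bound to [CloudEvent]. The function name is illustrative, and the sketch assumes extension v3.x with an in-process class library:

```csharp
using Azure.Messaging;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.EventGrid;
using Microsoft.Extensions.Logging;

public static class OnGridEvent
{
    [FunctionName("OnGridEvent")]
    public static void Run(
        [EventGridTrigger] CloudEvent cloudEvent, ILogger log)
    {
        // CloudEvent exposes the CloudEvents 1.0 envelope fields directly.
        log.LogInformation("Event type: {Type}, subject: {Subject}",
            cloudEvent.Type, cloudEvent.Subject);
    }
}
```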
azure-functions | Functions Bindings Event Hubs Output | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-hubs-output.md | public static string Run([TimerTrigger("0 */5 * * * *")] TimerInfo myTimer, ILog } ``` -The following example shows how to use the `IAsyncCollector` interface to send a batch of messages. This scenario is common when you are processing messages coming from one Event Hub and sending the result to another Event Hub. +The following example shows how to use the `IAsyncCollector` interface to send a batch of messages. This scenario is common when you are processing messages coming from one event hub and sending the result to another event hub. ```csharp [FunctionName("EH2EH")] public static async Task Run( string newEventBody = DoSomething(eventData); // Queue the message to be sent in the background by adding it to the collector.- // If only the event is passed, an Event Hub partition to be be assigned via + // If only the event is passed, an Event Hubs partition will be assigned via round-robin for each batch. await outputEvents.AddAsync(new EventData(newEventBody)); // If your scenario requires that certain events are grouped together in an- // Event Hub partition, you can specify a partition key. Events added with + // Event Hubs partition, you can specify a partition key. Events added with the same key will always be assigned to the same partition. await outputEvents.AddAsync(new EventData(newEventBody), "sample-key"); } def main(timer: func.TimerRequest) -> str: ::: zone-end ::: zone pivot="programming-language-java"-The following example shows a Java function that writes a message containing the current time to an Event Hub. +The following example shows a Java function that writes a message containing the current time to an event hub. ```java @FunctionName("sendTime") public String sendTime( } ``` -In the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the `@EventHubOutput` annotation on parameters whose value would be published to Event Hub. The parameter should be of type `OutputBinding<T>` , where `T` is a POJO or any native Java type. +In the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the `@EventHubOutput` annotation on parameters whose value would be published to Event Hubs. The parameter should be of type `OutputBinding<T>`, where `T` is a POJO or any native Java type. ::: zone-end ::: zone pivot="programming-language-csharp" For Python functions defined by using *function.json*, see the [Configuration](# ::: zone-end ::: zone pivot="programming-language-java" ## Annotations -In the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the [EventHubOutput](/java/api/com.microsoft.azure.functions.annotation.eventhuboutput) annotation on parameters whose value would be published to Event Hub. The following settings are supported on the annotation: +In the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the [EventHubOutput](/java/api/com.microsoft.azure.functions.annotation.eventhuboutput) annotation on parameters whose value would be published to Event Hubs. The following settings are supported on the annotation: + [name](/java/api/com.microsoft.azure.functions.annotation.eventhuboutput.name) + [dataType](/java/api/com.microsoft.azure.functions.annotation.eventhuboutput.datatype) Send messages by using a method parameter such as `out string paramName`. 
To write multiple messages, you can use `ICollector<string>` or `IAsyncCollector<string>` in place of `out string`. # [Extension v5.x+](#tab/extensionv5/isolated-process) -Requires you to define a custom type, or use a string. # [Extension v3.x+](#tab/extensionv3/isolated-process) Send messages by using a method parameter such as `out string paramName`, where ::: zone-end ::: zone pivot="programming-language-java" -There are two options for outputting an Event Hub message from a function by using the [EventHubOutput](/java/api/com.microsoft.azure.functions.annotation.eventhuboutput) annotation: +There are two options for outputting an Event Hubs message from a function by using the [EventHubOutput](/java/api/com.microsoft.azure.functions.annotation.eventhuboutput) annotation: -- **Return value**: By applying the annotation to the function itself, the return value of the function is persisted as an Event Hub message.+- **Return value**: By applying the annotation to the function itself, the return value of the function is persisted as an Event Hubs message. -- **Imperative**: To explicitly set the message value, apply the annotation to a specific parameter of the type [`OutputBinding<T>`](/java/api/com.microsoft.azure.functions.OutputBinding), where `T` is a POJO or any native Java type. With this configuration, passing a value to the `setValue` method persists the value as an Event Hub message.+- **Imperative**: To explicitly set the message value, apply the annotation to a specific parameter of the type [`OutputBinding<T>`](/java/api/com.microsoft.azure.functions.OutputBinding), where `T` is a POJO or any native Java type. With this configuration, passing a value to the `setValue` method persists the value as an Event Hubs message. ::: zone-end ::: zone pivot="programming-language-powershell" Access the output event by using `context.bindings.<name>` where `<name>` is the ::: zone-end ::: zone pivot="programming-language-python" -There are two options for outputting an Event Hub message from a function: +There are two options for outputting an Event Hubs message from a function: -- **Return value**: Set the `name` property in *function.json* to `$return`. With this configuration, the function's return value is persisted as an Event Hub message.+- **Return value**: Set the `name` property in *function.json* to `$return`. With this configuration, the function's return value is persisted as an Event Hubs message. -- **Imperative**: Pass a value to the [set](/python/api/azure-functions/azure.functions.out#set-val--t--none) method of the parameter declared as an [Out](/python/api/azure-functions/azure.functions.out) type. The value passed to `set` is persisted as an Event Hubs message.+- **Imperative**: Pass a value to the [set](/python/api/azure-functions/azure.functions.out#set-val--t--none) method of the parameter declared as an [Out](/python/api/azure-functions/azure.functions.out) type. The value passed to `set` is persisted as an Event Hubs message. ::: zone-end There are two options for outputting an Event Hub message from a function: | Binding | Reference | |||-| Event Hub | [Operations Guide](/rest/api/eventhub/publisher-policy-operations) | +| Event Hubs | [Operations Guide](/rest/api/eventhub/publisher-policy-operations) | ## Next steps |
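As a compact illustration of the return-value option described above for C#, here's a minimal sketch of a timer-triggered function whose return value is persisted as a single Event Hubs message. The event hub name and connection app-setting name are placeholders:

```csharp
using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class SendHeartbeat
{
    // The function's return value is sent to the event hub as one message.
    [FunctionName("SendHeartbeat")]
    [return: EventHub("outputEventHubMessage", Connection = "EventHubConnectionAppSetting")]
    public static string Run([TimerTrigger("0 */5 * * * *")] TimerInfo timer, ILogger log)
    {
        var message = $"Heartbeat at {DateTime.UtcNow:O}";
        log.LogInformation(message);
        return message;
    }
}
```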
azure-functions | Functions Bindings Service Bus Output | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-service-bus-output.md | The following output parameter types are supported by all C# modalities and exte | **byte[]** | Use for writing binary data messages. When the parameter value is null when the function exits, Functions doesn't create a message. | | **Object** | When a message contains JSON, Functions serializes the object into a JSON message payload. When the parameter value is null when the function exits, Functions creates a message with a null object.| -Messaging-specific parameter types contain additional message metadata. The specific types supported by the Event Grid Output binding depend on the Functions runtime version, the extension package version, and the C# modality used. +Messaging-specific parameter types contain additional message metadata. The specific types supported by the output binding depend on the Functions runtime version, the extension package version, and the C# modality used. # [Extension v5.x](#tab/extensionv5/in-process) Use the [BrokeredMessage](/dotnet/api/microsoft.servicebus.messaging.brokeredmes # [Extension 5.x and higher](#tab/extensionv5/isolated-process) -Messaging-specific types are not yet supported. # [Functions 2.x and higher](#tab/functionsv2/isolated-process) |
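To illustrate the **Object** row of the table above, here's a minimal sketch in which a returned POCO is serialized into a JSON message payload. The `OrderPlaced` type, queue name, and connection setting are placeholders, and the sketch assumes an in-process class library:

```csharp
using System;
using Microsoft.Azure.WebJobs;

// Hypothetical payload type; Functions serializes it to JSON for the message body.
public class OrderPlaced
{
    public string OrderId { get; set; }
    public decimal Total { get; set; }
}

public static class PublishOrder
{
    [FunctionName("PublishOrder")]
    [return: ServiceBus("orders", Connection = "ServiceBusConnection")]
    public static OrderPlaced Run([TimerTrigger("0 0 * * * *")] TimerInfo timer)
    {
        // The returned object becomes a single Service Bus message.
        return new OrderPlaced { OrderId = Guid.NewGuid().ToString(), Total = 12.50m };
    }
}
```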
azure-functions | Functions Bindings Service Bus Trigger | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-service-bus-trigger.md | In [C# class libraries](functions-dotnet-class-library.md), the attribute's cons # [Extension 5.x and higher](#tab/extensionv5/isolated-process) -Messaging-specific types are not yet supported. # [Functions 2.x and higher](#tab/functionsv2/isolated-process) |
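For context on the trigger attribute discussed above, here's a minimal sketch binding to [ServiceBusReceivedMessage], which assumes extension 5.x and an in-process class library; the queue name and connection setting are placeholders:

```csharp
using Azure.Messaging.ServiceBus;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class ProcessOrder
{
    [FunctionName("ProcessOrder")]
    public static void Run(
        [ServiceBusTrigger("orders", Connection = "ServiceBusConnection")]
        ServiceBusReceivedMessage message,
        ILogger log)
    {
        // In the Azure.Messaging.ServiceBus types, the body is exposed as BinaryData.
        log.LogInformation("Received message {Id}: {Body}",
            message.MessageId, message.Body.ToString());
    }
}
```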
azure-functions | Functions Bindings Service Bus | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-service-bus.md | The extension NuGet package you install depends on the C# mode you're using in y # [In-process](#tab/in-process) +_This section describes using a [class library](./functions-dotnet-class-library.md). For [C# scripting], you would need to instead [install the extension bundle][Update your extensions], version 2.x or later._ + Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md). Add the extension to your project by installing this [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.ServiceBus). Functions execute in an isolated C# worker process. To learn more, see [Guide fo Add the extension to your project by installing this [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.servicebus). -# [C# script](#tab/csharp-script) --Functions run as C# script, which is supported primarily for C# portal editing. To update existing binding extensions for C# script apps running in the portal without having to republish your function app, see [Update your extensions]. --You can install this version of the extension in your function app by registering the [extension bundle], version 2.x, or a later version. - The functionality of the extension varies depending on the extension version: Add the extension to your project by installing the [NuGet package](https://www. Functions version 1.x doesn't support the isolated worker process. -# [Extension 5.x+](#tab/extensionv5/csharp-script) ---This version allows you to bind to types from [Azure.Messaging.ServiceBus](/dotnet/api/azure.messaging.servicebus). --This extension is available from the extension bundle v3 by adding the following lines in your `host.json` file: ---To learn more, see [Update your extensions]. --# [Functions 2.x+](#tab/functionsv2/csharp-script) --You can install this version of the extension in your function app by registering the [extension bundle], version 2.x. --# [Functions 1.x](#tab/functionsv1/csharp-script) --Functions 1.x apps automatically have a reference to the [Microsoft.Azure.WebJobs](https://www.nuget.org/packages/Microsoft.Azure.WebJobs) NuGet package, version 2.x. - ::: zone-end Functions 1.x apps automatically have a reference to the extension. ::: zone-end ++## Binding types ++The binding types supported for .NET depend on both the extension version and C# execution mode, which can be one of the following: + +# [In-process class library](#tab/in-process) ++An in-process class library is a compiled C# function that runs in the same process as the Functions runtime. + +# [Isolated process](#tab/isolated-process) ++An isolated worker process class library is a compiled C# function that runs in a process isolated from the runtime. ++++Choose a version to see binding type details for the mode and version. ++# [Extension 5.x+](#tab/extensionv5/in-process) ++The Service Bus extension supports parameter types according to the table below. 
++| Binding scenario | Parameter types | +|-|-| +| Service Bus trigger (single message) | [ServiceBusReceivedMessage]<br/>`string`<br/>`byte[]`<br/>JSON serializable types<sup>1</sup> | +| Service Bus trigger (message batch) | `ServiceBusReceivedMessage[]`<br/>`string[]` | +| Service Bus trigger advanced scenarios<sup>2</sup> | [ServiceBusClient]<br/>[ServiceBusMessageActions]<br/>[ServiceBusSessionMessageActions]<br/>[ServiceBusReceiveActions] | +| Service Bus output (single message) | [ServiceBusMessage]<br/>`string`<br/>`byte[]`<br/>JSON serializable types<sup>1</sup> | +| Service Bus output (multiple messages) | `ICollector<T>` or `IAsyncCollector<T>` where `T` is one of the single message types<br/>[ServiceBusSender] | ++<sup>1</sup> Messages containing JSON data can be deserialized into known plain-old CLR object (POCO) types. ++<sup>2</sup> Advanced scenarios include message settlement, sessions, and transactions. These types are available as separate parameters in addition to the normal trigger parameter. ++# [Functions 2.x+](#tab/functionsv2/in-process) ++Earlier versions of the extension exposed types from the now deprecated [Microsoft.Azure.ServiceBus] namespace. Newer types from [Azure.Messaging.ServiceBus] are exclusive to **Extension 5.x+**. ++This version of the extension supports parameter types according to the table below. ++| Binding scenario | Parameter types | +|-|-| +| Service Bus trigger (single message) | [Microsoft.Azure.ServiceBus.Message]<br/>`string`<br/>`byte[]`<br/>JSON serializable types<sup>1</sup> | +| Service Bus trigger (message batch) | `Message[]`<br/>`string[]` | +| Service Bus trigger advanced scenarios<sup>2</sup> | [IMessageReceiver]<br/>[MessageReceiver]<br/>[IMessageSession] | +| Service Bus output (single message) | [Message]<br/>`string`<br/>`byte[]`<br/>JSON serializable types<sup>1</sup> | +| Service Bus output (multiple messages) | `ICollector<T>` or `IAsyncCollector<T>` where `T` is one of the single message types<br/>[MessageSender] | ++<sup>1</sup> Messages containing JSON data can be deserialized into known plain-old CLR object (POCO) types. ++<sup>2</sup> Advanced scenarios include message settlement, sessions, and transactions. These types are available as separate parameters in addition to the normal trigger parameter. ++# [Functions 1.x](#tab/functionsv1/in-process) ++Functions 1.x exposed types from the deprecated [Microsoft.ServiceBus.Messaging] namespace. Newer types from [Azure.Messaging.ServiceBus] are exclusive to **Extension 5.x+**. To use these, you will need to [upgrade your application to Functions 4.x]. ++# [Extension 5.x+](#tab/extensionv5/isolated-process) ++The isolated worker process supports parameter types according to the tables below. Support for binding to types from [Azure.Messaging.ServiceBus] is in preview. ++**Service Bus trigger** +++**Service Bus output binding** +++# [Functions 2.x+](#tab/functionsv2/isolated-process) ++Earlier versions of extensions in the isolated worker process only support binding to `string`, `byte[]`, and JSON serializable types. Additional options are available to **Extension 5.x+**. ++# [Functions 1.x](#tab/functionsv1/isolated-process) ++Functions version 1.x doesn't support the isolated worker process. To use the isolated worker model, [upgrade your application to Functions 4.x]. 
++++[Azure.Messaging.ServiceBus]: /dotnet/api/azure.messaging.servicebus +[ServiceBusReceivedMessage]: /dotnet/api/azure.messaging.servicebus.servicebusreceivedmessage +[ServiceBusMessage]: /dotnet/api/azure.messaging.servicebus.servicebusmessage +[ServiceBusClient]: /dotnet/api/azure.messaging.servicebus.servicebusclient +[ServiceBusSender]: /dotnet/api/azure.messaging.servicebus.servicebussender ++[ServiceBusMessageActions]: /dotnet/api/microsoft.azure.webjobs.servicebus.servicebusmessageactions +[ServiceBusSessionMessageActions]: /dotnet/api/microsoft.azure.webjobs.servicebus.servicebussessionmessageactions +[ServiceBusReceiveActions]: /dotnet/api/microsoft.azure.webjobs.servicebus.servicebusreceiveactions ++[Microsoft.Azure.ServiceBus]: /dotnet/api/microsoft.azure.servicebus +[Message]: /dotnet/api/microsoft.azure.servicebus.message +[IMessageReceiver]: /dotnet/api/microsoft.azure.servicebus.core.imessagereceiver +[MessageReceiver]: /dotnet/api/microsoft.azure.servicebus.core.messagereceiver +[IMessageSession]: /dotnet/api/microsoft.azure.servicebus.imessagesession +[MessageSender]: /dotnet/api/microsoft.azure.servicebus.core.messagesender ++[Microsoft.ServiceBus.Messaging]: /dotnet/api/microsoft.servicebus.messaging ++[upgrade your application to Functions 4.x]: ./migrate-version-1-version-4.md ++ <a name="host-json"></a> ## host.json settings For a reference of host.json in Functions 1.x, see [host.json reference for Azur [extension bundle]: ./functions-bindings-register.md#extension-bundles [NuGet package]: https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.ServiceBus/ [Update your extensions]: ./functions-bindings-register.md++[C# scripting]: ./functions-reference-csharp.md |
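To illustrate the advanced trigger scenarios noted above, here's a minimal sketch of explicit message settlement with [ServiceBusMessageActions], passed as an extra parameter alongside the normal trigger parameter. It assumes extension 5.x with an in-process class library, and that auto-completion is switched off via the attribute's `AutoCompleteMessages` property; the queue name and connection setting are placeholders:

```csharp
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.ServiceBus;

public static class SettleExplicitly
{
    [FunctionName("SettleExplicitly")]
    public static async Task Run(
        [ServiceBusTrigger("orders", Connection = "ServiceBusConnection",
            AutoCompleteMessages = false)]
        ServiceBusReceivedMessage message,
        ServiceBusMessageActions messageActions)
    {
        // Explicitly complete (settle) the message instead of relying on auto-completion.
        await messageActions.CompleteMessageAsync(message);
    }
}
```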
azure-functions | Functions Bindings Storage Blob Input | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob-input.md | See the [Example section](#example) for complete examples. ::: zone pivot="programming-language-csharp" -The binding types supported by Blob input depend on the extension package version and the C# modality used in your function app. For more information, see [Binding types](./functions-bindings-storage-blob.md#binding-types). +The binding types supported by Blob input depend on the extension package version and the C# modality used in your function app. ++# [In-process](#tab/in-process) ++See [Binding types](./functions-bindings-storage-blob.md?tabs=in-process#binding-types) for a list of supported types. ++# [Isolated process](#tab/isolated-process) +++# [C# Script](#tab/csharp-script) ++See [Binding types](./functions-bindings-storage-blob.md?tabs=in-process#binding-types) for a list of supported types. ++ Binding to `string` or `Byte[]` is only recommended when the blob size is small, because the entire blob contents are loaded into memory. For most blobs, use a `Stream` or `BlobClient` type. For more information, see [Concurrency and memory usage](./functions-bindings-storage-blob-trigger.md#concurrency-and-memory-usage). |
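Following the guidance above to prefer a client type for larger blobs, here's a minimal sketch of an input binding to `BlobClient`, which reads metadata without loading the blob contents into memory. It assumes extension 5.x in-process; the queue name, container path, and `{queueTrigger}` expression resolving to the blob name are placeholders:

```csharp
using System.IO;
using System.Threading.Tasks;
using Azure.Storage.Blobs;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class ReadBlobProperties
{
    [FunctionName("ReadBlobProperties")]
    public static async Task Run(
        [QueueTrigger("blob-requests")] string blobName,
        // Client types require Access to be set to FileAccess.ReadWrite.
        [Blob("samples-workitems/{queueTrigger}", FileAccess.ReadWrite)] BlobClient blobClient,
        ILogger log)
    {
        // Only the blob's properties are fetched; the contents stay in storage.
        var properties = await blobClient.GetPropertiesAsync();
        log.LogInformation("Blob {Name} is {Length} bytes",
            blobClient.Name, properties.Value.ContentLength);
    }
}
```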
azure-functions | Functions Bindings Storage Blob Output | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob-output.md | See the [Example section](#example) for complete examples. ## Usage ::: zone pivot="programming-language-csharp" -The binding types supported by Blob output depend on the extension package version and the C# modality used in your function app. For more information, see [Binding types](./functions-bindings-storage-blob.md#binding-types). ++The binding types supported by Blob output depend on the extension package version and the C# modality used in your function app. ++# [In-process](#tab/in-process) ++See [Binding types](./functions-bindings-storage-blob.md?tabs=in-process#binding-types) for a list of supported types. ++# [Isolated process](#tab/isolated-process) +++# [C# Script](#tab/csharp-script) ++See [Binding types](./functions-bindings-storage-blob.md?tabs=in-process#binding-types) for a list of supported types. ++ Binding to `string` or `Byte[]` is only recommended when the blob size is small, because the entire blob contents are loaded into memory. For most blobs, use a `Stream` or `BlobClient` type. For more information, see [Concurrency and memory usage](./functions-bindings-storage-blob-trigger.md#concurrency-and-memory-usage). |
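To show the `Stream` approach recommended above on the output side, here's a minimal sketch that copies one blob to another without buffering the whole blob in memory. The queue name and container paths are placeholders:

```csharp
using System.IO;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;

public static class CopyToBlob
{
    [FunctionName("CopyToBlob")]
    public static async Task Run(
        [QueueTrigger("copy-requests")] string name,
        [Blob("samples-workitems/{queueTrigger}", FileAccess.Read)] Stream input,
        [Blob("samples-output/{queueTrigger}", FileAccess.Write)] Stream output)
    {
        // Streaming the copy keeps memory usage flat regardless of blob size.
        await input.CopyToAsync(output);
    }
}
```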
azure-functions | Functions Bindings Storage Blob Trigger | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob-trigger.md | Metadata is available through the `$TriggerMetadata` parameter. ## Usage ::: zone pivot="programming-language-csharp" -The binding types supported by Blob trigger depend on the extension package version and the C# modality used in your function app. For more information, see [Binding types](./functions-bindings-storage-blob.md#binding-types). ++The binding types supported by Blob trigger depend on the extension package version and the C# modality used in your function app. ++# [In-process](#tab/in-process) ++See [Binding types](./functions-bindings-storage-blob.md?tabs=in-process#binding-types) for a list of supported types. ++# [Isolated process](#tab/isolated-process) +++# [C# Script](#tab/csharp-script) ++See [Binding types](./functions-bindings-storage-blob.md?tabs=in-process#binding-types) for a list of supported types. ++ Binding to `string` or `Byte[]` is only recommended when the blob size is small, because the entire blob contents are loaded into memory. For most blobs, use a `Stream` or `BlobClient` type. For more information, see [Concurrency and memory usage](./functions-bindings-storage-blob-trigger.md#concurrency-and-memory-usage). |
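As an illustration of the `Stream` guidance above, here's a minimal sketch of a blob trigger that streams the new blob and also binds the `{name}` path expression; the container name is a placeholder:

```csharp
using System.IO;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class OnBlobUpload
{
    [FunctionName("OnBlobUpload")]
    public static void Run(
        [BlobTrigger("samples-workitems/{name}")] Stream blob,
        string name,  // resolved from the {name} binding expression
        ILogger log)
    {
        // Binding to Stream keeps memory usage flat regardless of blob size.
        log.LogInformation("Blob {Name} uploaded, {Length} bytes", name, blob.Length);
    }
}
```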
azure-functions | Functions Bindings Storage Blob | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob.md | The extension NuGet package you install depends on the C# mode you're using in y Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md). +In a variation of this model, Functions can be run using [C# scripting], which is supported primarily for C# portal editing. To update existing binding extensions for C# script apps running in the portal without having to republish your function app, see [Update your extensions]. + # [Isolated process](#tab/isolated-process) Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md). -# [C# script](#tab/csharp-script) --Functions run as C# script, which is supported primarily for C# portal editing. To update existing binding extensions for C# script apps running in the portal without having to republish your function app, see [Update your extensions]. - The functionality of the extension varies depending on the extension version: # [Extension 5.x and higher](#tab/extensionv5/in-process) +_This section describes using a [class library](./functions-dotnet-class-library.md). For [C# scripting], you would need to instead [install the extension bundle][Update your extensions], version 4.x._ + [!INCLUDE [functions-bindings-supports-identity-connections-note](../../includes/functions-bindings-supports-identity-connections-note.md)] This version allows you to bind to types from [Azure.Storage.Blobs](/dotnet/api/azure.storage.blobs). Learn more about how these new types are different from `WindowsAzure.Storage` and `Microsoft.Azure.Storage` and how to migrate to them from the [Azure.Storage.Blobs Migration Guide](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/storage/Azure.Storage.Blobs/AzureStorageNetMigrationV12.md). dotnet add package Microsoft.Azure.WebJobs.Extensions.Storage.Blobs --version 5. # [Functions 2.x and higher](#tab/functionsv2/in-process) +_This section describes using a [class library](./functions-dotnet-class-library.md). For [C# scripting], you would need to instead [install the extension bundle][Update your extensions], version 2.x._ + Working with the trigger and bindings requires that you reference the appropriate NuGet package. Install the [Microsoft.Azure.WebJobs.Extensions.Storage NuGet package, version 4.x]. The package is used for .NET class libraries while the extension bundle is used for all other application types. # [Functions 1.x](#tab/functionsv1/in-process) Add the extension to your project by installing the [Microsoft.Azure.Functions.W Functions version 1.x doesn't support isolated worker process. -# [Extension 5.x and higher](#tab/extensionv5/csharp-script) ---This extension version is available from the extension bundle v3 by adding the following lines in your `host.json` file: ---To learn more, see [Update your extensions]. --# [Functions 2.x and higher](#tab/functionsv2/csharp-script) --You can install this version of the extension in your function app by registering the [extension bundle], version 2.x. 
--# [Functions 1.x](#tab/functionsv1/csharp-script) --Functions 1.x apps automatically have a reference to the [Microsoft.Azure.WebJobs](https://www.nuget.org/packages/Microsoft.Azure.WebJobs) NuGet package, version 2.x. - ::: zone-end Functions 1.x apps automatically have a reference to the extension. ::: zone-end ::: zone pivot="programming-language-csharp"+ ## Binding types The binding types supported for .NET depend on both the extension version and C# execution mode, which can be one of the following: -# [In-process class library](#tab/in-process) +# [In-process](#tab/in-process) An in-process class library is a compiled C# function that runs in the same process as the Functions runtime. An in-process class library is a compiled C# function that runs in the same process a An isolated worker process class library is a compiled C# function that runs in a process isolated from the runtime. -# [C# script](#tab/csharp-script) --C# script is used primarily when creating C# functions in the Azure portal. - Choose a version to see binding type details for the mode and version. Choose a version to see binding type details for the mode and version. The Azure Blobs extension supports parameter types according to the table below. -| Binding | Parameter types | |-|-|-| -| Blob trigger | [Stream]<br/>`TextReader`<br/>`string`<br/>`byte[]`<br/>[BlobClient]<sup>1</sup><br/>[BlockBlobClient]<sup>1</sup><br/>[PageBlobClient]<sup>1</sup><br/>[AppendBlobClient]<sup>1</sup><br/>[BlobBaseClient]<sup>1</sup>| -| Blob input | [Stream]<br/>`TextReader`<br/>`string`<br/>`byte[]`<br/>[BlobClient]<sup>1</sup><br/>[BlockBlobClient]<sup>1</sup><br/>[PageBlobClient]<sup>1</sup><br/>[AppendBlobClient]<sup>1</sup><br/>[BlobBaseClient]<sup>1</sup><br/>`IEnumerable<T>`<sup>2</sup>| -| Blob output |[Stream]<br/>`TextWriter`<br/>`string`<br/>`byte[]` | +| Blob trigger | [Stream]<br/>`TextReader`<br/>`string`<br/>`byte[]`<br/>[BinaryData]<br/>[BlobClient]<sup>1</sup><br/>[BlockBlobClient]<sup>1</sup><br/>[PageBlobClient]<sup>1</sup><br/>[AppendBlobClient]<sup>1</sup><br/>[BlobBaseClient]<sup>1</sup>| +| Blob input (single blob) | [Stream]<br/>`TextReader`<br/>`string`<br/>`byte[]`<br/>[BinaryData]<br/>[BlobClient]<sup>1</sup><br/>[BlockBlobClient]<sup>1</sup><br/>[PageBlobClient]<sup>1</sup><br/>[AppendBlobClient]<sup>1</sup><br/>[BlobBaseClient]<sup>1</sup>| +| Blob input (multiple blobs from a container) | `IEnumerable<T>` where `T` is one of the single blob input binding types | +| Blob output (single blob) | [Stream]<br/>`TextWriter`<br/>`string`<br/>`byte[]` | +| Blob output (multiple blobs) | `ICollector<T>` or `IAsyncCollector<T>` where `T` is one of the single blob output binding types | <sup>1</sup> The client types require the `Access` property of the attribute to be set to `FileAccess.ReadWrite`. -<sup>2</sup> `IEnumerable<T>` provides an enumeration of blobs in the container. Here, `T` can be any of the other supported types. -
+For examples using these types, see [the GitHub repository for the extension](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/storage/Microsoft.Azure.WebJobs.Extensions.Storage.Blobs#examples). Learn more about types from the Azure SDK, how they are different from earlier versions, and how to migrate to them from the [Azure.Storage.Blobs Migration Guide](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/storage/Azure.Storage.Blobs/AzureStorageNetMigrationV12.md). # [Functions 2.x and higher](#tab/functionsv2/in-process) This version of the Azure Blobs extension supports parameter types according to |-|-|-| | Blob trigger | [Stream]<br/>`TextReader`<br/>`string`<br/>`byte[]`<br/>[ICloudBlob]<sup>1</sup><br/>[CloudBlockBlob]<sup>1</sup><br/>[CloudPageBlob]<sup>1</sup><br/>[CloudAppendBlob]<sup>1</sup>| | Blob input | [Stream]<br/>`TextReader`<br/>`string`<br/>`byte[]`<br/>[ICloudBlob]<sup>1</sup><br/>[CloudBlockBlob]<sup>1</sup><br/>[CloudPageBlob]<sup>1</sup><br/>[CloudAppendBlob]<sup>1</sup>|-| Blob output |[Stream]<br/>`TextWriter`<br/>`string`<br/>`byte[]` | +| Blob output | [Stream]<br/>`TextWriter`<br/>`string`<br/>`byte[]` | <sup>1</sup> These types require the `Access` property of the attribute to be set to `FileAccess.ReadWrite`. This version of the Azure Blobs extension supports parameter types according to # [Functions 1.x](#tab/functionsv1/in-process) -Functions 1.x exposed types from the now deprecated [Microsoft.Azure.Storage.Blob] namespace. Newer types from [Azure.Storage.Blobs] are exclusive to later host versions with **extension 5.x and higher**. --Functions 1.x supports parameter types according to the table below. --| Binding | Parameter types | -|-|-|-| -| Blob trigger | [Stream]<br/>`TextReader`<br/>`string`<br/>`byte[]`<br/>[ICloudBlob]<sup>1</sup><br/>[CloudBlockBlob]<sup>1</sup><br/>[CloudPageBlob]<sup>1</sup><br/>[CloudAppendBlob]<sup>1</sup>| -| Blob input | [Stream]<br/>`TextReader`<br/>`string`<br/>`byte[]`<br/>[ICloudBlob]<sup>1</sup><br/>[CloudBlockBlob]<sup>1</sup><br/>[CloudPageBlob]<sup>1</sup><br/>[CloudAppendBlob]<sup>1</sup>| -| Blob output |[Stream]<br/>`TextWriter`<br/>`string`<br/>`byte[]` | --<sup>1</sup> These types require the `Access` property of the attribute to be set to `FileAccess.ReadWrite`. --<sup>2</sup> `IEnumerable<T>` provides an enumeration of blobs in the container. Here, `T` can be any of the other supported types. +Functions 1.x exposed types from the deprecated [Microsoft.WindowsAzure.Storage] namespace. Newer types from [Azure.Storage.Blobs] are exclusive to **Extension 5.x and higher**. To use these, you will need to [upgrade your application to Functions 4.x]. # [Extension 5.x and higher](#tab/extensionv5/isolated-process) -The isolated worker process supports parameter types according to the table below. Binding to string parameters is currently the only option that is generally available. Support for binding to `Byte[]`, to `Stream`, and to types from [Azure.Storage.Blobs] is in preview. +The isolated worker process supports parameter types according to the tables below. Support for binding to `Stream`, and to types from [Azure.Storage.Blobs] is in preview. 
-| Binding | Parameter types | Preview parameter types<sup>1</sup> | -|-|-|-| -| Blob trigger | `string` | `Byte[]`<br/>[Stream]<br/>[BlobClient]<br/>[BlockBlobClient]<br/>[PageBlobClient]<br/>[AppendBlobClient]<br/>[BlobBaseClient]<br/>[BlobContainerClient]<br/>JSON serializable types<sup>2</sup>| -| Blob input | `string` | `Byte[]`<br/>[Stream]<br/>[BlobClient]<br/>[BlockBlobClient]<br/>[PageBlobClient]<br/>[AppendBlobClient]<br/>[BlobBaseClient]<br/>[BlobContainerClient]<sup>3</sup><br/>JSON serializable types<sup>2</sup>| -| Blob output | `string` | No preview types<sup>4</sup> | +**Blob trigger** -<sup>1</sup> Preview types require use of [Microsoft.Azure.Functions.Worker.Extensions.Storage.Blobs 5.1.0-preview1 or later][sdk-types-extension-version], [Microsoft.Azure.Functions.Worker 1.12.1-preview1 or later][sdk-types-worker-version], and [Microsoft.Azure.Functions.Worker.Sdk 1.9.0-preview1 or later][sdk-types-worker-sdk-version]. When developing on your local machine, you will need [Azure Functions Core Tools version 4.0.5000 or later](./functions-run-local.md). Collections of preview types, such as arrays and `IEnumerable<T>`, are not supported. When using a preview type, [binding expressions](./functions-bindings-expressions-patterns.md) that rely on trigger data are not supported. -[sdk-types-extension-version]: https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.Storage.Blobs/5.1.0-preview1 -[sdk-types-worker-version]: https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker/1.12.1-preview1 -[sdk-types-worker-sdk-version]: https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Sdk/1.9.0-preview1 +**Blob input binding** -<sup>2</sup> Blobs containing JSON data can be deserialized into known plain-old CLR object (POCO) types. -<sup>3</sup> The `BlobPath` configuration for an input binding to [BlobContainerClient] currently requires the presence of a blob name. It is not sufficient to provide just the container name. A placeholder value may be used and will not change the behavior. For example, setting `[BlobInput("samples-workitems/placeholder.txt")] BlobContainerClient containerClient` does not consider whether any `placeholder.txt` exists or not, and the client will work with the overall "samples-workitems" container. +**Blob output binding** -<sup>4</sup> Support for SDK type bindings does not presently extend to output bindings. # [Functions 2.x and higher](#tab/functionsv2/isolated-process) Earlier versions of extensions in the isolated worker process only support bindi # [Functions 1.x](#tab/functionsv1/isolated-process) -Functions version 1.x doesn't support isolated worker process. --# [Extension 5.x and higher](#tab/extensionv5/csharp-script) --The Azure Blobs extension supports parameter types according to the table below. 
--| Binding | Parameter types | -|-|-|-| -| Blob trigger | [Stream]<br/>`TextReader`<br/>`string`<br/>`byte[]`<br/>[BlobClient]<sup>1</sup><br/>[BlockBlobClient]<sup>1</sup><br/>[PageBlobClient]<sup>1</sup><br/>[AppendBlobClient]<sup>1</sup><br/>[BlobBaseClient]<sup>1</sup>| -| Blob input | [Stream]<br/>`TextReader`<br/>`string`<br/>`byte[]`<br/>[BlobClient]<sup>1</sup><br/>[BlockBlobClient]<sup>1</sup><br/>[PageBlobClient]<sup>1</sup><br/>[AppendBlobClient]<sup>1</sup><br/>[BlobBaseClient]<sup>1</sup><br/>`IEnumerable<T>`<sup>2</sup>| -| Blob output |[Stream]<br/>`TextWriter`<br/>`string`<br/>`byte[]` | --<sup>1</sup> The client types require the `Access` property of the attribute to be set to `FileAccess.ReadWrite`. --<sup>2</sup> `IEnumerable<T>` provides an enumeration of blobs in the container. Here, `T` can be any of the other supported types. --# [Functions 2.x and higher](#tab/functionsv2/csharp-script) --Earlier versions of the extension exposed types from the now deprecated [Microsoft.Azure.Storage.Blob] namespace. Newer types from [Azure.Storage.Blobs] are exclusive to **extension 5.x and higher**. --This version of the Azure Blobs extension supports parameter types according to the table below. --| Binding | Parameter types | -|-|-|-| -| Blob trigger | [Stream]<br/>`TextReader`<br/>`string`<br/>`byte[]`<br/>[ICloudBlob]<sup>1</sup><br/>[CloudBlockBlob]<sup>1</sup><br/>[CloudPageBlob]<sup>1</sup><br/>[CloudAppendBlob]<sup>1</sup>| -| Blob input | [Stream]<br/>`TextReader`<br/>`string`<br/>`byte[]`<br/>[ICloudBlob]<sup>1</sup><br/>[CloudBlockBlob]<sup>1</sup><br/>[CloudPageBlob]<sup>1</sup><br/>[CloudAppendBlob]<sup>1</sup>| -| Blob output |[Stream]<br/>`TextWriter`<br/>`string`<br/>`byte[]` | --<sup>1</sup> These types require the `Access` property of the attribute to be set to `FileAccess.ReadWrite`. --<sup>2</sup> `IEnumerable<T>` provides an enumeration of blobs in the container. Here, `T` can be any of the other supported types. --# [Functions 1.x](#tab/functionsv1/csharp-script) --Functions 1.x exposed types from the now deprecated [Microsoft.Azure.Storage.Blob] namespace. Newer types from [Azure.Storage.Blobs] are exclusive to later host versions with **extension 5.x and higher**. --Functions 1.x supports parameter types according to the table below. --| Binding | Parameter types | -|-|-|-| -| Blob trigger | [Stream]<br/>`TextReader`<br/>`string`<br/>`byte[]`<br/>[ICloudBlob]<sup>1</sup><br/>[CloudBlockBlob]<sup>1</sup><br/>[CloudPageBlob]<sup>1</sup><br/>[CloudAppendBlob]<sup>1</sup>| -| Blob input | [Stream]<br/>`TextReader`<br/>`string`<br/>`byte[]`<br/>[ICloudBlob]<sup>1</sup><br/>[CloudBlockBlob]<sup>1</sup><br/>[CloudPageBlob]<sup>1</sup><br/>[CloudAppendBlob]<sup>1</sup>| -| Blob output |[Stream]<br/>`TextWriter`<br/>`string`<br/>`byte[]` | --<sup>1</sup> These types require the `Access` property of the attribute to be set to `FileAccess.ReadWrite`. --<sup>2</sup> `IEnumerable<T>` provides an enumeration of blobs in the container. Here, `T` can be any of the other supported types. +Functions version 1.x doesn't support isolated worker process. To use the isolated worker model, [upgrade your application to Functions 4.x]. [Stream]: /dotnet/api/system.io.stream+[BinaryData]: /dotnet/api/system.binarydata [Azure.Storage.Blobs]: /dotnet/api/azure.storage.blobs [BlobClient]: /dotnet/api/azure.storage.blobs.blobclient Functions 1.x supports parameter types according to the table below. 
[CloudPageBlob]: /dotnet/api/microsoft.azure.storage.blob.cloudpageblob [CloudAppendBlob]: /dotnet/api/microsoft.azure.storage.blob.cloudappendblob +[upgrade your application to Functions 4.x]: ./migrate-version-1-version-4.md + :::zone-end ## host.json settings This section describes the function app configuration settings available for fun [Microsoft.Azure.Functions.Worker.Extensions.Storage NuGet package, version 4.x]: https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.Storage/4.0.4 [Update your extensions]: ./functions-bindings-register.md [Azure Tools extension]: https://marketplace.visualstudio.com/items?itemName=ms-vscode.vscode-node-azure-pack++[Microsoft.WindowsAzure.Storage]: /dotnet/api/microsoft.windowsazure.storage ++[C# scripting]: ./functions-reference-csharp.md |
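To make the in-process parameter-type table above concrete, here is a minimal sketch of a blob-triggered function using the Extension 5.x types. It assumes the Microsoft.Azure.WebJobs.Extensions.Storage.Blobs package; the container names `samples-workitems` and `samples-output` and the function name are hypothetical.

```csharp
using System.IO;
using Azure.Storage.Blobs;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class BlobBindingSketch
{
    // Fires when a blob lands in the (hypothetical) "samples-workitems" container
    // and binds the blob content as a Stream.
    [FunctionName("ProcessBlob")]
    public static void Run(
        [BlobTrigger("samples-workitems/{name}")] Stream triggerBlob,
        string name,
        // Client types such as BlobClient require Access = FileAccess.ReadWrite.
        [Blob("samples-output/{name}", FileAccess.ReadWrite)] BlobClient outputClient,
        ILogger log)
    {
        log.LogInformation("Processing blob {Name} ({Length} bytes)", name, triggerBlob.Length);
        // outputClient can now read from or write to samples-output/{name}.
    }
}
```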
azure-functions | Functions Bindings Storage Queue Output | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-queue-output.md | You can write multiple messages to the queue by using one of the following types # [Extension 5.x+](#tab/extensionv5/isolated-process) -Isolated worker process currently only supports binding to string parameters. # [Extension 2.x+](#tab/extensionv2/isolated-process) |
azure-functions | Functions Bindings Storage Queue Trigger | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-queue-trigger.md | When binding to an object, the Functions runtime tries to deserialize the JSON p # [Extension 5.x+](#tab/extensionv5/isolated-process) -Isolated worker process currently only supports binding to string parameters. # [Extension 2.x+](#tab/extensionv2/isolated-process) |
azure-functions | Functions Bindings Storage Queue | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-queue.md | Azure Functions can run as new Azure Queue storage messages are created and can | Write queue storage messages |[Output binding](./functions-bindings-storage-queue-output.md) | ::: zone pivot="programming-language-csharp"+ ## Install extension The extension NuGet package you install depends on the C# mode you're using in your function app: The extension NuGet package you install depends on the C# mode you're using in y Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md). +In a variation of this model, Functions can be run using [C# scripting], which is supported primarily for C# portal editing. To update existing binding extensions for C# script apps running in the portal without having to republish your function app, see [Update your extensions]. + # [Isolated process](#tab/isolated-process) Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md). -# [C# script](#tab/csharp-script) --Functions run as C# script, which is supported primarily for C# portal editing. To update existing binding extensions for C# script apps running in the portal without having to republish your function app, see [Update your extensions]. - The functionality of the extension varies depending on the extension version: The functionality of the extension varies depending on the extension version: <a name="storage-extension-5x-and-higher"></a> +_This section describes using a [class library](./functions-dotnet-class-library.md). For [C# scripting], you would need to instead [install the extension bundle][Update your extensions], version 4.x._ + [!INCLUDE [functions-bindings-supports-identity-connections-note](../../includes/functions-bindings-supports-identity-connections-note.md)] -This version allows you to bind to types from [Azure.Storage.Queues](/dotnet/api/azure.storage.queues). +This version allows you to bind to types from [Azure.Storage.Queues]. This extension is available by installing the [Microsoft.Azure.WebJobs.Extensions.Storage.Queues NuGet package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Storage.Queues), version 5.x. dotnet add package Microsoft.Azure.WebJobs.Extensions.Storage.Queues --version 5 # [Functions 2.x+](#tab/functionsv2/in-process) <a name="functions-2x-and-higher"></a>++_This section describes using a [class library](./functions-dotnet-class-library.md). For [C# scripting], you would need to instead [install the extension bundle][Update your extensions], version 2.x._ + Working with the trigger and bindings requires that you reference the appropriate NuGet package. Install the [NuGet package], version 3.x or 4.x. # [Functions 1.x](#tab/functionsv1/in-process) Add the extension to your project by installing the [NuGet package](https://www. Functions version 1.x doesn't support the isolated worker process. -# [Extension 5.x+](#tab/extensionv5/csharp-script) ---This extension version is available from the extension bundle v3 by adding the following lines in your `host.json` file: ---To learn more, see [Update your extensions]. 
--# [Functions 2.x+](#tab/functionsv2/csharp-script) --You can install this version of the extension in your function app by registering the [extension bundle], version 2.x. --# [Functions 1.x](#tab/functionsv1/csharp-script) --Functions 1.x apps automatically have a reference to the [Microsoft.Azure.WebJobs](https://www.nuget.org/packages/Microsoft.Azure.WebJobs) NuGet package, version 2.x. - ::: zone-end Functions 1.x apps automatically have a reference to the extension. ::: zone-end ++## Binding types ++The binding types supported for .NET depend on both the extension version and C# execution mode, which can be one of the following: + # [In-process](#tab/in-process) ++An in-process class library is a compiled C# function that runs in the same process as the Functions runtime. + # [Isolated process](#tab/isolated-process) ++An isolated worker process class library is a compiled C# function that runs in a process isolated from the runtime. + +++Choose a version to see binding type details for the mode and version. ++# [Extension 5.x+](#tab/extensionv5/in-process) ++The Azure Queues extension supports parameter types according to the table below. ++ Binding scenario | Parameter types | +|-|-| +| Queue trigger | [QueueMessage]<br/>JSON serializable types<sup>1</sup><br/>`string`<br/>`byte[]`<br/>[BinaryData] | +| Queue output (single message) | [QueueMessage]<br/>JSON serializable types<sup>1</sup><br/>`string`<br/>`byte[]`<br/>[BinaryData] | +| Queue output (multiple messages) | [QueueClient]<br/>`ICollector<T>` or `IAsyncCollector<T>` where `T` is one of the single message types | ++<sup>1</sup> Messages containing JSON data can be deserialized into known plain-old CLR object (POCO) types. ++# [Functions 2.x+](#tab/functionsv2/in-process) ++Earlier versions of the extension exposed types from the now deprecated [Microsoft.Azure.Storage.Queues] namespace. Newer types from [Azure.Storage.Queues] are exclusive to **Extension 5.x+**. ++This version of the extension supports parameter types according to the table below. ++ Binding scenario | Parameter types | +|-|-| +| Queue trigger | [CloudQueueMessage]<br/>JSON serializable types<sup>1</sup><br/>`string`<br/>`byte[]` | +| Queue output | [CloudQueueMessage]<br/>JSON serializable types<sup>1</sup><br/>`string`<br/>`byte[]`<br/>[CloudQueue] | ++<sup>1</sup> Messages containing JSON data can be deserialized into known plain-old CLR object (POCO) types. ++# [Functions 1.x](#tab/functionsv1/in-process) ++Functions 1.x exposed types from the deprecated [Microsoft.WindowsAzure.Storage] namespace. Newer types from [Azure.Storage.Queues] are exclusive to **Extension 5.x+**. To use these, you will need to [upgrade your application to Functions 4.x]. ++# [Extension 5.x+](#tab/extensionv5/isolated-process) ++The isolated worker process supports parameter types according to the tables below. Support for binding to types from [Azure.Storage.Queues] is in preview. ++**Queue trigger** +++**Queue output binding** +++# [Functions 2.x+](#tab/functionsv2/isolated-process) ++Earlier versions of extensions in the isolated worker process only support binding to string types. Additional options are available in **Extension 5.x+**. ++# [Functions 1.x](#tab/functionsv1/isolated-process) ++Functions version 1.x doesn't support the isolated worker process. To use the isolated worker model, [upgrade your application to Functions 4.x]. 
++++[QueueMessage]: /dotnet/api/azure.storage.queues.models.queuemessage +[QueueClient]: /dotnet/api/azure.storage.queues.queueclient +[BinaryData]: /dotnet/api/system.binarydata ++[CloudQueueMessage]: /dotnet/api/microsoft.azure.storage.queue.cloudqueuemessage +[CloudQueue]: /dotnet/api/microsoft.azure.storage.queue.cloudqueue ++[upgrade your application to Functions 4.x]: ./migrate-version-1-version-4.md ++ ## <a name="host-json"></a>host.json settings [!INCLUDE [functions-host-json-section-intro](../../includes/functions-host-json-section-intro.md)] Functions 1.x apps automatically have a reference to the extension. [extension bundle]: ./functions-bindings-register.md#extension-bundles [NuGet package]: https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Storage [Update your extensions]: ./functions-bindings-register.md++[Azure.Storage.Queues]: /dotnet/api/azure.storage.queues +[Microsoft.Azure.Storage.Queues]: /dotnet/api/microsoft.azure.storage.queue +[Microsoft.WindowsAzure.Storage]: /dotnet/api/microsoft.windowsazure.storage ++[C# scripting]: ./functions-reference-csharp.md |
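As a concrete illustration of the Extension 5.x+ in-process types above, the following is a minimal sketch that binds the trigger message as a [QueueMessage] and writes results through `ICollector<string>`. The queue names and function name are hypothetical.

```csharp
using Azure.Storage.Queues.Models;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class QueueBindingSketch
{
    [FunctionName("ForwardQueueMessage")]
    public static void Run(
        // Binds the dequeued message as a QueueMessage from Azure.Storage.Queues.Models.
        [QueueTrigger("input-queue")] QueueMessage message,
        // ICollector<T> lets the function write multiple output messages.
        [Queue("output-queue")] ICollector<string> output,
        ILogger log)
    {
        log.LogInformation("Dequeued {Id}: {Body}", message.MessageId, message.Body);
        output.Add(message.Body.ToString());
    }
}
```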
azure-functions | Functions Bindings Storage Table Input | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-table-input.md | To return a specific entity by key, use a binding parameter that derives from [T To execute queries that return multiple entities, bind to a [CloudTable] object. You can then use this object to create and execute queries against the bound table. Note that [CloudTable] and related APIs belong to the [Microsoft.Azure.Cosmos.Table](/dotnet/api/microsoft.azure.cosmos.table) namespace. - # [Functions 1.x](#tab/functionsv1/in-process) To return a specific entity by key, use a binding parameter that derives from [TableEntity]. The specific `TableName`, `PartitionKey`, and `RowKey` are used to try and get a specific entity from the table. To execute queries that return multiple entities, bind to an [`IQueryable<T>`] o # [Azure Tables extension](#tab/table-api/isolated-process) -To return a specific entity by key, use a binding parameter that derives from [TableEntity](/dotnet/api/azure.data.tables.tableentity). --To execute queries that return multiple entities, bind to a [TableClient] object. You can then use this object to create and execute queries against the bound table. Note that [TableClient] and related APIs belong to the [Azure.Data.Tables](/dotnet/api/azure.data.tables) namespace. # [Combined Azure Storage extension](#tab/storage-extension/isolated-process) |
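For the query scenario described above, a minimal in-process sketch using the Azure Tables extension (Microsoft.Azure.WebJobs.Extensions.Tables) might look like the following; the table name `MyTable`, the queue name, and the partition-key-per-message design are hypothetical.

```csharp
using Azure.Data.Tables;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class TableQuerySketch
{
    [FunctionName("QueryTable")]
    public static void Run(
        // The queue message carries the partition key to query (hypothetical design).
        [QueueTrigger("table-query-requests")] string partition,
        // Binding to TableClient gives full query access to the bound table.
        [Table("MyTable")] TableClient table,
        ILogger log)
    {
        foreach (TableEntity entity in table.Query<TableEntity>(e => e.PartitionKey == partition))
        {
            log.LogInformation("Found entity {RowKey}", entity.RowKey);
        }
    }
}
```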
azure-functions | Functions Bindings Storage Table Output | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-table-output.md | C# script is used primarily when creating C# functions in the Azure portal. Choose a version to see usage details for the mode and version. -# [Combined Azure Storage extension](#tab/storage-extension/in-process) +# [Azure Tables extension](#tab/table-api/in-process) The following types are supported for `out` parameters and return types: -- A plain-old CLR object (POCO) that includes the `PartitionKey` and `RowKey` properties. You can accompany these properties by implementing `ITableEntity` or inheriting `TableEntity`.-- `ICollector<T>` or `IAsyncCollector<T>` where `T` includes the `PartitionKey` and `RowKey` properties. You can accompany these properties by implementing `ITableEntity` or inheriting `TableEntity`.+- A plain-old CLR object (POCO) that includes the `PartitionKey` and `RowKey` properties. You can accompany these properties by implementing `ITableEntity`. +- `ICollector<T>` or `IAsyncCollector<T>` where `T` includes the `PartitionKey` and `RowKey` properties. You can accompany these properties by implementing `ITableEntity`. -You can also bind to `CloudTable` [from the Storage SDK](/dotnet/api/microsoft.azure.cosmos.table.cloudtable) as a method parameter. You can then use that object to write to the table. +You can also bind to `TableClient` [from the Azure SDK](/dotnet/api/azure.data.tables.tableclient). You can then use that object to write to the table. -# [Azure Tables extension](#tab/table-api/in-process) +# [Combined Azure Storage extension](#tab/storage-extension/in-process) The following types are supported for `out` parameters and return types: -- A plain-old CLR object (POCO) that includes the `PartitionKey` and `RowKey` properties. You can accompany these properties by implementing `ITableEntity`.-- `ICollector<T>` or `IAsyncCollector<T>` where `T` includes the `PartitionKey` and `RowKey` properties. You can accompany these properties by implementing `ITableEntity`.+- A plain-old CLR object (POCO) that includes the `PartitionKey` and `RowKey` properties. You can accompany these properties by implementing `ITableEntity` or inheriting `TableEntity`. +- `ICollector<T>` or `IAsyncCollector<T>` where `T` includes the `PartitionKey` and `RowKey` properties. You can accompany these properties by implementing `ITableEntity` or inheriting `TableEntity`. -You can also bind to `TableClient` [from the Azure SDK](/dotnet/api/azure.data.tables.tableclient). You can then use that object to write to the table. +You can also bind to `CloudTable` [from the Storage SDK](/dotnet/api/microsoft.azure.cosmos.table.cloudtable) as a method parameter. You can then use that object to write to the table. # [Functions 1.x](#tab/functionsv1/in-process) The following types are supported for `out` parameters and return types: You can also bind to `CloudTable` [from the Storage SDK](/dotnet/api/microsoft.azure.cosmos.table.cloudtable) as a method parameter. You can then use that object to write to the table. +# [Azure Tables extension](#tab/table-api/isolated-process) ++ # [Combined Azure Storage extension](#tab/storage-extension/isolated-process) Return a plain-old CLR object (POCO) with properties that can be mapped to the table entity. 
-# [Azure Tables extension](#tab/table-api/isolated-process) +# [Functions 1.x](#tab/functionsv1/isolated-process) ++Functions version 1.x doesn't support isolated worker process. ++# [Azure Tables extension](#tab/table-api/csharp-script) The following types are supported for `out` parameters and return types: The following types are supported for `out` parameters and return types: You can also bind to `TableClient` [from the Azure SDK](/dotnet/api/azure.data.tables.tableclient). You can then use that object to write to the table. --# [Functions 1.x](#tab/functionsv1/isolated-process) --Functions version 1.x doesn't support isolated worker process. - # [Combined Azure Storage extension](#tab/storage-extension/csharp-script) The following types are supported for `out` parameters and return types: The following types are supported for `out` parameters and return types: You can also bind to `CloudTable` [from the Storage SDK](/dotnet/api/microsoft.azure.cosmos.table.cloudtable) as a method parameter. You can then use that object to write to the table. -# [Azure Tables extension](#tab/table-api/csharp-script) --The following types are supported for `out` parameters and return types: --- A plain-old CLR object (POCO) that includes the `PartitionKey` and `RowKey` properties. You can accompany these properties by implementing `ITableEntity`.-- `ICollector<T>` or `IAsyncCollector<T>` where `T` includes the `PartitionKey` and `RowKey` properties. You can accompany these properties by implementing `ITableEntity`.--You can also bind to `TableClient` [from the Azure SDK](/dotnet/api/azure.data.tables.tableclient). You can then use that object to write to the table. - # [Functions 1.x](#tab/functionsv1/csharp-script) The following types are supported for `out` parameters and return types: |
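The following is a minimal in-process sketch of the output pattern described above: a POCO implementing `ITableEntity` written through `IAsyncCollector<T>` (Azure Tables extension). The table and queue names are hypothetical.

```csharp
using System;
using System.Threading.Tasks;
using Azure;
using Azure.Data.Tables;
using Microsoft.Azure.WebJobs;

// POCO with PartitionKey and RowKey properties, implementing ITableEntity.
public class WorkItem : ITableEntity
{
    public string PartitionKey { get; set; }
    public string RowKey { get; set; }
    public string Description { get; set; }
    public DateTimeOffset? Timestamp { get; set; }
    public ETag ETag { get; set; }
}

public static class TableOutputSketch
{
    [FunctionName("WriteWorkItem")]
    public static async Task Run(
        [QueueTrigger("workitems")] string description,
        // IAsyncCollector<T> writes one or more entities to the bound table.
        [Table("MyTable")] IAsyncCollector<WorkItem> collector)
    {
        await collector.AddAsync(new WorkItem
        {
            PartitionKey = "work",
            RowKey = Guid.NewGuid().ToString(),
            Description = description
        });
    }
}
```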
azure-functions | Functions Bindings Storage Table | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-table.md | The extension NuGet package you install depends on the C# mode you're using in y Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md). +In a variation of this model, Functions can be run using [C# scripting], which is supported primarily for C# portal editing. To update existing binding extensions for C# script apps running in the portal without having to republish your function app, see [Update your extensions]. + # [Isolated process](#tab/isolated-process) Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md). -# [C# script](#tab/csharp-script) --Functions run as C# script, which is supported primarily for C# portal editing. To update existing binding extensions for C# script apps running in the portal without having to republish your function app, see [Update your extensions]. - The process for installing the extension varies depending on the extension version: The process for installing the extension varies depending on the extension versi # [Azure Tables extension](#tab/table-api/in-process) +_This section describes using a [class library](./functions-dotnet-class-library.md). For [C# scripting], you would need to instead [install the extension bundle][Update your extensions], version 4.x._ + [!INCLUDE [functions-bindings-supports-identity-connections-note](../../includes/functions-bindings-supports-identity-connections-note.md)] -This version allows you to bind to types from [`Azure.Data.Tables`](/dotnet/api/azure.data.tables). It also introduces the ability to use Azure Cosmos DB for Table. +This version allows you to bind to types from [`Azure.Data.Tables`][Azure.Data.Tables]. It also introduces the ability to use Azure Cosmos DB for Table. This extension is available by installing the [Microsoft.Azure.WebJobs.Extensions.Tables NuGet package][table-api-package] into a project using version 5.x or higher of the extensions for [blobs](./functions-bindings-storage-blob.md?tabs=in-process%2Cextensionv5) and [queues](./functions-bindings-storage-queue.md?tabs=in-process%2Cextensionv5). dotnet add package Microsoft.Azure.WebJobs.Extensions.Storage --version 5.0.0 # [Combined Azure Storage extension](#tab/storage-extension/in-process) +_This section describes using a [class library](./functions-dotnet-class-library.md). For [C# scripting], you would need to instead [install the extension bundle][Update your extensions], version 2.x._ + Working with the bindings requires that you reference the appropriate NuGet package. Tables are included in a combined package for Azure Storage. Install the [Microsoft.Azure.WebJobs.Extensions.Storage NuGet package][storage-4.x], version 3.x or 4.x. > [!NOTE] Tables are included in a combined package for Azure Storage. Install the [Micros Functions version 1.x doesn't support isolated worker process. 
-# [Azure Tables extension (preview)](#tab/table-api/csharp-script) ---You can add this version of the extension from the extension bundle v3 by adding or replacing the following code in your `host.json` file: ---# [Combined Azure Storage extension](#tab/storage-extension/csharp-script) --You can install this version of the extension in your function app by registering the [extension bundle], version 2.x. --# [Functions 1.x](#tab/functionsv1/csharp-script) --Functions 1.x apps automatically have a reference to the [Microsoft.Azure.WebJobs](https://www.nuget.org/packages/Microsoft.Azure.WebJobs) NuGet package, version 2.x. - ::: zone-end Functions 1.x apps automatically have a reference to the extension. ::: zone-end+++## Binding types ++The binding types supported for .NET depend on both the extension version and C# execution mode, which can be one of the following: + # [In-process](#tab/in-process) ++An in-process class library is a compiled C# function that runs in the same process as the Functions runtime. + # [Isolated process](#tab/isolated-process) ++An isolated worker process class library is a compiled C# function that runs in a process isolated from the runtime. ++++Choose a version to see binding type details for the mode and version. ++# [Azure Tables extension](#tab/table-api/in-process) ++The Azure Tables extension supports parameter types according to the table below. ++| Binding scenario | Parameter types | +|-|-| +| Table input (single entity) | A type deriving from [ITableEntity] | +| Table input (multiple entities from query) | `IEnumerable<T>` where `T` derives from [ITableEntity]<br/>[TableClient] | +| Table output (single entity) | A type deriving from [ITableEntity] | +| Table output (multiple entities) | [TableClient]<br/>`ICollector<T>` or `IAsyncCollector<T>` where `T` implements `ITableEntity` | ++# [Combined Azure Storage extension](#tab/storage-extension/in-process) ++Earlier versions of the extension exposed types from the now deprecated [Microsoft.Azure.Cosmos.Table] namespace. Newer types from [Azure.Data.Tables] are exclusive to the **Azure Tables extension**. ++This version of the extension supports parameter types according to the table below. ++| Binding scenario | Parameter types | +|-|-| +| Table input | A plain old CLR object (POCO) representing the entity<br/>[CloudTable] | +| Table output | A plain old CLR object (POCO) representing the entity<br/>[CloudTable] | ++# [Functions 1.x](#tab/functionsv1/in-process) ++Functions 1.x exposed types from the deprecated [Microsoft.WindowsAzure.Storage.Table] namespace. Newer types from [Azure.Data.Tables] are exclusive to the **Azure Tables extension**. To use these, you will need to [upgrade your application to Functions 4.x]. ++# [Azure Tables extension](#tab/table-api/isolated-process) ++The isolated worker process supports parameter types according to the tables below. Support for binding to types from [Azure.Data.Tables] is in preview. ++**Azure Tables input binding** +++**Azure Tables output binding** +++# [Combined Azure Storage extension](#tab/storage-extension/isolated-process) ++Earlier versions of extensions in the isolated worker process only support binding to plain-old CLR object (POCO) types. Additional options are available with the **Azure Tables extension**. ++# [Functions 1.x](#tab/functionsv1/isolated-process) ++Functions version 1.x doesn't support the isolated worker process. To use the isolated worker model, [upgrade your application to Functions 4.x]. 
++++[ITableEntity]: /dotnet/api/azure.data.tables.itableentity +[TableClient]: /dotnet/api/azure.data.tables.tableclient +[TableEntity]: /dotnet/api/azure.data.tables.tableentity ++[CloudTable]: /dotnet/api/microsoft.azure.cosmos.table.cloudtable ++[upgrade your application to Functions 4.x]: ./migrate-version-1-version-4.md ++ ## Next steps - [Read table data when a function runs](./functions-bindings-storage-table-input.md) - [Write table data from a function](./functions-bindings-storage-table-output.md) +[Azure.Data.Tables]: /dotnet/api/azure.data.tables ++[Microsoft.Azure.Cosmos.Table]: /dotnet/api/microsoft.azure.cosmos.table +[Microsoft.WindowsAzure.Storage.Table]: /dotnet/api/microsoft.windowsazure.storage.table + [NuGet package]: https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Storage [storage-4.x]: https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Storage/4.0.5 [storage-5.x]: https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Storage/5.0.0 Functions 1.x apps automatically have a reference to the extension. [extension bundle]: ./functions-bindings-register.md#extension-bundles [Update your extensions]: ./functions-bindings-register.md++[C# scripting]: ./functions-reference-csharp.md |
azure-functions | Functions Proxies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-proxies.md | After you have your function app endpoints exposed by using API Management, the | [Edit an API](../api-management/edit-api.md) | Shows you how to work with an existing API hosted in API Management. | | [Policies in Azure API Management](../api-management/api-management-howto-policies.md) | In API Management, publishers can change API behavior through configuration using policies. Policies are a collection of statements that are run sequentially on the request or response of an API. | | [API Management policy reference](../api-management/api-management-policies.md) | Reference that details all supported API Management policies. |-| [API Management policy samples](../api-management/policies/index.md) | Helpful collection of samples using API Management policies in key scenarios. | +| [API Management policy samples](https://github.com/Azure/api-management-policy-snippets) | Helpful collection of samples using API Management policies in key scenarios. | ## Legacy Functions Proxies |
azure-monitor | Azure Monitor Agent Extension Versions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-extension-versions.md | We strongly recommend always updating to the latest version, or opting in to the ## Version details | Release Date | Release notes | Windows | Linux | |:|:|:|:|-| June 2023| **Windows** <ul><li>Add new file path column to custom logs table</li><li>Config setting to disable custom IMDS endpoint in Tenant.json file</li><li>Support DCR settings for DiskQuotaInMB</li><li>FluentBit binaries signed with Microsoft customer Code Sign cert</li><li>Minimize number of retries on calls to refresh tokens</li><li>Don't overwrite resource ID with empty string</li><li>AzSecPack updated to version 4.27</li><li>AzureProfiler and AzurePerfCollector updated to version 1.0.0.990</li><li>MetricsExtension updated to version 2.2023.513.10</li><li>Troubleshooter updated to version 1.5.0</li></ul>**Linux** <ul><li>Add the forwarder/collector's identifier (hostname)</li><li>Link OpenSSL dynamically</li><li>Support Arc-Enabled Servers proxy configuration file</li><li>**Fixes**<ul><li>Allow uploads soon after AMA start up</li><li>Run LocalSink GC on a dedicated thread to avoid thread pool scheduling issues</li><li>Fix upgrade restart of disabled services</li><li>Handle Linux Hardening where sudo on root is blocked</li><li>CEF processing fixes for noncomliant RFC 5424 logs</li><li>ASA tenant can fail to start up due to config-cache directory permissions</li><li>Fix auth proxy in AMA</li></ul></li></ul>|1.17.0 |1.27.2| -| May 2023 | **Windows** <ul><li>Enable Large Event support for all regions.</li><li>Update to TroubleShooter 1.4.0.</li><li>Fixed issue when Event Log subscription become invalid an would not resubscribe.</li><li>AMA: Fixed issue with Large Event sending too large data. 
Also affecting Custom Log.</li></ul> **Linux** <ul><li>Support for CIS and SELinux [hardening](./agents-overview.md)</li><li>Include Ubuntu 22.04 (Jammy) in azure-mdsd package publishing</li><li>Move storage SDK patch to build container</li><li>Add system Telegraf counters to AMA</li><li>Drop msgpack and syslog data if not configured in active configuration</li><li>Limit the events sent to Public ingestion pipeline</li><li>**Fixes** <ul><li>Fix mdsd crash in init when in persistent mode </li><li>Remove FdClosers from ProtocolListeners to avoid a race condition</li><li>Fix sed regex special character escaping issue in rpm macro for Centos 7.3.Maipo</li><li>Fix latency and future timestamp issue</li><li>Install AMA syslog configs only if customer is opted in for syslog in DCR</li><li>Fix heartbeat time check</li><li>Skip unnecessary cleanup in fatal signal handler</li><li>Fix case where fast-forwarding may cause intervals to be skipped</li><li>Fix comma separated custom log paths with fluent</li></ul></li><ul> | 1.16.0.0 | 1.26.2 | +| June 2023| **Windows** <ul><li>Add new file path column to custom logs table</li><li>Config setting to disable custom IMDS endpoint in Tenant.json file</li><li>FluentBit binaries signed with Microsoft customer Code Sign cert</li><li>Minimize number of retries on calls to refresh tokens</li><li>Don't overwrite resource ID with empty string</li><li>AzSecPack updated to version 4.27</li><li>AzureProfiler and AzurePerfCollector updated to version 1.0.0.990</li><li>MetricsExtension updated to version 2.2023.513.10</li><li>Troubleshooter updated to version 1.5.0</li></ul>**Linux** <ul><li>Add new column CollectorHostName to syslog table to identify forwarder/collector machine</li><li>Link OpenSSL dynamically</li><li>Support Arc-Enabled Servers proxy configuration file</li><li>**Fixes**<ul><li>Allow uploads soon after AMA start up</li><li>Run LocalSink GC on a dedicated thread to avoid thread pool scheduling issues</li><li>Fix upgrade restart of disabled services</li><li>Handle Linux Hardening where sudo on root is blocked</li><li>CEF processing fixes for noncompliant RFC 5424 logs</li><li>ASA tenant can fail to start up due to config-cache directory permissions</li><li>Fix auth proxy in AMA</li><li>Fix to remove null characters in agentlauncher.log after log rotation</li></ul></li></ul>|1.17.0 |1.27.2| +| May 2023 | **Windows** <ul><li>Enable Large Event support for all regions.</li><li>Update to TroubleShooter 1.4.0.</li><li>Fixed issue where Event Log subscription became invalid and would not resubscribe.</li><li>AMA: Fixed issue with Large Event sending too large data. 
Also affecting Custom Log.</li></ul> **Linux** <ul><li>Support for CIS and SELinux [hardening](./agents-overview.md)</li><li>Include Ubuntu 22.04 (Jammy) in azure-mdsd package publishing</li><li>Move storage SDK patch to build container</li><li>Add system Telegraf counters to AMA</li><li>Drop msgpack and syslog data if not configured in active configuration</li><li>Limit the events sent to Public ingestion pipeline</li><li>**Fixes** <ul><li>Fix mdsd crash in init when in persistent mode </li><li>Remove FdClosers from ProtocolListeners to avoid a race condition</li><li>Fix sed regex special character escaping issue in rpm macro for Centos 7.3.Maipo</li><li>Fix latency and future timestamp issue</li><li>Install AMA syslog configs only if customer is opted in for syslog in DCR</li><li>Fix heartbeat time check</li><li>Skip unnecessary cleanup in fatal signal handler</li><li>Fix case where fast-forwarding may cause intervals to be skipped</li><li>Fix comma separated custom log paths with fluent</li><li>Fix to prevent events folder growing too large and filling the disk</li></ul></li><ul> | 1.16.0.0 | 1.26.2 | | Apr 2023 | **Windows** <ul><li>AMA: Enable Large Event support based on Region.</li><li>AMA: Upgrade to FluentBit version 2.0.9</li><li>Update Troubleshooter to 1.3.1</li><li>Update ME version to 2.2023.331.1521</li><li>Updating package version for AzSecPack 4.26 release</li></ul>|1.15.0| Coming soon| | Mar 2023 | **Windows** <ul><li>Text file collection improvements to handle high rate logging and continuous tailing of longer lines</li><li>VM Insights fixes for collecting metrics from non-English OS</li></ul> | 1.14.0.0 | Coming soon | | Feb 2023 | <ul><li>**Linux (hotfix)** Resolved potential data loss due to "Bad file descriptor" errors seen in the mdsd error log with previous version. Upgrade to hotfix version</li><li>**Windows** Reliability improvements in Fluentbit buffering to handle larger text files</li></ul> | 1.13.1 | 1.25.2<sup>Hotfix</sup> | |
azure-monitor | Action Groups | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/action-groups.md | If your primary email doesn't receive notifications, configure the email address You may have a limited number of email actions per action group. To check which limits apply to your situation, see [Azure Monitor service limits](../service-limits.md). -> [!NOTE] -> -> Action Groups uses two different email providers to ensure email notification delivery. The primary email provider is very resilient and quick but occasionally suffers outages. In this case, the secondary email provider handles email requests. The secondary provider is only a fallback solution. Due to provider differences, an email sent from our secondary provider may have a degraded email experience. The degradation results in slightly different email formatting and content. Since email templates differ in the two systems, maintaining parity across the two systems is not feasible. - When you set up the Resource Manager role: 1. Assign an entity of type **User** to the role. |
azure-monitor | Alerts Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-troubleshoot.md | If you have received a notification for an alert (such as an email or an SMS) mo  ## Action or notification has an unexpected content+Action Groups uses two different email providers to ensure email notification delivery. The primary email provider is very resilient and quick but occasionally suffers outages. In this case, the secondary email provider handles email requests. The secondary provider is only a fallback solution. Due to provider differences, an email sent from our secondary provider may have a degraded email experience. The degradation results in slightly different email formatting and content. Since email templates differ in the two systems, maintaining parity across the two systems is not feasible. You can tell that you're receiving a degraded experience if there is a note at the top of your email notification that says: -If you have received the alert, but believe some of its fields are missing or incorrect, follow these steps: +"This is a degraded email experience. That means the formatting may be off or details could be missing. For more information on the degraded email experience, read here." ++If your notification does not contain this note and you have received the alert, but believe some of its fields are missing or incorrect, follow these steps: 1. **Did you pick the correct format for the action?** |
azure-monitor | App Insights Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-insights-overview.md | description: Learn how Application Insights in Azure Monitor provides performanc Last updated 05/12/2023 -+ # Application Insights overview Application Insights is an extension of [Azure Monitor](../overview.md) and provides application performance monitoring (APM) features. APM tools are useful to monitor applications from development, through test, and into production in the following ways: |
azure-monitor | Opentelemetry Enable | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-enable.md | -This article describes how to enable and configure OpenTelemetry-based data collection to power the experiences within [Azure Monitor Application Insights](app-insights-overview.md#application-insights-overview). We walk through how to install the "Azure Monitor OpenTelemetry Distro". To learn more about OpenTelemetry concepts, see the [OpenTelemetry overview](opentelemetry-overview.md) or [OpenTelemetry FAQ](/azure/azure-monitor/faq#opentelemetry). +This article describes how to enable and configure OpenTelemetry-based data collection to power the experiences within [Azure Monitor Application Insights](app-insights-overview.md#application-insights-overview). We walk through how to install the "Azure Monitor OpenTelemetry Distro". The Distro will [automatically collect](opentelemetry-add-modify.md#automatic-data-collection) traces, metrics, logs, and exceptions across your application and its dependencies. To learn more about collecting data using OpenTelemetry, see [Data Collection Basics](opentelemetry-overview.md) or the [OpenTelemetry FAQ](/azure/azure-monitor/faq#opentelemetry). ## OpenTelemetry Release Status |
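For ASP.NET Core apps, enabling the Distro amounts to a single registration call. The following is a minimal sketch assuming the Azure.Monitor.OpenTelemetry.AspNetCore package and a connection string supplied via the APPLICATIONINSIGHTS_CONNECTION_STRING environment variable.

```csharp
using Azure.Monitor.OpenTelemetry.AspNetCore;
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

var builder = WebApplication.CreateBuilder(args);

// Registers the Azure Monitor OpenTelemetry Distro, which wires up
// automatic collection of traces, metrics, and logs.
builder.Services.AddOpenTelemetry().UseAzureMonitor();

var app = builder.Build();
app.MapGet("/", () => "Hello from a monitored app");
app.Run();
```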
azure-monitor | Tutorial Autoscale Performance Schedule | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/tutorial-autoscale-performance-schedule.md | - Title: Autoscale Azure resources based on data or schedule -description: Create an autoscale setting for an app service plan by using metric data and a schedule. ---- Previously updated : 12/11/2017------# Create an autoscale setting for Azure resources based on performance data or a schedule --Autoscale settings enable you to add or remove instances of service based on preset conditions. These settings can be created through the portal. This method provides a browser-based user interface for creating and configuring an autoscale setting. --In this tutorial, you will: -> [!div class="checklist"] -> * Create a web app and Azure App Service plan. -> * Configure autoscale rules for scale-in and scale-out based on the number of requests a web app receives. -> * Trigger a scale-out action and watch the number of instances increase. -> * Trigger a scale-in action and watch the number of instances decrease. -> * Clean up your resources. --If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin. --## Sign in to the Azure portal --Sign in to the [Azure portal](https://portal.azure.com/). --## Create a web app and App Service plan -1. On the menu on the left, select **Create a resource**. -1. Search for and select the **Web App** item and select **Create**. -1. Select an app name like **MyTestScaleWebApp**. Create a new resource group **myResourceGroup** or place it into a resource group of your choosing. --Within a few minutes, your resources should be provisioned. Use the web app and corresponding App Service plan in the remainder of this tutorial. --  --## Go to autoscale settings -1. On the menu on the left, select **Monitor**. Then select the **Autoscale** tab. -1. A list of the resources under your subscription that support autoscale are listed here. Identify the App Service plan that was created earlier in the tutorial, and select it. --  --1. On the **Autoscale setting** screen, select **Enable autoscale**. --The next few steps help you fill the **Autoscale setting** screen to look like the following screenshot. --  --## Configure default profile -1. Provide a name for the autoscale setting. -1. In the default profile, ensure **Scale mode** is set to **Scale to a specific instance count**. -1. Set **Instance count** to **1**. This setting ensures that when no other profile is active, or in effect, the default profile returns the instance count to **1**. --  --## Create recurrence profile --1. Select the **Add a scale condition** link under the default profile. --1. Edit the name of this profile to be **Monday to Friday profile**. --1. Ensure **Scale mode** is set to **Scale based on a metric**. --1. For **Instance limits**, set **Minimum** as **1**, **Maximum** as **2**, and **Default** as **1**. This setting ensures that this profile doesn't autoscale the service plan to have less than one instance or more than two instances. If the profile doesn't have sufficient data to make a decision, it uses the default number of instances (in this case, one). --1. For **Schedule**, select **Repeat specific days**. --1. Set the profile to repeat Monday through Friday, from 09:00 PST to 18:00 PST. This setting ensures that this profile is only active and applicable 9 AM to 6 PM, Monday through Friday. 
During all other times, the **Default** profile is the profile the autoscale setting uses. --## Create a scale-out rule --1. In the **Monday to Friday profile** section, select the **Add a rule** link. --1. Set **Metric source** to be **Other resource**. Set **Resource type** as **App Services** and set **Resource** as the web app you created earlier in this tutorial. --1. Set **Time aggregation** as **Total**, set **Metric name** as **Requests**, and set **Time grain statistic** as **Sum**. --1. Set **Operator** as **Greater than**, set **Threshold** as **10**, and set **Duration** as **5** minutes. --1. Set **Operation** as **Increase count by**, set **Instance count** as **1**, and set **Cool down** as **5** minutes. --1. Select **Add**. --This rule ensures that if your web app receives more than 10 requests within 5 minutes or less, one other instance is added to your App Service plan to manage load. --  --## Create a scale-in rule -We recommend that you always have a scale-in rule to accompany a scale-out rule. Having both ensures that your resources aren't overprovisioned. Overprovisioning means you have more instances running than needed to handle the current load. --1. In the **Monday to Friday profile**, select the **Add a rule** link. --1. Set **Metric source** to **Other resource**. Set **Resource type** as **App Services**, and set **Resource** as the web app you created earlier in this tutorial. --1. Set **Time aggregation** as **Total**, set **Metric name** as **Requests**, and set **Time grain statistic** as **Average**. --1. Set **Operator** as **Less than**, set **Threshold** as **5**, and set **Duration** as **5** minutes. --1. Set **Operation** as **Decrease count by**, set **Instance count** as **1**, and set **Cool down** as **5** minutes. --1. Select **Add**. --  --1. Save the autoscale setting. --  --## Trigger scale-out action -To trigger the scale-out condition in the autoscale setting you created, the web app must have more than 10 requests in less than 5 minutes. --1. Open a browser window and go to the web app you created earlier in this tutorial. You can find the URL for your web app in the Azure portal by going to your web app resource and selecting **Browse** on the **Overview** tab. --1. In quick succession, reload the page more than 10 times. --1. On the menu on the left, select **Monitor**. Then select the **Autoscale** tab. --1. From the list, select the App Service plan used throughout this tutorial. --1. On the **Autoscale setting** screen, select the **Run history** tab. --1. You see a chart that reflects the instance count of the App Service plan over time. In a few minutes, the instance count should rise from **1** to **2**. --1. Under the chart, you see the activity log entries for each scale action taken by this autoscale setting. --## Trigger scale-in action -The scale-in condition in the autoscale setting triggers if there are fewer than five requests to the web app over a period of 10 minutes. --1. Ensure no requests are being sent to your web app. --1. Load the Azure portal. --1. On the menu on the left, select **Monitor**. Then select the **Autoscale** tab. --1. From the list, select the App Service plan used throughout this tutorial. --1. On the **Autoscale setting** screen, select the **Run history** tab. --1. You see a chart that reflects the instance count of the App Service plan over time. In a few minutes, the instance count should drop from **2** to **1**. The process takes at least 100 minutes. --1. 
Under the chart, you see the corresponding set of activity log entries for each scale action taken by this autoscale setting. --  --## Clean up resources --1. On the menu on the left in the Azure portal, select **All resources**. Then select the web app created in this tutorial. --1. On your resource page, select **Delete**. Confirm delete by entering **yes** in the text box, and then select **Delete**. --1. Select the App Service plan resource and select **Delete**. --1. Confirm delete by entering **yes** in the text box, and then select **Delete**. --## Next steps --To learn more about autoscale settings, see [Autoscale overview](../autoscale/autoscale-overview.md). --> [!div class="nextstepaction"] -> [Archive your monitoring data](../essentials/platform-logs-overview.md) |
azure-monitor | Best Practices Analysis | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-analysis.md | This table describes Azure Monitor features that provide analysis of collected d |Component |Description | Required training and/or configuration| |||--| |Overview page|Most Azure services have an **Overview** page in the Azure portal that includes a **Monitor** section with charts that show recent critical metrics. This information is intended for owners of individual services to quickly assess the performance of the resource. |This page is based on platform metrics that are collected automatically. No configuration is required. |-|[Metrics Explorer](essentials/metrics-getting-started.md)|You can use Metrics Explorer to interactively work with metric data and create metric alerts. You need minimal training to use Metrics Explorer, but you must be familiar with the metrics you want to analyze. |- Once data collection is configured, no another configuration is required.<br>- Platform metrics for Azure resources are automatically available.<br>- Guest metrics for virtual machines are available after an Azure Monitor agent is deployed to the virtual machine.<br>- Application metrics are available after Application Insights is configured. | +|[Metrics Explorer](essentials/metrics-getting-started.md)|You can use Metrics Explorer to interactively work with metric data and create metric alerts. You need minimal training to use Metrics Explorer, but you must be familiar with the metrics you want to analyze. |- Once data collection is configured, no other configuration is required.<br>- Platform metrics for Azure resources are automatically available.<br>- Guest metrics for virtual machines are available after an Azure Monitor agent is deployed to the virtual machine.<br>- Application metrics are available after Application Insights is configured. | |[Log Analytics](logs/log-analytics-overview.md)|With Log Analytics, you can create log queries to interactively work with log data and create log query alerts.| Some training is required for you to become familiar with the query language, although you can use prebuilt queries for common requirements. You can also add [query packs](logs/query-packs.md) with queries that are unique to your organization. Then if you're familiar with the query language, you can build queries for others in your organization. | ## Built-in visualization tools |
azure-monitor | Container Insights Cost | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-cost.md | The following examples show what changes you can apply to your cluster by modify enabled = false ``` -1. To clean up jobs that are finished, specify the cleanup policy in the job definition by modifying the following code in the ConfigMap file: +1. To clean up jobs that are finished, specify the cleanup policy in your job definition YAML. The following is an example Job definition with a cleanup policy. For more details, see the [Kubernetes documentation](https://kubernetes.io/docs/concepts/workloads/controllers/job/#clean-up-finished-jobs-automatically). ``` apiVersion: batch/v1 |
azure-monitor | Data Platform | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/data-platform.md | Today's complex computing environments run distributed applications that rely on [Azure Monitor](overview.md) collects and aggregates data from various sources into a common data platform where it can be used for analysis, visualization, and alerting. It provides a consistent experience on top of data from multiple sources. You can gain deep insights across all your monitored resources and even with data from other services that store their data in Azure Monitor. ## Observability data in Azure Monitor |
azure-monitor | Data Sources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/data-sources.md | Some of these data sources use the [new data ingestion pipeline](essentials/data Sources of monitoring data from Azure applications can be organized into tiers, the highest tiers being your application itself and the lower tiers being components of Azure platform. The method of accessing data from each tier varies. The application tiers are summarized in the table below, and the sources of monitoring data in each tier are presented in the following sections. See [Monitoring data locations in Azure](monitor-reference.md) for a description of each data location and how you can access its data. -- ### Azure+ The following table briefly describes the application tiers that are specific to Azure. Following the link for further details on each in the sections below. | Tier | Description | Collection method | |
azure-monitor | Data Collection Transformations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-transformations.md | There are multiple methods to create transformations depending on the data colle While transformations themselves don't incur direct costs, the following scenarios can result in additional charges: - If a transformation increases the size of the incoming data, such as by adding a calculated column, you'll be charged the standard ingestion rate for the extra data.-- If a transformation reduces the incoming data by more than 50%, you'll be charged for the amount of filtered data above 50%.+- If a transformation reduces the ingested data by more than 50%, you'll be charged for the amount of filtered data above 50%. -To calculate the data processing charge resulting from transformations, use the following formula: [GB filtered out by transformations] - ([Total GB ingested] / 2). For example, if you ingest 100 GB of data and your transformations remove 70 GB, you'll be charged for 70 GB - (100 GB / 2), which is 20 GB. This calculation is done per data collection rule and per day basis. To avoid this charge, it's recommended to filter incoming data using alternative methods before applying transformations. By doing so, you can reduce the amount of data processed by transformations and, therefore, minimize any additional costs. +To calculate the data processing charge resulting from transformations, use the following formula:<br>[GB filtered out by transformations] - ([GB data ingested by pipeline] / 2). The following table shows examples. ++| Data ingested by pipeline | Data dropped by transformation | Data ingested by Log Analytics workspace | Data processing charge | Ingestion charge | +|:|:-:|:-:|:-:|:-:| +| 20 GB | 12 GB | 8 GB | 2 GB <sup>1</sup> | 8 GB | +| 20 GB | 8 GB | 12 GB | 0 GB | 12 GB | ++<sup>1</sup> This charge excludes the charge for data ingested by Log Analytics workspace. ++To avoid this charge, you should filter ingested data using alternative methods before applying transformations. By doing so, you can reduce the amount of data processed by transformations and, therefore, minimize any additional costs. See [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor) for current charges for ingestion and retention of log data in Azure Monitor. |
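To make the charge arithmetic explicit, here is a tiny sketch implementing the formula above; the helper and variable names are ours, not from the pricing page.

```csharp
using System;

class TransformationChargeExample
{
    // Charge = GB filtered out by transformations minus half the GB that entered
    // the pipeline, floored at zero (no charge when filtering less than 50%).
    static double ProcessingChargeGb(double ingestedByPipelineGb, double filteredOutGb) =>
        Math.Max(0, filteredOutGb - ingestedByPipelineGb / 2);

    static void Main()
    {
        Console.WriteLine(ProcessingChargeGb(20, 12)); // 2 GB, matching row 1 of the table
        Console.WriteLine(ProcessingChargeGb(20, 8));  // 0 GB, matching row 2
    }
}
```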
azure-monitor | Prometheus Metrics Enable | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-metrics-enable.md | Deploy the template with the parameter file by using any valid method for deploy - Ensure that you update the `kube-state metrics` annotations and labels list with proper formatting. There's a limitation in the ARM template deployments that require exact values in the `kube-state` metrics pods. If the Kubernetes pod has any issues with malformed parameters and isn't running, the feature might not work as expected. - A data collection rule and data collection endpoint are created with the name `MSProm-\<short-cluster-region\>-\<cluster-name\>`. Currently, these names can't be modified.-- You must get the existing Azure Monitor workspace integrations for a Grafana intance and update the ARM template with it. Otherwise, the ARM deployment gets over-written, which removes existing integrations.+- You must get the existing Azure Monitor workspace integrations for a Grafana instance and update the ARM template with it. Otherwise, the ARM deployment gets overwritten, which removes existing integrations. ## Enable Windows metrics collection The following table lists the firewall configuration required for Azure monitor | `*.handler.control.monitor.azure.us` | For querying data collection rules | 443 | ## Uninstall the metrics add-on-Currently, the Azure CLI is the only option to remove the metrics add-on and stop sending Prometheus metrics to Azure Monitor managed service for Prometheus. Use the following command to remove the agent from the cluster nodes and delete the recording rules created for that cluster. This will also delete the data collection endpoint (DCE), data collection dule (DCR), DCRA and recording rules groups created as part of onboarding. . This action doesn't remove any existing data stored in your Azure Monitor workspace. +Currently, the Azure CLI is the only option to remove the metrics add-on and stop sending Prometheus metrics to Azure Monitor managed service for Prometheus. Use the following command to remove the agent from the cluster nodes and delete the recording rules created for that cluster. This will also delete the data collection endpoint (DCE), data collection rule (DCR), DCRA, and recording rules groups created as part of onboarding. This action doesn't remove any existing data stored in your Azure Monitor workspace. ```azurecli az aks update --disable-azure-monitor-metrics -n <cluster-name> -g <cluster-resource-group> |
azure-monitor | Basic Logs Configure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/basic-logs-configure.md | These tables currently support Basic logs: | Container Apps | [ContainerAppConsoleLogs](/azure/azure-monitor/reference/tables/containerappconsoleLogs) | | Container Insights | [ContainerLogV2](/azure/azure-monitor/reference/tables/containerlogv2) | | Container Apps Environments | [AppEnvSpringAppConsoleLogs](/azure/azure-monitor/reference/tables/AppEnvSpringAppConsoleLogs) |-| Communication Services | [ACSCallAutomationIncomingOperations](/azure/azure-monitor/reference/tables/ACSCallAutomationIncomingOperations)<br>[ACSCallAutomationMediaSummary](/azure/azure-monitor/reference/tables/ACSCallAutomationMediaSummary)<br>[ACSCallRecordingIncomingOperations](/azure/azure-monitor/reference/tables/ACSCallRecordingIncomingOperations)<br>[ACSCallRecordingSummary](/azure/azure-monitor/reference/tables/ACSCallRecordingSummary)<br>[ACSRoomsIncomingOperations](/azure/azure-monitor/reference/tables/acsroomsincomingoperations) | +| Communication Services | [ACSCallAutomationIncomingOperations](/azure/azure-monitor/reference/tables/ACSCallAutomationIncomingOperations)<br>[ACSCallAutomationMediaSummary](/azure/azure-monitor/reference/tables/ACSCallAutomationMediaSummary)<br>[ACSCallRecordingIncomingOperations](/azure/azure-monitor/reference/tables/ACSCallRecordingIncomingOperations)<br>[ACSCallRecordingSummary](/azure/azure-monitor/reference/tables/ACSCallRecordingSummary)<br>[ACSCallSummary](/azure/azure-monitor/reference/tables/ACSCallSummary)<br>[ACSRoomsIncomingOperations](/azure/azure-monitor/reference/tables/acsroomsincomingoperations) | | Confidential Ledgers | [CCFApplicationLogs](/azure/azure-monitor/reference/tables/CCFApplicationLogs) | | Custom log tables | All custom tables created with or migrated to the [data collection rule (DCR)-based logs ingestion API.](logs-ingestion-api-overview.md) | | Data Manager for Energy | [OEPDataplaneLogs](/azure/azure-monitor/reference/tables/OEPDataplaneLogs) | |
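For the tables listed above, one way to switch a supported table to the Basic plan is the Az.OperationalInsights PowerShell module, as sketched below. The resource names are placeholders, and you should verify that your installed module version provides this cmdlet and its `-Plan` parameter before relying on it.

```powershell
# Sketch: move a supported table (for example, ContainerLogV2) to the Basic table plan.
# Resource names are placeholders; confirm cmdlet support in your Az.OperationalInsights version.
Update-AzOperationalInsightsTable `
    -ResourceGroupName "my-resource-group" `
    -WorkspaceName "my-workspace" `
    -TableName "ContainerLogV2" `
    -Plan "Basic"
```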
azure-monitor | Cost Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/cost-logs.md | Azure Commitment Discounts, such as discounts received from [Microsoft Enterpris ## Dedicated clusters -An [Azure Monitor Logs dedicated cluster](logs-dedicated-clusters.md) is a collection of workspaces in a single managed Azure Data Explorer cluster. Dedicated clusters support advanced features, such as [customer-managed keys](customer-managed-keys.md), and use the same commitment-tier pricing model as workspaces, although they must have a commitment level of at least 500 GB per day. Any usage above the commitment level (overage) is billed at that same price per GB as provided by the current commitment tier. There's no pay-as-you-go option for clusters. +An [Azure Monitor Logs dedicated cluster](logs-dedicated-clusters.md) is a collection of workspaces in a single managed Azure Data Explorer cluster. Dedicated clusters support advanced features, such as [customer-managed keys](customer-managed-keys.md), and use the same commitment-tier pricing model as workspaces, although they must have a commitment level of at least 100 GB per day. Any usage above the commitment level (overage) is billed at that same price per GB as provided by the current commitment tier. There's no pay-as-you-go option for clusters. The cluster commitment tier has a 31-day commitment period after the commitment level is increased. During the commitment period, the commitment tier level can't be reduced, but it can be increased at any time. When workspaces are associated to a cluster, the data ingestion billing for those workspaces is done at the cluster level by using the configured commitment tier level. |
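To make the overage rule concrete, the sketch below works through the arithmetic in PowerShell. The daily tier price is a deliberately made-up placeholder, not a real rate; see [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/) for current prices.

```powershell
# Hypothetical illustration of commitment-tier overage billing.
# $tierPricePerDay is a made-up placeholder, NOT a real rate.
$commitmentGB    = 100     # daily commitment level (the cluster minimum)
$tierPricePerDay = 200.0   # placeholder daily price for the 100 GB/day tier
$ingestedGB      = 130     # actual ingestion for one day

# Overage is billed at the tier's effective per-GB price.
$effectivePerGB = $tierPricePerDay / $commitmentGB
$dailyCharge    = $tierPricePerDay + [Math]::Max(0, $ingestedGB - $commitmentGB) * $effectivePerGB
$dailyCharge    # 260 for these example numbers: 200 plus 30 GB of overage at 2 per GB
```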
azure-monitor | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/overview.md | -Azure Monitor is a comprehensive monitoring solution for collecting, analyzing, and responding to telemetry from your cloud and on-premises environments. You can use Azure Monitor to maximize the availability and performance of your applications and services. +Azure Monitor is a comprehensive monitoring solution for collecting, analyzing, and responding to monitoring data from your cloud and on-premises environments. You can use Azure Monitor to maximize the availability and performance of your applications and services. It helps you understand how your applications are performing and allows you to manually and programmatically respond to system events. -Azure Monitor collects and aggregates the data from every layer and component of your system into a common data platform. It correlates data across multiple Azure subscriptions and tenants, in addition to hosting data for other services. Because this data is stored together, it can be correlated and analyzed using a common set of tools. The data can then be used for analysis and visualizations to help you understand how your applications are performing and respond automatically to system events. +Azure Monitor collects and aggregates the data from every layer and component of your system across multiple Azure and non-Azure subscriptions and tenants. It stores the data in a common data platform for consumption by a common set of tools that can correlate, analyze, visualize, and respond to the data. You can also integrate additional Microsoft and non-Microsoft tools. -Azure Monitor also includes Azure Monitor SCOM Managed Instance, which allows you to move your on-premises System Center Operation Manager (Operations Manager) installation to the cloud in Azure. -Use Azure Monitor to monitor these types of resources in Azure, other clouds, or on-premises: - - Applications - - Virtual machines - - Guest operating systems - - Containers including Prometheus metrics - - Databases - - Security events in combination with Azure Sentinel - - Networking events and health in combination with Network Watcher - - Custom sources that use the APIs to get data into Azure Monitor +The diagram above shows an abstracted view of the monitoring process. A more detailed breakdown of the Azure Monitor architecture is shown in the [High level architecture](#high-level-architecture) section below. -You can also export monitoring data from Azure Monitor into other systems so you can: - - Integrate with other third-party and open-source monitoring and visualization tools - - Integrate with ticketing and other ITSM systems -## Monitoring, observability, and artificial intelligence for IT operations +## High level architecture -Observability is the ability to assess an internal system's state based on the data it produces. An observability solution analyzes output data, provides an assessment of the system's health, and offers actionable insights for addressing problems across your IT infrastructure. +Azure Monitor can monitor these types of resources in Azure, other clouds, or on-premises: -Observability wouldn't be possible without monitoring. Monitoring is the collection and analysis of data pulled from IT systems. 
+- Applications +- Virtual machines +- Guest operating systems +- Containers including Prometheus metrics +- Databases +- Security events in combination with Azure Sentinel +- Networking events and health in combination with Network Watcher +- Custom sources that use the APIs to get data into Azure Monitor -The pillars of observability are the different kinds of data that a monitoring tool must collect and analyze to provide sufficient observability of a monitored system. Metrics, logs, and distributed traces are commonly referred to as the pillars of observability. Azure Monitor adds "changes" to these pillars. +You can also export monitoring data from Azure Monitor into other systems so you can: -When a system is observable, a user can identify the root cause of a performance problem by looking at the data it produces without additional testing or coding. -Azure Monitor achieves observability by correlating data from multiple pillars and aggregating data across the entire set of monitored resources. Azure Monitor provides a common set of tools to correlate and analyze the data from multiple Azure subscriptions and tenants, in addition to data hosted for other services. +- Integrate with other third-party and open-source monitoring and visualization tools +- Integrate with ticketing and other ITSM systems -Artificial Intelligence for IT Operations (AIOps) improves service quality and reliability by using machine learning to process and automatically act on data you collect in Azure Monitor. [Azure Monitor AIOps and machine learning capabilities](./logs/aiops-machine-learning.md) let you detect, diagnose, predict, and respond to potential issues in your IT environment using advanced analytics. +If you are a System Center Operations Manager (SCOM) user, Azure Monitor now includes a preview of Azure Monitor [SCOM Managed Instance (SCOM MI)](./vm/scom-managed-instance-overview.md). SCOM MI is a cloud-hosted version of SCOM and allows you to move your on-premises SCOM installation to Azure. -## High level architecture +The following diagram shows a high-level architecture view of Azure Monitor. -The following diagram gives a high-level view of Azure Monitor. +Click on the diagram to see a more detailed, expanded version showing a larger breakdown of data sources and data collection methods. The diagram depicts the Azure Monitor system components:-- The **[data sources](data-sources.md)** are the types of data collected from each monitored resource. The data is collected and routed to the **data platform**.-- The **[data platform](data-platform.md)** is made up of the data stores for collected data. Azure Monitor's data platform has stores for metrics, logs, traces, and changes.-- The functions and components that consume data include analysis, visualizations, insights, and responses.-- Services that integrate with Azure Monitor and provide additional functionality are marked with an asterisk * in the diagram. -## Data sources +- The **[data sources](data-sources.md)** are the types of data collected from each monitored resource. +- The data is **collected and routed** to the data platform. Clicking on the diagram shows these options, which are also called out in detail later in this article. +- The **[data platform](data-platform.md)** stores the collected monitoring data. Azure Monitor's core data platform has stores for metrics, logs, traces, and changes. SCOM MI uses its own database hosted in SQL Managed Instance. 
+- The **consumption** section shows the components that use data from the data platform. + - Azure Monitor's core consumption methods include tools that provide **insights** and let you **visualize** and **analyze** data. The visualization tools build on the analysis tools and the insights build on top of both the visualization and analysis tools. + - There are additional mechanisms to help you **respond** to incoming monitoring data. -Azure Monitor can collect data from multiple sources, including from your application, operating systems, the services they rely on, and from the platform itself. The diagram below shows an expanded version of the datasource types gathered by Azure Monitor. +- The **SCOM MI** path uses the traditional Operations Manager console that SCOM customers are already familiar with. +- Interoperability options are shown in the **integrate** section. Not all services integrate at all levels. SCOM MI only integrates with Power BI. ## Monitoring, observability, and artificial intelligence for IT operations +**Observability** is the ability to assess an internal system's state based on the data it produces. An observability solution analyzes output data, provides an assessment of the system's health, and offers actionable insights for addressing problems across your IT infrastructure. -Click on the picture to see a larger version of the data sources diagram in context. +Observability wouldn't be possible without monitoring. Monitoring is the collection and analysis of data pulled from IT systems. When a system is observable, a user can identify the root cause of a performance problem by looking at the data it produces without additional testing or coding. -You can integrate monitoring data from sources outside Azure, including on-premises and other non-Microsoft clouds, using the application, infrastructure, and custom data sources. +The pillars of observability are the different kinds of data that a monitoring tool must collect and analyze to provide sufficient observability of a monitored system. Metrics, logs, and distributed traces are commonly referred to as the pillars of observability. Azure Monitor adds "changes" to these pillars. -Azure Monitor collects these types of data: +Azure Monitor achieves observability by correlating data from multiple pillars and aggregating data across the entire set of monitored resources. |Data Type|Description and subtypes| -||--| -|Application|Application performance, health, and activity data.| -|Infrastructure|**Container** - Data about containers, such as [Azure Kubernetes Service](../aks/intro-kubernetes.md), [Prometheus](./essentials/prometheus-metrics-overview.md), and the applications running inside containers.<br><br>**Operating system** - Data about the guest operating system on which your application is running.| -|Azure Platform <br><br> Data sent into the Azure Monitor data platform using the Azure Monitor REST API. |**Azure resource** - Data about the operation of an Azure resource from inside the resource, including changes. Resource Logs are one example. <br><br>**Azure subscription** - The operation and management of an Azure subscription, and data about the health and operation of Azure itself. The activity log is one example.<br><br>**Azure tenant** - Data about the operation of tenant-level Azure services, such as Azure Active Directory.<br> | -|Custom Sources| Data which gets into the system using Azure Monitor REST API. 
| +## Data sources -For detailed information about each of the data sources, see [data sources](./data-sources.md). +Azure Monitor can collect [data from multiple sources](data-sources.md). -## Data platform +The diagram below shows an expanded version of the datasource types gathered by Azure Monitor. -Azure Monitor stores data in data stores for each of the pillars of observability: metrics, logs, distributed traces, and changes. Each store is optimized for specific types of data and monitoring scenarios. +Click on the diagram above to see a larger version of the data sources diagram in context. -Click on the picture to see a larger version of the data platform diagram in context. +You can integrate application, infrastructure, and custom data source monitoring data from outside Azure, including from on-premises, and non-Microsoft clouds. -|Pillar of Observability/<br>Data Store|Description| -||| -|[Azure Monitor Metrics](essentials/data-platform-metrics.md)|Metrics are numerical values that describe an aspect of a system at a particular point in time. [Azure Monitor Metrics](./essentials/data-platform-metrics.md) is a time-series database, optimized for analyzing time-stamped data. Azure Monitor collects metrics at regular intervals. Metrics are identified with a timestamp, a name, a value, and one or more defining labels. They can be aggregated using algorithms, compared to other metrics, and analyzed for trends over time. It supports native Azure Monitor metrics and [Prometheus metrics](essentials/prometheus-metrics-overview.md).| -|[Azure Monitor Logs](logs/data-platform-logs.md)|Logs are recorded system events. Logs can contain different types of data, be structured or free-form text, and they contain a timestamp. Azure Monitor stores structured and unstructured log data of all types in [Azure Monitor Logs](./logs/data-platform-logs.md). You can route data to [Log Analytics workspaces](./logs/log-analytics-overview.md) for querying and analysis.| -|Traces|[Distributed tracing](app/distributed-tracing.md) allows you to see the path of a request as it travels through different services and components. Azure Monitor gets distributed trace data from [instrumented applications](app/app-insights-overview.md#how-do-i-instrument-an-application). The trace data is stored in a separate workspace in Azure Monitor Logs.| -|Changes|Changes are a series of events in your application and resources. They're tracked and stored when you use the [Change Analysis](./change/change-analysis.md) service, which uses [Azure Resource Graph](../governance/resource-graph/overview.md) as its store. Change Analysis helps you understand which changes, such as deploying updated code, may have caused issues in your systems.| +Azure Monitor collects these types of data: -Distributed tracing is a technique used to trace requests as they travel through a distributed system. It allows you to see the path of a request as it travels through different services and components. It helps you to identify performance bottlenecks and troubleshoot issues in a distributed system. +|Data Type|Description and subtypes| +||--| +|App/Workloads |**App**- Application performance, health, and activity data. 
<br/><br/>**Workloads** - IaaS workloads such as SQL Server, Oracle, or SAP running on a hosted virtual machine.| +|Infrastructure|**Container** - Data about containers, such as [Azure Kubernetes Service](../aks/intro-kubernetes.md), [Prometheus](./essentials/prometheus-metrics-overview.md), and the applications running inside containers.<br><br>**Operating system** - Data about the guest operating system on which your application is running.| +|Azure Platform|**Azure resource** - Data about the operation of an Azure resource from inside the resource, including changes. Resource Logs are one example. <br><br>**Azure subscription** - The operation and management of an Azure subscription, and data about the health and operation of Azure itself. The activity log is one example.<br><br>**Azure tenant** - Data about the operation of tenant-level Azure services, such as Azure Active Directory.<br> | +|Custom Sources| Data which gets into the system using the <br/> - Azure Monitor REST API <br/> - Data Collection API | -For detailed information about each of the data sources, see [data sources](./data-sources.md). +SCOM MI (like on-premises SCOM) collects only IaaS Workload and Operating System sources. ## Data collection and routing Azure Monitor collects and routes monitoring data using a few different mechanisms depending on the data being routed and the destination. Much like a road system built over time, not all roads lead to all locations. Some are legacy, some new, and some are better to take than others given how Azure Monitor has evolved over time. For more information, see **[data sources](data-sources.md)**. --Click on the picture to see a larger version of the data collection diagram in context. +Click on the diagram to see a larger version of the data collection diagram in context. |Collection method|Description | |||-|[Application instrumentation](app/app-insights-overview.md)| Application Insights is enabled through either [Autoinstrumentation (agent)](app/codeless-overview.md#what-is-autoinstrumentation-for-azure-monitor-application-insights) or by adding the Application Insights SDK to your application code. For more information, reference [How do I instrument an application?](app/app-insights-overview.md#how-do-i-instrument-an-application).| +|[Application instrumentation](app/app-insights-overview.md)| Application Insights is enabled through either [Auto-Instrumentation (agent)](app/codeless-overview.md) or by adding the Application Insights SDK to your application code. In addition, Application Insights is in the process of implementing [OpenTelemetry](./app/opentelemetry-overview.md). For more information, reference [How do I instrument an application?](app/app-insights-overview.md#how-do-i-instrument-an-application).| |[Agents](agents/agents-overview.md)|Agents can collect monitoring data from the guest operating system of Azure and hybrid virtual machines.| |[Data collection rules](essentials/data-collection-rule-overview.md)|Use data collection rules to specify what data should be collected, how to transform it, and where to send it.|-|Internal| Data is automatically sent to a destination without user configuration. | +|Zero Config| Data is automatically sent to a destination without user configuration. Platform metrics are the most common example. 
| |[Diagnostic settings](essentials/diagnostic-settings.md)|Use diagnostic settings to determine where to send resource log and activity log data on the data platform.| |[Azure Monitor REST API](logs/logs-ingestion-api-overview.md)|The Logs Ingestion API in Azure Monitor lets you send data to a Log Analytics workspace in Azure Monitor Logs. You can also send metrics into the Azure Monitor Metrics store using the custom metrics API.|-|[Azure Event Hubs](logs/ingest-logs-event-hub.md)|Azure Event Hubs is a big data streaming platform that can collect events from multiple sources. This is a highly scalable method of collecting data from a wide range of sources with minimum configuration. By setting a data collection rule, you can ingest data you need directly from an event hub into Azure Monitor Logs.| A common way to route monitoring data to other non-Microsoft tools is using *Event hubs*. See more in the [Integrate](#integrate) section below. +SCOM MI (like on-premises SCOM) uses an agent to collect data, which it sends to a management server running in SCOM MI on Azure. + For detailed information about data collection, see [data collection](./best-practices-data-collection.md). ## Data platform ++Azure Monitor stores data in data stores for each of the three pillars of observability, plus an additional one: + - metrics + - logs + - distributed traces + - changes ++ Each store is optimized for specific types of data and monitoring scenarios. +++Click on the picture above to see the data platform in the context of Azure Monitor as a whole. ++|Pillar of Observability/<br>Data Store|Description| +||| +|[Azure Monitor Metrics](essentials/data-platform-metrics.md)|Metrics are numerical values that describe an aspect of a system at a particular point in time. [Azure Monitor Metrics](./essentials/data-platform-metrics.md) is a time-series database, optimized for analyzing time-stamped data. Azure Monitor collects metrics at regular intervals. Metrics are identified with a timestamp, a name, a value, and one or more defining labels. They can be aggregated using algorithms, compared to other metrics, and analyzed for trends over time. It supports native Azure Monitor metrics and [Prometheus metrics](essentials/prometheus-metrics-overview.md).| +|[Azure Monitor Logs](logs/data-platform-logs.md)|Logs are recorded system events. Logs can contain different types of data, be structured or free-form text, and they contain a timestamp. Azure Monitor stores structured and unstructured log data of all types in [Azure Monitor Logs](./logs/data-platform-logs.md). You can route data to [Log Analytics workspaces](./logs/log-analytics-overview.md) for querying and analysis.| +|Traces|[Distributed tracing](app/distributed-tracing.md) allows you to see the path of a request as it travels through different services and components. Azure Monitor gets distributed trace data from [instrumented applications](app/app-insights-overview.md#how-do-i-instrument-an-application). The trace data is stored in a separate workspace in Azure Monitor Logs.| +|Changes|Changes are a series of events in your application and resources. They're tracked and stored when you use the [Change Analysis](./change/change-analysis.md) service, which uses [Azure Resource Graph](../governance/resource-graph/overview.md) as its store. 
Change Analysis helps you understand which changes, such as deploying updated code, may have caused issues in your systems.| ++Distributed tracing is a technique used to trace requests as they travel through a distributed system. It allows you to see the path of a request as it travels through different services and components. It helps you to identify performance bottlenecks and troubleshoot issues in a distributed system. ++For less expensive, long-term archival of monitoring data for auditing or compliance purposes, you can export to [Azure Storage](/azure/storage/). ++SCOM MI is similar to SCOM on-premises. It stores its information in a SQL database, but uses SQL Managed Instance because it's in Azure. ++ ## Consumption The following sections outline methods and services that consume monitoring data from the Azure Monitor data platform. All areas in the *consumption* section of the diagram have a user interface that appears in the Azure portal. +The top part of the consumption section applies to Azure Monitor core only. SCOM MI uses the traditional Ops Console running in the cloud. It can also send monitoring data to Power BI for visualization. + ### The Azure portal The Azure portal is a web-based, unified console that provides an alternative to command-line tools. With the Azure portal, you can manage your Azure subscription using a graphical user interface. You can build, manage, and monitor everything from simple web apps to complex cloud deployments in the portal. The *Monitor* section of the Azure portal provides a visual interface that gives you access to the data collected for Azure resources and an easy way to access the tools, insights, and visualizations in Azure Monitor. The Azure portal is a web-based, unified console that provides an alternative to ### Insights -Some Azure resource providers have curated visualizations that provide a customized monitoring experience and require minimal configuration. Insights are large, scalable, curated visualizations. +Some Azure resource providers have curated visualizations that provide a customized monitoring experience and require minimal configuration. Insights are large, scalable, curated visualizations. The following table describes some of the larger insights: The following table describes some of the larger insights: |[VM Insights](vm/vminsights-overview.md)|VM Insights monitors your Azure VMs. It analyzes the performance and health of your Windows and Linux VMs and identifies their different processes and interconnected dependencies on external processes. The solution includes support for monitoring performance and application dependencies for VMs hosted on-premises or another cloud provider.| |[Network Insights](../network-watcher/network-insights-overview.md)|Network Insights provides a comprehensive and visual representation through topologies, of health and metrics for all deployed network resources, without requiring any configuration. It also provides access to network monitoring capabilities like Connection Monitor, flow logging for network security groups (NSGs), and Traffic Analytics as well as other diagnostic features. | -For more information, see the [list of insights and curated visualizations in the Azure Monitor Insights overview](insights/insights-overview.md). +For more information, see the [list of insights and curated visualizations in the Azure Monitor Insights overview](insights/insights-overview.md). 
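As a minimal sketch of consuming the Logs store described above, the following PowerShell runs a KQL query against a Log Analytics workspace with the Az.OperationalInsights module. The workspace ID and query are placeholders.

```powershell
# Query the Azure Monitor Logs data platform from PowerShell.
# The workspace ID is a placeholder; requires Connect-AzAccount and Az.OperationalInsights.
$workspaceId = "00000000-0000-0000-0000-000000000000"
$query = "Heartbeat | summarize LastHeartbeat = max(TimeGenerated) by Computer"

$result = Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query $query
$result.Results | Format-Table
```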
### Visualize Visualizations such as charts and tables are effective tools for summarizing monitoring data and presenting it to different audiences. Azure Monitor has its own features for visualizing monitoring data and uses other Azure services for publishing it to different audiences. Power BI and Grafana are not officially part of the Azure Monitor product, but they're a core integration and part of the Azure Monitor story. Visualizations such as charts and tables are effective tools for summarizing mon |[Dashboards](visualize/tutorial-logs-dashboards.md)|Azure dashboards allow you to combine different kinds of data into a single pane in the Azure portal. You can optionally share the dashboard with other Azure users. You can add the output of any log query or metrics chart to an Azure dashboard. For example, you could create a dashboard that combines tiles that show a graph of metrics, a table of activity logs, a usage chart from Application Insights, and the output of a log query.| |[Workbooks](visualize/workbooks-overview.md)|Workbooks provide a flexible canvas for data analysis and the creation of rich visual reports in the Azure portal. You can use them to query data from multiple data sources. Workbooks can combine and correlate data from multiple data sets in one visualization giving you easy visual representation of your system. Workbooks are interactive and can be shared across teams with data updating in real time. Use workbooks provided with Insights, utilize the library of templates, or create your own.| |[Power BI](logs/log-powerbi.md)|Power BI is a business analytics service that provides interactive visualizations across various data sources. It's an effective means of making data available to others within and outside your organization. You can configure Power BI to automatically import log data from Azure Monitor to take advantage of these visualizations. |-|[Grafana](visualize/grafana-plugin.md)|Grafana is an open platform that excels in operational dashboards. Grafana has popular plug-ins and dashboard templates for APM tools such as Dynatrace, New Relic, and AppDynamics. You can use these resources to visualize Azure platform data alongside other metrics from higher in the stack collected by other tools. It also has AWS CloudWatch and GCP BigQuery plug-ins for multicloud monitoring in a single pane of glass. All versions of Grafana include the Azure Monitor data source plug-in to visualize your Azure Monitor metrics and logs. Azure Managed Grafana also optimizes this experience for Azure-native data stores such as Azure Monitor and Azure Data Explorer. In this way, you can easily connect to any resource in your subscription and view all resulting monitoring data in a familiar Grafana dashboard. It also supports pinning charts from Azure Monitor metrics and logs to Grafana dashboards.| +|[Grafana](visualize/grafana-plugin.md)|Grafana is an open platform that excels in operational dashboards. All versions of Grafana include the Azure Monitor data source plug-in to visualize your Azure Monitor metrics and logs. Azure Managed Grafana also optimizes this experience for Azure-native data stores such as Azure Monitor and Azure Data Explorer. In this way, you can easily connect to any resource in your subscription and view all resulting monitoring data in a familiar Grafana dashboard. It also supports pinning charts from Azure Monitor metrics and logs to Grafana dashboards. 
<br/><br/> Grafana has popular plug-ins and dashboard templates for non-Microsoft APM tools such as Dynatrace, New Relic, and AppDynamics as well. You can use these resources to visualize Azure platform data alongside other metrics from higher in the stack collected by these other tools. It also has AWS CloudWatch and GCP BigQuery plug-ins for multicloud monitoring in a single pane of glass.| ### Analyze The Azure portal contains built-in tools that allow you to analyze monitoring data. |Tool |Description | ||| The Azure portal contains built-in tools that allow you to analyze monitoring da An effective monitoring solution proactively responds to critical events, without the need for an individual or team to notice the issue. The response could be a text or email to an administrator, or an automated process that attempts to correct an error condition. -**[Alerts](alerts/alerts-overview.md)** notify you of critical conditions and can take corrective action. Alert rules can be based on metric or log data. Metric alert rules provide near-real-time alerts based on collected metrics. Log alerts rules based on logs allow for complex logic across data from multiple sources. -Alert rules use action groups, which can perform actions like sending email or SMS notifications. Action groups can send notifications using webhooks to trigger external processes or to integrate with your IT service management tools. Action groups, actions, and sets of recipients can be shared across multiple rules. ++[**Artificial Intelligence for IT Operations (AIOps)**](logs/aiops-machine-learning.md) can improve service quality and reliability by using machine learning to process and automatically act on data you collect from applications, services, and IT resources into Azure Monitor. It automates data-driven tasks, predicts capacity usage, identifies performance issues, and detects anomalies across applications, services, and IT resources. These features simplify IT monitoring and operations without requiring machine learning expertise. ++**[Azure Monitor Alerts](alerts/alerts-overview.md)** notify you of critical conditions and can take corrective action. Alert rules can be based on metric or log data. + +- Metric alert rules provide near-real-time alerts based on collected metrics. +- Log alert rules, based on logs, allow for complex logic across data from multiple sources. ++Alert rules use [action groups](alerts/action-groups.md), which can perform actions such as sending email or SMS notifications. Action groups can send notifications using webhooks to trigger external processes or to integrate with your IT service management tools. Action groups, actions, and sets of recipients can be shared across multiple rules. :::image type="content" source="media/overview/alerts.png" alt-text="Screenshot that shows the Azure Monitor alerts UI in the Azure portal." lightbox="media/overview/alerts.png"::: +SCOM MI currently uses its own separate traditional SCOM alerting mechanism in the Ops Console. + **[Autoscale](autoscale/autoscale-overview.md)** allows you to dynamically control the number of resources running to handle the load on your application. You can create rules that use Azure Monitor metrics to determine when to automatically add resources when the load increases or remove resources that are sitting idle. You can specify a minimum and maximum number of instances, and the logic for when to increase or decrease resources to save money and to increase performance. 
:::image type="content" source="media/overview/autoscale.png" border="false" alt-text="Conceptual diagram showing how autoscale grows" ::: -**[Azure Logic Apps](../logic-apps/logic-apps-overview.md)** is a service where you can create and run automated workflows with little to no code. While not a part of the Azure Monitor product, it's a core part of the story. You can use Logic Apps to [customize responses and perform other actions in response to to Azure Monitor alerts](alerts/alerts-logic-apps.md). You can also use Logic Apps to perform other [more complex actions](logs/logicapp-flow-connector.md) if the Azure Monitor infrastructure doesn't have a built-it method. +**[Azure Logic Apps](../logic-apps/logic-apps-overview.md)** is also an option. For more information, see the [Integrate](#integrate) section below. ## Integrate You may need to integrate Azure Monitor with other systems or to build custom solutions that use your monitoring data. These Azure services work with Azure Monitor to provide integration capabilities. Below are only a few of the possible integrations. + |Azure service |Description | ||| |[Event Hubs](../event-hubs/event-hubs-about.md)|Azure Event Hubs is a streaming platform and event ingestion service. It can transform and store data by using any real-time analytics provider or batching/storage adapters. Use Event Hubs to stream Azure Monitor data to partner SIEM and monitoring tools.|-|[Logic Apps](../logic-apps/logic-apps-overview.md)|Azure Logic Apps is a service you can use to automate tasks and business processes by using workflows that integrate with different systems and services. Activities are available that read and write metrics and logs in Azure Monitor.| +|[Azure Storage](../storage/common/storage-introduction.md)| Export data to Azure Storage for less expensive, long-term archival of monitoring data for auditing or compliance purposes. | +|Hosted and Managed Partners | Many [external partners](partners.md) integrate with Azure Monitor. Azure Monitor has partnered with other monitoring providers to provide an [Azure-hosted version of their products](/azure/partner-solutions/) to make interoperability easier. Examples include Elastic, Datadog, Logz.io, and Dynatrace. | |[API](/rest/api/monitor/)|Multiple APIs are available to read and write metrics and logs to and from Azure Monitor in addition to accessing generated alerts. You can also configure and retrieve alerts. With APIs, you have unlimited possibilities to build custom solutions that integrate with Azure Monitor.|-|[Hosted Partners](partners.md) | Many external partners integrate with Azure Monitor. Some integrations are [hosted on the Azure platform itself](/azure/partner-solutions/) to make integration faster and easier. +|[Azure Logic Apps](../logic-apps/logic-apps-overview.md)|Azure Logic Apps is a service you can use to automate tasks and business processes by using workflows that integrate with different systems and services with little or no code. Activities are available that read and write metrics and logs in Azure Monitor. You can use Logic Apps to [customize responses and perform other actions in response to Azure Monitor alerts](alerts/alerts-logic-apps.md). 
You can also perform other [more complex actions](logs/logicapp-flow-connector.md) when the Azure Monitor infrastructure doesn't already supply a built-in method.| +|[Azure Functions](../azure-functions/functions-overview.md)| Similar to Azure Logic Apps, Azure Functions gives you the ability to preprocess and postprocess monitoring data, as well as perform complex actions beyond the scope of typical Azure Monitor alerts. Because Azure Functions uses code, it provides additional flexibility over Logic Apps. | +|Azure DevOps and GitHub | Azure Monitor Application Insights gives you the ability to create [Work Item Integration](app/work-item-integration.md) with monitoring data embedded in it. Additional options include [release annotations](app/annotations.md) and [continuous monitoring](app/continuous-monitoring.md). | + ## Next steps - [Getting started with Azure Monitor](getting-started.md) |
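To ground the alerting model described in the overview entry above (alert rules plus action groups), here's a rough PowerShell sketch using Az.Monitor cmdlets. All names and resource IDs are placeholders, and parameter sets vary between module versions, so treat this as illustrative rather than copy-paste ready.

```powershell
# Sketch: an email action group plus a metric alert rule that fires on high CPU.
# All names and IDs are placeholders; verify cmdlets against your Az.Monitor version.
$email = New-AzActionGroupReceiver -Name "oncall-email" -EmailReceiver -EmailAddress "oncall@contoso.com"
$ag    = Set-AzActionGroup -Name "my-action-group" -ResourceGroupName "my-rg" `
           -ShortName "oncall" -Receiver $email

# Near-real-time metric alert: average CPU above 80% over a 5-minute window.
$criteria = New-AzMetricAlertRuleV2Criteria -MetricName "Percentage CPU" `
              -TimeAggregation Average -Operator GreaterThan -Threshold 80

Add-AzMetricAlertRuleV2 -Name "high-cpu" -ResourceGroupName "my-rg" `
    -TargetResourceId "/subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.Compute/virtualMachines/my-vm" `
    -WindowSize 00:05:00 -Frequency 00:01:00 -Severity 3 `
    -Condition $criteria -ActionGroupId $ag.Id
```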
azure-netapp-files | Azure Netapp Files Solution Architectures | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-solution-architectures.md | This section provides references to SAP on Azure solutions. * [Deploy disaster recovery using JetStream DR software](../azure-vmware/deploy-disaster-recovery-using-jetstream.md#disaster-recovery-with-azure-netapp-files-jetstream-dr-and-azure-vmware-solution) * [Disaster Recovery with Azure NetApp Files, JetStream DR and AVS (Azure VMware Solution)](https://www.jetstreamsoft.com/portal/jetstream-knowledge-base/disaster-recovery-with-azure-netapp-files-jetstream-dr-and-avs-azure-vmware-solution/) - Jetstream * [Enable App Volume Replication for Horizon VDI on Azure VMware Solution using Azure NetApp Files](https://techcommunity.microsoft.com/t5/azure-migration-and/enable-app-volume-replication-for-horizon-vdi-on-azure-vmware/ba-p/3798178)+* [Disaster Recovery using cross-region replication with Azure NetApp Files datastores for AVS](https://techcommunity.microsoft.com/t5/azure-architecture-blog/disaster-recovery-using-cross-region-replication-with-azure/ba-p/3870682) ## Virtual Desktop Infrastructure solutions |
backup | About Azure Vm Restore | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/about-azure-vm-restore.md | Title: About the Azure Virtual Machine restore process description: Learn how the Azure Backup service restores Azure virtual machines Last updated 12/24/2021--++ # About Azure VM restore |
backup | About Restore Microsoft Azure Recovery Services | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/about-restore-microsoft-azure-recovery-services.md | description: Learn about the restore options available with the Microsoft Azure Last updated 05/07/2021--++ # About restore using the Microsoft Azure Recovery Services (MARS) agent |
backup | Active Directory Backup Restore | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/active-directory-backup-restore.md | Title: Back up and restore Active Directory description: Learn how to back up and restore Active Directory domain controllers. Last updated 07/08/2020--++ # Back up and restore Active Directory domain controllers |
backup | Archive Tier Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/archive-tier-support.md | |
backup | Automation Backup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/automation-backup.md | Title: Automation in Azure Backup description: Provides a summary of automation capabilities offered by Azure Backup. Last updated 09/15/2022-- ++ # Automation in Azure Backup |
backup | Azure Backup Architecture For Sap Hana Backup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-backup-architecture-for-sap-hana-backup.md | |
backup | Azure Backup Glossary | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-backup-glossary.md | description: This article defines terms helpful for use with Azure Backup. Last updated 12/21/2020--++ # Azure Backup glossary |
backup | Azure Backup Move Vaults Across Regions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-backup-move-vaults-across-regions.md | description: In this article, you'll learn how to ensure continued backups after Last updated 09/24/2021 --++ # Back up resources in Recovery Services vault after moving across regions |
backup | Azure Backup Pricing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-backup-pricing.md | Title: Azure Backup pricing description: Learn how to estimate your costs for budgeting Azure Backup pricing. Last updated 06/16/2020--++ # Azure Backup pricing |
backup | Azure File Share Backup Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-file-share-backup-overview.md | Title: About Azure file share backup description: Learn how to back up Azure file shares in the Recovery Services vault Last updated 03/08/2022- - ++ # About Azure file share backup |
backup | Azure File Share Support Matrix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-file-share-support-matrix.md | |
backup | Azure Kubernetes Service Backup Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-backup-overview.md | description: This article gives you an understanding about Azure Kubernetes Serv Last updated 04/05/2023--++ # About Azure Kubernetes Service backup using Azure Backup (preview) |
backup | Azure Kubernetes Service Backup Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-backup-troubleshoot.md | description: Symptoms, causes, and resolutions of Azure Kubernetes Service backu Last updated 03/15/2023 --++ # Troubleshoot Azure Kubernetes Service backup and restore (preview) |
backup | Azure Kubernetes Service Cluster Backup Concept | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-cluster-backup-concept.md | description: This article explains the prerequisites for Azure Kubernetes Servic Last updated 03/27/2023--++ # Prerequisites for Azure Kubernetes Service backup using Azure Backup (preview) |
backup | Azure Kubernetes Service Cluster Backup Support Matrix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-cluster-backup-support-matrix.md | |
backup | Azure Kubernetes Service Cluster Backup Using Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-cluster-backup-using-cli.md | |
backup | Azure Kubernetes Service Cluster Backup Using Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-cluster-backup-using-powershell.md | |
backup | Azure Kubernetes Service Cluster Backup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-cluster-backup.md | description: This article explains how to back up Azure Kubernetes Service (AKS) Last updated 05/25/2023--++ # Back up Azure Kubernetes Service using Azure Backup (preview) |
backup | Azure Kubernetes Service Cluster Manage Backups | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-cluster-manage-backups.md | |
backup | Azure Kubernetes Service Cluster Restore Using Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-cluster-restore-using-cli.md | |
backup | Azure Kubernetes Service Cluster Restore Using Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-cluster-restore-using-powershell.md | |
backup | Azure Kubernetes Service Cluster Restore | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-cluster-restore.md | description: This article explains how to restore backed-up Azure Kubernetes Ser Last updated 05/25/2023--++ # Restore Azure Kubernetes Service using Azure Backup (preview) |
backup | Azure Policy Configure Diagnostics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-policy-configure-diagnostics.md | Title: Configure Vault Diagnostics settings at scale description: Configure Log Analytics Diagnostics settings for all vaults in a given scope using Azure Policy Last updated 02/14/2020--++ # Configure Vault Diagnostics settings at scale |
chaos-studio | Chaos Studio Limitations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-limitations.md | During the public preview of Azure Chaos Studio, there are a few limitations and ## Limitations -* The target resources must be in [one of the regions supported by the Azure Chaos Studio Preview](https://azure.microsoft.com/global-infrastructure/services/?products=chaos-studio). -* Azure Chaos Studio tracked resources (for example, Experiments) currently do NOT support Resource Move. Experiments can be easily copied (by copying Experiment JSON) for use in other subscriptions, resource groups, or regions. Experiments can also already target resources across regions. Extension resources (Targets and Capabilities) do support Resource Move. -* For agent-based faults, the virtual machine must have outbound network access to the Chaos Studio agent service: - * Regional endpoints to allowlist are listed in [Permissions and security in Azure Chaos Studio](chaos-studio-permissions-security.md#network-security). - * If you're sending telemetry data to Application Insights, the IPs in [IP addresses used by Azure Monitor](../azure-monitor/app/ip-addresses.md) are also required. -* If you run an experiment that makes use of the Chaos Studio agent, the virtual machine must run one of the following operating systems: -- * Windows Server 2019, Windows Server 2016, Windows Server 2012, and Windows Server 2012 R2 - * Red Hat Enterprise Linux 8.2, SUSE Enterprise Linux 15 SP2, CentOS 8.2, Debian 10 Buster (with unzip installation required), Oracle Linux 7.8, Ubuntu Server 16.04 LTS, and Ubuntu Server 18.04 LTS -* The Chaos Studio agent isn't tested against custom Linux distributions or hardened Linux distributions (for example, FIPS or SELinux). -* The Chaos Studio portal experience has only been tested on the following browsers: +- **Supported regions** - The target resources must be in [one of the regions supported by the Azure Chaos Studio Preview](https://azure.microsoft.com/global-infrastructure/services/?products=chaos-studio). +- **Resource Move not supported** - Azure Chaos Studio tracked resources (for example, Experiments) currently do NOT support Resource Move. Experiments can be easily copied (by copying Experiment JSON) for use in other subscriptions, resource groups, or regions. Experiments can also already target resources across regions. Extension resources (Targets and Capabilities) do support Resource Move. +- **VMs require network access to Chaos Studio** - For agent-based faults, the virtual machine must have outbound network access to the Chaos Studio agent service: + - Regional endpoints to allowlist are listed in [Permissions and security in Azure Chaos Studio](chaos-studio-permissions-security.md#network-security). + - If you're sending telemetry data to Application Insights, the IPs in [IP addresses used by Azure Monitor](../azure-monitor/app/ip-addresses.md) are also required. 
++- **Supported VM operating systems** - If you run an experiment that makes use of the Chaos Studio agent, the virtual machine must run one of the following operating systems: ++ - Windows Server 2019, Windows Server 2016, Windows Server 2012, and Windows Server 2012 R2 + - Red Hat Enterprise Linux 8.2, SUSE Enterprise Linux 15 SP2, CentOS 8.2, Debian 10 Buster (with unzip installation required), Oracle Linux 7.8, Ubuntu Server 16.04 LTS, and Ubuntu Server 18.04 LTS +- **Hardened Linux untested** - The Chaos Studio agent isn't tested against custom Linux distributions or hardened Linux distributions (for example, FIPS or SELinux). +- **Supported browsers** - The Chaos Studio portal experience has only been tested on the following browsers: * **Windows:** Microsoft Edge, Google Chrome, and Firefox * **MacOS:** Safari, Google Chrome, and Firefox |
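Related to the outbound network access requirement above, a quick way to validate connectivity from a Windows target VM is `Test-NetConnection`; the host name below is a placeholder for the regional endpoint listed in the network security allowlist article.

```powershell
# Check outbound connectivity to the Chaos Studio agent service from a target VM (Windows).
# Replace the placeholder with the regional endpoint from the network security allowlist.
Test-NetConnection -ComputerName "<regional-chaos-studio-endpoint>" -Port 443
```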
cloud-services-extended-support | Deploy Visual Studio | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/deploy-visual-studio.md | - Title: Use Cloud Services (extended support) (Preview) -description: Learn now to create and deploy an Azure Cloud Service using Azure Resource Manager with Visual Studio ------ Previously updated : 10/5/2020----# Create and deploy a Azure Cloud Service (extended support) using Visual Studio --Starting with [Visual Studio 2019 version 16.9](https://visualstudio.microsoft.com/vs/preview/) (currently in preview), you can work with cloud services using Azure Resource Manager (ARM), which greatly simplifies and modernizes maintenance and management of Azure resources. This is enabled by a new Azure service referred to as Cloud Services (extended support). You can publish an existing cloud service to Cloud Services (extended support). For information on this Azure service, see [Cloud Services (extended support) documentation](overview.md). --> [!IMPORTANT] -> Cloud Services (extended support) is currently in public preview. -> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. -> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). --## Register the feature for your subscription -Cloud Services (extended support) is currently in preview. Register the feature for your subscription as follows: --```powershell -Register-AzProviderFeature -FeatureName CloudServices -ProviderNamespace Microsoft.Compute -``` -For more information see [Prerequisites for deploying Cloud Services (extended support)](deploy-prerequisite.md) --## Create a project --Visual Studio provides a project template that lets you create an Azure Cloud Service with extended support, named **Azure Cloud Service (extended support)**. A Cloud Service is a simple general-purpose Azure service. Once the project has been created, Visual Studio enables you to configure, debug, and deploy the Cloud Service to Azure. --### To create an Azure Cloud Service (extended support) project in Visual Studio --This section walks you through creating an Azure Cloud Service project in Visual Studio with one or more web roles. --1. From the start window, choose **Create a new project**. --1. In the search box, type in *Cloud*, and then choose **Azure Cloud Service (extended support)**. --  --1. Give the project a name and choose **Create**. --  --1. In the **New Microsoft Azure Cloud Service** dialog, select the roles that you want to add, and choose the right arrow button to add them to your solution. --  --1. To rename a role that you've added, hover on the role in the **New Microsoft Azure Cloud Service** dialog, and, from the context menu, select **Rename**. You can also rename a role within your solution (in the **Solution Explorer**) after it has been added. --  --The Visual Studio Azure project has associations to the role projects in the solution. 
The project also includes the *service definition file* and *service configuration file*: --- **Service definition file** - Defines the run-time settings for your application, including what roles are required, endpoints, and virtual machine size.-- **Service configuration file** - Configures how many instances of a role are run and the values of the settings defined for a role.--For more information about these files, see [Configure the Roles for an Azure Cloud Service with Visual Studio](/visualstudio/azure/vs-azure-tools-configure-roles-for-cloud-service). --## Publish a Cloud Service --1. Create or open an Azure Cloud Service project in Visual Studio. --1. In **Solution Explorer**, right-click the project, and, from the context menu, select **Publish**. --  --1. **Account** - Select an account or select **Add an account** in the account dropdown list. --1. **Choose your subscription** - Choose the subscription to use for your deployment. The subscription you use for deploying Cloud Services (extended support) needs to have Owner or Contributor roles assigned via role-based access control (RBAC). If your subscription does not have any one of these roles, see [Steps to add a role assignment](../role-based-access-control/role-assignments-steps.md) to add this before proceeding further. --1. Choose **Next** to move to the **Settings** page. --  --1. **Cloud service** - Using the dropdown, either select an existing Cloud Service, or select **Create new**, and create a Cloud Service. The data center displays in parentheses for each Cloud Service. It is recommended that the data center location for the Cloud Service be the same as the data center location for the storage account. -- If you choose to create a new Cloud Service, you'll see the **Create Cloud Service (extended support)** dialog. Specify the location and resource group you want to use for the Cloud Service. --  --1. **Build configuration** - Select either **Debug** or **Release**. --1. **Service configuration** - Select either **Cloud** or **Local**. --1. **Storage account** - Select the storage account to use for this deployment, or **Create new** to create a storage account. The region displays in parentheses for each storage account. It is recommended that the data center location for the storage account is the same as the data center location for the Cloud Service (Common Settings). -- The Azure storage account stores the package for the application deployment. After the application is deployed, the package is removed from the storage account. --1. **Key Vault** - Specify the Key Vault that contains the secrets for this Cloud Service. This is enabled if remote desktop is enabled or certificates are added to the configuration. --1. **Enable Remote Desktop for all roles** - Select this option if you want to be able to remotely connect to the service. You'll be asked to specify credentials. --  --1. Choose **Next** to move to the **Diagnostics settings** page. --  -- Diagnostics enables you to troubleshoot an Azure Cloud Service (or Azure virtual machine). For information about diagnostics, see [Configuring Diagnostics for Azure Cloud Services and Virtual Machines](/visualstudio/azure/vs-azure-tools-diagnostics-for-cloud-services-and-virtual-machines). For information about Application Insights, see [What is Application Insights?](../azure-monitor/app/app-insights-overview.md). --1. Choose **Next** to move to the **Summary** page. --  --1. 
**Target profile** - You can choose to create a publishing profile from the settings that you have chosen. For example, you might create one profile for a test environment and another for production. To save this profile, choose the **Save** icon. The wizard creates the profile and saves it in the Visual Studio project. To modify the profile name, open the **Target profile** list, and then choose **Manage…**. -- > [!Note] - > The publishing profile appears in Solution Explorer in Visual Studio, and the profile settings are written to a file with an .azurePubxml extension. Settings are saved as attributes of XML tags. --1. Once you configure all the settings for your project's deployment, select **Publish** at the bottom of the dialog. You can monitor the process status in the **Azure Activity Log** output window in Visual Studio. --Congratulations! You've published your extended support Cloud Service project to Azure. To publish again with the same settings, you can reuse the publishing profile, or repeat these steps to create a new one. --## Clean up Azure resources --To clean up the Azure resources you created by following this tutorial, go to the [Azure portal](https://portal.azure.com), choose **Resource groups**, find and open the resource group you used to create the service, and choose **Delete resource group**. --## Next steps --Set up continuous integration (CI) using the **Configure** button on the **Publish** screen. For more information, see [Azure Pipelines documentation](/azure/devops/pipelines). |
cloud-services | Cloud Services Guestos Msrc Releases | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-msrc-releases.md | +## July 2023 Guest OS ++>[!NOTE] ++>The July Guest OS is currently being rolled out to Cloud Service VMs that are configured for automatic updates. When the rollout is complete, this version will be made available for manual updates through the Azure portal and configuration files. The following patches are included in the July Guest OS. This list is subject to change. ++| Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced |
+| | | | | |
+| Rel 23-07 | [5028168] | Latest Cumulative Update (LCU) | 6.60 | Jul 11, 2023 |
+| Rel 23-07 | [5028171] | Latest Cumulative Update (LCU) | 7.28 | Jul 11, 2023 |
+| Rel 23-07 | [5028169] | Latest Cumulative Update (LCU) | 5.84 | Jul 11, 2023 |
+| Rel 23-07 | [5028871] | .NET Framework 3.5 Security and Quality Rollup | 2.140 | Jul 11, 2023 |
+| Rel 23-07 | [5028865] | .NET Framework 4.7.2 Security and Quality Rollup | 2.140 | Jul 11, 2023 |
+| Rel 23-07 | [5028872] | .NET Framework 3.5 Security and Quality Rollup LKG | 4.120 | Jul 11, 2023 |
+| Rel 23-07 | [5028864] | .NET Framework 4.7.2 Cumulative Update LKG | 4.120 | Jul 11, 2023 |
+| Rel 23-07 | [5028869] | .NET Framework 3.5 Security and Quality Rollup LKG | 3.128 | Jul 11, 2023 |
+| Rel 23-07 | [5028863] | .NET Framework 4.7.2 Cumulative Update LKG | 3.128 | Jul 11, 2023 |
+| Rel 23-07 | [5028862] | .NET Framework DotNet | 6.60 | Jul 11, 2023 |
+| Rel 23-07 | [5028858] | .NET Framework 4.8 Security and Quality Rollup LKG | 7.28 | Jul 11, 2023 |
+| Rel 23-07 | [5028240] | Monthly Rollup | 2.140 | Jul 11, 2023 |
+| Rel 23-07 | [5028232] | Monthly Rollup | 3.128 | Jul 11, 2023 |
+| Rel 23-07 | [5028228] | Monthly Rollup | 4.120 | Jul 11, 2023 |
+| Rel 23-07 | [5027575] | Servicing Stack Update | 3.128 | Jun 13, 2023 |
+| Rel 23-07 | [5027574] | Servicing Stack Update LKG | 4.120 | Jun 13, 2023 |
+| Rel 23-07 | [4578013] | OOB Standalone Security Update | 4.120 | Aug 19, 2020 |
+| Rel 23-07 | [5023788] | Servicing Stack Update LKG | 5.84 | Mar 14, 2023 |
+| Rel 23-07 | [5028264] | Servicing Stack Update LKG | 2.140 | Jul 11, 2023 |
+| Rel 23-07 | [4494175] | Microcode | 5.84 | Sep 1, 2020 |
+| Rel 23-07 | [4494174] | Microcode | 6.60 | Sep 1, 2020 |
+| Rel 23-07 | [5028317] | Servicing Stack Update | 7.28 | |
+| Rel 23-07 | [5028316] | Servicing Stack Update | 6.60 | |
++[5028168]: https://support.microsoft.com/kb/5028168
+[5028171]: https://support.microsoft.com/kb/5028171
+[5028169]: https://support.microsoft.com/kb/5028169
+[5028871]: https://support.microsoft.com/kb/5028871
+[5028865]: https://support.microsoft.com/kb/5028865
+[5028872]: https://support.microsoft.com/kb/5028872
+[5028864]: https://support.microsoft.com/kb/5028864
+[5028869]: https://support.microsoft.com/kb/5028869
+[5028863]: https://support.microsoft.com/kb/5028863
+[5028862]: https://support.microsoft.com/kb/5028862
+[5028858]: https://support.microsoft.com/kb/5028858
+[5028240]: https://support.microsoft.com/kb/5028240
+[5028232]: https://support.microsoft.com/kb/5028232
+[5028228]: https://support.microsoft.com/kb/5028228
+[5027575]: https://support.microsoft.com/kb/5027575
+[5027574]: https://support.microsoft.com/kb/5027574
+[4578013]: https://support.microsoft.com/kb/4578013
+[5023788]: https://support.microsoft.com/kb/5023788
+[5028264]: https://support.microsoft.com/kb/5028264
+[4494175]: 
https://support.microsoft.com/kb/4494175 +[4494174]: https://support.microsoft.com/kb/4494174 +[5028317]: https://support.microsoft.com/kb/5028317 +[5028316]: https://support.microsoft.com/kb/5028316 + ## June 2023 Guest OS |
cognitive-services | Shelf Modify Images | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/shelf-modify-images.md | To run the image stitching operation on a set of images, follow these steps: 1. Copy the following `curl` command into a text editor. ```bash- curl.exe -H "Ocp-Apim-Subscription-Key: <subscriptionKey>" -H "Content-Type: application/json" "https://<endpoint>/vision/v4.0-preview.1/operations/shelfanalysis-productunderstanding:stitch" --output <your_filename> -d "{ + curl.exe -H "Ocp-Apim-Subscription-Key: <subscriptionKey>" -H "Content-Type: application/json" "<endpoint>/computervision/imagecomposition:stitch?api-version=2023-04-01-preview" --output <your_filename> -d "{ 'images': [ { 'url':'<your_url_string>' To correct the perspective distortion in the composite image, follow these steps 1. Copy the following `curl` command into a text editor. ```bash- curl.exe -H "Ocp-Apim-Subscription-Key: <subscriptionKey>" -H "Content-Type: application/json" "https://<endpoint>/vision/v4.0-preview.1/operations/shelfanalysis-productunderstanding:rectify" --output <your_filename> -d "{ + curl.exe -H "Ocp-Apim-Subscription-Key: <subscriptionKey>" -H "Content-Type: application/json" "<endpoint>/computervision/imagecomposition:rectify?api-version=2023-04-01-preview" --output <your_filename> -d "{ 'url': '<your_url_string>', 'controlPoints': { 'topLeft': { |
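For readers who prefer calling the image composition operations from code rather than `curl`, here's a minimal Python sketch of the stitch call shown above, using the `requests` library. The endpoint, key, and image URLs are placeholders you must supply; the API path, headers, and body shape are taken directly from the `curl` example.

```python
import requests

endpoint = "<endpoint>"                 # your Computer Vision resource endpoint
subscription_key = "<subscriptionKey>"  # your resource key

url = f"{endpoint}/computervision/imagecomposition:stitch?api-version=2023-04-01-preview"
headers = {
    "Ocp-Apim-Subscription-Key": subscription_key,
    "Content-Type": "application/json",
}
body = {"images": [{"url": "<your_url_string>"}, {"url": "<your_url_string_2>"}]}

response = requests.post(url, headers=headers, json=body)
response.raise_for_status()

# Like the curl command's --output flag, save the returned composite image bytes.
with open("stitched_result.jpg", "wb") as f:
    f.write(response.content)
```

The rectify call follows the same pattern with the `imagecomposition:rectify` path and a body containing `url` and `controlPoints`.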
cognitive-services | Use Blocklist | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/content-safety/how-to/use-blocklist.md | The default AI classifiers are sufficient for most content moderation needs. How * An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/) * Once you have your Azure subscription, <a href="https://aka.ms/acs-create" title="Create a Content Safety resource" target="_blank">create a Content Safety resource </a> in the Azure portal to get your key and endpoint. Enter a unique name for your resource, select the subscription you entered on the application form, select a resource group, supported region, and supported pricing tier. Then select **Create**. * The resource takes a few minutes to deploy. After it finishes, Select **go to resource**. In the left pane, under **Resource Management**, select **Subscription Key and Endpoint**. The endpoint and either of the keys are used to call APIs.-* [cURL](https://curl.haxx.se/) or * [Python 3.x](https://www.python.org/) installed - * Your Python installation should include [pip](https://pip.pypa.io/en/stable/). You can check if you have pip installed by running `pip --version` on the command line. Get pip by installing the latest version of Python. - * If you're using the Python SDK, you'll need to install the Azure AI Content Safety client library for Python. Run the command `pip install azure-ai-contentsafety` in your project directory. +* One of the following installed: + * [cURL](https://curl.haxx.se/) for REST API calls. + * [Python 3.x](https://www.python.org/) installed + * Your Python installation should include [pip](https://pip.pypa.io/en/stable/). You can check if you have pip installed by running `pip --version` on the command line. Get pip by installing the latest version of Python. + * If you're using Python, you'll need to install the Azure AI Content Safety client library for Python. Run the command `pip install azure-ai-contentsafety` in your project directory. + * [.NET Runtime](https://dotnet.microsoft.com/download/dotnet/) installed. + * [.NET 6.0](https://dotnet.microsoft.com/download/dotnet-core) SDK or above installed. + * If you're using .NET, you'll need to install the Azure AI Content Safety client library for .NET. Run the command `dotnet add package Azure.AI.ContentSafety --prerelease` in your project directory. + ## Analyze text with a blocklist You can create blocklists to use with the Text API. The following steps help you get started. -- ### Create or modify a blocklist #### [REST API](#tab/rest) curl --location --request PATCH '<endpoint>/contentsafety/text/blocklists/<your_ The response code should be `201`(created a new list) or `200`(updated an existing list). +#### [C#](#tab/csharp) ++Create a new C# console app and open it in your preferred editor or IDE. Paste in the following code. 
++```csharp
+string endpoint = "<endpoint>";
+string key = "<enter_your_key_here>";
+ContentSafetyClient client = new ContentSafetyClient(new Uri(endpoint), new AzureKeyCredential(key));
++var blocklistName = "<your_list_id>";
+var blocklistDescription = "<description>";
++var data = new
+{
+    description = blocklistDescription,
+};
++var createResponse = client.CreateOrUpdateTextBlocklist(blocklistName, RequestContent.Create(data));
+if (createResponse.Status == 201)
+{
+    Console.WriteLine("\nBlocklist {0} created.", blocklistName);
+}
+else if (createResponse.Status == 200)
+{
+    Console.WriteLine("\nBlocklist {0} updated.", blocklistName);
+}
+```
++1. Replace `<endpoint>` with your endpoint URL.
+1. Replace `<enter_your_key_here>` with your key.
+1. Replace `<your_list_id>` with a custom name for your list. Allowed characters: 0-9, A-Z, a-z, `- . _ ~`.
+1. Optionally replace `<description>` with a custom description.
+1. Run the script.
+
 #### [Python](#tab/python)  Create a new Python script and open it in your preferred editor or IDE. Paste in the following code.  from azure.core.exceptions import HttpResponseError  endpoint = "<endpoint>" key = "<enter_your_key_here>" -# Create an Content Safety client +# Create a Content Safety client client = ContentSafetyClient(endpoint, AzureKeyCredential(key))  def create_or_update_text_blocklist(name, description): def create_or_update_text_blocklist(name, description):             blocklist_name=name, resource=TextBlocklist(description=description)         )     except HttpResponseError as e:-        print("Create or update text blocklist failed. ")
-        print("Error code: {}".format(e.error.code))
-        print("Error message: {}".format(e.error.message))
-        return None
-    except Exception as e:
+        print("\nCreate or update text blocklist failed: ")
+        if e.error:
+            print(f"Error code: {e.error.code}")
+            print(f"Error message: {e.error.message}")
+            raise
         print(e)-        return None
+        raise   if __name__ == "__main__": The response code should be `200`. } ``` +#### [C#](#tab/csharp) ++Create a new C# console app and open it in your preferred editor or IDE. Paste in the following code. 
++```csharp
+string endpoint = "<endpoint>";
+string key = "<enter_your_key_here>";
+ContentSafetyClient client = new ContentSafetyClient(new Uri(endpoint), new AzureKeyCredential(key));
++var blocklistName = "<your_list_id>";
++string blockItemText1 = "k*ll";
+string blockItemText2 = "h*te";
++var blockItems = new TextBlockItemInfo[] { new TextBlockItemInfo(blockItemText1), new TextBlockItemInfo(blockItemText2) };
+var addedBlockItems = client.AddBlockItems(blocklistName, new AddBlockItemsOptions(blockItems));
++if (addedBlockItems != null && addedBlockItems.Value != null)
+{
+    Console.WriteLine("\nBlockItems added:");
+    foreach (var addedBlockItem in addedBlockItems.Value.Value)
+    {
+        Console.WriteLine("BlockItemId: {0}, Text: {1}, Description: {2}", addedBlockItem.BlockItemId, addedBlockItem.Text, addedBlockItem.Description);
+    }
+}
+```
++1. Replace `<endpoint>` with your endpoint URL.
+1. Replace `<enter_your_key_here>` with your key.
+1. Replace `<your_list_id>` with the ID value you used in the list creation step.
+1. Replace the value of the `blockItemText1` variable with the item you'd like to add to your blocklist. The maximum length of a blockItem is 128 characters.
+1. Optionally add more blockItem strings to the `blockItems` array.
+1. Run the script. 
+ #### [Python](#tab/python) Create a new Python script and open it in your preferred editor or IDE. Paste in the following code. from azure.ai.contentsafety.models import TextBlockItemInfo, AddBlockItemsOption from azure.core.exceptions import HttpResponseError import time - endpoint = "<endpoint>" key = "<enter_your_key_here>" def add_block_items(name, items): body=AddBlockItemsOptions(block_items=block_items), ) except HttpResponseError as e:- print("Add block items failed.") - print("Error code: {}".format(e.error.code)) - print("Error message: {}".format(e.error.message)) - return None -- except Exception as e: + print("\nAdd block items failed: ") + if e.error: + print(f"Error code: {e.error.code}") + print(f"Error message: {e.error.message}") + raise print(e)- return None + raise return response.value The JSON response will contain a `"blocklistMatchResults"` that indicates any ma } ``` +#### [C#](#tab/csharp) ++Create a new C# console app and open it in your preferred editor or IDE. Paste in the following code. ++```csharp +string endpoint = "<endpoint>"; +string key = "<enter_your_key_here>"; +ContentSafetyClient client = new ContentSafetyClient(new Uri(endpoint), new AzureKeyCredential(key)); ++var blocklistName = "<your_list_id>"; ++// After you edit your blocklist, it usually takes effect in 5 minutes, please wait some time before analyzing with blocklist after editing. +var request = new AnalyzeTextOptions("I h*te you and I want to k*ll you"); +request.BlocklistNames.Add(blocklistName); +request.BreakByBlocklists = true; ++Response<AnalyzeTextResult> response; +try +{ + response = client.AnalyzeText(request); +} +catch (RequestFailedException ex) +{ + Console.WriteLine("Analyze text failed.\nStatus code: {0}, Error code: {1}, Error message: {2}", ex.Status, ex.ErrorCode, ex.Message); + throw; +} ++if (response.Value.BlocklistsMatchResults != null) +{ + Console.WriteLine("\nBlocklist match result:"); + foreach (var matchResult in response.Value.BlocklistsMatchResults) + { + Console.WriteLine("Blockitem was hit in text: Offset: {0}, Length: {1}", matchResult.Offset, matchResult.Length); + Console.WriteLine("BlocklistName: {0}, BlockItemId: {1}, BlockItemText: {2}, ", matchResult.BlocklistName, matchResult.BlockItemId, matchResult.BlockItemText); + } +} +``` ++1. Replace `<endpoint>` with your endpoint URL. +1. Replace `<enter_your_key_here>` with your key. +1. Replace `<your_list_id>` with the ID value you used in the list creation step. +1. Replace the `request` input text with whatever text you want to analyze. +1. Run the script. + #### [Python](#tab/python) Create a new Python script and open it in your preferred editor or IDE. Paste in the following code. def analyze_text_with_blocklists(name, text): AnalyzeTextOptions(text=text, blocklist_names=[name], break_by_blocklists=False) ) except HttpResponseError as e:- print("Analyze text failed.") - print("Error code: {}".format(e.error.code)) - print("Error message: {}".format(e.error.message)) - return None - except Exception as e: + print("\nAnalyze text failed: ") + if e.error: + print(f"Error code: {e.error.code}") + print(f"Error message: {e.error.message}") + raise print(e)- return None + raise return response.blocklists_match_results The status code should be `200` and the response body should look like this: } ``` +#### [C#](#tab/csharp) ++Create a new C# console app and open it in your preferred editor or IDE. Paste in the following code. 
++```csharp +string endpoint = "<endpoint>"; +string key = "<enter_your_key_here>"; +ContentSafetyClient client = new ContentSafetyClient(new Uri(endpoint), new AzureKeyCredential(key)); ++var blocklistName = "<your_list_id>"; ++var allBlockitems = client.GetTextBlocklistItems(blocklistName); +Console.WriteLine("\nList BlockItems:"); +foreach (var blocklistItem in allBlockitems) +{ + Console.WriteLine("BlockItemId: {0}, Text: {1}, Description: {2}", blocklistItem.BlockItemId, blocklistItem.Text, blocklistItem.Description); +} +``` ++1. Replace `<endpoint>` with your endpoint URL. +1. Replace `<enter_your_key_here>` with your key. +1. Replace `<your_list_id>` with the ID value you used in the list creation step. +1. Run the script. + #### [Python](#tab/python) Create a new Python script and open it in your preferred editor or IDE. Paste in the following code. def list_block_items(name): response = client.list_text_blocklist_items(blocklist_name=name) return list(response) except HttpResponseError as e:- print("List block items failed.") - print("Error code: {}".format(e.error.code)) - print("Error message: {}".format(e.error.message)) - return None - except Exception as e: + print("\nList block items failed: ") + if e.error: + print(f"Error code: {e.error.code}") + print(f"Error message: {e.error.message}") + raise print(e)- return None + raise if __name__ == "__main__": if __name__ == "__main__": 1. Run the script. - ### Get all blocklists The status code should be `200`. The JSON response looks like this: ] ``` +#### [C#](#tab/csharp) ++Create a new C# console app and open it in your preferred editor or IDE. Paste in the following code. ++```csharp +string endpoint = "<endpoint>"; +string key = "<enter_your_key_here>"; +ContentSafetyClient client = new ContentSafetyClient(new Uri(endpoint), new AzureKeyCredential(key)); +++var blocklists = client.GetTextBlocklists(); +Console.WriteLine("\nList blocklists:"); +foreach (var blocklist in blocklists) +{ + Console.WriteLine("BlocklistName: {0}, Description: {1}", blocklist.BlocklistName, blocklist.Description); +} +``` ++1. Replace `<endpoint>` with your endpoint URL. +1. Replace `<enter_your_key_here>` with your key. +1. Run the script. + #### [Python](#tab/python) Create a new Python script and open it in your preferred editor or IDE. Paste in the following code. def list_text_blocklists(): try: return client.list_text_blocklists() except HttpResponseError as e:- print("List text blocklists failed.") - print("Error code: {}".format(e.error.code)) - print("Error message: {}".format(e.error.message)) - return None - except Exception as e: + print("\nList text blocklists failed: ") + if e.error: + print(f"Error code: {e.error.code}") + print(f"Error message: {e.error.message}") + raise print(e)- return None + raise if __name__ == "__main__": # list blocklists result = list_text_blocklists() The status code should be `200`. The JSON response looks like this: } ``` +#### [C#](#tab/csharp) ++Create a new C# console app and open it in your preferred editor or IDE. Paste in the following code. 
++```csharp
+string endpoint = "<endpoint>";
+string key = "<enter_your_key_here>";
+ContentSafetyClient client = new ContentSafetyClient(new Uri(endpoint), new AzureKeyCredential(key));
++var blocklistName = "<your_list_id>";
++var getBlocklist = client.GetTextBlocklist(blocklistName);
+if (getBlocklist != null && getBlocklist.Value != null)
+{
+    Console.WriteLine("\nGet blocklist:");
+    Console.WriteLine("BlocklistName: {0}, Description: {1}", getBlocklist.Value.BlocklistName, getBlocklist.Value.Description);
+}
+```
+1. Replace `<endpoint>` with your endpoint URL.
+1. Replace `<enter_your_key_here>` with your key.
+1. Replace `<your_list_id>` with the ID value you used in the list creation step.
+1. Run the script.
+
 #### [Python](#tab/python)  Create a new Python script and open it in your preferred editor or IDE. Paste in the following code.  def get_text_blocklist(name):     try:         return client.get_text_blocklist(blocklist_name=name)     except HttpResponseError as e:-        print("Get text blocklist failed.")
-        print("Error code: {}".format(e.error.code))
-        print("Error message: {}".format(e.error.message))
-        return None
-    except Exception as e:
+        print("\nGet text blocklist failed: ")
+        if e.error:
+            print(f"Error code: {e.error.code}")
+            print(f"Error message: {e.error.message}")
+            raise
         print(e)-        return None
+        raise   if __name__ == "__main__":     blocklist_name = "<your_list_id>" The status code should be `200`. The JSON response looks like this: } ``` +#### [C#](#tab/csharp) ++Create a new C# console app and open it in your preferred editor or IDE. Paste in the following code. 
++```csharp
+string endpoint = "<endpoint>";
+string key = "<enter_your_key_here>";
+ContentSafetyClient client = new ContentSafetyClient(new Uri(endpoint), new AzureKeyCredential(key));
++var blocklistName = "<your_list_id>";
++var getBlockItemId = "<your_block_item_id>"; // the BlockItemId returned when the item was added
+var getBlockItem = client.GetTextBlocklistItem(blocklistName, getBlockItemId);
+Console.WriteLine("\nGet BlockItem:");
+Console.WriteLine("BlockItemId: {0}, Text: {1}, Description: {2}", getBlockItem.Value.BlockItemId, getBlockItem.Value.Text, getBlockItem.Value.Description);
+```
++1. Replace `<endpoint>` with your endpoint URL.
+1. Replace `<enter_your_key_here>` with your key.
+1. Replace `<your_list_id>` with the ID value you used in the list creation step.
+1. Replace `<your_block_item_id>` with the BlockItemId of a previously added item, as returned by the add blockItems step.
+1. Run the script.
+
 #### [Python](#tab/python)  Create a new Python script and open it in your preferred editor or IDE. Paste in the following code.  curl --location --request DELETE '<endpoint>/contentsafety/text/blocklists/<your The response code should be `204`. +#### [C#](#tab/csharp) ++Create a new C# console app and open it in your preferred editor or IDE. Paste in the following code. 
++```csharp
+string endpoint = "<endpoint>";
+string key = "<enter_your_key_here>";
+ContentSafetyClient client = new ContentSafetyClient(new Uri(endpoint), new AzureKeyCredential(key));
++var blocklistName = "<your_list_id>";
++var removeBlockItemId = "<your_block_item_id>"; // the BlockItemId returned when the item was added
+var removeBlockItemIds = new List<string> { removeBlockItemId };
+var removeResult = client.RemoveBlockItems(blocklistName, new RemoveBlockItemsOptions(removeBlockItemIds));
++if (removeResult != null && removeResult.Status == 204)
+{
+    Console.WriteLine("\nBlockItem removed: {0}.", removeBlockItemId);
+}
+```
++1. Replace `<endpoint>` with your endpoint URL.
+1. Replace `<enter_your_key_here>` with your key.
+1. 
Replace `<your_list_id>` with the ID value you used in the list creation step. +1. Replace `<your_block_item_id>` with the BlockItemId of a previously added item, as returned by the add blockItems step. +1. Run the script. + #### [Python](#tab/python)  Create a new Python script and open it in your preferred editor or IDE. Paste in the following code.  curl --location --request DELETE '<endpoint>/contentsafety/text/blocklists/<your The response code should be `204`. +#### [C#](#tab/csharp) ++Create a new C# console app and open it in your preferred editor or IDE. Paste in the following code. ++```csharp
+string endpoint = "<endpoint>";
+string key = "<enter_your_key_here>";
+ContentSafetyClient client = new ContentSafetyClient(new Uri(endpoint), new AzureKeyCredential(key));
++var blocklistName = "<your_list_id>";
++var deleteResult = client.DeleteTextBlocklist(blocklistName);
+if (deleteResult != null && deleteResult.Status == 204)
+{
+    Console.WriteLine("\nDeleted blocklist.");
+}
+```
++1. Replace `<endpoint>` with your endpoint URL.
+1. Replace `<enter_your_key_here>` with your key.
+1. Replace `<your_list_id>` with the ID value you used in the list creation step.
+1. Run the script.
+
 #### [Python](#tab/python)  Create a new Python script and open it in your preferred editor or IDE. Paste in the following code. |
cognitive-services | Quickstart Image | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/content-safety/quickstart-image.md | Get started with the Content Studio, REST API, or client SDKs to do basic image ::: zone-end +++ ::: zone pivot="programming-language-python" [!INCLUDE [Python SDK quickstart](./includes/quickstarts/python-quickstart-image.md)] |
cognitive-services | Quickstart Text | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/content-safety/quickstart-text.md | Get started with the Content Safety Studio, REST API, or client SDKs to do basic ::: zone-end +++ ::: zone pivot="programming-language-python" [!INCLUDE [Python SDK quickstart](./includes/quickstarts/python-quickstart-text.md)] |
cognitive-services | Prebuilt Component Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/prebuilt-component-reference.md | The following prebuilt components are available in Conversational Language Under | Quantity.Dimension | Special dimensions such as length, distance, volume, area, and speed. For example: "two miles", "650 square kilometers", "35 km/h" | English, Chinese, French, Italian, German, Brazilian Portuguese, Spanish | | Quantity.Temperature | A temperature in Celsius or Fahrenheit. For example: "32F", "34 degrees celsius", "2 deg C" | English, Chinese, French, Italian, German, Brazilian Portuguese, Spanish | | Quantity.Currency | Monetary amounts including currency. For example "1000.00 US dollars", "£20.00", "$67.5B" | English, Chinese, French, Italian, German, Brazilian Portuguese, Spanish |+| Quantity.NumberRange | A numeric interval. For example: "between 25 and 35" | English, Chinese, French, Italian, German, Brazilian Portuguese, Spanish | | Datetime | Dates and times. For example: "June 23, 1976", "7 AM", "6:49 PM", "Tomorrow at 7 PM", "Next Week" | English, Chinese, French, Italian, German, Brazilian Portuguese, Spanish |+| Person.Name | The name of an individual. For example: "Joe", "Ann" | English, Chinese, French, Italian, German, Brazilian Portuguese, Spanish | | Email | Email Addresses. For example: "user@contoso.com", "user_name@contoso.com", "user.name@contoso.com" | English, Chinese, French, Italian, German, Brazilian Portuguese, Spanish | | Phone Number | US Phone Numbers. For example: "123-456-7890", "+1 123 456 7890", "(123)456-7890" | English, Chinese, French, Italian, German, Brazilian Portuguese, Spanish | | URL | Website URLs and Links. | English, Chinese, French, Italian, German, Brazilian Portuguese, Spanish |+| General.Organization | Companies and corporations. For example: "Microsoft" | English, Chinese, French, Italian, German, Brazilian Portuguese, Spanish | +| Geography.Location | The name of a location. For example: "Tokyo" | English, Chinese, French, Italian, German, Brazilian Portuguese, Spanish | +| IP Address | An IP address. For example: "192.168.0.4" | English, Chinese, French, Italian, German, Brazilian Portuguese, Spanish | + ## Prebuilt components in multilingual projects -In multilingual conversation projects, you can enable any of the prebuilt components. The component will only be predicted if the language of the query is supported by the prebuilt. The language is either specified in the request or defaults to the primary language of the application if not provided. +In multilingual conversation projects, you can enable any of the prebuilt components. The component is only predicted if the language of the query is supported by the prebuilt entity. The language is either specified in the request or defaults to the primary language of the application if not provided. ## Next steps [Entity components](concepts/entity-components.md) + |
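To illustrate the language behavior described above, here's a hedged Python sketch of a runtime request to a conversational language understanding deployment, showing where the optional `language` field sits on the conversation item. The project name, deployment name, and `api-version` value are illustrative assumptions; check the current API reference for the exact version to use.

```python
import requests

endpoint = "<your-language-resource-endpoint>"
key = "<your-key>"

# The api-version below is an assumption; confirm it against the current API reference.
url = f"{endpoint}/language/:analyze-conversations?api-version=2022-10-01-preview"
body = {
    "kind": "Conversation",
    "analysisInput": {
        "conversationItem": {
            "id": "1",
            "participantId": "1",
            "text": "book two tickets between 25 and 35 dollars",
            "language": "en",  # optional; omit to default to the project's primary language
        }
    },
    "parameters": {
        "projectName": "<project-name>",
        "deploymentName": "<deployment-name>",
    },
}

response = requests.post(url, headers={"Ocp-Apim-Subscription-Key": key}, json=body)
print(response.json())
```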
cognitive-services | Models | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/models.md | |
communication-services | Network Diagnostic | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/developer-tools/network-diagnostic.md | The **Network Diagnostics Tool** enables Azure Communication Services developers  -As part of the diagnostics performed, the user is asked to enable permissions for the tool to access their devices. Next, the user is asked to record their voice, which is then played back using an echo bot to ensure that the microphone is working. The tool finally, performs a video test. The test uses the camera to detect video and measure the quality for sent and received frames. +As part of the diagnostics performed, the user is asked to enable permissions for the tool to access their devices. Next, the tool performs an audio and video test to measure the network conditions for audio and video.  If you're looking to build your own Network Diagnostic Tool or to perform deeper integration of this tool into your application, you can use the [pre-call diagnostic APIs](../voice-video-calling/pre-call-diagnostics.md) for the calling SDK. |
communication-services | Trial Phone Numbers Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/trial-phone-numbers-faq.md | While the trial phone number itself is provided at no cost during the trial peri Verifying the recipient phone number is a security measure that ensures the trial phone number can only make calls to the verified number. This helps protect against misuse and unauthorized usage of trial phone numbers. ### How is the recipient phone number verified?-The verification process involves sending a one-time passcode via SMS to the recipient phone number. The recipient needs to enter this code in the Azure portal to complete the verification. +The verification process involves sending a one-time passcode via SMS to the recipient phone number. The recipient needs to enter this code in the Azure portal to complete the verification. ++### From where can I verify phone numbers? +Currently, only phone numbers that originate from the United States (that is, numbers with a +1 prefix) can be verified for use with trial phone numbers. ### Can I verify multiple recipient phone numbers for the same trial phone number? Currently, the trial phone number can be verified for up to three recipient phone numbers. If you need to make calls to more numbers, then you'll need to [purchase a phone number](../../quickstarts/telephony/get-phone-number.md) |
communication-services | Get Started With Video Calling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/get-started-with-video-calling.md | Last updated 06/30/2021 -zone_pivot_groups: acs-plat-web-ios-android-windows +zone_pivot_groups: acs-plat-web-ios-android-windows-unity |
communication-services | Getting Started With Calling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/getting-started-with-calling.md | Last updated 06/30/2021 -zone_pivot_groups: acs-plat-web-ios-android-windows +zone_pivot_groups: acs-plat-web-ios-android-windows-unity Get started with Azure Communication Services by using the Communication Service [!INCLUDE [Calling with iOS](./includes/get-started/get-started-ios.md)] ::: zone-end + ## Clean up resources If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../create-communication-resource.md#clean-up-resources). |
communication-services | Chat Hero Sample | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/samples/chat-hero-sample.md | In this Sample quickstart, we'll learn how the sample works before we run the sa ## Overview -The sample has both a client-side application and a server-side application. The **client-side application** is a React/Redux web application that uses Microsoft's Fluent UI framework. This application sends requests to an ASP.NET Core **server-side application** that helps the client-side application connect to Azure. +The sample has both a client-side application and a server-side application. The **client-side application** is a React/Redux web application that uses Microsoft's Fluent UI framework. This application sends requests to a Node.js **server-side application** that helps the client-side application connect to Azure. Here's what the sample looks like: |
connectors | Connectors Create Api Servicebus | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-servicebus.md | The Service Bus connector has different versions, based on [logic app workflow t For more information about managed identities, review [Authenticate access to Azure resources with managed identities in Azure Logic Apps](../logic-apps/create-managed-service-identity.md). -* By default, the Service Bus built-in connector operations are stateless. To run these operations in stateful mode, see [Enable stateful mode for stateless built-in connectors](../connectors/enable-stateful-affinity-built-in-connectors.md). +* By default, the Service Bus built-in connector operations are *stateless*. To run these operations in stateful mode, see [Enable stateful mode for stateless built-in connectors](../connectors/enable-stateful-affinity-built-in-connectors.md). ## Considerations for Azure Service Bus operations The Service Bus connector has different versions, based on [logic app workflow t [!INCLUDE [Warning about creating infinite loops](../../includes/connectors-infinite-loops.md)] -### Peek-lock --In Standard logic app workflows, peek-lock operations are available only for *stateless* workflows, not stateful workflows. - ### Limit on saved sessions in connector cache Per [Service Bus messaging entity, such as a subscription or topic](../service-bus-messaging/service-bus-queues-topics-subscriptions.md), the Service Bus connector can save up to 1,500 unique sessions at a time to the connector cache. If the session count exceeds this limit, old sessions are removed from the cache. For more information, see [Message sessions](../service-bus-messaging/message-sessions.md). |
cosmos-db | Analytical Store Change Data Capture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/analytical-store-change-data-capture.md | In addition to providing incremental data feed from analytical store to diverse targets, - Changes can be synchronized "from the beginning", "from a given timestamp", or "from now" - There's no limitation around the fixed data retention period for which changes are available +## Efficient incremental data capture with internally managed checkpoints ++Each change in the Cosmos DB container appears exactly once in the CDC feed, and the checkpoints are managed internally for you. This addresses the following disadvantages of the common pattern of using custom checkpoints based on the `_ts` value: ++ * The `_ts` filter is applied against the data files, which doesn't always guarantee a minimal data scan. The internally managed, GLSN-based checkpoints in the new CDC capability ensure that incremental data identification is based on metadata alone, which guarantees minimal data scanning in each stream. ++* The analytical store sync process doesn't guarantee `_ts`-based ordering, which means there could be cases where an incremental record's `_ts` is less than the last checkpointed `_ts`, so the record could be missed in the incremental stream. The new CDC doesn't use `_ts` to identify incremental records and thus guarantees that none of the incremental records are missed. + ## Features  Change data capture in Azure Cosmos DB analytical store supports the following key features. |
cosmos-db | Local Emulator | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/local-emulator.md | Because the Azure Cosmos DB Emulator provides an emulated environment that runs * The emulator is not a scalable service and it doesn't support a large number of containers. When using the Azure Cosmos DB Emulator, by default, you can create up to 25 fixed size containers at 400 RU/s (only supported using Azure Cosmos DB SDKs), or 5 unlimited containers. For more information on how to change this value, see the [Set the PartitionCount value](emulator-command-line-parameters.md#change-the-number-of-default-containers) article. -* The emulator does not offer different [Azure Cosmos DB consistency levels](consistency-levels.md) like the cloud service does. +* The emulator does not offer all of the [Azure Cosmos DB consistency levels](consistency-levels.md) that the cloud service does; only [*Session*](consistency-levels.md#session-consistency) and [*Strong*](consistency-levels.md#strong-consistency) consistency are supported. The default consistency level is *Session*, which can be changed using [command-line parameters](emulator-command-line-parameters.md). * The emulator does not offer [multi-region replication](distribute-data-globally.md). |
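As a quick illustration of the consistency behavior described above, here's a minimal sketch (not the emulator docs' own sample) using the `azure-cosmos` Python SDK to connect to the emulator and explicitly request one of the two supported levels. The endpoint and key are the emulator's well-known documented defaults; TLS verification is relaxed only because the emulator uses a self-signed certificate.

```python
from azure.cosmos import CosmosClient

# Well-known defaults for the local emulator.
EMULATOR_ENDPOINT = "https://localhost:8081/"
EMULATOR_KEY = (
    "C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw=="
)

# Session is the emulator default; Strong is the only other supported level.
client = CosmosClient(
    EMULATOR_ENDPOINT,
    credential=EMULATOR_KEY,
    consistency_level="Session",
    connection_verify=False,  # the emulator ships a self-signed certificate
)
print([db["id"] for db in client.list_databases()])
```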
cosmos-db | Vector Search | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/vector-search.md | This command creates a `vector-ivf` index against the `vectorContent` property i ### Add vectors to your database -To add vectors to your database's collection, you first need to create the embeddings by using your own model, [Azure OpenAI Embeddings](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/cognitive-services/openai/tutorials/embeddings.md), or another API (such as [Hugging Face on Azure](https://azure.microsoft.com/solutions/hugging-face-on-azure/)). In this example, new documents are added through sample embeddings: +To add vectors to your database's collection, you first need to create the embeddings by using your own model, [Azure OpenAI Embeddings](../../../cognitive-services/openai/tutorials/embeddings.md), or another API (such as [Hugging Face on Azure](https://azure.microsoft.com/solutions/hugging-face-on-azure/)). In this example, new documents are added through sample embeddings: ```javascript db.exampleCollection.insertMany([ |
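The shell example above inserts hard-coded sample embeddings; in practice you generate the vector first and then insert it. Here's a hedged Python sketch of the same idea using `pymongo`. `get_embedding` is a hypothetical placeholder for whatever embedding model or API you use (for example, Azure OpenAI), and the connection string, database, and collection names are yours to supply.

```python
from pymongo import MongoClient

client = MongoClient("<your_vcore_connection_string>")
collection = client["exampledb"]["exampleCollection"]

def get_embedding(text: str) -> list[float]:
    """Placeholder: call your embedding model or API here (for example, Azure OpenAI)."""
    raise NotImplementedError

text = "winter clothing"
doc = {
    "name": text,
    # The field name must match the property covered by your vector index.
    "vectorContent": get_embedding(text),
}
collection.insert_one(doc)
```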
cosmos-db | Concepts Backup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/concepts-backup.md | Title: Backup and restore – Azure Cosmos DB for PostgreSQL description: Protecting data from accidental corruption or deletion--++ Previously updated : 04/14/2021 Last updated : 07/10/2023 # Backup and restore in Azure Cosmos DB for PostgreSQL Last updated 04/14/2021 Azure Cosmos DB for PostgreSQL automatically creates backups of each node and stores them in locally redundant storage. Backups can-be used to restore your cluster to a specified time. +be used to restore your cluster to a specified point in time (point-in-time restore, or PITR). Backup and restore are an essential part of any business continuity strategy because they protect your data from accidental corruption or deletion. ## Backups -At least once a day, Azure Cosmos DB for PostgreSQL takes snapshot backups of -data files and the database transaction log. The backups allow you to restore a +An automated process backs up each Azure Cosmos DB for PostgreSQL node from the moment your cluster is provisioned and throughout the cluster's lifecycle. Azure Cosmos DB for PostgreSQL takes periodic disk snapshots and combines them with streaming of the node's [WAL files](https://www.postgresql.org/docs/current/wal-intro.html) to Azure Blob Storage. ++The backups allow you to restore a server to any point in time within the retention period. (The retention period is currently 35 days for all clusters.) All backups are encrypted using AES 256-bit encryption. -In Azure regions that support availability zones, backup snapshots are stored +In Azure regions that support availability zones, backup snapshots and WAL files are stored in three availability zones. As long as at least one availability zone is online, the cluster is restorable. the last 35 days. Point-in-time restore is useful in multiple scenarios. For example, when a user accidentally deletes data, drops an important table or database, or if an application accidentally overwrites good data with bad data. +> [!NOTE] +> While cluster backups are always stored for 35 days, you may need to +> open a support request to restore the cluster to a point that is earlier +> than the latest failover time. ++When all nodes are up and running, you can restore the cluster without any data loss. In the extremely rare case of a node experiencing a catastrophic event (and [high availability](./concepts-high-availability.md) isn't enabled on the cluster), you may lose up to 5 minutes of data. + > [!IMPORTANT] > Deleted clusters can't be restored. If you delete the > cluster, all nodes that belong to the cluster are deleted and can't subscription, and resource group as the original. The cluster has the original's configuration: the same number of nodes, number of vCores, storage size, user roles, PostgreSQL version, and version of the Citus extension. -Firewall settings and PostgreSQL server parameters are not preserved from the -original cluster, they are reset to default values. The firewall will -prevent all connections. You will need to manually adjust these settings after -restore. In general, see our list of suggested [post-restore -tasks](howto-restore-portal.md#post-restore-tasks). +Networking settings aren't preserved from the original cluster; they're reset to default values. You'll need to manually adjust these settings after restore to allow access to the restored cluster. 
In general, see our list of suggested [post-restore tasks](howto-restore-portal.md#post-restore-tasks). ++In most cases, cluster restore takes up to 1 hour. ## Next steps |
cosmos-db | Howto Restore Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-restore-portal.md | recoveries](concepts-backup.md#restore) for a cluster using backups. You can restore either to the earliest backup or to a custom restore point within your retention period. -> [!IMPORTANT] -> If the **Restore** option isn't present for your cluster, open an Azure support request to restore your cluster. - ## Restore to the earliest restore point Follow these steps to restore your cluster to its back up and running: * If the new server is meant to replace the original server, redirect clients and client applications to the new server-* Ensure an appropriate server-level firewall is in place for - users to connect. These rules aren't copied from the original cluster. -* Adjust PostgreSQL server parameters as needed. The parameters aren't copied - from the original cluster. -* Ensure appropriate logins and database level permissions are in place. -* Configure alerts, as appropriate. +* Ensure appropriate [networking settings for private or public access](./concepts-security-overview.md#network-security) are in place for + users to connect. These settings aren't copied from the original cluster. +* Ensure appropriate [logins](./howto-create-users.md) and database level permissions are in place. +* Configure [alerts](./howto-alert-on-metric.md#suggested-alerts), as appropriate. ## Next steps |
cosmos-db | Provision Throughput Autoscale | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/provision-throughput-autoscale.md | Use the [Azure portal](how-to-provision-autoscale-throughput.md#enable-autoscale For any value of `Tmax`, the database or container can store a total of `0.1 * Tmax GB`. After this amount of storage is reached, the maximum RU/s will be automatically increased based on the new storage value, with no impact to your application. -For example, if you start with a maximum RU/s of 50,000 RU/s (scales between 5000 - 50,000 RU/s), you can store up to 5000 GB of data. If you exceed 500 GB - e.g. storage is now 6000 GB, the new maximum RU/s will be 60,000 RU/s (scales between 6000 - 60,000 RU/s). +For example, if you start with a maximum RU/s of 50,000 RU/s (scales between 5000 - 50,000 RU/s), you can store up to 5000 GB of data. If you exceed 5000 GB - e.g. storage is now 6000 GB, the new maximum RU/s will be 60,000 RU/s (scales between 6000 - 60,000 RU/s). When you use database level throughput with autoscale, you can have the first 25 containers share an autoscale maximum RU/s of 1000 (scales between 100 - 1000 RU/s), as long as you don't exceed 100 GB of storage. See this [documentation](autoscale-faq.yml#can-i-change-the-maximum-ru-s-on-a-database-or-container--) for more information. |
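The storage-based scale-up rule in the corrected example above is simple arithmetic: the maximum RU/s must be at least 10 times the storage in GB (equivalently, a maximum of `Tmax` RU/s covers `0.1 * Tmax` GB). A tiny sketch, ignoring any service-side rounding or increments that this doesn't model:

```python
def autoscale_max_rus(configured_max_rus: int, storage_gb: float) -> int:
    """Return the effective autoscale maximum RU/s given current storage."""
    required_by_storage = int(storage_gb * 10)  # inverse of the 0.1 * Tmax GB allowance
    return max(configured_max_rus, required_by_storage)

assert autoscale_max_rus(50_000, 5_000) == 50_000  # within the 5000 GB allowance
assert autoscale_max_rus(50_000, 6_000) == 60_000  # raised automatically, as in the example
```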
data-factory | Concepts Nested Activities | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-nested-activities.md | Your pipeline canvas will then switch to the context of the inner activity conta :::image type="content" source="media/concepts-pipelines-activities/nested-activity-breadcrumb.png" alt-text="Screenshot showing an example If Condition activity inside the true branch with a highlight on the breadcrumb to navigate back to the parent pipeline."::: ## Nested activity embedding limitations-Activities that support nesting (ForEach, Until, Switch, and If Condition) can't be embedded inside of another nested activity. Essentially, the current support for nesting is one level deep. See the best practices section below on how to use other pipeline activities to enable this scenario. In addition, the +The activities that support nesting (ForEach, Until, Switch, and If Condition) have constraints on embedding another nested activity. Specifically: ++- If and Switch can be used inside ForEach or Until activities. +- If and Switch can't be used inside If and Switch activities. +- ForEach or Until support only a single level of nesting. ++See the best practices section below on how to use other pipeline activities to enable this scenario. In addition, the [Validation Activity](control-flow-validation-activity.md) can't be placed inside of a nested activity. + ## Best practices for multiple levels of nested activities In order to have logic that supports nesting more than one level deep, you can use the [Execute Pipeline Activity](control-flow-execute-pipeline-activity.md) inside of your nested activity to call another pipeline that then can have another level of nested activities. A common use case for this pattern is with the ForEach loop where you need to additionally loop based on logic in the inner activities. |
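To make the Execute Pipeline pattern concrete, here's a hedged sketch of the pipeline JSON shape expressed as a Python dict. The pipeline and activity names are illustrative; the `ForEach`/`ExecutePipeline` activity types and the `PipelineReference` structure follow the documented pipeline schema.

```python
# Outer pipeline: a ForEach whose only inner activity calls a child pipeline,
# so the child can host its own nested If/Switch/ForEach logic.
outer_pipeline = {
    "name": "OuterPipeline",
    "properties": {
        "parameters": {"itemList": {"type": "Array"}},
        "activities": [
            {
                "name": "LoopOverItems",
                "type": "ForEach",
                "typeProperties": {
                    "items": {
                        "value": "@pipeline().parameters.itemList",
                        "type": "Expression",
                    },
                    "activities": [
                        {
                            "name": "CallInnerPipeline",
                            "type": "ExecutePipeline",
                            "typeProperties": {
                                "pipeline": {
                                    "referenceName": "InnerPipelineWithNestedLogic",
                                    "type": "PipelineReference",
                                },
                                "waitOnCompletion": True,
                            },
                        }
                    ],
                },
            }
        ],
    },
}
```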
data-factory | Create Self Hosted Integration Runtime | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/create-self-hosted-integration-runtime.md | Installation of the self-hosted integration runtime on a domain controller isn't - You must be an administrator on the machine to successfully install and configure the self-hosted integration runtime. - Copy-activity runs happen with a specific frequency. Processor and RAM usage on the machine follows the same pattern with peak and idle times. Resource usage also depends heavily on the amount of data that is moved. When multiple copy jobs are in progress, you see resource usage go up during peak times. - Tasks might fail during extraction of data in Parquet, ORC, or Avro formats. For more on Parquet, see [Parquet format in Azure Data Factory](./format-parquet.md#using-self-hosted-integration-runtime). File creation runs on the self-hosted integration machine. To work as expected, file creation requires the following prerequisites:-   - [Visual C++ 2010 Redistributable](https://download.microsoft.com/download/3/2/2/3224B87F-CFA0-4E70-BDA3-3DE650EFEBA5/vcredist_x64.exe) Package (x64)    - Java Runtime (JRE) version 11 from a JRE provider such as [Microsoft OpenJDK 11](https://aka.ms/download-jdk/microsoft-jdk-11.0.19-windows-x64.msi) or [Eclipse Temurin 11](https://adoptium.net/temurin/releases/?version=11). Ensure that the *JAVA_HOME* system environment variable is set to the JDK folder (not just the JRE folder). You may also need to add the bin folder to your system's PATH environment variable.  >[!NOTE] >It might be necessary to adjust the Java settings if memory errors occur, as described in the [Parquet format](./format-parquet.md#using-self-hosted-integration-runtime) documentation. |
data-factory | Sap Change Data Capture Prepare Linked Service Source Dataset | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/sap-change-data-capture-prepare-linked-service-source-dataset.md | To set up an SAP CDC linked service: 1. In **Name**, enter a unique name for the linked service. 1. In **Connect via integration runtime**, select your self-hosted integration runtime. 1. In **Server name**, enter the mapped server name for your SAP system.-   1. In **Subscriber name**, enter a unique name to register and identify this Data Factory connection as a subscriber that consumes data packages that are produced in the Operational Delta Queue (ODQ) by your SAP system. For example, you might name it `<your data factory -name>_<your linked service name>`. Make sure to only use upper case letters. +   1. In **Subscriber name**, enter a unique name to register and identify this Data Factory connection as a subscriber that consumes data packages that are produced in the Operational Delta Queue (ODQ) by your SAP system. For example, you might name it `<YOUR_DATA_FACTORY_NAME>_<YOUR_LINKED_SERVICE_NAME>`. Make sure to use only uppercase letters. Also be sure that the total character count doesn't exceed 32 characters, or SAP will truncate the name. This can be an issue if your factory and linked services both have long names. Make sure you assign a unique subscriber name to every linked service connecting to the same SAP system. This will make monitoring and troubleshooting on the SAP side much easier. |
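A small validation sketch for the subscriber-name constraints above: at most 32 characters (SAP truncates longer names) and uppercase letters, plus the underscore separator used in the suggested pattern. Treat the exact allowed character set as an assumption; the article only calls out uppercase letters and the length limit.

```python
def is_valid_subscriber_name(name: str) -> bool:
    """Check the constraints called out above: <= 32 chars, uppercase letters (plus '_')."""
    if not name or len(name) > 32:
        return False
    return all(c.isupper() or c == "_" for c in name)

print(is_valid_subscriber_name("MYFACTORY_MYLINKEDSERVICE"))   # True
print(is_valid_subscriber_name("MyFactory_MyLinkedService"))   # False: lowercase letters
print(is_valid_subscriber_name("A" * 33))                      # False: SAP would truncate it
```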
data-factory | Store Credentials In Key Vault | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/store-credentials-in-key-vault.md | To reference a credential stored in Azure Key Vault, you need to: 1. **Retrieve data factory managed identity** by copying the value of "Managed Identity Object ID" generated along with your factory. If you use ADF authoring UI, the managed identity object ID will be shown on the Azure Key Vault linked service creation window; you can also retrieve it from Azure portal, refer to [Retrieve data factory managed identity](data-factory-service-identity.md#retrieve-managed-identity). 2. **Grant the managed identity access to your Azure Key Vault.** In your key vault -> Access policies -> Add Access Policy, search this managed identity to grant **Get** and **List** permissions in the Secret permissions dropdown. It allows this designated factory to access secret in key vault. 3. **Create a linked service pointing to your Azure Key Vault.** Refer to [Azure Key Vault linked service](#azure-key-vault-linked-service).-4. **Create data store linked service, inside which reference the corresponding secret stored in key vault.** Refer to [reference secret stored in key vault](#reference-secret-stored-in-key-vault). +4. **Create the data store linked service. In its configuration, reference the corresponding secret stored in Azure Key Vault.** Refer to [Reference a secret stored in Azure Key Vault](#reference-secret-stored-in-key-vault). ## Azure Key Vault linked service |
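For step 4, the Key Vault reference has a well-known shape inside the linked service definition. Here's a hedged sketch expressed as a Python dict; the data store type and names are illustrative placeholders, while `AzureKeyVaultSecret` and the `LinkedServiceReference` structure are the documented reference shape.

```python
# Example: an Azure SQL Database linked service whose connection string is
# resolved at runtime from a secret stored in Azure Key Vault.
linked_service_properties = {
    "type": "AzureSqlDatabase",  # illustrative data store type
    "typeProperties": {
        "connectionString": {
            "type": "AzureKeyVaultSecret",
            "store": {
                "referenceName": "<your Azure Key Vault linked service name>",
                "type": "LinkedServiceReference",
            },
            "secretName": "<name of the secret in Key Vault>",
        }
    },
}
```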
data-factory | Transform Data Using Databricks Notebook | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data-using-databricks-notebook.md | Last updated 04/04/2023 # Run a Databricks notebook with the Databricks Notebook Activity in Azure Data Factory In this tutorial, you use the Azure portal to create an Azure Data Factory pipeline that executes a Databricks notebook against the Databricks jobs cluster. It also passes Azure Data Factory parameters to the Databricks notebook during execution. |
data-factory | Whats New Archive | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/whats-new-archive.md | This archive page retains updates from older months. Check out our [What's New video archive](https://www.youtube.com/playlist?list=PLt4mCx89QIGS1rQlNt2-7iuHHAKSomVLv) for all of our monthly update +## November 2022 + +### Data flow ++- Incremental only is available in SAP CDC - get changes only from the SAP system without an initial full load [Learn more](connector-sap-change-data-capture.md?tabs=data-factory#mapping-data-flow-properties) +- Source partitions in initial full data load of SAP CDC to improve performance [Learn more](connector-sap-change-data-capture.md?tabs=data-factory#mapping-data-flow-properties) +- A new pipeline template - Load multiple objects with large data volumes from SAP via SAP CDC [Learn more](solution-template-replicate-multiple-objects-sap-cdc.md?tabs=data-factory) ++### Data Movement +- Support for Azure Databricks through private link from a Data Factory managed virtual network [Learn more](managed-virtual-network-private-endpoint.md?tabs=data-factory#supported-data-sources-and-services) ++### User Interface +Three pipeline designer enhancements added to the ADF Studio preview experience: +- Dynamic content flyout - make it easier to set dynamic content in your pipeline activities without using the expression builder [Learn more](how-to-manage-studio-preview-exp.md?tabs=data-factory#dynamic-content-flyout) +- Error message relocation to status column - make it easier for you to view errors when you see a Failed pipeline run [Learn more](how-to-manage-studio-preview-exp.md?tabs=data-factory#error-message-relocation-to-status-column) +- Container view - in the Author tab, you can change the pipeline output view from list to container [Learn more](how-to-manage-studio-preview-exp.md?tabs=data-factory#container-view) ++### Continuous integration and continuous deployment ++In the auto publish config, a disable publish button is available to avoid overwriting the last automated publish deployment [Learn more](source-control.md?tabs=data-factory#editing-repo-settings) + ## October 2022 ### Video summary |
data-factory | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/whats-new.md | This page is updated monthly, so revisit it regularly. For older months' update Check out our [What's New video archive](https://www.youtube.com/playlist?list=PLt4mCx89QIGS1rQlNt2-7iuHHAKSomVLv) for all of our monthly update videos. +## June 2023 ++### Continuous integration and continuous deployment ++The NPM package now supports a pre-downloaded bundle for building ARM templates. If your firewall settings block direct download of the NPM package, you can now preload the package upfront and let the NPM package consume the local version instead. This is a significant boost for CI/CD pipelines in firewalled environments. ++### Region expansion ++Azure Data Factory is now available in Sweden Central. You can co-locate your ETL workflow in this new region if you are utilizing the region for storing and managing your modern data warehouse. [Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/continued-region-expansion-azure-data-factory-just-became/ba-p/3857249) ++### Data movement ++Securing outbound traffic with Azure Data Factory's outbound network rules is now supported. [Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/securing-outbound-traffic-with-azure-data-factory-s-outbound/ba-p/3844032) ++### Connectors ++The Amazon S3 connector is now supported as a sink destination using Mapping Data Flows. [Learn more](connector-amazon-simple-storage-service.md) ++### Data flow ++We introduced optional source settings for DelimitedText and JSON sources in the top-level CDC resource. The top-level CDC resource in Data Factory now supports optional source configurations for Delimited and JSON sources. You can now select the column/row delimiters for delimited sources and set the document type for JSON sources. 
[Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/introducing-optional-source-settings-for-delimitedtext-and-json/ba-p/3824274) + ## May 2023 ### Data Factory in Microsoft Fabric Express virtual network injection for SSIS in Azure Data Factory is generally av Continued region expansion - Azure Data Factory is now available in China North 3 [Learn more](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=data-factory) -## November 2022 -- -### Data flow --- Incremental only is available in SAP CDC - get changes only from SAP system without initial full load [Learn more](connector-sap-change-data-capture.md?tabs=data-factory#mapping-data-flow-properties)-- Source partitions in initial full data load of SAP CDC to improve performance [Learn more](connector-sap-change-data-capture.md?tabs=data-factory#mapping-data-flow-properties)-- A new pipeline template - Load multiple objects with big amounts from SAP via SAP CDC [Learn more](solution-template-replicate-multiple-objects-sap-cdc.md?tabs=data-factory)--### Data Movement -- Support to Azure Databricks through private link from a Data Factory managed virtual network [Learn more](managed-virtual-network-private-endpoint.md?tabs=data-factory#supported-data-sources-and-services)--### User Interface -3 Pipeline designer enhancements added to ADF Studio preview experience -- Dynamic content flyout - make it easier to set dynamic content in your pipeline activities without using the expression builder [Learn more](how-to-manage-studio-preview-exp.md?tabs=data-factory#dynamic-content-flyout)-- Error message relocation to status column - make it easier for you to view errors when you see a Failed pipeline run [Learn more](how-to-manage-studio-preview-exp.md?tabs=data-factory#error-message-relocation-to-status-column)-- Container view - in Author Tab, Pipeline can change output view from list to container [Learn more](how-to-manage-studio-preview-exp.md?tabs=data-factory#container-view)---### Continuous integration and continuous deployment --In auto publish config, disable publish button is available to void overwriting the last automated publish deployment [Learn more](source-control.md?tabs=data-factory#editing-repo-settings) ## More information |
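The pre-downloaded NPM bundle workflow mentioned in the June 2023 entry above can be scripted. Below is a minimal, hedged Python sketch, not an official procedure: the package name `@microsoft/azure-data-factory-utilities` and the `npm run build validate` entry point come from Data Factory's automated-publish guidance, while the factory resource ID, paths, and the presence of the documented `build` script in your repo's `package.json` are assumptions to adapt.

```python
"""Hedged sketch: stage the ADF build package for a firewalled CI agent.

Assumptions: @microsoft/azure-data-factory-utilities is the documented package;
your repo contains the package.json "build" script from the ADF CI/CD docs.
The factory ID and paths below are placeholders.
"""
import subprocess
from pathlib import Path

PACKAGE = "@microsoft/azure-data-factory-utilities"
FACTORY_ID = ("/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
              "/providers/Microsoft.DataFactory/factories/<factory-name>")


def pack_on_connected_machine(dest: Path) -> Path:
    """Run once on a machine with registry access: download the tarball."""
    out = subprocess.run(["npm", "pack", PACKAGE], cwd=dest,
                         capture_output=True, text=True, check=True)
    # npm pack prints the tarball file name as its last line of output.
    return dest / out.stdout.strip().splitlines()[-1]


def build_from_local_bundle(repo_root: Path, tarball: Path) -> None:
    """Run on the firewalled agent: install from the local tarball, then validate."""
    subprocess.run(["npm", "install", str(tarball)], cwd=repo_root, check=True)
    subprocess.run(["npm", "run", "build", "validate", str(repo_root), FACTORY_ID],
                   cwd=repo_root, check=True)
```

Copy the tarball produced by the first step onto the build agent, then run the second step there; no registry access is needed at build time.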
defender-for-cloud | Alerts Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-reference.md | Microsoft Defender for Servers Plan 2 provides unique detections and alerts, in | **Unusual process execution detected** | Analysis of host data on %{Compromised Host} detected the execution of a process by %{User Name} that was unusual. Accounts such as %{User Name} tend to perform a limited set of operations, this execution was determined to be out of character and may be suspicious. | - | High | | **Unusual user password reset in your virtual machine**<br>(VM_VMAccessUnusualPasswordReset) | An unusual user password reset was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>While this action may be legitimate, attackers can try utilizing the VM Access extension to reset the credentials of a local user in your virtual machine and compromise it. | Credential Access | Medium | | **Unusual user SSH key reset in your virtual machine**<br>(VM_VMAccessUnusualSSHReset) | An unusual user SSH key reset was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>While this action may be legitimate, attackers can try utilizing VM Access extension to reset SSH key of a user account in your virtual machine and compromise it. | Credential Access | Medium |-| **VBScript HTTP object allocation detected** | Creation of a VBScript file using Command Prompt has been detected. The following script contains HTTP object allocation command. This action can be used to download malicious files. | - | High | +| **VBScript HTTP object allocation detected** | Creation of a VBScript file using Command Prompt has been detected. The following script contains an HTTP object allocation command. This action can be used to download malicious files. | - | High | +| **Suspicious installation of GPU extension in your virtual machine (Preview)** <br> (VM_GPUDriverExtensionUnusualExecution) | Suspicious installation of a GPU extension was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers may use the GPU driver extension to install GPU drivers on your virtual machine via the Azure Resource Manager to perform cryptojacking. | Impact | Low | ## <a name="alerts-linux"></a>Alerts for Linux machines Microsoft Defender for Servers Plan 2 provides unique detections and alerts, in |**Unusual execution of custom script extension in your virtual machine**<br>(VM_CustomScriptExtensionUnusualExecution) | Unusual execution of a custom script extension was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>Attackers may use custom script extensions to execute malicious code on your virtual machines via the Azure Resource Manager. | Execution | Medium | |**Unusual user password reset in your virtual machine**<br>(VM_VMAccessUnusualPasswordReset) | An unusual user password reset was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>While this action may be legitimate, attackers can try utilizing the VM Access extension to reset the credentials of a local user in your virtual machine and compromise it. 
| Credential Access | Medium | |**Unusual user SSH key reset in your virtual machine**<br>(VM_VMAccessUnusualSSHReset) | An unusual user SSH key reset was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>While this action may be legitimate, attackers can try utilizing VM Access extension to reset SSH key of a user account in your virtual machine and compromise it. | Credential Access | Medium |+|**Suspicious installation of GPU extension in your virtual machine (Preview)** <br> (VM_GPUDriverExtensionUnusualExecution) | Suspicious installation of a GPU extension was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers may use the GPU driver extension to install GPU drivers on your virtual machine via the Azure Resource Manager to perform cryptojacking. | Impact | Low | ## <a name="alerts-azureappserv"></a>Alerts for Azure App Service |
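Alert types like `VM_GPUDriverExtensionUnusualExecution` in the tables above can also be watched programmatically. A minimal sketch follows, assuming the documented `Microsoft.Security/alerts` ARM endpoint; the api-version, subscription ID, and watched alert types are assumptions or placeholders to adjust.

```python
"""Hedged sketch: list Defender for Cloud alerts and flag selected alert types.
Assumes the Microsoft.Security/alerts ARM endpoint; verify the api-version."""
import requests
from azure.identity import DefaultAzureCredential

SUBSCRIPTION_ID = "<subscription-id>"  # placeholder
WATCHED = {"VM_GPUDriverExtensionUnusualExecution", "VM_VMAccessUnusualSSHReset"}

token = DefaultAzureCredential().get_token("https://management.azure.com/.default")
resp = requests.get(
    f"https://management.azure.com/subscriptions/{SUBSCRIPTION_ID}"
    "/providers/Microsoft.Security/alerts",
    params={"api-version": "2022-01-01"},  # assumed version; check current docs
    headers={"Authorization": f"Bearer {token.token}"},
)
resp.raise_for_status()

# Each alert resource carries its alert type, display name, and severity.
for alert in resp.json().get("value", []):
    props = alert.get("properties", {})
    if props.get("alertType") in WATCHED:
        print(props.get("alertDisplayName"), "|", props.get("severity"))
```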
defender-for-cloud | Connect Azure Subscription | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/connect-azure-subscription.md | Title: Enable Microsoft Defender for Cloud on your Azure subscription -description: Learn how to enable Microsoft Defender for Cloud's enhanced security features. + Title: Connect your Azure subscriptions to Microsoft Defender for Cloud +description: Learn how to connect your Azure subscriptions to Microsoft Defender for Cloud Last updated 07/10/2023 -# Enable Microsoft Defender for Cloud +# Connect your Azure subscriptions In this guide, you'll learn how to enable Microsoft Defender for Cloud on your Azure subscription. |
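Connecting a subscription, as this retitled article describes, is typically followed by turning on one or more Defender plans. A hedged sketch using the `Microsoft.Security/pricings` resource; the plan name and api-version are assumptions to verify against current docs.

```python
"""Hedged sketch: enable a Defender plan on a connected subscription by setting
its pricing tier via the Microsoft.Security/pricings ARM resource."""
import requests
from azure.identity import DefaultAzureCredential

SUBSCRIPTION_ID = "<subscription-id>"  # placeholder
PLAN = "VirtualMachines"  # assumed plan name, e.g. Defender for Servers

token = DefaultAzureCredential().get_token("https://management.azure.com/.default")
resp = requests.put(
    f"https://management.azure.com/subscriptions/{SUBSCRIPTION_ID}"
    f"/providers/Microsoft.Security/pricings/{PLAN}",
    params={"api-version": "2023-01-01"},  # assumed version; check current docs
    headers={"Authorization": f"Bearer {token.token}"},
    json={"properties": {"pricingTier": "Standard"}},  # "Free" disables the plan
)
resp.raise_for_status()
print(resp.json()["properties"]["pricingTier"])
```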
defender-for-cloud | Defender For Sql Usage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-sql-usage.md | Microsoft Defender for SQL servers on machines extends the protections for your - [Connect your GCP project to Microsoft Defender for Cloud](quickstart-onboard-gcp.md) > [!NOTE]- > Enable database protection for your multicloud SQL servers through the [AWS connector](quickstart-onboard-aws.md#connect-your-aws-account) or the [GCP connector](quickstart-onboard-gcp.md#configure-the-databases-plan). + > Enable database protection for your multicloud SQL servers through the [AWS connector](quickstart-onboard-aws.md#connect-your-aws-account) or the [GCP connector](quickstart-onboard-gcp.md#configure-the-defender-for-databases-plan). This plan includes functionality for identifying and mitigating potential database vulnerabilities and detecting anomalous activities that could indicate threats to your databases. |
defender-for-cloud | How To Manage Attack Path | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/how-to-manage-attack-path.md | While you're [investigating and remediating an attack path](#investigate-and-rem 1. Sign in to the [Azure portal](https://portal.azure.com). -1. Navigate to **Microsoft Defender for Cloud** > **Recommendations** > **Attack paths**. +1. Navigate to **Microsoft Defender for Cloud** > **Attack path analysis**. 1. Select an attack path. |
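Beyond the portal navigation above, attack paths surfaced by Defender CSPM can also be enumerated with Azure Resource Graph. The sketch below assumes attack paths are exposed through the `securityresources` table as `microsoft.security/attackpaths`; verify that table and type before relying on it.

```python
"""Hedged sketch: list attack paths via Azure Resource Graph.
The securityresources table and microsoft.security/attackpaths type are
assumptions about Defender CSPM's ARG exposure; verify before use."""
from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest

client = ResourceGraphClient(DefaultAzureCredential())
request = QueryRequest(
    subscriptions=["<subscription-id>"],  # placeholder
    query=(
        "securityresources "
        "| where type == 'microsoft.security/attackpaths' "
        "| project id, name, displayName = properties.displayName"
    ),
)
# Default result format returns rows as dictionaries.
for row in client.resources(request).data:
    print(row)
```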
defender-for-cloud | How To Use The Classic Connector | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/how-to-use-the-classic-connector.md | Title: Manage the classic connectors + Title: Manage classic cloud connectors -description: Learn how to remove the AWS and GCP classic connectors from your subscription. +description: Learn how to manage AWS and GCP classic connectors and remove them from your subscription. Last updated 06/29/2023 -# Classic connector (retired) +# Manage classic cloud connectors (retired) -The retired **Classic cloud connector** - Requires configuration in your GCP project or AWS account to create a user that Defender for Cloud can use to connect to your GCP project or AWS environment. The classic connector is only available to customers who have previously connected GCP projects or AWS environments with it. +The retired *classic cloud connector* requires configuration in your Google Cloud Platform (GCP) project or Amazon Web Services (AWS) account to create a user that Microsoft Defender for Cloud can use to connect to your GCP project or AWS environment. The classic connector is available only to customers who previously used it to connect GCP projects or AWS environments. -To connect a [GCP project](quickstart-onboard-gcp.md) or [AWS account](quickstart-onboard-aws.md), you should do so using the native connector available in Defender for Cloud. +To connect a [GCP project](quickstart-onboard-gcp.md) or an [AWS account](quickstart-onboard-aws.md), you should use the native connector available in Defender for Cloud. -## Connect your AWS account using the classic connector --To connect your AWS account using the classic connector: +## Connect your AWS account by using the classic connector ### Prerequisites -- You need a Microsoft Azure subscription. If you don't have an Azure subscription, you can [sign up for a free subscription](https://azure.microsoft.com/pricing/free-trial/).+To complete the procedures for connecting an AWS account, you need: ++- A Microsoft Azure subscription. If you don't have an Azure subscription, you can [sign up for a free one](https://azure.microsoft.com/pricing/free-trial/). -- You must [enable Microsoft Defender for Cloud](get-started.md#enable-defender-for-cloud-on-your-azure-subscription) on your Azure subscription.+- [Microsoft Defender for Cloud](get-started.md#enable-defender-for-cloud-on-your-azure-subscription) enabled on your Azure subscription. - Access to an AWS account. -- **Required roles and permissions**: **Owner** on the relevant Azure subscription. A **Contributor** can also connect an AWS account if an owner provides the service principal details.+- **Owner** permission on the relevant Azure subscription. A **Contributor** can also connect an AWS account if an **Owner** provides the service principal details. ### Set up AWS Security Hub To view security recommendations for multiple regions, repeat the following steps for each relevant region. -> [!IMPORTANT] -> If you're using an AWS management account, repeat the following three steps to configure the management account and all connected member accounts across all relevant regions +If you're using an AWS management account, repeat the following steps to configure the management account and all connected member accounts across all relevant regions. 1. Enable [AWS Config](https://docs.aws.amazon.com/config/latest/developerguide/gs-console.html).- 1. 
Enable [AWS Security Hub](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-settingup.html).--1. Verify that data is flowing to the Security Hub. When you first enable Security Hub, it might take several hours for data to be available. +1. Verify that data is flowing to Security Hub. When you first enable Security Hub, the data might take several hours to become available. ### Set up authentication for Defender for Cloud in AWS There are two ways to allow Defender for Cloud to authenticate to AWS: -- [**Create an IAM role for Defender for Cloud** (Recommended)](#create-an-iam-role-for-defender-for-cloud) - The most secure method.-- [**Create an AWS user for Defender for Cloud**](#create-an-aws-user-for-defender-for-cloud) - A less secure option if you don't have IAM enabled.+- [Create an identity and access management (IAM) role for Defender for Cloud](#create-an-iam-role-for-defender-for-cloud): The more secure and recommended method. +- [Create an AWS user for Defender for Cloud](#create-an-aws-user-for-defender-for-cloud): A less secure option if you don't have IAM enabled. -### Create an IAM role for Defender for Cloud +#### Create an IAM role for Defender for Cloud 1. From your Amazon Web Services console, under **Security, Identity & Compliance**, select **IAM**. :::image type="content" source="./media/quickstart-onboard-aws/aws-identity-and-compliance.png" alt-text="Screenshot of the AWS services." lightbox="./media/quickstart-onboard-aws/aws-identity-and-compliance.png"::: -1. Select **Roles** and **Create role**. +1. Select **Roles** > **Create role**. 1. Select **Another AWS account**. 1. Enter the following details: - - **Account ID** - enter the Microsoft Account ID (**158177204117**) as shown in the AWS connector page in Defender for Cloud. - - **Require External ID** - should be selected - - **External ID** - enter the subscription ID as shown in the AWS connector page in Defender for Cloud. + - For **Account ID**, enter the Microsoft account ID **158177204117**, as shown on the AWS connector page in Defender for Cloud. + - Select **Require External ID**. + - For **External ID**, enter the subscription ID, as shown on the AWS connector page in Defender for Cloud. 1. Select **Next**. 1. In the **Attach permission policies** section, select the following [AWS managed policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_job-functions.html): - - SecurityAudit (`arn:aws:iam::aws:policy/SecurityAudit`) - - AmazonSSMAutomationRole (`arn:aws:iam::aws:policy/service-role/AmazonSSMAutomationRole`) - - AWSSecurityHubReadOnlyAccess (`arn:aws:iam::aws:policy/AWSSecurityHubReadOnlyAccess`) + - `SecurityAudit` (`arn:aws:iam::aws:policy/SecurityAudit`) + - `AmazonSSMAutomationRole` (`arn:aws:iam::aws:policy/service-role/AmazonSSMAutomationRole`) + - `AWSSecurityHubReadOnlyAccess` (`arn:aws:iam::aws:policy/AWSSecurityHubReadOnlyAccess`) -1. Optionally add tags. Adding Tags to the user doesn't affect the connection. +1. Optionally, add tags. Adding tags to the user doesn't affect the connection. 1. Select **Next**. -1. In The Roles list, choose the role you created +1. In the **Roles** list, choose the role that you created. (A code sketch of these role-creation steps appears after this entry.) 1. Save the Amazon Resource Name (ARN) for later. -### Create an AWS user for Defender for Cloud +#### Create an AWS user for Defender for Cloud 1. Open the **Users** tab and select **Add user**. -1. 
In the **Details** step, enter a username for Defender for Cloud and ensure that you select **Programmatic access** for the AWS Access Type. +1. In the **Details** step, enter a username for Defender for Cloud. Select **Programmatic access** for the AWS access type. -1. Select **Next Permissions**. +1. Select **Next: Permissions**. 1. Select **Attach existing policies directly** and apply the following policies:- - SecurityAudit - - AmazonSSMAutomationRole - - AWSSecurityHubReadOnlyAccess + - `SecurityAudit` + - `AmazonSSMAutomationRole` + - `AWSSecurityHubReadOnlyAccess` -1. Select **Next: Tags**. Optionally add tags. Adding Tags to the user doesn't affect the connection. +1. Select **Next: Tags**. Optionally, add tags. Adding tags to the user doesn't affect the connection. 1. Select **Review**. -1. Save the automatically generated **Access key ID** and **Secret access key** CSV file for later. +1. Save the automatically generated **Access key ID** and **Secret access key** CSV file for later. -1. Review the summary and select **Create user**. +1. Review the summary, and then select **Create user**. ### Configure the SSM Agent -AWS Systems Manager is required for automating tasks across your AWS resources. If your EC2 instances don't have the SSM Agent, follow the relevant instructions from Amazon: +AWS Systems Manager (SSM) is required for automating tasks across your AWS resources. If your EC2 instances don't have the SSM Agent, follow the relevant instructions from Amazon: - [Installing and Configuring SSM Agent on Windows Instances](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-install-ssm-win.html) AWS Systems Manager is required for automating tasks across your AWS resources. ### Complete the Azure Arc prerequisites -1. Make sure the appropriate [Azure resources providers](../azure-arc/servers/prerequisites.md#azure-resource-providers) are registered: - - Microsoft.HybridCompute - - Microsoft.GuestConfiguration +1. Make sure the appropriate [Azure resource providers](../azure-arc/servers/prerequisites.md#azure-resource-providers) are registered: + - `Microsoft.HybridCompute` + - `Microsoft.GuestConfiguration` -1. Create a Service Principal for onboarding at scale. As an **Owner** on the subscription you want to use for the onboarding, create a service principal for Azure Arc onboarding as described in [Create a Service Principal for onboarding at scale](../azure-arc/servers/onboard-service-principal.md#create-a-service-principal-for-onboarding-at-scale). +1. As an **Owner** on the subscription that you want to use for onboarding, create a service principal for Azure Arc onboarding, as described in [Create a service principal for onboarding at scale](../azure-arc/servers/onboard-service-principal.md#create-a-service-principal-for-onboarding-at-scale). ### Connect AWS to Defender for Cloud -1. From Defender for Cloud's menu, open **Environment settings** and select the option to switch back to the classic connectors experience. +1. From the Defender for Cloud menu, open **Environment settings**. Then select the option to switch back to the classic connectors experience. - :::image type="content" source="media/quickstart-onboard-gcp/classic-connectors-experience.png" alt-text="Screenshot that shows how to switch back to the classic cloud connectors experience in Defender for Cloud." 
lightbox="media/quickstart-onboard-gcp/classic-connectors-experience.png"::: + :::image type="content" source="media/quickstart-onboard-gcp/classic-connectors-experience.png" alt-text="Screenshot that shows how to switch back to the classic connectors experience in Defender for Cloud." lightbox="media/quickstart-onboard-gcp/classic-connectors-experience.png"::: 1. Select **Add AWS account**. - :::image type="content" source="./media/quickstart-onboard-aws/add-aws-account.png" alt-text="Screenshot that shows how to add AWS account button on Defender for Cloud's multicloud connectors page." lightbox="./media/quickstart-onboard-aws/add-aws-account.png"::: + :::image type="content" source="./media/quickstart-onboard-aws/add-aws-account.png" alt-text="Screenshot that shows the button for adding an AWS account on the pane for multicloud connectors in Defender for Cloud." lightbox="./media/quickstart-onboard-aws/add-aws-account.png"::: ++1. Configure the options on the **AWS authentication** tab: ++ 1. For **Display name**, enter a name for the connector. -1. Configure the options in the **AWS authentication** tab: + 1. For **Subscription**, confirm that the value is correct. It's the subscription that includes the connector and AWS Security Hub recommendations. - 1. Enter a **Display name** for the connector. - - 1. Confirm that the subscription is correct. It's the subscription that includes the connector and AWS Security Hub recommendations. - - 1. Depending on the authentication option, you chose in [Set up authentication for Defender for Cloud in AWS](#set-up-authentication-for-defender-for-cloud-in-aws): - - Select **Assume Role** and paste the ARN from [Create an IAM role for Defender for Cloud](#create-an-iam-role-for-defender-for-cloud). + 1. Depending on the authentication option that you chose when you [set up authentication for Defender for Cloud in AWS](#set-up-authentication-for-defender-for-cloud-in-aws), take one of the following actions: + - For **Authentication method**, select **Assume Role**. Then, for **AWS role ARN**, paste the ARN that you got when you [created an IAM role for Defender for Cloud](#create-an-iam-role-for-defender-for-cloud). - :::image type="content" source="./media/quickstart-onboard-aws/paste-arn-in-portal.png" alt-text="Screenshot that shows how to paste the ARN file in the relevant field of the AWS connection wizard in the Azure portal." lightbox="./media/quickstart-onboard-aws/paste-arn-in-portal.png"::: + :::image type="content" source="./media/quickstart-onboard-aws/paste-arn-in-portal.png" alt-text="Screenshot that shows the location for pasting the ARN file in the AWS connection wizard in the Azure portal." lightbox="./media/quickstart-onboard-aws/paste-arn-in-portal.png"::: - OR + - For **Authentication method**, select **Credentials**. Then, in the relevant boxes, paste the access key and secret key from the CSV files that you saved when you [created an AWS user for Defender for Cloud](#create-an-aws-user-for-defender-for-cloud). - - Select **Credentials** and paste the **access key** and **secret key** from the .csv file you saved in [Create an AWS user for Defender for Cloud](#create-an-aws-user-for-defender-for-cloud). - 1. Select **Next**. -1. Configure the options in the **Azure Arc Configuration** tab: +1. Configure the options on the **Azure Arc Configuration** tab. - Defender for Cloud discovers the EC2 instances in the connected AWS account and uses SSM to onboard them to Azure Arc. 
+ Defender for Cloud discovers the EC2 instances in the connected AWS account and uses SSM to onboard them to Azure Arc. For the list of supported operating systems, see [What operating systems for my EC2 instances are supported?](faq-general.yml) in the common questions. - > [!TIP] - > See [What operating systems for my EC2 instances are supported?](faq-general.yml) + 1. For **Resource Group** and **Azure Region**, select the resource group and region that the discovered AWS EC2s will be onboarded to in the selected subscription. - 1. Select the **Resource Group** and **Azure Region** that the discovered AWS EC2s is onboarded to in the selected subscription. - - 1. Enter the **Service Principal ID** and **Service Principal Client Secret** for Azure Arc as described here [Create a Service Principal for onboarding at scale](../azure-arc/servers/onboard-service-principal.md#create-a-service-principal-for-onboarding-at-scale). - - 1. If the machine is connecting to the internet via a proxy server, specify the proxy server IP address, or the name and port number that the machine uses to communicate with the proxy server. Enter the value in the format ```http://<proxyURL>:<proxyport>``` - - 1. Select **Review + create**. + 1. Enter the **Service Principal ID** and **Service Principal Client Secret** values for Azure Arc, as described in [Create a service principal for onboarding at scale](../azure-arc/servers/onboard-service-principal.md#create-a-service-principal-for-onboarding-at-scale). ++ 1. If the machine is connecting to the internet via a proxy server, specify the proxy server IP address, or the name and port number that the machine uses to communicate with the proxy server. Enter the value in the format `http://<proxyURL>:<proxyport>`. - Review the summary information + 1. Select **Review + create**. - The Tags sections list all Azure Tags that are automatically created for each onboarded EC2 with its own relevant details to easily recognize it in Azure. +1. Review the summary information. - Learn more about Azure Tags in [Use tags to organize your Azure resources and management hierarchy](../azure-resource-manager/management/tag-resources.md). + The **Tags** section lists all Azure tags that are automatically created for each onboarded EC2 instance. Each tag has its own relevant details, so you can easily recognize it in Azure. Learn more about Azure tags in [Use tags to organize your Azure resources and management hierarchy](../azure-resource-manager/management/tag-resources.md). -### Confirmation +### Confirm the connection -When the connector is successfully created, and AWS Security Hub has been configured properly: +After you successfully create the connector and properly configure AWS Security Hub: -- Defender for Cloud scans the environment for AWS EC2 instances, onboarding them to Azure Arc, enabling to install the Log Analytics agent and providing threat protection and security recommendations. +- Defender for Cloud scans the environment for AWS EC2 instances and onboards them to Azure Arc. You can then install the Log Analytics agent and get threat protection and security recommendations. - The Defender for Cloud service scans for new AWS EC2 instances every 6 hours and onboards them according to the configuration. -- The AWS CIS standard is shown in the Defender for Cloud's regulatory compliance dashboard.+- The AWS CIS standard appears in the regulatory compliance dashboard in Defender for Cloud. 
-- If Security Hub policy is enabled, recommendations will appear in the Defender for Cloud portal and the regulatory compliance dashboard 5-10 minutes after onboard completes.+- If a Security Hub policy is enabled, recommendations appear in the Defender for Cloud portal and the regulatory compliance dashboard 5 to 10 minutes after onboarding finishes. ## Remove classic AWS connectors -If you have any existing connectors created with the classic cloud connectors experience, remove them first: +To remove any connectors that you created by using the classic connectors experience: -1. Sign in to the [Azure portal](https://portal.azure.com). +1. Sign in to the [Azure portal](https://portal.azure.com). -1. Navigate to **Defender for Cloud** > **Environment settings**. +1. Go to **Defender for Cloud** > **Environment settings**. 1. Select the option to switch back to the classic connectors experience. - :::image type="content" source="media/quickstart-onboard-gcp/classic-connectors-experience.png" alt-text="Screenshot of switching back to the classic cloud connectors experience in Defender for Cloud." lightbox="media/quickstart-onboard-gcp/classic-connectors-experience.png"::: + :::image type="content" source="media/quickstart-onboard-gcp/classic-connectors-experience.png" alt-text="Screenshot that shows switching back to the classic connectors experience in Defender for Cloud." lightbox="media/quickstart-onboard-gcp/classic-connectors-experience.png"::: -1. For each connector, select the three dots button **…** at the end of the row, and select **Delete**. +1. For each connector, select the ellipsis (**…**) button at the end of the row, and then select **Delete**. -1. On AWS, delete the role ARN, or the credentials created for the integration. +1. On AWS, delete the ARN role or the credentials created for the integration. -## Connect your GCP project using the classic connector +## Connect your GCP project by using the classic connector -To connect your GCP project using the classic connector: +Create a connector for every organization that you want to monitor from Defender for Cloud. -### Prerequisites --- You need a Microsoft Azure subscription. If you don't have an Azure subscription, you can [sign up for a free subscription](https://azure.microsoft.com/pricing/free-trial/).+When you're connecting GCP projects to specific Azure subscriptions, consider the [Google Cloud resource hierarchy](https://cloud.google.com/resource-manager/docs/cloud-platform-resource-hierarchy#resource-hierarchy-detail) and these guidelines: -- You must [enable Microsoft Defender for Cloud](get-started.md#enable-defender-for-cloud-on-your-azure-subscription) on your Azure subscription.+- You can connect your GCP projects to Defender for Cloud at the *organization* level. +- You can connect multiple organizations to one Azure subscription. +- You can connect multiple organizations to multiple Azure subscriptions. +- When you connect an organization, all projects within that organization are added to Defender for Cloud. -- Access to a GCP project.+### Prerequisites -- Required roles and permissions: **Owner** or **Contributor** on the relevant Azure Subscription.+To complete the procedures for connecting a GCP project, you need: -You can learn more about Defender for Cloud's pricing on [the pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/). +- A Microsoft Azure subscription. 
If you don't have an Azure subscription, you can [sign up for a free one](https://azure.microsoft.com/pricing/free-trial/). -### Connect your GCP project using the classic connector +- [Microsoft Defender for Cloud](get-started.md#enable-defender-for-cloud-on-your-azure-subscription) enabled on your Azure subscription. -Create a connector for every organization you want to monitor from Defender for Cloud. +- Access to a GCP project. -When connecting your GCP projects to specific Azure subscriptions, consider the [Google Cloud resource hierarchy](https://cloud.google.com/resource-manager/docs/cloud-platform-resource-hierarchy#resource-hierarchy-detail) and these guidelines: +- The **Owner** or **Contributor** role on the relevant Azure subscription. -- You can connect your GCP projects to Defender for Cloud in the *organization* level-- You can connect multiple organizations to one Azure subscription-- You can connect multiple organizations to multiple Azure subscriptions-- When you connect an organization, all *projects* within that organization are added to Defender for Cloud+You can learn more about Defender for Cloud pricing on [the pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/). ### Set up GCP Security Command Center with Security Health Analytics -For all the GCP projects in your organization, you must also: +For all the GCP projects in your organization, you must: ++1. Set up GCP Security Command Center by using [these instructions from the GCP documentation](https://cloud.google.com/security-command-center/docs/quickstart-scc-setup). -1. Set up **GCP Security Command Center** using [these instructions from the GCP documentation](https://cloud.google.com/security-command-center/docs/quickstart-scc-setup). +1. Enable Security Health Analytics by using [these instructions from the GCP documentation](https://cloud.google.com/security-command-center/docs/how-to-use-security-health-analytics). -1. Enable **Security Health Analytics** using [these instructions from the GCP documentation](https://cloud.google.com/security-command-center/docs/how-to-use-security-health-analytics). +1. Verify that data is flowing to Security Command Center. -1. Verify that there's data flowing to the Security Command Center. +The instructions for connecting your GCP environment for security configuration follow Google's recommendations for consuming security configuration recommendations. The integration applies Google Security Command Center and consumes extra resources that might affect your billing. -The instructions for connecting your GCP environment for security configuration follow Google's recommendations for consuming security configuration recommendations. The integration applies Google Security Command Center and consumes extra resources that might impact your billing. +When you first enable Security Health Analytics, the data might take several hours to become available. -When you first enable Security Health Analytics, it might take several hours for data to be available. +### Enable the GCP Security Command Center API -### Enable GCP Security Command Center API +1. Go to Google's Cloud Console API Library. -1. Navigate to From Google's **Cloud Console API Library**, select each project in the organization you want to connect to Microsoft Defender for Cloud. +1. Select each project in the organization that you want to connect to Microsoft Defender for Cloud. -1. In the API Library, find and select **Security Command Center API**. +1. 
Find and select **Security Command Center API**. 1. On the API's page, select **ENABLE**. -Learn more about the [Security Command Center API](https://cloud.google.com/security-command-center/docs/reference/rest/). +[Learn more about the Security Command Center API](https://cloud.google.com/security-command-center/docs/reference/rest/). ### Create a dedicated service account for the security configuration integration -1. In the **GCP Console**, select a project from the organization in which you're creating the required service account. +1. On the GCP console, select a project from the organization in which you're creating the required service account. > [!NOTE]- > When this service account is added at the organization level, it'll be used to access the data gathered by Security Command Center from all of the other enabled projects in the organization. + > When you add this service account at the organization level, it will be used to access the data that Security Command Center gathers from all of the other enabled projects in the organization. -1. In the **IAM & admin** section of the navigation menu, select **Service accounts**. +1. In the **IAM & admin** section of the left menu, select **Service accounts**. 1. Select **CREATE SERVICE ACCOUNT**. -1. Enter an account name, and select **Create**. +1. Enter an account name, and then select **Create**. -1. Specify the **Role** as **Defender for Cloud Admin Viewer**, and select **Continue**. +1. Specify **Role** as **Defender for Cloud Admin Viewer**, and then select **Continue**. 1. The **Grant users access to this service account** section is optional. Select **Done**. -1. Copy the **Email value** of the created service account, and save it for later use. +1. Copy the **Email value** information for the created service account, and save it for later use. -1. In the **IAM & admin** section of the navigation menu, select **IAM**. +1. In the **IAM & admin** section of the left menu, select **IAM**, and then: ++ 1. Switch to the organization level. - 1. Switch to organization level. - 1. Select **ADD**.- - 1. In the **New members** field, paste the **Email value** you copied earlier. - - 1. Specify the role as **Defender for Cloud Admin Viewer** and then select **Save**. ++ 1. In the **New members** box, paste the **Email value** information that you copied earlier. ++ 1. Specify the role as **Security Center Admin Viewer**, and then select **Save**. :::image type="content" source="./media/quickstart-onboard-gcp/iam-settings-gcp-permissions-admin-viewer.png" alt-text="Screenshot that shows how to set the relevant GCP permissions." lightbox="./media/quickstart-onboard-gcp/iam-settings-gcp-permissions-admin-viewer.png"::: ### Create a private key for the dedicated service account -1. Switch to project level. +1. Switch to the project level. -1. In the **IAM & admin** section of the navigation menu, select **Service accounts**. +1. In the **IAM & admin** section of the left menu, select **Service accounts**. -1. Open the dedicated service account and select Edit. +1. Open the dedicated service account, and then select **Edit**. -1. In the **Keys** section, select **ADD KEY** and then **Create new key**. +1. In the **Keys** section, select **ADD KEY** > **Create new key**. -1. In the Create private key screen, select **JSON** and then select **CREATE**. +1. On the **Create private key** pane, select **JSON**, and then select **CREATE**. 1. Save this JSON file for later use. ### Connect GCP to Defender for Cloud -1. 
From Defender for Cloud's menu, open **Environment settings** and select the option to switch back to the classic connectors experience. +1. From the Defender for Cloud menu, open **Environment settings**. Then select the option to switch back to the classic connectors experience. - :::image type="content" source="media/quickstart-onboard-gcp/classic-connectors-experience.png" alt-text="Screenshot that shows how to switch back to the classic cloud connectors experience in Defender for Cloud." lightbox="media/quickstart-onboard-gcp/classic-connectors-experience.png" ::: + :::image type="content" source="media/quickstart-onboard-gcp/classic-connectors-experience.png" alt-text="Screenshot that shows how to switch back to the classic connectors experience in Defender for Cloud." lightbox="media/quickstart-onboard-gcp/classic-connectors-experience.png" ::: -1. Select add GCP project. +1. Select **Add GCP project**. -1. In the onboarding page: +1. On the onboarding page: 1. Validate the chosen subscription.- - 1. In the **Display name** field, enter a display name for the connector. - - 1. In the **Organization ID** field, enter your organization's ID. If you don't know it, see [Creating and managing organizations](https://cloud.google.com/resource-manager/docs/creating-managing-organization). - - 1. In the **Private key** file box, browse to the JSON file you downloaded in [Create a private key for the dedicated service account](#create-a-private-key-for-the-dedicated-service-account). - - 1. Select **Next** -### Confirmation + 1. In the **Display name** box, enter a display name for the connector. ++ 1. In the **Organization ID** box, enter your organization's ID. If you don't know it, see the Google guide [Creating and managing organizations](https://cloud.google.com/resource-manager/docs/creating-managing-organization). ++ 1. In the **Private key** box, browse to the JSON file that you downloaded when you [created a private key for the dedicated service account](#create-a-private-key-for-the-dedicated-service-account). ++1. Select **Next**. ++### Confirm the connection -When the connector is successfully created, and GCP Security Command Center has been configured properly: +After you successfully create the connector and properly configure GCP Security Command Center: -- The GCP CIS standard is shown in the Defender for Cloud's regulatory compliance dashboard.+- The GCP CIS standard appears in the regulatory compliance dashboard in Defender for Cloud. -- Security recommendations for your GCP resources will appear in the Defender for Cloud portal and the regulatory compliance dashboard 5-10 minutes after onboard completes:+- Security recommendations for your GCP resources appear in the Defender for Cloud portal and the regulatory compliance dashboard 5 to 10 minutes after onboarding finishes. - :::image type="content" source="./media/quickstart-onboard-gcp/gcp-resources-in-recommendations.png" alt-text="Screenshot that shows the GCP resources and recommendations in Defender for Cloud's recommendations page." lightbox="./media/quickstart-onboard-gcp/gcp-resources-in-recommendations.png" ::: + :::image type="content" source="./media/quickstart-onboard-gcp/gcp-resources-in-recommendations.png" alt-text="Screenshot that shows the GCP resources and recommendations on the recommendations pane in Defender for Cloud." 
lightbox="./media/quickstart-onboard-gcp/gcp-resources-in-recommendations.png" ::: ## Remove classic GCP connectors -If you have any existing connectors created with the classic cloud connectors experience, remove them first: +To remove any connectors that you created by using the classic connectors experience: -1. Sign in to the [Azure portal](https://portal.azure.com). +1. Sign in to the [Azure portal](https://portal.azure.com). -1. Navigate to **Defender for Cloud** > **Environment settings**. +1. Go to **Defender for Cloud** > **Environment settings**. 1. Select the option to switch back to the classic connectors experience. - :::image type="content" source="media/quickstart-onboard-gcp/classic-connectors-experience.png" alt-text="A screenshot that shows how to switch back to the classic cloud connectors experience in Defender for Cloud." lightbox="media/quickstart-onboard-gcp/classic-connectors-experience.png"::: + :::image type="content" source="media/quickstart-onboard-gcp/classic-connectors-experience.png" alt-text="Screenshot that shows how to switch back to the classic connectors experience in Defender for Cloud." lightbox="media/quickstart-onboard-gcp/classic-connectors-experience.png"::: -1. For each connector, select the three dot button at the end of the row, and select **Delete**. +1. For each connector, select the ellipsis (**...**) button at the end of the row, and then select **Delete**. ## Next steps |
defender-for-cloud | Quickstart Onboard Aws | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-aws.md | Title: Connect your AWS account to Defender for Cloud -description: Defend your AWS resources with Microsoft Defender for Cloud + Title: Connect your AWS account to Microsoft Defender for Cloud +description: Defend your AWS resources by using Microsoft Defender for Cloud. Last updated 06/28/2023 -# Connect your AWS accounts to Microsoft Defender for Cloud +# Connect your AWS account to Microsoft Defender for Cloud -With cloud workloads commonly spanning multiple cloud platforms, cloud security services must do the same. Microsoft Defender for Cloud protects workloads in Amazon Web Services (AWS), but you need to set up the connection between them to your Azure subscription. +Workloads commonly span multiple cloud platforms. Cloud security services must do the same. Microsoft Defender for Cloud helps protect workloads in Amazon Web Services (AWS), but you need to set up the connection between AWS and your Azure subscription. -> [!NOTE] -> If you are connecting an AWS account that was previously connected with the classic connector, you must [remove them](how-to-use-the-classic-connector.md#remove-classic-aws-connectors) first. Using an AWS account that is connected by both the classic and native connector can produce duplicate recommendations. +If you're connecting an AWS account that you previously connected by using the classic connector, you must [remove it](how-to-use-the-classic-connector.md#remove-classic-aws-connectors) first. Using an AWS account that's connected by both the classic and native connectors can produce duplicate recommendations. -This screenshot shows AWS accounts displayed in Defender for Cloud's [overview dashboard](overview-page.md). +The following screenshot shows AWS accounts displayed in the Defender for Cloud [overview dashboard](overview-page.md). -You can learn more by watching this video from the Defender for Cloud in the Field video series: -- [AWS connector](episode-one.md)+You can learn more by watching the [New AWS connector in Defender for Cloud](episode-one.md) video from the *Defender for Cloud in the Field* video series. -For a reference list of all the recommendations Defender for Cloud can provide for AWS resources, see [Security recommendations for AWS resources - a reference guide](recommendations-reference-aws.md). +For a reference list of all the recommendations that Defender for Cloud can provide for AWS resources, see [Security recommendations for AWS resources - a reference guide](recommendations-reference-aws.md). ## Prerequisites -- You need a Microsoft Azure subscription. If you don't have an Azure subscription, you can [sign up for a free subscription](https://azure.microsoft.com/pricing/free-trial/).+To complete the procedures in this article, you need: ++- A Microsoft Azure subscription. If you don't have an Azure subscription, you can [sign up for a free one](https://azure.microsoft.com/pricing/free-trial/). -- You must [Set up Microsoft Defender for Cloud](get-started.md#enable-defender-for-cloud-on-your-azure-subscription) on your Azure subscription.+- [Microsoft Defender for Cloud](get-started.md#enable-defender-for-cloud-on-your-azure-subscription) set up on your Azure subscription. - Access to an AWS account. -- Required roles and permissions: **Contributor** permission for the relevant Azure subscription. 
<br> **Administrator** on the AWS account.+- **Contributor** permission for the relevant Azure subscription, and **Administrator** permission on the AWS account. > [!NOTE] > The AWS connector is not available on the national government clouds (Azure Government, Azure China 21Vianet). -- **To enable the Defender for Containers plan**, you need:- - At least one Amazon EKS cluster with permission to access to the EKS K8s API server. If you need to create a new EKS cluster, follow the instructions in [Getting started with Amazon EKS ΓÇô eksctl](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html). - - The resource capacity to create a new SQS queue, Kinesis Fire Hose delivery stream, and S3 bucket in the cluster's region. +### Defender for Containers ++If you choose the Microsoft Defender for Containers plan, you need: -- **To enable the Defender for SQL plan**, you need:+- At least one Amazon EKS cluster with permission to access to the EKS Kubernetes API server. If you need to create a new EKS cluster, follow the instructions in [Getting started with Amazon EKS ΓÇô eksctl](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html). +- The resource capacity to create a new Amazon SQS queue, Kinesis Data Firehose delivery stream, and Amazon S3 bucket in the cluster's region. - - Microsoft Defender for SQL enabled on your subscription. Learn how to [protect your databases](tutorial-enable-databases-plan.md). +### Defender for SQL - - An active AWS account, with EC2 instances running SQL server or RDS Custom for SQL Server. +If you choose the Microsoft Defender for SQL plan, you need: - - Azure Arc for servers installed on your EC2 instances/RDS Custom for SQL Server. - - (Recommended) Use the auto provisioning process to install Azure Arc on all of your existing and future EC2 instances. +- Microsoft Defender for SQL enabled on your subscription. [Learn how to protect your databases](tutorial-enable-databases-plan.md). +- An active AWS account, with EC2 instances running SQL Server or RDS Custom for SQL Server. +- Azure Arc for servers installed on your EC2 instances or RDS Custom for SQL Server. - Auto provisioning, which is managed by AWS Systems Manager (SSM) using the SSM agent. Some Amazon Machine Images (AMIs) already have the SSM agent preinstalled. If you already have the SSM agent preinstalled, the AMIs are listed in [AMIs with SSM Agent preinstalled](https://docs.aws.amazon.com/systems-manager/latest/userguide/ssm-agent-technical-details.html#ami-preinstalled-agent). If your EC2 instances don't have the SSM Agent, you need to install it using either of the following relevant instructions from Amazon: - - [Install SSM Agent for a hybrid environment (Windows)](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-install-managed-win.html) - Ensure that your SSM agent has the managed policy ["AmazonSSMManagedInstanceCore"](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonSSMManagedInstanceCore.html) that enables AWS Systems Manager service core functionality. +We recommend that you use the auto-provisioning process to install Azure Arc on all of your existing and future EC2 instances. To enable the Azure Arc auto-provisioning, you need **Owner** permission on the relevant Azure subscription. - > [!NOTE] - > To enable the Azure Arc auto-provisioning, you'll need **Owner** permission on the relevant Azure subscription. +AWS Systems Manager (SSM) manages auto-provisioning by using the SSM Agent. 
Some Amazon Machine Images already have the [SSM Agent preinstalled](https://docs.aws.amazon.com/systems-manager/latest/userguide/ami-preinstalled-agent.html). If your EC2 instances don't have the SSM Agent, install it by using these instructions from Amazon: [Install SSM Agent for a hybrid and multicloud environment (Windows)](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-install-managed-win.html). - - Other extensions should be enabled on the Arc-connected machines: - - Microsoft Defender for Endpoint - - VA solution (TVM/Qualys) - - Log Analytics (LA) agent on Arc machines or Azure Monitor agent (AMA) +Ensure that your SSM Agent has the managed policy [AmazonSSMManagedInstanceCore](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonSSMManagedInstanceCore.html). It enables core functionality for the AWS Systems Manager service. - Make sure the selected LA workspace has security solution installed. The LA agent and AMA are currently configured in the subscription level. All of your AWS accounts and GCP projects under the same subscription inherits the subscription settings for the LA agent and AMA. +Enable these other extensions on the Azure Arc-connected machines: + +- Microsoft Defender for Endpoint +- A vulnerability assessment solution (TVM or Qualys) +- The Log Analytics agent on Azure Arc-connected machines or the Azure Monitor agent - Learn more about [monitoring components](monitoring-components.md) for Defender for Cloud. +Make sure the selected Log Analytics workspace has a security solution installed. The Log Analytics agent and the Azure Monitor agent are currently configured at the *subscription* level. All of your AWS accounts and Google Cloud Platform (GCP) projects under the same subscription inherit the subscription settings for the Log Analytics agent and the Azure Monitor agent. -- **To enable the Defender for Servers plan**, you need:+[Learn more about monitoring components](monitoring-components.md) for Defender for Cloud. - - Microsoft Defender for Servers enabled on your subscription. Learn how to enable [Defender for Servers](tutorial-enable-servers-plan.md). +### Defender for Servers - - An active AWS account, with EC2 instances. +If you choose the Microsoft Defender for Servers plan, you need: - - Azure Arc for servers installed on your EC2 instances. - - (Recommended) Use the auto provisioning process to install Azure Arc on all of your existing and future EC2 instances. +- Microsoft Defender for Servers enabled on your subscription. Learn how to enable plans in [Enable enhanced security features](enable-enhanced-security.md). +- An active AWS account, with EC2 instances. +- Azure Arc for servers installed on your EC2 instances. - Auto provisioning, which is managed by AWS Systems Manager (SSM) using the SSM agent. Some Amazon Machine Images (AMIs) already have the SSM agent preinstalled. If that is the case, their AMIs are listed in [AMIs with SSM Agent preinstalled](https://docs.aws.amazon.com/systems-manager/latest/userguide/ssm-agent-technical-details.html#ami-preinstalled-agent). 
If your EC2 instances don't have the SSM Agent, you need to install it using either of the following relevant instructions from Amazon: - - [Install SSM Agent for a hybrid environment (Windows)](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-install-managed-win.html) - - [Install SSM Agent for a hybrid environment (Linux)](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-install-managed-linux.html) - Ensure that your SSM agent has the managed policy ["AmazonSSMManagedInstanceCore"](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonSSMManagedInstanceCore.html) that enables AWS Systems Manager service core functionality. +We recommend that you use the auto-provisioning process to install Azure Arc on all of your existing and future EC2 instances. To enable the Azure Arc auto-provisioning, you need **Owner** permission on the relevant Azure subscription. - > [!NOTE] - > To enable the Azure Arc auto-provisioning, you need an **Owner** permission on the relevant Azure subscription. +AWS Systems Manager manages auto-provisioning by using the SSM Agent. Some Amazon Machine Images already have the [SSM Agent preinstalled](https://docs.aws.amazon.com/systems-manager/latest/userguide/ami-preinstalled-agent.html). If your EC2 instances don't have the SSM Agent, install it by using either of the following instructions from Amazon: - - If you want to manually install Azure Arc on your existing and future EC2 instances, use the [EC2 instances should be connected to Azure Arc](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/231dee23-84db-44d2-bd9d-c32fbcfb42a3) recommendation to identify instances that don't have Azure Arc installed. +- [Install SSM Agent for a hybrid and multicloud environment (Windows)](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-install-managed-win.html) +- [Install SSM Agent for a hybrid and multicloud environment (Linux)](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-install-managed-linux.html) - - Other extensions should be enabled on the Arc-connected machines: - - Microsoft Defender for Endpoint - - VA solution (TVM/Qualys) - - Log Analytics (LA) agent on Arc machines or Azure Monitor agent (AMA) +Ensure that your SSM Agent has the managed policy [AmazonSSMManagedInstanceCore](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonSSMManagedInstanceCore.html), which enables core functionality for the AWS Systems Manager service. - Make sure the selected LA workspace has security solution installed. The LA agent and AMA are currently configured in the subscription level. All of your AWS accounts and GCP projects under the same subscription inherits the subscription settings for the LA agent and AMA. +If you want to manually install Azure Arc on your existing and future EC2 instances, use the [EC2 instances should be connected to Azure Arc](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/231dee23-84db-44d2-bd9d-c32fbcfb42a3) recommendation to identify instances that don't have Azure Arc installed. - Learn more about [monitoring components](monitoring-components.md) for Defender for Cloud. 
+Enable these other extensions on the Azure Arc-connected machines: + +- Microsoft Defender for Endpoint +- A vulnerability assessment solution (TVM or Qualys) +- The Log Analytics agent on Azure Arc-connected machines or the Azure Monitor agent - > [!NOTE] - > Defender for Servers assigns tags to your AWS resources to manage the auto-provisioning process. You must have these tags properly assigned to your resources so that Defender for Cloud can manage your resources: - **AccountId**, **Cloud**, **InstanceId**, **MDFCSecurityConnector** +Make sure the selected Log Analytics workspace has a security solution installed. The Log Analytics agent and the Azure Monitor agent are currently configured at the *subscription* level. All of your AWS accounts and GCP projects under the same subscription inherit the subscription settings for the Log Analytics agent and the Azure Monitor agent. ++[Learn more about monitoring components](monitoring-components.md) for Defender for Cloud. ++Defender for Servers assigns tags to your AWS resources to manage the auto-provisioning process. You must have these tags properly assigned to your resources so that Defender for Cloud can manage them: `AccountId`, `Cloud`, `InstanceId`, and `MDFCSecurityConnector`. ## Connect your AWS account -**To connect your AWS account to Defender for Cloud**: +To connect your AWS account to Defender for Cloud by using the native connector: 1. Sign in to the [Azure portal](https://portal.azure.com). -1. Navigate to **Defender for Cloud** > **Environment settings**. +1. Go to **Defender for Cloud** > **Environment settings**. 1. Select **Add environment** > **Amazon Web Services**. - :::image type="content" source="media/quickstart-onboard-aws/add-aws-account-environment-settings.png" alt-text="Screenshot of connecting an AWS account to an Azure subscription." lightbox="media/quickstart-onboard-aws/add-aws-account-environment-settings.png"::: + :::image type="content" source="media/quickstart-onboard-aws/add-aws-account-environment-settings.png" alt-text="Screenshot that shows connecting an AWS account to an Azure subscription." lightbox="media/quickstart-onboard-aws/add-aws-account-environment-settings.png"::: 1. Enter the details of the AWS account, including the location where you store the connector resource. - :::image type="content" source="media/quickstart-onboard-aws/add-aws-account-details.png" alt-text="Screenshot of step 1 of the add AWS account wizard: Enter the account details." lightbox="media/quickstart-onboard-aws/add-aws-account-details.png"::: + :::image type="content" source="media/quickstart-onboard-aws/add-aws-account-details.png" alt-text="Screenshot that shows the tab for entering account details for an AWS account." lightbox="media/quickstart-onboard-aws/add-aws-account-details.png"::: - (Optional) Select **Management account** to create a connector to a management account. Connectors are created for each member account discovered under the provided management account. Autoprovisioning is enabled for all of the newly onboarded accounts. + Optionally, select **Management account** to create a connector to a management account. Connectors are created for each member account discovered under the provided management account. Auto-provisioning is enabled for all of the newly onboarded accounts. 1. Select **Next: Select plans**. - > [!NOTE] - > Each plan has its own requirements for permissions, and might incur charges. 
Learn more about [each plan's requirements](concept-aws-connector.md#native-connector-plan-requirements) and their [prices](https://azure.microsoft.com/pricing/details/defender-for-cloud/?v=17.23h).

+ The **Select plans** tab is where you choose which Defender for Cloud capabilities to enable for this AWS account. Each plan has its own [requirements for permissions](concept-aws-connector.md#native-connector-plan-requirements) and might incur [charges](https://azure.microsoft.com/pricing/details/defender-for-cloud/?v=17.23h).

 - :::image type="content" source="media/quickstart-onboard-aws/add-aws-account-plans-selection.png" alt-text="Screenshot of the select plans tab where you can choose which Defender for Cloud plans to enable for your AWS account." lightbox="media/quickstart-onboard-aws/add-aws-account-plans-selection.png":::
 + :::image type="content" source="media/quickstart-onboard-aws/add-aws-account-plans-selection.png" alt-text="Screenshot that shows the tab for selecting plans for an AWS account." lightbox="media/quickstart-onboard-aws/add-aws-account-plans-selection.png":::

 > [!IMPORTANT]
- > To present the current status of your recommendations, the CSPM plan queries the AWS resource APIs several times a day. These read-only API calls incur no charges, but they *are* registered in CloudTrail if you've enabled a trail for read events. As explained in [the AWS documentation](https://aws.amazon.com/cloudtrail/pricing/), there are no additional charges for keeping one trail. If you're exporting the data out of AWS (for example, to an external SIEM), this increased volume of calls might also increase ingestion costs. In such cases, We recommend filtering out the read-only calls from the Defender for Cloud user or role ARN: `arn:aws:iam::[accountId]:role/CspmMonitorAws` (this is the default role name, confirm the role name configured on your account).
 + > To present the current status of your recommendations, the Microsoft Defender Cloud Security Posture Management plan queries the AWS resource APIs several times a day. These read-only API calls incur no charges, but they *are* registered in CloudTrail if you've enabled a trail for read events.
 + >
 + > As explained in [the AWS documentation](https://aws.amazon.com/cloudtrail/pricing/), there are no additional charges for keeping one trail. If you're exporting the data out of AWS (for example, to an external SIEM system), this increased volume of calls might also increase ingestion costs. In such cases, we recommend filtering out the read-only calls from the Defender for Cloud user or role ARN: `arn:aws:iam::[accountId]:role/CspmMonitorAws`. (This is the default role name. Confirm the role name configured on your account.)

 - - By default the **Servers** plan is set to **On**. This is necessary to extend Defender for server's coverage to your AWS EC2. Ensure you've fulfilled the [network requirements for Azure Arc](../azure-arc/servers/network-requirements.md?tabs=azure-cloud).
+1. By default, the **Servers** plan is set to **On**. This setting is necessary to extend the coverage of Defender for Servers to AWS EC2. Ensure that you've fulfilled the [network requirements for Azure Arc](../azure-arc/servers/network-requirements.md?tabs=azure-cloud).

 - - (Optional) Select **Configure**, to edit the configuration as required. 
> [!NOTE]- > The respective Azure Arc servers for EC2 instances or GCP virtual machines that no longer exist (and the respective Azure Arc servers with a status of ["Disconnected" or "Expired"](/azure/azure-arc/servers/overview)) will be removed after 7 days. This process removes irrelevant Azure Arc entities, ensuring only Azure Arc servers related to existing instances are displayed. + > The respective Azure Arc servers for EC2 instances or GCP virtual machines that no longer exist (and the respective Azure Arc servers with a status of [Disconnected or Expired](/azure/azure-arc/servers/overview)) are removed after 7 days. This process removes irrelevant Azure Arc entities to ensure that only Azure Arc servers related to existing instances are displayed. - - By default the **Containers** plan is set to **On**. This is necessary to have Defender for Containers protect your AWS EKS clusters. Ensure you've fulfilled the [network requirements](./defender-for-containers-enable.md?pivots=defender-for-container-eks&source=docs&tabs=aks-deploy-portal%2ck8s-deploy-asc%2ck8s-verify-asc%2ck8s-remove-arc%2caks-removeprofile-api#network-requirements) for the Defender for Containers plan. +1. By default, the **Containers** plan is set to **On**. This setting is necessary to have Defender for Containers protect your AWS EKS clusters. Ensure that you've fulfilled the [network requirements](./defender-for-containers-enable.md?pivots=defender-for-container-eks&source=docs&tabs=aks-deploy-portal%2ck8s-deploy-asc%2ck8s-verify-asc%2ck8s-remove-arc%2caks-removeprofile-api#network-requirements) for the Defender for Containers plan. - > [!Note] - > Azure Arc-enabled Kubernetes, the Defender Arc extension, and the Azure Policy Arc extension should be installed. Use the dedicated Defender for Cloud recommendations to deploy the extensions (and Arc, if necessary) as explained in [Protect Amazon Elastic Kubernetes Service clusters](defender-for-containers-enable.md?tabs=defender-for-container-eks). + > [!NOTE] + > Azure Arc-enabled Kubernetes, the Azure Arc extension for Microsoft Defender, and the Azure Arc extension for Azure Policy should be installed. Use the dedicated Defender for Cloud recommendations to deploy the extensions (and Azure Arc, if necessary), as explained in [Protect Amazon Elastic Kubernetes Service clusters](defender-for-containers-enable.md?tabs=defender-for-container-eks). - - (Optional) Select **Configure**, to edit the configuration as required. If you choose to disable this configuration, the `Threat detection (control plane)` feature is disabled. Learn more about the [feature availability](supported-machines-endpoint-solutions-clouds-containers.md). + Optionally, select **Configure** to edit the configuration as required. If you choose to turn off this configuration, the **Threat detection (control plane)** feature is also disabled. [Learn more about feature availability](supported-machines-endpoint-solutions-clouds-containers.md). - - By default the **Databases** plan is set to **On**. This is necessary to extend Defender for SQL's coverage to your AWS EC2 and RDS Custom for SQL Server. +1. By default, the **Databases** plan is set to **On**. This setting is necessary to extend coverage of Defender for SQL to AWS EC2 and RDS Custom for SQL Server. - - (Optional) Select **Configure**, to edit the configuration as required. We recommend you leave it set to the default configuration. + Optionally, select **Configure** to edit the configuration as required. 
We recommend that you leave it set to the default configuration. 1. Select **Next: Configure access**. -1. Select **Click to download the CloudFormation template**, to download the CloudFormation template. +1. On the **Configure access** tab, select **Click to download the CloudFormation template** to download the CloudFormation template. ++ :::image type="content" source="media/quickstart-onboard-aws/download-cloudformation-template.png" alt-text="Screenshot that shows the button to download the CloudFormation template." lightbox="media/quickstart-onboard-aws/download-cloudformation-template.png"::: ++1. Continue to configure access by making the following selections: - :::image type="content" source="media/quickstart-onboard-aws/download-cloudformation-template.png" alt-text="Screenshot that shows you where to select on the screen to download the CloudFormation template." lightbox="media/quickstart-onboard-aws/download-cloudformation-template.png"::: + a. Choose a deployment type: - - Default access - Allows Defender for Cloud to scan your resources and automatically include future capabilities. - - Least privileged access - Grants Defender for Cloud access only to the current permissions needed for the selected plans. If you select the least privileged permissions, you receive notifications on any new roles and permissions that are required to get full functionality on the connector health section. + - **Default access**: Allows Defender for Cloud to scan your resources and automatically include future capabilities. + - **Least privilege access**: Grants Defender for Cloud access only to the current permissions needed for the selected plans. If you select the least privileged permissions, you'll receive notifications on any new roles and permissions that are required to get full functionality for connector health. - b. Choose deployment method: **AWS CloudFormation** or **Terraform**. + b. Choose a deployment method: **AWS CloudFormation** or **Terraform**. - :::image type="content" source="media/quickstart-onboard-aws/aws-configure-access.png" alt-text="Screenshot showing the configure access and its deployment options and instructions."::: + :::image type="content" source="media/quickstart-onboard-aws/aws-configure-access.png" alt-text="Screenshot that shows deployment options and instructions for configuring access."::: -1. Follow the on-screen instructions for the selected deployment method to complete the required dependencies on AWS. If you're onboarding a management account, you need to run the CloudFormation template both as Stack and as StackSet. Connectors will be created for the member accounts up to 24 hours after the onboarding. +1. Follow the on-screen instructions for the selected deployment method to complete the required dependencies on AWS. If you're onboarding a management account, you need to run the CloudFormation template both as Stack and as StackSet. Connectors are created for the member accounts up to 24 hours after the onboarding. 1. Select **Next: Review and generate**. 1. Select **Create**. -Defender for Cloud immediately starts scanning your AWS resources and you see security recommendations within a few hours. For a reference list of all the recommendations Defender for Cloud can provide for AWS resources, see [Security recommendations for AWS resources - a reference guide](recommendations-reference-aws.md). +Defender for Cloud immediately starts scanning your AWS resources. Security recommendations appear within a few hours. 
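If you complete that dependency step from a terminal rather than the AWS console, the stack deployment might look like the following sketch. The stack name and template file name are placeholders, and `CAPABILITY_NAMED_IAM` is an assumption based on the template creating IAM roles; for a management account, run an equivalent `aws cloudformation create-stack-set` deployment as well, as noted above.

```bash
# Sketch: deploy the downloaded CloudFormation template as a stack.
# The stack name and template file name are placeholders.
aws cloudformation create-stack \
  --stack-name DefenderForCloudOnboarding \
  --template-body file://defender-for-cloud-template.yaml \
  --capabilities CAPABILITY_NAMED_IAM
```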
## Deploy a CloudFormation template to your AWS account

-As part of connecting an AWS account to Microsoft Defender for Cloud, a CloudFormation template should be deployed to the AWS account. This CloudFormation template creates all of the required resources necessary for Microsoft Defender for Cloud to connect to the AWS account.
+As part of connecting an AWS account to Microsoft Defender for Cloud, you deploy a CloudFormation template to the AWS account. This template creates all of the required resources for the connection.

-The CloudFormation template should be deployed using Stack (or StackSet if you have a management account).
+Deploy the CloudFormation template by using Stack (or StackSet if you have a management account). When you're deploying the template, the Stack creation wizard offers the following options.

-The Stack creation wizard offers the following options when you deploy the CloudFormation template:
+- **Amazon S3 URL**: Upload the downloaded CloudFormation template to your own S3 bucket with your own security configurations. Enter the URL to the S3 bucket in the AWS deployment wizard.

-1. **Amazon S3 URL** – upload the downloaded CloudFormation template to your own S3 bucket with your own security configurations. Enter the URL to the S3 bucket in the AWS deployment wizard.
-
-1. **Upload a template file** – AWS automatically creates an S3 bucket that the CloudFormation template is saved to. The automation for the S3 bucket has a security misconfiguration that causes the `S3 buckets should require requests to use Secure Socket Layer` recommendation to appear. You can remediate this recommendation by applying the following policy:
+- **Upload a template file**: AWS automatically creates an S3 bucket that the CloudFormation template is saved to. The automation for the S3 bucket has a security misconfiguration that causes the `S3 buckets should require requests to use Secure Socket Layer` recommendation to appear. You can remediate this recommendation by applying the following policy (a sample sketch also appears after the monitoring section below):

 ```bash
 { 
The Stack creation wizard offers the following options when you deploy the CloudFormation template

 ## Monitor your AWS resources

-Defender for Cloud's security recommendations page displays your AWS resources. You can use the environments filter to enjoy Defender for Cloud's multicloud capabilities.
+The security recommendations page in Defender for Cloud displays your AWS resources. You can use the environments filter to enjoy multicloud capabilities in Defender for Cloud.

-To view all the active recommendations for your resources by resource type, use Defender for Cloud's asset inventory page and filter to the AWS resource type in which you're interested:
+To view all the active recommendations for your resources by resource type, use the asset inventory page in Defender for Cloud and filter to the AWS resource type that you're interested in. 
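As referenced in the CloudFormation section above, a standard way to satisfy the `S3 buckets should require requests to use Secure Socket Layer` recommendation is a bucket policy that denies non-TLS requests. The following is a minimal sketch with a placeholder bucket name; confirm it against the exact policy text that the recommendation's remediation steps provide:

```bash
# Sketch: deny non-TLS (aws:SecureTransport=false) requests to the template bucket.
# "my-template-bucket" is a placeholder bucket name.
aws s3api put-bucket-policy --bucket my-template-bucket --policy '{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyInsecureTransport",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::my-template-bucket",
        "arn:aws:s3:::my-template-bucket/*"
      ],
      "Condition": { "Bool": { "aws:SecureTransport": "false" } }
    }
  ]
}'
```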
## Learn more -You can check out the following blogs: +Check out the following blogs: -- [Ignite 2021: Microsoft Defender for Cloud news](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/ignite-2021-microsoft-defender-for-cloud-news/ba-p/2882807).+- [Ignite 2021: Microsoft Defender for Cloud news](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/ignite-2021-microsoft-defender-for-cloud-news/ba-p/2882807) - [Security posture management and server protection for AWS and GCP](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/security-posture-management-and-server-protection-for-aws-and/ba-p/3271388) ## Clean up resources -There's no need to clean up any resources for this tutorial. +There's no need to clean up any resources for this article. ## Next steps -Connecting your AWS account is part of the multicloud experience available in Microsoft Defender for Cloud. --- [Protect all of your resources with Defender for Cloud](enable-all-plans.md)+Connecting your AWS account is part of the multicloud experience available in Microsoft Defender for Cloud: -- Set up your [on-premises machines](quickstart-onboard-machines.md), [GCP projects](quickstart-onboard-gcp.md).-- Check out [common questions](faq-general.yml) about onboarding your AWS account.-- [Troubleshoot your multicloud connectors](troubleshooting-guide.md#troubleshooting-the-native-multicloud-connector)+- [Protect all of your resources with Defender for Cloud](enable-all-plans.md). +- Set up your [on-premises machines](quickstart-onboard-machines.md) and [GCP projects](quickstart-onboard-gcp.md). +- Get answers to [common questions](faq-general.yml) about onboarding your AWS account. +- [Troubleshoot your multicloud connectors](troubleshooting-guide.md#troubleshooting-the-native-multicloud-connector). |
defender-for-cloud | Quickstart Onboard Devops | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-devops.md | -With cloud workloads commonly spanning multiple cloud platforms, cloud security services must do the same. Microsoft Defender for Cloud protects workloads in Azure, Amazon Web Services (AWS), Google Cloud Platform (GCP), GitHub, and Azure DevOps (ADO). +Cloud workloads commonly span multiple cloud platforms. Cloud security services must do the same. Microsoft Defender for Cloud helps protect workloads in Azure, Amazon Web Services, Google Cloud Platform, GitHub, and Azure DevOps. -To protect your ADO-based resources, you can connect your ADO organizations on the environment settings page in Microsoft Defender for Cloud. This page provides a simple onboarding experience (including auto discovery). +In this quickstart, you connect your Azure DevOps organizations on the **Environment settings** page in Microsoft Defender for Cloud. This page provides a simple onboarding experience (including auto-discovery). -By connecting your Azure DevOps repositories to Defender for Cloud, you'll extend Defender for Cloud's enhanced security features to your ADO resources. These features include: +By connecting your Azure DevOps repositories to Defender for Cloud, you extend the security features of Defender for Cloud to your Azure DevOps resources. These features include: -- **Defender for Cloud's Cloud Security Posture Management (CSPM) features** - Assesses your Azure DevOps resources according to ADO-specific security recommendations. You can also learn about all the [recommendations for DevOps](recommendations-reference.md) resources. Resources are assessed for compliance with built-in standards that are specific to DevOps. Defender for Cloud's [asset inventory page](asset-inventory.md) is a multicloud enabled feature that helps you manage your Azure DevOps resources alongside your Azure resources.+- **Microsoft Defender Cloud Security Posture Management features**: You can assess your Azure DevOps resources for compliance with Azure DevOps-specific security recommendations. You can also learn about all the [recommendations for DevOps](recommendations-reference.md) resources. The Defender for Cloud [asset inventory page](asset-inventory.md) is a multicloud-enabled feature that helps you manage your Azure DevOps resources alongside your Azure resources. -- **Defender for Cloud's Workload Protection features** - Extends Defender for Cloud's threat detection capabilities and advanced defenses to your Azure DevOps resources.+- **Workload protection features**: You can extend the threat detection capabilities and advanced defenses in Defender for Cloud to your Azure DevOps resources. -API calls performed by Defender for Cloud count against the [Azure DevOps Global consumption limit](/azure/devops/integrate/concepts/rate-limits). For more information, see the [common questions](faq-defender-for-devops.yml) for Defender for DevOps. +API calls that Defender for Cloud performs count against the [Azure DevOps global consumption limit](/azure/devops/integrate/concepts/rate-limits). For more information, see the [common questions about Microsoft Defender for DevOps](faq-defender-for-devops.yml). ## Prerequisites -- An Azure account with Defender for Cloud onboarded. 
If you don't already have an Azure account [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).+To complete this quickstart, you need: -- You must [configure the Microsoft Security DevOps Azure DevOps extension](azure-devops-extension.md).+- An Azure account with Defender for Cloud onboarded. If you don't already have an Azure account, [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). +- The [Microsoft Security DevOps Azure DevOps extension](azure-devops-extension.md) configured. ## Availability | Aspect | Details | |--|--|-| Release state: | Preview <br> The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. | +| Release state: | Preview. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability. | | Pricing: | For pricing, see the Defender for Cloud [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/?v=17.23h#pricing). |-| Required permissions: | **- Azure account:** with permissions to sign into Azure portal <br> **- Contributor:** on the Azure subscription where the connector will be created <br> **- Security Admin Role:** in Defender for Cloud <br> **- Organization Administrator:** in Azure DevOps <br> **- Basic or Basic + Test Plans Access Level:** in Azure DevOps. <br> - In Azure DevOps, configure: Third-party applications gain access via OAuth, which must be set to `On` . [Learn more about OAuth](/azure/devops/organizations/accounts/change-application-access-policies)| +| Required permissions: | **Account Administrator** with permissions to sign in to the Azure portal. <br> **Contributor** on the Azure subscription where the connector will be created. <br> **Security Admin** in Defender for Cloud. <br> **Organization Administrator** in Azure DevOps. <br> **Basic or Basic + Test Plans Access Level** in Azure DevOps. Third-party applications gain access via OAuth, which must be set to `On`. [Learn more about OAuth](/azure/devops/organizations/accounts/change-application-access-policies).| | Regions: | Central US, West Europe, Australia East |-| Clouds: | :::image type="icon" source="media/quickstart-onboard-github/check-yes.png" border="false"::: Commercial clouds <br> :::image type="icon" source="media/quickstart-onboard-github/x-no.png" border="false"::: National (Azure Government, Azure China 21Vianet) | +| Clouds: | :::image type="icon" source="media/quickstart-onboard-github/check-yes.png" border="false"::: Commercial <br> :::image type="icon" source="media/quickstart-onboard-github/x-no.png" border="false"::: National (Azure Government, Azure China 21Vianet) | ## Connect your Azure DevOps organization -**To connect your Azure DevOps organization**: +To connect your Azure DevOps organization to Defender for Cloud by using a native connector: 1. Sign in to the [Azure portal](https://portal.azure.com/). -1. Navigate to **Microsoft Defender for Cloud** > **Environment Settings**. +1. Go to **Microsoft Defender for Cloud** > **Environment settings**. 1. Select **Add environment**. 1. Select **Azure DevOps**. 
- :::image type="content" source="media/quickstart-onboard-ado/devop-connector.png" alt-text="Screenshot that shows you where to navigate to select the DevOps connector." lightbox="media/quickstart-onboard-ado/devop-connector.png"::: + :::image type="content" source="media/quickstart-onboard-ado/devop-connector.png" alt-text="Screenshot that shows selections for adding Azure DevOps as a connector." lightbox="media/quickstart-onboard-ado/devop-connector.png"::: -1. Enter a name, select a subscription, resource group, and region. +1. Enter a name, subscription, resource group, and region. - > [!NOTE] - > The subscription will be the location where Defender for DevOps will create and store the Azure DevOps connection. + The subscription is the location where Microsoft Defender for DevOps creates and stores the Azure DevOps connection. 1. Select **Next: Select plans**. 1. Select **Next: Authorize connection**. 1. Select **Authorize**.- - > [!NOTE] - > The authorization will automatically login using the session from your browser's tab. After you select **Authorize**, if you don't see the Azure DevOps organizations you expect to see, check whether you are logged in to Microsoft Defender for Cloud in one browser tab and logged in to Azure DevOps in another browser tab. -1. In the popup screen, read the list of permission requests, and select **Accept**. + The authorization automatically signs in by using the session from your browser's tab. After you select **Authorize**, if you don't see the Azure DevOps organizations that you expect, check whether you're signed in to Microsoft Defender for Cloud on one browser tab and signed in to Azure DevOps on another browser tab. - :::image type="content" source="media/quickstart-onboard-ado/accept.png" alt-text="Screenshot that shows you the accept button, to accept the permissions."::: +1. In the popup dialog, read the list of permission requests, and then select **Accept**. -1. Select your relevant organization(s) from the drop-down menu. + :::image type="content" source="media/quickstart-onboard-ado/accept.png" alt-text="Screenshot that shows the button for accepting permissions."::: -1. For projects +1. Select your relevant organizations from the drop-down menu. - - Select **Auto discover projects** to discover all projects automatically and apply auto discover to all current and future projects. - - or +1. For projects, do one of the following: - - Select your relevant project(s) from the drop-down menu. - - > [!NOTE] - > If you select your relevant project(s) from the drop down menu, you will also need to select auto discover repositories or select individual repositories. + - Select **Auto discover projects** to discover all projects automatically and apply auto-discovery to all current and future projects. -1. Select **Next: Review and create**. --1. Review the information and select **Create**. --The Defender for DevOps service automatically discovers the organizations, projects, and repositories you select and analyzes them for any security issues. + - Select your relevant projects from the drop-down menu. Then, select **Auto-discover repositories** or select individual repositories. -When auto-discovery is selected during the onboarding process, it can take up to 4 hours for repositories to appear. +1. Select **Next: Review and create**. -The Inventory page populates with your selected repositories, and the Recommendations page shows any security issues related to a selected repository. +1. Review the information, and then select **Create**. 
-## Learn more +The Defender for DevOps service automatically discovers the organizations, projects, and repositories that you selected and analyzes them for any security problems. -- Learn more about [Azure DevOps](/azure/devops/).+When you select auto-discovery during the onboarding process, repositories can take up to 4 hours to appear. -- Learn how to [create your first pipeline](/azure/devops/pipelines/create-first-pipeline).+The **Inventory** page shows your selected repositories. The **Recommendations** page shows any security problems related to a selected repository. ## Next steps - Learn more about [Defender for DevOps](defender-for-devops-introduction.md).-+- Learn more about [Azure DevOps](/azure/devops/). +- Learn how to [create your first pipeline](/azure/devops/pipelines/create-first-pipeline). - Learn how to [configure pull request annotations](enable-pull-request-annotations.md) in Defender for Cloud.--- Check out [common questions](faq-defender-for-devops.yml) about Defender for DevOps |
defender-for-cloud | Quickstart Onboard Gcp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-gcp.md | Title: Connect your GCP project to Microsoft Defender for Cloud
-description: Defend your GCP resources with Microsoft Defender for Cloud.
+description: Defend your GCP resources by using Microsoft Defender for Cloud.

 Last updated 06/28/2023

-# Set up your GCP projects
+# Connect your GCP project to Microsoft Defender for Cloud

-With cloud workloads commonly spanning multiple cloud platforms, cloud security services must do the same. Microsoft Defender for Cloud protects workloads in Google Cloud Platform (GCP), but you need to set up the connection between them to your Azure subscription.
+Workloads commonly span multiple cloud platforms. Cloud security services must do the same. Microsoft Defender for Cloud helps protect workloads in Google Cloud Platform (GCP), but you need to set up the connection between them and your Azure subscription.

-> [!NOTE]
-> If you are connecting an GCP project that was previously connected with the classic connector, you must [remove them](how-to-use-the-classic-connector.md#remove-classic-gcp-connectors) first. Using a GCP project that is connected by both the classic and native connectors can produce duplicate recommendations.
+If you're connecting a GCP project that you previously connected by using the classic connector, you must [remove it](how-to-use-the-classic-connector.md#remove-classic-gcp-connectors) first. Using a GCP project that's connected by both the classic and native connectors can produce duplicate recommendations.

-This screenshot shows AWS accounts displayed in Defender for Cloud's [overview dashboard](overview-page.md).
+This screenshot shows GCP accounts displayed in the Defender for Cloud [overview dashboard](overview-page.md).

 ## Prerequisites

-- You need a Microsoft Azure subscription. If you don't have an Azure subscription, you can [sign up for a free subscription](https://azure.microsoft.com/pricing/free-trial/).+To complete the procedures in this article, you need:
++
++- A Microsoft Azure subscription. If you don't have an Azure subscription, you can [sign up for a free one](https://azure.microsoft.com/pricing/free-trial/).

-- You must [Set up Microsoft Defender for Cloud](get-started.md#enable-defender-for-cloud-on-your-azure-subscription) on your Azure subscription.+- [Microsoft Defender for Cloud](get-started.md#enable-defender-for-cloud-on-your-azure-subscription) set up on your Azure subscription.

 - Access to a GCP project.

-- Required roles and permissions: **Contributor** on the relevant Azure Subscription **Owner** on the GCP organization or project.+- **Contributor** permission on the relevant Azure subscription, and **Owner** permission on the GCP organization or project.

-You can learn more about Defender for Cloud's pricing on [the pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
+You can learn more about Defender for Cloud pricing on [the pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/). 
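Because the connector requires **Owner** on the GCP side, it can save a round trip to verify that role assignment up front. A minimal sketch with the gcloud CLI, where the project ID is a placeholder:

```bash
# Sketch: list the members that hold roles/owner on the target project.
# "my-gcp-project" is a placeholder project ID.
gcloud projects get-iam-policy my-gcp-project \
  --flatten="bindings[].members" \
  --filter="bindings.role:roles/owner" \
  --format="value(bindings.members)"
```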
-When connecting your GCP projects to specific Azure subscriptions, consider the [Google Cloud resource hierarchy](https://cloud.google.com/resource-manager/docs/cloud-platform-resource-hierarchy#resource-hierarchy-detail) and these guidelines: +When you're connecting GCP projects to specific Azure subscriptions, consider the [Google Cloud resource hierarchy](https://cloud.google.com/resource-manager/docs/cloud-platform-resource-hierarchy#resource-hierarchy-detail) and these guidelines: -- You can connect your GCP projects to Microsoft Defender for Cloud on the project level.+- You can connect your GCP projects to Microsoft Defender for Cloud at the *project* level. - You can connect multiple projects to one Azure subscription. - You can connect multiple projects to multiple Azure subscriptions. ## Connect your GCP project -**To connect your GCP project to Defender for Cloud with a native connector**: +To connect your GCP project to Defender for Cloud by using a native connector: -1. Sign in to the [Azure portal](https://portal.azure.com). +1. Sign in to the [Azure portal](https://portal.azure.com). -1. Navigate to **Defender for Cloud** > **Environment settings**. +1. Go to **Defender for Cloud** > **Environment settings**. -1. Select **+ Add environment** > **Google Cloud Platform**. +1. Select **Add environment** > **Google Cloud Platform**. - :::image type="content" source="media/quickstart-onboard-gcp/google-cloud.png" alt-text="Screenshot of the location of the Google cloud environment button." lightbox="media/quickstart-onboard-gcp/google-cloud.png"::: + :::image type="content" source="media/quickstart-onboard-gcp/google-cloud.png" alt-text="Screenshot that shows selections for adding Google Cloud Platform as a connector." lightbox="media/quickstart-onboard-gcp/google-cloud.png"::: 1. Enter all relevant information. - :::image type="content" source="media/quickstart-onboard-gcp/create-connector.png" alt-text="Screenshot of the Create GCP connector page where you need to enter all relevant information." lightbox="media/quickstart-onboard-gcp/create-connector.png"::: + :::image type="content" source="media/quickstart-onboard-gcp/create-connector.png" alt-text="Screenshot of the pane for creating a GCP connector." lightbox="media/quickstart-onboard-gcp/create-connector.png"::: - (Optional) If you select **Organization**, a management project and an organization custom role is created on your GCP project for the onboarding process. Autoprovisioning is enabled for the onboarding of new projects. + Optionally, if you select **Organization**, a management project and an organization custom role are created on your GCP project for the onboarding process. Auto-provisioning is enabled for the onboarding of new projects. -1. Select the **Next: Select Plans**. +1. Select **Next: Select plans**. -1. Toggle the plans you want to connect to **On**. By default all necessary prerequisites and components are provisioned. (Optional) Learn how to [configure each plan](#optional-configure-selected-plans). +1. For the plans that you want to connect, turn the toggle to **On**. By default, all necessary prerequisites and components are provisioned. [Learn how to configure each plan](#optional-configure-selected-plans). - - Optional (**Containers only**) Ensure you've fulfilled the [network requirements](defender-for-containers-enable.md?tabs=defender-for-container-gcp#network-requirements) for the Defender for Containers plan. 
+ If you choose to turn on the Microsoft Defender for Containers plan, ensure that you meet the [network requirements](defender-for-containers-enable.md?tabs=defender-for-container-gcp#network-requirements) for it. -1. Select the **Next: Configure access**. +1. Select **Next: Configure access**. - 1. Choose deployment type, **Default access** or **Least privilege access**. + 1. Choose the deployment type: - - Default access - Allows Defender for Cloud to scan your resources and automatically include future capabilities. - - Least privileged access - Grants Defender for Cloud access only to the current permissions needed for the selected plans. If you select the least privileged permissions, you receive notifications on any new roles and permissions that are required to get full functionality on the connector health section. + - **Default access**: Allows Defender for Cloud to scan your resources and automatically include future capabilities. + - **Least privilege access**: Grants Defender for Cloud access to only the current permissions needed for the selected plans. If you select the least privileged permissions, you'll receive notifications on any new roles and permissions that are required to get full functionality for connector health. - 1. Choose deployment method: **GCP Cloud Shell** or **Terraform**. + 1. Choose the deployment method: **GCP Cloud Shell** or **Terraform**. 1. Select **Copy**. - :::image type="content" source="media/quickstart-onboard-gcp/copy-button.png" alt-text="Screenshot showing the location of the copy button."::: + :::image type="content" source="media/quickstart-onboard-gcp/copy-button.png" alt-text="Screenshot that shows the location of the copy button."::: - > [!NOTE] - > To discover GCP resources and for the authentication process, the following APIs must be enabled: `iam.googleapis.com`, `sts.googleapis.com`, `cloudresourcemanager.googleapis.com`, `iamcredentials.googleapis.com`, `compute.googleapis.com`. If these APIs are not enabled, we'll enable them during the onboarding process by running the GCloud script. + > [!NOTE] + > For the discovery of GCP resources and for the authentication process, you must enable the following APIs: `iam.googleapis.com`, `sts.googleapis.com`, `cloudresourcemanager.googleapis.com`, `iamcredentials.googleapis.com`, and `compute.googleapis.com`. If you don't enable these APIs, we'll enable them during the onboarding process by running the GCloud script. -1. Select **GCP Cloud Shell >**, the GCP Cloud Shell opens. +1. Select **GCP Cloud Shell >**. The GCP Cloud Shell opens. -1. Paste the script into the Cloud Shell terminal and run it. +1. Paste the script into the GCP Cloud Shell terminal and run it. -1. Ensure that the following resources were created: +1. 
Ensure that you created the following resources for Microsoft Defender Cloud Security Posture Management (CSPM) and Defender for Containers:

 | CSPM | Defender for Containers|
 |--|--|
- | CSPM service account reader role <br><br> Microsoft Defender for Cloud identity federation <br><br> CSPM identity pool <br><br>*Microsoft Defender for Servers* service account (when the servers plan is enabled) <br><br>*Azure-Arc for servers onboarding* service account (when the Arc for servers autoprovisioning is enabled) | Microsoft Defender Containers' service account role <br><br> Microsoft Defender Data Collector service account role <br><br> Microsoft Defender for Cloud identity pool |
 + | CSPM service account reader role <br><br> Microsoft Defender for Cloud identity federation <br><br> CSPM identity pool <br><br>Microsoft Defender for Servers service account (when the servers plan is enabled) <br><br>*Azure Arc for servers onboarding* service account (when Azure Arc for servers auto-provisioning is enabled) | Microsoft Defender for Containers service account role <br><br> Microsoft Defender Data Collector service account role <br><br> Microsoft Defender for Cloud identity pool |

-Once the connector is created, a scan starts on your GCP environment. New recommendations will appear in Defender for Cloud after up to 6 hours. If you enabled autoprovisioning, Azure Arc and any enabled extensions install automatically for each new resource detected.
+After you create the connector, a scan starts on your GCP environment. New recommendations appear in Defender for Cloud after up to 6 hours. If you enabled auto-provisioning, Azure Arc and any enabled extensions are installed automatically for each newly detected resource.

-## (Optional) Configure selected plans
+## Optional: Configure selected plans

-By default, all plans are `On`. You can disable plans that you don't need.
+By default, all plans are **On**. You can turn off plans that you don't need.

-Connect your GCP VM instances to Azure Arc in order to have full visibility to Microsoft Defender for Servers security content.
+### Configure the Defender for Servers plan

-Microsoft Defender for Servers brings threat detection and advanced defenses to your GCP VMs instances.
-To have full visibility to Microsoft Defender for Servers security content, ensure you have the following requirements configured:
+Microsoft Defender for Servers brings threat detection and advanced defenses to your GCP virtual machine (VM) instances. To have full visibility into Microsoft Defender for Servers security content, connect your GCP VM instances to Azure Arc. If you choose the Microsoft Defender for Servers plan, you need:

-- Microsoft Defender for Servers enabled on your subscription. Learn how to enable plans in the [Enable enhanced security features](enable-enhanced-security.md) article.+- Microsoft Defender for Servers enabled on your subscription. Learn how to enable plans in [Enable enhanced security features](enable-enhanced-security.md).

- Azure Arc for servers installed on your VM instances.- - **(Recommended) Auto-provisioning** - Autoprovisioning is enabled by default in the onboarding process and requires owner permissions on the subscription. Arc autoprovisioning process is using the OS config agent on GCP end. Learn more about the [OS config agent availability on GCP machines](https://cloud.google.com/compute/docs/images/os-details#vm-manager). 
- > [!NOTE] - > The Arc auto-provisioning process leverages the VM manager on your Google Cloud Platform to enforce policies on the your VMs through the OS config agent. A VM with an [Active OS agent](https://cloud.google.com/compute/docs/manage-os#agent-state) will incur a cost according to GCP. Refer to [GCP's technical documentation](https://cloud.google.com/compute/docs/vm-manager#pricing) to see how this may affect your account. - > <br><br> Microsoft Defender for Servers does not install the OS config agent to a VM that does not have it installed. However, Microsoft Defender for Servers will enable communication between the OS config agent and the OS config service if the agent is already installed but not communicating with the service. - > <br><br> This can change the OS config agent from `inactive` to `active` and will lead to additional costs. +We recommend that you use the auto-provisioning process to install Azure Arc on your VM instances. Auto-provisioning is enabled by default in the onboarding process and requires **Owner** permissions on the subscription. The Azure Arc auto-provisioning process uses the OS Config agent on the GCP end. [Learn more about the availability of the OS Config agent on GCP machines](https://cloud.google.com/compute/docs/images/os-details#vm-manager). - - **Manual installation** - You can manually connect your VM instances to Azure Arc for servers. Instances in projects with Defender for Servers plan enabled that aren't connected to Arc are surfaced by the recommendation `GCP VM instances should be connected to Azure Arc`. Select the **Fix** option in the recommendation to install Azure Arc on the selected machines. +The Azure Arc auto-provisioning process uses the VM manager on GCP to enforce policies on your VMs through the OS Config agent. A VM that has an [active OS Config agent](https://cloud.google.com/compute/docs/manage-os#agent-state) incurs a cost according to GCP. To see how this cost might affect your account, refer to the [GCP technical documentation](https://cloud.google.com/compute/docs/vm-manager#pricing). - > [!NOTE] - > The respective Azure Arc servers for EC2 instances or GCP virtual machines that no longer exist (and the respective Azure Arc servers with a status of ["Disconnected" or "Expired"](/azure/azure-arc/servers/overview)) will be removed after 7 days. This process removes irrelevant Azure Arc entities, ensuring only Azure Arc servers related to existing instances are displayed. +Microsoft Defender for Servers does not install the OS Config agent to a VM that doesn't have it installed. However, Microsoft Defender for Servers enables communication between the OS Config agent and the OS Config service if the agent is already installed but not communicating with the service. This communication can change the OS Config agent from `inactive` to `active` and lead to more costs. -- Ensure you've fulfilled the [network requirements for Azure Arc](../azure-arc/servers/network-requirements.md?tabs=azure-cloud).+Alternatively, you can manually connect your VM instances to Azure Arc for servers. Instances in projects with the Defender for Servers plan enabled that aren't connected to Azure Arc are surfaced by the recommendation **GCP VM instances should be connected to Azure Arc**. Select the **Fix** option in the recommendation to install Azure Arc on the selected machines. 
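For the manual path just described, connecting a machine to Azure Arc comes down to running the Connected Machine agent's `azcmagent connect` command on the VM. A minimal sketch, assuming the agent is already installed and a service principal has been granted Azure Arc onboarding permissions (all values are placeholders):

```bash
# Sketch: manually connect a GCP VM to Azure Arc with the Connected Machine agent.
# All values are placeholders; the service principal needs Azure Arc onboarding rights.
azcmagent connect \
  --service-principal-id "<appId>" \
  --service-principal-secret "<secret>" \
  --tenant-id "<tenantId>" \
  --subscription-id "<subscriptionId>" \
  --resource-group "arc-servers-rg" \
  --location "westeurope"
```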
-- Other extensions should be enabled on the Arc-connected machines.- - Microsoft Defender for Endpoint - - VA solution (Microsoft Defender Vulnerability Management/ Qualys) - - Log Analytics (LA) agent on Arc machines or Azure Monitor agent (AMA). Ensure the selected workspace has security solution installed. +The respective Azure Arc servers for EC2 instances or GCP virtual machines that no longer exist (and the respective Azure Arc servers with a status of [Disconnected or Expired](/azure/azure-arc/servers/overview)) are removed after 7 days. This process removes irrelevant Azure Arc entities to ensure that only Azure Arc servers related to existing instances are displayed. - The LA agent and AMA are currently configured in the subscription level, such that all the multicloud accounts and projects (from both AWS and GCP) under the same subscription inherits the subscription settings regarding the LA agent and AMA. +Ensure that you fulfill the [network requirements for Azure Arc](../azure-arc/servers/network-requirements.md?tabs=azure-cloud). - Learn more about [monitoring components](monitoring-components.md) for Defender for Cloud. +Enable these other extensions on the Azure Arc-connected machines: + +- Microsoft Defender for Endpoint +- A vulnerability assessment solution (Microsoft Defender Vulnerability Management or Qualys) +- The Log Analytics agent on Azure Arc-connected machines or the Azure Monitor agent - > [!NOTE] - > Defender for Servers assigns tags to your GCP resources to manage the auto-provisioning process. You must have these tags properly assigned to your resources so that Defender for Cloud can manage your resources: - **Cloud**, **InstanceName**, **MDFCSecurityConnector**, **MachineId**, **ProjectId**, **ProjectNumber** +Make sure the selected Log Analytics workspace has a security solution installed. The Log Analytics agent and the Azure Monitor agent are currently configured at the *subscription* level. All the multicloud accounts and projects (from both AWS and GCP) under the same subscription inherit the subscription settings for the Log Analytics agent and the Azure Monitor agent. [Learn more about monitoring components for Defender for Servers](monitoring-components.md). -### Configure the servers plan +Defender for Servers assigns tags to your GCP resources to manage the auto-provisioning process. You must have these tags properly assigned to your resources so that Defender for Servers can manage your resources: `Cloud`, `InstanceName`, `MDFCSecurityConnector`, `MachineId`, `ProjectId`, and `ProjectNumber`. -Connect your GCP VM instances to Azure Arc in order to have full visibility to Microsoft Defender for Servers security content. +To configure the Defender for Servers plan: -**To configure the Servers plan**: +1. Follow the [steps to connect your GCP project](#connect-your-gcp-project). -1. Follow the steps to [Connect your GCP project](#connect-your-gcp-project). +1. On the **Select plans** tab, select **Configure**. -1. On the Select plans screen select **View configuration**. + :::image type="content" source="media/quickstart-onboard-gcp/view-configuration.png" alt-text="Screenshot that shows the link for configuring the Defender for Servers plan."::: - :::image type="content" source="media/quickstart-onboard-gcp/view-configuration.png" alt-text="Screenshot showing where to select to configure the Servers plan."::: +1. On the **Auto-provisioning configuration** pane, turn the toggles to **On** or **Off**, depending on your need. -1. 
On the Auto provisioning screen, toggle the switches on or off depending on your need. + :::image type="content" source="media/quickstart-onboard-gcp/auto-provision-screen.png" alt-text="Screenshot that shows the toggles for the Defender for Servers plan."::: - :::image type="content" source="media/quickstart-onboard-gcp/auto-provision-screen.png" alt-text="Screenshot showing the toggle switches for the Servers plan."::: -- > [!Note] - > If Azure Arc is toggled **Off**, you will need to follow the manual installation process mentioned above. + If **Azure Arc agent** is **Off**, you need to follow the manual installation process mentioned earlier. 1. Select **Save**. -1. Continue from step number 8 of the [Connect your GCP project](#connect-your-gcp-project) instructions. --### Configure the Databases plan +1. Continue from step 8 of the [Connect your GCP project](#connect-your-gcp-project) instructions. -**To configure the Databases plan**: +### Configure the Defender for Databases plan -Connect your GCP VM instances to Azure Arc in order to have full visibility to Microsoft Defender for SQL security content. +To have full visibility into Microsoft Defender for Databases security content, connect your GCP VM instances to Azure Arc. -**To configure the Databases plan**: +To configure the Defender for Databases plan: -1. Follow the steps to [Connect your GCP project](#connect-your-gcp-project). +1. Follow the [steps to connect your GCP project](#connect-your-gcp-project). -1. On the Select plans screen select **Configure**. +1. On the **Select plans** tab, select **Configure**. - :::image type="content" source="media/quickstart-onboard-gcp/view-configuration.png" alt-text="Screenshot showing where to select to configure the Databases plan."::: + :::image type="content" source="media/quickstart-onboard-gcp/view-configuration.png" alt-text="Screenshot that shows the link for configuring the Defender for Databases plan."::: -1. On the Auto provisioning screen, toggle the switches on or off depending on your need. +1. On the **Auto-provisioning configuration** pane, turn the toggles to **On** or **Off**, depending on your need. - :::image type="content" source="media/quickstart-onboard-gcp/auto-provision-databases-screen.png" alt-text="Screenshot showing the toggle switches for the Databases plan."::: + :::image type="content" source="media/quickstart-onboard-gcp/auto-provision-databases-screen.png" alt-text="Screenshot that shows the toggles for the Defender for Databases plan."::: - > [!Note] - > If Azure Arc is toggled **Off**, you will need to follow the manual installation process mentioned above. + If the toggle for Azure Arc is **Off**, you need to follow the manual installation process mentioned earlier. -1. Select **Save**. +1. Select **Save**. -1. Continue from step number 8 of the [Connect your GCP project](#connect-your-gcp-project) instructions. +1. Continue from step 8 of the [Connect your GCP project](#connect-your-gcp-project) instructions. -### Configure the Containers plan +### Configure the Defender for Containers plan -Microsoft Defender for Containers brings threat detection and advanced defenses to your GCP GKE Standard clusters. To get the full security value out of Defender for Containers and to fully protect GCP clusters, ensure you have the following requirements configured: +Microsoft Defender for Containers brings threat detection and advanced defenses to your GCP Google Kubernetes Engine (GKE) Standard clusters. 
To get the full security value out of Defender for Containers and to fully protect GCP clusters, ensure that you meet the following requirements. > [!NOTE]-> If you choose to disable the available configuration options, no agents or components will be deployed to your clusters. Learn more about [feature availability](supported-machines-endpoint-solutions-clouds-containers.md). +> If you choose to disable the available configuration options, no agents or components will be deployed to your clusters. [Learn more about feature availability](supported-machines-endpoint-solutions-clouds-containers.md). -- **Kubernetes audit logs to Defender for Cloud** - Enabled by default. This configuration is available at a GCP project level only. This provides agentless collection of the audit log data through [GCP Cloud Logging](https://cloud.google.com/logging/) to the Microsoft Defender for Cloud backend for further analysis.-- **Azure Arc-enabled Kubernetes, the Defender extension, and the Azure Policy extension** - Enabled by default. You can install Azure Arc-enabled Kubernetes and its extensions on your GKE clusters in three different ways:- - **(Recommended)** Enable the Defender for Container autoprovisioning at the project level as explained in the instructions on this page. - - Defender for Cloud recommendations, for per cluster installation, which appears on the Microsoft Defender for Cloud's Recommendations page. Learn how to [deploy the solution to specific clusters](defender-for-containers-enable.md?tabs=defender-for-container-gke#deploy-the-solution-to-specific-clusters). - - Manual installation for [Arc-enabled Kubernetes](../azure-arc/kubernetes/quickstart-connect-cluster.md) and [extensions](../azure-arc/kubernetes/extensions.md). +- **Kubernetes audit logs to Defender for Cloud**: Enabled by default. This configuration is available at the GCP project level only. It provides agentless collection of the audit log data through [GCP Cloud Logging](https://cloud.google.com/logging/) to the Microsoft Defender for Cloud back end for further analysis. +- **Azure Arc-enabled Kubernetes, the Defender extension, and the Azure Policy extension**: Enabled by default. You can install Azure Arc-enabled Kubernetes and its extensions on your GKE clusters in three ways: + - Enable Defender for Containers auto-provisioning at the project level, as explained in the instructions in this section. We recommend this method. + - Use Defender for Cloud recommendations for per-cluster installation. They appear on the Microsoft Defender for Cloud recommendations page. [Learn how to deploy the solution to specific clusters](defender-for-containers-enable.md?tabs=defender-for-container-gke#deploy-the-solution-to-specific-clusters). + - Manually install [Arc-enabled Kubernetes](../azure-arc/kubernetes/quickstart-connect-cluster.md) and [extensions](../azure-arc/kubernetes/extensions.md). -**To configure the Containers plan**: +To configure the Defender for Containers plan: -1. Follow the steps to [Connect your GCP project](#connect-your-gcp-project). +1. Follow the steps to [connect your GCP project](#connect-your-gcp-project). -1. On the Select plans screen select **Configure**. +1. On the **Select plans** tab, select **Configure**. 
- :::image type="content" source="media/quickstart-onboard-gcp/containers-configure.png" alt-text="Screenshot showing where to select to configure the Containers plan."::: + :::image type="content" source="media/quickstart-onboard-gcp/containers-configure.png" alt-text="Screenshot that shows the link for configuring the Defender for Containers plan."::: -1. On the Auto provisioning screen, toggle the switches **On**. +1. On the **Defender for Containers configuration** pane, turn the toggles to **On**. - :::image type="content" source="media/quickstart-onboard-gcp/containers-configuration.png" alt-text="Screenshot showing the toggle switches for the Containers plan."::: + :::image type="content" source="media/quickstart-onboard-gcp/containers-configuration.png" alt-text="Screenshot that shows toggles for the Defender for Containers plan."::: 1. Select **Save**. -1. Continue from step number 8 of the [Connect your GCP project](#connect-your-gcp-project) instructions. +1. Continue from step 8 of the [Connect your GCP project](#connect-your-gcp-project) instructions. ## Monitor your GCP resources -Microsoft Defender for Cloud's security recommendations page displays your GCP resources together with your Azure and AWS resources for a true multicloud view. +The security recommendations page in Defender for Cloud displays your GCP resources together with your Azure and AWS resources for a true multicloud view. -To view all the active recommendations for your resources by resource type, use Defender for Cloud's asset inventory page and filter to the GCP resource type that you're interested in: +To view all the active recommendations for your resources by resource type, use the asset inventory page in Defender for Cloud and filter to the GCP resource type that you're interested in. ## Next steps -Connecting your GCP project is part of the multicloud experience available in Microsoft Defender for Cloud. For related information, see the following pages: +Connecting your GCP project is part of the multicloud experience available in Microsoft Defender for Cloud: - [Protect all of your resources with Defender for Cloud](enable-all-plans.md).--- Set up your [on-premises machines](quickstart-onboard-machines.md), [AWS account](quickstart-onboard-aws.md).-+- Set up your [on-premises machines](quickstart-onboard-machines.md) and [AWS account](quickstart-onboard-aws.md). - [Troubleshoot your multicloud connectors](troubleshooting-guide.md#troubleshooting-the-native-multicloud-connector).--- Check out [common questions](faq-general.yml) about connecting your GCP project.+- Get answers to [common questions](faq-general.yml) about connecting your GCP project. |
defender-for-cloud | Quickstart Onboard Github | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-github.md | -# Quickstart: Connect your GitHub repositories to Microsoft Defender for Cloud +# Quickstart: Connect your GitHub repositories to Microsoft Defender for Cloud -With cloud workloads commonly spanning multiple cloud platforms, cloud security services must do the same. Microsoft Defender for Cloud protects workloads in Azure, Amazon Web Services (AWS), Google Cloud Platform (GCP), GitHub, and Azure DevOps (ADO). +Cloud workloads commonly span multiple cloud platforms. Cloud security services must do the same. Microsoft Defender for Cloud helps protect workloads in Azure, Amazon Web Services, Google Cloud Platform, GitHub, and Azure DevOps. -To protect your GitHub-based resources, you can connect your GitHub organizations on the environment settings page in Microsoft Defender for Cloud. This page provides a simple onboarding experience (including auto discovery). +In this quickstart, you connect your GitHub organizations on the **Environment settings** page in Microsoft Defender for Cloud. This page provides a simple onboarding experience (including auto-discovery). -By connecting your GitHub repositories to Defender for Cloud, you'll extend Defender for Cloud's enhanced security features to your GitHub resources. These features include: +By connecting your GitHub repositories to Defender for Cloud, you extend the enhanced security features of Defender for Cloud to your GitHub resources. These features include: -- **Defender for Cloud's Cloud Security Posture Management (CSPM) features** - Assesses your GitHub resources according to GitHub-specific security recommendations. You can also learn about all of the [recommendations for DevOps](recommendations-reference.md) resources. Resources are assessed for compliance with built-in standards that are specific to DevOps. Defender for Cloud's [asset inventory page](asset-inventory.md) is a multicloud enabled feature that helps you manage your GitHub resources alongside your Azure resources.+- **Cloud Security Posture Management features**: You can assess your GitHub resources according to GitHub-specific security recommendations. You can also learn about all of the [recommendations for DevOps](recommendations-reference.md) resources. Resources are assessed for compliance with built-in standards that are specific to DevOps. The Defender for Cloud [asset inventory page](asset-inventory.md) is a multicloud-enabled feature that helps you manage your GitHub resources alongside your Azure resources. -- **Defender for Cloud's Cloud Workload Protection features** - Extends Defender for Cloud's threat detection capabilities and advanced defenses to your GitHub resources.+- **Workload protection features**: You can extend Defender for Cloud threat detection capabilities and advanced defenses to your GitHub resources. ## Prerequisites -- An Azure account with Defender for Cloud onboarded. If you don't already have an Azure account [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).+To complete this quickstart, you need: -- To use all advanced security capabilities provided by GitHub Connector in Defender for DevOps, you need to have GitHub Enterprise with GitHub Advanced Security (GHAS) enabled.+- An Azure account with Defender for Cloud onboarded. 
If you don't already have an Azure account, [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). ++- GitHub Enterprise with GitHub Advanced Security enabled, so you can use all advanced security capabilities that the GitHub connector provides in Defender for Cloud. ## Availability- > [!Note] - > During the preview, the maximum number of GitHub repositories that can be onboarded to Microsoft Defender for Cloud is 2,000. If you try to connect more than 2,000 GitHub repositories, only the first 2,000 repositories, sorted alphabetically, will be onboarded. - > - > If your organization is interested in onboarding more than 2,000 GitHub repositories, please complete [this survey](https://aka.ms/dfd-forms/onboarding). | Aspect | Details | |--|--|-| Release state: | Preview <br> The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. | +| Release state: | Preview. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability. | | Pricing: | For pricing, see the Defender for Cloud [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/?v=17.23h#pricing).-| Required permissions: | **- Azure account:** with permissions to sign into Azure portal <br> **- Contributor:** on the Azure subscription where the connector will be created <br> **- Security Admin Role:** in Defender for Cloud <br> **- Organization Administrator:** in GitHub | -| GitHub supported versions: | GitHub Free, Pro, Team, and GitHub Enterprise Cloud | +| Required permissions: | **Account Administrator** with permissions to sign in to the Azure portal. <br> **Contributor** on the Azure subscription where the connector will be created. <br> **Security Admin** in Defender for Cloud. <br> **Organization Administrator** in GitHub. | +| GitHub supported versions: | GitHub Free, Pro, Team, and Enterprise Cloud | | Regions: | Australia East, Central US, West Europe |-| Clouds: | :::image type="icon" source="media/quickstart-onboard-github/check-yes.png" border="false"::: Commercial clouds <br> :::image type="icon" source="media/quickstart-onboard-github/x-no.png" border="false"::: National (Azure Government, Azure China 21Vianet) | +| Clouds: | :::image type="icon" source="media/quickstart-onboard-github/check-yes.png" border="false"::: Commercial <br> :::image type="icon" source="media/quickstart-onboard-github/x-no.png" border="false"::: National (Azure Government, Azure China 21Vianet) | ++During the preview, the maximum number of GitHub repositories that you can onboard to Microsoft Defender for Cloud is 2,000. If you try to connect more than 2,000 GitHub repositories, only the first 2,000 repositories, sorted alphabetically, will be onboarded. ++If your organization is interested in onboarding more than 2,000 GitHub repositories, please complete [this survey](https://aka.ms/dfd-forms/onboarding). ## Connect your GitHub account -**To connect your GitHub account to Microsoft Defender for Cloud**: +To connect your GitHub account to Microsoft Defender for Cloud: 1. Sign in to the [Azure portal](https://portal.azure.com/). -1. Navigate to **Microsoft Defender for Cloud** > **Environment Settings**. +1. 
Go to **Microsoft Defender for Cloud** > **Environment settings**. 1. Select **Add environment**. 1. Select **GitHub**. - :::image type="content" source="media/quickstart-onboard-github/select-github.png" alt-text="Screenshot that shows you where to select, to select GitHub." lightbox="media/quickstart-onboard-github/select-github.png"::: + :::image type="content" source="media/quickstart-onboard-github/select-github.png" alt-text="Screenshot that shows selections for adding GitHub as a connector." lightbox="media/quickstart-onboard-github/select-github.png"::: -1. Enter a name (limit of 20 characters), select your subscription, resource group, and region. +1. Enter a name (limit of 20 characters), and then select your subscription, resource group, and region. - > [!NOTE] - > The subscription will be the location where Defender for DevOps will create and store the GitHub connection. + The subscription is the location where Defender for Cloud creates and stores the GitHub connection. 1. Select **Next: Select plans**. 1. Select **Next: Authorize connection**. -1. Select **Authorize** to grant your Azure subscription access to your GitHub repositories. Sign in, if necessary, with an account that has permissions to the repositories you want to protect. +1. Select **Authorize** to grant your Azure subscription access to your GitHub repositories. Sign in, if necessary, with an account that has permissions to the repositories that you want to protect. - > [!NOTE] - > The authorization will auto-login using the session from your browser tab. After you select Authorize, if you do not see the GitHub organizations you expect to see, check whether you are logged in to MDC in one browser tab and logged in to GitHub in another browser tab. - > After authorization, if you wait too long to install the DevOps application, the session will time out and you will receive an error message. + The authorization automatically signs in by using the session from your browser's tab. After you select **Authorize**, if you don't see the GitHub organizations that you expect, check whether you're signed in to Microsoft Defender for Cloud on one browser tab and signed in to GitHub on another browser tab. ++ After authorization, if you wait too long to install the DevOps application, the session will time out and you'll get an error message. 1. Select **Install**. 1. Select the repositories to install the GitHub application. - > [!Note] - > This will grant Defender for DevOps access to the selected repositories. + This step grants Defender for Cloud access to the selected repositories. -9. Select **Next: Review and create**. +1. Select **Next: Review and create**. -10. Select **Create**. +1. Select **Create**. -When the process completes, the GitHub connector appears on your Environment settings page. +When the process finishes, the GitHub connector appears on your **Environment settings** page. -The Defender for DevOps service automatically discovers the repositories you selected and analyzes them for any security issues. Initial repository discovery can take up to 10 minutes during the onboarding process. +The Defender for Cloud service automatically discovers the repositories that you selected and analyzes them for any security problems. Initial repository discovery can take up to 10 minutes during the onboarding process. -When auto-discovery is selected during the onboarding process, it can take up to 4 hours for repositories to appear after onboarding is completed. 
The auto-discovery process detects any new repositories and connects them to Defender for Cloud. +When you select auto-discovery during the onboarding process, repositories can take up to 4 hours to appear after onboarding is completed. The auto-discovery process detects any new repositories and connects them to Defender for Cloud. -The Inventory page populates with your selected repositories, and the Recommendations page shows any security issues related to a selected repository. This can take up to 3 hours or more. +The **Inventory** page shows your selected repositories. The **Recommendations** page shows any security problems related to a selected repository. This information can take 3 hours or more to appear. ## Learn more -- You can learn more about [how Azure and GitHub integrate](/azure/developer/github/).--- Learn about [security hardening practices for GitHub Actions](https://docs.github.com/actions/security-guides/security-hardening-for-github-actions).+- [Azure and GitHub integration](/azure/developer/github/) +- [Security hardening for GitHub Actions](https://docs.github.com/actions/security-guides/security-hardening-for-github-actions) ## Next steps-Learn more about [Defender for DevOps](defender-for-devops-introduction.md). --Learn how to [configure the MSDO GitHub action](github-action.md). -Learn how to [configure pull request annotations](enable-pull-request-annotations.md) in Defender for Cloud. +- Learn about [Defender for DevOps](defender-for-devops-introduction.md). +- Learn how to [configure the Microsoft Security DevOps GitHub action](github-action.md). +- Learn how to [configure pull request annotations](enable-pull-request-annotations.md) in Defender for Cloud. |
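Because initial repository discovery can take minutes to hours, it can help to have an independent list of what the connector should eventually surface. The sketch below is not part of the quickstart: it lists an organization's repositories through the public GitHub REST API so that you can cross-check them against the **Inventory** page. The organization name and personal access token are placeholders.

```python
# Hypothetical helper: list repositories in a GitHub organization so you can
# compare them against the Defender for Cloud Inventory page after onboarding.
# ORG and TOKEN are placeholders; the token needs read access to the org.
import requests

ORG = "<your-org>"
TOKEN = "<github-personal-access-token>"

repos, page = [], 1
while True:
    resp = requests.get(
        f"https://api.github.com/orgs/{ORG}/repos",
        headers={"Authorization": f"Bearer {TOKEN}",
                 "Accept": "application/vnd.github+json"},
        params={"per_page": 100, "page": page},
        timeout=30,
    )
    resp.raise_for_status()
    batch = resp.json()
    if not batch:
        break
    repos.extend(r["full_name"] for r in batch)
    page += 1

print(f"{len(repos)} repositories found in {ORG}:")
for name in sorted(repos):
    print(" -", name)
```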
defender-for-cloud | Quickstart Onboard Machines | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-machines.md | Title: Connect your on-premises machines to Defender for Cloud -description: Learn how to connect your on-premises machines to Microsoft Defender for Cloud + Title: Connect on-premises machines to Defender for Cloud +description: Learn how to connect your non-Azure machines to Microsoft Defender for Cloud. Last updated 06/29/2023 -Defender for Cloud can monitor the security posture of your non-Azure computers, but first you need to connect them to Azure. +Microsoft Defender for Cloud can monitor the security posture of your non-Azure machines, but first you need to connect them to Azure. You can connect your non-Azure computers in any of the following ways: - Onboarding with Azure Arc:- - [Using Azure Arc-enabled servers](#connect-on-premises-machines-using-azure-arc) (**recommended**) - - [From Defender for Cloud's pages in the Azure portal](#connect-on-premises-machines-using-the-azure-portal) -- [Onboarding directly with Defender for Endpoint](onboard-machines-with-defender-for-endpoint.md)+ - By using Azure Arc-enabled servers (recommended) + - By using the Azure portal +- [Onboarding directly with Microsoft Defender for Endpoint](onboard-machines-with-defender-for-endpoint.md) -> [!TIP] -> If you're connecting machines from other cloud providers, see [Connect your AWS accounts](quickstart-onboard-aws.md) or [Connect your GCP projects](quickstart-onboard-gcp.md). Defender for Cloud's multicloud connectors for AWS and GCP transparently handles the Azure Arc deployment for you. +This article describes the methods for onboarding with Azure Arc. ++If you're connecting machines from other cloud providers, see [Connect your AWS account](quickstart-onboard-aws.md) or [Connect your GCP project](quickstart-onboard-gcp.md). The multicloud connectors for Amazon Web Services (AWS) and Google Cloud Platform (GCP) in Defender for Cloud transparently handle the Azure Arc deployment for you. ## Prerequisites -- You need a Microsoft Azure subscription. If you don't have an Azure subscription, you can [sign up for a free subscription](https://azure.microsoft.com/pricing/free-trial/).+To complete the procedures in this article, you need: -- You must [Set up Microsoft Defender for Cloud](get-started.md#enable-defender-for-cloud-on-your-azure-subscription) on your Azure subscription.+- A Microsoft Azure subscription. If you don't have an Azure subscription, you can [sign up for a free one](https://azure.microsoft.com/pricing/free-trial/). -- Access to an on-premises machine.+- [Microsoft Defender for Cloud](get-started.md#enable-defender-for-cloud-on-your-azure-subscription) set up on your Azure subscription. -## Connect on-premises machines using Azure Arc +- Access to an on-premises machine. -A machine that has [Azure Arc-enabled servers](../azure-arc/servers/overview.md) becomes an Azure resource. When you've installed the Log Analytics agent on it, it appears in Defender for Cloud with recommendations similar to your other Azure resources. +## Connect on-premises machines by using Azure Arc -In addition, Azure Arc-enabled servers provide enhanced capabilities such as the ability to enable guest configuration policies on the machine, simplify deployment with other Azure services and more. 
For an overview of the benefits of Azure Arc-enabled servers, see [Supported cloud operations](../azure-arc/servers/overview.md#supported-cloud-operations). +A machine that has [Azure Arc-enabled servers](../azure-arc/servers/overview.md) becomes an Azure resource. When you install the Log Analytics agent on it, it appears in Defender for Cloud with recommendations, like your other Azure resources. -> [!NOTE] -> Defender for Cloud's auto-deploy tools for deploying the Log Analytics agent works with machines running Azure Arc however this capability is currently in preview . When you've connected your machines using Azure Arc, use the relevant Defender for Cloud recommendation to deploy the agent and benefit from the full range of protections offered by Defender for Cloud: -> -> - [Log Analytics agent should be installed on your Linux-based Azure Arc machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/720a3e77-0b9a-4fa9-98b6-ddf0fd7e32c1) -> - [Log Analytics agent should be installed on your Windows-based Azure Arc machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/27ac71b1-75c5-41c2-adc2-858f5db45b08) +Azure Arc-enabled servers provide enhanced capabilities, such as enabling guest configuration policies on the machine and simplifying deployment with other Azure services. For an overview of the benefits of Azure Arc-enabled servers, see [Supported cloud operations](../azure-arc/servers/overview.md#supported-cloud-operations). -**To deploy Azure Arc on one machine:** +To deploy Azure Arc on one machine, follow the instructions in [Quickstart: Connect hybrid machines with Azure Arc-enabled servers](../azure-arc/servers/learn/quick-enable-hybrid-vm.md). -Follow the instructions in [Quickstart: Connect hybrid machines with Azure Arc-enabled servers](../azure-arc/servers/learn/quick-enable-hybrid-vm.md). +To deploy Azure Arc on multiple machines at scale, follow the instructions in [Connect hybrid machines to Azure at scale](../azure-arc/servers/onboard-service-principal.md). -**To deploy Azure Arc for multiple machines at scale:** +Defender for Cloud tools for automatically deploying the Log Analytics agent work with machines running Azure Arc. However, this capability is currently in preview. When you connect your machines by using Azure Arc, use the relevant Defender for Cloud recommendation to deploy the agent and benefit from the full range of protections that Defender for Cloud offers: -Follow the instructions in [Connect hybrid machines to Azure at scale](../azure-arc/servers/onboard-service-principal.md). +- [Log Analytics agent should be installed on your Linux-based Azure Arc machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/720a3e77-0b9a-4fa9-98b6-ddf0fd7e32c1) +- [Log Analytics agent should be installed on your Windows-based Azure Arc machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/27ac71b1-75c5-41c2-adc2-858f5db45b08) -## Connect on-premises machines using the Azure portal +## Connect on-premises machines by using the Azure portal -Once Defender for Cloud has been connected to your Azure subscription, you can start connecting your on-premises machines from the Getting started page within Defender for Cloud. +After you connect Defender for Cloud to your Azure subscription, you can start connecting your on-premises machines from the **Getting started** page in Defender for Cloud. 
1. Sign in to the [Azure portal](https://portal.azure.com). 1. Search for and select **Microsoft Defender for Cloud**. -1. In the Defender for Cloud menu, select **Getting started**. +1. On the Defender for Cloud menu, select **Getting started**. 1. Select the **Get started** tab. -1. Locate the Add on-premises servers and select **Configure** . +1. Find **Add non-Azure servers** and select **Configure**. - :::image type="content" source="./media/quickstart-onboard-machines/onboarding-get-started-tab.png" alt-text="Screenshot of the Get Started tab in the Getting started page." lightbox="./media/quickstart-onboard-machines/onboarding-get-started-tab.png"::: + :::image type="content" source="./media/quickstart-onboard-machines/onboarding-get-started-tab.png" alt-text="Screenshot of the tab for getting started with Defender for Cloud and adding an on-premises server." lightbox="./media/quickstart-onboard-machines/onboarding-get-started-tab.png"::: - A list of your Log Analytics workspaces is shown. + A list of your Log Analytics workspaces appears. -1. (Optional) If you don't already have a Log Analytics workspace, select **Create New workspace**, to create a new workspace in which to store the data. Follow the onscreen guide to create the workspace. +1. (Optional) If you don't already have a Log Analytics workspace in which to store the data, select **Create new workspace** and follow the on-screen guidance. -1. From the list of workspaces, select **Upgrade** for the relevant workspace to turn on Defender for Cloud's paid plans for 30 free days. +1. From the list of workspaces, select **Upgrade** for the relevant workspace to turn on Defender for Cloud paid plans for 30 free days. 1. From the list of workspaces, select **Add Servers** for the relevant workspace. - The **Agents management** page appears. +1. On the **Agents management** page, choose one of the following procedures, depending on the type of machines you're onboarding: - From here, choose the following relevant procedure depending on the type of machines you're onboarding: -- - [Onboard your Windows server](#onboard-your-windows-server) - - [Onboard your Linux servers](#onboard-your-linux-servers) + - [Onboard your Windows server](#onboard-your-windows-server) + - [Onboard your Linux server](#onboard-your-linux-server) ## Onboard your Windows server -When you add Windows server, you need the information on the Agents management page and to download the appropriate agent file (32/64-bit). +When you add a Windows server, you need to get the information on the **Agents management** page and download the appropriate agent file (32 bit or 64 bit). -**To onboard a Windows server**: +To onboard a Windows server: 1. Select **Windows servers**. - :::image type="content" source="media/quickstart-onboard-machines/windows-servers.png" alt-text="Screenshot that shows the Windows servers tab selected."::: + :::image type="content" source="media/quickstart-onboard-machines/windows-servers.png" alt-text="Screenshot that shows the tab for Windows servers."::: -1. Select the **Download Windows Agent** link applicable to your computer processor type to download the setup file. +1. Select the **Download Windows Agent** link that's applicable to your computer processor type to download the setup file. -1. From the **Agents management** page, copy the **Workspace ID** and **Primary Key** into Notepad. +1. From the **Agents management** page, copy the **Workspace ID** and **Primary Key** values into Notepad. 1. 
Copy the downloaded setup file to the target computer and run it. -1. Follow the installation wizard (**Next**, **I Agree**, **Next**, **Next**). +1. Follow the installation wizard (select **Next** > **I Agree** > **Next** > **Next**). ++1. On the **Azure Log Analytics** page, paste the **Workspace ID** and **Primary Key** values that you copied into Notepad. - 1. On the **Azure Log Analytics** page, paste the **Workspace ID** and **Workspace Key (Primary Key)** that you copied into Notepad. - - 1. If the computer should report to a Log Analytics workspace in Azure Government cloud, select **Azure US Government** from the **Azure Cloud** dropdown list. - - 1. If the computer needs to communicate through a proxy server to the Log Analytics service, select **Advanced** and provide the URL and port number of the proxy server. - - 1. When you've entered all of the configuration settings, select **Next**. - - 1. From the **Ready to Install** page, review the settings to be applied and select **Install**. - - 1. On the **Configuration completed successfully** page, select **Finish**. +1. If the computer should report to a Log Analytics workspace in the Azure Government cloud, select **Azure US Government** from the **Azure Cloud** dropdown list. -When complete, the **Microsoft Monitoring agent** appears in **Control Panel**. You can review your configuration there and verify that the agent is connected. +1. If the computer needs to communicate through a proxy server to the Log Analytics service, select **Advanced**. Then provide the URL and port number of the proxy server. ++1. When you finish entering all of the configuration settings, select **Next**. ++1. On the **Ready to Install** page, review the settings to be applied and select **Install**. ++1. On the **Configuration completed successfully** page, select **Finish**. ++When the process is complete, **Microsoft Monitoring agent** appears in **Control Panel**. You can review your configuration there and verify that the agent is connected. For more information on installing and configuring the agent, see [Connect Windows machines](../azure-monitor/agents/agent-windows.md#install-the-agent). -### Onboard your Linux servers +### Onboard your Linux server -To add Linux machines, you need the WGET command from the **Agents management** page. +To add Linux machines, you need the `wget` command from the **Agents management** page. -**To onboard your Linux server**: +To onboard your Linux server: 1. Select **Linux servers**. - :::image type="content" source="media/quickstart-onboard-machines/linux-servers.png" alt-text="Screenshot that shows the Linux servers tab selected."::: + :::image type="content" source="media/quickstart-onboard-machines/linux-servers.png" alt-text="Screenshot that shows the tab for Linux servers."::: -1. Copy the **WGET** command into Notepad. Save this file to a location that can be accessible from your Linux computer. +1. Copy the `wget` command into Notepad. Save this file to a location that you can access from your Linux computer. -1. On your Linux computer, open the file with the WGET command. Select the entire content and copy and paste it into a terminal console. +1. On your Linux computer, open the file that contains the `wget` command. Copy the entire contents and paste them into a terminal console. -1. When the installation completes, you can validate that the `omsagent` is installed by running the `pgrep` command. The command returns the `omsagent` PID. +1. 
When the installation finishes, validate that the Operations Management Suite Agent is installed by running the `pgrep` command. The command returns the `omsagent` process ID (PID). - The logs for the Agent can be found at: `/var/opt/microsoft/omsagent/\<workspace id>/log/`. It might take up to 30 minutes for the new Linux machine to appear in Defender for Cloud. + You can find the logs for the agent at `/var/opt/microsoft/omsagent/<workspace id>/log/`. The new Linux machine might take up to 30 minutes to appear in Defender for Cloud. -## Verify your machines are connected +## Verify that your machines are connected Your Azure and on-premises machines are available to view in one location. -**To verify your machines are connected**: +To verify that your machines are connected: 1. Sign in to the [Azure portal](https://portal.azure.com). 1. Search for and select **Microsoft Defender for Cloud**. -1. In the Defender for Cloud menu, select [**Inventory**](asset-inventory.md). +1. On the Defender for Cloud menu, select **Inventory** to show the [asset inventory](asset-inventory.md). 1. Filter the page to view the relevant resource types. These icons distinguish the types: -  Non-Azure machine +  Non-Azure machine -  Azure VM +  Azure VM -  Azure Arc-enabled server +  Azure Arc-enabled server ## Clean up resources -There's no need to clean up any resources for this tutorial. +There's no need to clean up any resources for this article. ## Next steps -- [Protect all of your resources with Defender for Cloud](enable-all-plans.md)--- Set up your [AWS account](quickstart-onboard-aws.md), [GCP projects](quickstart-onboard-gcp.md).+- [Protect all of your resources with Defender for Cloud](enable-all-plans.md). +- Set up your [AWS account](quickstart-onboard-aws.md) and [GCP projects](quickstart-onboard-gcp.md). |
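The Linux validation step in the row above (running `pgrep` and checking the agent log directory) can be wrapped in a small script when you onboard many machines. This is an illustrative sketch only, assuming the agent process is named `omsagent` and the log path matches the article; the workspace ID is a placeholder.

```python
# Minimal health check for the Log Analytics agent on a Linux machine,
# mirroring the manual validation step above. Assumes the agent process is
# named 'omsagent' and logs live under /var/opt/microsoft/omsagent/, as the
# article describes; WORKSPACE_ID is a placeholder.
import subprocess
from pathlib import Path

WORKSPACE_ID = "<workspace-id>"  # placeholder

# pgrep exits 0 and prints PIDs when at least one matching process exists.
result = subprocess.run(["pgrep", "omsagent"], capture_output=True, text=True)
if result.returncode == 0:
    print("omsagent is running, PID(s):", result.stdout.split())
else:
    print("omsagent process not found - check the installation.")

log_dir = Path(f"/var/opt/microsoft/omsagent/{WORKSPACE_ID}/log")
if log_dir.is_dir():
    print("Agent log directory exists:", log_dir)
else:
    print("Log directory not found - the agent may not be onboarded yet.")
```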
defender-for-cloud | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md | Title: Release notes for Microsoft Defender for Cloud description: This page is updated frequently with the latest updates in Defender for Cloud. Previously updated : 07/09/2023 Last updated : 07/12/2023 # What's new in Microsoft Defender for Cloud? Updates in July include: |Date |Update | |||-|July 9 | [Support for disabling specific vulnerability findings](#support-for-disabling-specific-vulnerability-findings) +| July 12 | [New Security alert in Defender for Servers plan 2: Detecting Potential Attacks leveraging Azure VM GPU driver extensions](#new-security-alert-in-defender-for-servers-plan-2-detecting-potential-attacks-leveraging-azure-vm-gpu-driver-extensions) +| July 9 | [Support for disabling specific vulnerability findings](#support-for-disabling-specific-vulnerability-findings) | July 1 | [Data Aware Security Posture is now Generally Available](#data-aware-security-posture-is-now-generally-available) | -### Support for disabling specific vulnerability findings ++### New security alert in Defender for Servers plan 2: detecting potential attacks leveraging Azure VM GPU driver extensions ++July 12, 2023 ++This alert focuses on identifying suspicious activities leveraging Azure virtual machine **GPU driver extensions** and provides insights into attackers' attempts to compromise your virtual machines. The alert targets suspicious deployments of GPU driver extensions; such extensions are often abused by threat actors to utilize the full power of the GPU card and perform cryptojacking. ++| Alert Display Name <br> (Alert Type) | Description | Severity | MITRE Tactic | +||||| +| Suspicious installation of GPU extension in your virtual machine (Preview) <br> (VM_GPUDriverExtensionUnusualExecution) | Suspicious installation of a GPU extension was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers may use the GPU driver extension to install GPU drivers on your virtual machine via the Azure Resource Manager to perform cryptojacking. | Low | Impact | ++For a complete list of alerts, see the [reference table for all security alerts in Microsoft Defender for Cloud](alerts-reference.md). ++ ### Support for disabling specific vulnerability findings July 9, 2023 |
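As a triage aid for the new `VM_GPUDriverExtensionUnusualExecution` alert described in the row above, you can list the extensions installed on a VM and flag anything that looks like a GPU driver extension. This sketch is not from the release notes: the string match for GPU driver extensions is an assumption about common extension type names, and the api-version and token handling are placeholders, so confirm both against your environment.

```python
# Hypothetical triage sketch: list a VM's extensions through Azure Resource
# Manager and flag candidates that resemble GPU driver extensions. The GPU
# name heuristic is an assumption; SUB/RG/VM/TOKEN are placeholders.
import requests

SUB, RG, VM = "<subscription-id>", "<resource-group>", "<vm-name>"
TOKEN = "<azure-ad-access-token>"

url = (f"https://management.azure.com/subscriptions/{SUB}/resourceGroups/{RG}"
       f"/providers/Microsoft.Compute/virtualMachines/{VM}/extensions"
       "?api-version=2022-11-01")

resp = requests.get(url, headers={"Authorization": f"Bearer {TOKEN}"},
                    timeout=30)
resp.raise_for_status()

for ext in resp.json().get("value", []):
    ext_type = ext.get("properties", {}).get("type", "")
    marker = "  <-- possible GPU driver extension" \
        if "gpudriver" in ext_type.lower() else ""
    print(f"{ext['name']}: {ext_type}{marker}")
```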
defender-for-cloud | Tutorial Enable Container Gcp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/tutorial-enable-container-gcp.md | You can learn more about Defender for Container's pricing on the [pricing page]( ## Deploy the solution to specific clusters -If you disabled any of the default auto provisioning configurations to Off, during the [GCP connector onboarding process](quickstart-onboard-gcp.md#configure-the-containers-plan), or afterwards. You need to manually install Azure Arc-enabled Kubernetes, the Defender extension, and the Azure Policy extensions to each of your GKE clusters to get the full security value out of Defender for Containers. +If you set any of the default auto provisioning configurations to Off, either during the [GCP connector onboarding process](quickstart-onboard-gcp.md#configure-the-defender-for-containers-plan) or afterwards, you need to manually install Azure Arc-enabled Kubernetes, the Defender extension, and the Azure Policy extension on each of your GKE clusters to get the full security value out of Defender for Containers. There are two dedicated Defender for Cloud recommendations you can use to install the extensions (and Arc if necessary): |
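The two recommendations mentioned in the row above include automated fix scripts in the portal; for scripted environments, the manual path is generally driven through the Azure CLI. The sketch below is illustrative only and assumes your kubeconfig already points at the GKE cluster; the extension type names are assumptions, so confirm them against the remediation steps shown in the recommendations before running anything.

```python
# Illustrative only: connect a GKE cluster to Azure Arc and install the
# Defender and Azure Policy cluster extensions by shelling out to the Azure
# CLI. Requires 'az' with the connectedk8s and k8s-extension extensions, and
# a kubeconfig context for the target cluster. Names are placeholders.
import subprocess

CLUSTER, RG, LOCATION = "<gke-cluster-name>", "<resource-group>", "<region>"

def run(cmd):
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Onboard the cluster as an Azure Arc-enabled Kubernetes resource.
run(["az", "connectedk8s", "connect",
     "--name", CLUSTER, "--resource-group", RG, "--location", LOCATION])

# 2. Install the Defender and Azure Policy cluster extensions.
for ext_name, ext_type in [
    ("microsoft-defender", "microsoft.azuredefender.kubernetes"),  # assumption
    ("azure-policy", "microsoft.policyinsights"),                  # assumption
]:
    run(["az", "k8s-extension", "create",
         "--name", ext_name,
         "--cluster-name", CLUSTER,
         "--resource-group", RG,
         "--cluster-type", "connectedClusters",
         "--extension-type", ext_type])
```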
defender-for-iot | How To Manage Subscriptions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-subscriptions.md | Title: Manage OT plans and licenses - Microsoft Defender for IoT description: Manage Microsoft Defender for IoT plans and licenses for OT monitoring. Previously updated : 05/17/2023 Last updated : 06/19/2023 This procedure describes how to purchase Defender for IoT licenses in the Micros 1. Search for **Microsoft Defender for IoT**, and then locate the **Microsoft Defender for IoT** license for your site size. -1. Follow the options through to buy the license and add it to your Microsoft 365 products. Make sure to select the number of licenses you want to purchase, based on the number of sites you want to monitor at the selected size. +1. Follow the options through to buy the license and add it to your Microsoft 365 products. ++ Make sure to select the number of licenses you want to purchase, based on the number of sites you want to monitor at the selected size. > [!IMPORTANT] > All license management procedures are done from the Microsoft 365 admin center, including buying, canceling, renewing, setting to auto-renew, auditing, and more. For more information, see the [Microsoft 365 admin center help](/microsoft-365/admin/). This procedure describes how to add an OT plan for Defender for IoT in the Azure - Select the terms and conditions. - If you're working with an on-premises management console, select **Download OT activation file (Optional)**. - When you're finished, select **Save**. If you've selected to download the on-premises management console activation file, the file is downloaded and you're prompted to save it locally. + When you're finished, select **Save**. If you've selected to download the on-premises management console activation file, the file is downloaded and you're prompted to save it locally. You'll use it later, when [activating your on-premises management console](ot-deploy/activate-deploy-management.md#activate-the-on-premises-management-console). -Your new plan is listed under the relevant subscription on the **Plans and pricing** > **Plans** page. +Your new plan is listed under the relevant subscription on the **Plans and pricing** > **Plans** page. ## Cancel a Defender for IoT plan You may need to cancel a Defender for IoT plan from your Azure subscription, for Your changes take effect one hour after confirmation. -> [!IMPORTANT] -> Canceling an OT plan in the Azure portal *doesn't* also cancel your Defender for IoT license. To change your billed licenses, make sure that you also cancel your Defender for IoT license from the Microsoft 365 admin center. -> -> For more information, see the [Microsoft 365 admin center documentation](/microsoft-365/commerce/subscriptions/manage-self-service-purchases-admins#cancel-a-purchase-or-trial-subscription). +### Cancel your Defender for IoT licenses ++Canceling an OT plan in the Azure portal *doesn't* also cancel your Defender for IoT license. To change your billed licenses, make sure that you also cancel your Defender for IoT license from the Microsoft 365 admin center. + +For more information, see the [Microsoft 365 admin center documentation](/microsoft-365/commerce/subscriptions/manage-self-service-purchases-admins#cancel-a-purchase-or-trial-subscription). +++## Migrate from a legacy OT plan ++If you're an existing customer with a legacy OT plan, we recommend migrating your plan to a site-based Microsoft 365 plan. 
After you've edited your plan, make sure to update your site details with a site size that matches your Microsoft 365 license. ++After migrating your plan to a site-based Microsoft 365 plan, edits are supported only in the Microsoft 365 admin center. ++> [!NOTE] +> Defender for IoT supports migration for a single subscription only. If you have multiple subscriptions, choose the one you want to migrate, and then move all sensors to that subscription before you update your plan settings. >+> For more information, see [Move existing sensors to a different subscription](#move-existing-sensors-to-a-different-subscription). ++**To migrate your plan**: ++1. Purchase a new, site-based license in the Microsoft 365 Marketplace for the site size that you need. For more information, see [Purchase a Defender for IoT license](#purchase-a-defender-for-iot-license). ++1. In Defender for IoT in the Azure portal, go to **Plans and pricing** and locate the subscription for the plan you want to migrate. ++1. On the subscription row, select the options menu (**...**) at the right > select **Edit plan**. ++1. In the **Price plan** field, select **Microsoft 365 (recommended)** > **Next**. For example: ++ :::image type="content" source="media/release-notes/migrate-to-365.png" alt-text="Screenshot of updating your pricing plan to Microsoft 365."::: ++1. Review your plan details and select **Save**. ++**To update your site sizes**: ++1. In Defender for IoT in the Azure portal, select **Sites and sensors** and then select the name of the site you want to migrate. ++1. In the **Edit site** pane, in the **Size** field, edit your site size to match your licensed sites. For example: ++ :::image type="content" source="media/release-notes/edit-site-size.png" alt-text="Screenshot of editing a site size on the Azure portal."::: + ## Legacy procedures for plan management in the Azure portal Starting June 1, 2023, Microsoft Defender for IoT licenses for OT monitoring are Existing customers can continue to use any legacy OT plan, with no changes in functionality. For legacy customers, *committed devices* are the number of devices you're monitoring. For more information, see [Devices monitored by Defender for IoT](architecture.md#devices-monitored-by-defender-for-iot). -You might need to edit your plan to change your plan commitment or update the number of committed devices or sites. For example, you may have more devices that require monitoring if you're increasing existing site coverage, or there are network changes such as adding switches. +### Warnings for exceeding committed devices -> [!NOTE] -> If the number of actual devices detected by Defender for IoT exceeds the number of committed devices currently listed on your subscription, you may see a warning message in the Azure portal and on your OT sensor that you have exceeded the maximum number of devices for your subscription. -> -> This warning indicates you need to update the number of committed devices on the relevant subscription to the actual number of devices being monitored. Click the link in the warning message to take you to the **Plans and pricing** page, with the **Edit plan** pane already open. +If the number of actual devices detected by Defender for IoT exceeds the number of committed devices currently listed on your subscription, you may see a warning message in the Azure portal and on your OT sensor that you have exceeded the maximum number of devices for your subscription. 
++This warning indicates you need to update the number of committed devices on the relevant subscription to the actual number of devices being monitored. Select the link in the warning message to go to the **Plans and pricing** page, with the **Edit plan** pane already open. ++### Move existing sensors to a different subscription ++If you have multiple legacy subscriptions and are migrating to a Microsoft 365 plan, you'll first need to consolidate your sensors into a single subscription. To do this, you'll need to register the sensors under the new subscription and remove them from the original subscription. ++- Devices are synchronized from the sensor to the new subscription automatically. ++- Manual edits made in the portal aren't migrated. ++- New alerts created by the sensor are created under the new subscription, and existing alerts in the old subscription can be closed in bulk. ++**To move sensors to a different subscription**: ++1. In the Azure portal, [onboard the sensor](onboard-sensors.md) from scratch to the new subscription in order to create a new activation file. When onboarding your sensor: ++ - Replicate site and sensor hierarchy as is. ++ - For sensors monitoring overlapping network segments, create the activation file under the same zone. Identical devices that are detected in more than one sensor in a zone will be merged into one device. ++1. On your sensor, upload the new activation file. ++1. Delete the sensor identities from the previous subscription. For more information, see [Site management options from the Azure portal](how-to-manage-sensors-on-the-cloud.md#site-management-options-from-the-azure-portal). ++1. If relevant, cancel the Defender for IoT plan from the previous subscription. For more information, see [Cancel a Defender for IoT plan](#cancel-a-defender-for-iot-plan). ++### Edit a legacy plan on the Azure portal 1. In the Azure portal, go to **Defender for IoT** > **Plans and pricing**. You might need to edit your plan to change your plan commitment or update the nu 1. Make any of the following changes as needed: - - Change your price plan from a trial to a monthly or annual commitment - - Update the number of [committed devices](best-practices/plan-prepare-deploy.md#calculate-devices-in-your-network) - - Update the number of sites (annual commitments only) + - Change your price plan from a trial to a monthly, annual, or Microsoft 365 plan + - Update the number of [committed devices](best-practices/plan-prepare-deploy.md#calculate-devices-in-your-network) (monthly and annual plans only) + - Update the number of sites (annual plans only) -1. Select the **I accept the terms and conditions** option, and then select **Purchase**. +1. Select the **I accept the terms and conditions** option, and then select **Save**. 1. After any changes are made, make sure to reactivate your sensors. For more information, see [Reactivate an OT sensor](how-to-manage-sensors-on-the-cloud.md#reactivate-an-ot-sensor). 1. If you have an on-premises management console, make sure to upload a new activation file, which reflects the changes made. For more information, see [Upload a new activation file](how-to-manage-the-on-premises-management-console.md#upload-a-new-activation-file). Changes to your plan will take effect one hour after confirming the change. This change will appear on your next monthly statement, and you'll be charged based on the length of time each plan was in effect.- ## Next steps For more information, see: |
defender-for-iot | How To Manage The On Premises Management Console | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-the-on-premises-management-console.md | You may need to reactivate your on-premises management console as part of mainte **To upload a new activation file to your on-premises management console**: -1. In Defender for IoT on the Azure portal, select **Plans and pricing** > **Download on-premises management console activation file**. +1. In Defender for IoT on the Azure portal, select **Plans and pricing**. ++1. Select your plan and then select **Download on-premises management console activation file**. Save your downloaded file in a location that's accessible from the on-premises management console. |
defender-for-iot | Activate Deploy Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/ot-deploy/activate-deploy-management.md | Before performing the procedures in this article, you need to have: - Access to the Azure portal as a [Security Admin](../../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../../role-based-access-control/built-in-roles.md#owner) user +- A Defender for IoT OT plan. For more information, see [Add an OT plan to your Azure subscription](../how-to-manage-subscriptions.md#add-an-ot-plan-to-your-azure-subscription). ++ When you add a plan, you're given the option of downloading an activation file for your on-premises management console. Either use the file you'd downloaded then, or use the steps in this article to download it afresh. + ## Sign in to your on-premises management console During the [software installation process](install-software-on-premises-management-console.md#users), you'll have received a set of credentials for privileged access. We recommend using the **Support** credentials when signing into the on-premises management console for the first time. In a browser, go to the on-premises management console's IP address, and enter t ## Activate the on-premises management console -Activate your on-premises management console using a downloaded file from the Azure portal. Defender for IoT activation files track the number of devices detected by connected OT sensors against the number of devices covered by your [licenses](../billing.md). --If your sensors detect more devices than you're licensed for, purchase a new license for a larger site. For more information, see [Manage OT plans and licenses](../how-to-manage-subscriptions.md). +Activate your on-premises management console using a downloaded file from the Azure portal. Either use an activation file you'd downloaded when [adding your plan](../how-to-manage-subscriptions.md#add-an-ot-plan-to-your-azure-subscription), or use the steps in this procedure to download the activation file afresh. -**To activate**: +**To download the activation file**: -1. After signing into the on-premises management console for the first time, you'll see a message prompting you to take action for a missing activation file. In the message bar, select the **Take action** link. +1. In Defender for IoT in the Azure portal, select **Plans and pricing**. - An **Activation** dialog shows the number of monitored and licensed devices. Since you're just starting the deployment, both of these values should be **0**. --1. Select the link to the **Azure portal** to jump to Defender for IoT's **Plans and pricing** page in the Azure portal. + > [!NOTE] + > If you'd prefer to start in the on-premises management console, you'll see a message prompting you to take action for a missing activation file after signing into the on-premises management console for the first time. + > + > In the message bar, select the **Take action** link. An **Activation** dialog shows the number of monitored and licensed devices. <br><br>Since you're just starting the deployment, both of these values should be **0**. <br> <br> Select the link to the **Azure portal** to jump to Defender for IoT's **Plans and pricing** page in the Azure portal. | 1. In the **Plans** grid, select your subscription. 
If your sensors detect more devices than you're licensed for, purchase a new lic [!INCLUDE [root-of-trust](../includes/root-of-trust.md)] -1. Return to your on-premises management console. In the **Activation** dialog, select **CHOOSE FILE** and select the downloaded activation file. +**To activate your on-premises management console**: ++1. If you haven't yet, sign into your on-premises management console. In the **Activation** dialog, select **CHOOSE FILE** and select the downloaded activation file. A confirmation message appears to confirm that the file's been uploaded successfully. |
defender-for-iot | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/whats-new.md | Title: What's new in Microsoft Defender for IoT description: This article describes features available in Microsoft Defender for IoT, across both OT and Enterprise IoT networks, and both on-premises and in the Azure portal. Previously updated : 05/17/2023 Last updated : 06/26/2023 Features released earlier than nine months ago are described in the [What's new archive > Noted features listed below are in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. > +## July 2023 ++|Service area |Updates | +||| +| **OT networks** | [Migrate to site-based licenses](#migrate-to-site-based-licenses) | +++### Migrate to site-based licenses ++Existing customers can now migrate their legacy Defender for IoT purchasing plans to a **Microsoft 365** plan based on site-based Microsoft 365 licenses. ++On the **Plans and pricing** page, edit your plan and select the **Microsoft 365** plan instead of your current monthly or annual plan. For example: +++Make sure to edit any relevant sites to match your newly licensed site sizes. For example: +++For more information, see [Migrate from a legacy OT plan](how-to-manage-subscriptions.md#migrate-from-a-legacy-ot-plan) and [Defender for IoT subscription billing](billing.md). + ## June 2023 |Service area |Updates | |
energy-data-services | Concepts Tier Details | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/concepts-tier-details.md | + + Title: Microsoft Azure Data Manager for Energy tier concepts +description: This article describes the Developer and Standard tiers of Azure Data Manager for Energy. ++++ Last updated : 07/11/2023++++# Azure Data Manager for Energy tiers +Azure Data Manager for Energy is available in two tiers: Standard and Developer. +++## Developer tier +The Developer tier of Azure Data Manager for Energy is designed for users who want more flexibility and speed in building out new applications and testing their [OSDU™](https://osduforum.org) Data Platform-backed solutions. The Developer tier provides users with the same high bar of security features and application integration services as the Standard tier, at a lower cost and with reduced resource capacity. Organizations can isolate and manage their test and production environments more cost-effectively. Use cases for the Developer tier of Azure Data Manager for Energy include the following: ++* Evaluating and creating a data migration strategy +* Building proofs of concept and business case demonstrations +* Defining a deployment pipeline +* Validating application compatibility +* Validating security features such as customer-managed encryption keys (CMEK) +* Implementing sensitive data classification +* Testing new [OSDU™](https://osduforum.org) Data Platform releases +* Validating data by ingesting it in a safe preproduction environment +* Testing new third-party or in-house applications +* Validating service updates +* Testing API functionality ++Customers can isolate their test and production environments in a safe and effective way. +++## Standard tier +The Standard tier of Azure Data Manager for Energy is ideal for production-ready scenarios, such as the following: ++* Operationalizing domain workflows (such as seismic or well log) +* Deploying and testing predictive reservoir models in a production environment on the cloud +* Running subsurface models +* Migrating seismic data across applications ++The Standard tier is designed for production scenarios because it provides high availability, reliability, and scale. The Standard tier includes the following: ++* Availability zones +* Disaster recovery +* Financially backed service-level agreement (SLA) +* Higher database throughput +* Higher data partition maximum +* Higher support prioritization ++++## Tier details +| Features | Developer tier | Standard tier | +| --- | --- | --- | +| Recommended use cases | Non-production scenarios such as [OSDU™](https://osduforum.org) Data Platform testing, data validation, feature testing, troubleshooting, training, and proofs of concept | Production data availability and business workflows | +| [OSDU™](https://osduforum.org) Data Platform compatibility | Yes | Yes | +| Pre-integration with leading industry apps | Yes | Yes | +| Support | Yes | Yes | +| Azure customer-managed encryption keys | Yes | Yes | +| Azure Private Link | Yes | Yes | +| Financially backed SLA credits | No | Yes | +| Disaster recovery | No | Yes | +| Availability zones | No | Yes | +| Database throughput | Low | High | +| Included data partitions | 1 | 1 | +| Maximum data partitions | 5 | 10 | ++## How to participate +You can create a Developer tier resource by going to the Azure Marketplace [create portal](https://portal.azure.com/#create/Microsoft.AzureDataManagerforEnergy) and selecting your desired tier. |
event-grid | Receive Events | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/receive-events.md | SDKs for other languages are available via the [Publish SDKs](./sdk-overview.md# ## Endpoint validation -The first thing you want to do is handle `Microsoft.EventGrid.SubscriptionValidationEvent` events. Every time someone subscribes to an event, Event Grid sends a validation event to the endpoint with a `validationCode` in the data payload. The endpoint is required to echo this back in the response body to [prove the endpoint is valid and owned by you](webhook-event-delivery.md). If you're using an [Event Grid Trigger](../azure-functions/functions-bindings-event-grid.md) rather than a WebHook triggered Function, endpoint validation is handled for you. If you use a third-party API service (like [Zapier](https://zapier.com/home) or [IFTTT](https://ifttt.com/)), you might not be able to programmatically echo the validation code. For those services, you can manually validate the subscription by using a validation URL that is sent in the subscription validation event. Copy that URL in the `validationUrl` property and send a GET request either through a REST client or your web browser. +The first thing you want to do is handle `Microsoft.EventGrid.SubscriptionValidationEvent` events. Every time someone subscribes to an event, Event Grid sends a validation event to the endpoint with a `validationCode` in the data payload. The endpoint is required to echo this back in the response body to [prove the endpoint is valid and owned by you](webhook-event-delivery.md). If you're using an [Event Grid Trigger](../azure-functions/functions-bindings-event-grid.md) rather than a WebHook-triggered function, endpoint validation is handled for you. If you use a third-party API service (like [Zapier](https://zapier.com/) or [IFTTT](https://ifttt.com/)), you might not be able to programmatically echo the validation code. For those services, you can manually validate the subscription by using the validation URL that is sent in the subscription validation event. Copy the URL from the `validationUrl` property and send a GET request either through a REST client or your web browser. In C#, the `ParseMany()` method is used to deserialize a `BinaryData` instance containing one or more events into an array of `EventGridEvent`. If you know ahead of time that you're deserializing only a single event, you can use the `Parse` method instead. |
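The validation handshake described in the row above is language-neutral: the endpoint only needs to return the `validationCode` it received as `validationResponse`. The article's own samples are in C#; the following is a minimal Python sketch using Flask, with a placeholder route name, offered as an illustration of the handshake rather than as the article's sample code.

```python
# Minimal webhook sketch (Flask) that completes the Event Grid subscription
# validation handshake by echoing validationCode back as validationResponse.
# Illustrative only; route name and port are placeholders.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/api/updates", methods=["POST"])
def handle_events():
    events = request.get_json()
    for event in events:
        if event.get("eventType") == \
                "Microsoft.EventGrid.SubscriptionValidationEvent":
            code = event["data"]["validationCode"]
            # Echo the code back so Event Grid can verify endpoint ownership.
            return jsonify({"validationResponse": code})
        # Otherwise, process your application events here.
        print("Received:", event.get("eventType"), event.get("subject"))
    return "", 200

if __name__ == "__main__":
    app.run(port=8080)
```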
external-attack-surface-management | Data Connections | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/data-connections.md | Title: Defender EASM Data Connections -description: "The data connector sends Defender EASM asset data to two different platforms: Microsoft Log Analytics and Azure Data Explorer. Users need to be active customers to export Defender EASM data to either tool, and data connections are subject to the pricing model for each respective platform." + Title: Defender EASM data connections +description: "The data connector sends Defender EASM asset data to Log Analytics and Azure Data Explorer. You can export Defender EASM data to either tool." -# Leveraging data connections +# Use data connections -Microsoft Defender External Attack Surface Management (Defender EASM) now offers data connections to help users seamlessly integrate their attack surface data into other Microsoft solutions to supplement existing workflows with new insights. Users must get data from Defender EASM into the other security tools they use for remediation purposes to best operationalize their attack surface data. +This article discusses the data connections feature in Microsoft Defender External Attack Surface Management (Defender EASM). -The data connector sends Defender EASM asset data to two different platforms: Microsoft Log Analytics and Azure Data Explorer. Users need to be active customers to export Defender EASM data to either tool, and data connections are subject to the pricing model for each respective platform. +## Overview +Defender EASM now offers data connections to help you seamlessly integrate your attack surface data into other Microsoft solutions to supplement existing workflows with new insights. You must get data from Defender EASM into the other security tools you use for remediation purposes to make the best use of your attack surface data. -[Microsoft Log Analytics](/azure/sentinel/overview) provides SIEM (security information and event management) and SOAR (security orchestration, automation and response) capabilities. Defender EASM asset or insights information can be used in Log Analytics to enrich existing workflows in conjunction with other security data. This information can supplement firewall and configuration information, threat intelligence, compliance data and more to provide visibility into your external-facing infrastructure on the open internet. Users can create or enrich security incidents, build investigation playbooks, train machine learning algorithms, or trigger remediation actions. +The data connector sends Defender EASM asset data to two different platforms: Log Analytics and Azure Data Explorer. You need to export Defender EASM data to either tool. Data connections are subject to the pricing model for each respective platform. -[Azure Data Explorer](/azure/data-explorer/data-explorer-overview) is a big data analytics platform that helps users analyze high volumes of data from various sources with flexible customization capabilities. Defender EASM asset and insights data can be integrated to leverage visualization, query, ingestion and management capabilities within the platform. Whether building custom reports with Power BI or hunting for assets that match precise KQL queries, exporting Defender EASM data to Azure Data Explorer enables users to leverage their attack surface data with endless customization potential. 
+[Log Analytics](/azure/sentinel/overview) provides security information and event management and security orchestration, automation, and response capabilities. Defender EASM asset or insights information can be used in Log Analytics to enrich existing workflows with other security data. This information can supplement firewall and configuration information, threat intelligence, and compliance data to provide visibility into your external-facing infrastructure on the open internet. +You can: ++- Create or enrich security incidents. +- Build investigation playbooks. +- Train machine learning algorithms. +- Trigger remediation actions. ++[Azure Data Explorer](/azure/data-explorer/data-explorer-overview) is a big data analytics platform that helps you analyze high volumes of data from various sources with flexible customization capabilities. Defender EASM asset and insights data can be integrated to use visualization, query, ingestion, and management capabilities within the platform. ++Whether you're building custom reports with Power BI or hunting for assets that match precise KQL queries, exporting Defender EASM data to Azure Data Explorer enables you to use your attack surface data with endless customization potential. ## Data content options -<br>Defender EASM data connections offer users the ability to integrate two different kinds of attack surface data into the tool of their choice. Users can elect to migrate asset data, attack surface insights or both data types. Asset data provides granular details about your entire inventory, whereas attack surface insights provide immediately actionable insights based on Defender EASM dashboards. +Defender EASM data connections offer you the ability to integrate two different kinds of attack surface data into the tool of your choice. You can elect to migrate asset data, attack surface insights, or both data types. Asset data provides granular details about your entire inventory. Attack surface insights provide immediately actionable insights based on Defender EASM dashboards. -To accurately present the infrastructure that matters most to your organization, please note that both content options will only include assets in the ΓÇ£Approved InventoryΓÇ¥ state. +To accurately present the infrastructure that matters most to your organization, both content options only include assets in the **Approved** inventory state. +**Asset data**: The Asset Data option sends data about all your inventory assets to the tool of your choice. This option is best for use cases where the granular underlying metadata is key to your Defender EASM integration. Examples include Microsoft Sentinel or customized reporting in Azure Data Explorer. You can export high-level context on every asset in inventory and granular details specific to the particular asset type. -**Asset data** -<br>The Asset Data option will send data about all your inventory assets to the tool of your choice. This option is best for use cases where the granular underlying metadata is key to the operationalization of your Defender EASM integration (e.g. Sentinel, customized reporting in Data Explorer). Users can export high-level context on every asset in inventory as well as granular details specific to the particular asset type. This option does not provide any pre-determined insights about the assets; instead, it offers an expansive amount of data so users can surface the customized insights they care about most. +This option doesn't provide any predetermined insights about the assets. 
Instead, it offers an expansive amount of data so that you can find the customized insights you care about most. +**Attack surface insights**: Attack surface insights provide an actionable set of results based on the key insights delivered through dashboards in Defender EASM. This option provides less granular metadata on each asset. It categorizes assets based on the corresponding insights and provides the high-level context required to investigate further. This option is ideal if you want to integrate these predetermined insights into custom reporting workflows with data from other tools. -**Attack surface insights** -<br>Attack Surface Insights provide an actionable set of results based on the key insights delivered through dashboards in Defender EASM. This option provides less granular metadata on each asset; instead, it categorizes assets based on the corresponding insight(s) and provides the high-level context required to investigate further. This option is ideal for those who want to integrate these pre-determined insights into custom reporting workflows in conjunction with data from other tools. +## Configuration overviews +This section presents general information on configuration. -## **Configuration overviews** +### Access data connections +On the leftmost pane in your Defender EASM resource pane, under **Manage**, select **Data Connections**. This page displays the data connectors for both Log Analytics and Azure Data Explorer. It lists any current connections and provides the option to add, edit, or remove connections. + -**Accessing data connections** -<br>Users can access Data Connections from the **Manage** section of the left-hand navigation pane within their Defender EASM resource blade. This page displays the data connectors for both Log Analytics and Azure Data Explorer, listing any current connections and providing the option to add, edit or remove connections. +### Connection prerequisites +To successfully create a data connection, you must first ensure that you've completed the required steps to grant Defender EASM permission to the tool of your choice. This process enables the application to ingest your exported data. It also provides the authentication credentials needed to configure the connection. - +## Configure Log Analytics permissions +1. Open the Log Analytics workspace that will ingest your Defender EASM data or [create a new workspace](/azure/azure-monitor/logs/quick-create-workspace?tabs=azure-portal). -**Connection prerequisites** -<br>To successfully create a data connection, users must first ensure that they have completed the required steps to grant Defender EASM permission to the tool of their choice. This process enables the application to ingest our exported data and provides the authentication credentials needed to configure the connection. +1. On the leftmost pane, under **Settings**, select **Agents**. -## Configuring Log Analytics permissions +  -1. Open the Log Analytics workspace that will ingest your Defender EASM data, or [create a new workspace](/azure/azure-monitor/logs/quick-create-workspace?tabs=azure-portal). +1. Expand the **Log Analytics agent instructions** section to view your workspace ID and primary key. These values are used to set up your data connection. -2. Select **Agents** from the **Settings** section of the left-hand navigation menu. +Use of this data connection is subject to the pricing structure of Log Analytics. For more information, see [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/). 
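To make the role of the workspace ID and primary key concrete, the following minimal Python sketch signs a test payload for the Log Analytics HTTP Data Collector API the way any shared-key client must. The workspace values and the `EasmTest` log type are placeholder assumptions, and Defender EASM performs its own ingestion server-side, so treat this only as a way to sanity-check the credentials you copied from the **Agents** page.

```python
# Hedged sketch: send one test record to Log Analytics using the workspace ID and
# primary key. Placeholder values throughout; not how Defender EASM itself ships data.
import base64
import hashlib
import hmac
import json
from datetime import datetime, timezone

import requests

WORKSPACE_ID = "00000000-0000-0000-0000-000000000000"  # workspace ID (placeholder)
SHARED_KEY = "<base64-primary-key>"                    # primary key (placeholder)

def signature(date: str, content_length: int) -> str:
    # The Data Collector API expects an HMAC-SHA256 over this exact string.
    string_to_sign = f"POST\n{content_length}\napplication/json\nx-ms-date:{date}\n/api/logs"
    digest = hmac.new(base64.b64decode(SHARED_KEY),
                      string_to_sign.encode("utf-8"), hashlib.sha256).digest()
    return f"SharedKey {WORKSPACE_ID}:{base64.b64encode(digest).decode()}"

body = json.dumps([{"AssetName": "contoso.com", "Kind": "domain"}]).encode("utf-8")
rfc1123_date = datetime.now(timezone.utc).strftime("%a, %d %b %Y %H:%M:%S GMT")

resp = requests.post(
    f"https://{WORKSPACE_ID}.ods.opinsights.azure.com/api/logs?api-version=2016-04-01",
    data=body,
    headers={
        "Content-Type": "application/json",
        "Log-Type": "EasmTest",  # arrives in the workspace as the EasmTest_CL table
        "x-ms-date": rfc1123_date,
        "Authorization": signature(rfc1123_date, len(body)),
    },
)
resp.raise_for_status()  # HTTP 200 confirms the workspace accepted the payload
```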
-  +## Configure Azure Data Explorer permissions -3. Expand the **Log Analytics agent instructions** section to view your Workspace ID and Primary key. These values will be used to set up your data connection. - -Please note that use of this data connection is subject to the pricing structure of Log Analytics. See [Azure monitor pricing](https://azure.microsoft.com/pricing/details/monitor/) for more information. - - - -## Configuring Data Explorer permissions +Ensure that the Defender EASM API service principal has access to the correct roles in the database where you want to export your attack surface data. First, ensure that your Defender EASM resource was created in the appropriate tenant because this action provisions the EASM API principal. -1. First, ensure that the Defender "EASM API" service principal has access to the correct roles in the database where you wish to export your attack surface data. For this reason, first ensure that your Defender EASM resource has been created in the appropriate tenant as this action provisions the EASM API principal. -2. Open the Data Explorer cluster that will ingest your Defender EASM data or [create a new cluster](/azure/data-explorer/create-cluster-database-portal). -3. Select **Databases** in the Data section of the left-hand navigation menu. -4. Select **+ Add Database** to create a database to house your Defender EASM data. +1. Open the Azure Data Explorer cluster that will ingest your Defender EASM data or [create a new cluster](/azure/data-explorer/create-cluster-database-portal). +1. On the leftmost pane, under **Data**, select **Databases**. +1. Select **Add Database** to create a database to house your Defender EASM data. -  +  -5. Name your database, configure retention and cache periods, then select **Create**. +1. Name your database, configure retention and cache periods, and select **Create**. -  +  -6. Once your Defender EASM database has been created, click on the database name to open the details page. Select **Permissions** from the Overview section of the left-hand navigation menu. - To successfully export Defender EASM data to Data Explorer, users must create two new permissions for the EASM API: **user** and **ingestor**. - -  +1. After your Defender EASM database is created, select the database name to open the details page. On the leftmost pane, under **Overview**, select **Permissions**. + To successfully export Defender EASM data to Azure Data Explorer, you must create two new permissions for the EASM API: **user** and **ingestor**. -7. First, select **+ Add** and create a user. Search for "**EASM API**", select the value then click **Select**. +  -8. Select **+ Add** to create an ingestor. Follow the same steps outlined above to add the **"EASM API"** as an ingestor. - -9. Your database is now ready to connect to Defender EASM. You will need the cluster name, database name and region when configuring your Data Connection. +1. Select **Add** and create a user. Search for **EASM API**, select the value, and choose **Select**. +1. Select **Add** to create an ingestor. Follow the same steps previously outlined to add the **EASM API** as an ingestor. +1. Your database is now ready to connect to Defender EASM. You need the cluster name, database name, and region when you configure your data connection. ## Add a data connection-<br>Users can connect their Defender EASM data to either Log Analytics or Azure Data Explorer. 
To do so, simply select **"Add connection"** for the appropriate tool from the Data Connections page. +You can connect your Defender EASM data to either Log Analytics or Azure Data Explorer. To do so, select **Add connection** for the appropriate tool from the **Data Connections** page. -A configuration pane will open on the right-hand side of the Data Connections screen. The following fields are required for each respective tool: +A configuration pane opens on the right side of the **Data Connections** page. The following fields are required for each respective tool. ### Log Analytics-- **Name**: enter a name for this data connection.-- **Workspace ID**: the workspace ID for the Log Analytics instance where you wish to export Defender EASM data. -- **Api key**: the API key for the Log Analytics instance. -- **Content**: users can select to integrate asset data, attack surface insights or both datasets. -- **Frequency**: select the frequency that the Defender EASM connection sends updated data to the tool of your choice. Available options are daily, weekly and monthly.- -  ---### Azure Data Explorer -- **Name**: enter a name for this data connection.-- **Cluster name**: the name of the Azure Data Explorer cluster where you wish to export Defender EASM data. -- **Region**: the region of the Azure Data Explorer cluster. -- **Database name**: the name of the desired database. -- **Content**: users can select to integrate asset data, attack surface insights or both datasets. -- **Frequency**: select the frequency that the Defender EASM connection sends updated data to the tool of your choice. Available options are daily, weekly and monthly.--  - - - Once all fields are configured, select **Add** to create the data connection. At this point, the Data Connections page will display a banner that indicates the resource has been successfully created and data will begin populating within 30 minutes. Once connections are created, they will be listed under the applicable tool on the main Data Connections page. - ++- **Name**: Enter a name for this data connection. +- **Workspace ID**: Enter the workspace ID for the Log Analytics instance where you want to export Defender EASM data. +- **API key**: Enter the API key for the Log Analytics instance. +- **Content**: Select to integrate asset data, attack surface insights, or both datasets. +- **Frequency**: Select the frequency that the Defender EASM connection uses to send updated data to the tool of your choice. Available options are daily, weekly, and monthly. ++  ++### Azure Data Explorer ++- **Name**: Enter a name for this data connection. +- **Cluster name**: Enter the name of the Azure Data Explorer cluster where you want to export Defender EASM data. +- **Region**: Enter the region of the Azure Data Explorer cluster. +- **Database name**: Enter the name of the desired database. +- **Content**: Select to integrate asset data, attack surface insights, or both datasets. +- **Frequency**: Select the frequency that the Defender EASM connection uses to send updated data to the tool of your choice. Available options are daily, weekly, and monthly. ++  ++ After all fields are configured, select **Add** to create the data connection. At this point, the **Data Connections** page displays a banner that indicates the resource was successfully created. In 30 minutes, data begins to populate. After connections are created, they're listed under the applicable tool on the main **Data Connections** page. 
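Once a connection is created and data begins to populate, a quick way to confirm the export landed is to query the workspace. A hedged sketch using the `azure-monitor-query` SDK follows; the `EasmAsset_CL` table and `Kind_s` column names are assumptions about the custom tables the connection creates, so check your own workspace schema first.

```python
# Hedged sketch: confirm exported Defender EASM rows are arriving in Log
# Analytics. Table and column names below are assumptions, not documented names.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

result = client.query_workspace(
    workspace_id="<workspace-id>",                       # placeholder
    query="EasmAsset_CL | summarize count() by Kind_s",  # assumed table/column names
    timespan=timedelta(days=1),
)

for table in result.tables:
    for row in table.rows:
        print(row)  # one row per asset kind seen in the last day
```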
+ ## Edit or delete a data connection-<br>Users can edit or delete a data connection. For example, you may notice that a connection is listed as "Disconnected" and would therefore need to re-enter the configuration details to fix the issue. -To edit or delete a data connection: +You can edit or delete a data connection. For example, you might notice that a connection is listed as **Disconnected**. In this case, you need to reenter the configuration details to fix the issue. ++To edit or delete a data connection: ++1. Select the appropriate connection from the list on the main **Data Connections** page. ++  ++1. A page opens that provides more data about the connection. It displays the configurations you chose when you created the connection and any error messages. You also see the following data: -1. Select the appropriate connection from the list on the main Data Connections page. -  + - **Recurring on**: The day of the week or month that Defender EASM sends updated data to the connected tool. + - **Created**: The date and time that the data connection was created. + - **Updated**: The date and time that the data connection was last updated. -1. This action will open a page that provides additional data about the connection. This page displays the configurations you elected when creating the connection, as well as any error messages. Users will also see the following additional data: - • **Recurring on**: the day of the week or month that Defender EASM sends updated data to the connected tool. - • **Created**: the date and time that the data connection was created. - • **Updated**: the date and time that the data connection was last updated. -  +  -1. From this page, users can elect to reconnect, edit or delete their data connection. - - **Reconnect**: this option attempts to validate the data connection without any changes to the configuration. This option is best for those who have validated the authentication credentials used for the data connection. - - **Edit**: this option allows users to change the configuration for the data connection. - - **Delete**: this option deletes the data connection. - + - **Reconnect**: Attempts to validate the data connection without any changes to the configuration. This option is best if you validated the authentication credentials used for the data connection. + - **Edit**: Allows you to change the configuration for the data connection. + - **Delete**: Deletes the data connection. |
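The user and ingestor grants from the Azure Data Explorer portal steps can also be scripted as Kusto management commands, which is convenient when standing up several databases. A hedged Python sketch with `azure-kusto-data` follows; the cluster URI, database name, and the EASM API principal's application and tenant IDs are placeholders you would need to fill in.

```python
# Hedged sketch: grant the EASM API principal the user and ingestor roles on an
# Azure Data Explorer database, mirroring the portal's Permissions steps.
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(
    "https://<cluster>.<region>.kusto.windows.net"  # placeholder cluster URI
)
client = KustoClient(kcsb)

database = "easm-exports"                            # placeholder database name
principal = "aadapp=<easm-api-app-id>;<tenant-id>"   # placeholder principal string

# .add database is the management-command equivalent of the Permissions pane.
for role in ("users", "ingestors"):
    client.execute_mgmt(database, f".add database ['{database}'] {role} ('{principal}')")
```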
external-attack-surface-management | Deploying The Defender Easm Azure Resource | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/deploying-the-defender-easm-azure-resource.md | Title: Creating a Defender EASM Azure resource -description: This article explains how to create an Microsoft Defender External Attack Surface Management (Defender EASM) Azure resource using the Azure portal. + Title: Create a Defender EASM Azure resource +description: This article explains how to create a Microsoft Defender External Attack Surface Management (Defender EASM) Azure resource by using the Azure portal. -# Creating a Defender EASM Azure resource +# Create a Defender EASM Azure resource -This article explains how to create a Microsoft Defender External Attack Surface Management (Defender EASM) Azure resource using the Azure portal. +This article explains how to create a Microsoft Defender External Attack Surface Management (Defender EASM) Azure resource by using the Azure portal. -Creating the EASM Azure resource involves two steps: +Creating the Defender EASM Azure resource involves two steps: -- Create a resource group-- Create an EASM resource in the resource group+- Create a resource group. +- Create a Defender EASM resource in the resource group. ## Prerequisites -Before you create a Defender EASM resource group, we recommend that you are familiar with how to access and use the [Microsoft Azure portal](https://portal.azure.com/) and read the [Defender EASM Overview article](index.md) for key context on the product. You will need: +Before you create a Defender EASM resource group, become familiar with how to access and use the [Azure portal](https://portal.azure.com/). Also read the [Defender EASM Overview article](index.md) for key context on the product. You need: - A valid Azure subscription or free Defender EASM trial account. If you don't have an [Azure subscription](../guides/developer/azure-developer-guide.md#understanding-accounts-subscriptions-and-billing), create a free Azure account before you begin.--- Your Azure account must have a contributor role assigned for you to create a resource. To get this role assigned to your account, follow the steps in the [Assign roles](../role-based-access-control/role-assignments-steps.md) documentation, or contact your administrator.-+- A Contributor role assigned for you to create a resource. To get this role assigned to your account, follow the steps in the [Assign roles](../role-based-access-control/role-assignments-steps.md) documentation. Or you can contact your administrator. ## Create a resource group -1. To create a new resource group, first select **Resource groups** in the Azure portal. --  --2. Under Resource Groups, select **Create**: --  --3. Select or enter the following property values: +1. To create a new resource group, select **Resource groups** in the Azure portal. -- **Subscription**: Select an Azure subscription.-- **Resource Group**: Give the resource group a name.-- **Region**: Specify an Azure location. This location is where the resource group stores metadata about the resource. For compliance reasons, you may want to specify where that metadata is stored. In general, we recommend that you specify a location where most of your resources will be. Using the same location can simplify your template. 
The following regions are supported: - - southcentralus - - eastus - - australiaeast - - westus3 - - swedencentral - - eastasia - - japaneast - - westeurope - - northeurope - - switzerlandnorth - - canadacentral - - centralus - - norwayeast - - francecentral +  -  +1. Under **Resource groups**, select **Create**. -4. Select **Review + Create**. +  -5. Review the values, and then select **Create**. +1. Select or enter the following property values: -6. Select **Refresh** to view the new resource group in the list. + - **Subscription**: Select an Azure subscription. + - **Resource group**: Give the resource group a name. + - **Region**: Specify an Azure location. This location is where the resource group stores metadata about the resource. For compliance reasons, you might want to specify where that metadata is stored. In general, we recommend that you specify a location where most of your resources will be. Using the same location can simplify your template. The following regions are supported: + - southcentralus + - eastus + - australiaeast + - westus3 + - swedencentral + - eastasia + - japaneast + - westeurope + - northeurope + - switzerlandnorth + - canadacentral + - centralus + - norwayeast + - francecentral ++  ++1. Select **Review + create**. ++1. Review the values and select **Create**. ++1. Select **Refresh** to view the new resource group in the list. ## Create resources in a resource group -After you create a resource group, you can create EASM resources within the group by searching for EASM within the Azure portal. +After you create a resource group, you can create Defender EASM resources in the group by searching for Defender EASM in the Azure portal. - -1. In the search box, type **Microsoft Defender EASM**, and then press Enter. +1. In the search box, enter **Microsoft Defender EASM** and select Enter. -2. Select the **Create** button to create an EASM resource. +1. Select **Create** to create a Defender EASM resource. -  +  -3. Select or enter the following property values: +1. Select or enter the following property values: - **Subscription**: Select an Azure subscription.- - **Resource Group**: Select the Resource Group created in the earlier step, or you can create a new one as part of the process of creating this resource. - - **Name**: give the Defender EASM workspace a name. - - **Region**: Select an Azure location. See the supported regions above. -+ - **Resource group**: Select the resource group created in the earlier step. You can also create a new one as part of the process of creating this resource. + - **Name**: Give the Defender EASM workspace a name. + - **Region**: Select an Azure location. See the supported regions listed in the preceding section. -  +  -4. Select **Review + Create**. +1. Select **Review + create**. -5. Review the values, and then select **Create**. +1. Review the values and select **Create**. -6. Select **Refresh** to see the status of the resource creation. Once finished, you can go to the Resource to get started. +1. Select **Refresh** to see the status of the resource creation. Now you can go to the resource to get started. ## Next steps -- [Using and managing discovery](using-and-managing-discovery.md)-- [Understanding dashboards](understanding-dashboards.md)----+- [Use and manage discovery](using-and-managing-discovery.md) +- [Understand dashboards](understanding-dashboards.md) |
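For repeatable deployments, the same two steps can be driven through the generic ARM client instead of the portal. The sketch below assumes the Defender EASM resource type is `Microsoft.Easm/workspaces` and uses an assumed preview API version; verify both against your subscription's registered provider before relying on it.

```python
# Hedged sketch: create the resource group, then a Defender EASM workspace in it.
# Provider namespace and API version are assumptions to verify.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

client = ResourceManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Step 1: the resource group, in one of the supported regions listed above.
client.resource_groups.create_or_update("easm-rg", {"location": "eastus"})

# Step 2: the Defender EASM workspace inside that group.
poller = client.resources.begin_create_or_update(
    resource_group_name="easm-rg",
    resource_provider_namespace="Microsoft.Easm",  # assumed provider namespace
    parent_resource_path="",
    resource_type="workspaces",
    resource_name="contoso-easm",
    api_version="2023-04-01-preview",              # assumed API version
    parameters={"location": "eastus"},
)
print(poller.result().id)  # fully qualified resource ID once provisioning ends
```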
external-attack-surface-management | Inventory Filters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/inventory-filters.md | Title: Inventory filters overview -description: This article outlines the filter functionality available in Microsoft Defender External Attack Surface Management (Defender EASM), helping users surface specific subsets of inventory assets based on selected parameters. +description: This article outlines the filter functionality available in Defender EASM to help you find specific subsets of inventory assets based on selected parameters. -This article outlines the filter functionality available in Microsoft Defender External Attack Surface Management (Defender EASM), helping users surface specific subsets of inventory assets based on selected parameters. This documentation section outlines each filter and operator and provides guidance on input options that yield the best results. It also explains how to save queries for easy accessibility to the filtered results. +This article outlines the filter functionality available in Microsoft Defender External Attack Surface Management (Defender EASM). Filtering helps you find specific subsets of inventory assets based on selected parameters. This article outlines each filter and operator and provides guidance on input options that yield the best results. It also explains how to save queries for easy accessibility to the filtered results. -## How it works +## How it works -Inventory filters allow users to access a specific subset of data that meets their search parameters. A user can apply as many filters as they need to obtain the desired results. +Inventory filters allow you to access a specific subset of data that meets your search parameters. You can apply as many filters as you need to obtain the results you want. -By default, the Inventory screen displays only Approved Inventory assets, hiding any assets in an alternative state. This filter can be removed if a user wishes to view assets in a different state (for instance: Candidate, Dependency, Requires Investigation). Removing the Approved Inventory filter is useful when a user needs to review potential new assets, investigate a third-party dependency issue or simply needs a complete view of all potential owned assets when conducting a search. +By default, the **Inventory** screen displays only **Approved** inventory assets. Assets in an alternative state are hidden. This filter can be removed if you want to view assets in a different state. Other states are **Candidate**, **Dependency**, and **Requires investigation**. -Defender EASM offers a wide variety of filters to obtain results of differing levels of granularity. Some filters allow you to select value options from a dropdown, whereas others require the manual entry of the desired value. +Removing the **Approved** inventory filter is useful when you need to: - +- Review potential new assets. +- Investigate a third-party dependency issue. +- See a complete view of all potential owned assets when you conduct a search. +Defender EASM offers various filters to obtain results of differing levels of granularity. With some filters, you can select value options from a dropdown list. Others require you to manually enter the value you want. -## Saved queries + -Users can save queries of interest to quickly access the resulting asset list. 
This is beneficial to users who search for a particular subset of assets on a routine basis, or need to easily refer to a specific filter configuration at a later time. Saved filters help you easily access the assets you care about most based on highly customizable parameters. +## Saved queries +You can save queries of interest to quickly access the resulting asset list. This feature is beneficial if you need to search for a particular subset of assets on a routine basis. It's also helpful if you need to easily refer to a specific filter configuration at a later time. Saved filters help you easily access the assets you care about most based on highly customizable parameters. -To save a query: +To save a query: -1. First, carefully select the filter(s) that will produce your desired results. For more information on the applicable filters for each kind of asset, please see the "Next Steps" section. In this example, we are searching for domains expiring within 30 days that require renewal. Select **Search**. +1. First, carefully select the filters to produce the results you want. For more information on the applicable filters for each kind of asset, see the "Next steps" section. In this example, you're searching for domains that expire within 30 days that require renewal. Select **Search**. -  +  -2. Review the resulting assets. If you are satisfied with the selected filter(s) and wish to save the query, select **Save query**. +1. Review the resulting assets. If you're satisfied with the selected filters and want to save the query, select **Save query**. -3. Name your query and provide a description. Query names cannot be edited after the initial setup, but descriptions can be changed at a later time. Once done, select **Save**. A banner will appear that confirms the query has been saved. +1. Name your query and provide a description. Query names can't be edited after the initial setup, but descriptions can be changed at a later time. Select **Save**. A banner appears that confirms the query was saved. -  +  -4. To view your saved filters, select the "Saved queries" tab at the top of the inventory list page. Any saved queries will be visible from the top section, and selecting "Open query" will filter your inventory by the designated parameters. From this page, you can also edit or delete saved queries. --  +1. To view your saved filters, select the **Saved queries** tab at the top of the inventory list page. Any saved queries are visible in the top section. Selecting **Open query** filters your inventory by the designated parameters. From this page, you can also edit or delete saved queries. +  ## Operators -Inventory filters can be used with the following operators. Some operators aren't available for every filter; some operators are hidden if they aren't logically applicable to the specific filter. +Inventory filters can be used with the following operators. Some operators aren't available for every filter. Some operators are hidden if they aren't logically applicable to the specific filter. | Operator | Description | |--|:- |-| `Equals` | Returns results that exactly match the search value. This filter only returns results for one value at a time. For filters that populate a drop-down list of options, only one option can be selected at a time. To select multiple values, see ΓÇ£inΓÇ¥ operator. | +| `Equals` | Returns results that exactly match the search value. This filter only returns results for one value at a time. 
For filters that populate a dropdown list of options, only one option can be selected at a time. To select multiple values, see the `In` operator. | | `Not Equals` | Returns results where the field doesn't exactly match the search value. | | `Starts with` | Returns results where the field starts with the search value. | | `Does not start with` | Returns results where the field doesn't start with the search value. | | `Matches` | Returns results where a tokenized term in the field exactly matches the search value. |-| `Does not match` | Returns results where a tokenized term in the field doesn't exactly matches the search value. | -| `In` | Returns results where the field exactly matches one of the search values. For drop-down lists, multiple options can be selected. | -| `Not In` | Returns results where the field doesn't exactly match any of the search values. Multiple options can be selected, and manually inputted fields exclude results that match an exact value. | +| `Does not match` | Returns results where a tokenized term in the field doesn't exactly match the search value. | +| `In` | Returns results where the field exactly matches one of the search values. For dropdown lists, multiple options can be selected. | +| `Not In` | Returns results where the field doesn't exactly match any of the search values. Multiple options can be selected. Manually input fields exclude results that match an exact value. | | `Starts with in` | Returns results where the field starts with one of the search values. | | `Does not start with in` | Returns results where the field doesn't start with any of the search values. | | `Matches in` | Returns results where a tokenized term in the field exactly matches one of the search values. | | `Does Not Contain In` | Returns results where a tokenized term in the field content doesn't contain any of the search values. | | `Empty` | Returns assets that don't return any value for the specified filter. | | `Not Empty` | Returns all assets that return a value for the specified filter, regardless of the value. |-| `Greater Than or Equal To` | Returns results that are greater than or equal to a numerical value. This includes dates. | -| `Between` | Returns results within a numerical range. This includes date ranges. | -+| `Greater Than or Equal To` | Returns results that are greater than or equal to a numerical value. Includes dates. | +| `Between` | Returns results within a numerical range. Includes date ranges. | ## Common filters -These filters apply to all kinds of assets within inventory. These filters can be used when searching for a wider range of assets. For a list of filters for specific kinds of assets, see the "Next steps" section. +These filters apply to all kinds of assets within an inventory. You can use these filters when you search for a wider range of assets. For a list of filters for specific kinds of assets, see the "Next steps" section. +### Defined value filters - ### Defined value filters - The following filters provide a drop-down list of options to select. The available values are pre-defined. + The following filters provide a dropdown list of options that you can select. The available values are predefined. | Filter name | Description | Selectable values | Available operators | |--|-||--|-| Kind | Filters by specific web property types that comprise your inventory. 
| ASN, Contact, Domain, Host, IP Address, IP Block, Page, SSL Cert | `Equals` `Not Equals` `In` `Not In` `Empty` `Not Empty` | +| Kind | Filters by specific web property types that comprise your inventory. | ASN, Contact, Domain, Host, IP Address, IP Block, Page, SSL Cert | `Equals`, `Not Equals`, `In`, `Not In`, `Empty`, `Not Empty` | | State | The state assigned to assets to distinguish their relevance to your organization and how Defender EASM monitors them. | Approved, Candidate, Dependency, Monitor only, Requires investigation | |-| Removed from Inventory | The method with which an asset was removed from inventory. | Archived, Dismissed | | -| Created At | Filters by the date that an asset was created in your inventory. | Date range via calendar dropdown | `Greater Than or Equal To` `Less Than or Equal To` `Between` | +| Removed from Inventory | The method by which an asset was removed from inventory. | Archived, Dismissed | | +| Created At | Filters by the date that an asset was created in your inventory. | Date range via calendar dropdown | `Greater Than or Equal To`, `Less Than or Equal To`, `Between` | | First Seen | Filters by the date that an asset was first observed by the Defender EASM detection system. | Date range via calendar dropdown | | | | | Last Seen | Filters by the date that an asset was last observed by the Defender EASM detection system. | Date range via calendar dropdown | | |-| Labels | Filters for labels manually applied to inventory assets. | Accepts free-form responses, but also offers a dropdown of labels available in your Defender EASM resource. | | Updated At | Filters by the date that asset data was last updated in inventory. | Date range via calendar dropdown | | |-| Wildcard | A wildcard DNS record answers DNS requests for subdomains that haven't already been defined. For example: *.contoso.com | True, False | `Equals` `Not Equals` | +| Labels | Filters for labels manually applied to inventory assets. | Accepts freeform responses, but also offers a dropdown of labels available in your Defender EASM resource | | Updated At | Filters by the date that asset data was last updated in inventory. | Date range via calendar dropdown | | |-| Wildcard | A wildcard DNS record answers DNS requests for subdomains that haven't already been defined. An example is *.contoso.com. | True, False | `Equals`, `Not Equals` | +### Freeform filters -### Free form filters --The following filters require that the user manually enters the value with which they want to search. Many of these values are case-sensitive. +The following filters require you to manually enter the value you want to use for your search. Many of these values are case sensitive. | Filter name | Description | Value format | Applicable operators | |--|-||--|-| UUID | The universally unique identifier assigned to a particular asset. | acabe677-f0c6-4807-ab4e-3a59d9e66b22 | `Equals` `Not Equals` `In` `Not In` | -| Name | The name of an asset. | Must align to the format of the asset name as listed in Inventory. For instance, a host would appear as "mail.contoso.com" or an IP as "192.168.92.73". | `Equals` `Not Equals` `Starts with` `Does not start with` `In` `Not In` `Starts with in` `Does not start with in` | -| External ID | An identifier provided by a third party. | Typically a numerical value. | `Equals` `Not Equals` `Starts with` `Does not start with` `Matches` `Does not match` `In` `Not In` `Starts with in` `Does not start with in` `Matches in` `Does not match in` `Contains` `Does Not Contain` `Contains In` `Does Not Contain In` `Empty` `Not Empty` | ---## Filtering for assets outside of your approved inventory --1. 
Select **Inventory** on the left-hand navigation bar to view your inventory. --2. To remove the Approved Inventory filter, select the "X" next to the **State = Approved** filter. This will expand your inventory list to include assets in other states (e.g. Dismissed). -- --3. Identify the asset(s) you'd want to find and how to best surface them using the inventory filters. You may wish to review all assets in the "Candidate" state, adding any assets within your organization's purview to "Approved Inventory". -- - --4. Instead, you may need to find a single specific asset that you wish to add to Approved Inventory. To discover a specific asset, apply a filter searching for the name. -- - --5. Once your inventory list contains the unapproved assets that you were searching for, you can modify the assets. For more information on updating assets, see the [Modifying inventory assets](labeling-inventory-assets.md) article. -+| UUID | The universally unique identifier assigned to a particular asset. | acabe677-f0c6-4807-ab4e-3a59d9e66b22 | `Equals`, `Not Equals`, `In`, `Not In` | +| Name | The name of an asset. | Must align to the format of the asset name as listed in inventory. For instance, a host would appear as mail.contoso.com or an IP as 192.168.92.73. | `Equals`, `Not Equals`, `Starts with`, `Does not start with`, `In`, `Not In`, `Starts with in`, `Does not start with in` | +| External ID | An identifier provided by a third party. | Typically a numerical value. | `Equals`, `Not Equals`, `Starts with`, `Does not start with`, `Matches`, `Does not match`, `In`, `Not In`, `Starts with in`, `Does not start with in`, `Matches in`, `Does not match in`, `Contains`, `Does Not Contain`, `Contains In`, `Does Not Contain In`, `Empty`, `Not Empty` | +## Filter for assets outside your approved inventory -## Next Steps +1. On the leftmost pane, select **Inventory** to view your inventory. -[Understanding asset details](understanding-asset-details.md) +1. To remove the **Approved** inventory filter, select the **X** next to the **State = Approved** filter. Your inventory list expands to include assets in other states, such as **Dismissed**. -[ASN asset filters](asn-asset-filters.md) +  -[Contact asset filters](contact-asset-filters.md) +1. Use the inventory filters to identify the assets you want to find. You might want to review all assets in the **Candidate** state. You can also add any assets that are important to your organization to the **Approved** inventory. -[Domain asset filters](domain-asset-filters.md) +  +  -[Host asset filters](host-asset-filters.md) +1. Or you might need to find a single specific asset that you want to add to the **Approved** inventory. To discover a specific asset, apply a filter to search for the name. -[IP address asset filters](ip-address-asset-filters.md) +  +  -[IP block asset filters](ip-block-asset-filters.md) +1. When your inventory list shows the unapproved assets you were searching for, you can modify the assets. For more information on how to update assets, see [Modifying inventory assets](labeling-inventory-assets.md). 
-[Page asset filters](page-asset-filters.md) +## Next steps -[SSL certificate asset filters](ssl-certificate-asset-filters.md) +- [Understand asset details](understanding-asset-details.md) +- [ASN asset filters](asn-asset-filters.md) +- [Contact asset filters](contact-asset-filters.md) +- [Domain asset filters](domain-asset-filters.md) +- [Host asset filters](host-asset-filters.md) +- [IP address asset filters](ip-address-asset-filters.md) +- [IP block asset filters](ip-block-asset-filters.md) +- [Page asset filters](page-asset-filters.md) +- [SSL certificate asset filters](ssl-certificate-asset-filters.md) |
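The distinction between whole-value operators like `Equals` and tokenized operators like `Matches` is easy to get wrong when building queries. The following self-contained Python sketch mirrors the semantics described in the operators table above; the split-on-non-alphanumerics tokenizer is a simplification for illustration, not Defender EASM's actual implementation.

```python
# Illustrative sketch of Equals vs. Matches semantics from the operators table.
# The tokenization rule is an assumption for demonstration purposes only.
import re

def tokenize(value: str) -> list[str]:
    """Break a field value into terms, roughly how a tokenized match sees it."""
    return [term for term in re.split(r"[^A-Za-z0-9]+", value) if term]

def equals(field: str, search: str) -> bool:
    """Equals: the whole field value must match exactly."""
    return field == search

def matches(field: str, search: str) -> bool:
    """Matches: any single tokenized term must match exactly."""
    return search in tokenize(field)

asset = "mail.contoso.com"
print(equals(asset, "contoso"))           # False: not the whole value
print(matches(asset, "contoso"))          # True: "contoso" is one term
print(equals(asset, "mail.contoso.com"))  # True: exact whole-value match
```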
external-attack-surface-management | Labeling Inventory Assets | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/labeling-inventory-assets.md | Title: Modifying inventory assets - -description: This article outlines how to update assets with labels (custom text values of a user's choice) for improved categorization and operationalization of their inventory data. It also dives into + Title: Modify inventory assets +description: This article outlines how to update assets with customized text labels to categorize and make use of inventory data. Last updated 3/1/2022 -# Modifying inventory assets +# Modify inventory assets -This article outlines how to modify inventory assets. Users can change the state of an asset or apply labels to help better contextualize and operationalize inventory data. This article describes how to modify a single asset or multiple assets, and track any updates with the Task Manager. +This article outlines how to modify inventory assets. You can change the state of an asset or apply labels to help provide context and use inventory data. This article describes how to modify a single asset or multiple assets and track any updates with the Task Manager. -## Labeling assets +## Label assets -Labels help you organize your attack surface and apply business context in a highly customizable way. You can apply any text label to a subset of assets to group assets and better operationalize your inventory. Customers commonly categorize assets that: +Labels help you organize your attack surface and apply business context in a customizable way. You can apply any text label to a subset of assets to group assets and make better use of your inventory. Customers commonly categorize assets that: -- Have recently come under your organization's ownership through a merger or acquisition.+- Have recently come under your organization's ownership through a merger or acquisition. - Require compliance monitoring.-- Are owned by a specific business unit in their organization. -- Are impacted by a specific vulnerability that requires mitigation. +- Are owned by a specific business unit in their organization. +- Are affected by a specific vulnerability that requires mitigation. - Relate to a particular brand owned by the organization. - Were added to your inventory within a specific time range. -Labels are free-form text fields, so you can create a label for any use case that applies to your organization. +Labels are freeform text fields, so you can create a label for any use case that applies to your organization. -[](media/labels-1a.png#lightbox) +[](media/labels-1a.png#lightbox) ## Apply labels and modify asset states +You can apply labels or modify asset states from both the inventory list and asset details pages. You can make changes to a single asset from the asset details page. You can make changes to multiple assets from the inventory list page. The following sections describe how to apply changes from the two inventory views depending on your use case. -## Applying labels and modifying asset states -Users can apply labels or modify asset states from both the inventory list and asset details pages. You can make changes to a single asset from the asset details page, or multiple assets from the inventory list page. The following sections describe how to apply changes from the two inventory views depending on your use case. 
+You should modify assets from the inventory list page if you want to update numerous assets at once. You can refine your asset list based on filter parameters. This process helps you to identify assets that should be categorized with the label or state change that you want. To modify assets from this page: -### Inventory list page +1. On the leftmost pane of your Microsoft Defender External Attack Surface Management (Defender EASM) resource, select **Inventory**. -You should modify assets from the inventory list page if you want to update numerous assets at once. This process also allows you to refine your asset list based on filter parameters, helping you identify assets that should be categorized with the desired label or state change. To modify assets from this page: +1. Apply filters to produce your intended results. In this example, we're looking for domains that expire within 30 days that require renewal. The applied label helps you more quickly access any expiring domains to simplify the remediation process. You can apply as many filters as necessary to obtain the specific results that are needed. For more information on filters, see [Inventory filters overview](inventory-filters.md). -1. Select the **Inventory** page from the left-hand navigation pane of your Defender EASM resource. +  -2. Apply filters to produce your intended results. In this example, we are looking for domains expiring within 30 days that require renewal. The applied label helps you more quickly access any expiring domains, simplifying the remediation process. This is a simple use case; users can apply as many filters as needed to obtain the specific results needed. For more information on filters, see the [Inventory filters overview](inventory-filters.md) article. +1. After your inventory list is filtered, select the dropdown by the checkbox next to the **Asset** table header. This dropdown gives you the option to select all results that match your query or the results on that specific page (up to 25). The **None** option clears all assets. You can also choose to select only specific results on the page by selecting the individual check marks next to each asset. - +  -3. Once your inventory list is filtered, select the dropdown by checkbox next to the "Asset" table header. This dropdown gives you the option to select all results that match your query, the results on that specific page (up to 25), or "none" which unselects all assets. You can also choose to select only specific results on the page by selecting the individual checkmarks next to each asset. +1. Select **Modify assets**. - - +1. On the **Modify Assets** pane that opens on the right side of your screen, you can quickly change the state of the selected assets. For this example, you create a new label. Select **Create a new label**. -4. Select **Modify assets**. +1. Determine the label name and display text values. The label name can't be changed after you initially create the label, but the display text can be edited at a later time. The label name is used to query for the label in the product interface or via API, so edits are disabled to ensure these queries work properly. To edit a label name, you need to delete the original label and create a new one. -5. This action opens a new "Modify Assets" pane on the right-hand side of your screen. From this screen, you can quickly change the state of the selected asset(s). For this example, we will create a new label. Select **Create a new label**. -6. Determine the label name and display text values. 
The label name cannot be changed after you initially create the label, but the display text can be edited at a later time. The label name is used to query for the label in the product interface or via API, so edits are disabled to ensure these queries work properly. To edit a label name, you need to delete the original label and create a new one. - -Select a color for your new label, then select **Add**. This action navigates you back to the "Modify Assets" screen. + Select a color for your new label and select **Add**. This action takes you back to the **Modify Assets** screen. - +  +1. Apply your new label to the assets. Click inside the **Add labels** text box to view a full list of available labels. Or you can type inside the box to search by keyword. After you select the labels you want to apply, select **Update**. -7. Apply your new label to the assets. Click inside the "Add labels" text box to view a full list of available labels, or type inside the box to search by keyword. Once you have selected the label(s) you wish to apply, select **Update**. +  - +1. Allow a few moments for the labels to be applied. After the process is finished, you see a "Completed" notification. The page automatically refreshes and displays your asset list with the labels visible. A banner at the top of the screen confirms that your labels were applied. -8. Allow a few moments for the labels to be applied. You will immediately see a notification that confirms the update is in progress. Once complete, you'll see a "completed" notification and the page automatically refreshes, displaying your asset list with the labels visible. A banner at the top of the screen confirms that your labels have been applied. + [](media/labels-6.png#lightbox) -[](media/labels-6.png#lightbox) +### Asset details page +You can also modify a single asset from the asset details page. This option is ideal for situations when assets need to be thoroughly reviewed before a label or state change is applied. ### Asset details page +1. On the leftmost pane of your Defender EASM resource, select **Inventory**. -Users can also modify a single asset from the asset details page. This is ideal for situations when assets need to be thoroughly reviewed before a label or state change is applied. - +1. Select the specific asset you want to modify to open the asset details page. -1. Select the **Inventory** page from the left-hand navigation pane of your Defender EASM resource. - -2. Select the specific asset to which you want to modify to open the asset details page. - -3. From this page, select **Modify asset**. +1. On this page, select **Modify asset**. - +  -4. Follow steps 5-7 as listed above in the "Inventory list page" section. +1. Follow steps 5 to 7 in the "Inventory list page" section. -5. Once complete, the asset details page refreshes, displaying the newly applied label or state change and a banner that indicates the asset was successfully updated. +1. The asset details page refreshes and displays the newly applied label or state change. A banner indicates that the asset was successfully updated. ## Modify, remove, or delete labels -## Modify, remove or delete labels +Users can remove a label from an asset by accessing the same **Modify asset** pane from either the inventory list or asset details view. From the inventory list view, you can select multiple assets at once and then add or remove the desired label in one action. 
-Users may remove a label from an asset by accessing the same "Modify asset" pane from either the inventory list or asset details view. From the inventory list view, you can select multiple assets at once and then add or remove the desired label in one action. +To modify the label itself or delete a label from the system: -To modify the label itself or delete a label from the system, access the main Labels management page. - +1. On the leftmost pane of your Defender EASM resource, select **Labels (Preview)**. -1. Select the **Labels (Preview)** page under the **Manage** section in the left-hand navigation pane of your Defender EASM resource. + [](media/labels-8a.png#lightbox) -[](media/labels-8a.png#lightbox) + This page displays all the labels within your Defender EASM inventory. Labels on this page might exist in the system but not be actively applied to any assets. You can also add new labels from this page. -This page displays all the labels within your Defender EASM inventory. Please note that labels on this page may exist in the system but not be actively applied to any assets. You can also add new labels from this page. +1. To edit a label, select the pencil icon in the **Actions** column of the label you want to edit. A pane opens on the right side of your screen where you can modify the name or color of a label. Select **Update**. -2. To edit a label, select the pencil icon in the **Actions** column of the label you wish to edit. This action will open the right-hand pane that allows you to modify the name or color of a label. Once done, select **Update**. +1. To remove a label, select the trash can icon from the **Actions** column of the label you want to delete. Select **Remove Label**. -3. To remove a label, select the trash can icon from the **Actions** column of the label you wish to delete. A box appears that asks you to confirm the removal of this label; select **Remove Label** to confirm. +  - - - -The Labels page will automatically refresh and the label will be removed from the list, as well as removed from any assets that had the label applied. A banner appears to confirm the removal. +The **Labels** page automatically refreshes. The label is removed from the list and also removed from any assets that had the label applied. A banner confirms the removal. ## Task Manager and notifications +After a task is submitted, a notification confirms that the update is in progress. From any page in Azure, select the notification (bell) icon to see more information about recent tasks. -## Task manager and notifications + + -Once a task is submitted, you will immediately see a notification pop-up that confirms that the update is in progress. From any page in Azure, simply click on the notification (bell) icon to view additional information about recent tasks. +The Defender EASM system can take seconds to update a handful of assets or minutes to update thousands. You can use the Task Manager to check on the status of any modification tasks in progress. This section outlines how to access the Task Manager and use it to better understand the completion of submitted updates. -  +1. On the leftmost pane of your Defender EASM resource, select **Task Manager**. +  -The Defender EASM system can take seconds to update a handful of assets or minutes to update thousands. The Task Manager enables you to check on the status of any modification tasks in progress. This section outlines how to access the Task Manager and use it to better understand the completion of submitted updates. -1. From your Defender EASM resource, select **Task Manager** on the left-hand navigation menu. - +1. 
This page displays all your recent tasks and their status. Tasks are listed as **Completed**, **Failed**, or **In Progress**. A completion percentage and progress bar also appear. To see more details about a specific task, select the task name. A pane opens on the right side of your screen that provides more information. -1. From your Defender EASM resource, select **Task Manager** on the left-hand navigation menu. - +1. Select **Refresh** to see the latest status of all items in the Task Manager. -2. This page displays all your recent tasks and their status. Tasks will be listed as "Completed", "Failed" or "In Progress" with a completion percentage and progress bar also displayed. To see more details about a specific task, simply select the task name. A right-hand pane will open that provides additional information. +## Filter for labels -3. Select **Refresh** to see the latest status of all items in the Task Manager. +After you label assets in your inventory, you can use inventory filters to retrieve a list of all assets with a specific label applied. +1. On the leftmost pane of your Defender EASM resource, select **Inventory**. +1. Select **Add filter**. +1. Select **Labels** from the **Filter** dropdown list. Select an operator and choose a label from the dropdown list of options. The following example shows how to search for a single label. You can use the **In** operator to search for multiple labels. For more information on filters, see the [inventory filters overview](inventory-filters.md). +  +1. Select **Apply**. The inventory list page reloads and displays all assets that match your criteria. --## Filtering for labels -Once you have labeled assets in your inventory, you can use inventory filters to retrieve a list of all assets with a specific label applied. --1. Select the **Inventory** page from the left-hand navigation pane of your Defender EASM resource. --2. Select **Add filter**. - -3. Select **Labels** from the Common filter section. Select an operator, then choose a label from the drop-down list of options. The example below is searching for a single label, but you can use the "In" operator to search for multiple labels. For more information on filters, see the [Inventory filters overview](inventory-filters.md) - --4. Select **Apply**. The inventory list page will reload, displaying all assets that match your criteria. ----## Next steps - [Inventory filters overview](inventory-filters.md)-- [Understanding inventory assets](understanding-inventory-assets.md) -- [Understanding asset details](understanding-asset-details.md)-+- [Inventory filters overview](inventory-filters.md) +- [Understand inventory assets](understanding-inventory-assets.md) +- [Understand asset details](understanding-asset-details.md) |
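For bulk workflows, the article notes that label names are also used to query via API. As a heavily hedged illustration, the sketch below applies a label to all assets matching a filter through the Defender EASM data plane; the endpoint shape, token scope, API version, filter syntax, and response schema are all assumptions about the preview REST surface, so verify each against the current REST reference before use.

```python
# Hedged sketch: apply a label to every asset matching a filter via the Defender
# EASM data-plane REST API (preview). All identifiers below are assumptions.
import requests
from azure.identity import DefaultAzureCredential

ENDPOINT = (
    "https://<region>.easm.defender.microsoft.com"
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/workspaces/<easm-workspace>"
)  # assumed data-plane endpoint shape

token = DefaultAzureCredential().get_token(
    "https://easm.defender.microsoft.com/.default"  # assumed token scope
)

resp = requests.post(
    f"{ENDPOINT}/assets",
    params={
        "filter": 'state = "confirmed" AND kind = "domain"',  # assumed filter syntax
        "api-version": "2022-04-01-preview",                  # assumed API version
    },
    headers={"Authorization": f"Bearer {token.token}"},
    json={"labels": {"expiring-soon": True}},  # True applies; False would remove
)
resp.raise_for_status()
print(resp.json())  # label updates run asynchronously as trackable tasks
```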
external-attack-surface-management | Understanding Asset Details | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/understanding-asset-details.md | Title: Understanding asset details -description: Understanding asset details- Microsoft Defender External Attack Surface Management (Defender EASM) relies on our proprietary discovery technology to continuously define your organization's unique Internet-exposed attack surface. + Title: Understand asset details +description: Learn how Microsoft Defender External Attack Surface Management discovers and defines your organization's internet-exposed attack surface. Last updated 07/14/2022 -# Understanding asset details +# Understand asset details -## Overview +Microsoft Defender External Attack Surface Management (Defender EASM) frequently scans all inventory assets and collects robust contextual metadata that powers Attack Surface Insights. This data can also be viewed more granularly on the asset details page. The data that's provided changes depending on the asset type. For instance, the platform provides unique Whois data for domains, hosts, and IP addresses. It provides signature algorithm data for Secure Sockets Layer (SSL) certificates. -Defender EASM frequently scans all inventory assets, collecting robust contextual metadata that powers Attack Surface Insights and can also be viewed more granularly on the Asset Details page. The provided data changes depending on the asset type. For instance, the platform provides unique WHOIS data for domains, hosts and IP addresses and signature algorithm data for SSL certificates. +This article describes how to view and interpret the expansive data collected by Microsoft for each of your inventory assets. It defines this metadata for each asset type and explains how the insights derived from it can help you manage the security posture of your online infrastructure. -This article provides guidance on how to view and interpret the expansive data collected by Microsoft for each of your inventory assets. It defines this metadata for each asset type and explains how the insights derived from it can help you manage the security posture of your online infrastructure. --*For more information, see [understanding inventory assets](understanding-inventory-assets.md) to familiarize yourself with the key concepts mentioned in this article.* +For more information, see [Understanding inventory assets](understanding-inventory-assets.md) to familiarize yourself with the key concepts mentioned in this article. ## Asset details summary view -You can view the Asset Details page for any asset by clicking on its name from your inventory list. On the left pane of this page, you can view an asset summary that provides key information about that particular asset. This section is primarily comprised of data that applies to all asset types, although additional fields will be available in some cases. The chart below for more information on the metadata provided for each asset type in the summary section. +You can view the asset details page for any asset by selecting its name from your inventory list. On the left pane of this page, you can view an asset summary that provides key information about that particular asset. This section primarily includes data that applies to all asset types, although more fields are available in some cases. For more information on the metadata provided for each asset type in the summary section, see the following chart. 
### General information
-This section is comprised of high-level information that is key to understanding your assets at a glance. Most of these fields are applicable to all assets, although this section can also include information that is specific to one or more asset types.
+This section includes high-level information that's key to understanding your assets at a glance. Most of these fields apply to all assets. This section can also include information that's specific to one or more asset types.
-| Name | Definition | Asset Types |
+| Name | Definition | Asset types |
|--|--|--|
-| Asset Name | The name of an asset. | All |
-| UUID | This 128-bit label represents the universally unique identifier (UUID) for the | All |
-| Added to inventory | The date that an asset was added to inventory, whether automatically to the "Approved Inventory" state or in another state (e.g. "Candidate"). | All |
-| Status | The status of the asset within the RiskIQ system. Options include Approved Inventory, Candidate, Dependencies, or Requires Investigation. | All |
-| First seen (Global Security Graph) | The date that Microsoft first scanned the asset and added it to our comprehensive Global Security Graph. | All |
+| Asset name | The name of an asset. | All |
+| UUID | This 128-bit label represents the universally unique identifier (UUID) for the asset. | All |
+| Added to inventory | The date that an asset was added to inventory, whether it was automatically added to the **Approved Inventory** state or it's in another state like **Candidate**. | All |
+| Status | The status of the asset within the RiskIQ system. Options include **Approved Inventory**, **Candidate**, **Dependencies**, or **Requires Investigation**. | All |
+| First seen (Global Security Graph) | The date that Microsoft first scanned the asset and added it to the comprehensive Global Security Graph. | All |
| Last seen (Global Security Graph) | The date that Microsoft most recently scanned the asset. | All |
-| Discovered on | Indicates the creation date of the Discovery Group that detected the asset. | All |
-| Last updated | The date that the asset was last updated by a manual user actions (e.g. a state change, asset removal). | All |
+| Discovered on | Indicates the creation date of the discovery group that detected the asset. | All |
+| Last updated | The date that a manual user last updated the asset (for example, by making a state change or asset removal). | All |
| Country | The country of origin detected for this asset. | All |
| State/Province | The state or province of origin detected for this asset. | All |
| City | The city of origin detected for this asset. | All |
-| WhoIs name | The name associated with a Whois record. | Host |
-| WhoIs email | The primary contact email in a Whois record. | Host |
-| WhoIS organization | The listed organization in a Whois record. | Host |
-| WhoIs registrar | The listed registrar in a Whois record. | Host |
-| WhoIs name servers | The listed name servers in a Whois record. | Host |
+| Whois name | The name associated with a Whois record. | Host |
+| Whois email | The primary contact email in a Whois record. | Host |
+| Whois organization | The listed organization in a Whois record. | Host |
+| Whois registrar | The listed registrar in a Whois record. | Host |
+| Whois name servers | The listed name servers in a Whois record. | Host |
| Certificate issued | The date when a certificate was issued. | SSL certificate |
-| Certificate expires | The date when a certificate will expire. | SSL certificate |
+| Certificate expires | The date when a certificate expires. | SSL certificate |
| Serial number | The serial number associated with an SSL certificate. | SSL certificate |
-| SSL version | The version of SSL that the certificate was registered | SSL certificate |
+| SSL version | The version of SSL under which the certificate was registered. | SSL certificate |
| Certificate key algorithm | The key algorithm used to encrypt the SSL certificate. | SSL certificate |
-| Certificate key size | The number of bits within a SSL certificate key. | SSL certificate |
-| Signature algorithm oid | The OID identifying the hash algorithm used to sign the certificate request. | SSL certificate |
+| Certificate key size | The number of bits in an SSL certificate key. | SSL certificate |
+| Signature algorithm OID | The OID that identifies the hash algorithm used to sign the certificate request. | SSL certificate |
| Self-signed | Indicates whether the SSL certificate was self-signed. | SSL certificate |
### Network
-IP address information that provides additional context about the usage of the IP.
+The following IP address information provides more context about the use of the IP.
-| Name | Defi