Updates from: 05/03/2022 01:08:24
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Single Sign On Saml Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/single-sign-on-saml-protocol.md
Previously updated : 08/24/2021 Last updated : 02/05/2022
The protocol diagram below describes the single sign-on sequence. The cloud service (the service provider) uses an HTTP Redirect binding to pass an `AuthnRequest` (authentication request) element to Azure AD (the identity provider). Azure AD then uses an HTTP POST binding to post a `Response` element to the cloud service.
To request a user authentication, cloud services send an `AuthnRequest` element to Azure AD. A sample SAML 2.0 `AuthnRequest` could look like the following example:
-```
+```xml
<samlp:AuthnRequest
-xmlns="urn:oasis:names:tc:SAML:2.0:metadata"
-ID="id6c1c178c166d486687be4aaf5e482730"
-Version="2.0" IssueInstant="2013-03-18T03:28:54.1839884Z"
-xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol">
-<Issuer xmlns="urn:oasis:names:tc:SAML:2.0:assertion">https://www.contoso.com</Issuer>
+ xmlns="urn:oasis:names:tc:SAML:2.0:metadata"
+ ID="id6c1c178c166d486687be4aaf5e482730"
+ Version="2.0" IssueInstant="2013-03-18T03:28:54.1839884Z"
+ xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol">
+ <Issuer xmlns="urn:oasis:names:tc:SAML:2.0:assertion">https://www.contoso.com</Issuer>
</samlp:AuthnRequest>
```
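For the HTTP Redirect binding, the request above is DEFLATE-compressed, base64-encoded, and placed in the `SAMLRequest` query parameter. The Python sketch below is a minimal illustration of that encoding, not a production SAML library: the `sso_url` and `issuer` values are placeholders, the request omits optional elements such as `NameIDPolicy`, and a real service provider should record the generated `ID` so it can later check `InResponseTo` in the response.

```python
import base64
import datetime
import urllib.parse
import uuid
import zlib

def build_redirect_url(sso_url: str, issuer: str) -> str:
    """Build an HTTP Redirect binding URL carrying a minimal AuthnRequest.

    The redirect binding requires raw DEFLATE compression (no zlib header),
    then base64, then URL encoding into the SAMLRequest query parameter.
    """
    request_id = "id" + uuid.uuid4().hex
    issue_instant = datetime.datetime.now(datetime.timezone.utc).strftime(
        "%Y-%m-%dT%H:%M:%SZ"
    )
    authn_request = (
        '<samlp:AuthnRequest'
        ' xmlns="urn:oasis:names:tc:SAML:2.0:metadata"'
        f' ID="{request_id}"'
        f' Version="2.0" IssueInstant="{issue_instant}"'
        ' xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol">'
        '<Issuer xmlns="urn:oasis:names:tc:SAML:2.0:assertion">'
        f'{issuer}</Issuer>'
        '</samlp:AuthnRequest>'
    )
    # zlib.compress produces a zlib stream; strip the 2-byte header and
    # 4-byte checksum to get the raw DEFLATE stream SAML expects.
    deflated = zlib.compress(authn_request.encode("utf-8"))[2:-4]
    saml_request = base64.b64encode(deflated).decode("ascii")
    return sso_url + "?" + urllib.parse.urlencode({"SAMLRequest": saml_request})
```

Redirecting the browser to the returned URL starts the sign-on sequence; Azure AD inflates and parses the `SAMLRequest` value on arrival.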
The `Issuer` element in an `AuthnRequest` must exactly match one of the **ServicePrincipalNames** in the cloud service. Typically, this is set to the **App ID URI** that is specified during application registration.
A SAML excerpt containing the `Issuer` element looks like the following sample:
-```
+```xml
<Issuer xmlns="urn:oasis:names:tc:SAML:2.0:assertion">https://www.contoso.com</Issuer>
```
This element requests a particular name ID format in the response and is optional in `AuthnRequest` elements sent to Azure AD.
A `NameIdPolicy` element looks like the following sample:
-```
+```xml
<NameIDPolicy Format="urn:oasis:names:tc:SAML:2.0:nameid-format:persistent"/>
```
If `SPNameQualifier` is specified, Azure AD will include the same `SPNameQualifier` in the response.
Azure AD ignores the `AllowCreate` attribute.

### RequestedAuthnContext

The `RequestedAuthnContext` element specifies the desired authentication methods. It is optional in `AuthnRequest` elements sent to Azure AD. Azure AD supports `AuthnContextClassRef` values such as `urn:oasis:names:tc:SAML:2.0:ac:classes:Password`.

### Scoping

The `Scoping` element, which includes a list of identity providers, is optional in `AuthnRequest` elements sent to Azure AD. If provided, don't include the `ProxyCount` attribute, `IDPListOption` element, or `RequesterID` element, as they aren't supported.

### Signature

A `Signature` element in `AuthnRequest` elements is optional. Azure AD does not validate signed authentication requests if a signature is present. Requestor verification is provided by responding only to registered Assertion Consumer Service URLs.

### Subject

Don't include a `Subject` element. Azure AD doesn't support specifying a subject for a request and will return an error if one is provided.

## Response

When a requested sign-on completes successfully, Azure AD posts a response to the cloud service. A response to a successful sign-on attempt looks like the following sample:
-```
+```xml
<samlp:Response ID="_a4958bfd-e107-4e67-b06d-0d85ade2e76a" Version="2.0" IssueInstant="2013-03-18T07:38:15.144Z" Destination="https://contoso.com/identity/inboundsso.aspx" InResponseTo="id758d0ef385634593a77bdf7e632984b6" xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol">
  <Issuer xmlns="urn:oasis:names:tc:SAML:2.0:assertion"> https://login.microsoftonline.com/82869000-6ad1-48f0-8171-272ed18796e9/</Issuer>
  <ds:Signature xmlns:ds="https://www.w3.org/2000/09/xmldsig#">
Azure AD sets the `Issuer` element to `https://sts.windows.net/<TenantIDGUID>/`, where \<TenantIDGUID> is the tenant ID of the Azure AD tenant.
For example, a response with Issuer element could look like the following sample:
-```
+```xml
<Issuer xmlns="urn:oasis:names:tc:SAML:2.0:assertion"> https://sts.windows.net/82869000-6ad1-48f0-8171-272ed18796e9/</Issuer>
```
The `Status` element conveys the success or failure of sign-on. It includes the `StatusCode` element, which contains a code or a set of nested codes that represents the status of the request, and the `StatusMessage` element, which contains custom error messages that are generated during the sign-on process.
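A service provider can read the `Status` element with standard XML tooling. The sketch below, using Python's `xml.etree.ElementTree`, is an illustration of the structure described above, not Microsoft library code; only the top-level `StatusCode` value determines success.

```python
import xml.etree.ElementTree as ET

SAMLP = "urn:oasis:names:tc:SAML:2.0:protocol"
SUCCESS = "urn:oasis:names:tc:SAML:2.0:status:Success"

def read_status(response_xml: str):
    """Return (top-level StatusCode value, StatusMessage text or None)."""
    root = ET.fromstring(response_xml)
    status = root.find(f"{{{SAMLP}}}Status")
    code = status.find(f"{{{SAMLP}}}StatusCode").attrib["Value"]
    message = status.find(f"{{{SAMLP}}}StatusMessage")
    return code, (message.text if message is not None else None)

def sign_on_succeeded(response_xml: str) -> bool:
    """Sign-on succeeded only if the top-level StatusCode is Success."""
    return read_status(response_xml)[0] == SUCCESS
```

On failure, the `StatusMessage` text carries the AADSTS error codes shown in the sample below, which are useful for diagnostics.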
The following sample is a SAML response to an unsuccessful sign-on attempt.
-```
+```xml
<samlp:Response ID="_f0961a83-d071-4be5-a18c-9ae7b22987a4" Version="2.0" IssueInstant="2013-03-18T08:49:24.405Z" InResponseTo="iddce91f96e56747b5ace6d2e2aa9d4f8c" xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol">
  <Issuer xmlns="urn:oasis:names:tc:SAML:2.0:assertion">https://sts.windows.net/82869000-6ad1-48f0-8171-272ed18796e9/</Issuer>
  <samlp:Status>
    <samlp:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Requester">
      <samlp:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:RequestUnsupported" />
    </samlp:StatusCode>
    <samlp:StatusMessage>AADSTS75006: An error occurred while processing a SAML2 Authentication request. AADSTS90011: The SAML authentication request property 'NameIdentifierPolicy/SPNameQualifier' is not supported.
-Trace ID: 66febed4-e737-49ff-ac23-464ba090d57c
-Timestamp: 2013-03-18 08:49:24Z</samlp:StatusMessage>
- </samlp:Status>
+ Trace ID: 66febed4-e737-49ff-ac23-464ba090d57c
+ Timestamp: 2013-03-18 08:49:24Z</samlp:StatusMessage>
+ </samlp:Status>
+</samlp:Response>
```

### Assertion
In addition to the `ID`, `IssueInstant`, and `Version`, Azure AD sets the following elements in the `Assertion` element of the response.
This is set to `https://sts.windows.net/<TenantIDGUID>/`, where \<TenantIDGUID> is the tenant ID of the Azure AD tenant.
-```
+```xml
<Issuer>https://sts.windows.net/82869000-6ad1-48f0-8171-272ed18796e9/</Issuer>
```
Azure AD signs the assertion in response to a successful sign-on. The `Signature` element contains a digital signature that the cloud service can use to authenticate the source and verify the integrity of the assertion.
To generate this digital signature, Azure AD uses the signing key in the `IDPSSODescriptor` element of its metadata document.
-```
+```xml
<ds:Signature xmlns:ds="https://www.w3.org/2000/09/xmldsig#">
- digital_signature_here
- </ds:Signature>
+ digital_signature_here
+</ds:Signature>
```

#### Subject
This specifies the principal that is the subject of the statements in the assertion. It contains a `NameID` element, which represents the authenticated user.
The `Method` attribute of the `SubjectConfirmation` element is always set to `urn:oasis:names:tc:SAML:2.0:cm:bearer`.
-```
+```xml
<Subject>
- <NameID>Uz2Pqz1X7pxe4XLWxV9KJQ+n59d573SepSAkuYKSde8=</NameID>
- <SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer">
- <SubjectConfirmationData InResponseTo="id758d0ef385634593a77bdf7e632984b6" NotOnOrAfter="2013-03-18T07:43:15.144Z" Recipient="https://contoso.com/identity/inboundsso.aspx" />
- </SubjectConfirmation>
+ <NameID>Uz2Pqz1X7pxe4XLWxV9KJQ+n59d573SepSAkuYKSde8=</NameID>
+ <SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer">
+ <SubjectConfirmationData InResponseTo="id758d0ef385634593a77bdf7e632984b6" NotOnOrAfter="2013-03-18T07:43:15.144Z" Recipient="https://contoso.com/identity/inboundsso.aspx" />
+ </SubjectConfirmation>
</Subject>
```
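With the bearer method, the service provider is expected to check the `SubjectConfirmationData` attributes itself. A minimal sketch of those checks follows; the assertion string, request ID, and recipient URL passed in are placeholders supplied by the caller, and real validation would also verify the signature first.

```python
import datetime
import xml.etree.ElementTree as ET

SAML = "urn:oasis:names:tc:SAML:2.0:assertion"

def _parse_instant(value: str) -> datetime.datetime:
    # SAML instants are UTC; fromisoformat needs +00:00 rather than a Z suffix.
    return datetime.datetime.fromisoformat(value.replace("Z", "+00:00"))

def confirm_subject(assertion_xml: str, expected_request_id: str,
                    expected_recipient: str, now: datetime.datetime) -> bool:
    """Check InResponseTo, Recipient, and NotOnOrAfter on the bearer subject."""
    root = ET.fromstring(assertion_xml)
    data = root.find(
        f"{{{SAML}}}Subject/{{{SAML}}}SubjectConfirmation/"
        f"{{{SAML}}}SubjectConfirmationData"
    )
    return (
        data.attrib.get("InResponseTo") == expected_request_id
        and data.attrib.get("Recipient") == expected_recipient
        and now < _parse_instant(data.attrib["NotOnOrAfter"])
    )
```

`InResponseTo` ties the assertion back to the `ID` of the `AuthnRequest` the service provider originally sent, which is why that ID should be stored per sign-on attempt.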
This element specifies conditions that define the acceptable use of SAML assertions.
-```
+```xml
<Conditions NotBefore="2013-03-18T07:38:15.128Z" NotOnOrAfter="2013-03-18T08:48:15.128Z">
- <AudienceRestriction>
- <Audience>https://www.contoso.com</Audience>
- </AudienceRestriction>
+ <AudienceRestriction>
+ <Audience>https://www.contoso.com</Audience>
+ </AudienceRestriction>
</Conditions>
```
The `NotBefore` and `NotOnOrAfter` attributes specify the interval during which the assertion is valid.
This contains a URI that identifies an intended audience. Azure AD sets the value of this element to the value of `Issuer` element of the `AuthnRequest` that initiated the sign-on. To evaluate the `Audience` value, use the value of the `App ID URI` that was specified during application registration.
-```
+```xml
<AudienceRestriction>
- <Audience>https://www.contoso.com</Audience>
+ <Audience>https://www.contoso.com</Audience>
</AudienceRestriction>
```
Like the `Issuer` value, the `Audience` value must exactly match one of the service principal names that represents the cloud service in Azure AD.
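The validity window and audience checks described above can be sketched together. This is an illustrative helper, not library code; the `my_audience` argument is the caller's App ID URI, and the small clock-skew allowance is an assumption commonly made to tolerate clock drift between parties.

```python
import datetime
import xml.etree.ElementTree as ET

SAML = "urn:oasis:names:tc:SAML:2.0:assertion"

def _instant(value: str) -> datetime.datetime:
    # Convert a SAML UTC instant ("...Z") into an aware datetime.
    return datetime.datetime.fromisoformat(value.replace("Z", "+00:00"))

def conditions_satisfied(
    assertion_xml: str,
    my_audience: str,
    now: datetime.datetime,
    skew: datetime.timedelta = datetime.timedelta(minutes=5),
) -> bool:
    """Check the assertion's validity window and audience restriction."""
    cond = ET.fromstring(assertion_xml).find(f"{{{SAML}}}Conditions")
    not_before = _instant(cond.attrib["NotBefore"]) - skew
    not_on_or_after = _instant(cond.attrib["NotOnOrAfter"]) + skew
    audiences = {a.text for a in cond.iter(f"{{{SAML}}}Audience")}
    return not_before <= now < not_on_or_after and my_audience in audiences
```

Note that `NotOnOrAfter` is an exclusive bound, hence the strict `<` comparison.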
This contains claims about the subject or user. The following excerpt contains a sample `AttributeStatement` element. The ellipsis indicates that the element can include multiple attributes and attribute values.
-```
+```xml
<AttributeStatement>
- <Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name">
- <AttributeValue>testuser@contoso.com</AttributeValue>
- </Attribute>
- <Attribute Name="http://schemas.microsoft.com/identity/claims/objectidentifier">
- <AttributeValue>3F2504E0-4F89-11D3-9A0C-0305E82C3301</AttributeValue>
- </Attribute>
- ...
+ <Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name">
+ <AttributeValue>testuser@contoso.com</AttributeValue>
+ </Attribute>
+ <Attribute Name="http://schemas.microsoft.com/identity/claims/objectidentifier">
+ <AttributeValue>3F2504E0-4F89-11D3-9A0C-0305E82C3301</AttributeValue>
+ </Attribute>
+ ...
</AttributeStatement>
```
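A service provider typically flattens the `AttributeStatement` into a claims dictionary keyed by attribute name. A minimal sketch using Python's standard library (the function name is illustrative):

```python
import xml.etree.ElementTree as ET

SAML = "urn:oasis:names:tc:SAML:2.0:assertion"

def extract_claims(assertion_xml: str) -> dict:
    """Map each Attribute's Name URI to the list of its AttributeValue texts."""
    root = ET.fromstring(assertion_xml)
    claims = {}
    for attribute in root.iter(f"{{{SAML}}}Attribute"):
        claims[attribute.attrib["Name"]] = [
            value.text for value in attribute.findall(f"{{{SAML}}}AttributeValue")
        ]
    return claims
```

Keeping values as lists matters because a single attribute (group membership, for example) can carry multiple `AttributeValue` elements.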
This element asserts that the assertion subject was authenticated by a particular means at a particular time.
* The `AuthnInstant` attribute specifies the time at which the user authenticated with Azure AD.
* The `AuthnContext` element specifies the authentication context used to authenticate the user.
-```
+```xml
<AuthnStatement AuthnInstant="2013-03-18T07:33:56.000Z" SessionIndex="_bf9c623d-cc20-407a-9a59-c2d0aee84d12">
- <AuthnContext>
- <AuthnContextClassRef> urn:oasis:names:tc:SAML:2.0:ac:classes:Password</AuthnContextClassRef>
- </AuthnContext>
+ <AuthnContext>
+ <AuthnContextClassRef> urn:oasis:names:tc:SAML:2.0:ac:classes:Password</AuthnContextClassRef>
+ </AuthnContext>
</AuthnStatement>
```
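The `AuthnStatement` fields can be read the same way as the other assertion elements. A minimal stdlib sketch (the function name is illustrative):

```python
import xml.etree.ElementTree as ET

SAML = "urn:oasis:names:tc:SAML:2.0:assertion"

def read_authn_statement(assertion_xml: str) -> dict:
    """Pull AuthnInstant, SessionIndex, and the AuthnContextClassRef."""
    stmt = ET.fromstring(assertion_xml).find(f"{{{SAML}}}AuthnStatement")
    class_ref = stmt.find(f"{{{SAML}}}AuthnContext/{{{SAML}}}AuthnContextClassRef")
    return {
        "authn_instant": stmt.attrib["AuthnInstant"],
        "session_index": stmt.attrib.get("SessionIndex"),
        # The sample above carries stray whitespace inside the element, so strip it.
        "class_ref": class_ref.text.strip(),
    }
```

The `SessionIndex` value is what a service provider later echoes back in a SAML logout request to end the specific session.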
active-directory Licensing Service Plan Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-service-plan-reference.md
Previously updated : 04/14/2022 Last updated : 05/02/2022
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
- **Service plans included (friendly names)**: A list of service plans (friendly names) in the product that correspond to the string ID and GUID

>[!NOTE]
->This information last updated on April 14th, 2022.<br/>You can also download a CSV version of this table [here](https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv).
+>This information last updated on May 2nd, 2022.<br/>You can also download a CSV version of this table [here](https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv).
><br/>

| Product name | String ID | GUID | Service plans included | Service plans included (friendly names) |
| --- | --- | --- | --- | --- |
| Common Data Service Database Capacity | CDS_DB_CAPACITY | e612d426-6bc3-4181-9658-91aa906b0ac0 | CDS_DB_CAPACITY (360bcc37-0c11-4264-8eed-9fa7a3297c9b)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | Common Data Service for Apps Database Capacity (360bcc37-0c11-4264-8eed-9fa7a3297c9b)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318) |
| Common Data Service Database Capacity for Government | CDS_DB_CAPACITY_GOV | eddf428b-da0e-4115-accf-b29eb0b83965 | CDS_DB_CAPACITY_GOV (1ddffef6-4f69-455e-89c7-d5d72105f915)<br/>EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8) | Common Data Service for Apps Database Capacity for Government (1ddffef6-4f69-455e-89c7-d5d72105f915)<br/>Exchange Foundation for Government (922ba911-5694-4e99-a794-73aed9bfeec8) |
| Common Data Service Log Capacity | CDS_LOG_CAPACITY | 448b063f-9cc6-42fc-a0e6-40e08724a395 | CDS_LOG_CAPACITY (dc48f5c5-e87d-43d6-b884-7ac4a59e7ee9)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | Common Data Service for Apps Log Capacity (dc48f5c5-e87d-43d6-b884-7ac4a59e7ee9)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318) |
-| COMMUNICATIONS CREDITS | MCOPSTNC | 47794cd0-f0e5-45c5-9033-2eb6b5fc84e0 | MCOPSTNC (505e180f-f7e0-4b65-91d4-00d670bbd18c) | COMMUNICATIONS CREDITS (505e180f-f7e0-4b65-91d4-00d670bbd18c) |
+| COMMUNICATIONS CREDITS | MCOPSTNC | 47794cd0-f0e5-45c5-9033-2eb6b5fc84e0 | MCOPSTNC (505e180f-f7e0-4b65-91d4-00d670bbd18c) | COMMUNICATIONS CREDITS (505e180f-f7e0-4b65-91d4-00d670bbd18c) |
+| Compliance Manager Premium Assessment Add-On for GCC | CMPA_addon_GCC | a9d7ef53-9bea-4a2a-9650-fa7df58fe094 | COMPLIANCE_MANAGER_PREMIUM_ASSESSMENT_ADDON (3a117d30-cfac-4f00-84ac-54f8b6a18d78) | Compliance Manager Premium Assessment Add-On (3a117d30-cfac-4f00-84ac-54f8b6a18d78) |
| Dynamics 365 - Additional Database Storage (Qualified Offer) | CRMSTORAGE | 328dc228-00bc-48c6-8b09-1fbc8bc3435d | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>CRMSTORAGE (77866113-0f3e-4e6e-9666-b1e25c6f99b0) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Dynamics CRM Online Storage Add-On (77866113-0f3e-4e6e-9666-b1e25c6f99b0) |
| Dynamics 365 - Additional Production Instance (Qualified Offer) | CRMINSTANCE | 9d776713-14cb-4697-a21d-9a52455c738a | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>CRMINSTANCE (eeea837a-c885-4167-b3d5-ddde30cbd85f) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Dynamics CRM Online Instance (eeea837a-c885-4167-b3d5-ddde30cbd85f) |
| Dynamics 365 - Additional Non-Production Instance (Qualified Offer) | CRMTESTINSTANCE | e06abcc2-7ec5-4a79-b08b-d9c282376f72 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>CRMTESTINSTANCE (a98b7619-66c7-4885-bdfc-1d9c8c3d279f) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Dynamics CRM Online Additional Test Instance (a98b7619-66c7-4885-bdfc-1d9c8c3d279f) |
| Microsoft 365 F1 | M365_F1_COMM | 50f60901-3181-4b75-8a2c-4c8e4c1d5a72 | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>DYN365_CDS_O365_F1 (ca6e61ec-d4f4-41eb-8b88-d96e0e14323f)<br/>EXCHANGE_S_DESKLESS (4a82b400-a79f-41a4-b4e2-e94f5787b113)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>STREAM_O365_K (3ffba0d2-38e5-4d5e-8ec0-98f2b05c09d9)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>SHAREPOINTDESKLESS (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>MCOIMP (afc06cb0-b4f4-4473-8286-d644f70d8faf)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/> RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>DYN365_CDS_O365_F1 (ca6e61ec-d4f4-41eb-8b88-d96e0e14323f)<br/>EXCHANGE_S_DESKLESS (4a82b400-a79f-41a4-b4e2-e94f5787b113)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>STREAM_O365_K (3ffba0d2-38e5-4d5e-8ec0-98f2b05c09d9)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>SHAREPOINTDESKLESS (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>MCOIMP (afc06cb0-b4f4-4473-8286-d644f70d8faf)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) |
| Microsoft 365 F3 GCC | M365_F1_GOV | 2a914830-d700-444a-b73c-e3f31980d833 | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_PREMIUM_GOV (1b66aedf-8ca1-4f73-af76-ec76c6180f98)<br/>RMS_S_ENTERPRISE_GOV (6a76346d-5d6e-4051-9fe3-ed3f312b5597)<br/>DYN365_CDS_O365_F1_GCC (29007dd3-36c0-4cc2-935d-f5bca2c2c473)<br/>CDS_O365_F1_GCC (5e05331a-0aec-437e-87db-9ef5934b5771)<br/>EXCHANGE_S_DESKLESS_GOV (88f4d7ef-a73b-4246-8047-516022144c9f)<br/>FORMS_GOV_F1 (bfd4133a-bbf3-4212-972b-60412137c428)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>STREAM_O365_K_GOV (d65648f1-9504-46e4-8611-2658763f28b8)<br/>TEAMS_GOV (304767db-7d23-49e8-a945-4a7eb65f9f28)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>PROJECTWORKMANAGEMENT_GOV (5b4ef465-7ea1-459a-9f91-033317755a51)<br/>SHAREPOINTWAC_GOV (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>OFFICEMOBILE_SUBSCRIPTION_GOV (4ccb60ee-9523-48fd-8f63-4b090f1ad77a)<br/>POWERAPPS_O365_S1_GOV (49f06c3d-da7d-4fa0-bcce-1458fdd18a59)<br/>FLOW_O365_S1_GOV (5d32692e-5b24-4a59-a77e-b2a8650e25c1)<br/>SHAREPOINTDESKLESS_GOV (b1aeb897-3a19-46e2-8c27-a609413cf193)<br/>MCOIMP_GOV (8a9f17f1-5872-44e8-9b11-3caade9dc90f)<br/>BPOS_S_TODO_FIRSTLINE (80873e7a-cd2a-4e67-b061-1b5381a676a5)<br/>WHITEBOARD_FIRSTLINE1 (36b29273-c6d0-477a-aca6-6fbe24f538e3) | Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Information Protection Premium P1 for GCC (1b66aedf-8ca1-4f73-af76-ec76c6180f98)<br/>Azure Rights Management (6a76346d-5d6e-4051-9fe3-ed3f312b5597)<br/>Common Data Service - O365 F1 GCC (29007dd3-36c0-4cc2-935d-f5bca2c2c473)<br/>Common Data Service for Teams_F1 GCC (5e05331a-0aec-437e-87db-9ef5934b5771)<br/>Exchange Online (Kiosk) for Government (88f4d7ef-a73b-4246-8047-516022144c9f)<br/>Forms for Government (Plan F1) (bfd4133a-bbf3-4212-972b-60412137c428)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft Stream for O365 for Government (F1) (d65648f1-9504-46e4-8611-2658763f28b8)<br/>Microsoft Teams for Government (304767db-7d23-49e8-a945-4a7eb65f9f28)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office 365 Planner for Government (5b4ef465-7ea1-459a-9f91-033317755a51)<br/>Office for the Web for Government (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>Office Mobile Apps for Office 365 for GCC (4ccb60ee-9523-48fd-8f63-4b090f1ad77a)<br/>Power Apps for Office 365 F3 for Government (49f06c3d-da7d-4fa0-bcce-1458fdd18a59)<br/>Power Automate for Office 365 F3 for Government (5d32692e-5b24-4a59-a77e-b2a8650e25c1)<br/>SharePoint KioskG (b1aeb897-3a19-46e2-8c27-a609413cf193)<br/>Skype for Business Online (Plan 1) for Government (8a9f17f1-5872-44e8-9b11-3caade9dc90f)<br/>To-Do (Firstline) (80873e7a-cd2a-4e67-b061-1b5381a676a5)<br/>Whiteboard (Firstline) (36b29273-c6d0-477a-aca6-6fbe24f538e3) |
| MICROSOFT 365 G3 GCC | M365_G3_GOV | e823ca47-49c4-46b3-b38d-ca11d5abe3d2 | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_ENTERPRISE_GOV (6a76346d-5d6e-4051-9fe3-ed3f312b5597)<br/>RMS_S_PREMIUM_GOV (1b66aedf-8ca1-4f73-af76-ec76c6180f98)<br/>DYN365_CDS_O365_P2_GCC (06162da2-ebf9-4954-99a0-00fee96f95cc)<br/>CDS_O365_P2_GCC (a70bbf38-cdda-470d-adb8-5804b8770f41)<br/>EXCHANGE_S_ENTERPRISE_GOV (8c3069c0-ccdb-44be-ab77-986203a67df2)<br/>FORMS_GOV_E3 (24af5f65-d0f3-467b-9f78-ea798c4aeffc)<br/>CONTENT_EXPLORER (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>CONTENTEXPLORER_STANDARD (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>MYANALYTICS_P2_GOV (6e5b7995-bd4f-4cbd-9d19-0e32010c72f0)<br/>OFFICESUBSCRIPTION_GOV (de9234ff-6483-44d9-b15e-dca72fdd27af)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>STREAM_O365_E3_GOV (2c1ada27-dbaa-46f9-bda6-ecb94445f758)<br/>TEAMS_GOV (304767db-7d23-49e8-a945-4a7eb65f9f28)<br/>PROJECTWORKMANAGEMENT_GOV (5b4ef465-7ea1-459a-9f91-033317755a51)<br/>SHAREPOINTWAC_GOV (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>POWERAPPS_O365_P2_GOV (0a20c815-5e81-4727-9bdc-2b5a117850c3)<br/>FLOW_O365_P2_GOV (c537f360-6a00-4ace-a7f5-9128d0ac1e4b)<br/>SHAREPOINTENTERPRISE_GOV (153f85dd-d912-4762-af6c-d6e0fb4f6692)<br/>MCOSTANDARD_GOV (a31ef4a2-f787-435e-8335-e47eb0cafc94) | AZURE ACTIVE DIRECTORY PREMIUM P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AZURE RIGHTS MANAGEMENT (6a76346d-5d6e-4051-9fe3-ed3f312b5597)<br/>AZURE RIGHTS MANAGEMENT PREMIUM FOR GOVERNMENT (1b66aedf-8ca1-4f73-af76-ec76c6180f98)<br/>COMMON DATA SERVICE - O365 P2 GCC (06162da2-ebf9-4954-99a0-00fee96f95cc)<br/>COMMON DATA SERVICE FOR TEAMS_P2 GCC (a70bbf38-cdda-470d-adb8-5804b8770f41)<br/>EXCHANGE PLAN 2G (8c3069c0-ccdb-44be-ab77-986203a67df2)<br/>FORMS FOR GOVERNMENT (PLAN E3) (24af5f65-d0f3-467b-9f78-ea798c4aeffc)<br/>INFORMATION PROTECTION AND GOVERNANCE ANALYTICS – PREMIUM (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>INFORMATION PROTECTION AND GOVERNANCE ANALYTICS – STANDARD (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>INFORMATION PROTECTION FOR OFFICE 365 – STANDARD (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>INSIGHTS BY MYANALYTICS FOR GOVERNMENT (6e5b7995-bd4f-4cbd-9d19-0e32010c72f0)<br/>MICROSOFT 365 APPS FOR ENTERPRISE G (de9234ff-6483-44d9-b15e-dca72fdd27af)<br/>MICROSOFT Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>MICROSOFT BOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>MICROSOFT INTUNE (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>MICROSOFT STREAM FOR O365 FOR GOVERNMENT (E3) (2c1ada27-dbaa-46f9-bda6-ecb94445f758)<br/>MICROSOFT TEAMS FOR GOVERNMENT (304767db-7d23-49e8-a945-4a7eb65f9f28)<br/>OFFICE 365 PLANNER FOR GOVERNMENT (5b4ef465-7ea1-459a-9f91-033317755a51)<br/>OFFICE FOR THE WEB (GOVERNMENT) (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>POWER APPS FOR OFFICE 365 FOR GOVERNMENT (0a20c815-5e81-4727-9bdc-2b5a117850c3)<br/>POWER AUTOMATE FOR OFFICE 365 FOR GOVERNMENT (c537f360-6a00-4ace-a7f5-9128d0ac1e4b)<br/>SHAREPOINT PLAN 2G (153f85dd-d912-4762-af6c-d6e0fb4f6692)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) FOR GOVERNMENT (a31ef4a2-f787-435e-8335-e47eb0cafc94) |
+| Microsoft 365 GCC G5 | M365_G5_GCC | e2be619b-b125-455f-8660-fb503e431a5d | CDS_O365_P3_GCC (bce5e5ca-c2fd-4d53-8ee2-58dfffed4c10)<br/>LOCKBOX_ENTERPRISE_GOV (89b5d3b1-3855-49fe-b46c-87c66dbc1526)<br/>EXCHANGE_S_ENTERPRISE_GOV (8c3069c0-ccdb-44be-ab77-986203a67df2)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Content_Explorer (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>ContentExplorer_Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>MIP_S_CLP2 (efb0351d-3b08-4503-993d-383af8de41e3)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>M365_ADVANCED_AUDITING (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>OFFICESUBSCRIPTION_GOV (de9234ff-6483-44d9-b15e-dca72fdd27af)<br/>MCOMEETADV_GOV (f544b08d-1645-4287-82de-8d91f37c02a1)<br/>MICROSOFT_COMMUNICATION_COMPLIANCE (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>MTP (bf28f719-7844-4079-9c78-c1307898e192)<br/>MCOEV_GOV (db23fce2-a974-42ef-9002-d78dd42a0f22)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>COMMUNICATIONS_DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>CUSTOMER_KEY (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>ATP_ENTERPRISE_GOV (493ff600-6a2b-4db6-ad37-a7d4eb214516)<br/>THREAT_INTELLIGENCE_GOV (900018f1-0cdb-4ecb-94d4-90281760fdc6)<br/>FORMS_GOV_E5 (843da3a8-d2cc-4e7a-9e90-dc46019f964c)<br/>INFO_GOVERNANCE (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>INSIDER_RISK (d587c7a3-bda9-4f99-8776-9bcf59c84f75)<br/>ML_CLASSIFICATION (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>EXCHANGE_ANALYTICS_GOV (208120d1-9adb-4daf-8c22-816bd5d237e7)<br/>RECORDS_MANAGEMENT (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>TEAMS_GOV (304767db-7d23-49e8-a945-4a7eb65f9f28)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>EQUIVIO_ANALYTICS_GOV (d1cbfb67-18a8-4792-b643-630b7f19aad1)<br/>PROJECTWORKMANAGEMENT_GOV (5b4ef465-7ea1-459a-9f91-033317755a51)<br/>SHAREPOINTWAC_GOV (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>BI_AZURE_P_2_GOV (944e9726-f011-4353-b654-5f7d2663db76)<br/>PREMIUM_ENCRYPTION (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>SHAREPOINTENTERPRISE_GOV (153f85dd-d912-4762-af6c-d6e0fb4f6692)<br/>MCOSTANDARD_GOV (a31ef4a2-f787-435e-8335-e47eb0cafc94)<br/>STREAM_O365_E5_GOV (92c2089d-9a53-49fe-b1a6-9e6bdf959547)<br/>BPOS_S_TODO_3 (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>MICROSOFTENDPOINTDLP (64bfac92-2b17-4482-b5e5-a0304429de3e)<br/>AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AAD_PREMIUM_P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>RMS_S_PREMIUM_GOV (1b66aedf-8ca1-4f73-af76-ec76c6180f98)<br/>RMS_S_PREMIUM2_GOV (5400a66d-eaa5-427d-80f2-0f26d59d8fce)<br/>RMS_S_ENTERPRISE_GOV (6a76346d-5d6e-4051-9fe3-ed3f312b5597)<br/>DYN365_CDS_O365_P3_GCC (a7d3fb37-b6df-4085-b509-50810d991a39)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>ADALLOM_S_STANDALONE (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>ATA (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>POWERAPPS_O365_P3_GOV (0eacfc38-458a-40d3-9eab-9671258f1a3e)<br/>FLOW_O365_P3_GOV (8055d84a-c172-42eb-b997-6c2ae4628246) | Common Data Service for Teams (bce5e5ca-c2fd-4d53-8ee2-58dfffed4c10)<br/>Customer Lockbox for Government (89b5d3b1-3855-49fe-b46c-87c66dbc1526)<br/>Exchange Online (Plan 2) for Government (8c3069c0-ccdb-44be-ab77-986203a67df2)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Information Protection and Governance Analytics - Premium (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>Information Protection and Governance Analytics – Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>Information Protection for Office 365 - Premium (efb0351d-3b08-4503-993d-383af8de41e3)<br/>Information Protection for Office 365 - Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Microsoft 365 Advanced Auditing (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>Microsoft 365 Apps for enterprise G (de9234ff-6483-44d9-b15e-dca72fdd27af)<br/>Microsoft 365 Audio Conferencing for Government (f544b08d-1645-4287-82de-8d91f37c02a1)<br/>Microsoft 365 Communication Compliance (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>Microsoft 365 Defender (bf28f719-7844-4079-9c78-c1307898e192)<br/>Microsoft 365 Phone System for Government (db23fce2-a974-42ef-9002-d78dd42a0f22)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Communications DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>Microsoft Customer Key (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>Microsoft Defender for Office 365 (Plan 1) for Government (493ff600-6a2b-4db6-ad37-a7d4eb214516)<br/>Microsoft Defender for Office 365 (Plan 2) for Government (900018f1-0cdb-4ecb-94d4-90281760fdc6)<br/>Microsoft Forms for Government (Plan E5) (843da3a8-d2cc-4e7a-9e90-dc46019f964c)<br/>Microsoft Information Governance (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>Microsoft Insider Risk Management (d587c7a3-bda9-4f99-8776-9bcf59c84f75)<br/>Microsoft ML-Based Classification (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>Microsoft MyAnalytics for Government (Full) (208120d1-9adb-4daf-8c22-816bd5d237e7)<br/>Microsoft Records Management (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft Teams for Government (304767db-7d23-49e8-a945-4a7eb65f9f28)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office 365 Advanced eDiscovery for Government (d1cbfb67-18a8-4792-b643-630b7f19aad1)<br/>Office 365 Planner for Government (5b4ef465-7ea1-459a-9f91-033317755a51)<br/>Office for the Web for Government (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>Power BI Pro for Government (944e9726-f011-4353-b654-5f7d2663db76)<br/>Premium Encryption in Office 365 (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>SharePoint Plan 2G (153f85dd-d912-4762-af6c-d6e0fb4f6692)<br/>Skype for Business Online (Plan 2) for Government (a31ef4a2-f787-435e-8335-e47eb0cafc94)<br/>Stream for Office 365 for Government (E5) (92c2089d-9a53-49fe-b1a6-9e6bdf959547)<br/>To-Do (Plan 3) (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af)<br/>Microsoft Defender for Endpoint (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>Microsoft Endpoint DLP (64bfac92-2b17-4482-b5e5-a0304429de3e)<br/>Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Active Directory Premium P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>Azure Information Protection Premium P1 for GCC (1b66aedf-8ca1-4f73-af76-ec76c6180f98)<br/>Azure Information Protection Premium P2 for GCC (5400a66d-eaa5-427d-80f2-0f26d59d8fce)<br/>Azure Rights Management (6a76346d-5d6e-4051-9fe3-ed3f312b5597)<br/>Common Data Service (a7d3fb37-b6df-4085-b509-50810d991a39)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Defender for Cloud Apps (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>Microsoft Defender for Identity (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Power Apps for Office 365 for Government (0eacfc38-458a-40d3-9eab-9671258f1a3e)<br/>Power Automate for Office 365 for Government (8055d84a-c172-42eb-b997-6c2ae4628246) |
| MICROSOFT 365 PHONE SYSTEM | MCOEV | e43b5b99-8dfb-405f-9987-dc307f34bcbd | MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) | MICROSOFT 365 PHONE SYSTEM (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) |
| MICROSOFT 365 PHONE SYSTEM FOR DOD | MCOEV_DOD | d01d9287-694b-44f3-bcc5-ada78c8d953e | MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) | MICROSOFT 365 PHONE SYSTEM (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) |
| MICROSOFT 365 PHONE SYSTEM FOR FACULTY | MCOEV_FACULTY | d979703c-028d-4de5-acbf-7955566b69b9 | MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) | MICROSOFT 365 PHONE SYSTEM (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) |
| Microsoft Stream Storage Add-On (500 GB) | STREAM_STORAGE | 9bd7c846-9556-4453-a542-191d527209e8 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>STREAM_STORAGE (83bced11-77ce-4071-95bd-240133796768) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Stream Storage Add-On (83bced11-77ce-4071-95bd-240133796768) |
| MICROSOFT TEAMS (FREE) | TEAMS_FREE | 16ddbbfc-09ea-4de2-b1d7-312db6112d70 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MCOFREE (617d9209-3b90-4879-96e6-838c42b2701d)<br/>TEAMS_FREE (4fa4026d-ce74-4962-a151-8e96d57ea8e4)<br/>SHAREPOINTDESKLESS (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>TEAMS_FREE_SERVICE (bd6f2ac2-991a-49f9-b23c-18c96a02c228)<br/>WHITEBOARD_FIRSTLINE1 (36b29273-c6d0-477a-aca6-6fbe24f538e3) | EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MCO FREE FOR MICROSOFT TEAMS (FREE) (617d9209-3b90-4879-96e6-838c42b2701d)<br/>MICROSOFT TEAMS (FREE) (4fa4026d-ce74-4962-a151-8e96d57ea8e4)<br/>SHAREPOINT KIOSK (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>TEAMS FREE SERVICE (bd6f2ac2-991a-49f9-b23c-18c96a02c228)<br/>WHITEBOARD (FIRSTLINE) (36b29273-c6d0-477a-aca6-6fbe24f538e3) |
| MICROSOFT TEAMS EXPLORATORY | TEAMS_EXPLORATORY | 710779e8-3d4a-4c88-adb9-386c958d1fdf | CDS_O365_P1 (bed136c6-b799-4462-824d-fc045d3a9d25)<br/>EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>MYANALYTICS_P2 (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>DESKLESS (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E1 (743dd19e-1ce3-4c62-a3ad-49ba8f63a2f6)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MCO_TEAMS_IW (42a3ec34-28ba-46b6-992f-db53a675ac5b)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWERAPPS_O365_P1 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>FLOW_O365_P1 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>POWER_VIRTUAL_AGENTS_O365_P1 (0683001c-0492-4d59-9515-d9a6426b5813)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>WHITEBOARD_PLAN1 (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | COMMON DATA SERVICE FOR TEAMS_P1 (bed136c6-b799-4462-824d-fc045d3a9d25)<br/>EXCHANGE ONLINE (PLAN 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>INSIGHTS BY MYANALYTICS (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>MICROSOFT PLANNER (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>MICROSOFT STAFFHUB (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>MICROSOFT STREAM FOR O365 E1 SKU (743dd19e-1ce3-4c62-a3ad-49ba8f63a2f6)<br/>MICROSOFT TEAMS (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MICROSOFT TEAMS (42a3ec34-28ba-46b6-992f-db53a675ac5b)<br/>MOBILE DEVICE MANAGEMENT FOR OFFICE 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>OFFICE FOR THE WEB (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>OFFICE MOBILE APPS FOR OFFICE 365 (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWER APPS FOR OFFICE 365 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>POWER AUTOMATE FOR OFFICE 365 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>POWER VIRTUAL AGENTS FOR OFFICE 365 P1 (0683001c-0492-4d59-9515-d9a6426b5813)<br/>SHAREPOINT STANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TO-DO (PLAN 1) (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>WHITEBOARD (PLAN 1) (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>YAMMER ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) |
-| Microsoft Teams Rooms Standard | MEETING_ROOM | 6070a4c8-34c6-4937-8dfb-39bbc6397a60 | MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af) | Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af) |
+| Microsoft Teams Rooms Standard | MEETING_ROOM | 6070a4c8-34c6-4937-8dfb-39bbc6397a60 | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Teams_Room_Standard (92c6b761-01de-457a-9dd9-793a975238f7)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) | Azure Active Directory Premium Plan 1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Teams Room Standard (92c6b761-01de-457a-9dd9-793a975238f7)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) |
| Microsoft Teams Rooms Standard without Audio Conferencing | MEETING_ROOM_NOAUDIOCONF | 61bec411-e46a-4dab-8f46-8b58ec845ffe | MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) | Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) | | Microsoft Teams Trial | MS_TEAMS_IW | 74fbf1bb-47c6-4796-9623-77dc7371723b | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MCO_TEAMS_IW (42a3ec34-28ba-46b6-992f-db53a675ac5b)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>POWERAPPS_O365_P1 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>FLOW_O365_P1 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>SHAREPOINTDESKLESS (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Teams (42a3ec34-28ba-46b6-992f-db53a675ac5b)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Office for the Web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Power Apps for Office 365 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>Power Automate for Office 365 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>SharePoint Kiosk (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>Yammer Enterprise (7547a3fe-08ee-4ccb-b430-5077c5041653) | | 
Microsoft Threat Experts - Experts on Demand | EXPERTS_ON_DEMAND | 9fa2f157-c8e4-4351-a3f2-ffa506da1406 | EXPERTS_ON_DEMAND (b83a66d4-f05f-414d-ac0f-ea1c5239c42b) | Microsoft Threat Experts - Experts on Demand (b83a66d4-f05f-414d-ac0f-ea1c5239c42b) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| Microsoft Teams Commercial Cloud | TEAMS_COMMERCIAL_TRIAL | 29a2f828-8f39-4837-b8ff-c957e86abe3c | CDS_O365_P1 (bed136c6-b799-4462-824d-fc045d3a9d25)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>STREAM_O365_E1 (743dd19e-1ce3-4c62-a3ad-49ba8f63a2f6)<br/>MCO_TEAMS_IW (42a3ec34-28ba-46b6-992f-db53a675ac5b)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>POWERAPPS_O365_P1 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>FLOW_O365_P1 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>POWER_VIRTUAL_AGENTS_O365_P1 (0683001c-0492-4d59-9515-d9a6426b5813)<br/>SHAREPOINTDESKLESS (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>WHITEBOARD_PLAN1 (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | Common Data Service for Teams_P1 (bed136c6-b799-4462-824d-fc045d3a9d25)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Forms (Plan E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Stream for O365 E1 SKU (743dd19e-1ce3-4c62-a3ad-49ba8f63a2f6)<br/>Microsoft Teams (42a3ec34-28ba-46b6-992f-db53a675ac5b)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Office for the web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Power Apps for Office 365 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>Power Automate for Office 365 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>Power Virtual Agents for Office 365 P1 (0683001c-0492-4d59-9515-d9a6426b5813)<br/>SharePoint Kiosk (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>Whiteboard (Plan 1) (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>Yammer Enterprise (7547a3fe-08ee-4ccb-b430-5077c5041653) | | Office 365 Cloud App Security | 
ADALLOM_O365 | 84d5f90f-cd0d-4864-b90b-1c7ba63b4808 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Office 365 Advanced Security Management (8c098270-9dd4-4350-9b30-ba4703f3b36b) | | Office 365 Extra File Storage | SHAREPOINTSTORAGE | 99049c9c-6011-4908-bf17-15f496e6519d | SHAREPOINTSTORAGE (be5a7ed5-c598-4fcd-a061-5e6724c68a58) | Office 365 Extra File Storage (be5a7ed5-c598-4fcd-a061-5e6724c68a58) |
-| OFFICE 365 E1 | STANDARDPACK | 18181a46-0d4e-45cd-891e-60aabd171b4e | BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>FLOW_O365_P1 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWERAPPS_O365_P1 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>STREAM_O365_E1 (743dd19e-1ce3-4c62-a3ad-49ba8f63a2f6)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>MICROSOFT STAFFHUB (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>EXCHANGE ONLINE (PLAN 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>FLOW FOR OFFICE 365 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWERAPPS FOR OFFICE 365 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>MICROSOFT PLANNER (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>MICROSOFT STREAM FOR O365 E1 SKU (743dd19e-1ce3-4c62-a3ad-49ba8f63a2f6)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) |
+| Office 365 E1| STANDARDPACK | 18181a46-0d4e-45cd-891e-60aabd171b4e | CDS_O365_P1 (bed136c6-b799-4462-824d-fc045d3a9d25)<br/>EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>MYANALYTICS_P2 (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>KAIZALA_O365_P2 (54fc630f-5a40-48ee-8965-af0503c1386e)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>PROJECT_O365_P1 (a55dfd10-0864-46d9-a3cd-da5991a3e0e2)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>VIVA_LEARNING_SEEDED (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>WHITEBOARD_PLAN1 (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653)<br/>DYN365_CDS_O365_P1 (40b010bb-0b69-4654-ac5e-ba161433f4b4)<br/>RMS_S_BASIC (31cf2cfc-6b0d-4adc-a336-88b724ed8122)<br/>STREAM_O365_E1 (743dd19e-1ce3-4c62-a3ad-49ba8f63a2f6)<br/>POWERAPPS_O365_P1 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>FLOW_O365_P1 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>POWER_VIRTUAL_AGENTS_O365_P1 (0683001c-0492-4d59-9515-d9a6426b5813) | Common Data Service for Teams (bed136c6-b799-4462-824d-fc045d3a9d25)<br/>Exchange Online (Plan 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>Insights by MyAnalytics (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Forms 
(Plan E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>Microsoft Kaizala Pro (54fc630f-5a40-48ee-8965-af0503c1386e)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>Office for the Web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Office Mobile Apps for Office 365 (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>Project for Office (Plan E1) (a55dfd10-0864-46d9-a3cd-da5991a3e0e2)<br/>SharePoint (Plan 1) (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 1) (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>Viva Learning Seeded (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>Whiteboard (Plan 1) (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>Yammer Enterprise (7547a3fe-08ee-4ccb-b430-5077c5041653)<br/>Common Data Service (40b010bb-0b69-4654-ac5e-ba161433f4b4)<br/>Microsoft Azure Rights Management Service (31cf2cfc-6b0d-4adc-a336-88b724ed8122)<br/>Microsoft Stream for Office 365 E1 (743dd19e-1ce3-4c62-a3ad-49ba8f63a2f6)<br/>Power Apps for Office 365 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>Power Automate for Office 365 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>Power Virtual Agents for Office 365 (0683001c-0492-4d59-9515-d9a6426b5813) |
| OFFICE 365 E2 | STANDARDWOFFPACK | 6634e0ce-1a9f-428c-a498-f84ec7b8aa2e | BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>FLOW_O365_P1 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>POWERAPPS_O365_P1 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>STREAM_O365_E1 (743dd19e-1ce3-4c62-a3ad-49ba8f63a2f6)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>MICROSOFT STAFFHUB (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>EXCHANGE ONLINE (PLAN 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>FLOW FOR OFFICE 365 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>POWERAPPS FOR OFFICE 365 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>MICROSOFT PLANNER(b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>MICROSOFT STREAM FOR O365 E1 SKU (743dd19e-1ce3-4c62-a3ad-49ba8f63a2f6)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | | Office 365 E3 | ENTERPRISEPACK | 6fd2c87f-b296-42f0-b197-1e91e994b900 | DYN365_CDS_O365_P2 (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>CDS_O365_P2 (95b76021-6a53-4741-ab8b-1d1f3d66a95a)<br/>EXCHANGE_S_ENTERPRISE 
(efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>ContentExplorer_Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>MYANALYTICS_P2 (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>FORMS_PLAN_E3 (2789c901-c14e-48ab-a76a-be334d9d793a)<br/>KAIZALA_O365_P3 (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>POWERAPPS_O365_P2 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>FLOW_O365_P2 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>POWER_VIRTUAL_AGENTS_O365_P2 (041fe683-03e4-45b6-b1af-c0cdc516daee)<br/>PROJECT_O365_P2 (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_2 (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>VIVA_LEARNING_SEEDED (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>WHITEBOARD_PLAN2 (94a54592-cd8b-425e-87c6-97868b000b91)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | Common Data Service - O365 P2 (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>Common Data Service for Teams_P2 (95b76021-6a53-4741-ab8b-1d1f3d66a95a)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Information Protection and Governance Analytics – Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>Information Protection for Office 365 – Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Insights
by MyAnalytics (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>Microsoft 365 Apps for enterprise (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Microsoft Azure Active Directory Rights (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Forms (Plan E3) (2789c901-c14e-48ab-a76a-be334d9d793a)<br/>Microsoft Kaizala Pro Plan 3 (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for O365 E3 SKU (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office for the web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Power Apps for Office 365 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/> Power Automate for Office 365 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/> Power Virtual Agents for Office 365 P2 (041fe683-03e4-45b6-b1af-c0cdc516daee)<br/> Project for Office (Plan E3) (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>SharePoint (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 2) (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>Viva Learning Seeded (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>Whiteboard (Plan 2) (94a54592-cd8b-425e-87c6-97868b000b91)<br/>Yammer Enterprise (7547a3fe-08ee-4ccb-b430-5077c5041653) | | OFFICE 365 E3 DEVELOPER | DEVELOPERPACK | 189a915c-fe4f-4ffa-bde4-85b9628d07a0 | BPOS_S_TODO_3 (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>FLOW_O365_P2 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>FORMS_PLAN_E5 (e212cbc7-0961-4c40-9825-01117710dcb1)<br/>MCOSTANDARD 
(0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>POWERAPPS_O365_P2 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>SHAREPOINT_S_DEVELOPER (a361d6e2-509e-4e25-a8ad-950060064ef4)<br/>SHAREPOINTWAC_DEVELOPER (527f7cdd-0e86-4c47-b879-f5fd357a3ac6)<br/>STREAM_O365_E5 (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929) | BPOS_S_TODO_3 (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>EXCHANGE ONLINE (PLAN 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>FLOW FOR OFFICE 365 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>MICROSOFT FORMS (PLAN E5) (e212cbc7-0961-4c40-9825-01117710dcb1)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>POWERAPPS FOR OFFICE 365(c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>MICROSOFT PLANNER(b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>SHAREPOINT FOR DEVELOPER (a361d6e2-509e-4e25-a8ad-950060064ef4)<br/>OFFICE ONLINE FOR DEVELOPER (527f7cdd-0e86-4c47-b879-f5fd357a3ac6)<br/>MICROSOFT STREAM FOR O365 E5 SKU (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929) |
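The SKU-to-service-plan mapping in the tables above can also be pulled programmatically from Microsoft Graph's `GET /subscribedSkus` endpoint. Below is a minimal sketch of flattening such a response into rows like the table's; the sample payload is abbreviated and hypothetical (real data comes from the authenticated Graph call), but the field names follow the `subscribedSku` resource type.

```python
# Sketch: flatten a Microsoft Graph /subscribedSkus-style response into
# (skuPartNumber, skuId, servicePlanName, servicePlanId) rows, mirroring the
# license table above. The payload here is abbreviated and hypothetical.
import json

sample_response = json.loads("""
{
  "value": [
    {
      "skuPartNumber": "MEETING_ROOM",
      "skuId": "6070a4c8-34c6-4937-8dfb-39bbc6397a60",
      "servicePlans": [
        {"servicePlanName": "TEAMS1",
         "servicePlanId": "57ff2da0-773e-42df-b2af-ffb7a2317929"},
        {"servicePlanName": "MCOSTANDARD",
         "servicePlanId": "0feaeb32-d00e-4d66-bd5a-43b5b83db82c"}
      ]
    }
  ]
}
""")

def sku_plan_rows(payload):
    """Yield one (skuPartNumber, skuId, servicePlanName, servicePlanId) per plan."""
    for sku in payload["value"]:
        for plan in sku["servicePlans"]:
            yield (sku["skuPartNumber"], sku["skuId"],
                   plan["servicePlanName"], plan["servicePlanId"])

rows = list(sku_plan_rows(sample_response))
for row in rows:
    print(" | ".join(row))
```

In a real script you would replace `sample_response` with the JSON body returned by an authenticated request to `https://graph.microsoft.com/v1.0/subscribedSkus`.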
active-directory Cross Tenant Access Settings B2b Collaboration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/cross-tenant-access-settings-b2b-collaboration.md
Previously updated : 03/21/2022 Last updated : 05/02/2022
With outbound settings, you select which of your users and groups will be able t
1. Select **Save**.
+## Remove an organization
+
+When you remove an organization from your Organizational settings, the default cross-tenant access settings will go into effect for that organization.
+
+> [!NOTE]
+> If the organization is a cloud service provider for your organization (the isServiceProvider property in the Microsoft Graph [partner-specific configuration](/graph/api/resources/crosstenantaccesspolicyconfigurationpartner) is true), you won't be able to remove the organization.
+
+1. Sign in to the [Azure portal](https://portal.azure.com) using a Global administrator or Security administrator account. Then open the **Azure Active Directory** service.
+
+1. Select **External Identities**, and then select **Cross-tenant access settings (Preview)**.
+
+1. Select the **Organizational settings** tab.
+
+1. Find the organization in the list, and then select the trash can icon on that row.
+ ## Next steps

- See [Configure external collaboration settings](external-collaboration-settings-configure.md) for B2B collaboration with non-Azure AD identities, social identities, and non-IT managed external accounts.
active-directory Cross Tenant Access Settings B2b Direct Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/cross-tenant-access-settings-b2b-direct-connect.md
With outbound settings, you select which of your users and groups will be able t
## Remove an organization
-When you remove an organization from your Organizational settings, the default cross-tenant access settings will go into effect for all B2B collaboration with that organization.
+When you remove an organization from your Organizational settings, the default cross-tenant access settings will go into effect for that organization.
> [!NOTE]
> If the organization is a cloud service provider for your organization (the isServiceProvider property in the Microsoft Graph [partner-specific configuration](/graph/api/resources/crosstenantaccesspolicyconfigurationpartner) is true), you won't be able to remove the organization.
When you remove an organization from your Organizational settings, the default c
1. Select the **Organizational settings** tab.
-2. Find the organization in the list, and then select the trash can icon on that row.
+1. Find the organization in the list, and then select the trash can icon on that row.
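Before attempting removal, a script can check the `isServiceProvider` property of the partner configuration described in the note above. This is a minimal sketch that operates on an already-parsed configuration; the sample tenant ID is a placeholder, and in practice the object would come from a Graph call such as `GET /policies/crossTenantAccessPolicy/partners/{tenantId}` (see the linked partner-specific configuration resource).

```python
# Sketch: a partner flagged as a cloud service provider (isServiceProvider
# = true) can't be removed from Organizational settings. The sample config
# below is hypothetical; real data comes from Microsoft Graph.
partner_config = {
    "tenantId": "00000000-0000-0000-0000-000000000000",  # placeholder tenant
    "isServiceProvider": True,
}

def can_remove_partner(config):
    """Return True only when the partner is not a cloud service provider."""
    return not config.get("isServiceProvider", False)

print(can_remove_partner(partner_config))
```

A configuration that omits `isServiceProvider` or sets it to false would be removable, so the check defaults to allowing removal.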
## Next steps
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/whats-new-docs.md
Title: "What's new in Azure Active Directory External Identities" description: "New and updated documentation for the Azure Active Directory External Identities." Previously updated : 03/31/2022 Last updated : 05/02/2022
Welcome to what's new in Azure Active Directory External Identities documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the External Identities service, see [What's new in Azure Active Directory](../fundamentals/whats-new.md).
+## April 2022
+
+### Updated articles
+
+- [Email one-time passcode authentication](one-time-passcode.md)
+- [Configure external collaboration settings](external-collaboration-settings-configure.md)
+- [Add a self-service sign-up user flow to an app](self-service-sign-up-user-flow.md)
+- [B2B direct connect overview (Preview)](b2b-direct-connect-overview.md)
+- [Azure Active Directory B2B collaboration invitation redemption](redemption-experience.md)
+- [Federation with SAML/WS-Fed identity providers for guest users (preview)](direct-federation.md)
+- [Azure Active Directory External Identities: What's new](whats-new-docs.md)
+- [Azure Active Directory B2B best practices](b2b-fundamentals.md)
+- [Troubleshooting Azure Active Directory B2B collaboration](troubleshoot.md)
+- [Properties of an Azure Active Directory B2B collaboration user](user-properties.md)
+- [B2B collaboration overview](what-is-b2b.md)
+ ## March 2022

### New articles
Welcome to what's new in Azure Active Directory External Identities documentatio
- [Troubleshooting Azure Active Directory B2B collaboration](troubleshoot.md) - [Properties of an Azure Active Directory B2B collaboration user](user-properties.md) - [Authentication and Conditional Access for External Identities](authentication-conditional-access.md)-
-## January 2022
-
-### Updated articles
--- [Properties of an Azure Active Directory B2B collaboration user](user-properties.md)-- [Azure Active Directory B2B collaboration invitation redemption](redemption-experience.md)
active-directory Create Access Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/create-access-review.md
For more information, see [License requirements](access-reviews-overview.md#lice
![Screenshot that shows the interface that appears if you selected applications instead of groups.](./media/create-access-review/select-application-detailed.png)
- > [!NOTE]
- > Selecting multiple groups or applications results in the creation of multiple access reviews. For example, if you select five groups to review, the result is five separate access reviews.
-
-1. Now you can select a scope for the review. Your options are:
+> [!NOTE]
+> Selecting multiple groups or applications results in the creation of multiple access reviews. For example, if you select five groups to review, the result is five separate access reviews.
+7. Now you can select a scope for the review. Your options are:
   - **Guest users only**: This option limits the access review to only the Azure AD B2B guest users in your directory.
   - **Everyone**: This option scopes the access review to all user objects associated with the resource.

   > [!NOTE]
   > If you selected **All Microsoft 365 groups with guest users**, your only option is to review **Guest users only**.
+1. Or, if you're conducting a group membership review, you can create access reviews only for inactive users in the group (preview). In the *Users scope* section, select the checkbox next to **Inactive users (on tenant level)** to limit the review scope to inactive users only. Then specify **Days inactive**, up to 730 days (two years). Only users in the group who have been inactive for the specified number of days will be included in the review.
+ 1. Select **Next: Reviews**. ### Next: Reviews
B2B direct connect users and teams are included in access reviews of the Teams-e
- User administrator - Identity Governance Administrator
-Ue the following instructions to create an access review on a team with shared channels:
+Use the following instructions to create an access review on a team with shared channels:
-1. Sign in to the Azure Portal as a Global Admin, User Admin or Identity Governance Admin.
+1. Sign in to the Azure portal as a Global Admin, User Admin or Identity Governance Admin.
1. Open the [Identity Governance](https://portal.azure.com/#blade/Microsoft_AAD_ERM/DashboardBlade/) page.
active-directory Migrate Application Authentication To Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migrate-application-authentication-to-azure-active-directory.md
Title: 'Migrate application authentication to Azure Active Directory' description: This whitepaper details the planning for and benefits of migrating your application authentication to Azure AD. -+ Last updated 02/05/2021-+
active-directory Pim Create Azure Ad Roles And Resource Roles Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-create-azure-ad-roles-and-resource-roles-review.md
The need for access to privileged Azure resource and Azure AD roles by employees
:::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/users.png" alt-text="Users scope to review role membership of screenshot.":::
-11. Under **Review role membership**, select the privileged Azure resource or Azure AD roles to review.
+11. Or, you can create access reviews only for inactive users (preview). In the *Users scope* section, set the **Inactive users (on tenant level) only** toggle to **true** to limit the review scope to inactive users only. Then specify **Days inactive**, up to 730 days (two years). Only users who have been inactive for the specified number of days will be included in the review.
+
+12. Under **Review role membership**, select the privileged Azure resource or Azure AD roles to review.
    > [!NOTE]
    > Selecting more than one role will create multiple access reviews. For example, selecting five roles will create five separate access reviews.

    :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/review-role-membership.png" alt-text="Review role memberships screenshot.":::
-12. In **assignment type**, scope the review by how the principal was assigned to the role. Choose **eligible assignments only** to review eligible assignments (regardless of activation status when the review is created) or **active assignments only** to review active assignments. Choose **all active and eligible assignments** to review all assignments regardless of type.
+13. In **assignment type**, scope the review by how the principal was assigned to the role. Choose **eligible assignments only** to review eligible assignments (regardless of activation status when the review is created) or **active assignments only** to review active assignments. Choose **all active and eligible assignments** to review all assignments regardless of type.
:::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/assignment-type-select.png" alt-text="Reviewers list of assignment types screenshot.":::
-13. In the **Reviewers** section, select one or more people to review all the users. Or you can select to have the members review their own access.
+14. In the **Reviewers** section, select one or more people to review all the users. Or you can select to have the members review their own access.
:::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/reviewers.png" alt-text="Reviewers list of selected users or members (self)":::
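The inactive-user scoping described in this article (a **Days inactive** value of up to 730 days) effectively translates into a sign-in cutoff date. The sketch below shows that translation, assuming user records shaped like the Microsoft Graph `signInActivity` resource's `lastSignInDateTime` field; the sample users are hypothetical.

```python
# Sketch: turn a "Days inactive" threshold into a cutoff date and select the
# users that an inactive-only review would include. Sample data is hypothetical.
from datetime import datetime, timedelta, timezone

def inactive_users(users, days_inactive, now=None):
    """Return users whose last sign-in is older than the cutoff, or missing."""
    if not 0 < days_inactive <= 730:          # portal accepts up to two years
        raise ValueError("days_inactive must be between 1 and 730")
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=days_inactive)
    return [
        u for u in users
        if u["lastSignInDateTime"] is None
        or datetime.fromisoformat(u["lastSignInDateTime"]) < cutoff
    ]

now = datetime(2022, 5, 1, tzinfo=timezone.utc)
users = [
    {"userPrincipalName": "active@contoso.com",
     "lastSignInDateTime": "2022-04-20T00:00:00+00:00"},
    {"userPrincipalName": "dormant@contoso.com",
     "lastSignInDateTime": "2021-01-01T00:00:00+00:00"},
]
stale = inactive_users(users, days_inactive=90, now=now)
print([u["userPrincipalName"] for u in stale])  # → ['dormant@contoso.com']
```

With a 90-day threshold and a reference date of 1 May 2022, only the user who last signed in during January 2021 falls past the cutoff.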
active-directory Concept All Sign Ins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-all-sign-ins.md
na Previously updated : 12/17/2021 Last updated : 05/02/2022
Customizing the view enables you to display additional fields or remove fields t
![All interactive columns](./media/concept-all-sign-ins/all-interactive-columns.png)
-Select an item in the list view to get more detailed information about the related sign-in.
-![Sign-in activity](./media/concept-all-sign-ins/interactive-user-sign-in-details.png "Interactive user sign-ins")
To make it easier to digest the data, non-interactive sign-in events are grouped
- Resource ID
-You can:
-- Expand a node to see the individual items of a group.
-- Click an individual item to see all details.
-![Non-interactive user sign-in details](./media/concept-all-sign-ins/non-interactive-sign-ins-details.png)
To make it easier to digest the data in the service principal sign-in logs, serv
- Resource name or ID
-You can:
-- Expand a node to see the individual items of a group.
-- Click an individual item to see all details.
-![Column details](./media/concept-all-sign-ins/service-principals-sign-ins-view.png "Column details")
To access the new sign-in logs with non-interactive and application sign-ins:
-## Download sign-in activity logs
-
-When you download a sign-in activity report, the following is true:
-- You can download the sign-in report as a CSV or JSON file.
-- You can download up to 100 K records. If you want to download more data, use the reporting API.
-- Your download is based on the filter selection you made.
-- The number of records you can download is constrained by the [Azure Active Directory report retention policies](reference-reports-data-retention.md).
-![Download logs](./media/concept-all-sign-ins/download-reports.png "Download logs")
--
-Each CSV download consists of six different files:
-- Interactive sign-ins
-- Auth details of the interactive sign-ins
-- Non-interactive sign-ins
-- Auth details of the non-interactive sign-ins
-- Service principal sign-ins
-- Managed identity for Azure resources sign-ins
-Each JSON download consists of four different files:
-- Interactive sign-ins (includes auth details)
-- Non-interactive sign-ins (includes auth details)
-- Service principal sign-ins
-- Managed identity for Azure resources sign-ins
-![Download files](./media/concept-all-sign-ins/download-files.png "Download files")
--
-## Return log data with Microsoft Graph
-
-In addition to using the Azure portal, you can query sign-in logs using the Microsoft Graph API to return different types of sign-in information. To avoid potential performance issues, scope your query to just the data you care about.
-
-The following example scopes the query by the number records, by a specific time period, and by type of sign-in event:
-
-```msgraph-interactive
-GET https://graph.microsoft.com/beta/auditLogs/signIns?$top=100&$filter=createdDateTime ge 2020-09-10T06:00:00Z and createdDateTime le 2020-09-17T06:00:00Z and signInEventTypes/any(t: t eq 'nonInteractiveUser')
-```
-
-The query parameters in the example provide the following results:
-- The [$top](/graph/query-parameters#top-parameter) parameter returns the top 100 results.
-- The [$filter](/graph/query-parameters#filter-parameter) parameter limits the time frame for results to return and uses the signInEventTypes property to return only non-interactive user sign-ins.
-The following values are available for filtering by different sign-in types:
+## Next steps
-- interactiveUser
-- nonInteractiveUser
-- servicePrincipal
-- managedIdentity
+- [Basic info in the Azure AD sign-in logs](reference-basic-info-sign-in-logs.md)
-## Next steps
+- [How to download logs in Azure Active Directory](howto-download-logs.md)
-* [Sign-in activity report error codes](./concept-sign-ins.md)
-* [Azure AD data retention policies](reference-reports-data-retention.md)
-* [Azure AD report latencies](reference-reports-latencies.md)
+- [How to access activity logs in Azure AD](howto-access-activity-logs.md)
active-directory Concept Sign Ins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-sign-ins.md
na Previously updated : 10/25/2021 Last updated : 05/02/2022
that have more than one value for a given sign-in request as column. This is, fo
![Screenshot shows the Columns dialog box where you can select attributes.](./media/concept-sign-ins/columns.png "Sign-in activity")
-Select an item in the list view to get more detailed information.
-
-![Screenshot shows a detailed information view.](./media/concept-sign-ins/basic-sign-in.png "Sign-in activity")
The **Location** - The location the connection was initiated from:
------
-## Download sign-in activities
-
-Click the **Download** option to create a CSV or JSON file of the most recent 250,000 records. Start with [download the sign-ins data](./howto-download-logs.md) if you want to work with it outside the Azure portal.
-
-![Download](./media/concept-sign-ins/71.png "Download")
-
-> [!IMPORTANT]
-> The number of records you can download is constrained by the [Azure Active
-> Directory report retention policies](reference-reports-data-retention.md).
## Sign-ins data shortcuts

Azure AD and the Azure portal both provide you with additional entry points to sign-ins data:
You can also access the Microsoft 365 activity logs programmatically by using th
## Next steps
-* [Azure AD data retention policies](reference-reports-data-retention.md)
-* [Azure AD report latencies](reference-reports-latencies.md)
-* [First party Microsoft applications in sign-ins report](/troubleshoot/azure/active-directory/verify-first-party-apps-sign-in#application-ids-for-commonly-used-microsoft-applications)
+- [Basic info in the Azure AD sign-in logs](reference-basic-info-sign-in-logs.md)
+
+- [How to download logs in Azure Active Directory](howto-download-logs.md)
+
+- [How to access activity logs in Azure AD](howto-access-activity-logs.md)
active-directory Adobe Echosign Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/adobe-echosign-tutorial.md
To configure Azure AD single sign-on with Adobe Sign, perform the following step
`https://<companyname>.echosign.com` > [!NOTE]
- > These values are not real. Update these values with the actual Sign on URL and Identifier. Contact [Adobe Sign Client support team](https://helpx.adobe.com/in/contact/support.html) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Sign on URL and Identifier. Contact [Adobe Sign Client support team](https://helpx.adobe.com/support.html) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Certificate (Base64)** from the given options as per your requirement and save it on your computer.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
## Configure Adobe Sign SSO
-1. Before configuration, contact the [Adobe Sign Client support team](https://helpx.adobe.com/in/contact/support.html) to add your domain in the Adobe Sign allowlist. Here's how to add the domain:
+1. Before configuration, contact the [Adobe Sign Client support team](https://helpx.adobe.com/support.html) to add your domain in the Adobe Sign allowlist. Here's how to add the domain:
- a. The [Adobe Sign Client support team](https://helpx.adobe.com/in/contact/support.html) sends you a randomly generated token. For your domain, the token will be like the following: **adobe-sign-verification= xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx**
+ a. The [Adobe Sign Client support team](https://helpx.adobe.com/support.html) sends you a randomly generated token. For your domain, the token will be like the following: **adobe-sign-verification= xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx**
- b. Publish the verification token in a DNS text record, and notify the [Adobe Sign Client support team](https://helpx.adobe.com/in/contact/support.html).
+ b. Publish the verification token in a DNS text record, and notify the [Adobe Sign Client support team](https://helpx.adobe.com/support.html).
> [!NOTE] > This can take a few days, or longer. Note that DNS propagation delays mean that a value published in DNS might not be visible for an hour or more. Your IT administrator should be knowledgeable about how to publish this token in a DNS text record.
- c. When you notify the [Adobe Sign Client support team](https://helpx.adobe.com/in/contact/support.html) through the support ticket, after the token is published, they validate the domain and add it to your account.
+ c. When you notify the [Adobe Sign Client support team](https://helpx.adobe.com/support.html) through the support ticket, after the token is published, they validate the domain and add it to your account.
d. Generally, here's how to publish the token on a DNS record:
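As an illustration, the published record is a DNS TXT record at the apex of your domain. A sketch of a zone-file entry, where `contoso.com` and the token value are placeholders:

```
contoso.com.    3600    IN    TXT    "adobe-sign-verification=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
```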
active-directory Teamslide Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/teamslide-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with TeamSlide'
+description: Learn how to configure single sign-on between Azure Active Directory and TeamSlide.
++++++++ Last updated : 04/29/2022+++
+# Tutorial: Azure AD SSO integration with TeamSlide
+
+In this tutorial, you'll learn how to integrate TeamSlide with Azure Active Directory (Azure AD). When you integrate TeamSlide with Azure AD, you can:
+
+* Control in Azure AD who has access to TeamSlide.
+* Enable your users to be automatically signed-in to TeamSlide with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* TeamSlide single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* TeamSlide supports **SP** initiated SSO.
+* TeamSlide supports **Just In Time** user provisioning.
+
+> [!NOTE]
+> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
+
+## Add TeamSlide from the gallery
+
+To configure the integration of TeamSlide into Azure AD, you need to add TeamSlide from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **TeamSlide** in the search box.
+1. Select **TeamSlide** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for TeamSlide
+
+Configure and test Azure AD SSO with TeamSlide using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in TeamSlide.
+
+To configure and test Azure AD SSO with TeamSlide, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure TeamSlide SSO](#configure-teamslide-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create TeamSlide test user](#create-teamslide-test-user)** - to have a counterpart of B.Simon in TeamSlide that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **TeamSlide** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type the URL:
+ `https://www.teamslide.io/AuthServices/`
+
+ b. In the **Reply URL** textbox, type the URL:
+ `https://www.teamslide.io/AuthServices/Acs`
+
+ c. In the **Sign-on URL** text box, type a URL using the following pattern:
+ `https://www.teamslide.io/ChooseSso?domain=<CustomerDomain>`
+
+ > [!NOTE]
+ > The Sign-on URL is not real. Update the value with the actual Sign-on URL. Contact [TeamSlide Client support team](mailto:support@aploris.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. The TeamSlide application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![Screenshot showing the list of default attributes.](common/default-attributes.png)
+
+1. In addition to the above, the TeamSlide application expects a few more attributes to be passed back in the SAML response; these are shown below. These attributes are also pre-populated, but you can review them as per your requirements.
+
+ | Name | Source Attribute|
+ | | |
+ | displayname | user.displayname |
+ | groups | user.groups [All] |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url**, and save it on your computer.
+
+ ![Screenshot showing the Certificate download link](common/copy-metadataurl.png)
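The custom attribute mappings above would appear in the issued SAML assertion roughly as follows. This is a sketch only; the exact claim names, namespaces, and values are produced by Azure AD, and the group ID shown is a placeholder:

```xml
<AttributeStatement xmlns="urn:oasis:names:tc:SAML:2.0:assertion">
  <!-- From the user.displayname source attribute -->
  <Attribute Name="displayname">
    <AttributeValue>B.Simon</AttributeValue>
  </Attribute>
  <!-- From user.groups [All]; one AttributeValue per group object ID -->
  <Attribute Name="groups">
    <AttributeValue>aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb</AttributeValue>
  </Attribute>
</AttributeStatement>
```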
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to TeamSlide.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **TeamSlide**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure TeamSlide SSO
+
+1. Log in to your TeamSlide company site as an administrator.
+
+1. Go to **Global Settings** > **SSO Settings** tab.
+
+1. In the **Single sign-on settings** page, perform the following steps:
+
+ ![Screenshot that shows the Configuration Settings.](./media/teamslide-tutorial/settings.png "Configuration")
+
+ a. In the **Entity ID** textbox, paste the **Azure AD Identifier** value which you have copied from the Azure portal.
+
+ b. In the **Sign-On URL** textbox, paste the **Login URL** value which you have copied from the Azure portal.
+
+ c. In the **Metadata location** textbox, paste the **App Federation Metadata Url** value which you have copied from the Azure portal.
+
+ d. Click **Save Changes**.
+
+### Create TeamSlide test user
+
+In this section, a user called B.Simon is created in TeamSlide. TeamSlide supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in TeamSlide, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click **Test this application** in the Azure portal. This will redirect you to the TeamSlide Sign-on URL, where you can initiate the login flow.
+
+* Go to the TeamSlide Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the TeamSlide tile in My Apps, you're redirected to the TeamSlide Sign-on URL. For more information, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure TeamSlide, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
aks Use Windows Hpc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-windows-hpc.md
+
+ Title: Use Windows HostProcess containers
+description: Learn how to use HostProcess & Privileged containers for Windows workloads on AKS
++ Last updated : 4/6/2022++++
+# Use Windows HostProcess containers
+
+HostProcess / Privileged containers extend the Windows container model to enable a wider range of Kubernetes cluster management scenarios. HostProcess containers run directly on the host and maintain behavior and access similar to that of a regular process. HostProcess containers allow users to package and distribute management operations and functionalities that require host access while retaining versioning and deployment methods provided by containers.
+
+A privileged DaemonSet can carry out changes or monitor a Linux host on Kubernetes but not Windows hosts. HostProcess containers are the Windows equivalent of host elevation.
++
+## Limitations
+
+* HostProcess containers require Kubernetes 1.23 or greater.
+* HostProcess containers require a container runtime of `containerd` version 1.6 or higher.
+* HostProcess pods can only contain HostProcess containers. This is a current limitation of the Windows operating system. Non-privileged Windows containers can't share a vNIC with the host IP namespace.
+* HostProcess containers run as a process on the host. The only isolation those containers have from the host is the resource constraints imposed on the HostProcess user account.
+* Filesystem isolation and Hyper-V isolation aren't supported for HostProcess containers.
+* Volume mounts are supported and are mounted under the container volume. See Volume Mounts.
+* A limited set of host user accounts are available for HostProcess containers by default. See Choosing a User Account.
+* Resource limits such as disk, memory, and CPU count work the same way for HostProcess containers as they do for processes on the host.
+* Named pipe mounts and Unix domain sockets are not directly supported, but can be accessed on their host path, for example `\\.\pipe\*`.
++
+## Run a HostProcess workload
+
+To use HostProcess features with your deployment, set *privileged: true*, *hostProcess: true*, and *hostNetwork: true*:
+
+```yaml
+ spec:
+ ...
+ containers:
+ ...
+ securityContext:
+ privileged: true
+ windowsOptions:
+ hostProcess: true
+ ...
+ hostNetwork: true
+ ...
+```
+
+To run an example workload that uses HostProcess features on an existing AKS cluster with Windows nodes, create `hostprocess.yaml` with the following:
+
+```yaml
+apiVersion: apps/v1
+kind: DaemonSet
+metadata:
+ name: privileged-daemonset
+ namespace: kube-system
+ labels:
+ app: privileged-daemonset
+spec:
+ selector:
+ matchLabels:
+ app: privileged-daemonset
+ template:
+ metadata:
+ labels:
+ app: privileged-daemonset
+ spec:
+ nodeSelector:
+ kubernetes.io/os: windows
+ containers:
+ - name: powershell
+ image: mcr.microsoft.com/powershell:lts-nanoserver-1809
+ securityContext:
+ privileged: true
+ windowsOptions:
+ hostProcess: true
+ runAsUserName: "NT AUTHORITY\\SYSTEM"
+ command:
+ - pwsh.exe
+ - -command
+ - |
+ $AdminRights = ([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole]"Administrator")
+ Write-Host "Process has admin rights: $AdminRights"
+ while ($true) { Start-Sleep -Seconds 2147483 }
+ hostNetwork: true
+ terminationGracePeriodSeconds: 0
+```
+
+Use `kubectl` to run the example workload:
+
+```azurecli-interactive
+kubectl apply -f hostprocess.yaml
+```
+
+You should see the following output:
+
+```output
+$ kubectl apply -f hostprocess.yaml
+daemonset.apps/privileged-daemonset created
+```
+
+You can verify that your workload is using HostProcess features by viewing the pod's logs.
+
+Use `kubectl` to find the name of the pod in the `kube-system` namespace.
+
+```output
+$ kubectl get pods --namespace kube-system
+
+NAME READY STATUS RESTARTS AGE
+...
+privileged-daemonset-12345 1/1 Running 0 2m13s
+```
+
+Use `kubectl logs` to view the logs of the pod and verify that the pod has administrator rights:
+
+```output
+$ kubectl logs privileged-daemonset-12345 --namespace kube-system
+InvalidOperation: Unable to find type [Security.Principal.WindowsPrincipal].
+Process has admin rights:
+```
+
+## Next steps
+
+For more details on HostProcess containers and Microsoft's contribution to Kubernetes upstream, see the [Alpha in v1.22: Windows HostProcess Containers][blog-post].
++
+<!-- LINKS - External -->
+[blog-post]: https://kubernetes.io/blog/2021/08/16/windows-hostprocess-containers/
api-management Api Management Error Handling Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-error-handling-policies.md
Policies in Azure API Management are divided into `inbound`, `backend`, `outboun
```xml <policies>
- <inbound>
- <!-- statements to be applied to the request go here -->
- </inbound>
- <backend>
- <!-- statements to be applied before the request is
- forwarded to the backend service go here -->
+ <inbound>
+ <!-- statements to be applied to the request go here -->
+ </inbound>
+ <backend>
+ <!-- statements to be applied before the request is
+ forwarded to the backend service go here -->
</backend> <outbound>
- <!-- statements to be applied to the response go here -->
+ <!-- statements to be applied to the response go here -->
</outbound> <on-error> <!-- statements to be applied if there is an error condition go here -->
- </on-error>
+ </on-error>
</policies> ```
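As an illustration of what the `on-error` section can do, the following sketch returns a generic error response using standard policy statements; the status code and body shown are placeholders to adapt to your needs:

```xml
<on-error>
    <!-- Return a generic response instead of leaking backend error details -->
    <set-status code="500" reason="Internal Server Error" />
    <set-header name="Content-Type" exists-action="override">
        <value>application/json</value>
    </set-header>
    <set-body>{"error": "An unexpected error occurred."}</set-body>
</on-error>
```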
api-management Api Management Howto Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-aad.md
Follow these steps to grant:
#Login and Set the Subscription
az login
az account set --subscription $subId
- #Assign the following permissions: Microsoft Graph Delegated Permission: User.Read, Microsoft Graph Application Permission: Directory.ReadAll, Azure Active Directory Graph Application Permission: Directory.ReadAll (legacy)
+ #Assign the following permissions: Microsoft Graph Delegated Permission: User.Read, Microsoft Graph Application Permission: Directory.ReadAll
az rest --method PATCH --uri "https://graph.microsoft.com/v1.0/$($tenantId)/applications/$($appObjectID)" --body "{'requiredResourceAccess':[{'resourceAccess': [{'id': 'e1fe6dd8-ba31-4d61-89e7-88639da4683d','type': 'Scope'},{'id': '7ab1d382-f21e-4acd-a863-ba3e13f7da61','type': 'Role'}],'resourceAppId': '00000003-0000-0000-c000-000000000000'}]}" ```
azure-app-configuration Quickstart Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-resource-manager.md
This quickstart describes how to :
- Create key-values in an App Configuration store using ARM template. - Read key-values in an App Configuration store from ARM template.
+> [!TIP]
+> Feature flags and Key Vault references are special types of key-values. Check out the [Next steps](#next-steps) for examples of creating them using the ARM template.
+ [!INCLUDE [About Azure Resource Manager](../../includes/resource-manager-quickstart-introduction.md)] If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
Write-Host "Press [ENTER] to continue..."
To learn about adding feature flag and Key Vault reference to an App Configuration store, check below ARM template examples. -- [app-configuration-store-ff](https://azure.microsoft.com/resources/templates/app-configuration-store-ff/)-- [app-configuration-store-keyvaultref](https://azure.microsoft.com/resources/templates/app-configuration-store-keyvaultref/)
+- [ARM template for feature flag](https://azure.microsoft.com/resources/templates/app-configuration-store-ff/)
+- [ARM template for Key Vault reference](https://azure.microsoft.com/resources/templates/app-configuration-store-keyvaultref/)
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/overview.md
The following private cloud environments and their versions are officially suppo
* To onboard the Arc resource bridge, you are a member of the [Contributor](../../role-based-access-control/built-in-roles.md#contributor) role in the resource group.
-* To read, modify, and delete the resource bridge, you are a member of the **Name of role** role in the resource group.
+* To read, modify, and delete the resource bridge, you are a member of the [Contributor](../../role-based-access-control/built-in-roles.md#contributor) role in the resource group.
### Networking
azure-monitor Asp Net Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-core.md
using Microsoft.ApplicationInsights.Channel;
} ```
+> [!NOTE]
+> See [Flushing data](api-custom-events-metrics.md#flushing-data) if you want to flush the buffer--for example, if you are using the SDK in an application that shuts down.
+ ### Disable telemetry dynamically If you want to disable telemetry conditionally and dynamically, you can resolve the `TelemetryConfiguration` instance with an ASP.NET Core dependency injection container anywhere in your code and set the `DisableTelemetry` flag on it.
azure-monitor Asp Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net.md
This section will guide you through manually adding Application Insights to a te
2. In some cases, the *ApplicationInsights.config* file is created for you automatically. If the file is already present, skip to step 4.
- If it's not created automatically, you'll need to create it yourself. At the same level in your project as the *Global.asax* file, create a new file called *ApplicationInsights.config*.
+ If it's not created automatically, you'll need to create it yourself. In the root directory of an ASP.NET application, create a new file called *ApplicationInsights.config*.
3. Copy the following XML configuration into your newly created file:
azure-monitor Correlation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/correlation.md
Every outgoing operation, such as an HTTP call to another component, is represen
You can build a view of the distributed logical operation by using `operation_Id`, `operation_parentId`, and `request.id` with `dependency.id`. These fields also define the causality order of telemetry calls.
-In a microservices environment, traces from components can go to different storage items. Every component can have its own instrumentation key in Application Insights. To get telemetry for the logical operation, Application Insights queries data from every storage item. When the number of storage items is large, you'll need a hint about where to look next. The Application Insights data model defines two fields to solve this problem: `request.source` and `dependency.target`. The first field identifies the component that initiated the dependency request. The second field identifies which component returned the response of the dependency call.
+In a microservices environment, traces from components can go to different storage items. Every component can have its own connection string in Application Insights. To get telemetry for the logical operation, Application Insights queries data from every storage item. When the number of storage items is large, you'll need a hint about where to look next. The Application Insights data model defines two fields to solve this problem: `request.source` and `dependency.target`. The first field identifies the component that initiated the dependency request. The second field identifies which component returned the response of the dependency call.
+
+For information on querying from multiple disparate instances using the `app` query expression, see [app() expression in Azure Monitor query](../logs/app-expression.md#app-expression-in-azure-monitor-query).
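As a sketch, assuming two Application Insights resources named `frontend-app` and `backend-app` (placeholder names), a cross-resource query can stitch one logical operation together:

```kusto
// Combine telemetry from two Application Insights resources for a single operation
union app('frontend-app').requests, app('backend-app').dependencies
| where operation_Id == "<operation-id>"
| project timestamp, itemType, name, operation_ParentId, operation_Id
```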
## Example
You can analyze the resulting telemetry by running a query:
| project timestamp, itemType, name, id, operation_ParentId, operation_Id ```
-In the results, note that all telemetry items share the root `operation_Id`. When an Ajax call is made from the page, a new unique ID (`qJSXU`) is assigned to the dependency telemetry, and the ID of the pageView is used as `operation_ParentId`. The server request then uses the Ajax ID as `operation_ParentId`.
+In the results, all telemetry items share the root `operation_Id`. When an Ajax call is made from the page, a new unique ID (`qJSXU`) is assigned to the dependency telemetry, and the ID of the pageView is used as `operation_ParentId`. The server request then uses the Ajax ID as `operation_ParentId`.
| itemType | name | ID | operation_ParentId | operation_Id | |||--|--|--|
The [W3C Trace-Context](https://w3c.github.io/trace-context/) and Application In
| |-| | `Id` of `Request` and `Dependency` | [parent-id](https://w3c.github.io/trace-context/#parent-id) | | `Operation_Id` | [trace-id](https://w3c.github.io/trace-context/#trace-id) |
-| `Operation_ParentId` | [parent-id](https://w3c.github.io/trace-context/#parent-id) of this span's parent span. If this is a root span, then this field must be empty. |
+| `Operation_ParentId` | [parent-id](https://w3c.github.io/trace-context/#parent-id) of this span's parent span. This field must be empty if it's a root span.|
For more information, see [Application Insights telemetry data model](../../azure-monitor/app/data-model.md).
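The mapping above can be sketched with a small helper that splits a W3C `traceparent` header (`version-traceid-parentid-flags`) into the corresponding Application Insights fields; `parse_traceparent` is a hypothetical name for illustration:

```python
def parse_traceparent(header: str) -> dict:
    """Split a W3C traceparent header into Application Insights fields."""
    version, trace_id, parent_id, flags = header.split("-")
    if len(trace_id) != 32 or len(parent_id) != 16:
        raise ValueError("malformed traceparent header")
    return {
        "Operation_Id": trace_id,         # maps to trace-id
        "Operation_ParentId": parent_id,  # maps to parent-id
        "sampled": flags == "01",
    }

# Example header from the W3C Trace-Context specification
fields = parse_traceparent("00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01")
print(fields["Operation_Id"])  # 4bf92f3577b34da6a3ce929d0e0e4736
```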
W3C TraceContext based distributed tracing is enabled by default in all recent
#### Java 3.0 agent
- Java 3.0 agent supports W3C out of the box and no additional configuration is needed.
+ Java 3.0 agent supports W3C out of the box, and no further configuration is needed.
#### Java SDK

- **Incoming configuration**
Add the following configuration:
## Telemetry correlation in OpenCensus Python
-OpenCensus Python supports [W3C Trace-Context](https://w3c.github.io/trace-context/) without requiring additional configuration.
+OpenCensus Python supports [W3C Trace-Context](https://w3c.github.io/trace-context/) without requiring extra configuration.
As a reference, the OpenCensus data model can be found [here](https://github.com/census-instrumentation/opencensus-specs/tree/master/trace).

### Incoming request correlation
-OpenCensus Python correlates W3C Trace-Context headers from incoming requests to the spans that are generated from the requests themselves. OpenCensus will do this automatically with integrations for these popular web application frameworks: Flask, Django, and Pyramid. You just need to populate the W3C Trace-Context headers with the [correct format](https://www.w3.org/TR/trace-context/#trace-context-http-headers-format) and send them with the request. Here's a sample Flask application that demonstrates this:
+OpenCensus Python correlates W3C Trace-Context headers from incoming requests to the spans that are generated from the requests themselves. OpenCensus does this correlation automatically through its integrations with popular web application frameworks: Flask, Django, and Pyramid. You just need to populate the W3C Trace-Context headers with the [correct format](https://www.w3.org/TR/trace-context/#trace-context-http-headers-format) and send them with the request.
+
+**Sample Flask application**
```python
from flask import Flask
By looking at the [Trace-Context header format](https://www.w3.org/TR/trace-cont
`trace-flags`: `01`
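
As a sketch of how those fields sit in the header (the helper function below is hypothetical; the four dash-separated fields are defined by the Trace-Context specification):

```python
# Split a W3C traceparent header into its four dash-separated fields.
def parse_traceparent(header: str) -> dict:
    version, trace_id, parent_id, trace_flags = header.split("-")
    return {
        "version": version,          # "00" for the current spec version
        "trace-id": trace_id,        # 32 hex chars; becomes operation_Id
        "parent-id": parent_id,      # 16 hex chars; ID of the calling span
        "trace-flags": trace_flags,  # "01" means the trace is sampled
    }

fields = parse_traceparent(
    "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01")
```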
-If you look at the request entry that was sent to Azure Monitor, you can see fields populated with the trace header information. You can find this data under Logs (Analytics) in the Azure Monitor Application Insights resource.
+If you look at the request entry that was sent to Azure Monitor, you can see fields populated with the trace header information. You can find the data under Logs (Analytics) in the Azure Monitor Application Insights resource.
![Request telemetry in Logs (Analytics)](./media/opencensus-python/0011-correlation.png)
The `operation_ParentId` field is in the format `<trace-id>.<parent-id>`, where
### Log correlation
-OpenCensus Python enables you to correlate logs by adding a trace ID, a span ID, and a sampling flag to log records. You add these attributes by installing OpenCensus [logging integration](https://pypi.org/project/opencensus-ext-logging/). The following attributes will be added to Python `LogRecord` objects: `traceId`, `spanId`, and `traceSampled`. Note that this takes effect only for loggers that are created after the integration.
+OpenCensus Python enables you to correlate logs by adding a trace ID, a span ID, and a sampling flag to log records. You add these attributes by installing the OpenCensus [logging integration](https://pypi.org/project/opencensus-ext-logging/). The following attributes will be added to Python `LogRecord` objects: `traceId`, `spanId`, and `traceSampled`. These attributes apply only to loggers that are created after the integration.
-Here's a sample application that demonstrates this:
+**Sample application**
```python
import logging
When this code runs, the following prints in the console:
2019-10-17 11:25:59,384 traceId=c54cb1d4bbbec5864bf0917c64aeacdc spanId=70da28f5a4831014 In the span
2019-10-17 11:25:59,385 traceId=c54cb1d4bbbec5864bf0917c64aeacdc spanId=0000000000000000 After the span
```
-Notice that there's a `spanId` present for the log message that's within the span. This is the same `spanId` that belongs to the span named `hello`.
+Notice that there's a `spanId` present for the log message that's within the span. This `spanId` matches the one that belongs to the span named `hello`.
You can export the log data by using `AzureLogHandler`. For more information, see [this article](./opencensus-python.md#logs).
You might want to customize the way component names are displayed in the [Applic
- With Application Insights Java SDK 2.5.0 and later, you can specify the `cloud_RoleName` by adding `<RoleName>` to your ApplicationInsights.xml file: + ```xml <?xml version="1.0" encoding="utf-8"?> <ApplicationInsights xmlns="http://schemas.microsoft.com/ApplicationInsights/2013/Settings" schemaVersion="2014-05-30">
- <InstrumentationKey>** Your instrumentation key **</InstrumentationKey>
+ <ConnectionString>InstrumentationKey=00000000-0000-0000-0000-000000000000</ConnectionString>
<RoleName>** Your role name **</RoleName> ... </ApplicationInsights>
azure-monitor Get Metric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/get-metric.md
The Azure Monitor Application Insights .NET and .NET Core SDKs have two differen
## Pre-aggregating vs. non-pre-aggregating API
-`TrackMetric()` sends raw telemetry denoting a metric. It is inefficient to send a single telemetry item for each value. `TrackMetric()` is also inefficient in terms of performance since every `TrackMetric(item)` goes through the full SDK pipeline of telemetry initializers and processors. Unlike `TrackMetric()`, `GetMetric()` handles local pre-aggregation for you and then only submits an aggregated summary metric at a fixed interval of one minute. So if you need to closely monitor some custom metric at the second or even millisecond level you can do so while only incurring the storage and network traffic cost of only monitoring every minute. This also greatly reduces the risk of throttling occurring since the total number of telemetry items that need to be sent for an aggregated metric are greatly reduced.
+`TrackMetric()` sends raw telemetry denoting a metric. It's inefficient to send a single telemetry item for each value. `TrackMetric()` is also inefficient in terms of performance since every `TrackMetric(item)` goes through the full SDK pipeline of telemetry initializers and processors. Unlike `TrackMetric()`, `GetMetric()` handles local pre-aggregation for you and then submits only an aggregated summary metric at a fixed interval of one minute. So if you need to closely monitor some custom metric at the second or even millisecond level, you can do so while incurring only the storage and network traffic cost of monitoring every minute. This behavior also greatly reduces the risk of throttling because the total number of telemetry items that need to be sent for an aggregated metric is greatly reduced.
-In Application Insights, custom metrics collected via `TrackMetric()` and `GetMetric()` are not subject to [sampling](./sampling.md). Sampling important metrics can lead to scenarios where alerting you may have built around those metrics could become unreliable. By never sampling your custom metrics, you can generally be confident that when your alert thresholds are breached, an alert will fire. But since custom metrics aren't sampled, there are some potential concerns.
+In Application Insights, custom metrics collected via `TrackMetric()` and `GetMetric()` aren't subject to [sampling](./sampling.md). Sampling important metrics can lead to scenarios where alerts you may have built around those metrics become unreliable. By never sampling your custom metrics, you can generally be confident that when your alert thresholds are breached, an alert will fire. But since custom metrics aren't sampled, there are some potential concerns.
-If you need to track trends in a metric every second, or at an even more granular interval this can result in:
+Tracking trends in a metric every second, or at an even more granular interval, can result in:
-- Increased data storage costs. There is a cost associated with how much data you send to Azure Monitor. (The more data you send the greater the overall cost of monitoring.)-- Increased network traffic/performance overhead. (In some scenarios this could have both a monetary and application performance cost.)-- Risk of ingestion throttling. (The Azure Monitor service drops ("throttles") data points when your app sends a very high rate of telemetry in a short time interval.)
+- Increased data storage costs. There's a cost associated with how much data you send to Azure Monitor. (The more data you send, the greater the overall cost of monitoring.)
+- Increased network traffic/performance overhead. (In some scenarios this overhead could have both a monetary and application performance cost.)
+- Risk of ingestion throttling. (The Azure Monitor service drops ("throttles") data points when your app sends a high rate of telemetry in a short time interval.)
-Throttling is of particular concern in that like sampling, throttling can lead to missed alerts since the condition to trigger an alert could occur locally and then be dropped at the ingestion endpoint due to too much data being sent. This is why for .NET and .NET Core we don't recommend using `TrackMetric()` unless you have implemented your own local aggregation logic. If you are trying to track every instance an event occurs over a given time period, you may find that [`TrackEvent()`](./api-custom-events-metrics.md#trackevent) is a better fit. Though keep in mind that unlike custom metrics, custom events are subject to sampling. You can of course still use `TrackMetric()` even without writing your own local pre-aggregation, but if you do so be aware of the pitfalls.
+Throttling is a concern because it can lead to missed alerts: the condition to trigger an alert could occur locally and then be dropped at the ingestion endpoint because too much data was sent. For .NET and .NET Core, we don't recommend using `TrackMetric()` unless you've implemented your own local aggregation logic. If you're trying to track every instance in which an event occurs over a given time period, you may find that [`TrackEvent()`](./api-custom-events-metrics.md#trackevent) is a better fit. Keep in mind, though, that unlike custom metrics, custom events are subject to sampling. You can still use `TrackMetric()` without writing your own local pre-aggregation, but if you do so, be aware of the pitfalls.
-In summary `GetMetric()` is the recommended approach since it does pre-aggregation, it accumulates values from all the Track() calls and sends a summary/aggregate once every minute. This can significantly reduce the cost and performance overhead by sending fewer data points, while still collecting all relevant information.
+In summary, `GetMetric()` is the recommended approach because it performs pre-aggregation: it accumulates values from all the `Track()` calls and sends a summary/aggregate once every minute. `GetMetric()` can significantly reduce cost and performance overhead by sending fewer data points while still collecting all relevant information.
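
The pre-aggregation idea can be sketched language-agnostically. The class below is a hypothetical illustration of the concept (accumulate count/sum/min/max locally, then emit one summary per interval), not the SDK's actual implementation:

```python
import time

class PreAggregatedMetric:
    """Accumulates tracked values locally; emits one summary per interval."""

    def __init__(self, name, interval_seconds=60):
        self.name = name
        self.interval = interval_seconds
        self._reset()

    def _reset(self):
        self.count = 0
        self.total = 0.0
        self.min = None
        self.max = None
        self.window_start = time.monotonic()

    def track_value(self, value):
        # Cheap local update: no telemetry item is sent per call.
        self.count += 1
        self.total += value
        self.min = value if self.min is None else min(self.min, value)
        self.max = value if self.max is None else max(self.max, value)
        if time.monotonic() - self.window_start >= self.interval:
            # In a real SDK, this single summary item would be what gets
            # transmitted to the ingestion endpoint.
            return self.flush()
        return None

    def flush(self):
        summary = {"name": self.name, "count": self.count,
                   "sum": self.total, "min": self.min, "max": self.max}
        self._reset()
        return summary
```

Tracking 10,000 values per second through such an aggregator still produces only one outbound item per metric per minute.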
> [!NOTE]
> Only the .NET and .NET Core SDKs have a GetMetric() method. If you are using Java, see [sending custom metrics using micrometer](./java-in-process-agent.md#send-custom-metrics-by-using-micrometer). For JavaScript and Node.js you would still use `TrackMetric()`, but keep in mind the caveats that were outlined in the previous section. For Python you can use [OpenCensus.stats](./opencensus-python.md#metrics) to send custom metrics but the metrics implementation is different.

## Getting started with GetMetric
-For our examples, we are going to use a basic .NET Core 3.1 worker service application. If you would like to exactly replicate the test environment that was used with these examples, follow steps 1-6 of the [monitoring worker service article](./worker-service.md#net-core-30-worker-service-application) to add Application Insights to a basic worker service project template. These concepts apply to any general application where the SDK can be used including web apps and console apps.
+For our examples, we're going to use a basic .NET Core 3.1 worker service application. If you would like to replicate the test environment used with these examples, follow steps 1-6 of the [monitoring worker service article](./worker-service.md#net-core-30-worker-service-application) to add Application Insights to a basic worker service project template. The concepts apply to any application where the SDK can be used, including web apps and console apps.
### Sending metrics
-Replace the contents of your `worker.cs` file with the following:
+Replace the contents of your `worker.cs` file with the following code:
```csharp
using System;
namespace WorkerService3
}
```
-If you run the code above and watch the telemetry being sent via the Visual Studio output window or a tool like Telerik's Fiddler, you will see the while loop repeatedly executing with no telemetry being sent and then a single telemetry item will be sent by around the 60-second mark, which in the case of our test looks as follows:
+When running the sample code, you'll see the while loop repeatedly executing with no telemetry being sent in the Visual Studio output window. A single telemetry item will be sent by around the 60-second mark, which in our test looks as follows:
```json
Application Insights Telemetry: {"name":"Microsoft.ApplicationInsights.Dev.00000000-0000-0000-0000-000000000000.Metric",
"time":"2019-12-28T00:54:19.0000000Z",
This single telemetry item represents an aggregate of 41 distinct metric measure
> [!NOTE]
> GetMetric does not support tracking the last value (i.e. "gauge") or tracking histograms/distributions.
-If we examine our Application Insights resource in the Logs (Analytics) experience, this individual telemetry item would look as follows:
+If we examine our Application Insights resource in the Logs (Analytics) experience, the individual telemetry item would look as follows:
![Log Analytics query view](./media/get-metric/log-analytics.png)
You can also access your custom metric telemetry in the [_Metrics_](../essential
### Caching metric reference for high-throughput usage
-In some cases metric values are observed very frequently. For example, a high-throughput service that processes 500 requests/second may want to emit 20 telemetry metrics for each request. This means tracking 10,000 values per second. In such high-throughput scenarios, users may need to help the SDK by avoiding some lookups.
+Metric values may be observed frequently in some cases. For example, a high-throughput service that processes 500 requests/second may want to emit 20 telemetry metrics for each request. That means tracking 10,000 values per second. In such high-throughput scenarios, users may need to help the SDK by avoiding some lookups.
-For example, in this case, the example above performed a lookup for a handle for the metric "ComputersSold" and then tracked an observed value 42. Instead, the handle may be cached for multiple track invocations:
+In this case, the example above performed a lookup for a handle for the metric "ComputersSold" and then tracked an observed value of 42. Instead, the handle may be cached for multiple track invocations:
```csharp
//...
In addition to caching the metric handle, the example above also reduced the `Ta
The examples in the previous section show zero-dimensional metrics. Metrics can also be multi-dimensional. We currently support up to 10 dimensions.
- Here is an example of how to create a one-dimensional metric:
+ Here's an example of how to create a one-dimensional metric:
```csharp
//...
The examples in the previous section show zero-dimensional metrics. Metrics can
```
-Running this code for at least 60 seconds will result in three distinct telemetry items being sent to Azure, each representing the aggregation of one of the three form factors. As before you can examine these in Logs (Analytics) view:
+Running the sample code for at least 60 seconds will result in three distinct telemetry items being sent to Azure, each representing the aggregation of one of the three form factors. As before, you can examine these in the Logs (Analytics) view:
![Log analytics view of multidimensional metric](./media/get-metric/log-analytics-multi-dimensional.png)
-As well as in the Metrics explorer experience:
+And in the Metrics explorer experience:
![Custom metrics](./media/get-metric/custom-metrics.png)
-However, you will notice that you aren't able to split the metric by your new custom dimension, or view your custom dimension with the metrics view:
+However, you'll notice that you aren't able to split the metric by your new custom dimension, or view your custom dimension with the metrics view:
![Splitting support](./media/get-metric/splitting-support.png)
-By default multi-dimensional metrics within the Metric explorer experience are not turned on in Application Insights resources.
+By default, multi-dimensional metrics within the Metrics explorer experience aren't turned on in Application Insights resources.
### Enable multi-dimensional metrics
-To enable multi-dimensional metrics for an Application Insights resource, Select **Usage and estimated costs** > **Custom Metrics** > **Enable alerting on custom metric dimensions** > **OK**. More details about this can be found [here](pre-aggregated-metrics-log-metrics.md#custom-metrics-dimensions-and-pre-aggregation).
+To enable multi-dimensional metrics for an Application Insights resource, select **Usage and estimated costs** > **Custom Metrics** > **Enable alerting on custom metric dimensions** > **OK**. For more information, see [Custom metrics dimensions and pre-aggregation](pre-aggregated-metrics-log-metrics.md#custom-metrics-dimensions-and-pre-aggregation).
-Once you have made that change and send new multi-dimensional telemetry, you will be able to **Apply splitting**.
+Once you've made that change and sent new multi-dimensional telemetry, you'll be able to **Apply splitting**.
> [!NOTE]
> Only metrics sent after the feature was turned on in the portal will have dimensions stored.
computersSold.TrackValue(110,"Laptop", "Nvidia", "DDR4", "39Wh", "1TB");
## Custom metric configuration
-If you want to alter the metric configuration, you need to do this in the place where the metric is initialized.
+If you want to alter the metric configuration, you need to make those alterations in the place where the metric is initialized.
### Special dimension names
-Metrics do not use the telemetry context of the `TelemetryClient` used to access them, special dimension names available as constants in `MetricDimensionNames` class is the best workaround for this limitation.
+Metrics don't use the telemetry context of the `TelemetryClient` used to access them. Using the special dimension names available as constants in the `MetricDimensionNames` class is the best workaround for this limitation.
Metric aggregates sent by the "Special Operation Request Size" metric below will **not** have their `Context.Operation.Name` set to "Special Operation", whereas `TrackMetric()` or any other `TrackXXX()` call will have `OperationName` set correctly to "Special Operation".
For example, when the metric aggregate resulting from the next statement is sent
_telemetryClient.GetMetric("Request Size", MetricDimensionNames.TelemetryContext.Operation.Name).TrackValue(requestSize, "Special Operation");
```
-The values of this special dimension will be copied into the `TelemetryContext` and will not be used as a 'normal' dimension. If you want to also keep an operation dimension for normal metric exploration, you need to create a separate dimension for that purpose:
+The values of this special dimension will be copied into the `TelemetryContext` and won't be used as a 'normal' dimension. If you want to also keep an operation dimension for normal metric exploration, you need to create a separate dimension for that purpose:
```csharp
_telemetryClient.GetMetric("Request Size", "Operation Name", MetricDimensionNames.TelemetryContext.Operation.Name).TrackValue(requestSize, "Special Operation", "Special Operation");
_telemetryClient.GetMetric("Request Size", "Operation Name", MetricDimensionName
To prevent the telemetry subsystem from accidentally using up your resources, you can control the maximum number of data series per metric. The default limits are no more than 1000 total data series per metric, and no more than 100 different values per dimension.
- In the context of dimension and time series capping we use `Metric.TrackValue(..)` to make sure that the limits are observed. If the limits are already reached, `Metric.TrackValue(..)` will return "False" and the value will not be tracked. Otherwise it will return "True". This is useful if the data for a metric originates from user input.
+> [!IMPORTANT]
+> Use low-cardinality values for dimensions to avoid throttling.
+
+ In the context of dimension and time series capping, we use `Metric.TrackValue(..)` to make sure that the limits are observed. If the limits are already reached, `Metric.TrackValue(..)` will return "False" and the value won't be tracked. Otherwise it will return "True". This behavior is useful if the data for a metric originates from user input.
The `MetricConfiguration` constructor takes some options on how to manage different series within the respective metric and an object of a class implementing `IMetricSeriesConfiguration` that specifies aggregation behavior for each individual series of the metric:
computersSold.TrackValue(100, "Dim1Value1", "Dim2Value3");
* `valuesPerDimensionLimit` limits the number of distinct values per dimension in a similar manner.
* `restrictToUInt32Values` determines whether or not only non-negative integer values should be tracked.
-Here is an example of how to send a message to know if cap limits are exceeded:
+Here's an example of how to send a message to know if cap limits are exceeded:
```csharp
if (! computersSold.TrackValue(100, "Dim1Value1", "Dim2Value3"))
azure-monitor Java In Process Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-in-process-agent.md
Title: Azure Monitor Application Insights Java description: Application performance monitoring for Java applications running in any environment without requiring code modification. Distributed tracing and application map. Previously updated : 06/24/2021 Last updated : 05/02/2022 ms.devlang: java
Telemetry emitted by these Azure SDKs is automatically collected by default:
This section explains how to modify telemetry.
+### Add spans
+
+You can use `opentelemetry-api` to create [tracers](https://opentelemetry.io/docs/instrumentation/java/manual/#tracing) and spans. Spans populate the dependencies table in Application Insights. The string passed in for the span's name is saved to the _target_ field within the dependency.
+
+> [!NOTE]
+> This feature is only in 3.2.0 and later.
+
+1. Add `opentelemetry-api-1.6.0.jar` to your application:
+
+ ```xml
+ <dependency>
+ <groupId>io.opentelemetry</groupId>
+ <artifactId>opentelemetry-api</artifactId>
+ <version>1.6.0</version>
+ </dependency>
+ ```
+
+1. Add spans in your code:
+
+ ```java
+ import io.opentelemetry.api.trace.Span;
+
+ Span span = tracer.spanBuilder("mySpan").startSpan();
+ ```
+
+### Add span events
+
+You can use `opentelemetry-api` to create span events, which populate the traces table in Application Insights. The string passed in to `addEvent()` is saved to the _message_ field within the trace.
+
+> [!NOTE]
+> This feature is only in 3.2.0 and later.
+
+1. Add `opentelemetry-api-1.6.0.jar` to your application:
+
+ ```xml
+ <dependency>
+ <groupId>io.opentelemetry</groupId>
+ <artifactId>opentelemetry-api</artifactId>
+ <version>1.6.0</version>
+ </dependency>
+ ```
+
+1. Add span events in your code:
+
+ ```java
+ import io.opentelemetry.api.trace.Span;
+
+ span.addEvent("eventName");
+ ```
+
### Add span attributes

You can use `opentelemetry-api` to add attributes to spans. These attributes can include adding a custom business dimension to your telemetry. You can also use attributes to set optional fields in the Application Insights schema, such as User ID or Client IP.

#### Add a custom dimension
-Adding one or more custom dimensions populates the _customDimensions_ field in the requests, dependencies, or exceptions table.
+Adding one or more custom dimensions populates the _customDimensions_ field in the requests, dependencies, traces, or exceptions table.
> [!NOTE]
> This feature is only in 3.2.0 and later.

1. Add `opentelemetry-api-1.6.0.jar` to your application:
- ```xml
- <dependency>
- <groupId>io.opentelemetry</groupId>
- <artifactId>opentelemetry-api</artifactId>
- <version>1.6.0</version>
- </dependency>
- ```
+ ```xml
+ <dependency>
+ <groupId>io.opentelemetry</groupId>
+ <artifactId>opentelemetry-api</artifactId>
+ <version>1.6.0</version>
+ </dependency>
+ ```
1. Add custom dimensions in your code:
- ```java
+ ```java
import io.opentelemetry.api.trace.Span;
-
- Span.current().setAttribute("mycustomdimension", "myvalue1");
- ```
+ import io.opentelemetry.api.common.AttributeKey;
+ import io.opentelemetry.api.common.Attributes;
+
+ Attributes attributes = Attributes.of(AttributeKey.stringKey("mycustomdimension"), "myvalue1");
+ span.setAllAttributes(attributes);
+ span.addEvent("eventName", attributes);
+ ```
+
+### Update span status and record exceptions
+
+You can use `opentelemetry-api` to update the status of a span and record exceptions.
+
+> [!NOTE]
+> This feature is only in 3.2.0 and later.
+
+1. Add `opentelemetry-api-1.6.0.jar` to your application:
+
+ ```xml
+ <dependency>
+ <groupId>io.opentelemetry</groupId>
+ <artifactId>opentelemetry-api</artifactId>
+ <version>1.6.0</version>
+ </dependency>
+ ```
+
+1. Set status to error and record an exception in your code:
+
+ ```java
+ import io.opentelemetry.api.trace.Span;
+ import io.opentelemetry.api.trace.StatusCode;
+
+ span.setStatus(StatusCode.ERROR, "errorMessage");
+ span.recordException(e);
+ ```
#### Set the user ID
Populate the _user ID_ field in the requests, dependencies, or exceptions table.
1. Add `opentelemetry-api-1.6.0.jar` to your application:
- ```xml
- <dependency>
- <groupId>io.opentelemetry</groupId>
- <artifactId>opentelemetry-api</artifactId>
- <version>1.6.0</version>
- </dependency>
- ```
+ ```xml
+ <dependency>
+ <groupId>io.opentelemetry</groupId>
+ <artifactId>opentelemetry-api</artifactId>
+ <version>1.6.0</version>
+ </dependency>
+ ```
1. Set `user_Id` in your code:
- ```java
- import io.opentelemetry.api.trace.Span;
-
- Span.current().setAttribute("enduser.id", "myuser");
- ```
+ ```java
+ import io.opentelemetry.api.trace.Span;
+
+ Span.current().setAttribute("enduser.id", "myuser");
+ ```
### Get the trace ID or span ID
You can use `opentelemetry-api` to get the trace ID or span ID. This action can
> This feature is only in 3.2.0 and later.

1. Add `opentelemetry-api-1.6.0.jar` to your application:
-
- ```xml
- <dependency>
- <groupId>io.opentelemetry</groupId>
- <artifactId>opentelemetry-api</artifactId>
- <version>1.6.0</version>
- </dependency>
- ```
+
+ ```xml
+ <dependency>
+ <groupId>io.opentelemetry</groupId>
+ <artifactId>opentelemetry-api</artifactId>
+ <version>1.6.0</version>
+ </dependency>
+ ```
1. Get the request trace ID and the span ID in your code:
- ```java
- import io.opentelemetry.api.trace.Span;
-
- String traceId = Span.current().getSpanContext().getTraceId();
- String spanId = Span.current().getSpanContext().getSpanId();
- ```
+ ```java
+ import io.opentelemetry.api.trace.Span;
+
+ String traceId = Span.current().getSpanContext().getTraceId();
+ String spanId = Span.current().getSpanContext().getSpanId();
+ ```
## Custom telemetry
The following table represents currently supported custom telemetry types that y
| Exceptions | | Yes | Yes | Yes |
| Page views | | | Yes | |
| Requests | | | Yes | Yes |
-| Traces | | Yes | Yes | |
+| Traces | | Yes | Yes | Yes |
Currently, we're not planning to release an SDK with Application Insights 3.x.
azure-monitor Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript.md
Most configuration fields are named such that they can be defaulted to false. Al
| maxBatchInterval | How long to batch telemetry for before sending (milliseconds) | numeric<br/>15000 |
| disable&#8203;ExceptionTracking | If true, exceptions aren't autocollected. | boolean<br/> false |
| disableTelemetry | If true, telemetry isn't collected or sent. | boolean<br/>false |
-| enableDebug | If true, **internal** debugging data is thrown as an exception **instead** of being logged, regardless of SDK logging settings. Default is false. <br>***Note:*** Enabling this setting will result in dropped telemetry whenever an internal error occurs. This can be useful for quickly identifying issues with your configuration or usage of the SDK. If you don't want to lose telemetry while debugging, consider using `consoleLoggingLevel` or `telemetryLoggingLevel` instead of `enableDebug`. | boolean<br/>false |
+| enableDebug | If true, **internal** debugging data is thrown as an exception **instead** of being logged, regardless of SDK logging settings. Default is false. <br>***Note:*** Enabling this setting will result in dropped telemetry whenever an internal error occurs. This can be useful for quickly identifying issues with your configuration or usage of the SDK. If you don't want to lose telemetry while debugging, consider using `loggingLevelConsole` or `loggingLevelTelemetry` instead of `enableDebug`. | boolean<br/>false |
| loggingLevelConsole | Logs **internal** Application Insights errors to console. <br>0: off, <br>1: Critical errors only, <br>2: Everything (errors & warnings) | numeric<br/> 0 |
| loggingLevelTelemetry | Sends **internal** Application Insights errors as telemetry. <br>0: off, <br>1: Critical errors only, <br>2: Everything (errors & warnings) | numeric<br/> 1 |
| diagnosticLogInterval | (internal) Polling interval (in ms) for internal logging queue | numeric<br/> 10000 |
azure-monitor Resource Manager App Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/resource-manager-app-resource.md
Title: Resource Manager template samples for Application Insights Resources description: Sample Azure Resource Manager templates to deploy Application Insights resources in Azure Monitor. Previously updated : 07/08/2020 Last updated : 04/27/2022
This article includes sample [Azure Resource Manager templates](../../azure-reso
## Classic Application Insights resource
-The following sample creates a [classic Application Insights resource](../app/create-new-resource.md).
+The following sample creates a [classic Application Insights resource](../app/create-new-resource.md).
### Template file
+# [Bicep](#tab/bicep)
+
+```bicep
+@description('Name of Application Insights resource.')
+param name string
+
+@description('Type of app you are deploying. This field is for legacy reasons and will not impact the type of App Insights resource you deploy.')
+param type string
+
+@description('Which Azure Region to deploy the resource to. This must be a valid Azure regionId.')
+param regionId string
+
+@description('See documentation on tags: https://docs.microsoft.com/azure/azure-resource-manager/management/tag-resources.')
+param tagsArray object
+
+@description('Source of Azure Resource Manager deployment')
+param requestSource string
+
+resource component 'Microsoft.Insights/components@2020-02-02' = {
+ name: name
+ location: regionId
+ tags: tagsArray
+ kind: 'other'
+ properties: {
+ Application_Type: type
+ Flow_Type: 'Bluefield'
+ Request_Source: requestSource
+ }
+}
+```
+
+# [JSON](#tab/json)
```json
{
- "$schema": "http://schema.management.azure.com/schemas/2014-04-01-preview/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "name": {
- "type": "string",
- "metadata": {
- "description": "Name of Application Insights resource."
- }
- },
- "type": {
- "type": "string",
- "metadata": {
- "description": "Type of app you are deploying. This field is for legacy reasons and will not impact the type of App Insights resource you deploy."
- }
- },
- "regionId": {
- "type": "string",
- "metadata": {
- "description": "Which Azure Region to deploy the resource to. This must be a valid Azure regionId."
- }
- },
- "tagsArray": {
- "type": "object",
- "metadata": {
- "description": "See documentation on tags: https://docs.microsoft.com/azure/azure-resource-manager/management/tag-resources."
- }
- },
- "requestSource": {
- "type": "string",
- "metadata": {
- "description": "Source of Azure Resource Manager deployment"
- }
- }
- },
- "resources": [
- {
- "name": "[parameters('name')]",
- "type": "microsoft.insights/components",
- "location": "[parameters('regionId')]",
- "tags": "[parameters('tagsArray')]",
- "apiVersion": "2014-08-01",
- "properties": {
- "ApplicationId": "[parameters('name')]",
- "Application_Type": "[parameters('type')]",
- "Flow_Type": "Redfield",
- "Request_Source": "[parameters('requestSource')]"
- }
- }
- ]
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "name": {
+ "type": "string",
+ "metadata": {
+ "description": "Name of Application Insights resource."
+ }
+ },
+ "type": {
+ "type": "string",
+ "metadata": {
+ "description": "Type of app you are deploying. This field is for legacy reasons and will not impact the type of App Insights resource you deploy."
+ }
+ },
+ "regionId": {
+ "type": "string",
+ "metadata": {
+ "description": "Which Azure Region to deploy the resource to. This must be a valid Azure regionId."
+ }
+ },
+ "tagsArray": {
+ "type": "object",
+ "metadata": {
+ "description": "See documentation on tags: https://docs.microsoft.com/azure/azure-resource-manager/management/tag-resources."
+ }
+ },
+ "requestSource": {
+ "type": "string",
+ "metadata": {
+ "description": "Source of Azure Resource Manager deployment"
+ }
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Insights/components",
+ "apiVersion": "2020-02-02",
+ "name": "[parameters('name')]",
+ "location": "[parameters('regionId')]",
+ "tags": "[parameters('tagsArray')]",
+ "kind": "other",
+ "properties": {
+ "Application_Type": "[parameters('type')]",
+ "Flow_Type": "Bluefield",
+ "Request_Source": "[parameters('requestSource')]"
+ }
+ }
+ ]
}
```

### Parameter file

```json
{
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "type": {
- "value": "web"
- },
- "name": {
- "value": "my_app_insights_resource"
- },
- "regionId": {
- "value": "westus2"
- },
- "tagsArray": {
- "value": {}
- },
- "requestSource": {
- "value": "CustomDeployment"
- }
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "type": {
+ "value": "web"
+ },
+ "name": {
+ "value": "my_app_insights_resource"
+ },
+ "regionId": {
+ "value": "westus2"
+ },
+ "tagsArray": {
+ "value": {}
+ },
+ "requestSource": {
+ "value": "CustomDeployment"
}
+ }
}
```
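A deployment fails at validation time if the parameter file does not supply every parameter the template declares. A quick sanity check for that can be sketched in Python (this helper is illustrative, not part of the article; the inline dictionaries mirror the template and parameter file above):

```python
# Illustrative helper: report template parameters that the parameter
# file does not supply. Both inputs are the parsed JSON documents.
def missing_parameters(template: dict, param_file: dict) -> list:
    declared = set(template.get("parameters", {}))
    supplied = set(param_file.get("parameters", {}))
    return sorted(declared - supplied)

# Parameters declared by the classic Application Insights template above.
template = {
    "parameters": {
        "name": {"type": "string"},
        "type": {"type": "string"},
        "regionId": {"type": "string"},
        "tagsArray": {"type": "object"},
        "requestSource": {"type": "string"},
    }
}

# Values from the sample parameter file above.
param_file = {
    "parameters": {
        "type": {"value": "web"},
        "name": {"value": "my_app_insights_resource"},
        "regionId": {"value": "westus2"},
        "tagsArray": {"value": {}},
        "requestSource": {"value": "CustomDeployment"},
    }
}

print(missing_parameters(template, param_file))  # → []
```

Running this before `az deployment group create` catches a missing or misspelled parameter name locally instead of at deployment time.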
-## Workspace-based Application Insights resource
-
-The following sample creates a [workspace-based Application Insights resource](../app/create-workspace-resource.md). Workspace-based Application Insights are currently in **preview**.
+## Workspace-based Application Insights resource
+The following sample creates a [workspace-based Application Insights resource](../app/create-workspace-resource.md). Workspace-based Application Insights resources are currently in **preview**.
### Template file
+# [Bicep](#tab/bicep)
+
+```bicep
+@description('Name of Application Insights resource.')
+param name string
+
+@description('Type of app you are deploying. This field is for legacy reasons and will not impact the type of App Insights resource you deploy.')
+param type string
+
+@description('Which Azure Region to deploy the resource to. This must be a valid Azure regionId.')
+param regionId string
+
+@description('See documentation on tags: https://docs.microsoft.com/azure/azure-resource-manager/management/tag-resources.')
+param tagsArray object
+
+@description('Source of Azure Resource Manager deployment')
+param requestSource string
+
+@description('Log Analytics workspace ID to associate with your Application Insights resource.')
+param workspaceResourceId string
+
+resource component 'Microsoft.Insights/components@2020-02-02' = {
+ name: name
+ location: regionId
+ tags: tagsArray
+ kind: 'other'
+ properties: {
+ Application_Type: type
+ Flow_Type: 'Bluefield'
+ Request_Source: requestSource
+ WorkspaceResourceId: workspaceResourceId
+ }
+}
+```
+
+# [JSON](#tab/json)
+```json
+{
- "$schema": "http://schema.management.azure.com/schemas/2014-04-01-preview/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "name": {
- "type": "string",
- "metadata": {
- "description": "Name of Application Insights resource."
- }
- },
- "type": {
- "type": "string",
- "metadata": {
- "description": "Type of app you are deploying. This field is for legacy reasons and will not impact the type of App Insights resource you deploy."
- }
- },
- "regionId": {
- "type": "string",
- "metadata": {
- "description": "Which Azure Region to deploy the resource to. This must be a valid Azure regionId."
- }
- },
- "tagsArray": {
- "type": "object",
- "metadata": {
- "description": "See documentation on tags: https://docs.microsoft.com/azure/azure-resource-manager/management/tag-resources."
- }
- },
- "requestSource": {
- "type": "string",
- "metadata": {
- "description": "Source of Azure Resource Manager deployment"
- }
- },
- "workspaceResourceId": {
- "type": "string",
- "metadata": {
- "description": "Log Analytics workspace ID to associate with your Application Insights resource."
- }
- }
- },
- "resources": [
- {
- "name": "[parameters('name')]",
- "type": "microsoft.insights/components",
- "location": "[parameters('regionId')]",
- "tags": "[parameters('tagsArray')]",
- "apiVersion": "2020-02-02-preview",
- "properties": {
- "ApplicationId": "[parameters('name')]",
- "Application_Type": "[parameters('type')]",
- "Flow_Type": "Redfield",
- "Request_Source": "[parameters('requestSource')]",
- "WorkspaceResourceId": "[parameters('workspaceResourceId')]"
- }
- }
- ]
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "name": {
+ "type": "string",
+ "metadata": {
+ "description": "Name of Application Insights resource."
+ }
+ },
+ "type": {
+ "type": "string",
+ "metadata": {
+ "description": "Type of app you are deploying. This field is for legacy reasons and will not impact the type of App Insights resource you deploy."
+ }
+ },
+ "regionId": {
+ "type": "string",
+ "metadata": {
+ "description": "Which Azure Region to deploy the resource to. This must be a valid Azure regionId."
+ }
+ },
+ "tagsArray": {
+ "type": "object",
+ "metadata": {
+ "description": "See documentation on tags: https://docs.microsoft.com/azure/azure-resource-manager/management/tag-resources."
+ }
+ },
+ "requestSource": {
+ "type": "string",
+ "metadata": {
+ "description": "Source of Azure Resource Manager deployment"
+ }
+ },
+ "workspaceResourceId": {
+ "type": "string",
+ "metadata": {
+ "description": "Log Analytics workspace ID to associate with your Application Insights resource."
+ }
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Insights/components",
+ "apiVersion": "2020-02-02",
+ "name": "[parameters('name')]",
+ "location": "[parameters('regionId')]",
+ "tags": "[parameters('tagsArray')]",
+ "kind": "other",
+ "properties": {
+ "Application_Type": "[parameters('type')]",
+ "Flow_Type": "Bluefield",
+ "Request_Source": "[parameters('requestSource')]",
+ "WorkspaceResourceId": "[parameters('workspaceResourceId')]"
+ }
+ }
+ ]
}
```

### Parameter file

```json
azure-monitor Resource Manager Function App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/resource-manager-function-app.md
Title: Resource Manager template samples for Azure Function App + Application Insights Resources description: Sample Azure Resource Manager templates to deploy an Azure Function App with an Application Insights resource. Previously updated : 08/06/2020 Last updated : 04/27/2022 # Resource Manager template sample for creating Azure Function apps with Application Insights monitoring
This article includes sample [Azure Resource Manager templates](../../azure-reso
## Azure Function App
-The following sample creates a .NET Core 3.1 Azure Function app running on a Windows App Service plan and a [classic Application Insights resource](../app/create-new-resource.md) with monitoring enabled.
+The following sample creates a .NET Core 3.1 Azure Function app running on a Windows App Service plan and a [classic Application Insights resource](../app/create-new-resource.md) with monitoring enabled.
### Template file
-```json
-{
- "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "subscriptionId": {
- "type": "string"
- },
- "name": {
- "type": "string"
- },
- "location": {
- "type": "string"
- },
- "hostingPlanName": {
- "type": "string"
- },
- "serverFarmResourceGroup": {
- "type": "string"
- },
- "alwaysOn": {
- "type": "bool"
- },
- "storageAccountName": {
- "type": "string"
+# [Bicep](#tab/bicep)
+
+```bicep
+param subscriptionId string
+param name string
+param location string
+param hostingPlanName string
+param serverFarmResourceGroup string
+param alwaysOn bool
+param storageAccountName string
+
+resource site 'Microsoft.Web/sites@2021-03-01' = {
+ name: name
+ kind: 'functionapp'
+ location: location
+ properties: {
+ siteConfig: {
+ appSettings: [
+ {
+ name: 'FUNCTIONS_EXTENSION_VERSION'
+ value: '~3'
}
- },
- "resources": [
{
- "apiVersion": "2018-11-01",
- "name": "[parameters('name')]",
- "type": "Microsoft.Web/sites",
- "kind": "functionapp",
- "location": "[parameters('location')]",
- "tags": {},
- "dependsOn": [
- "microsoft.insights/components/function-app-01",
- "[concat('Microsoft.Storage/storageAccounts/', parameters('storageAccountName'))]"
- ],
- "properties": {
- "name": "[parameters('name')]",
- "siteConfig": {
- "appSettings": [
- {
- "name": "FUNCTIONS_EXTENSION_VERSION",
- "value": "~3"
- },
- {
- "name": "FUNCTIONS_WORKER_RUNTIME",
- "value": "dotnet"
- },
- {
- "name": "APPINSIGHTS_INSTRUMENTATIONKEY",
- "value": "[reference('microsoft.insights/components/function-app-01', '2015-05-01').InstrumentationKey]"
- },
- {
- "name": "APPLICATIONINSIGHTS_CONNECTION_STRING",
- "value": "[reference('microsoft.insights/components/function-app-01', '2015-05-01').ConnectionString]"
- },
- {
- "name": "AzureWebJobsStorage",
- "value": "[concat('DefaultEndpointsProtocol=https;AccountName=',parameters('storageAccountName'),';AccountKey=',listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName')), '2019-06-01').keys[0].value,';EndpointSuffix=','core.windows.net')]"
- }
- ],
- "alwaysOn": "[parameters('alwaysOn')]"
- },
- "serverFarmId": "[concat('/subscriptions/', parameters('subscriptionId'),'/resourcegroups/', parameters('serverFarmResourceGroup'), '/providers/Microsoft.Web/serverfarms/', parameters('hostingPlanName'))]",
- "clientAffinityEnabled": true
- }
- },
+ name: 'FUNCTIONS_WORKER_RUNTIME'
+ value: 'dotnet'
+ }
{
- "apiVersion": "2015-05-01",
- "name": "function-app-01",
- "type": "microsoft.insights/components",
- "location": "centralus",
- "tags": {},
- "properties": {
- "ApplicationId": "[parameters('name')]",
- "Request_Source": "IbizaWebAppExtensionCreate"
- }
- },
+ name: 'APPINSIGHTS_INSTRUMENTATIONKEY'
+ value: reference('microsoft.insights/components/function-app-01', '2015-05-01').InstrumentationKey
+ }
+ {
+ name: 'APPLICATIONINSIGHTS_CONNECTION_STRING'
+ value: reference('microsoft.insights/components/function-app-01', '2015-05-01').ConnectionString
+ }
{
- "apiVersion": "2019-06-01",
- "type": "Microsoft.Storage/storageAccounts",
- "name": "[parameters('storageAccountName')]",
- "location": "[parameters('location')]",
- "tags": {},
- "sku": {
- "name": "Standard_LRS"
+ name: 'AzureWebJobsStorage'
+ value: 'DefaultEndpointsProtocol=https;AccountName=${storageAccountName};AccountKey=${storageAccount.listKeys().keys[0].value};EndpointSuffix=core.windows.net'
+ }
+ ]
+ alwaysOn: alwaysOn
+ }
+ serverFarmId: '/subscriptions/${subscriptionId}/resourcegroups/${serverFarmResourceGroup}/providers/Microsoft.Web/serverfarms/${hostingPlanName}'
+ clientAffinityEnabled: true
+ }
+ dependsOn: [
+ functionApp
+ ]
+}
+
+resource functionApp 'microsoft.insights/components@2015-05-01' = {
+ name: 'function-app-01'
+ location: location
+ kind: 'web'
+ properties: {
+ Application_Type: 'web'
+ Request_Source: 'rest'
+ }
+}
+
+resource storageAccount 'Microsoft.Storage/storageAccounts@2021-08-01' = {
+ name: storageAccountName
+ location: location
+ sku: {
+ name: 'Standard_LRS'
+ }
+ kind: 'Storage'
+ properties: {
+ supportsHttpsTrafficOnly: true
+ }
+}
+```
+
+# [JSON](#tab/json)
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "subscriptionId": {
+ "type": "string"
+ },
+ "name": {
+ "type": "string"
+ },
+ "location": {
+ "type": "string"
+ },
+ "hostingPlanName": {
+ "type": "string"
+ },
+ "serverFarmResourceGroup": {
+ "type": "string"
+ },
+ "alwaysOn": {
+ "type": "bool"
+ },
+ "storageAccountName": {
+ "type": "string"
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Web/sites",
+ "apiVersion": "2021-03-01",
+ "name": "[parameters('name')]",
+ "kind": "functionapp",
+ "location": "[parameters('location')]",
+ "properties": {
+ "siteConfig": {
+ "appSettings": [
+ {
+ "name": "FUNCTIONS_EXTENSION_VERSION",
+ "value": "~3"
+ },
+ {
+ "name": "FUNCTIONS_WORKER_RUNTIME",
+ "value": "dotnet"
+ },
+ {
+ "name": "APPINSIGHTS_INSTRUMENTATIONKEY",
+ "value": "[reference('microsoft.insights/components/function-app-01', '2015-05-01').InstrumentationKey]"
+ },
+ {
+ "name": "APPLICATIONINSIGHTS_CONNECTION_STRING",
+ "value": "[reference('microsoft.insights/components/function-app-01', '2015-05-01').ConnectionString]"
},
- "properties": {
- "supportsHttpsTrafficOnly": true
+ {
+ "name": "AzureWebJobsStorage",
+ "value": "[format('DefaultEndpointsProtocol=https;AccountName={0};AccountKey={1};EndpointSuffix=core.windows.net', parameters('storageAccountName'), listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName')), '2021-08-01').keys[0])]"
}
- }
- ]
+ ],
+ "alwaysOn": "[parameters('alwaysOn')]"
+ },
+ "serverFarmId": "[format('/subscriptions/{0}/resourcegroups/{1}/providers/Microsoft.Web/serverfarms/{2}', parameters('subscriptionId'), parameters('serverFarmResourceGroup'), parameters('hostingPlanName'))]",
+ "clientAffinityEnabled": true
+ },
+ "dependsOn": [
+ "[resourceId('Microsoft.Insights/components', 'function-app-01')]",
+ "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName'))]"
+ ]
+ },
+ {
+ "type": "Microsoft.Insights/components",
+ "apiVersion": "2015-05-01",
+ "name": "function-app-01",
+ "location": "[parameters('location')]",
+ "kind": "web",
+ "properties": {
+ "Application_Type": "web",
+ "Request_Source": "rest"
+ }
+ },
+ {
+ "type": "Microsoft.Storage/storageAccounts",
+ "apiVersion": "2021-08-01",
+ "name": "[parameters('storageAccountName')]",
+ "location": "[parameters('location')]",
+ "sku": {
+ "name": "Standard_LRS"
+ },
+ "kind": "Storage",
+ "properties": {
+ "supportsHttpsTrafficOnly": true
+ }
+ }
+ ]
}
```

### Parameters file

```json
{
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
"contentVersion": "1.0.0.0", "parameters": { "subscriptionId": {
azure-monitor Resource Manager Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/resource-manager-web-app.md
Title: Resource Manager template samples for Azure App Service + Application Ins
description: Sample Azure Resource Manager templates to deploy an Azure App Service with an Application Insights resource. Previously updated : 08/06/2020 Last updated : 04/27/2022 # Resource Manager template samples for creating Azure App Services web apps with Application Insights monitoring
This article includes sample [Azure Resource Manager templates](../../azure-reso
## .NET Core runtime
-The following sample creates a basic Azure App Service web app with the .NET Core runtime and a [classic Application Insights resource](../app/create-new-resource.md) with monitoring enabled.
+The following sample creates a basic Azure App Service web app with the .NET Core runtime and a [classic Application Insights resource](../app/create-new-resource.md) with monitoring enabled.
### Template file
+# [Bicep](#tab/bicep)
+
+```bicep
+param subscriptionId string
+param name string
+param location string
+param hostingPlanName string
+param serverFarmResourceGroup string
+param alwaysOn bool
+param phpVersion string
+param errorLink string
+
+resource site 'Microsoft.Web/sites@2021-03-01' = {
+ name: name
+ location: location
+ properties: {
+ siteConfig: {
+ appSettings: [
+ {
+ name: 'APPINSIGHTS_INSTRUMENTATIONKEY'
+ value: reference('microsoft.insights/components/web-app-name-01', '2015-05-01').InstrumentationKey
+ }
+ {
+ name: 'APPLICATIONINSIGHTS_CONNECTION_STRING'
+ value: reference('microsoft.insights/components/web-app-name-01', '2015-05-01').ConnectionString
+ }
+ {
+ name: 'ApplicationInsightsAgent_EXTENSION_VERSION'
+ value: '~2'
+ }
+ {
+ name: 'XDT_MicrosoftApplicationInsights_Mode'
+ value: 'default'
+ }
+ {
+ name: 'ANCM_ADDITIONAL_ERROR_PAGE_LINK'
+ value: errorLink
+ }
+ ]
+ phpVersion: phpVersion
+ alwaysOn: alwaysOn
+ }
+ serverFarmId: '/subscriptions/${subscriptionId}/resourcegroups/${serverFarmResourceGroup}/providers/Microsoft.Web/serverfarms/${hostingPlanName}'
+ clientAffinityEnabled: true
+ }
+ dependsOn: [
+ webApp
+ ]
+}
+
+resource webApp 'Microsoft.Insights/components@2020-02-02' = {
+ name: 'web-app-name-01'
+ location: location
+ kind: 'web'
+ properties: {
+ Application_Type: 'web'
+ Request_Source: 'rest'
+ }
+}
+```
+
+# [JSON](#tab/json)
```json
{
- "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "subscriptionId": {
- "type": "string"
- },
- "name": {
- "type": "string"
- },
- "location": {
- "type": "string"
- },
- "hostingPlanName": {
- "type": "string"
- },
- "serverFarmResourceGroup": {
- "type": "string"
- },
- "alwaysOn": {
- "type": "bool"
- },
- "currentStack": {
- "type": "string"
- },
- "phpVersion": {
- "type": "string"
- },
- "errorLink": {
- "type": "string"
- }
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "subscriptionId": {
+ "type": "string"
},
- "resources": [
- {
- "apiVersion": "2018-11-01",
- "name": "[parameters('name')]",
- "type": "Microsoft.Web/sites",
- "location": "[parameters('location')]",
- "tags": {},
- "dependsOn": [
- "microsoft.insights/components/web-app-name-01"
- ],
- "properties": {
- "name": "[parameters('name')]",
- "siteConfig": {
- "appSettings": [
- {
- "name": "APPINSIGHTS_INSTRUMENTATIONKEY",
- "value": "[reference('microsoft.insights/components/web-app-name-01', '2015-05-01').InstrumentationKey]"
- },
- {
- "name": "APPLICATIONINSIGHTS_CONNECTION_STRING",
- "value": "[reference('microsoft.insights/components/web-app-name-01', '2015-05-01').ConnectionString]"
- },
- {
- "name": "ApplicationInsightsAgent_EXTENSION_VERSION",
- "value": "~2"
- },
- {
- "name": "XDT_MicrosoftApplicationInsights_Mode",
- "value": "default"
- },
- {
- "name": "ANCM_ADDITIONAL_ERROR_PAGE_LINK",
- "value": "[parameters('errorLink')]"
- }
- ],
- "metadata": [
- {
- "name": "CURRENT_STACK",
- "value": "[parameters('currentStack')]"
- }
- ],
- "phpVersion": "[parameters('phpVersion')]",
- "alwaysOn": "[parameters('alwaysOn')]"
- },
- "serverFarmId": "[concat('/subscriptions/', parameters('subscriptionId'),'/resourcegroups/', parameters('serverFarmResourceGroup'), '/providers/Microsoft.Web/serverfarms/', parameters('hostingPlanName'))]",
- "clientAffinityEnabled": true
+ "name": {
+ "type": "string"
+ },
+ "location": {
+ "type": "string"
+ },
+ "hostingPlanName": {
+ "type": "string"
+ },
+ "serverFarmResourceGroup": {
+ "type": "string"
+ },
+ "alwaysOn": {
+ "type": "bool"
+ },
+ "phpVersion": {
+ "type": "string"
+ },
+ "errorLink": {
+ "type": "string"
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Web/sites",
+ "apiVersion": "2021-03-01",
+ "name": "[parameters('name')]",
+ "location": "[parameters('location')]",
+ "properties": {
+ "siteConfig": {
+ "appSettings": [
+ {
+ "name": "APPINSIGHTS_INSTRUMENTATIONKEY",
+ "value": "[reference('microsoft.insights/components/web-app-name-01', '2015-05-01').InstrumentationKey]"
+ },
+ {
+ "name": "APPLICATIONINSIGHTS_CONNECTION_STRING",
+ "value": "[reference('microsoft.insights/components/web-app-name-01', '2015-05-01').ConnectionString]"
+ },
+ {
+ "name": "ApplicationInsightsAgent_EXTENSION_VERSION",
+ "value": "~2"
+ },
+ {
+ "name": "XDT_MicrosoftApplicationInsights_Mode",
+ "value": "default"
+ },
+ {
+ "name": "ANCM_ADDITIONAL_ERROR_PAGE_LINK",
+ "value": "[parameters('errorLink')]"
}
+ ],
+ "phpVersion": "[parameters('phpVersion')]",
+ "alwaysOn": "[parameters('alwaysOn')]"
},
- {
- "apiVersion": "2015-05-01",
- "name": "web-app-name-01",
- "type": "microsoft.insights/components",
- "location": "centralus",
- "tags": {},
- "properties": {
- "ApplicationId": "[parameters('name')]",
- "Request_Source": "IbizaWebAppExtensionCreate"
- }
- }
- ]
+ "serverFarmId": "[format('/subscriptions/{0}/resourcegroups/{1}/providers/Microsoft.Web/serverfarms/{2}', parameters('subscriptionId'), parameters('serverFarmResourceGroup'), parameters('hostingPlanName'))]",
+ "clientAffinityEnabled": true
+ },
+ "dependsOn": [
+ "[resourceId('Microsoft.Insights/components', 'web-app-name-01')]"
+ ]
+ },
+ {
+ "type": "Microsoft.Insights/components",
+ "apiVersion": "2020-02-02",
+ "name": "web-app-name-01",
+ "location": "[parameters('location')]",
+ "kind": "web",
+ "properties": {
+ "Application_Type": "web",
+ "Request_Source": "rest"
+ }
+ }
+ ]
}
```

### Parameter file

```json
{
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
"contentVersion": "1.0.0.0", "parameters": { "subscriptionId": {
The following sample creates a basic Azure App Service web app with the .NET Cor
## ASP.NET runtime
-The following sample creates a basic Azure App Service web app with the ASP.NET runtime and a [classic Application Insights resource](../app/create-new-resource.md) with monitoring enabled.
+The following sample creates a basic Azure App Service web app with the ASP.NET runtime and a [classic Application Insights resource](../app/create-new-resource.md) with monitoring enabled.
### Template file
+# [Bicep](#tab/bicep)
+
+```bicep
+param subscriptionId string
+param name string
+param location string
+param hostingPlanName string
+param serverFarmResourceGroup string
+param alwaysOn bool
+param phpVersion string
+param netFrameworkVersion string
+
+resource sites 'Microsoft.Web/sites@2021-03-01' = {
+ name: name
+ location: location
+ properties: {
+ siteConfig: {
+ appSettings: [
+ {
+ name: 'APPINSIGHTS_INSTRUMENTATIONKEY'
+ value: reference('microsoft.insights/components/web-app-name-01', '2015-05-01').InstrumentationKey
+ }
+ {
+ name: 'APPLICATIONINSIGHTS_CONNECTION_STRING'
+ value: reference('microsoft.insights/components/web-app-name-01', '2015-05-01').ConnectionString
+ }
+ {
+ name: 'ApplicationInsightsAgent_EXTENSION_VERSION'
+ value: '~2'
+ }
+ {
+ name: 'XDT_MicrosoftApplicationInsights_Mode'
+ value: 'default'
+ }
+ ]
+ phpVersion: phpVersion
+ netFrameworkVersion: netFrameworkVersion
+ alwaysOn: alwaysOn
+ }
+ serverFarmId: '/subscriptions/${subscriptionId}/resourcegroups/${serverFarmResourceGroup}/providers/Microsoft.Web/serverfarms/${hostingPlanName}'
+ clientAffinityEnabled: true
+ }
+ dependsOn: [
+ webApp
+ ]
+}
+
+resource webApp 'Microsoft.Insights/components@2020-02-02' = {
+ name: 'web-app-name-01'
+ location: location
+ kind: 'web'
+ properties: {
+ Application_Type: 'web'
+ Request_Source: 'rest'
+ }
+}
+
+```
+
+# [JSON](#tab/json)
```json
{
- "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "subscriptionId": {
- "type": "string"
- },
- "name": {
- "type": "string"
- },
- "location": {
- "type": "string"
- },
- "hostingPlanName": {
- "type": "string"
- },
- "serverFarmResourceGroup": {
- "type": "string"
- },
- "alwaysOn": {
- "type": "bool"
- },
- "currentStack": {
- "type": "string"
- },
- "phpVersion": {
- "type": "string"
- },
- "netFrameworkVersion": {
- "type": "string"
- }
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "subscriptionId": {
+ "type": "string"
},
- "resources": [
- {
- "apiVersion": "2018-11-01",
- "name": "[parameters('name')]",
- "type": "Microsoft.Web/sites",
- "location": "[parameters('location')]",
- "tags": {},
- "dependsOn": [
- "microsoft.insights/components/web-app-name-01"
- ],
- "properties": {
- "name": "[parameters('name')]",
- "siteConfig": {
- "appSettings": [
- {
- "name": "APPINSIGHTS_INSTRUMENTATIONKEY",
- "value": "[reference('microsoft.insights/components/web-app-name-01', '2015-05-01').InstrumentationKey]"
- },
- {
- "name": "APPLICATIONINSIGHTS_CONNECTION_STRING",
- "value": "[reference('microsoft.insights/components/web-app-name-01', '2015-05-01').ConnectionString]"
- },
- {
- "name": "ApplicationInsightsAgent_EXTENSION_VERSION",
- "value": "~2"
- },
- {
- "name": "XDT_MicrosoftApplicationInsights_Mode",
- "value": "default"
- }
- ],
- "metadata": [
- {
- "name": "CURRENT_STACK",
- "value": "[parameters('currentStack')]"
- }
- ],
- "phpVersion": "[parameters('phpVersion')]",
- "netFrameworkVersion": "[parameters('netFrameworkVersion')]",
- "alwaysOn": "[parameters('alwaysOn')]"
- },
- "serverFarmId": "[concat('/subscriptions/', parameters('subscriptionId'),'/resourcegroups/', parameters('serverFarmResourceGroup'), '/providers/Microsoft.Web/serverfarms/', parameters('hostingPlanName'))]",
- "clientAffinityEnabled": true
+ "name": {
+ "type": "string"
+ },
+ "location": {
+ "type": "string"
+ },
+ "hostingPlanName": {
+ "type": "string"
+ },
+ "serverFarmResourceGroup": {
+ "type": "string"
+ },
+ "alwaysOn": {
+ "type": "bool"
+ },
+ "phpVersion": {
+ "type": "string"
+ },
+ "netFrameworkVersion": {
+ "type": "string"
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Web/sites",
+ "apiVersion": "2021-03-01",
+ "name": "[parameters('name')]",
+ "location": "[parameters('location')]",
+ "properties": {
+ "siteConfig": {
+ "appSettings": [
+ {
+ "name": "APPINSIGHTS_INSTRUMENTATIONKEY",
+ "value": "[reference('microsoft.insights/components/web-app-name-01', '2015-05-01').InstrumentationKey]"
+ },
+ {
+ "name": "APPLICATIONINSIGHTS_CONNECTION_STRING",
+ "value": "[reference('microsoft.insights/components/web-app-name-01', '2015-05-01').ConnectionString]"
+ },
+ {
+ "name": "ApplicationInsightsAgent_EXTENSION_VERSION",
+ "value": "~2"
+ },
+ {
+ "name": "XDT_MicrosoftApplicationInsights_Mode",
+ "value": "default"
}
+ ],
+ "phpVersion": "[parameters('phpVersion')]",
+ "netFrameworkVersion": "[parameters('netFrameworkVersion')]",
+ "alwaysOn": "[parameters('alwaysOn')]"
},
- {
- "apiVersion": "2015-05-01",
- "name": "web-app-name-01",
- "type": "microsoft.insights/components",
- "location": "centralus",
- "tags": {},
- "properties": {
- "ApplicationId": "[parameters('name')]",
- "Request_Source": "IbizaWebAppExtensionCreate"
- }
- }
- ]
+ "serverFarmId": "[format('/subscriptions/{0}/resourcegroups/{1}/providers/Microsoft.Web/serverfarms/{2}', parameters('subscriptionId'), parameters('serverFarmResourceGroup'), parameters('hostingPlanName'))]",
+ "clientAffinityEnabled": true
+ },
+ "dependsOn": [
+ "[resourceId('Microsoft.Insights/components', 'web-app-name-01')]"
+ ]
+ },
+ {
+ "type": "Microsoft.Insights/components",
+ "apiVersion": "2020-02-02",
+ "name": "web-app-name-01",
+ "location": "[parameters('location')]",
+ "kind": "web",
+ "properties": {
+ "Application_Type": "web",
+ "Request_Source": "rest"
+ }
+ }
+ ]
}
```

### Parameters file

```json
{
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
"contentVersion": "1.0.0.0", "parameters": { "subscriptionId": {
The following sample creates a basic Azure App Service web app with the ASP.NET
## Node.js runtime (Linux)
-The following sample creates a basic Azure App Service Linux web app with the Node.js runtime and a [classic Application Insights resource](../app/create-new-resource.md) with monitoring enabled.
+The following sample creates a basic Azure App Service Linux web app with the Node.js runtime and a [classic Application Insights resource](../app/create-new-resource.md) with monitoring enabled.
### Template file
+# [Bicep](#tab/bicep)
+
+```bicep
+param subscriptionId string
+param name string
+param location string
+param hostingPlanName string
+param serverFarmResourceGroup string
+param alwaysOn bool
+param linuxFxVersion string
+
+resource site 'Microsoft.Web/sites@2021-03-01' = {
+ name: name
+ location: location
+ properties: {
+ siteConfig: {
+ appSettings: [
+ {
+ name: 'APPINSIGHTS_INSTRUMENTATIONKEY'
+ value: reference('microsoft.insights/components/web-app-name-01', '2015-05-01').InstrumentationKey
+ }
+ {
+ name: 'APPLICATIONINSIGHTS_CONNECTION_STRING'
+ value: reference('microsoft.insights/components/web-app-name-01', '2015-05-01').ConnectionString
+ }
+ {
+ name: 'ApplicationInsightsAgent_EXTENSION_VERSION'
+ value: '~2'
+ }
+ {
+ name: 'XDT_MicrosoftApplicationInsights_Mode'
+ value: 'default'
+ }
+ ]
+ linuxFxVersion: linuxFxVersion
+ alwaysOn: alwaysOn
+ }
+ serverFarmId: '/subscriptions/${subscriptionId}/resourcegroups/${serverFarmResourceGroup}/providers/Microsoft.Web/serverfarms/${hostingPlanName}'
+ clientAffinityEnabled: false
+ }
+ dependsOn: [
+ webApp
+ ]
+}
+
+resource webApp 'Microsoft.Insights/components@2020-02-02' = {
+ name: 'web-app-name-01'
+ location: location
+ kind: 'web'
+ properties: {
+ Application_Type: 'web'
+ Request_Source: 'rest'
+ }
+}
+```
+
+# [JSON](#tab/json)
```json
{
- "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "subscriptionId": {
- "type": "string"
- },
- "name": {
- "type": "string"
- },
- "location": {
- "type": "string"
- },
- "hostingPlanName": {
- "type": "string"
- },
- "serverFarmResourceGroup": {
- "type": "string"
- },
- "alwaysOn": {
- "type": "bool"
- },
- "linuxFxVersion": {
- "type": "string"
- }
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "subscriptionId": {
+ "type": "string"
},
- "resources": [
- {
- "apiVersion": "2018-11-01",
- "name": "[parameters('name')]",
- "type": "Microsoft.Web/sites",
- "location": "[parameters('location')]",
- "tags": {},
- "dependsOn": [
- "microsoft.insights/components/web-app-name-01"
- ],
- "properties": {
- "name": "[parameters('name')]",
- "siteConfig": {
- "appSettings": [
- {
- "name": "APPINSIGHTS_INSTRUMENTATIONKEY",
- "value": "[reference('microsoft.insights/components/web-app-name-01', '2015-05-01').InstrumentationKey]"
- },
- {
- "name": "APPLICATIONINSIGHTS_CONNECTION_STRING",
- "value": "[reference('microsoft.insights/components/web-app-name-01', '2015-05-01').ConnectionString]"
- },
- {
- "name": "ApplicationInsightsAgent_EXTENSION_VERSION",
- "value": "~2"
- },
- {
- "name": "XDT_MicrosoftApplicationInsights_Mode",
- "value": "default"
- }
- ],
- "linuxFxVersion": "[parameters('linuxFxVersion')]",
- "alwaysOn": "[parameters('alwaysOn')]"
- },
- "serverFarmId": "[concat('/subscriptions/', parameters('subscriptionId'),'/resourcegroups/', parameters('serverFarmResourceGroup'), '/providers/Microsoft.Web/serverfarms/', parameters('hostingPlanName'))]",
- "clientAffinityEnabled": false
+ "name": {
+ "type": "string"
+ },
+ "location": {
+ "type": "string"
+ },
+ "hostingPlanName": {
+ "type": "string"
+ },
+ "serverFarmResourceGroup": {
+ "type": "string"
+ },
+ "alwaysOn": {
+ "type": "bool"
+ },
+ "linuxFxVersion": {
+ "type": "string"
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Web/sites",
+ "apiVersion": "2021-03-01",
+ "name": "[parameters('name')]",
+ "location": "[parameters('location')]",
+ "properties": {
+ "siteConfig": {
+ "appSettings": [
+ {
+ "name": "APPINSIGHTS_INSTRUMENTATIONKEY",
+ "value": "[reference('microsoft.insights/components/web-app-name-01', '2015-05-01').InstrumentationKey]"
+ },
+ {
+ "name": "APPLICATIONINSIGHTS_CONNECTION_STRING",
+ "value": "[reference('microsoft.insights/components/web-app-name-01', '2015-05-01').ConnectionString]"
+ },
+ {
+ "name": "ApplicationInsightsAgent_EXTENSION_VERSION",
+ "value": "~2"
+ },
+ {
+ "name": "XDT_MicrosoftApplicationInsights_Mode",
+ "value": "default"
}
+ ],
+ "linuxFxVersion": "[parameters('linuxFxVersion')]",
+ "alwaysOn": "[parameters('alwaysOn')]"
},
- {
- "apiVersion": "2015-05-01",
- "name": "web-app-name-01",
- "type": "microsoft.insights/components",
- "location": "centralus",
- "tags": null,
- "properties": {
- "ApplicationId": "[parameters('name')]",
- "Request_Source": "IbizaWebAppExtensionCreate"
- }
- }
- ]
+ "serverFarmId": "[format('/subscriptions/{0}/resourcegroups/{1}/providers/Microsoft.Web/serverfarms/{2}', parameters('subscriptionId'), parameters('serverFarmResourceGroup'), parameters('hostingPlanName'))]",
+ "clientAffinityEnabled": false
+ },
+ "dependsOn": [
+ "[resourceId('Microsoft.Insights/components', 'web-app-name-01')]"
+ ]
+ },
+ {
+ "type": "Microsoft.Insights/components",
+ "apiVersion": "2020-02-02",
+ "name": "web-app-name-01",
+ "location": "[parameters('location')]",
+ "kind": "web",
+ "properties": {
+ "Application_Type": "web",
+ "Request_Source": "rest"
+ }
+ }
+ ]
}
```

### Parameter file

```json
{
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
"contentVersion": "1.0.0.0", "parameters": { "subscriptionId": {
azure-monitor Snapshot Debugger Function App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/snapshot-debugger-function-app.md
Below is an example of the `host.json` updated with the US Government Cloud agen
Below are the supported overrides of the Snapshot Debugger agent endpoint:
-|Property | US Government Cloud | China Cloud |
+|Property | US Government Cloud | China Cloud |
|||-|
|AgentEndpoint | `https://snapshot.monitor.azure.us` | `https://snapshot.monitor.azure.cn` |
azure-monitor Worker Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/worker-service.md
Last updated 05/11/2020
# Application Insights for Worker Service applications (non-HTTP applications)
-[Application Insights SDK for Worker Service](https://www.nuget.org/packages/Microsoft.ApplicationInsights.WorkerService) is a new SDK which is best suited for non-HTTP workloads like messaging, background tasks, console applications etc. These types of applications don't have the notion of an incoming HTTP request like a traditional ASP.NET/ASP.NET Core Web Application, and hence using Application Insights packages for [ASP.NET](asp-net.md) or [ASP.NET Core](asp-net-core.md) applications is not supported.
+The [Application Insights SDK for Worker Service](https://www.nuget.org/packages/Microsoft.ApplicationInsights.WorkerService) is a new SDK that's best suited for non-HTTP workloads like messaging, background tasks, and console applications. These types of applications don't have the notion of an incoming HTTP request like a traditional ASP.NET/ASP.NET Core web application does, and hence using Application Insights packages for [ASP.NET](asp-net.md) or [ASP.NET Core](asp-net-core.md) applications isn't supported.
-The new SDK does not do any telemetry collection by itself. Instead, it brings in other well known Application Insights auto collectors like [DependencyCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.DependencyCollector/), [PerfCounterCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.PerfCounterCollector/), [ApplicationInsightsLoggingProvider](https://www.nuget.org/packages/Microsoft.Extensions.Logging.ApplicationInsights) etc. This SDK exposes extension methods on `IServiceCollection` to enable and configure telemetry collection.
+The new SDK doesn't do any telemetry collection by itself. Instead, it brings in other well-known Application Insights auto collectors such as [DependencyCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.DependencyCollector/), [PerfCounterCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.PerfCounterCollector/), and [ApplicationInsightsLoggingProvider](https://www.nuget.org/packages/Microsoft.Extensions.Logging.ApplicationInsights). This SDK exposes extension methods on `IServiceCollection` to enable and configure telemetry collection.
## Supported scenarios
The [Application Insights SDK for Worker Service](https://www.nuget.org/packages
## Prerequisites
-A valid Application Insights instrumentation key. This key is required to send any telemetry to Application Insights. If you need to create a new Application Insights resource to get an instrumentation key, see [Create an Application Insights resource](./create-new-resource.md).
+A valid Application Insights connection string. This string is required to send any telemetry to Application Insights. If you need to create a new Application Insights resource to get a connection string, see [Create an Application Insights resource](./create-new-resource.md).
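A connection string is a semicolon-delimited list of `key=value` pairs. As an illustrative sketch only (the key and endpoint below are placeholders, not real values), you can see its shape by pulling one field out with standard shell tools:

```shell
# Placeholder connection string; a real one comes from your Application Insights resource.
CONNECTION_STRING="InstrumentationKey=00000000-0000-0000-0000-000000000000;IngestionEndpoint=https://westus2.in.applicationinsights.azure.com/"

# Split on ';', find the InstrumentationKey pair, and take the value after '='.
IKEY=$(printf '%s' "$CONNECTION_STRING" | tr ';' '\n' | grep '^InstrumentationKey=' | cut -d'=' -f2)
echo "$IKEY"   # prints 00000000-0000-0000-0000-000000000000
```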
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]
A valid Application Insights instrumentation key. This key is required to send a
</ItemGroup>
```
-1. Call `AddApplicationInsightsTelemetryWorkerService(string instrumentationKey)` extension method on `IServiceCollection`, providing the instrumentation key. This method should be called at the beginning of the application. The exact location depends on the type of application.
+1. Configure the connection string in the `APPLICATIONINSIGHTS_CONNECTION_STRING` environment variable or in configuration (`appsettings.json`).
+ 1. Retrieve an `ILogger` instance or `TelemetryClient` instance from the Dependency Injection (DI) container by calling `serviceProvider.GetRequiredService<TelemetryClient>();` or by using constructor injection. This step triggers setting up `TelemetryConfiguration` and the auto-collection modules.
-Specific instructions for each type of application is described in the following sections.
+Specific instructions for each type of application are described in the following sections.
## .NET Core 3.0 worker service application
Full example is shared [here](https://github.com/microsoft/ApplicationInsights-d
}
```
-6. Set up the instrumentation key.
+6. Set up the connection string.
+
- Although you can provide the instrumentation key as an argument to `AddApplicationInsightsTelemetryWorkerService`, we recommend that you specify the instrumentation key in configuration. The following code sample shows how to specify an instrumentation key in `appsettings.json`. Make sure `appsettings.json` is copied to the application root folder during publishing.
+> [!NOTE]
+> We recommend that you specify the connection string in configuration. The following code sample shows how to specify a connection string in `appsettings.json`. Make sure `appsettings.json` is copied to the application root folder during publishing.
```json
{
  "ApplicationInsights": {
- "InstrumentationKey": "putinstrumentationkeyhere"
+ "ConnectionString" : "InstrumentationKey=00000000-0000-0000-0000-000000000000;"
  },
  "Logging": {
Full example is shared [here](https://github.com/microsoft/ApplicationInsights-d
}
```
-Alternatively, specify the instrumentation key in either of the following environment variables.
-`APPINSIGHTS_INSTRUMENTATIONKEY` or `ApplicationInsights:InstrumentationKey`
-
-For example:
-`SET ApplicationInsights:InstrumentationKey=putinstrumentationkeyhere`
-OR `SET APPINSIGHTS_INSTRUMENTATIONKEY=putinstrumentationkeyhere`
+Alternatively, specify the connection string in the `APPLICATIONINSIGHTS_CONNECTION_STRING` environment variable.
-Typically, `APPINSIGHTS_INSTRUMENTATIONKEY` specifies the instrumentation key for applications deployed to Web Apps as Web Jobs.
+Typically, `APPLICATIONINSIGHTS_CONNECTION_STRING` specifies the connection string for applications deployed to Web Apps as Web Jobs.
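For example, a minimal sketch of setting the variable for the current shell session before launching the worker (the key shown is a placeholder; on Windows `cmd` you'd use `set` instead of `export`):

```shell
# Placeholder key; substitute your resource's full connection string.
export APPLICATIONINSIGHTS_CONNECTION_STRING="InstrumentationKey=00000000-0000-0000-0000-000000000000;"
echo "$APPLICATIONINSIGHTS_CONNECTION_STRING"
```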
> [!NOTE]
-> An instrumentation key specified in code wins over the environment variable `APPINSIGHTS_INSTRUMENTATIONKEY`, which wins over other options.
+> A connection string specified in code takes precedence over the environment variable `APPLICATIONINSIGHTS_CONNECTION_STRING`, which takes precedence over other options.
## ASP.NET Core background tasks with hosted services
Full example is shared [here](https://github.com/microsoft/ApplicationInsights-d
services.AddLogging();
services.AddHostedService<TimedHostedService>();
- // instrumentation key is read automatically from appsettings.json
+ // connection string is read automatically from appsettings.json
services.AddApplicationInsightsTelemetryWorkerService();
})
.UseConsoleLifetime()
Following is the code for `TimedHostedService` where the background task logic r
}
```
-3. Set up the instrumentation key.
+3. Set up the connection string.
Use the same `appsettings.json` from the .NET Core 3.0 Worker Service example above.

## .NET Core/.NET Framework Console application
Full example is shared [here](https://github.com/microsoft/ApplicationInsights-d
IServiceCollection services = new ServiceCollection();
// Being a regular console app, there is no appsettings.json or configuration providers enabled by default.
- // Hence instrumentation key and any changes to default logging level must be specified here.
+ // Hence connection string and any changes to default logging level must be specified here.
services.AddLogging(loggingBuilder => loggingBuilder.AddFilter<Microsoft.Extensions.Logging.ApplicationInsights.ApplicationInsightsLoggerProvider>("Category", LogLevel.Information));
- services.AddApplicationInsightsTelemetryWorkerService("instrumentationkeyhere");
+ services.AddApplicationInsightsTelemetryWorkerService("connection string here");
// Build ServiceProvider.
IServiceProvider serviceProvider = services.BuildServiceProvider();
This console application also uses the same default `TelemetryConfiguration`, an
## Run your application
-Run your application. The example workers from all of the above makes an http call every second to bing.com, and also emits few logs using `ILogger`. These lines are wrapped inside `StartOperation` call of `TelemetryClient`, which is used to create an operation (in this example `RequestTelemetry` named "operation"). Application Insights will collect these ILogger logs (warning or above by default) and dependencies, and they will be correlated to the `RequestTelemetry` with parent-child relationship. The correlation also works cross process/network boundary. For example, if the call was made to another monitored component, then it will be correlated to this parent as well.
+Run your application. The example workers from all of the above make an HTTP call every second to bing.com and also emit a few logs using `ILogger`. These lines are wrapped inside the `StartOperation` call of `TelemetryClient`, which is used to create an operation (in this example, a `RequestTelemetry` named "operation"). Application Insights will collect these ILogger logs (warning or above by default) and dependencies, and they'll be correlated to the `RequestTelemetry` with a parent-child relationship. The correlation also works across process/network boundaries. For example, if the call was made to another monitored component, it will be correlated to this parent as well.
-This custom operation of `RequestTelemetry` can be thought of as the equivalent of an incoming web request in a typical Web Application. While it is not necessary to use an Operation, it fits best with the [Application Insights correlation data model](./correlation.md) - with `RequestTelemetry` acting as the parent operation, and every telemetry generated inside the worker iteration being treated as logically belonging to the same operation. This approach also ensures all the telemetry generated (automatic and manual) will have the same `operation_id`. As sampling is based on `operation_id`, sampling algorithm either keeps or drops all of the telemetry from a single iteration.
+This custom operation of `RequestTelemetry` can be thought of as the equivalent of an incoming web request in a typical web application. While it isn't necessary to use an operation, it fits best with the [Application Insights correlation data model](./correlation.md), with `RequestTelemetry` acting as the parent operation and every telemetry item generated inside the worker iteration treated as logically belonging to the same operation. This approach also ensures that all the telemetry generated (automatic and manual) has the same `operation_id`. Because sampling is based on `operation_id`, the sampling algorithm either keeps or drops all of the telemetry from a single iteration.
The following lists the full telemetry automatically collected by Application Insights.
Dependency collection is enabled by default. [This](asp-net-dependencies.md#auto
`EventCounterCollectionModule` is enabled by default, and it will collect a default set of counters from .NET Core 3.0 apps. The [EventCounter](eventcounters.md) tutorial lists the default set of counters collected. It also has instructions on customizing the list.
-### Manually tracking additional telemetry
+### Manually tracking other telemetry
-While the SDK automatically collects telemetry as explained above, in most cases user will need to send additional telemetry to Application Insights service. The recommended way to track additional telemetry is by obtaining an instance of `TelemetryClient` from Dependency Injection, and then calling one of the supported `TrackXXX()` [API](api-custom-events-metrics.md) methods on it. Another typical use case is [custom tracking of operations](custom-operations-tracking.md). This approach is demonstrated in the Worker examples above.
+While the SDK automatically collects telemetry as explained above, in most cases you'll need to send other telemetry to the Application Insights service. The recommended way to track other telemetry is to obtain an instance of `TelemetryClient` from Dependency Injection and then call one of the supported `TrackXXX()` [API](api-custom-events-metrics.md) methods on it. Another typical use case is [custom tracking of operations](custom-operations-tracking.md). This approach is demonstrated in the Worker examples above.
## Configure the Application Insights SDK
public void ConfigureServices(IServiceCollection services)
}
```
-Note that `ApplicationInsightsServiceOptions` in this SDK is in the namespace `Microsoft.ApplicationInsights.WorkerService` as opposed to `Microsoft.ApplicationInsights.AspNetCore.Extensions` in the ASP.NET Core SDK.
+The `ApplicationInsightsServiceOptions` in this SDK is in the namespace `Microsoft.ApplicationInsights.WorkerService` as opposed to `Microsoft.ApplicationInsights.AspNetCore.Extensions` in the ASP.NET Core SDK.
Commonly used settings in `ApplicationInsightsServiceOptions`
Commonly used settings in `ApplicationInsightsServiceOptions`
|EnableAdaptiveSampling | Enable/Disable Adaptive Sampling | true
|EnableHeartbeat | Enable/Disable Heartbeats feature, which periodically (15-min default) sends a custom metric named 'HeartBeatState' with information about the runtime like .NET Version, Azure Environment information, if applicable, etc. | true
|AddAutoCollectedMetricExtractor | Enable/Disable AutoCollectedMetrics extractor, which is a TelemetryProcessor that sends pre-aggregated metrics about Requests/Dependencies before sampling takes place. | true
-|EnableDiagnosticsTelemetryModule | Enable/Disable `DiagnosticsTelemetryModule`. Disabling this will cause the following settings to be ignored; `EnableHeartbeat`, `EnableAzureInstanceMetadataTelemetryModule`, `EnableAppServicesHeartbeatTelemetryModule` | true
+|EnableDiagnosticsTelemetryModule | Enable/Disable `DiagnosticsTelemetryModule`. Disabling this setting will cause the following settings to be ignored; `EnableHeartbeat`, `EnableAzureInstanceMetadataTelemetryModule`, `EnableAppServicesHeartbeatTelemetryModule` | true
See the [configurable settings in `ApplicationInsightsServiceOptions`](https://github.com/microsoft/ApplicationInsights-dotnet/blob/develop/NETCORE/src/Shared/Extensions/ApplicationInsightsServiceOptions.cs) for the most up-to-date list.
See the [configurable settings in `ApplicationInsightsServiceOptions`](https://g
The Application Insights SDK for Worker Service supports both fixed-rate and adaptive sampling. Adaptive sampling is enabled by default. Sampling can be disabled by using `EnableAdaptiveSampling` option in [ApplicationInsightsServiceOptions](#using-applicationinsightsserviceoptions)
-To configure additional sampling settings, the following example can be used.
+To configure other sampling settings, use the following example.
```csharp
using Microsoft.ApplicationInsights.Extensibility;
Add any new `TelemetryInitializer` to the `DependencyInjection` container and SD
### Removing TelemetryInitializers
-Telemetry initializers are present by default. To remove all or specific telemetry initializers, use the following sample code *after* you call `AddApplicationInsightsTelemetryWorkerService()`.
+Telemetry initializers are present by default. To remove all or specific telemetry initializers, use the following sample code *after* calling `AddApplicationInsightsTelemetryWorkerService()`.
```csharp
public void ConfigureServices(IServiceCollection services)
The following automatic-collection modules are enabled by default. These modules
* `DependencyTrackingTelemetryModule`
* `PerformanceCollectorModule`
* `QuickPulseTelemetryModule`
-* `AppServicesHeartbeatTelemetryModule` - (There is currently an issue involving this telemetry module. For a temporary workaround see [GitHub Issue 1689](https://github.com/microsoft/ApplicationInsights-dotnet/issues/1689
+* `AppServicesHeartbeatTelemetryModule` - (There's currently an issue involving this telemetry module. For a temporary workaround, see [GitHub Issue 1689](https://github.com/microsoft/ApplicationInsights-dotnet/issues/1689).)
* `AzureInstanceMetadataTelemetryModule`
Visual Studio IDE onboarding is currently supported only for ASP.NET/ASP.NET Cor
No, [Azure Monitor Application Insights Agent](./status-monitor-v2-overview.md) currently supports ASP.NET 4.x only.
-### If I run my application in Linux, are all features supported?
+### Are all features supported if I run my application in Linux?
Yes. Feature support for this SDK is the same in all platforms, with the following exceptions:
-* Performance counters are supported only in Windows with the exception of Process CPU/Memory shown in Live Metrics.
+* Performance counters are supported only on Windows, except for the Process CPU/Memory counters shown in Live Metrics.
* Even though `ServerTelemetryChannel` is enabled by default, if the application is running in Linux or macOS, the channel doesn't automatically create a local storage folder to keep telemetry temporarily if there are network issues. Because of this limitation, telemetry is lost when there are temporary network or server issues. To work around this issue, configure a local folder for the channel: ```csharp
using Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel;
## Sample applications

[.NET Core Console Application](https://github.com/microsoft/ApplicationInsights-dotnet/tree/develop/examples/ConsoleApp)
-Use this sample if you are using a Console Application written in either .NET Core (2.0 or higher) or .NET Framework (4.7.2 or higher)
+Use this sample if you're using a Console Application written in either .NET Core (2.0 or higher) or .NET Framework (4.7.2 or higher).
[ASP.NET Core background tasks with HostedServices](https://github.com/microsoft/ApplicationInsights-dotnet/tree/develop/examples/BackgroundTasksWithHostedService)

Use this sample if you're using ASP.NET Core 2.1/2.2 and creating background tasks per the official guidance [here](/aspnet/core/fundamentals/host/hosted-services).
azure-monitor Activity Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/activity-log.md
The Activity log is a [platform log](./platform-logs-overview.md) in Azure that provides insight into subscription-level events. The Activity log includes such information as when a resource is modified or when a virtual machine is started. You can view the Activity log in the Azure portal or retrieve entries with PowerShell and the CLI. This article provides details on viewing the Activity log and sending it to different destinations. For more functionality, you should create a diagnostic setting to send the Activity log to one or more of these locations for the following reasons:

-- to [Azure Monitor Logs](../logs/data-platform-logs.md) for more complex querying and alerting, and longer retention (up to 2 years)
+- to [Azure Monitor Logs](../logs/data-platform-logs.md) for more complex querying and alerting, and longer retention (up to two years)
- to Azure Event Hubs to forward outside of Azure
- to Azure Storage for cheaper, long-term archiving
See [Create diagnostic settings to send platform logs and metrics to different d
## Retention Period
-Activity log events are retained in Azure for **90 days** and then deleted. There is no charge for entries during this time regardless of volume. For more functionality such as longer retention, you should create a diagnostic setting and route the entires to another location based on your needs. See the criteria in the earlier section of this article.
+Activity log events are retained in Azure for **90 days** and then deleted. There's no charge for entries during this time regardless of volume. For more functionality such as longer retention, you should create a diagnostic setting and route the entries to another location based on your needs. See the criteria in the earlier section of this article.
## View the Activity log

You can access the Activity log from most menus in the Azure portal. The menu that you open it from determines its initial filter. If you open it from the **Monitor** menu, then the only filter will be on the subscription. If you open it from a resource's menu, then the filter is set to that resource. You can always change the filter to view all other entries. Select **Add Filter** to add more properties to the filter.
For some events, you can view the Change history, which shows what changes happe
![Change history list for an event](media/activity-log/change-history-event.png)
-If there are any associated changes with the event, you will see a list of changes that you can select. This opens up the **Change history (Preview)** page. On this page, you see the changes to the resource. In the following example, you can see not only that the VM changed sizes, but what the previous VM size was before the change and what it was changed to. To learn more about change history, see [Get resource changes](../../governance/resource-graph/how-to/get-resource-changes.md).
+If there are any changes associated with the event, you'll see a list of changes that you can select. Selecting a change opens the **Change history (Preview)** page, where you see the changes to the resource. In the following example, you can see not only that the VM changed sizes, but also what the VM size was before the change and what it was changed to. To learn more about change history, see [Get resource changes](../../governance/resource-graph/how-to/get-resource-changes.md).
![Change history page showing differences](media/activity-log/change-history-event-details.png)
Following is sample output data from Event Hubs for an Activity log:
```

## Send to Azure storage
-Send the Activity Log to an Azure Storage Account if you want to retain your log data longer than 90 days for audit, static analysis, or backup. If you only must retain your events for 90 days or less you do not need to set up archival to a Storage Account, since Activity Log events are retained in the Azure platform for 90 days.
+Send the Activity Log to an Azure Storage Account if you want to retain your log data longer than 90 days for audit, static analysis, or backup. If you only need to retain your events for 90 days or less, you don't need to set up archival to a Storage Account, since Activity Log events are retained in the Azure platform for 90 days.
When you send the Activity log to Azure, a storage container is created in the Storage Account as soon as an event occurs. The blobs in the container use the following naming convention:
If a log profile already exists, you first must remove the existing log profile
| Category |No |Comma-separated list of event categories that should be collected. Possible values are _Write_, _Delete_, and _Action_. |

### Example script
-Following is a sample PowerShell script to create a log profile that writes the Activity Log to both a Storage Account and Event Hub.
+Following is a sample PowerShell script to create a log profile that writes the Activity Log to both a Storage Account and an Event Hub.
```powershell # Settings needed for the new log profile
To disable the setting, perform the same procedure and select **Disconnect** to
### Data structure changes

The Export activity logs experience sends the same data as the legacy method used to send the Activity log, with some changes to the structure of the *AzureActivity* table.
-The columns in the following table have been deprecated in the updated schema. They still exist in *AzureActivity* but they have no data. The replacements for these columns are not new, but they contain the same data as the deprecated column. They are in a different format, so you might need to modify log queries that use them.
+The columns in the following table have been deprecated in the updated schema. They still exist in *AzureActivity* but they have no data. The replacements for these columns aren't new, but they contain the same data as the deprecated columns. They're in a different format, so you might need to modify log queries that use them.
|Activity Log JSON | Log Analytics column name<br/>*(older deprecated)* | New Log Analytics column name | Notes | |:|:|:|:|
The following columns have been added to *AzureActivity* in the updated schema:
- Claims_d - Properties_d
-## Activity Logs Insights
+## Activity log insights (Preview)
+ Activity log insights let you view information about changes to resources and resource groups in a subscription. The dashboards also present data about which users or services performed activities in the subscription and the activities' status. This article explains how to view Activity log insights in the Azure portal.
-## Activity Log Analytics monitoring solution
-> [!Note]
-> The Azure Log Analytics monitoring solution will be deprecated soon and replaced by a workbook using the updated schema in the Log Analytics workspace. You can still use the solution if you already have it enabled, but it can only be used if you're collecting the Activity log using legacy settings.
+Before using Activity log insights, you'll have to [enable sending logs to your Log Analytics workspace](./diagnostic-settings.md).
+### How does Activity log insights work?
+Activity logs you send to a [Log Analytics workspace](/articles/azure-monitor/logs/log-analytics-workspace-overview.md) are stored in a table called `AzureActivity`.
-### Use the solution
-Monitoring solutions are accessed from the **Monitor** menu in the Azure portal. Select **More** in the **Insights** section to open the **Overview** page with the solution tiles. The **Azure Activity Logs** tile displays a count of the number of **AzureActivity** records in your workspace.
+Activity log insights are a curated [Log Analytics workbook](/articles/azure-monitor/visualize/workbooks-overview.md) with dashboards that visualize the data in the `AzureActivity` table. For example, the dashboards show which administrators deleted, updated, or created resources, and whether the activities failed or succeeded.
-![Azure Activity Logs tile](media/activity-log/azure-activity-logs-tile.png)
+### View Activity log insights - Resource group / Subscription level
-Select the **Azure Activity Logs** tile to open the **Azure Activity Logs** view. The view includes the visualization parts in the table. Each part lists up to 10 items that matches that part's criteria for the specified time range. You can run a log query that returns all matching records by clicking **See all** at the bottom of the part.
+To view Activity log insights on a resource group or a subscription level:
-![Azure Activity Logs dashboard](media/activity-log/activity-log-dash.png)
+1. In the Azure portal, select **Monitor** > **Workbooks**.
+1. Select **Activity Logs Insights** in the **Insights** section.
+ :::image type="content" source="media/activity-log/open-activity-log-insights-workbook.png" lightbox= "media/activity-log/open-activity-log-insights-workbook.png" alt-text="A screenshot showing how to locate and open the Activity logs insights workbook on a scale level.":::
-### Enable the solution for new subscriptions
-> [!NOTE]
->You will soon no longer be able to add the Activity Logs Analytics solution to your subscription with the Azure portal. You can add it using the following procedure with a Resource Manager template.
-
-1. Copy the following json into a file called *ActivityLogTemplate*.json.
-
- ```json
- {
- "$schema": "https://schema.management.azure.com/schemas/2014-04-01-preview/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "workspaceName": {
- "type": "String",
- "defaultValue": "my-workspace",
- "metadata": {
- "description": "Specifies the name of the workspace."
- }
- },
- "location": {
- "type": "String",
- "allowedValues": [
- "east us",
- "west us",
- "australia central",
- "west europe"
- ],
- "defaultValue": "australia central",
- "metadata": {
- "description": "Specifies the location in which to create the workspace."
- }
- }
- },
- "resources": [
- {
- "type": "Microsoft.OperationalInsights/workspaces",
- "name": "[parameters('workspaceName')]",
- "apiVersion": "2015-11-01-preview",
- "location": "[parameters('location')]",
- "properties": {
- "features": {
- "searchVersion": 2
- }
- }
- },
- {
- "type": "Microsoft.OperationsManagement/solutions",
- "apiVersion": "2015-11-01-preview",
- "name": "[concat('AzureActivity(', parameters('workspaceName'),')')]",
- "location": "[parameters('location')]",
- "dependsOn": [
- "[resourceId('microsoft.operationalinsights/workspaces', parameters('workspaceName'))]"
- ],
- "plan": {
- "name": "[concat('AzureActivity(', parameters('workspaceName'),')')]",
- "promotionCode": "",
- "product": "OMSGallery/AzureActivity",
- "publisher": "Microsoft"
- },
- "properties": {
- "workspaceResourceId": "[resourceId('microsoft.operationalinsights/workspaces', parameters('workspaceName'))]",
- "containedResources": [
- "[concat(resourceId('microsoft.operationalinsights/workspaces', parameters('workspaceName')), '/views/AzureActivity(',parameters('workspaceName'))]"
- ]
- }
- },
- {
- "type": "Microsoft.OperationalInsights/workspaces/datasources",
- "kind": "AzureActivityLog",
- "name": "[concat(parameters('workspaceName'), '/', subscription().subscriptionId)]",
- "apiVersion": "2015-11-01-preview",
- "location": "[parameters('location')]",
- "dependsOn": [
- "[parameters('WorkspaceName')]"
- ],
- "properties": {
- "linkedResourceId": "[concat(subscription().Id, '/providers/microsoft.insights/eventTypes/management')]"
- }
- }
- ]
- }
- ```
+1. At the top of the **Activity Logs Insights** page, select:
+ 1. One or more subscriptions from the **Subscriptions** dropdown.
+ 1. Resources and resource groups from the **CurrentResource** dropdown.
+ 1. A time range for which to view data from the **TimeRange** dropdown.
+### View Activity log insights on any Azure resource
-2. Deploy the template using the following PowerShell commands:
+> [!NOTE]
+> Currently, Application Insights resources are not supported for this workbook.
- ```PowerShell
- Connect-AzAccount
- Select-AzSubscription <SubscriptionName>
- New-AzResourceGroupDeployment -Name activitysolution -ResourceGroupName <ResourceGroup> -TemplateFile <Path to template file>
- ```
+To view Activity log insights on a resource level:
+1. In the Azure portal, go to your resource and select **Workbooks**.
+1. Select **Activity Logs Insights** in the **Activity Logs Insights** section.
+ :::image type="content" source="media/activity-log/activity-log-resource-level.png" lightbox="media/activity-log/activity-log-resource-level.png" alt-text="A screenshot showing how to locate and open the Activity logs insights workbook on a resource level.":::
+1. At the top of the **Activity Logs Insights** page, select:
+
+ 1. A time range for which to view data from the **TimeRange** dropdown.
+ * **Azure Activity Log Entries** shows the count of Activity log records in each [activity log category](activity-log-schema.md#categories).
+
+ :::image type="content" source="media/activity-log/activity-logs-insights-category-value.png" lightbox="media/activity-log/activity-logs-insights-category-value.png" alt-text="Screenshot of Azure Activity Logs by Category Value":::
+
+ * **Activity Logs by Status** shows the count of Activity log records in each status.
+
+ :::image type="content" source="media/activity-log/activity-logs-insights-status.png" lightbox="media/activity-log/activity-logs-insights-status.png" alt-text="Screenshot of Azure Activity Logs by Status":::
+
+ * At the subscription and resource group level, **Activity Logs by Resource** and **Activity Logs by Resource Provider** show the count of Activity log records for each resource and resource provider.
+
+ :::image type="content" source="media/activity-log/activity-logs-insights-resource.png" lightbox="media/activity-log/activity-logs-insights-resource.png" alt-text="Screenshot of Azure Activity Logs by Resource":::
+
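The workbook charts described above are built on the [AzureActivity](/azure/azure-monitor/reference/tables/azureactivity) table exported to the Log Analytics workspace. A query along the following lines (a sketch, not the workbook's exact query) approximates the **Azure Activity Log Entries** count-by-category view:

```kusto
AzureActivity
| summarize Count = count() by CategoryValue
| order by Count desc
```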
## Next steps * [Read an overview of platform logs](./platform-logs-overview.md) * [Review Activity log event schema](activity-log-schema.md)
azure-monitor Activity Logs Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/activity-logs-insights.md
Activity logs insights let you view information about changes to resources and r
## Enable Activity log insights The only requirement to enable Activity log insights is to [configure the Activity log to export to a Log Analytics workspace](activity-log.md#send-to-log-analytics-workspace). Pre-built [workbooks](../visualize/workbooks-overview.md) curate this data, which is stored in the [AzureActivity](/azure/azure-monitor/reference/tables/azureactivity) table in the workspace. ## View Activity logs insights - Resource group / Subscription level
azure-monitor Logs Dedicated Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-dedicated-clusters.md
Dedicated clusters are managed with an Azure resource that represents Azure Moni
Once a cluster is created, workspaces can be linked to it, and new ingested data to them is stored on the cluster. Workspaces can be unlinked from a cluster at any time and new data then stored on shared Log Analytics clusters. The link and unlink operation doesn't affect your queries and access to data before, and after the operation. The Cluster and workspaces must be in the same region.
-All operations on the cluster level require the `Microsoft.OperationalInsights/clusters/write` action permission on the cluster. This permission could be granted via the Owner or Contributor that contains the `*/write` action or via the Log Analytics Contributor role that contains the `Microsoft.OperationalInsights/*` action. For more information on Log Analytics permissions, see [Manage access to log data and workspaces in Azure Monitor](./manage-access.md).
+Operations on the cluster level require the `Microsoft.OperationalInsights/clusters/write` action permission. Linking workspaces to a cluster requires both the `Microsoft.OperationalInsights/clusters/write` and `Microsoft.OperationalInsights/workspaces/write` actions. These permissions can be granted through the Owner or Contributor roles, which include the `*/write` action, or through the Log Analytics Contributor role, which includes the `Microsoft.OperationalInsights/*` action. For more information on Log Analytics permissions, see [Manage access to log data and workspaces in Azure Monitor](./manage-access.md).
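For example, granting the Log Analytics Contributor role at the cluster scope can be done with the Azure CLI (a sketch; the assignee and resource ID segments below are placeholders you'd replace with your own values):

```azurecli
az role assignment create \
  --assignee "<user-or-service-principal-id>" \
  --role "Log Analytics Contributor" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.OperationalInsights/clusters/<cluster-name>"
```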
## Cluster pricing model
azure-monitor Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/overview.md
description: Overview of Microsoft services and functionalities that contribute
Previously updated : 11/01/2021 Last updated : 04/27/2022
Just a few examples of what you can do with Azure Monitor include:
- Create visualizations with Azure [dashboards](visualize/tutorial-logs-dashboards.md) and [workbooks](visualize/workbooks-overview.md). - Collect data from [monitored resources](./monitor-reference.md) using [Azure Monitor Metrics](./essentials/data-platform-metrics.md).
-> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE4qXeL]
-- [!INCLUDE [azure-lighthouse-supported-service](../../includes/azure-lighthouse-supported-service.md)] ## Overview The following diagram gives a high-level view of Azure Monitor. At the center of the diagram are the data stores for metrics and logs, which are the two fundamental types of data used by Azure Monitor. On the left are the [sources of monitoring data](agents/data-sources.md) that populate these [data stores](data-platform.md). On the right are the different functions that Azure Monitor performs with this collected data. This includes such actions as analysis, alerting, and streaming to external systems.
-![Azure Monitor overview](media/overview/overview.png)
+
+The video below uses an earlier version of the diagram above, but its explanations are still relevant.
+> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE4qXeL]
## Monitoring data platform
azure-resource-manager Bicep Functions Date https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-date.md
description: Describes the functions to use in a Bicep file to work with dates.
Previously updated : 09/30/2021 Last updated : 05/02/2022 # Date functions for Bicep
resource scheduler 'Microsoft.Automation/automationAccounts/schedules@2015-10-31
} ```
+## dateTimeFromEpoch
+
+`dateTimeFromEpoch(epochTime)`
+
+Converts an epoch time integer value to an ISO 8601 datetime.
+
+Namespace: [sys](bicep-functions.md#namespaces-for-functions).
+
+### Parameters
+
+| Parameter | Required | Type | Description |
+| :--- | :--- | :--- | :--- |
+| epochTime | Yes | int | The epoch time to convert to a datetime string. |
+
+### Return value
+
+An ISO 8601 datetime string.
+
+### Remarks
+
+This function requires **Bicep version 0.5.6 or later**.
+
+### Example
+
+The following example shows output values for the epoch time functions.
+
+```bicep
+param convertedEpoch int = dateTimeToEpoch(dateTimeAdd(utcNow(), 'P1Y'))
+
+var convertedDatetime = dateTimeFromEpoch(convertedEpoch)
+
+output epochValue int = convertedEpoch
+output datetimeValue string = convertedDatetime
+```
+
+The output is:
+
+| Name | Type | Value |
+| - | - | -- |
+| datetimeValue | String | 2023-05-02T15:16:13Z |
+| epochValue | Int | 1683040573 |
+
+## dateTimeToEpoch
+
+`dateTimeToEpoch(dateTime)`
+
+Converts an ISO 8601 datetime string to an epoch time integer value.
+
+Namespace: [sys](bicep-functions.md#namespaces-for-functions).
+
+### Parameters
+
+| Parameter | Required | Type | Description |
+| :--- | :--- | :--- | :--- |
+| dateTime | Yes | string | The datetime string to convert to an epoch time. |
+
+### Return value
+
+An integer that represents the number of seconds from midnight on January 1, 1970.
+
+### Remarks
+
+This function requires **Bicep version 0.5.6 or later**.
+
+### Examples
+
+The following example shows output values for the epoch time functions.
+
+```bicep
+param convertedEpoch int = dateTimeToEpoch(dateTimeAdd(utcNow(), 'P1Y'))
+
+var convertedDatetime = dateTimeFromEpoch(convertedEpoch)
+
+output epochValue int = convertedEpoch
+output datetimeValue string = convertedDatetime
+```
+
+The output is:
+
+| Name | Type | Value |
+| - | - | -- |
+| datetimeValue | String | 2023-05-02T15:16:13Z |
+| epochValue | Int | 1683040573 |
+
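As a cross-check on the output table above, the same conversion can be reproduced with ordinary epoch arithmetic. Here's a minimal Python sketch (illustrative only, not part of Bicep) of what `dateTimeToEpoch` and `dateTimeFromEpoch` compute:

```python
from datetime import datetime, timezone

def date_time_to_epoch(iso_string: str) -> int:
    # Equivalent of Bicep's dateTimeToEpoch: ISO 8601 string -> seconds since 1970-01-01T00:00:00Z.
    return int(datetime.fromisoformat(iso_string.replace("Z", "+00:00")).timestamp())

def date_time_from_epoch(epoch: int) -> str:
    # Equivalent of Bicep's dateTimeFromEpoch: seconds since the epoch -> ISO 8601 UTC string.
    return datetime.fromtimestamp(epoch, tz=timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")

# Round-trip the values from the output table above.
print(date_time_to_epoch("2023-05-02T15:16:13Z"))  # 1683040573
print(date_time_from_epoch(1683040573))            # 2023-05-02T15:16:13Z
```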
+The next example uses the epoch time value to set the expiration for a key in a key vault.
++ ## utcNow `utcNow(format)`
azure-resource-manager Bicep Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions.md
Title: Bicep functions description: Describes the functions to use in a Bicep file to retrieve values, work with strings and numerics, and retrieve deployment information. Previously updated : 04/12/2022 Last updated : 05/02/2022 # Bicep functions
The following functions are available for working with arrays. All of these func
The following functions are available for working with dates. All of these functions are in the `sys` namespace. * [dateTimeAdd](./bicep-functions-date.md#datetimeadd)
+* [dateTimeFromEpoch](./bicep-functions-date.md#datetimefromepoch)
+* [dateTimeToEpoch](./bicep-functions-date.md#datetimetoepoch)
* [utcNow](./bicep-functions-date.md#utcnow) ## Deployment value functions
azure-resource-manager Move Support Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-support-resources.md
Jump to a resource provider namespace:
> | privateendpointredirectmaps | No | No | No | > | privateendpoints | Yes | Yes | Yes | > | privatelinkservices | No | No | No |
-> | publicipaddresses | Yes - Basic SKU<br>Yes - Standard SKU | Yes - Basic SKU<br>No - Standard SKU | Yes<br/><br/> Use [Azure Resource Mover](../../resource-mover/tutorial-move-region-virtual-machines.md) to move public IP address configurations (IP addresses are not retained). |
+> | publicipaddresses | Yes | Yes | Yes<br/><br/> Use [Azure Resource Mover](../../resource-mover/tutorial-move-region-virtual-machines.md) to move public IP address configurations (IP addresses are not retained). |
> | publicipprefixes | Yes | Yes | No | > | routefilters | No | No | No | > | routetables | Yes | Yes | No |
backup Backup Azure Vms Enhanced Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-vms-enhanced-policy.md
Title: Back up Azure VMs with Enhanced policy (in preview)
+ Title: Back up Azure VMs with Enhanced policy
description: Learn how to configure Enhanced policy to back up VMs. Previously updated : 04/12/2022 Last updated : 04/30/2022
-# Back up an Azure VM using Enhanced policy (in preview)
+# Back up an Azure VM using Enhanced policy
-This article explains how to use _Enhanced policy_ to configure _Multiple Backups Per Day_ and back up [Trusted Launch VMs](../virtual-machines/trusted-launch.md) with Azure Backup service. _Enhanced policy_ for VM backup is in preview.
+This article explains how to use _Enhanced policy_ to configure _Multiple Backups Per Day_ and back up [Trusted Launch VMs](../virtual-machines/trusted-launch.md) with Azure Backup service.
Azure Backup now supports _Enhanced policy_ that's needed to support new Azure offerings. For example, [Trusted Launch VM](../virtual-machines/trusted-launch.md) is supported with _Enhanced policy_ only.
backup Backup Support Matrix Iaas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-support-matrix-iaas.md
Title: Support matrix for Azure VM backup description: Provides a summary of support settings and limitations when backing up Azure VMs with the Azure Backup service. Previously updated : 04/12/2022 Last updated : 04/30/2022
Backup of Azure VMs with locks | Unsupported for unmanaged VMs. <br><br> Support
Windows Storage Spaces configuration of standalone Azure VMs | Supported [Azure Virtual Machine Scale Sets](../virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md#scale-sets-with-flexible-orchestration) | Supported for flexible orchestration model to back up and restore Single Azure VM. Restore with Managed identities | Yes, supported for managed Azure VMs, and not supported for classic and unmanaged Azure VMs. <br><br> Cross Region Restore isn't supported with managed identities. <br><br> Currently, this is available in all Azure public and national cloud regions. <br><br> [Learn more](backup-azure-arm-restore-vms.md#restore-vms-with-managed-identities).
-<a name="tvm-backup">Trusted Launch VM</a> | Backup supported (in preview) <br><br> Backup of Trusted Launch VM is supported through [Enhanced policy](backup-azure-vms-enhanced-policy.md). You can enable backup through [Recovery Services vault](./backup-azure-arm-vms-prepare.md), [VM Manage blade](./backup-during-vm-creation.md#start-a-backup-after-creating-the-vm), and [Create VM blade](backup-during-vm-creation.md#create-a-vm-with-backup-configured). <br><br> **Feature details** <br> <ul><li> Backup is supported in all regions where Trusted Launch VM is available. </li><li> Configurations of Backup, Alerts, and Monitoring for Trusted Launch VM are currently not supported through Backup center. </li><li> Migration of an existing [Generation 2](../virtual-machines/generation-2.md) VM (protected with Azure Backup) to Trusted Launch VM is currently not supported. Learn about how to [create a Trusted Launch VM](../virtual-machines/trusted-launch-portal.md?tabs=portal#deploy-a-trusted-vm). </li></ul>
+<a name="tvm-backup">Trusted Launch VM</a> | Backup supported. <br><br> Backup of Trusted Launch VM is supported through [Enhanced policy](backup-azure-vms-enhanced-policy.md). You can enable backup through [Recovery Services vault](./backup-azure-arm-vms-prepare.md), [VM Manage blade](./backup-during-vm-creation.md#start-a-backup-after-creating-the-vm), and [Create VM blade](backup-during-vm-creation.md#create-a-vm-with-backup-configured). <br><br> **Feature details** <br> <ul><li> Backup is supported in all regions where Trusted Launch VM is available. </li><li> Configurations of Backup, Alerts, and Monitoring for Trusted Launch VM are currently not supported through Backup center. </li><li> Migration of an existing [Generation 2](../virtual-machines/generation-2.md) VM (protected with Azure Backup) to Trusted Launch VM is currently not supported. Learn about how to [create a Trusted Launch VM](../virtual-machines/trusted-launch-portal.md?tabs=portal#deploy-a-trusted-vm). </li></ul>
## VM storage support
chaos-studio Chaos Studio Region Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-region-availability.md
+
+ Title: Regional availability of Chaos Studio
+description: Understand how Azure Chaos Studio makes chaos experiments and chaos targets available in Azure regions.
++++ Last updated : 4/29/2022+++
+# Regional availability of Azure Chaos Studio
+
+This article describes the regional availability model for Azure Chaos Studio, specifically the difference between a region where experiments can be deployed and one where resources can be targeted. It also provides an overview of Chaos Studio's high availability model.
+
+Chaos Studio is a regional Azure service, meaning the service is deployed and run within an Azure region. However, Chaos Studio has two regional components: the region where an experiment is deployed and the region where a resource is targeted. A chaos experiment can target a resource in a different region than the experiment (cross-region targeting). To enable chaos experimentation on targets in more regions, the set of regions in which Chaos Studio supports **resource targeting** is a superset of the regions in which you can create and manage **experiments**. You can [view the list of regions where Chaos Studio and resource targeting are available](https://azure.microsoft.com/global-infrastructure/services/?products=chaos-studio).
+
+## Regional availability of chaos experiments
+A [chaos experiment](chaos-studio-chaos-experiments.md) is an Azure resource that describes the faults that should be run and the resources those faults should be run against. An experiment is deployed to a single region and the following information and operations stay within that region:
+* The experiment definition, which includes the hierarchy of steps, branches, and actions, the faults and parameters defined, and the resource IDs of target resources. Open-ended properties in the experiment resource JSON including the step name, branch name, and any fault parameters are stored in region and treated as system metadata.
+* The experiment execution each time an experiment is run, or the activity that orchestrates the execution of steps, branches, and actions.
+* The experiment history, which includes details such as the step, branch, and action timestamps, status, IDs, and any error messages for each historical experiment run. This data is treated as system metadata.
+
+Any experiment data stored in Chaos Studio is deleted when an experiment is deleted.
+
+## Regional availability of chaos targets (resource targeting)
+A [chaos target](chaos-studio-targets-capabilities.md) enables Chaos Studio to interact with an Azure resource. Faults in a chaos experiment run against a chaos target, but the target resource can be in a different region than the experiment. A resource can only be onboarded as a chaos target if Chaos Studio resource targeting is available in that region. The list of regions where resource targeting is available is a superset of the regions where experiments can be created. A chaos target is deployed to the same region as the target resource and the following information and operations stay in that region:
+* The target definition, which includes basic metadata about the target. Agent-based targets have one user-configurable property: the [identity that will be used to connect the agent to the chaos agent service](chaos-studio-permissions-security.md#agent-authentication).
+* The capability definitions, which include basic metadata about the capabilities enabled on a target.
+* The action execution. When an experiment runs a fault, the fault itself (for example, shutting down a VM) happens within the target region.
+
+Any target or capability metadata is deleted when a target is deleted.
+
+## High availability with Chaos Studio
+
+Chaos Studio is a regional, zone-redundant service (in regions that support availability zones). In the case of an availability zone outage, any chaos operation may fail, but experiment metadata, history, and details should remain available and the service should not see a full outage.
+
+## Next steps
+Now that you understand the region availability model for Chaos Studio, you are ready to:
+- [Review the availability of Chaos Studio per region](https://azure.microsoft.com/global-infrastructure/services/?products=chaos-studio)
+- [Create and run your first experiment](chaos-studio-tutorial-service-direct-portal.md)
cognitive-services Captioning Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/captioning-concepts.md
Captioning can accompany real time or pre-recorded speech. Whether you're showin
For real time captioning, use a microphone or audio input stream instead of file input. For examples of how to recognize speech from a microphone, see the [Speech to text quickstart](get-started-speech-to-text.md) and [How to recognize speech](how-to-recognize-speech.md) documentation. For more information about streaming, see [How to use the audio input stream](how-to-use-audio-input-streams.md).
-For captioning of a prerecoding, send file input to the Speech service. For more information, see [How to use compressed input audio](how-to-use-codec-compressed-audio-input-streams.md).
+For captioning of a prerecording, send file input to the Speech service. For more information, see [How to use compressed input audio](how-to-use-codec-compressed-audio-input-streams.md).
## Caption and speech synchronization
There are some situations where [training a custom model](custom-speech-overview
## Next steps * [Get started with speech to text](get-started-speech-to-text.md)
-* [Get speech recognition results](get-speech-recognition-results.md)
+* [Get speech recognition results](get-speech-recognition-results.md)
cognitive-services Create Sas Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/create-sas-tokens.md
Last updated 04/26/2022
# Create SAS tokens for your storage containers
-In this article, you'll learn how to create shared access signature (SAS) tokens using the Azure Storage Explorer or the Azure portal. An SAS token provides secure, delegated access to resources in your Azure storage account.
+In this article, you'll learn how to create shared access signature (SAS) tokens using the Azure Storage Explorer or the Azure portal. A SAS token provides secure, delegated access to resources in your Azure storage account.
## Create your SAS tokens with Azure Storage Explorer
In this article, you'll learn how to create shared access signature (SAS) tokens
1. Select **Get Shared Access Signature...** from options menu. 1. In the **Shared Access Signature** window, make the following selections: * Select your **Access policy** (the default is none).
- * Specify the signed key **Start** and **Expiry** date and time. A short lifespan is recommended because, once generated, an SAS can't be revoked.
+ * Specify the signed key **Start** and **Expiry** date and time. A short lifespan is recommended because, once generated, a SAS can't be revoked.
* Select the **Time zone** for the Start and Expiry date and time (default is Local). * Define your container **Permissions** by checking and/or clearing the appropriate check box. * Review and select **Create**. 1. A new window will appear with the **Container** name, **URI**, and **Query string** for your container. 1. **Copy and paste the container, URI, and query string values in a secure location. They'll only be displayed once and can't be retrieved once the window is closed.**
-1. To construct an SAS URL, append the SAS token (URI) to the URL for a storage service.
+1. To construct a SAS URL, append the SAS token (URI) to the URL for a storage service.
### [SAS tokens for blobs](#tab/blobs)
In this article, you'll learn how to create shared access signature (SAS) tokens
1. Select **Get Shared Access Signature...** from options menu. 1. In the **Shared Access Signature** window, make the following selections: * Select your **Access policy** (the default is none).
- * Specify the signed key **Start** and **Expiry** date and time. A short lifespan is recommended because, once generated, an SAS can't be revoked.
+ * Specify the signed key **Start** and **Expiry** date and time. A short lifespan is recommended because, once generated, a SAS can't be revoked.
* Select the **Time zone** for the Start and Expiry date and time (default is Local). * Define your container **Permissions** by checking and/or clearing the appropriate check box. * Review and select **Create**. 1. A new window will appear with the **Blob** name, **URI**, and **Query string** for your blob. 1. **Copy and paste the blob, URI, and query string values in a secure location. They will only be displayed once and cannot be retrieved once the window is closed.**
-1. To construct an SAS URL, append the SAS token (URI) to the URL for a storage service.
+1. To construct a SAS URL, append the SAS token (URI) to the URL for a storage service.
Go to the [Azure portal](https://portal.azure.com/#home) and navigate as follows
1. **Copy and paste the Blob SAS token and URL values in a secure location. They'll only be displayed once and cannot be retrieved once the window is closed.**
-1. To construct an SAS URL, append the SAS token (URI) to the URL for a storage service.
+1. To construct a SAS URL, append the SAS token (URI) to the URL for a storage service.
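The final step is plain string concatenation: the SAS token returned by Storage Explorer or the portal is a query string, so appending it to the resource URL yields the SAS URL. A minimal sketch (the account, container, and token values below are placeholders, not real credentials):

```python
def build_sas_url(resource_url: str, sas_token: str) -> str:
    """Append a SAS token (the query string from Storage Explorer or the portal) to a resource URL."""
    # The copied token may or may not include the leading '?'; normalize it.
    token = sas_token.lstrip("?")
    return f"{resource_url}?{token}"

# Hypothetical values for illustration only.
url = build_sas_url(
    "https://myaccount.blob.core.windows.net/mycontainer",
    "?sv=2021-08-06&ss=b&sp=rl&sig=REDACTED",
)
print(url)  # https://myaccount.blob.core.windows.net/mycontainer?sv=2021-08-06&ss=b&sp=rl&sig=REDACTED
```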
## Learn more
confidential-computing Confidential Vm Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/confidential-vm-overview.md
Title: DCasv5 and ECasv5 series confidential VMs (preview) description: Learn about Azure DCasv5, DCadsv5, ECasv5, and ECadsv5 series confidential virtual machines (confidential VMs). These series are for tenants with high security and confidentiality requirements.-++ Last updated 3/27/2022- # DCasv5 and ECasv5 series confidential VMs (preview)
confidential-computing Virtual Machine Solutions Amd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/virtual-machine-solutions-amd.md
Title: Azure Confidential virtual machine options on AMD processors (preview) description: Azure Confidential Computing offers multiple options for confidential virtual machines that run on AMD processors backed by SEV-SNP technology.-++ Last updated 11/15/2021- # Azure Confidential VM options on AMD (preview)
connectors Connectors Create Api Azureblobstorage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-azureblobstorage.md
To add a Blob action to a logic app workflow in multi-tenant Azure Logic Apps, f
To add an Azure Blob action to a logic app workflow in single-tenant Azure Logic Apps, follow these steps:
-1. In the [Azure portal](https://portal.azure.com), pen your workflow in the designer.
+1. In the [Azure portal](https://portal.azure.com), open your workflow in the designer.
1. If your workflow is blank, add any trigger that you want.
connectors Connectors Create Api Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-cosmos-db.md
Previously updated : 12/10/2021 Last updated : 05/02/2022 tags: connectors
From your workflow in Azure Logic Apps, you can connect to Azure Cosmos DB and w
You can connect to Azure Cosmos DB from both **Logic App (Consumption)** and **Logic App (Standard)** resource types by using the [*managed connector*](managed.md) operations. For **Logic App (Standard)**, Azure Cosmos DB also provides [*built-in*](built-in.md) operations, which are currently in preview and offer different functionality, better performance, and higher throughput. For example, if you're working with the **Logic App (Standard)** resource type, you can use the built-in trigger to respond to changes in an Azure Cosmos DB container. You can combine Azure Cosmos DB operations with other actions and triggers in your logic app workflows to enable scenarios such as event sourcing and general data processing.
-> [!NOTE]
-> Currently, only stateful workflows in a **Logic App (Standard)** resource can use both the managed
-> connector operations and built-in operations. Stateless workflows can use only built-in operations.
+## Limitations
+
+- Currently, only stateful workflows in a **Logic App (Standard)** resource can use both the managed connector operations and built-in operations. Stateless workflows can use only built-in operations.
+
+- The Azure Cosmos DB connector supports only Azure Cosmos DB accounts created with the [Core (SQL) API](../cosmos-db/choose-api.md#coresql-api).
## Prerequisites
cosmos-db Manage Data Dotnet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/manage-data-dotnet-core.md
ms.devlang: csharp Previously updated : 10/01/2020 Last updated : 05/02/2022
Azure Cosmos DB is Microsoft's globally distributed multi-model database service
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] Alternatively, you can [Try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) without an Azure subscription, free of charge and commitments. In addition, you need:
-* If you don't already have Visual Studio 2019 installed, you can download and use the **free** [Visual Studio 2019 Community Edition](https://www.visualstudio.com/downloads/). Make sure that you enable **Azure development** during the Visual Studio setup.
+* Latest [!INCLUDE [cosmos-db-visual-studio](../includes/cosmos-db-visual-studio.md)]
* Install [Git](https://www.git-scm.com/) to clone the example.
-<a id="create-account"></a>
## Create a database account [!INCLUDE [cosmos-db-create-dbaccount-cassandra](../includes/cosmos-db-create-dbaccount-cassandra.md)]
In addition, you need:
## Clone the sample application
-Now let's switch to working with code. Let's clone a Cassandra API app from GitHub, set the connection string, and run it. You'll see how easy it is to work with data programmatically.
+Now let's switch to working with code. Let's clone a Cassandra API app from GitHub, set the connection string, and run it. You'll see how easily you can work with data programmatically.
1. Open a command prompt. Create a new folder named `git-samples`. Then, close the command prompt.
Now go back to the Azure portal to get your connection string information and co
## Next steps
-In this quickstart, you've learned how to create an Azure Cosmos DB account, create a container using the Data Explorer, and run a web app. You can now import additional data to your Cosmos DB account.
+In this quickstart, you've learned how to create an Azure Cosmos DB account, create a container using the Data Explorer, and run a web app. You can now import other data to your Cosmos DB account.
> [!div class="nextstepaction"] > [Import Cassandra data into Azure Cosmos DB](migrate-data.md)
cosmos-db Manage Data Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/manage-data-dotnet.md
ms.devlang: csharp Previously updated : 10/01/2020 Last updated : 05/02/2022
Azure Cosmos DB is Microsoft's globally distributed multi-model database service
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] Alternatively, you can [Try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) without an Azure subscription, free of charge and commitments. In addition, you need:
-* If you don't already have Visual Studio 2019 installed, you can download and use the **free** [Visual Studio 2019 Community Edition](https://www.visualstudio.com/downloads/). Make sure that you enable **Azure development** during the Visual Studio setup.
+* Latest [!INCLUDE [cosmos-db-visual-studio](../includes/cosmos-db-visual-studio.md)]
* Install [Git](https://www.git-scm.com/) to clone the example.
-<a id="create-account"></a>
## Create a database account [!INCLUDE [cosmos-db-create-dbaccount-cassandra](../includes/cosmos-db-create-dbaccount-cassandra.md)]
In addition, you need:
## Clone the sample application
-Now let's switch to working with code. Let's clone a Cassandra API app from GitHub, set the connection string, and run it. You'll see how easy it is to work with data programmatically.
+Now let's switch to working with code. Let's clone a Cassandra API app from GitHub, set the connection string, and run it. You'll see how easily you can work with data programmatically.
1. Open a command prompt. Create a new folder named `git-samples`. Then, close the command prompt.
Now go back to the Azure portal to get your connection string information and co
## Next steps
-In this quickstart, you've learned how to create an Azure Cosmos DB account, create a container using the Data Explorer, and run a web app. You can now import additional data to your Cosmos DB account.
+In this quickstart, you've learned how to create an Azure Cosmos DB account, create a container using the Data Explorer, and run a web app. You can now import other data to your Cosmos DB account.
> [!div class="nextstepaction"]
> [Import Cassandra data into Azure Cosmos DB](migrate-data.md)
cosmos-db Bulk Executor Graph Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/bulk-executor-graph-dotnet.md
description: Learn how to use the bulk executor library to massively import grap
Previously updated : 05/28/2019 Last updated : 05/02/2020 ms.devlang: csharp
The [bulk executor library](/dotnet/api/microsoft.azure.cosmosdb.bulkexecutor.gr
The following process outlines how data migration can be used for a Gremlin API container:
1. Retrieve records from the data source.
-2. Construct `GremlinVertex` and `GremlinEdge` objects from the obtained records and add them into an `IEnumerable` data structure. In this part of the application the logic to detect and add relationships should be implemented, in case the data source is not a graph database.
+2. Construct `GremlinVertex` and `GremlinEdge` objects from the obtained records and add them into an `IEnumerable` data structure. In this part of the application the logic to detect and add relationships should be implemented, in case the data source isn't a graph database.
3. Use the [Graph BulkImportAsync method](/dotnet/api/microsoft.azure.cosmosdb.bulkexecutor.graph.graphbulkexecutor.bulkimportasync) to insert the graph objects into the collection. This mechanism will improve the data migration efficiency as compared to using a Gremlin client. This improvement is experienced because inserting data with Gremlin will require the application send a query at a time that will need to be validated, evaluated, and then executed to create the data. The bulk executor library will handle the validation in the application and send multiple graph objects at a time for each network request.
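Assuming the `Microsoft.Azure.CosmosDB.BulkExecutor` NuGet package is referenced, the three-step flow above can be sketched roughly as follows; the IDs, labels, and the `graphBulkExecutor` variable are illustrative stand-ins, not taken from the sample:

```csharp
using System.Collections.Generic;
using Microsoft.Azure.CosmosDB.BulkExecutor.Graph.Element;

// 1. Records retrieved from the data source end up in one IEnumerable.
var graphObjects = new List<object>();

// 2. Construct GremlinVertex and GremlinEdge objects from the records.
var vertex = new GremlinVertex("thomas", "person");
vertex.AddProperty("partitionKey", "thomas"); // the partition key property is required
graphObjects.Add(vertex);

var edge = new GremlinEdge(
    "e1", "knows",
    "thomas", "mary",   // out/in vertex IDs
    "person", "person", // out/in vertex labels
    "thomas", "mary");  // out/in vertex partition keys
graphObjects.Add(edge);

// 3. Hand the whole collection to the library in one call instead of
//    issuing one Gremlin query per element:
// await graphBulkExecutor.BulkImportAsync(graphObjects, enableUpsert: true);
```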
catch (Exception e)
} ```
-For more information on the parameters of the bulk executor library, refer to the [BulkImportData to Azure Cosmos DB topic](../bulk-executor-dot-net.md#bulk-import-data-to-an-azure-cosmos-account).
+For more information about the parameters of the bulk executor library, see the [bulk import data to Azure Cosmos DB](../bulk-executor-dot-net.md#bulk-import-data-to-an-azure-cosmos-account) section.
-The payload needs to be instantiated into `GremlinVertex` and `GremlinEdge` objects. Here is how these objects can be created:
+The payload needs to be instantiated into `GremlinVertex` and `GremlinEdge` objects. Here's how these objects can be created:
**Vertices**: ```csharp
e.AddProperty("customProperty", "value");
## Sample application

### Prerequisites
-* Visual Studio 2019 with the Azure development workload. You can get started with the [Visual Studio 2019 Community Edition](https://visualstudio.microsoft.com/downloads/) for free.
+* Latest [!INCLUDE [cosmos-db-visual-studio](../includes/cosmos-db-visual-studio.md)]
* An Azure subscription. You can create [a free Azure account here](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=cosmos-db). Alternatively, you can create a Cosmos database account with [Try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) without an Azure subscription.
* An Azure Cosmos DB Gremlin API database with an **unlimited collection**. This guide shows how to get started with [Azure Cosmos DB Gremlin API in .NET](./create-graph-dotnet.md).
* Git. For more information check out the [Git Downloads page](https://git-scm.com/downloads).
In the `App.config` file, the following are the configuration values that can be
Setting|Description |
-`EndPointUrl`|This is **your .NET SDK endpoint** found in the Overview blade of your Azure Cosmos DB Gremlin API database account. This has the format of `https://your-graph-database-account.documents.azure.com:443/`
+`EndPointUrl`|This is **your .NET SDK endpoint** found in the Overview page of your Azure Cosmos DB Gremlin API database account. This has the format of `https://your-graph-database-account.documents.azure.com:443/`
`AuthorizationKey`|This is the Primary or Secondary key listed under your Azure Cosmos DB account. Learn more about [Securing Access to Azure Cosmos DB data](../secure-access-to-data.md#primary-keys)
-`DatabaseName`, `CollectionName`|These are the **target database and collection names**. When `ShouldCleanupOnStart` is set to `true` these values, along with `CollectionThroughput`, will be used to drop them and create a new database and collection. Similarly, if `ShouldCleanupOnFinish` is set to `true`, they will be used to delete the database as soon as the ingestion is over. Note that the target collection must be **an unlimited collection**.
+`DatabaseName`, `CollectionName`|These are the **target database and collection names**. When `ShouldCleanupOnStart` is set to `true` these values, along with `CollectionThroughput`, will be used to drop them and create a new database and collection. Similarly, if `ShouldCleanupOnFinish` is set to `true`, they'll be used to delete the database as soon as the ingestion is over. The target collection must be **an unlimited collection**.
`CollectionThroughput`|This is used to create a new collection if the `ShouldCleanupOnStart` option is set to `true`.
`ShouldCleanupOnStart`|This will drop the database account and collections before the program is run, and then create new ones with the `DatabaseName`, `CollectionName` and `CollectionThroughput` values.
`ShouldCleanupOnFinish`|This will drop the database account and collections with the specified `DatabaseName` and `CollectionName` after the program is run.
Setting|Description
### Run the sample application
-1. Add your specific database configuration parameters in `App.config`. This will be used to create a DocumentClient instance. If the database and container have not been created yet, they will be created automatically.
-2. Run the application. This will call `BulkImportAsync` two times, one to import Vertices and one to import Edges. If any of the objects generates an error when they're inserted, they will be added to either `.\BadVertices.txt` or `.\BadEdges.txt`.
+1. Add your specific database configuration parameters in `App.config`. This will be used to create a DocumentClient instance. If the database and container haven't been created yet, they'll be created automatically.
+2. Run the application. This will call `BulkImportAsync` two times, one to import Vertices and one to import Edges. If any of the objects generates an error when they're inserted, they'll be added to either `.\BadVertices.txt` or `.\BadEdges.txt`.
3. Evaluate the results by querying the graph database. If the `ShouldCleanupOnFinish` option is set to true, then the database will automatically be deleted.

## Next steps
cosmos-db Create Graph Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/create-graph-dotnet.md
ms.devlang: csharp Previously updated : 02/21/2020 Last updated : 05/02/2020 # Quickstart: Build a .NET Framework or Core application using the Azure Cosmos DB Gremlin API account
This quickstart demonstrates how to create an Azure Cosmos DB [Gremlin API](grap
## Prerequisites
-If you don't already have Visual Studio 2019 installed, you can download and use the **free** [Visual Studio 2019 Community Edition](https://www.visualstudio.com/downloads/). Make sure that you enable **Azure development** during the Visual Studio setup.
+Latest [!INCLUDE [cosmos-db-visual-studio](../includes/cosmos-db-visual-studio.md)]
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
If you don't already have Visual Studio 2019 installed, you can download and use
## Clone the sample application
-Now let's clone a Gremlin API app from GitHub, set the connection string, and run it. You'll see how easy it is to work with data programmatically.
+Now let's clone a Gremlin API app from GitHub, set the connection string, and run it. You'll see how easily you can work with data programmatically.
1. Open a command prompt, create a new folder named git-samples, then close the command prompt.
Now let's clone a Gremlin API app from GitHub, set the connection string, and ru
4. Then open Visual Studio and open the solution file.
-5. Restore the NuGet packages in the project. This should include the Gremlin.Net driver, as well as the Newtonsoft.Json package.
+5. Restore the NuGet packages in the project. This should include the Gremlin.Net driver, and the Newtonsoft.Json package.
-6. You can also install the Gremlin.Net@v3.4.6 driver manually using the Nuget package manager, or the [nuget command-line utility](/nuget/install-nuget-client-tools):
+6. You can also install the Gremlin.Net@v3.4.6 driver manually using the NuGet package manager, or the [NuGet command-line utility](/nuget/install-nuget-client-tools):
```bash
nuget install Gremlin.NET -Version 3.4.6
```
Now go back to the Azure portal to get your connection string information and co
1. Next, navigate to the **Keys** tab and copy the **PRIMARY KEY** value from the Azure portal.
-1. After you have copied the URI and PRIMARY KEY of your account, save them to a new environment variable on the local machine running the application. To set the environment variable, open a command prompt window, and run the following command. Make sure to replace <Your_Azure_Cosmos_account_URI> and <Your_Azure_Cosmos_account_PRIMARY_KEY> values.
+1. After you've copied the URI and PRIMARY KEY of your account, save them to a new environment variable on the local machine running the application. To set the environment variable, open a command prompt window, and run the following command. Make sure to replace <Your_Azure_Cosmos_account_URI> and <Your_Azure_Cosmos_account_PRIMARY_KEY> values.
```console
setx Host "<your Azure Cosmos account name>.gremlin.cosmosdb.azure.com"
```
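Once the variables are stored with `setx`, a .NET process started afterward reads them back through the standard `Environment` API. A minimal sketch (the variable name `Host` comes from the command above; the account name is a placeholder, and the sample app's exact read code may differ):

```csharp
using System;

class EnvCheck
{
    static void Main()
    {
        // setx persists the value for *future* processes; within the same
        // process, Environment.SetEnvironmentVariable plays the same role.
        Environment.SetEnvironmentVariable(
            "Host", "myaccount.gremlin.cosmosdb.azure.com");

        string host = Environment.GetEnvironmentVariable("Host");
        Console.WriteLine(host); // prints myaccount.gremlin.cosmosdb.azure.com
    }
}
```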
You've now updated your app with all the info it needs to communicate with Azure
## Run the console app
-Click CTRL + F5 to run the application. The application will print both the Gremlin query commands and results in the console.
+Select CTRL + F5 to run the application. The application will print both the Gremlin query commands and results in the console.
The console window displays the vertexes and edges being added to the graph. When the script completes, press ENTER to close the console window.
Click CTRL + F5 to run the application. The application will print both the Grem
You can now go back to Data Explorer in the Azure portal and browse and query your new graph data.
-1. In Data Explorer, the new database appears in the Graphs pane. Expand the database and container nodes, and then click **Graph**.
+1. In Data Explorer, the new database appears in the Graphs pane. Expand the database and container nodes, and then select **Graph**.
-2. Click the **Apply Filter** button to use the default query to view all the vertices in the graph. The data generated by the sample app is displayed in the Graphs pane.
+2. Select the **Apply Filter** button to use the default query to view all the vertices in the graph. The data generated by the sample app is displayed in the Graphs pane.
- You can zoom in and out of the graph, you can expand the graph display space, add additional vertices, and move vertices on the display surface.
+ You can zoom in and out of the graph, expand the graph display space, add extra vertices, and move vertices on the display surface.
:::image type="content" source="./media/create-graph-dotnet/graph-explorer.png" alt-text="View the graph in Data Explorer in the Azure portal":::
cosmos-db Create Mongodb Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/create-mongodb-dotnet.md
ms.devlang: csharp Previously updated : 4/26/2022 Last updated : 05/02/2022
This quickstart demonstrates how to:
## Prerequisites to run the sample app
-* [Visual Studio](https://www.visualstudio.com/downloads/)
+* Latest [!INCLUDE [cosmos-db-visual-studio](../includes/cosmos-db-visual-studio.md)]
* [.NET 5.0](https://dotnet.microsoft.com/download/dotnet/5.0)
* An Azure account with an active subscription. [Create an Azure account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). You can also [try Azure Cosmos DB](https://azure.microsoft.com/try/cosmosdb/) without an Azure subscription, free of charge and commitments.
-If you don't already have Visual Studio, download [Visual Studio 2019 Community Edition](https://www.visualstudio.com/downloads/) with the **ASP.NET and web development** workload installed with setup.
-
-<a id="create-account"></a>
## Create a database account [!INCLUDE [cosmos-db-create-dbaccount](../includes/cosmos-db-create-dbaccount-mongodb.md)]
You've now updated your app with all the info it needs to communicate with Cosmo
## Load sample data
-[Download](https://www.mongodb.com/try/download/database-tools) [mongoimport](https://docs.mongodb.com/database-tools/mongoimport/#mongodb-binary-bin.mongoimport), a CLI tool that easily imports small amounts of JSON, CSV, or TSV data. We will use mongoimport to load the sample product data provided in the `Data` folder of this project.
+[Download](https://www.mongodb.com/try/download/database-tools) [mongoimport](https://docs.mongodb.com/database-tools/mongoimport/#mongodb-binary-bin.mongoimport), a CLI tool that easily imports small amounts of JSON, CSV, or TSV data. We'll use mongoimport to load the sample product data provided in the `Data` folder of this project.
From the Azure portal, copy the connection information and enter it in the command below:
mongoimport --host <HOST>:<PORT> -u <USERNAME> -p <PASSWORD> --db cosmicworks --
From Visual Studio, select CTRL + F5 to run the app. The default browser is launched with the app.
-If you prefer the CLI, run the following command in a command window to start the sample app. This command will also install project dependencies and build the project, but will not automatically launch the browser.
+If you prefer the CLI, run the following command in a command window to start the sample app. This command will also install project dependencies and build the project, but won't automatically launch the browser.
```bash
dotnet run
```
Enter any necessary parameters and select "Execute."
## Next steps
-In this quickstart, you've learned how to create an API for MongoDB account, create a database and a collection with code, and run a web API app. You can now import additional data to your database.
+In this quickstart, you've learned how to create an API for MongoDB account, create a database and a collection with code, and run a web API app. You can now import other data to your database.
Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.

* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
cosmos-db Nodejs Console App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/nodejs-console-app.md
This example shows you how to build a console app using Node.js and Azure Cosmos
To use this example, you must:
-* [Create](create-mongodb-dotnet.md#create-account) a Cosmos account configured to use Azure Cosmos DB's API for MongoDB.
+* [Create](create-mongodb-dotnet.md#create-a-database-account) a Cosmos account configured to use Azure Cosmos DB's API for MongoDB.
* Retrieve your [connection string](connect-mongodb-account.md) information. ## Create the app
cosmos-db Bulk Executor Dot Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/bulk-executor-dot-net.md
ms.devlang: csharp Previously updated : 03/23/2020 Last updated : 05/02/2020
> If you are currently using the bulk executor library and planning to migrate to bulk support on the newer SDK, use the steps in the [Migration guide](how-to-migrate-from-bulk-executor-library.md) to migrate your application.
-This tutorial provides instructions on using the bulk executor .NET library to import and update documents to an Azure Cosmos container. To learn about the bulk executor library and how it helps you leverage massive throughput and storage, see the [bulk executor library overview](../bulk-executor-overview.md) article. In this tutorial, you will see a sample .NET application that bulk imports randomly generated documents into an Azure Cosmos container. After importing, it shows you how you can bulk update the imported data by specifying patches as operations to perform on specific document fields.
+This tutorial provides instructions on using the bulk executor .NET library to import and update documents to an Azure Cosmos container. To learn about the bulk executor library and how it helps you use massive throughput and storage, see the [bulk executor library overview](../bulk-executor-overview.md) article. In this tutorial, you'll see a sample .NET application that bulk imports randomly generated documents into an Azure Cosmos container. After importing the data, the tutorial shows how you can bulk update the imported data by specifying patches as operations to perform on specific document fields.
Currently, the bulk executor library is supported by the Azure Cosmos DB SQL API and Gremlin API accounts only. This article describes how to use the bulk executor .NET library with SQL API accounts. To learn about using the bulk executor .NET library with Gremlin API accounts, see [perform bulk operations in the Azure Cosmos DB Gremlin API](../graph/bulk-executor-graph-dotnet.md).

## Prerequisites
-* If you don't already have Visual Studio 2019 installed, you can download and use the [Visual Studio 2019 Community Edition](https://www.visualstudio.com/downloads/). Make sure that you enable "Azure development" during the Visual Studio setup.
+* Latest [!INCLUDE [cosmos-db-visual-studio](../includes/cosmos-db-visual-studio.md)]
* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio) before you begin. * You can [Try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) without an Azure subscription, free of charge and commitments. Or, you can use the [Azure Cosmos DB Emulator](../local-emulator.md) with the `https://localhost:8081` endpoint. The Primary Key is provided in [Authenticating requests](../local-emulator.md#authenticate-requests).
-* Create an Azure Cosmos DB SQL API account by using the steps described in [create database account](create-sql-api-dotnet.md#create-account) section of the .NET quickstart article.
+* Create an Azure Cosmos DB SQL API account by using the steps described in the [create a database account](create-sql-api-dotnet.md#create-account) section of the .NET quickstart article.
## Clone the sample application
git clone https://github.com/Azure/azure-cosmosdb-bulkexecutor-dotnet-getting-st
The cloned repository contains two samples "BulkImportSample" and "BulkUpdateSample". You can open either of the sample applications, update the connection strings in App.config file with your Azure Cosmos DB account's connection strings, build the solution, and run it.
-The "BulkImportSample" application generates random documents and bulk imports them to your Azure Cosmos account. The "BulkUpdateSample" application bulk updates the imported documents by specifying patches as operations to perform on specific document fields. In the next sections, you will review the code in each of these sample apps.
+The "BulkImportSample" application generates random documents and bulk imports them to your Azure Cosmos account. The "BulkUpdateSample" application bulk updates the imported documents by specifying patches as operations to perform on specific document fields. In the next sections, you'll review the code in each of these sample apps.
## Bulk import data to an Azure Cosmos account
The "BulkImportSample" application generates random documents and bulk imports t
connectionPolicy) ```
-4. The BulkExecutor object is initialized with a high retry value for wait time and throttled requests. And then they are set to 0 to pass congestion control to BulkExecutor for its lifetime.
+4. The BulkExecutor object is initialized with a high retry value for wait time and throttled requests. And then they're set to 0 to pass congestion control to BulkExecutor for its lifetime.
```csharp // Set retry options high during initialization (default values).
The "BulkImportSample" application generates random documents and bulk imports t
|**Parameter** |**Description** | |||
- |enableUpsert | A flag to enable upsert operations on the documents. If a document with the given ID already exists, it's updated. By default, it is set to false. |
- |disableAutomaticIdGeneration | A flag to disable automatic generation of ID. By default, it is set to true. |
+ |enableUpsert | A flag to enable upsert operations on the documents. If a document with the given ID already exists, it's updated. By default, it's set to false. |
+ |disableAutomaticIdGeneration | A flag to disable automatic generation of ID. By default, it's set to true. |
|maxConcurrencyPerPartitionKeyRange | The maximum degree of concurrency per partition key range, setting to null will cause library to use a default value of 20. |
|maxInMemorySortingBatchSize | The maximum number of documents that are pulled from the document enumerator, which is passed to the API call in each stage. For in-memory sorting phase that happens before bulk importing, setting this parameter to null will cause library to use default minimum value (documents.count, 1000000). |
|cancellationToken | The cancellation token to gracefully exit the bulk import operation. |
The "BulkImportSample" application generates random documents and bulk imports t
|NumberOfDocumentsImported (long) | The total number of documents that were successfully imported out of the total documents supplied to the bulk import API call. |
|TotalRequestUnitsConsumed (double) | The total request units (RU) consumed by the bulk import API call. |
|TotalTimeTaken (TimeSpan) | The total time taken by the bulk import API call to complete the execution. |
- |BadInputDocuments (List\<object>) | The list of bad-format documents that were not successfully imported in the bulk import API call. Fix the documents returned and retry import. Bad-formatted documents include documents whose ID value is not a string (null or any other datatype is considered invalid). |
+ |BadInputDocuments (List\<object>) | The list of bad-format documents that weren't successfully imported in the bulk import API call. Fix the documents returned and retry import. Bad-formatted documents include documents whose ID value isn't a string (null or any other datatype is considered invalid). |
## Bulk update data in your Azure Cosmos account
-You can update existing documents by using the BulkUpdateAsync API. In this example, you will set the `Name` field to a new value and remove the `Description` field from the existing documents. For the full set of supported update operations, refer to the [API documentation](/dotnet/api/microsoft.azure.cosmosdb.bulkexecutor.bulkupdate).
+You can update existing documents by using the BulkUpdateAsync API. In this example, you'll set the `Name` field to a new value and remove the `Description` field from the existing documents. For the full set of supported update operations, refer to the [API documentation](/dotnet/api/microsoft.azure.cosmosdb.bulkexecutor.bulkupdate).
1. Navigate to the "BulkUpdateSample" folder and open the "BulkUpdateSample.sln" file.
-2. Define the update items along with the corresponding field update operations. In this example, you will use `SetUpdateOperation` to update the `Name` field and `UnsetUpdateOperation` to remove the `Description` field from all the documents. You can also perform other operations like increment a document field by a specific value, push specific values into an array field, or remove a specific value from an array field. To learn about different methods provided by the bulk update API, refer to the [API documentation](/dotnet/api/microsoft.azure.cosmosdb.bulkexecutor.bulkupdate).
+2. Define the update items along with the corresponding field update operations. In this example, you'll use `SetUpdateOperation` to update the `Name` field and `UnsetUpdateOperation` to remove the `Description` field from all the documents. You can also perform other operations like increment a document field by a specific value, push specific values into an array field, or remove a specific value from an array field. To learn about different methods provided by the bulk update API, refer to the [API documentation](/dotnet/api/microsoft.azure.cosmosdb.bulkexecutor.bulkupdate).
```csharp SetUpdateOperation<string> nameUpdate = new SetUpdateOperation<string>("Name", "UpdatedDoc");
Consider the following points for better performance when using the bulk executo
* For best performance, run your application from an Azure virtual machine that is in the same region as your Azure Cosmos account's write region.
-* It is recommended that you instantiate a single `BulkExecutor` object for the whole application within a single virtual machine that corresponds to a specific Azure Cosmos container.
+* It's recommended that you instantiate a single `BulkExecutor` object for the whole application within a single virtual machine that corresponds to a specific Azure Cosmos container.
-* Since a single bulk operation API execution consumes a large chunk of the client machine's CPU and network IO (This happens by spawning multiple tasks internally). Avoid spawning multiple concurrent tasks within your application process that execute bulk operation API calls. If a single bulk operation API call that is running on a single virtual machine is unable to consume the entire container's throughput (if your container's throughput > 1 million RU/s), it's preferred to create separate virtual machines to concurrently execute the bulk operation API calls.
+A single bulk operation API execution consumes a large chunk of the client machine's CPU and network IO because it spawns multiple tasks internally. Avoid spawning multiple concurrent tasks within your application process that execute bulk operation API calls. If a single bulk operation API call that is running on a single virtual machine is unable to consume the entire container's throughput (if your container's throughput > 1 million RU/s), it's preferred to create separate virtual machines to concurrently execute the bulk operation API calls.
* Ensure the `InitializeAsync()` method is invoked after instantiating a BulkExecutor object to fetch the target Cosmos container's partition map.
cosmos-db Dynamo To Cosmos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/dynamo-to-cosmos.md
Previously updated : 04/29/2020 Last updated : 05/02/2020
git clone https://github.com/Azure-Samples/DynamoDB-to-CosmosDB
### Pre-requisites
- .NET Framework 4.7.2
-- Visual Studio 2019
+* Latest [!INCLUDE [cosmos-db-visual-studio](../includes/cosmos-db-visual-studio.md)]
- Access to Azure Cosmos DB SQL API Account
- Local installation of Amazon DynamoDB
- Java 8
client_documentDB = new CosmosClient("your connectionstring from the Azure porta
With Azure Cosmos DB, you can use the following options to optimize your connection:
-* **ConnectionMode** - Use direct connection mode to connect to the data nodes in the Azure Cosmos DB service. Use gateway mode only to initialize and cache the logical addresses and refresh on updates. See the [connectivity modes](sql-sdk-connection-modes.md) article for more details.
+* **ConnectionMode** - Use direct connection mode to connect to the data nodes in the Azure Cosmos DB service. Use gateway mode only to initialize and cache the logical addresses and refresh on updates. For more information, see [connectivity modes](sql-sdk-connection-modes.md).
-* **ApplicationRegion** - This option is used to set the preferred geo-replicated region that is used to interact with Azure Cosmos DB. To learn more see the [global distribution](../distribute-data-globally.md) article.
+* **ApplicationRegion** - This option is used to set the preferred geo-replicated region that is used to interact with Azure Cosmos DB. For more information, see [global distribution](../distribute-data-globally.md).
-* **ConsistencyLevel** - This option is used to override default consistency level. To learn more, see the [Consistency levels](../consistency-levels.md) article.
+* **ConsistencyLevel** - This option is used to override default consistency level. For more information, see [consistency levels](../consistency-levels.md).
-* **BulkExecutionMode** - This option is used to execute bulk operations by setting the *AllowBulkExecution* property to true. To learn more see the [Bulk import](tutorial-sql-api-dotnet-bulk-import.md) article.
+* **BulkExecutionMode** - This option is used to execute bulk operations by setting the *AllowBulkExecution* property to true. For more information, see [bulk import](tutorial-sql-api-dotnet-bulk-import.md).
```csharp client_cosmosDB = new CosmosClient(" Your connection string ",new CosmosClientOptions()
With Azure Cosmos DB, you can use the following options to optimize your connect
}); ```
-### Provision the container
+### Create the container
**DynamoDB**:
-To store the data into Amazon DynamoDB you need to create the table first. In this process you define the schema, key type, and attributes as shown in the following code:
+To store the data into Amazon DynamoDB, you need to create the table first. In the table creation process, you define the schema, key type, and attributes as shown in the following code:
```csharp // movies_key_schema
for( int i = 0, j = 99; i < n; i++ )
**Azure Cosmos DB**:
-In Azure Cosmos DB, you can opt for stream and write with `moviesContainer.CreateItemStreamAsync()`. However, in this sample, the JSON will be deserialized into the *MovieModel* type to demonstrate type-casting feature. The code is multi-threaded, which will use Azure Cosmos DB's distributed architecture and speed-up the loading:
+In Azure Cosmos DB, you can opt for stream and write with `moviesContainer.CreateItemStreamAsync()`. However, in this sample, the JSON will be deserialized into the *MovieModel* type to demonstrate type-casting feature. The code is multi-threaded, which will use Azure Cosmos DB's distributed architecture and speed up the loading:
```csharp List<Task> concurrentTasks = new List<Task>();
Primitive range = new Primitive(title, false);
**Azure Cosmos DB**:
-However, with Azure Cosmos DB the query is natural (linq):
+However, with Azure Cosmos DB the query is natural (LINQ):
```csharp IQueryable<MovieModel> movieQuery = moviesContainer.GetItemLinqQueryable<MovieModel>(true)
cosmos-db How To Configure Cross Origin Resource Sharing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-configure-cross-origin-resource-sharing.md
ms.devlang: javascript
# Configure Cross-Origin Resource Sharing (CORS) [!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)]
-Cross-Origin Resource Sharing (CORS) is an HTTP feature that enables a web application running under one domain to access resources in another domain. Web browsers implement a security restriction known as same-origin policy that prevents a web page from calling APIs in a different domain. However, CORS provides a secure way to allow the origin domain to call APIs in another domain. The Core (SQL) API in Azure Cosmos DB now supports Cross-Origin Resource Sharing (CORS) by using the "allowedOrigins" header. After you enable the CORS support for your Azure Cosmos account, only authenticated requests are evaluated to determine whether they are allowed according to the rules you have specified.
+Cross-Origin Resource Sharing (CORS) is an HTTP feature that enables a web application running under one domain to access resources in another domain. Web browsers implement a security restriction known as same-origin policy that prevents a web page from calling APIs in a different domain. However, CORS provides a secure way to allow the origin domain to call APIs in another domain. The Core (SQL) API in Azure Cosmos DB now supports Cross-Origin Resource Sharing (CORS) by using the "allowedOrigins" header. After you enable the CORS support for your Azure Cosmos account, only authenticated requests are evaluated to determine whether they're allowed according to the rules you've specified.
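The evaluation described above can be pictured with a small sketch: a request's `Origin` header is compared against the configured allowed origins, with `*` matching everything. This helper is purely illustrative of the CORS rule concept; the function name is made up here and this is not Azure's implementation.

```javascript
// Illustrative only: a simplified model of how an allowed-origins rule is
// evaluated. "*" permits any origin; otherwise the Origin header must match
// one of the configured values exactly.
function isOriginAllowed(origin, allowedOrigins) {
  return allowedOrigins.some((allowed) => allowed === "*" || allowed === origin);
}

console.log(isOriginAllowed("https://www.mydomain.com",
  ["https://www.mydomain.com", "https://api.mydomain.com"])); // true
console.log(isOriginAllowed("https://evil.example", ["https://www.mydomain.com"])); // false
console.log(isOriginAllowed("https://anything.example", ["*"])); // true
```

Note that real CORS evaluation also involves methods and headers; this sketch covers only the origin match.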
-You can configure the Cross-origin resource sharing (CORS) setting from the Azure portal or from an Azure Resource Manager template. For Cosmos accounts using the Core (SQL) API, Azure Cosmos DB supports a JavaScript library that works in both Node.js and browser-based environments. This library can now take advantage of CORS support when using Gateway mode. There is no client-side configuration needed to use this feature. With CORS support, resources from a browser can directly access Azure Cosmos DB through the [JavaScript library](https://www.npmjs.com/package/@azure/cosmos) or directly from the [REST API](/rest/api/cosmos-db/) for simple operations.
+You can configure the Cross-origin resource sharing (CORS) setting from the Azure portal or from an Azure Resource Manager template. For Cosmos accounts using the Core (SQL) API, Azure Cosmos DB supports a JavaScript library that works in both Node.js and browser-based environments. This library can now take advantage of CORS support when using Gateway mode. There's no client-side configuration needed to use this feature. With CORS support, resources from a browser can directly access Azure Cosmos DB through the [JavaScript library](https://www.npmjs.com/package/@azure/cosmos) or directly from the [REST API](/rest/api/cosmos-db/) for simple operations.
> [!NOTE]
> CORS support is only applicable and supported for the Azure Cosmos DB Core (SQL) API. It is not applicable to the Azure Cosmos DB APIs for Cassandra, Gremlin, or MongoDB, as these protocols do not use HTTP for client-server communication.

## Enable CORS support from Azure portal
-Use the following steps to enable Cross-Origin Resource Sharing by using Azure portal:
+Follow these steps to enable Cross-Origin Resource Sharing by using Azure portal:
-1. Navigate to your Azure cosmos DB account. Open the **CORS** blade.
+1. Navigate to your Azure Cosmos DB account. Open the **CORS** page.
2. Specify a comma-separated list of origins that can make cross-origin calls to your Azure Cosmos DB account. For example, `https://www.mydomain.com`, `https://mydomain.com`, `https://api.mydomain.com`. You can also use a wildcard `*` to allow all origins, and then select **Submit**.
Use the following steps to enable Cross-Origin Resource Sharing by using Azure p
## Enable CORS support from Resource Manager template
-To enable CORS by using a Resource Manager template, add the "cors" section with "allowedOrigins" property to any existing template. The following JSON is an example of a template that creates a new Azure Cosmos account with CORS enabled.
+To enable CORS by using a Resource Manager template, add the "cors" section with "allowedOrigins" property to any existing template. This JSON is an example of a template that creates a new Azure Cosmos account with CORS enabled.
```json {
To enable CORS by using a Resource Manager template, add the "cors" section
"databaseAccountOfferType": "Standard", "cors": [ {
- "allowedOrigins": "*"
+ "allowedOrigins": "https://contoso.com"
} ] }
To enable CORS by using a Resource Manager template, add the "cors" section
## Using the Azure Cosmos DB JavaScript library from a browser
-Today, the Azure Cosmos DB JavaScript library only has the CommonJS version of the library shipped with its package. To use this library from the browser, you have to use a tool such as Rollup or Webpack to create a browser compatible library. Certain Node.js libraries should have browser mocks for them. The following is an example of a webpack config file that has the necessary mock settings.
+Today, the Azure Cosmos DB JavaScript library only has the CommonJS version of the library shipped with its package. To use this library from the browser, you have to use a tool such as Rollup or Webpack to create a browser compatible library. Certain Node.js libraries should have browser mocks for them. This is an example of a webpack config file that has the necessary mock settings.
```javascript const path = require("path");
module.exports = {
}; ```
-Here is a [code sample](https://github.com/christopheranderson/cosmos-browser-sample) that uses TypeScript and Webpack with the Azure Cosmos DB JavaScript SDK library to build a Todo app that sends real time updates when new items are created.
-As a best practice, do not use the primary key to communicate with Azure Cosmos DB from the browser. Instead, use resource tokens to communicate. For more information about resource tokens, see [Securing access to Azure Cosmos DB](../secure-access-to-data.md#resource-tokens) article.
+Here's a [code sample](https://github.com/christopheranderson/cosmos-browser-sample) that uses TypeScript and Webpack with the Azure Cosmos DB JavaScript SDK library. The sample builds a Todo app that sends real-time updates when new items are created.
+
+As a best practice, don't use the primary key to communicate with Azure Cosmos DB from the browser. Instead, use resource tokens to communicate. For more information about resource tokens, see [Securing access to Azure Cosmos DB](../secure-access-to-data.md#resource-tokens).
## Next steps
cosmos-db Serverless Computing Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/serverless-computing-database.md
Previously updated : 03/25/2022 Last updated : 05/02/2020
In retail implementations, when a user adds an item to their basket you now have
**Implementation:** Multiple Azure Functions triggers for Cosmos DB listening to one container
-1. You can create multiple Azure Functions by adding Azure Functions triggers for Cosmos DB to each - all of which listen to the same change feed of shopping cart data. Note that when multiple functions listen to the same change feed, a new lease collection is required for each function. For more information about lease collections, see [Understanding the Change Feed Processor library](change-feed-processor.md).
+1. You can create multiple Azure Functions by adding Azure Functions triggers for Cosmos DB to each - all of which listen to the same change feed of shopping cart data. When multiple functions listen to the same change feed, a new lease collection is required for each function. For more information about lease collections, see [Understanding the Change Feed Processor library](change-feed-processor.md).
2. Whenever a new item is added to a user's shopping cart, each function is independently invoked by the change feed from the shopping cart container.
   * One function may use the contents of the current basket to change the display of other items the user might be interested in.
   * Another function may update inventory totals.
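The one-lease-collection-per-function requirement above can be sketched with a hypothetical `function.json` trigger binding; all names here are illustrative, not from any sample project:

```json
{
  "bindings": [
    {
      "type": "cosmosDBTrigger",
      "name": "input",
      "direction": "in",
      "databaseName": "store",
      "collectionName": "shoppingCart",
      "leaseCollectionName": "leases-recommendations",
      "createLeaseCollectionIfNotExists": true,
      "connectionStringSetting": "CosmosConnection"
    }
  ]
}
```

A second function watching the same `shoppingCart` container would use an identical binding except for a distinct `leaseCollectionName`, such as `leases-inventory`, so each function tracks its own progress through the change feed.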
In all of these use cases, because the function has decoupled the app itself, yo
## Tooling
-Native integration between Azure Cosmos DB and Azure Functions is available in the Azure portal and in Visual Studio 2019.
+Native integration between Azure Cosmos DB and Azure Functions is available in the Azure portal and in Visual Studio.
* In the Azure Functions portal, you can create a trigger. For quickstart instructions, see [Create an Azure Functions trigger for Cosmos DB in the Azure portal](../../azure-functions/functions-create-cosmos-db-triggered-function.md).
* In the Azure Cosmos DB portal, you can add an Azure Functions trigger for Cosmos DB to an existing Azure Function app in the same resource group.
-* In Visual Studio 2019, you can create the trigger using the [Azure Functions Tools](../../azure-functions/functions-develop-vs.md):
+* In Visual Studio, you can create the trigger using the [Azure Functions Tools](../../azure-functions/functions-develop-vs.md):
> >[!VIDEO https://aka.ms/docs.change-feed-azure-functions]
Azure Functions provides the ability to create scalable units of work, or concis
Azure Cosmos DB is the recommended database for your serverless computing architecture for the following reasons:
-* **Instant access to all your data**: You have granular access to every value stored because Azure Cosmos DB [automatically indexes](../index-policy.md) all data by default, and makes those indexes immediately available. This means you are able to constantly query, update, and add new items to your database and have instant access via Azure Functions.
+* **Instant access to all your data**: You have granular access to every value stored because Azure Cosmos DB [automatically indexes](../index-policy.md) all data by default, and makes those indexes immediately available. This means you're able to constantly query, update, and add new items to your database and have instant access via Azure Functions.
-* **Schemaless**. Azure Cosmos DB is schemaless - so it's uniquely able to handle any data output from an Azure Function. This "handle anything" approach makes it straightforward to create a variety of Functions that all output to Azure Cosmos DB.
+* **Schemaless**. Azure Cosmos DB is schemaless - so it's uniquely able to handle any data output from an Azure Function. This "handle anything" approach makes it straightforward to create various Functions that all output to Azure Cosmos DB.
* **Scalable throughput**. Throughput can be scaled up and down instantly in Azure Cosmos DB. If you have hundreds or thousands of Functions querying and writing to the same container, you can scale up your [RU/s](../request-units.md) to handle the load. All functions can work in parallel using your allocated RU/s and your data is guaranteed to be [consistent](../consistency-levels.md).
If you're looking to integrate with Azure Functions to store data and don't need
Benefits of Azure Functions:
-* **Event-driven**. Azure Functions are event-driven and can listen to a change feed from Azure Cosmos DB. This means you don't need to create listening logic, you just keep an eye out for the changes you're listening for.
+* **Event-driven**. Azure Functions is event-driven and can listen to a change feed from Azure Cosmos DB. This means you don't need to create listening logic; you just keep an eye out for the changes you're listening for.
* **No limits**. Functions execute in parallel and the service spins up as many as you need. You set the parameters.
cosmos-db Sql Api Dotnet Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-dotnet-application.md
ms.devlang: csharp Previously updated : 08/26/2021 Last updated : 05/02/2020
This tutorial covers:
> [!TIP]
> This tutorial assumes that you have prior experience using ASP.NET Core MVC and Azure App Service. If you are new to ASP.NET Core or the [prerequisite tools](#prerequisites), we recommend that you download the complete sample project from [GitHub][GitHub], add the required NuGet packages, and run it. Once you build the project, you can review this article to gain insight into the code in the context of the project.
-## <a name="prerequisites"></a>Prerequisites
+## Prerequisites
Before following the instructions in this article, make sure that you have the following resources:
Before following the instructions in this article, make sure that you have the f
[!INCLUDE [cosmos-db-emulator-docdb-api](../includes/cosmos-db-emulator-docdb-api.md)]
-* Visual Studio 2019. [!INCLUDE [cosmos-db-emulator-vs](../includes/cosmos-db-emulator-vs.md)]
+* Latest [!INCLUDE [cosmos-db-visual-studio](../includes/cosmos-db-visual-studio.md)]
All the screenshots in this article are from Microsoft Visual Studio Community 2019. If you use a different version, your screens and options may not match entirely. The solution should work if you meet the prerequisites.
-## <a name="create-an-azure-cosmos-account"></a>Step 1: Create an Azure Cosmos account
+## Step 1: Create an Azure Cosmos account
-Let's start by creating an Azure Cosmos account. If you already have an Azure Cosmos DB SQL API account or if you're using the Azure Cosmos DB Emulator, skip to [Step 2: Create a new ASP.NET MVC application](#create-a-new-mvc-application).
+Let's start by creating an Azure Cosmos account. If you already have an Azure Cosmos DB SQL API account or if you're using the Azure Cosmos DB Emulator, skip to [Step 2: Create a new ASP.NET MVC application](#step-2-create-a-new-aspnet-core-mvc-application).
[!INCLUDE [create-dbaccount](../includes/cosmos-db-create-dbaccount.md)]
Let's start by creating an Azure Cosmos account. If you already have an Azure Co
In the next section, you create a new ASP.NET Core MVC application.
-## <a name="create-a-new-mvc-application"></a>Step 2: Create a new ASP.NET Core MVC application
+## Step 2: Create a new ASP.NET Core MVC application
1. Open Visual Studio and select **Create a new project**.
In the next section, you create a new ASP.NET Core MVC application.
1. Select **Debug** > **Start Debugging** or F5 to run your ASP.NET application locally.
-## <a name="add-nuget-packages"></a>Step 3: Add Azure Cosmos DB NuGet package to the project
+## Step 3: Add Azure Cosmos DB NuGet package to the project
Now that we have most of the ASP.NET Core MVC framework code that we need for this solution, let's add the NuGet packages required to connect to Azure Cosmos DB.
Now that we have most of the ASP.NET Core MVC framework code that we need for th
Install-Package Microsoft.Azure.Cosmos ```
-## <a name="set-up-the-mvc-application"></a>Step 4: Set up the ASP.NET Core MVC application
+## Step 4: Set up the ASP.NET Core MVC application
Now let's add the models, the views, and the controllers to this MVC application.
-### <a name="add-a-model"></a> Add a model
+### Add a model
1. In **Solution Explorer**, right-click the **Models** folder, select **Add** > **Class**.
Now let's add the models, the views, and the controllers to this MVC application
Azure Cosmos DB uses JSON to move and store data. You can use the `JsonProperty` attribute to control how JSON serializes and deserializes objects. The `Item` class demonstrates the `JsonProperty` attribute. This code controls the format of the property name that goes into JSON. It also renames the .NET property `Completed`.
-### <a name="add-views"></a>Add views
+### Add views
Next, let's add the following views. * A create item view * A delete item view
-* A view to get an item details
+* A view to get item details
* An edit item view * A view to list all the items
-#### <a name="AddNewIndexView"></a>Create item view
+#### Create item view
1. In **Solution Explorer**, right-click the **Views** folder and select **Add** > **New Folder**. Name the folder *Item*.
Next, let's add the following views.
:::code language="csharp" source="~/samples-cosmosdb-dotnet-core-web-app/src/Views/Item/Create.cshtml":::
-#### <a name="AddEditIndexView"></a>Delete item view
+#### Delete item view
1. From the **Solution Explorer**, right-click the **Item** folder again, select **Add** > **View**.
Next, let's add the following views.
:::code language="csharp" source="~/samples-cosmosdb-dotnet-core-web-app/src/Views/Item/Delete.cshtml":::
-#### <a name="AddItemIndexView"></a>Add a view to get an item details
+#### Add a view to get item details
1. In **Solution Explorer**, right-click the **Item** folder again, select **Add** > **View**.
Next, let's add the following views.
:::code language="csharp" source="~/samples-cosmosdb-dotnet-core-web-app/src/Views/Item/Details.cshtml":::
-#### <a name="AddEditIndexView"></a>Add an edit item view
+#### Add an edit item view
1. From the **Solution Explorer**, right-click the **Item** folder again, select **Add** > **View**.
Next, let's add the following views.
:::code language="csharp" source="~/samples-cosmosdb-dotnet-core-web-app/src/Views/Item/Edit.cshtml":::
-#### <a name="AddEditIndexView"></a>Add a view to list all the items
+#### Add a view to list all the items
And finally, add a view to get all the items with the following steps:
And finally, add a view to get all the items with the following steps:
Once you complete these steps, close all the *cshtml* documents in Visual Studio.
-### <a name="initialize-services"></a>Declare and initialize services
+### Declare and initialize services
First, we'll add a class that contains the logic to connect to and use Azure Cosmos DB. For this tutorial, we'll encapsulate this logic into a class called `CosmosDbService` and an interface called `ICosmosDbService`. This service performs the CRUD operations: creating, editing, and deleting items. It also performs read-feed operations, such as listing incomplete items.
First, we'll add a class that contains the logic to connect to and use Azure Cos
:::code language="json" source="~/samples-cosmosdb-dotnet-core-web-app/src/appsettings.json":::
-### <a name="add-a-controller"></a>Add a controller
+### Add a controller
1. In **Solution Explorer**, right-click the **Controllers** folder, select **Add** > **Controller**.
The **ValidateAntiForgeryToken** attribute is used here to help protect this app
We also use the **Bind** attribute on the method parameter to help protect against over-posting attacks. For more information, see [Tutorial: Implement CRUD Functionality with the Entity Framework in ASP.NET MVC][Basic CRUD Operations in ASP.NET MVC].
-## <a name="run-the-application"></a>Step 5: Run the application locally
+## Step 5: Run the application locally
To test the application on your local computer, use the following steps:
To test the application on your local computer, use the following steps:
1. Once you've tested the app, select Ctrl+F5 to stop debugging the app. You're ready to deploy!
-## <a name="deploy-the-application-to-azure"></a>Step 6: Deploy the application
+## Step 6: Deploy the application
Now that you have the complete application working correctly with Azure Cosmos DB, we're going to deploy this web app to Azure App Service.
cosmos-db Sql Api Dotnet V2sdk Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-dotnet-v2sdk-samples.md
Previously updated : 08/26/2021 Last updated : 05/02/2020
For .NET SDK Version 3.0 (Preview) code samples, see the latest samples in the [
## Prerequisites
-Visual Studio 2019 with the Azure development workflow installed
-- You can download and use the **free** [Visual Studio 2019 Community Edition](https://www.visualstudio.com/downloads/). Make sure that you enable **Azure development** during the Visual Studio setup.
+Latest [!INCLUDE [cosmos-db-visual-studio](../includes/cosmos-db-visual-studio.md)]
The [Microsoft.Azure.DocumentDB NuGet package](https://www.nuget.org/packages/Microsoft.Azure.DocumentDB/)
cosmos-db Sql Api Dotnet V3sdk Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-dotnet-v3sdk-samples.md
Previously updated : 02/23/2022 Last updated : 05/02/2020
If you're familiar with the previous version of the .NET SDK, you might be used
## Prerequisites

-- Visual Studio 2019 with the Azure development workflow installed. You can download and use the free [Visual Studio 2019 Community Edition](https://www.visualstudio.com/downloads/). Make sure that you enable **Azure development** during the Visual Studio setup.
+- Latest [!INCLUDE [cosmos-db-visual-studio](../includes/cosmos-db-visual-studio.md)]
- The [Microsoft.Azure.cosmos NuGet package](https://www.nuget.org/packages/Microsoft.Azure.cosmos/).
cosmos-db Sql Api Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-get-started.md
Now let's get started!
## Prerequisites
-* An active Azure account. If you don't have one, you can sign up for a [free account](https://azure.microsoft.com/free/).
+An active Azure account. If you don't have one, you can sign up for a [free account](https://azure.microsoft.com/free/).
- [!INCLUDE [cosmos-db-emulator-docdb-api](../includes/cosmos-db-emulator-docdb-api.md)]
-* [!INCLUDE [cosmos-db-emulator-vs](../includes/cosmos-db-emulator-vs.md)]
+Latest [!INCLUDE [cosmos-db-visual-studio](../includes/cosmos-db-visual-studio.md)]
## Step 1: Create an Azure Cosmos DB account
cosmos-db Sql Api Nodejs Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-nodejs-get-started.md
Title: Node.js tutorial for the SQL API for Azure Cosmos DB description: A Node.js tutorial that demonstrates how to connect to and query Azure Cosmos DB using the SQL API-++ ms.devlang: javascript Previously updated : 08/26/2021- Last updated : 05/02/2022
-#Customer intent: As a developer, I want to build a Node.js console application to access and manage SQL API account resources in Azure Cosmos DB, so that customers can better use the service.
- + # Tutorial: Build a Node.js console app with the JavaScript SDK to manage Azure Cosmos DB SQL API data+ [!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)] > [!div class="op_single_selector"]
+>
> * [.NET](sql-api-get-started.md) > * [Java](./create-sql-api-java.md) > * [Async Java](./create-sql-api-java.md) > * [Node.js](sql-api-nodejs-get-started.md)
->
+>
As a developer, you might have applications that use NoSQL document data. You can use a SQL API account in Azure Cosmos DB to store and access this document data. This tutorial shows you how to build a Node.js console application to create Azure Cosmos DB resources and query them. In this tutorial, you will: > [!div class="checklist"]
+>
> * Create and connect to an Azure Cosmos DB account. > * Set up your application. > * Create a database. > * Create a container. > * Add items to the container. > * Perform basic operations on the items, container, and database.
+>
-## Prerequisites
+## Prerequisites
Make sure you have the following resources:
-* An active Azure account. If you don't have one, you can sign up for a [Free Azure Trial](https://azure.microsoft.com/pricing/free-trial/).
+* An active Azure account. If you don't have one, you can sign up for a [Free Azure Trial](https://azure.microsoft.com/pricing/free-trial/).
[!INCLUDE [cosmos-db-emulator-docdb-api](../includes/cosmos-db-emulator-docdb-api.md)]
Make sure you have the following resources:
## Create Azure Cosmos DB account
-Let's create an Azure Cosmos DB account. If you already have an account you want to use, you can skip ahead to [Set up your Node.js application](#SetupNode). If you are using the Azure Cosmos DB Emulator, follow the steps at [Azure Cosmos DB Emulator](../local-emulator.md) to set up the emulator and skip ahead to [Set up your Node.js application](#SetupNode).
+Let's create an Azure Cosmos DB account. If you already have an account you want to use, you can skip ahead to [Set up your Node.js application](#set-up-your-nodejs-application). If you're using the Azure Cosmos DB Emulator, follow the steps at [Azure Cosmos DB Emulator](../local-emulator.md) to set up the emulator and skip ahead to [Set up your Node.js application](#set-up-your-nodejs-application).
[!INCLUDE [cosmos-db-create-dbaccount](../includes/cosmos-db-create-dbaccount.md)]
-## <a id="SetupNode"></a>Set up your Node.js application
+## Set up your Node.js application
+
+Before you start writing code to build the application, you can build the scaffolding for your app. Open your favorite terminal and locate the folder or directory where you'd like to save your Node.js application. Create placeholder JavaScript files with the following commands for your Node.js application:
+
+### [Windows](#tab/windows)
+
+```powershell
+fsutil file createnew app.js 0
+
+fsutil file createnew config.js 0
+
+md data
+
+fsutil file createnew data\databaseContext.js 0
+```
+
+### [Linux / macOS](#tab/linux+macos)
+
+```bash
+touch app.js
-Before you start writing code to build the application, you can build the framework for your app. Run the following steps to set up your Node.js application that has the framework code:
+touch config.js
-1. Open your favorite terminal.
-2. Locate the folder or directory where you'd like to save your Node.js application.
-3. Create empty JavaScript files with the following commands:
+mkdir data
- * Windows:
- * `fsutil file createnew app.js 0`
- * `fsutil file createnew config.js 0`
- * `md data`
- * `fsutil file createnew data\databaseContext.js 0`
+touch data/databaseContext.js
+```
++
- * Linux/OS X:
- * `touch app.js`
- * `touch config.js`
- * `mkdir data`
- * `touch data/databaseContext.js`
+1. Create and initialize a `package.json` file. Use the following command:
-4. Create and initialize a `package.json` file. Use the following command:
- * ```npm init -y```
+ ```bash
+ npm init -y
+ ```
-5. Install the @azure/cosmos module via npm. Use the following command:
- * ```npm install @azure/cosmos --save```
+1. Install the ``@azure/cosmos`` module via **npm**. Use the following command:
-## <a id="Config"></a>Set your app's configurations
+ ```bash
+ npm install @azure/cosmos --save
+ ```
+
+## Set your app's configurations
Now that your app exists, you need to make sure it can talk to Azure Cosmos DB. By updating a few configuration settings, as shown in the following steps, you can connect your app to Azure Cosmos DB:

1. Open the *config.js* file in your favorite text editor.
-1. Copy and paste the following code snippet into the *config.js* file and set the properties `endpoint` and `key` to your Azure Cosmos DB endpoint URI and primary key. The database, container names are set to **Tasks** and **Items**. The partition key you will use for this application is **/category**.
+1. Copy and paste the following code snippet into the *config.js* file and set the properties `endpoint` and `key` to your Azure Cosmos DB endpoint URI and primary key. The database and container names are set to **Tasks** and **Items**. The partition key you'll use for this application is **/category**.
:::code language="javascript" source="~/cosmosdb-nodejs-get-started/config.js":::
The JavaScript SDK uses the generic terms *container* and *item*. A container ca
:::code language="javascript" source="~/cosmosdb-nodejs-get-started/data/databaseContext.js" id="createDatabaseAndContainer":::
- A database is the logical container of items partitioned across containers. You create a database by using either the `createIfNotExists` or create function of the **Databases** class. A container consists of items which in the case of the SQL API is JSON documents. You create a container by using either the `createIfNotExists` or create function from the **Containers** class. After creating a container, you can store and query the data.
+ A database is the logical container of items partitioned across containers. You create a database by using either the `createIfNotExists` or `create` function of the **Databases** class. A container consists of items, which in the SQL API are JSON documents. You create a container by using either the `createIfNotExists` or `create` function from the **Containers** class. After creating a container, you can store and query the data.
> [!WARNING] > Creating a container has pricing implications. Visit our [pricing page](https://azure.microsoft.com/pricing/details/cosmos-db/) so you know what to expect.
The JavaScript SDK uses the generic terms *container* and *item*. A container ca
1. Open the *app.js* file in your favorite text editor.
-1. Copy and paste the code below to import the `@azure/cosmos` module, the configuration, and the databaseContext that you defined in the previous steps.
+1. Copy and paste the code below to import the `@azure/cosmos` module, the configuration, and the databaseContext that you defined in the previous steps.
:::code language="javascript" source="~/cosmosdb-nodejs-get-started/app.js" id="ImportConfiguration":::
+## Create an asynchronous function
+
+In the *app.js* file, copy and paste the following code to create an asynchronous function named **main** and immediately execute the function.
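The actual code comes from the tutorial's sample file, but the shape of the entry point can be sketched as follows; the SDK calls are replaced here with placeholder comments so the structure stands alone:

```javascript
// Sketch of the entry point: an async main with a try/catch around the
// point operations, executed immediately. SDK calls are placeholders only.
async function main() {
  try {
    // const client = new CosmosClient({ endpoint, key });
    // ...query, create, update, and delete items here...
    return "completed";
  } catch (err) {
    console.error(`Error running the sample: ${err.message}`);
  }
}

main().then((status) => console.log(status)); // prints "completed"
```

Wrapping the operations in `async`/`await` lets each step run in sequence while errors from any of them land in the single `catch` block.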
++ ## Connect to the Azure Cosmos account
-In the *app.js* file, copy and paste the following code to use the previously saved endpoint and key to create a new CosmosClient object.
+Within the **main** method, copy and paste the following code to use the previously saved endpoint and key to create a new CosmosClient object.
:::code language="javascript" source="~/cosmosdb-nodejs-get-started/app.js" id="CreateClientObjectDatabaseContainer"::: > [!Note] > If connecting to the **Cosmos DB Emulator**, disable TLS verification for your node process:
+>
> ```javascript > process.env.NODE_TLS_REJECT_UNAUTHORIZED = "0"; > const client = new CosmosClient({ endpoint, key }); > ```
+>
+
+Now that you have the code to initialize the Azure Cosmos DB client, add a try/catch block that you'll use for the code that performs point operations.
+
-Now that you have the code to initialize the Azure Cosmos DB client, let's take a look at how to work with Azure Cosmos DB resources.
+Let's take a look at how to work with Azure Cosmos DB resources.
-## <a id="QueryItem"></a>Query items
+## Query items
-Azure Cosmos DB supports rich queries against JSON items stored in each container. The following sample code shows a query that you can run against the items in your container.You can query the items by using the query function of the `Items` class. Add the following code to the *app.js* file to query the items from your Azure Cosmos account:
+Azure Cosmos DB supports rich queries against JSON items stored in each container. The following sample code shows a query that you can run against the items in your container. You can query the items by using the query function of the `Items` class.
+
+Add the following code to the **try** block to query the items from your Azure Cosmos account:
:::code language="javascript" source="~/cosmosdb-nodejs-get-started/app.js" id="QueryItems":::
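The query itself is an ordinary SQL-like string, and parameterized queries pass values separately from the query text. The following shape is a generic illustration of the kind of specification the SDK's `query` function accepts; the category value is an assumption, not the tutorial's exact query:

```javascript
// A generic parameterized query specification; the category value here is
// illustrative. Parameters keep user input out of the query string itself.
const querySpec = {
  query: "SELECT * FROM c WHERE c.category = @category",
  parameters: [{ name: "@category", value: "personal" }]
};

console.log(querySpec.parameters[0].name); // prints "@category"
```

Passing values through `parameters` rather than string concatenation avoids injection-style mistakes in query text.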
-## <a id="CreateItem"></a>Create an item
+## Create an item
An item can be created by using the create function of the `Items` class. When you're using the SQL API, items are projected as documents, which are user-defined (arbitrary) JSON content. In this tutorial, you create a new item within the tasks database.
-1. In the app.js file, define the item definition:
+1. In the *app.js* file, outside of the **main** method, define the item definition:
:::code language="javascript" source="~/cosmosdb-nodejs-get-started/app.js" id="DefineNewItem":::
-1. Add the following code to create the previously defined item:
+1. Back within the **try** block of the **main** method, add the following code to create the previously defined item:
:::code language="javascript" source="~/cosmosdb-nodejs-get-started/app.js" id="CreateItem":::
-## <a id="ReplaceItem"></a>Update an item
+## Update an item
-Azure Cosmos DB supports replacing the contents of items. Copy and paste the following code to *app.js* file. This code gets an item from the container and updates the *isComplete* field to true.
+Azure Cosmos DB supports replacing the contents of items. Copy and paste the following code to the **try** block. This code gets an item from the container and updates the *isComplete* field to true.
:::code language="javascript" source="~/cosmosdb-nodejs-get-started/app.js" id="UpdateItem":::
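The update step follows a read-modify-replace pattern; here is a sketch with the SDK calls shown in comments (the task item shape is assumed from the tutorial's sample data, not copied from app.js):

```javascript
// Assumed task item shape; in app.js the real item comes from
// container.item(id, partitionKey).read().
const item = { id: "1", category: "personal", isComplete: false };

// Mutate the field locally...
item.isComplete = true;

// ...then persist the whole document; in app.js this is roughly:
// await container.item(item.id, item.category).replace(item);
```

Note that `replace` overwrites the entire document, so the object you pass must contain every field you want to keep.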
-## <a id="DeleteItem"></a>Delete an item
+## Delete an item
-Azure Cosmos DB supports deleting JSON items. The following code shows how to get an item by its ID and delete it. Copy and paste the following code to *app.js* file:
+Azure Cosmos DB supports deleting JSON items. The following code shows how to get an item by its ID and delete it. Copy and paste the following code to the **try** block:
:::code language="javascript" source="~/cosmosdb-nodejs-get-started/app.js" id="DeleteItem":::
-## <a id="Run"></a>Run your Node.js application
+## Run your Node.js application
Altogether, your code should look like this:
In your terminal, locate your ```app.js``` file and run the command:
-```bash
+```bash
node app.js
```

You should see the output of your get started app. The output should match the example text below.
-```
+```bash
Created database: Tasks
Updated isComplete to true
Deleted item with id: 3
```
-## <a id="GetSolution"></a>Get the complete Node.js tutorial solution
+## Get the complete Node.js tutorial solution
If you didn't have time to complete the steps in this tutorial, or just want to download the code, you can get it from [GitHub](https://github.com/Azure-Samples/azure-cosmos-db-sql-api-nodejs-getting-started).
-To run the getting started solution that contains all the code in this article, you will need:
+To run the getting started solution that contains all the code in this article, you'll need:
* An [Azure Cosmos DB account][create-account].
* The [Getting Started](https://github.com/Azure-Samples/azure-cosmos-db-sql-api-nodejs-getting-started) solution available on GitHub.

Install the project's dependencies via npm. Use the following command:
-* ```npm install```
+* ```npm install```
-Next, in the ```config.js``` file, update the config.endpoint and config.key values as described in [Step 3: Set your app's configurations](#Config).
+Next, in the ```config.js``` file, update the config.endpoint and config.key values as described in [Step 3: Set your app's configurations](#set-your-apps-configurations).
Then in your terminal, locate your ```app.js``` file and run the command:
When these resources are no longer needed, you can delete the resource group, Az
## Next steps

Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
-* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+
+* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)

> [!div class="nextstepaction"]
> [Monitor an Azure Cosmos DB account](../monitor-cosmos-db.md)
-[create-account]: create-sql-api-dotnet.md#create-account
+[create-account]: create-sql-api-dotnet.md#create-account
cosmos-db Sql Query Lower https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-lower.md
# LOWER (Azure Cosmos DB)
[!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)]
- Returns a string expression after converting uppercase character data to lowercase.
+Returns a string expression after converting uppercase character data to lowercase.
-The LOWER system function does not utilize the index. If you plan to do frequent case insensitive comparisons, the LOWER system function may consume a significant amount of RU's. If this is the case, instead of using the LOWER system function to normalize data each time for comparisons, you can normalize the casing upon insertion. Then a query such as SELECT * FROM c WHERE LOWER(c.name) = 'bob' simply becomes SELECT * FROM c WHERE c.name = 'bob'.
+> [!NOTE]
+> This function uses culture-independent (invariant) casing rules when returning the converted string expression.
+
+The LOWER system function doesn't utilize the index. If you plan to do frequent case insensitive comparisons, the LOWER system function may consume a significant number of RUs. If so, instead of using the LOWER system function to normalize data each time for comparisons, you can normalize the casing upon insertion. Then a query such as SELECT * FROM c WHERE LOWER(c.name) = 'username' simply becomes SELECT * FROM c WHERE c.name = 'username'.
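For example, in a Node.js app you might apply the normalize-on-insert advice client-side before writing the item (a sketch; `normalizeForLookup` is a hypothetical helper, not part of any SDK):

```javascript
// Store a lowercased copy of the field you compare on, so queries can use an
// exact, index-served match (c.name = 'username') instead of LOWER(c.name).
function normalizeForLookup(item) {
  return { ...item, name: item.name.toLowerCase() };
}

const normalized = normalizeForLookup({ id: "1", name: "UserName" });
console.log(normalized.name); // "username"
```

The normalized item is what you pass to the insert call, so every stored value is already in the casing your queries expect.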
## Syntax
```sql
LOWER(<str_expr>)
```
## Return types
- Returns a string expression.
+Returns a string expression.
## Examples
- The following example shows how to use `LOWER` in a query.
+The following example shows how to use `LOWER` in a query.
```sql
SELECT LOWER("Abc") AS lower
```
- Here is the result set.
+ Here's the result set.
```json
-[{"lower": "abc"}]
-
+[{"lower": "abc"}]
```

## Remarks
-This system function will not [use indexes](../index-overview.md#index-usage).
+This system function won't [use indexes](../index-overview.md#index-usage).
## Next steps
cosmos-db Sql Query Upper https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-upper.md
# UPPER (Azure Cosmos DB)
[!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)]
- Returns a string expression after converting lowercase character data to uppercase.
+Returns a string expression after converting lowercase character data to uppercase.
-The UPPER system function does not utilize the index. If you plan to do frequent case insensitive comparisons, the UPPER system function may consume a significant amount of RU's. If this is the case, instead of using the UPPER system function to normalize data each time for comparisons, you can normalize the casing upon insertion. Then a query such as SELECT * FROM c WHERE UPPER(c.name) = 'BOB' simply becomes SELECT * FROM c WHERE c.name = 'BOB'.
+> [!NOTE]
+> This function uses culture-independent (invariant) casing rules when returning the converted string expression.
+
+The UPPER system function doesn't utilize the index. If you plan to do frequent case insensitive comparisons, the UPPER system function may consume a significant number of RUs. If so, instead of using the UPPER system function to normalize data each time for comparisons, you can normalize the casing upon insertion. Then a query such as SELECT * FROM c WHERE UPPER(c.name) = 'USERNAME' simply becomes SELECT * FROM c WHERE c.name = 'USERNAME'.
## Syntax
```sql
UPPER(<str_expr>)
```
## Return types
- Returns a string expression.
+Returns a string expression.
## Examples
- The following example shows how to use `UPPER` in a query
+The following example shows how to use `UPPER` in a query.
```sql
SELECT UPPER("Abc") AS upper
```
- Here is the result set.
+Here's the result set.
```json
-[{"upper": "ABC"}]
+[{"upper": "ABC"}]
```

## Remarks
-This system function will not [use indexes](../index-overview.md#index-usage).
+This system function won't [use indexes](../index-overview.md#index-usage).
## Next steps
cosmos-db Storage Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/storage-explorer.md
- Title: Manage Azure Cosmos DB resources by using Azure Storage Explorer
-description: Learn how to connect to Azure Cosmos DB and manage its resources by using Azure Storage Explorer.
---- Previously updated : 10/23/2020--
-# Manage Azure Cosmos DB resources by using Azure Storage Explorer
-
-You can use Azure Storage explorer to connect to Azure Cosmos DB. It lets you connect to Azure Cosmos DB accounts hosted on Azure and sovereign clouds from Windows, macOS, or Linux.
-
-Use the same tool to manage your different Azure entities in one place. You can manage Azure Cosmos DB entities, manipulate data, update stored procedures and triggers along with other Azure entities like storage blobs and queues. Azure Storage Explorer supports Cosmos accounts configured for SQL, MongoDB, Gremlin, and Table APIs.
-
-> [!NOTE]
-> The Azure Cosmos DB integration with Storage Explorer has been deprecated. Any existing functionality will not be removed for a minimum of one year from this release. You should use the [Azure Portal](https://portal.azure.com/), [Azure Portal desktop app](https://portal.azure.com/App/Download) or the standalone [Azure Cosmos DB Explorer](data-explorer.md) instead. The alternative options contain many new features that aren't currently supported in Storage Explorer.
-
-## Prerequisites
-
-A Cosmos account with a SQL API or an Azure Cosmos DB API for MongoDB. If you don't have an account, you can create one in the Azure portal. See [Azure Cosmos DB: Build a SQL API web app with .NET and the Azure portal](create-sql-api-dotnet.md) for more information.
-
-## Installation
-
-To install the newest Azure Storage Explorer bits, see [Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer/). We support Windows, Linux, and macOS versions.
-
-## Connect to an Azure subscription
-
-1. After you install **Azure Storage Explorer**, select the **plug-in** icon on the left pane.
-
- :::image type="content" source="./media/storage-explorer/plug-in-icon.png" alt-text="Screenshot showing the plug-in icon on the left pane.":::
-
-1. Select **Add an Azure Account**, and then select **Sign-in**.
-
- :::image type="content" source="./media/storage-explorer/connect-to-azure-subscription.png" alt-text="Screenshot of the Connect to Azure Storage window showing the Add an Azure Account radio button selected, and the Azure Environment drop-down menu.":::
-
-1. In the **Azure Sign-in** dialog box, select **Sign in**, and then enter your Azure credentials.
-
- :::image type="content" source="./media/storage-explorer/sign-in.png" alt-text="Screenshot of the Sign in window showing where to enter your credentials for your Azure subscription.":::
-
-1. Select your subscription from the list, and then select **Apply**.
-
- :::image type="content" source="./media/storage-explorer/apply-subscription.png" alt-text="Screenshot of the Account Management pane, showing a list of subscriptions and the Apply button.":::
-
- The Explorer pane updates and shows the accounts in the selected subscription.
-
- :::image type="content" source="./media/storage-explorer/account-list.png" alt-text="Screenshot of the Explorer pane, updated to show the accounts in the selected subscription.":::
-
- Your **Cosmos DB account** is connected to your Azure subscription.
-
-## Use a connection string to connect to Azure Cosmos DB
-
-You can use a connection string to connect to an Azure Cosmos DB. This method only supports SQL and Table APIs. Follow these steps to connect with a connection string:
-
-1. Find **Local and Attached** in the left tree, right-click **Cosmos DB Accounts**, and then select **Connect to Cosmos DB**.
-
- :::image type="content" source="./media/storage-explorer/connect-to-db-by-connection-string.png" alt-text="Screenshot showing the drop-down menu after you right-click, with Connect to Azure Cosmos D B highlighted.":::
-
-2. In the **Connect to Cosmos DB** window:
- 1. Select the API from the drop-down menu.
- 1. Paste your connection string in the **Connection string** box. For how to retrieve the primary connection string, see [Get the connection string](manage-with-powershell.md#list-keys).
- 1. Enter an **Account label**, and then select **Next** to check the summary.
- 1. Select **Connect** to connect the Azure Cosmos DB account.
-
- :::image type="content" source="./media/storage-explorer/connection-string.png" alt-text="Screenshot of the Connect to Cosmos D B window, showing the API drop-down menu, the Connection String box and the Account label box.":::
-
-> [!NOTE]
-> If the Azure Storage Explorer shows that the Azure Cosmos DB connection string is in an invalid format, make sure that the connection string has a semicolon (`;`) at the end. An example of a valid Azure Cosmos DB connection string would be: `AccountEndpoint=https://accountname.documents.azure.com:443;AccountKey=accountkey==;`
-
-## Use a local emulator to connect to Azure Cosmos DB
-
-Use the following steps to connect to an Azure Cosmos DB with an emulator. This method only supports SQL accounts.
-
-1. Install Cosmos DB Emulator, and then open it. For how to install the emulator, see
- [Cosmos DB Emulator](./local-emulator.md).
-
-1. Find **Local and Attached** in the left tree, right-click **Cosmos DB Accounts**, and then select **Connect to Cosmos DB Emulator**.
-
- :::image type="content" source="./media/storage-explorer/emulator-entry.png" alt-text="Screenshot showing the menu that displays after you right-click, with Connect to Azure Cosmos D B Emulator highlighted.":::
-
-1. In the **Connect to Cosmos DB** window:
- 1. Paste your connection string in the **Connection string** box. For information on retrieving the primary connection string, see [Get the connection string](manage-with-powershell.md#list-keys).
- 1. Enter an **Account label**, and then select **Next** to check the summary.
- 1. Select **Connect** to connect the Azure Cosmos DB account.
-
- :::image type="content" source="./media/storage-explorer/emulator-dialog.png" alt-text="Screenshot of the Connect to Cosmos D B window, showing the Connection String box and the Account label box.":::
-
-## Azure Cosmos DB resource management
-
-Use the following operations to manage an Azure Cosmos DB account:
-
-* Open the account in the Azure portal.
-* Add the resource to the Quick Access list.
-* Search and refresh resources.
-* Create and delete databases.
-* Create and delete collections.
-* Create, edit, delete, and filter documents.
-* Manage stored procedures, triggers, and user-defined functions.
-
-### Quick access tasks
-
-You can right-click a subscription on the Explorer pane to perform many quick action tasks, for example:
-
-* Right-click an Azure Cosmos DB account or database, and then select **Open in Portal** to manage the resource in the browser on the Azure portal.
-
- :::image type="content" source="./media/storage-explorer/open-in-portal.png" alt-text="Screenshot showing the menu that displays after you right-click, with Open in Portal highlighted.":::
-
-* Right-click an Azure Cosmos DB account, database, or collection, and then select **Add to Quick Access** to add it to the Quick Access menu.
-
-* Select **Search from Here** to enable keyword search under the selected path.
-
- :::image type="content" source="./media/storage-explorer/search-from-here.png" alt-text="Screenshot showing the search box highlighted.":::
-
-### Database and collection management
-
-#### Create a database
-
-1. Right-click the Azure Cosmos DB account, and then select **Create Database**.
-
- :::image type="content" source="./media/storage-explorer/create-database.png" alt-text="Screenshot showing the menu that displays after you right-click, with Create Database highlighted.":::
-
-1. Enter the database name, and then press **Enter** to complete.
-
-#### Delete a database
-
-1. Right-click the database, and then select **Delete Database**.
-
- :::image type="content" source="./media/storage-explorer/delete-database1.png" alt-text="Screenshot showing the menu that displays after you right-click, with Delete Database highlighted.":::
-
-1. Select **Yes** in the pop-up window. The database node is deleted, and the Azure Cosmos DB account refreshes automatically.
-
- :::image type="content" source="./media/storage-explorer/delete-database2.png" alt-text="Screenshot of the confirmation window with the Yes button highlighted.":::
-
-#### Create a collection
-
-1. Right-click your database, and then select **Create Collection**.
-
- :::image type="content" source="./media/storage-explorer/create-collection.png" alt-text="Screenshot showing the menu that displays after you right-click, with Create Collection highlighted.":::
-
-1. In the Create Collection window, enter the requested information, like **Collection ID** and **Storage capacity**, and so on. Select **OK** to finish.
-
- :::image type="content" source="./media/storage-explorer/create-collection2.png" alt-text="Screenshot of the Create Collection window, showing the Collection I D box and the Storage capacity buttons.":::
-
-1. Select **Unlimited** so you can specify a partition key, then select **OK** to finish.
-
- > [!NOTE]
- > If a partition key is used when you create a collection, once creation is completed, you can't change the partition key value on the collection.
-
- :::image type="content" source="./media/storage-explorer/partitionkey.png" alt-text="Screenshot of the Create Collection window, showing Unlimited selected for Storage Capacity, and the Partition key box highlighted.":::
-
-#### Delete a collection
-
-- Right-click the collection, select **Delete Collection**, and then select **Yes** in the pop-up window.
-
- The collection node is deleted, and the database refreshes automatically.
-
- :::image type="content" source="./media/storage-explorer/delete-collection.png" alt-text="Screenshot showing the menu that displays after you right-click, with Delete Collection highlighted.":::
-
-### Document management
-
-#### Create and modify documents
-
-- Open **Documents** on the left pane, select **New Document**, edit the contents on the right pane, and then select **Save**.
-- You can also update an existing document, and then select **Save**. To discard changes, select **Discard**.
-
- :::image type="content" source="./media/storage-explorer/document.png" alt-text="Screenshot showing Documents highlighted on the left pane. On the right pane, New Document, Save and Discard are highlighted.":::
-
-#### Delete a document
-
-* Select the **Delete** button to delete the selected document.
-
-#### Query for documents
-
-* To edit the document filter, enter a [SQL query](./sql-query-getting-started.md), and then select **Apply**.
-
- :::image type="content" source="./media/storage-explorer/document-filter.png" alt-text="Screenshot of the right pane, showing Filter and Apply buttons, the ID number, and the query box highlighted.":::
-
-### Graph management
-
-#### Create and modify a vertex
-
-* To create a new vertex, open **Graph** from the left pane, select **New Vertex**, edit the contents, and then select **OK**.
-* To modify an existing vertex, select the pen icon on the right pane.
-
- :::image type="content" source="./media/storage-explorer/vertex.png" alt-text="Screenshot showing Graph selected on the left pane, and showing New Vertex and the pen icon highlighted on the right pane.":::
-
-#### Delete a graph
-
-* To delete a vertex, select the recycle bin icon beside the vertex name.
-
-#### Filter for graph
-
-* To edit the graph filter, enter a [gremlin query](gremlin-support.md), and then select **Apply Filter**.
-
- :::image type="content" source="./media/storage-explorer/graph-filter.png" alt-text="Screenshot showing Graph selected on the left pane, and showing Apply Filter and the query box highlighted on the right pane.":::
-
-### Table management
-
-#### Create and modify a table
-
-* To create a new table:
- 1. On the left pane, open **Entities**, and then select **Add**.
- 1. In the **Add Entity** dialog box, edit the content.
- 1. Select the **Add Property** button to add a property.
- 1. Select **Insert**.
-
- :::image type="content" source="./media/storage-explorer/table.png" alt-text="Screenshot showing Entities highlighted on the left pane, and showing Add, Edit, Add Property, and Insert highlighted on the right pane.":::
-
-* To modify a table, select **Edit**, modify the content, and then select **Update**.
-
-
-
-#### Import and export table
-
-* To import, select the **Import** button, and then choose an existing table.
-* To export, select the **Export** button, and then choose a destination.
-
- :::image type="content" source="./media/storage-explorer/table-import-export.png" alt-text="Screenshot showing the Import and Export buttons highlighted on the right pane.":::
-
-#### Delete entities
-
-* Select the entities, and then select the **Delete** button.
-
- :::image type="content" source="./media/storage-explorer/table-delete.png" alt-text="Screenshot showing the Delete button highlighted on the right pane, and a confirmation pop-up window with Yes highlighted.":::
-
-#### Query a table
-
-- Select the **Query** button, input a query condition, and then select the **Execute Query** button. To close the query pane, select the **Close Query** button.
-
- :::image type="content" source="./media/storage-explorer/table-query.png" alt-text="Screenshot of the right pane, showing the Execute Query button and the Close Query button highlighted.":::
-
-### Manage stored procedures, triggers, and UDFs
-
-* To create a stored procedure:
- 1. In the left tree, right-click **Stored Procedures**, and then select **Create Stored Procedure**.
-
- :::image type="content" source="./media/storage-explorer/stored-procedure.png" alt-text="Screenshot of the left pane, showing the menu that displays after you right-click, with Create Stored Procedure highlighted.":::
-
- 1. Enter a name in the left, enter the stored procedure scripts on the right pane, and then select **Create**.
-
-* To edit an existing stored procedure, double-click the procedure, make the update, and then select **Update** to save. You can also select **Discard** to cancel the change.
-
-* The operations for **Triggers** and **UDF** are similar to **Stored Procedures**.
-
-## Troubleshooting
-
-The following are solutions to common issues that arise when you use Azure Cosmos DB in Storage Explorer.
-
-### Sign in issues
-
-First, restart your application to see if that fixes the problem. If the problem persists, continue troubleshooting.
-
-#### Self-signed certificate in certificate chain
-
-There are a few reasons you might be seeing this error, the two most common ones are:
-
-* You're behind a *transparent proxy*. Someone, like your IT department, intercepts HTTPS traffic, decrypts it, and then encrypts it by using a self-signed certificate.
-
-* You're running software, such as antivirus software. The software injects a self-signed TLS/SSL certificate into the HTTPS messages you receive.
-
-When Storage Explorer finds a self-signed certificate, it doesn't know if the HTTPS message it receives is tampered with. If you have a copy of the self-signed certificate, you can tell Storage Explorer to trust it. If you're unsure of who injected the certificate, then you can follow these steps to try to find out:
-
-1. Install OpenSSL:
-
- - [Windows](https://slproweb.com/products/Win32OpenSSL.html): Any of the light versions are OK.
- - macOS and Linux: Should be included with your operating system.
-
-1. Run OpenSSL:
- * Windows: Go to the install directory, then **/bin/**, then double-click **openssl.exe**.
- * Mac and Linux: Execute **openssl** from a terminal.
-1. Execute `s_client -showcerts -connect microsoft.com:443`.
-1. Look for self-signed certificates. If you're unsure, which are self-signed, then look for anywhere that the subject ("s:") and issuer ("i:") are the same.
-1. If you find any self-signed certificates, copy and paste everything from and including **--BEGIN CERTIFICATE--** to **--END CERTIFICATE--** to a new .CER file for each one.
-1. Open Storage Explorer, and then go to **Edit** > **SSL Certificates** > **Import Certificates**. Use the file picker to find, select, and then open the .CER files you created.
-
-If you don't find any self-signed certificates, you can send feedback for more help.
-
-#### Unable to retrieve subscriptions
-
-If you're unable to retrieve your subscriptions after you sign in, try these suggestions:
-
-* Verify that your account has access to the subscriptions. To do this, sign in to the [Azure portal](https://portal.azure.com/).
-* Make sure you signed in to the correct environment:
- * [Azure](https://portal.azure.com/)
- * [Azure China](https://portal.azure.cn/)
- * [Azure Germany](https://portal.microsoftazure.de/)
- * [Azure US Government](https://portal.azure.us/)
- * Custom Environment/Azure Stack
-* If you're behind a proxy, make sure that the Storage Explorer proxy is properly configured.
-* Remove the account, and then add it again.
-* Delete the following files from your home directory (such as: C:\Users\ContosoUser), and then add the account again:
- * .adalcache
- * .devaccounts
- * .extaccounts
-* Press the F12 key to open the developer console. Watch the console for any error messages when you sign in.
-
- :::image type="content" source="./media/storage-explorer/console.png" alt-text="Screenshot of the developer tools console, showing Console highlighted.":::
-
-#### Unable to see the authentication page
-
-If you're unable to see the authentication page:
-
-* Depending on the speed of your connection, it might take a while for the sign-in page to load. Wait at least one minute before you close the authentication dialog box.
-* If you're behind a proxy, make sure that the Storage Explorer proxy is properly configured.
-* On the developer tools console (F12), watch the responses to see if you can find any clue for why authentication isn't working.
-
-#### Can't remove an account
-
-If you're unable to remove an account, or if the reauthenticate link doesn't do anything:
-
-* Delete the following files from your home directory, and then add the account again:
- * .adalcache
- * .devaccounts
- * .extaccounts
-
-* If you want to remove SAS attached Storage resources, delete:
- * %AppData%/StorageExplorer folder for Windows
- * /Users/<your_name>/Library/Application SUpport/StorageExplorer for macOS
- * ~/.config/StorageExplorer for Linux
-
- > [!NOTE]
- > If you delete these files, **you must reenter all your credentials**.
-
-### HTTP/HTTPS proxy issue
-
-You can't list Azure Cosmos DB nodes in the left tree when you configure an HTTP/HTTPS proxy in ASE. You can use Azure Cosmos DB data explorer in the Azure portal as a work-around.
-
-### "Development" node under "Local and Attached" node issue
-
-There's no response after you select the **Development** node under the **Local and Attached** node in the left tree. The behavior is expected.
--
-### Attach an Azure Cosmos DB account in the **Local and Attached** node error
-
-If you see the following error after you attach an Azure Cosmos DB account in **Local and Attached** node, then make sure you're using the correct connection string.
--
-### Expand Azure Cosmos DB node error
-
-You might see the following error when you try to expand nodes in the left tree.
--
-Try these suggestions:
-
-* Check if the Azure Cosmos DB account is in provision progress. Try again when the account is being created successfully.
-* If the account is under the **Quick Access** or **Local and Attached** nodes, check if the account is deleted. If so, you need to manually remove the node.
-
-## Next steps
-
-* Watch this video to see how to use Azure Cosmos DB in Azure Storage Explorer: [Use Azure Cosmos DB in Azure Storage Explorer](https://www.youtube.com/watch?v=iNIbg1DLgWo&feature=youtu.be).
-* Learn more about Storage Explorer and connect more services in [Get started with Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md).
cost-management-billing Ea Portal Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/ea-portal-troubleshoot.md
The first work or school account added to the enrollment determines the _default
To update the Authentication Level:
-1. Sign in to the Azure EA portal as an Enterprise Administrator.
+1. Sign in to the Azure [EA portal](https://ea.azure.com/) as an Enterprise Administrator.
2. Click **Manage** on the left navigation panel.
3. Click the **Enrollment** tab.
4. Under **Enrollment Details**, select **Auth Level**.
data-factory Data Transformation Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-transformation-functions.md
Last updated 02/02/2022
[!INCLUDE[data-flow-preamble](includes/data-flow-preamble.md)]
-Data transformation expressions in Azure Data Factory and Azure Synapse Analytics allow you to transform expressions in many ways, and are a powerful tool enabling you customize the behavior of your pipelines in almost every setting and property - anywhere you find a text field that shows the **Add dynamic content** or **Open expression builder** links within your pipeline.
+Data transformation expressions in Azure Data Factory and Azure Synapse Analytics allow you to transform expressions in many ways, and are a powerful tool enabling you to customize the behavior of your pipelines in almost every setting and property - anywhere you find a text field that shows the **Add dynamic content** or **Open expression builder** links within your pipeline.
## Transformation expression function list
data-lake-store Data Lake Store Service To Service Authenticate Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-service-to-service-authenticate-java.md
In this article, you learn about how to use the Java SDK to do service-to-servic
Replace **FILL-IN-HERE** with the actual values for the Azure Active Directory Web application. ```java
- private static String clienttId = "FILL-IN-HERE";
+ private static String clientId = "FILL-IN-HERE";
private static String tenantId = "FILL-IN-HERE"; private static String clientSecret = "FILL-IN-HERE";
databox-online Azure Stack Edge Gpu Deploy Virtual Machine Reset Password Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-reset-password-portal.md
+
+ Title: Reset the password on VMs for your Azure Stack Edge Pro GPU device via the Azure portal
+description: Describes how to reset the password on virtual machines (VMs) on an Azure Stack Edge Pro GPU device via the Azure portal.
++++++ Last updated : 04/29/2022+
+#Customer intent: As an IT admin, I need to understand how to reset or change the password on virtual machines (VMs) on my Azure Stack Edge Pro GPU device via the Azure portal.
+
+# Reset VM password for your Azure Stack Edge Pro GPU device via the Azure portal
++
+This article covers steps to reset the password on both Windows and Linux VMs using the Azure portal. To reset a password using PowerShell and local Azure Resource Manager templates, see [Install the VM password reset extension](azure-stack-edge-gpu-deploy-virtual-machine-install-password-reset-extension.md).
+
+## Reset Windows VM password
+
+Use the following steps to reset the VM password for your Azure Stack Edge Pro GPU device:
+
+1. In the Azure portal, go to the Azure Stack Edge resource for your device, then go to **Edge services** > **Virtual machines**.
+
+ ![Screenshot of the Azure portal showing your Azure Stack Edge resource for your device, and how to navigate to your Windows V M.](media/azure-stack-edge-gpu-deploy-virtual-machine-reset-password-portal/portal-navigate-to-vms.png)
+
+1. From the Azure portal VM list view, select the VM name with the password you would like to reset.
+
+ ![Screenshot of the Azure portal Windows V M list view.](media/azure-stack-edge-gpu-deploy-virtual-machine-reset-password-portal/portal-vm-list-view-windows.png)
+
+1. Select **Reset password**.
+
+ ![Screenshot of the Azure portal Windows V M change password tab.](media/azure-stack-edge-gpu-deploy-virtual-machine-reset-password-portal/my-windows-vm-change-password-tab.png)
+
+1. Specify the username and the new password. Confirm the new password, and then select **Save**.
+
+ For more information about Windows VM password requirements, see [Password requirements for a Windows VM](/azure/virtual-machines/windows/faq#what-are-the-password-requirements-when-creating-a-vm-).
+
+ ![Screenshot of the Azure portal Windows V M change password control.](media/azure-stack-edge-gpu-deploy-virtual-machine-reset-password-portal/my-windows-vm-specify-new-password.png)
+
+1. While the operation is in progress, you can view the notification that shows the status of the operation. Select **Refresh** to update the status of the operation.
+
+ ![Screenshot of the Azure portal Windows V M change password progress.](media/azure-stack-edge-gpu-deploy-virtual-machine-reset-password-portal/my-windows-vm-change-password-progress.png)
+
+1. When the operation is complete, you can see that the *windowsVMAccessExt* extension is installed for the VM.
+
+ ![Screenshot of the Azure portal Windows V M change password confirmation.](media/azure-stack-edge-gpu-deploy-virtual-machine-reset-password-portal/my-windows-vm-change-password-success.png)
+
+1. Connect to the VM with the new password.
+
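The linked FAQ defines the Windows VM password rules (roughly: 12 to 123 characters, with at least three of the four classes lowercase, uppercase, digit, and special character). A rough local pre-check can be sketched as follows; the rule values here restate the linked FAQ and the authoritative source remains that page, not this snippet:

```python
import string

def meets_windows_vm_password_rules(pw: str) -> bool:
    """Approximate check of the Azure Windows VM password rules:
    12-123 characters and at least 3 of 4 character classes."""
    if not 12 <= len(pw) <= 123:
        return False
    classes = [
        any(c.islower() for c in pw),   # lowercase letter
        any(c.isupper() for c in pw),   # uppercase letter
        any(c.isdigit() for c in pw),   # digit
        any(c in string.punctuation for c in pw),  # special character
    ]
    return sum(classes) >= 3

print(meets_windows_vm_password_rules("Sup3rSecret!pw"))  # True
print(meets_windows_vm_password_rules("short1!"))         # False
```

Validating the new password before submitting the **Reset password** form avoids a round trip when the portal rejects it.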
+## Reset Linux VM password
+
+Use the following steps to reset the VM password for your Azure Stack Edge Pro GPU device:
+
+1. In the Azure portal, go to the Azure Stack Edge resource for your device, then go to **Edge services** > **Virtual machines**.
+
+ ![Screenshot of the Azure portal, Azure Stack Edge resource for your device, and how to navigate to your Windows V M.](media/azure-stack-edge-gpu-deploy-virtual-machine-reset-password-portal/portal-navigate-to-vms.png)
+
+1. From the Azure portal VM list view, select the VM name with the password you would like to reset.
+
+ ![Screenshot of the Azure portal Linux V M list view.](media/azure-stack-edge-gpu-deploy-virtual-machine-reset-password-portal/portal-vm-list-view-linux.png)
+
+1. Select **Reset password**.
+
+ ![Screenshot of the Azure portal Linux V M change password tab.](media/azure-stack-edge-gpu-deploy-virtual-machine-reset-password-portal/my-linux-vm-change-password-tab.png)
+
+1. Specify the username and the new password. Confirm the new password, and then select **Save**.
+
+ For more information about Linux VM password requirements, see [Password requirements for a Linux VM](/azure/virtual-machines/linux/faq#what-are-the-password-requirements-when-creating-a-vm-).
+
+ ![Screenshot of the Azure portal Linux V M change password control.](media/azure-stack-edge-gpu-deploy-virtual-machine-reset-password-portal/my-linux-vm-specify-new-password.png)
+
+1. While the operation is in progress, you can view the notification that shows the status of the operation. Select **Refresh** to update the status of the operation.
+
+ ![Screenshot of the Azure portal Linux V M change password progress.](media/azure-stack-edge-gpu-deploy-virtual-machine-reset-password-portal/my-linux-vm-change-password-progress.png)
+
+1. When the operation is complete, you can see that the *linuxVMAccessExt* extension is installed for the VM.
+
+ ![Screenshot of the Azure portal Linux V M change password confirmation.](media/azure-stack-edge-gpu-deploy-virtual-machine-reset-password-portal/my-linux-vm-change-password-success.png)
+
+1. Connect to the VM with the new password.
+
+## Next steps
+
+ - Learn about [Deploy VMs on your Azure Stack Edge Pro GPU device via the Azure portal](azure-stack-edge-gpu-deploy-virtual-machine-portal.md).
+
+ - Learn about [Install the password reset extension on VMs for your Azure Stack Edge Pro GPU device](azure-stack-edge-gpu-deploy-virtual-machine-install-password-reset-extension.md).
databox-online Azure Stack Edge Gpu Virtual Machine Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-virtual-machine-overview.md
Previously updated : 07/09/2021 Last updated : 04/21/2022
This article provides a brief overview of virtual machines (VMs) running on your
Azure Stack Edge solution provides purpose-built hardware-as-a-service devices from Microsoft that can be used to deploy edge computing workloads and get quick actionable insights at the edge where the data is generated.
-Depending on your environment and the type of applications you are running, you can deploy one of the following edge computing workloads on these devices:
+Depending on your environment and the type of applications you're running, you can deploy one of the following edge computing workloads on these devices:
- **Containerized** - Use IoT Edge or Kubernetes to run your containerized applications. - **Non-containerized** - Deploy both Windows and Linux virtual machines on your devices to run non-containerized applications.
Before you begin, review the following considerations about your VM:
### VM size
-You need to be aware of VM sizes if you are planning to deploy VMs. There are multiple sizes available for the VMs that you can use to run apps and workloads on your device. The size that you choose then determines factors such as processing power, memory, and storage capacity. For more information, see [Supported VM sizes](azure-stack-edge-gpu-virtual-machine-sizes.md#supported-vm-sizes).
+You need to be aware of VM sizes if you're planning to deploy VMs. There are multiple sizes available for the VMs that you can use to run apps and workloads on your device. The size that you choose then determines factors such as processing power, memory, and storage capacity. For more information, see [Supported VM sizes](azure-stack-edge-gpu-virtual-machine-sizes.md#supported-vm-sizes).
-To figure out the size and the number of VMs that you can deploy on your device, factor in the usable compute on your device and other workloads that you are running. If running Kubernetes, consider the compute requirements for the Kubernetes master and worker VMs as well.
+To figure out the size and the number of VMs that you can deploy on your device, factor in the usable compute on your device and other workloads that you're running. If running Kubernetes, consider the compute requirements for the Kubernetes master and worker VMs as well.
|Kubernetes VM type|CPU and memory requirement| |||
The images that you use to create VM images can be generalized or specialized. W
### Extensions
-Custom script extensions are available for the VMs on your device that help configure workloads by running your script when the VM is provisioned.
+The following extensions are available for the VMs on your device.
-For more information, see [Deploy Custom Script Extension on VMs running on your device](azure-stack-edge-gpu-deploy-virtual-machine-custom-script-extension.md)
-
-You can also use GPU extensions for your VM if you want to install GPU drivers when the GPU VMs are provisioned. For more information, see [Create GPU VMs](azure-stack-edge-gpu-deploy-gpu-virtual-machine.md#create-gpu-vms) and [Install GPU extensions](azure-stack-edge-gpu-deploy-virtual-machine-install-gpu-extension.md).
+|Extension|Description|Learn more|
+||||
+|Custom script extensions|Use custom script extensions to configure workloads.|[Deploy Custom Script Extension on VMs running on your device](azure-stack-edge-gpu-deploy-virtual-machine-custom-script-extension.md)|
+|GPU extensions |Use GPU extensions to install GPU drivers.|[Create GPU VMs](azure-stack-edge-gpu-deploy-gpu-virtual-machine.md#create-gpu-vms) and [Install GPU extensions](azure-stack-edge-gpu-deploy-virtual-machine-install-gpu-extension.md)|
+|Reset VM password extensions|Reset a VM password using PowerShell.|[Install the VM password reset extension](azure-stack-edge-gpu-deploy-virtual-machine-install-password-reset-extension.md)|
## Create a VM
You can manage the VMs on your device via the Azure portal, via the PowerShell i
To get more information about your VM via the Azure portal, follow these steps: 1. Go to Azure Stack Edge resource for your device and then go to **Virtual machines > Overview**.
-1. In the **Overview** page, go to **Virtual machines** and select the virtual machine that you are interested in. You can then view the details of the VM.
+1. In the **Overview** page, go to **Virtual machines** and select the virtual machine that you're interested in. You can then view the details of the VM.
### Connect to your VM
Depending on the OS that your VM runs, you can connect to the VM as follows:
### Start, stop, delete VMs
-You can [turn on the VM](azure-stack-edge-gpu-deploy-virtual-machine-powershell.md#turn-on-the-vm), [suspend or shut down the VM](azure-stack-edge-gpu-deploy-virtual-machine-powershell.md#suspend-or-shut-down-the-vm). Finally, you can [delete the VMs](azure-stack-edge-gpu-deploy-virtual-machine-powershell.md#delete-the-vm) after you are done using them.
+You can [turn on the VM](azure-stack-edge-gpu-deploy-virtual-machine-powershell.md#turn-on-the-vm), [suspend or shut down the VM](azure-stack-edge-gpu-deploy-virtual-machine-powershell.md#suspend-or-shut-down-the-vm). Finally, you can [delete the VMs](azure-stack-edge-gpu-deploy-virtual-machine-powershell.md#delete-the-vm) after you're done using them.
### Manage network interfaces, virtual switches
defender-for-cloud Adaptive Application Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/adaptive-application-controls.md
Last updated 11/09/2021
# Use adaptive application controls to reduce your machines' attack surfaces + Learn about the benefits of Microsoft Defender for Cloud's adaptive application controls and how you can enhance your security with this data-driven, intelligent feature.
defender-for-cloud Adaptive Network Hardening https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/adaptive-network-hardening.md
Last updated 11/09/2021
# Improve your network security posture with adaptive network hardening - Adaptive network hardening is an agentless feature of Microsoft Defender for Cloud - nothing needs to be installed on your machines to benefit from this network hardening tool. This page explains how to configure and manage adaptive network hardening in Defender for Cloud.
defender-for-cloud Alert Validation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alert-validation.md
Last updated 12/12/2021
# Alert validation in Microsoft Defender for Cloud - This document helps you learn how to verify if your system is properly configured for Microsoft Defender for Cloud alerts. ## What are security alerts?
defender-for-cloud Alerts Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-overview.md
Last updated 11/09/2021
# Security alerts and incidents in Microsoft Defender for Cloud - Defender for Cloud generates alerts for resources deployed on your Azure, on-premises, and hybrid cloud environments. Security alerts are triggered by advanced detections and are available only with enhanced security features enabled. You can upgrade from the **Environment settings** page, as described in [Quickstart: Enable enhanced security features](enable-enhanced-security.md). A free 30-day trial is available. For pricing details in your currency of choice and according to your region, see the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
defender-for-cloud Alerts Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-reference.md
Last updated 03/30/2022
# Security alerts - a reference guide - This article lists the security alerts you might get from Microsoft Defender for Cloud and any Microsoft Defender plans you've enabled. The alerts shown in your environment depend on the resources and services you're protecting, as well as your customized configuration. At the bottom of this page, there's a table describing the Microsoft Defender for Cloud kill chain aligned with version 9 of the [MITRE ATT&CK matrix](https://attack.mitre.org/versions/v9/).
Microsoft Defender for Containers provides security alerts on the cluster level
| Alert (alert type) | Description | MITRE tactics<br>([Learn more](#intentions)) | Severity | |--|--|:-:|--|
+| **Attempt to create a new Linux namespace from a container detected (Preview)**<br>(K8S.NODE_NamespaceCreation) | Analysis of processes running within a container in Kubernetes cluster detected an attempt to create a new Linux namespace. While this behavior might be legitimate, it might indicate that an attacker tries to escape from the container to the node. Some CVE-2022-0185 exploitations use this technique. | PrivilegeEscalation | Medium |
| **A file was downloaded and executed (Preview)**<br>(K8S.NODE_LinuxSuspiciousActivity) | Analysis of processes running within a container indicates that a file has been downloaded to the container, given execution privileges and then executed. | Execution | Medium | | **A history file has been cleared (Preview)**<br>(K8S.NODE_HistoryFileCleared) | Analysis of processes running within a container indicates that the command history log file has been cleared. Attackers may do this to cover their tracks. The operation was performed by the specified user account. | DefenseEvasion | Medium | | **Abnormal activity of managed identity associated with Kubernetes (Preview)**<br>(K8S_AbnormalMiAcitivty) | Analysis of Azure Resource Manager operations detected an abnormal behavior of a managed identity used by an AKS addon. The detected activity isn\'t consistent with the behavior of the associated addon. While this activity can be legitimate, such behavior might indicate that the identity was gained by an attacker, possibly from a compromised container in the Kubernetes cluster. | Lateral Movement | Medium |
defender-for-cloud Alerts Schemas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-schemas.md
Last updated 11/09/2021
# Security alerts schemas - If your subscription has enhanced security features enabled, you'll receive security alerts when Defender for Cloud detects threats to their resources. You can view these security alerts in Microsoft Defender for Cloud's pages - [overview dashboard](overview-page.md), [alerts](tutorial-security-incident.md), [resource health pages](investigate-resource-health.md), or [workload protections dashboard](workload-protections-dashboard.md) - and through external tools such as:
defender-for-cloud Alerts Suppression Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-suppression-rules.md
# Suppress alerts from Microsoft Defender for Cloud - This page explains how you can use alerts suppression rules to suppress false positives or other unwanted security alerts from Defender for Cloud. ## Availability
defender-for-cloud Apply Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/apply-security-baseline.md
Last updated 11/09/2021
# Apply Azure security baselines to machines - To reduce a machine's attack surface and avoid known risks, it's important to configure the operating system (OS) as securely as possible. The Azure Security Benchmark has guidance for OS hardening which has led to security baseline documents for [Windows](../governance/policy/samples/guest-configuration-baseline-windows.md) and [Linux](../governance/policy/samples/guest-configuration-baseline-linux.md).
defender-for-cloud Asset Inventory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/asset-inventory.md
# Use asset inventory to manage your resources' security posture - The asset inventory page of Microsoft Defender for Cloud provides a single page for viewing the security posture of the resources you've connected to Microsoft Defender for Cloud. Defender for Cloud periodically analyzes the security state of resources connected to your subscriptions to identify potential security vulnerabilities. It then provides you with recommendations on how to remediate those vulnerabilities.
defender-for-cloud Auto Deploy Vulnerability Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/auto-deploy-vulnerability-assessment.md
Last updated 11/09/2021
# Automatically configure vulnerability assessment for your machines - Defender for Cloud collects data from your machines using agents and extensions. Those agents and extensions *can* be installed manually (see [Manual installation of the Log Analytics agent](enable-data-collection.md#manual-agent)). However, **auto provisioning** reduces management overhead by installing all required agents and extensions on existing - and new - machines to ensure faster security coverage for all supported resources. Learn more in [Configure auto provisioning for agents and extensions from Microsoft Defender for Cloud](enable-data-collection.md). To assess your machines for vulnerabilities, you can use one of the following solutions:
defender-for-cloud Configure Email Notifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/configure-email-notifications.md
# Configure email notifications for security alerts - Security alerts need to reach the right people in your organization. By default, Microsoft Defender for Cloud emails subscription owners whenever a high-severity alert is triggered for their subscription. This page explains how to customize these notifications. Use Defender for Cloud's **Email notifications** settings page to define preferences for notification emails including:
defender-for-cloud Continuous Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/continuous-export.md
Last updated 12/09/2021
# Continuously export Microsoft Defender for Cloud data - Microsoft Defender for Cloud generates detailed security alerts and recommendations. You can view them in the portal or through programmatic tools. You might also need to export some or all of this information for tracking with other monitoring tools in your environment. You fully customize *what* will be exported, and *where* it will go with **continuous export**. For example, you can configure it so that:
defender-for-cloud Cross Tenant Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/cross-tenant-management.md
Last updated 11/09/2021
# Cross-tenant management in Defender for Cloud - Cross-tenant management enables you to view and manage the security posture of multiple tenants in Defender for Cloud by leveraging [Azure Lighthouse](../lighthouse/overview.md). Manage multiple tenants efficiently, from a single view, without having to sign in to each tenant's directory. - Service providers can manage the security posture of resources, for multiple customers, from within their own tenant.
defender-for-cloud Custom Dashboards Azure Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/custom-dashboards-azure-workbooks.md
Last updated 01/23/2022
# Create rich, interactive reports of Defender for Cloud data - [Azure Monitor Workbooks](../azure-monitor/visualize/workbooks-overview.md) provide a flexible canvas for data analysis and the creation of rich visual reports within the Azure portal. They allow you to tap into multiple data sources from across Azure, and combine them into unified interactive experiences. Workbooks provide a rich set of capabilities for visualizing your Azure data. For detailed examples of each visualization type, see the [visualizations examples and documentation](../azure-monitor/visualize/workbooks-text-visualizations.md).
defender-for-cloud Custom Security Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/custom-security-policies.md
zone_pivot_groups: manage-asc-initiatives
# Create custom security initiatives and policies - To help secure your systems and environment, Microsoft Defender for Cloud generates security recommendations. These recommendations are based on industry best practices, which are incorporated into the generic, default security policy supplied to all customers. They can also come from Defender for Cloud's knowledge of industry and regulatory standards. With this feature, you can add your own *custom* initiatives. Although custom initiatives are not included in the secure score, you'll receive recommendations if your environment doesn't follow the policies you create. Any custom initiatives you create are shown in the list of all recommendations and you can filter by initiative to see the recommendations for your initiative. They are also shown with the built-in initiatives in the regulatory compliance dashboard, as described in the tutorial [Improve your regulatory compliance](regulatory-compliance-dashboard.md).
defender-for-cloud Data Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/data-security.md
Last updated 11/09/2021
# Microsoft Defender for Cloud data security - To help customers prevent, detect, and respond to threats, Microsoft Defender for Cloud collects and processes security-related data, including configuration information, metadata, event logs, and more. Microsoft adheres to strict compliance and security guidelines, from coding to operating a service. This article explains how data is managed and safeguarded in Defender for Cloud.
defender-for-cloud Defender For App Service Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-app-service-introduction.md
# Protect your web apps and APIs - ## Prerequisites Defender for Cloud is natively integrated with App Service, eliminating the need for deployment and onboarding - the integration is transparent.
defender-for-cloud Defender For Cloud Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-cloud-introduction.md
Last updated 03/31/2022
# What is Microsoft Defender for Cloud? - Defender for Cloud is a tool for security posture management and threat protection. It strengthens the security posture of your cloud resources, and with its integrated Microsoft Defender plans, Defender for Cloud protects workloads running in Azure, hybrid, and other cloud platforms. Defender for Cloud provides the tools needed to harden your resources, track your security posture, protect against cyber attacks, and streamline security management. Because it's natively integrated, deployment of Defender for Cloud is easy, providing you with simple auto provisioning to secure your resources by default.
defender-for-cloud Defender For Container Registries Cicd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-container-registries-cicd.md
# Identify vulnerable container images in your CI/CD workflows - This page explains how to scan your Azure Container Registry-based container images with the integrated vulnerability scanner when they're built as part of your GitHub workflows. To set up the scanner, you'll need to enable **Microsoft Defender for container registries** and the CI/CD integration. When your CI/CD workflows push images to your registries, you can view registry scan results and a summary of CI/CD scan results.
defender-for-cloud Defender For Container Registries Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-container-registries-introduction.md
# Introduction to Microsoft Defender for container registries (deprecated) - Azure Container Registry (ACR) is a managed, private Docker registry service that stores and manages your container images for Azure deployments in a central registry. It's based on the open-source Docker Registry 2.0. To protect the Azure Resource Manager based registries in your subscription, enable **Microsoft Defender for container registries** at the subscription level. Defender for Cloud will then scan all images when they're pushed to the registry, imported into the registry, or pulled within the last 30 days. You'll be charged for every image that gets scanned - once per image.
defender-for-cloud Defender For Container Registries Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-container-registries-usage.md
Title: How to use Defender for Containers
-description: Learn how to use Defender for Containers to scan Linux images in your Linux-hosted registries
Previously updated : 04/07/2022
+ Title: How to use Defender for Containers to identify vulnerabilities
+description: Learn how to use Defender for Containers to scan images in your registries
++ Last updated : 04/28/2022 -- # Use Defender for Containers to scan your ACR images for vulnerabilities - This page explains how to use the built-in vulnerability scanner to scan the container images stored in your Azure Resource Manager-based Azure Container Registry.
-When **Defender for Containers** is enabled, any image you push to your registry will be scanned immediately. In addition, any image pulled within the last 30 days is also scanned.
- When the scanner, powered by Qualys, reports vulnerabilities to Defender for Cloud, Defender for Cloud presents the findings and related information as recommendations. In addition, the findings include related information such as remediation steps, relevant CVEs, CVSS scores, and more. You can view the identified vulnerabilities for one or more subscriptions, or for a specific registry. > [!TIP] > You can also scan container images for vulnerabilities as the images are built in your CI/CD GitHub workflows. Learn more in [Identify vulnerable container images in your CI/CD workflows](defender-for-container-registries-cicd.md).
+There are four triggers for an image scan:
+
+- **On push** - Whenever an image is pushed to your registry, Defender for Containers automatically scans that image. To trigger the scan of an image, push it to your repository.
+
+- **Recently pulled** - Since new vulnerabilities are discovered every day, **Microsoft Defender for Containers** also scans, on a weekly basis, any image that has been pulled within the last 30 days. There's no extra charge for these rescans; as mentioned above, you're billed once per image.
+
+- **On import** - Azure Container Registry has import tools to bring images to your registry from Docker Hub, Microsoft Container Registry, or another Azure container registry. **Microsoft Defender for Containers** scans any supported images you import. Learn more in [Import container images to a container registry](../container-registry/container-registry-import-images.md).
+
+- **Continuous scan** - This trigger has two modes:
+
+ - A continuous scan based on an image pull. This scan is performed every seven days after an image was pulled, and only for 30 days after the image was pulled. This mode doesn't require the security profile or extension.
+
+ - (Preview) Continuous scan for running images. This scan is performed every seven days for as long as the image runs. This mode runs instead of the above mode when the Defender profile or extension is running on the cluster.
+
+This scan typically completes within 2 minutes, but it might take up to 40 minutes. For every vulnerability identified, Defender for Cloud provides actionable recommendations, along with a severity classification, and guidance for how to remediate the issue.
+
+Defender for Cloud filters and classifies findings from the scanner. When an image is healthy, Defender for Cloud marks it as such. Defender for Cloud generates security recommendations only for images that have issues to be resolved. By notifying only when there are problems, Defender for Cloud reduces the potential for unwanted informational alerts.
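The pull-based continuous-scan cadence described above (a rescan every seven days, for up to 30 days after the pull) can be sketched as a simple schedule calculation. The function name and dates below are illustrative only, not part of the service:

```python
from datetime import date, timedelta

def pull_based_scan_dates(pull_date: date) -> list[date]:
    """Rescan every 7 days after a pull, for up to 30 days after the pull,
    per the pull-based continuous-scan trigger described above."""
    return [
        pull_date + timedelta(days=offset)
        for offset in range(7, 31, 7)  # days 7, 14, 21, 28 after the pull
    ]

dates = pull_based_scan_dates(date(2022, 4, 1))
print(dates)  # four rescans: Apr 8, 15, 22, 29
```

After day 30 no further pull-based rescans occur unless the image is pulled again, which restarts the window.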
+ ## Identify vulnerabilities in images in Azure container registries To enable vulnerability scans of images stored in your Azure Resource Manager-based Azure Container Registry:
To enable vulnerability scans of images stored in your Azure Resource Manager-ba
>[!NOTE] > This feature is charged per image.
-1. Image scans are triggered on every push or import, and if the image has been pulled within the last 30 days.
-
- When the scan completes (typically after approximately 2 minutes, but can be up to 15 minutes), findings are available as Defender for Cloud recommendations.
+ When a scan is triggered, findings are available as Defender for Cloud recommendations from 2 minutes up to 15 minutes after the scan is complete.
1. [View and remediate findings as explained below](#view-and-remediate-findings).
Yes. If you have an organizational need to ignore a finding, rather than remedia
[Learn about creating rules to disable findings from the integrated vulnerability assessment tool](defender-for-container-registries-usage.md#disable-specific-findings). ### Why is Defender for Cloud alerting me to vulnerabilities about an image that isn't in my registry?
-Defender for Cloud provides vulnerability assessments for every image pushed or pulled in a registry. Some images may reuse tags from an image that was already scanned. For example, you may reassign the tag "Latest" every time you add an image to a digest. In such cases, the 'old' image does still exist in the registry and may still be pulled by its digest. If the image has security findings and is pulled, it'll expose security vulnerabilities.
+Some images may reuse tags from an image that was already scanned. For example, you may reassign the tag "Latest" every time you add an image to a digest. In such cases, the 'old' image does still exist in the registry and may still be pulled by its digest. If the image has security findings and is pulled, it'll expose security vulnerabilities.
## Next steps
defender-for-cloud Defender For Containers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-introduction.md
Last updated 04/28/2022
# Overview of Microsoft Defender for Containers - Microsoft Defender for Containers is the cloud-native solution for securing your containers. On this page, you'll learn how you can use Defender for Containers to improve, monitor, and maintain the security of your clusters, containers, and their applications.
For example, you can mandate that privileged containers shouldn't be created, an
Learn more in [Kubernetes data plane hardening](kubernetes-workload-protections.md). --- ## Vulnerability assessment ### Scanning images in ACR registries
-Defender for Containers includes an integrated vulnerability scanner for scanning images in Azure Container Registry registries.
-
-There are four triggers for an image scan:
--- **On push** - Whenever an image is pushed to your registry, Defender for Containers automatically scans that image. To trigger the scan of an image, push it to your repository.
+Defender for Containers includes an integrated vulnerability scanner for scanning images in Azure Container Registry registries. The vulnerability scanner runs on an image:
-- **Recently pulled** - Since new vulnerabilities are discovered every day, **Microsoft Defender for Containers** also scans, on a weekly basis, any image that has been pulled within the last 30 days. There's no extra charge for these rescans; as mentioned above, you're billed once per image.
+- When you push the image to your registry
+- Weekly on any image that was pulled within the last 30 days
+- When you import the image to your Azure Container Registry
+- Continuously in specific situations
-- **On import** - Azure Container Registry has import tools to bring images to your registry from Docker Hub, Microsoft Container Registry, or another Azure container registry. **Microsoft Defender for Containers** scans any supported images you import. Learn more in [Import container images to a container registry](../container-registry/container-registry-import-images.md).
-
-- **Continuous scan** - This trigger has two modes:
-
- - A continuous scan based on an image pull. This scan is performed every seven days after an image was pulled, and only for 30 days after the image was pulled. This mode doesn't require the security profile, or extension.
-
- - (Preview) Continuous scan for running images. This scan is performed every seven days for as long as the image runs. This mode runs instead of the above mode when the Defender profile, or extension is running on the cluster.
-
-This scan typically completes within 2 minutes, but it might take up to 40 minutes. For every vulnerability identified, Defender for Cloud provides actionable recommendations, along with a severity classification, and guidance for how to remediate the issue.
-
-Defender for Cloud filters, and classifies findings from the scanner. When an image is healthy, Defender for Cloud marks it as such. Defender for Cloud generates security recommendations only for images that have issues to be resolved. By only notifying when there are problems, Defender for Cloud reduces the potential for unwanted informational alerts.
+Learn more in [Vulnerability assessment](defender-for-container-registries-usage.md).
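The on-push and on-import triggers described above can be exercised from the command line. A minimal sketch using the Azure CLI and Docker; the registry name `contosoregistry` and the image names are hypothetical placeholders, not values from this article:

```shell
# Push trigger: pushing an image to your ACR registry queues a scan automatically.
# "contosoregistry" and "myapp:v1" are placeholder names.
az acr login --name contosoregistry
docker tag myapp:v1 contosoregistry.azurecr.io/myapp:v1
docker push contosoregistry.azurecr.io/myapp:v1

# Import trigger: importing a supported image (here, from Docker Hub) also queues a scan.
az acr import --name contosoregistry \
  --source docker.io/library/nginx:latest \
  --image nginx:latest
```

After either command, the scan results surface in Defender for Cloud's recommendations once the assessment completes.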
:::image type="content" source="./media/defender-for-containers/recommendation-acr-images-with-vulnerabilities.png" alt-text="Sample Microsoft Defender for Cloud recommendation about vulnerabilities discovered in Azure Container Registry (ACR) hosted images." lightbox="./media/defender-for-containers/recommendation-acr-images-with-vulnerabilities.png"::: - ### View vulnerabilities for running images
-The recommendation **Running container images should have vulnerability findings resolved** shows vulnerabilities for running images by using the scan results from ACR registeries and information on running images from the Defender security profile/extension. Images that are deployed from a non ACR registry, will appear under the **Not applicable** tab.
+The recommendation **Running container images should have vulnerability findings resolved** shows vulnerabilities for running images by using the scan results from ACR registries and information on running images from the Defender security profile/extension. Images that are deployed from a non-ACR registry will appear under the **Not applicable** tab.
:::image type="content" source="media/defender-for-containers/running-image-vulnerabilities-recommendation.png" alt-text="Screenshot showing where the recommendation is viewable" lightbox="media/defender-for-containers/running-image-vulnerabilities-recommendation-expanded.png":::
defender-for-cloud Defender For Databases Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-databases-introduction.md
# Introduction to Microsoft Defender for open-source relational databases - This plan brings threat protections for the following open-source relational databases: - [Azure Database for PostgreSQL](../postgresql/index.yml)
defender-for-cloud Defender For Databases Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-databases-usage.md
# Enable Microsoft Defender for open-source relational databases and respond to alerts - Microsoft Defender for Cloud detects anomalous activities indicating unusual and potentially harmful attempts to access or exploit databases for the following - [Azure Database for PostgreSQL](../postgresql/index.yml)
defender-for-cloud Defender For Dns Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-dns-introduction.md
# Introduction to Microsoft Defender for DNS - Microsoft Defender for DNS provides an additional layer of protection for resources that use Azure DNS's [Azure-provided name resolution](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#azure-provided-name-resolution) capability. From within Azure DNS, Defender for DNS monitors the queries from these resources and detects suspicious activities without the need for any additional agents on your resources.
defender-for-cloud Defender For Key Vault Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-key-vault-introduction.md
# Introduction to Microsoft Defender for Key Vault - Azure Key Vault is a cloud service that safeguards encryption keys and secrets like certificates, connection strings, and passwords. Enable **Microsoft Defender for Key Vault** for Azure-native, advanced threat protection for Azure Key Vault, providing an additional layer of security intelligence.
defender-for-cloud Defender For Kubernetes Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-kubernetes-introduction.md
# Introduction to Microsoft Defender for Kubernetes (deprecated) - Defender for Cloud provides real-time threat protection for your Azure Kubernetes Service (AKS) containerized environments and generates alerts for suspicious activities. You can use this information to quickly remediate security issues and improve the security of your containers. Threat protection at the cluster level is provided by the analysis of the Kubernetes audit logs.
defender-for-cloud Defender For Resource Manager Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-resource-manager-introduction.md
# Introduction to Microsoft Defender for Resource Manager - [Azure Resource Manager](../azure-resource-manager/management/overview.md) is the deployment and management service for Azure. It provides a management layer that enables you to create, update, and delete resources in your Azure account. You use management features, like access control, locks, and tags, to secure and organize your resources after deployment. The cloud management layer is a crucial service connected to all your cloud resources. Because of this, it is also a potential target for attackers. Consequently, we recommend security operations teams monitor the resource management layer closely.
defender-for-cloud Defender For Resource Manager Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-resource-manager-usage.md
# Respond to Microsoft Defender for Resource Manager alerts - When you receive an alert from Microsoft Defender for Resource Manager, we recommend you investigate and respond to the alert as described below. Microsoft Defender for Resource Manager protects all connected resources, so even if you're familiar with the application or user that triggered the alert, it's important to verify the situation surrounding every alert.
defender-for-cloud Defender For Servers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-servers-introduction.md
# Introduction to Microsoft Defender for Servers - Microsoft Defender for Servers is one of the enhanced security features of Microsoft Defender for Cloud. Use it to add threat detection and advanced defenses to your Windows and Linux machines whether they're running in Azure, AWS, GCP, or on-premises environments. To protect machines in hybrid and multi-cloud environments, Defender for Cloud uses [Azure Arc](../azure-arc/index.yml). Connect your hybrid and multi-cloud machines as explained in the relevant quickstart:
defender-for-cloud Defender For Sql Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-sql-introduction.md
# Introduction to Microsoft Defender for SQL - Microsoft Defender for SQL includes two Microsoft Defender plans that extend Microsoft Defender for Cloud's [data security package](/azure/azure-sql/database/azure-defender-for-sql) to secure your databases and their data wherever they're located. Microsoft Defender for SQL includes functionalities for discovering and mitigating potential database vulnerabilities, and detecting anomalous activities that could indicate a threat to your databases. ## Availability
defender-for-cloud Defender For Sql On Machines Vulnerability Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-sql-on-machines-vulnerability-assessment.md
Last updated 11/09/2021
# Scan your SQL servers for vulnerabilities - **Microsoft Defender for SQL servers on machines** extends the protections for your Azure-native SQL Servers to fully support hybrid environments and protect SQL servers (all supported versions) hosted in Azure, other cloud environments, and even on-premises machines: - [SQL Server on Virtual Machines](https://azure.microsoft.com/services/virtual-machines/sql-server/)
defender-for-cloud Defender For Sql Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-sql-usage.md
Last updated 11/09/2021
# Enable Microsoft Defender for SQL servers on machines - This Microsoft Defender plan detects anomalous activities indicating unusual and potentially harmful attempts to access or exploit databases. You'll see alerts when there are suspicious database activities, potential vulnerabilities, or SQL injection attacks, and anomalous database access and query patterns.
defender-for-cloud Defender For Storage Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-introduction.md
# Introduction to Microsoft Defender for Storage - **Microsoft Defender for Storage** is an Azure-native layer of security intelligence that detects unusual and potentially harmful attempts to access or exploit your storage accounts. It uses advanced threat detection capabilities and [Microsoft Threat Intelligence](https://go.microsoft.com/fwlink/?linkid=2128684) data to provide contextual security alerts. Those alerts also include steps to mitigate the detected threats and prevent future attacks. You can enable **Microsoft Defender for Storage** at either the subscription level (recommended) or the resource level.
defender-for-cloud Deploy Vulnerability Assessment Byol Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/deploy-vulnerability-assessment-byol-vm.md
Last updated 11/09/2021
# Deploy a bring your own license (BYOL) vulnerability assessment solution - If you've enabled **Microsoft Defender for Servers**, you're able to use Microsoft Defender for Cloud's built-in vulnerability assessment tool as described in [Integrated Qualys vulnerability scanner for virtual machines](./deploy-vulnerability-assessment-vm.md). This tool is integrated into Defender for Cloud and doesn't require any external licenses - everything's handled seamlessly inside Defender for Cloud. In addition, the integrated scanner supports Azure Arc-enabled machines. Alternatively, you might want to deploy your own privately licensed vulnerability assessment solution from [Qualys](https://www.qualys.com/lp/azure) or [Rapid7](https://www.rapid7.com/products/insightvm/). You can install one of these partner solutions on multiple VMs belonging to the same subscription (but not to Azure Arc-enabled machines).
defender-for-cloud Deploy Vulnerability Assessment Tvm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/deploy-vulnerability-assessment-tvm.md
Last updated 03/23/2022
# Investigate weaknesses with Microsoft Defender for Endpoint's threat and vulnerability management - [Microsoft's threat and vulnerability management](/microsoft-365/security/defender-endpoint/next-gen-threat-and-vuln-mgt) is a built-in module in Microsoft Defender for Endpoint that can: - Discover vulnerabilities and misconfigurations in near real time
defender-for-cloud Deploy Vulnerability Assessment Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/deploy-vulnerability-assessment-vm.md
Last updated 04/13/2022
# Defender for Cloud's integrated Qualys vulnerability scanner for Azure and hybrid machines - A core component of every cyber risk and security program is the identification and analysis of vulnerabilities. Defender for Cloud regularly checks your connected machines to ensure they're running vulnerability assessment tools.
defender-for-cloud Enable Data Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/enable-data-collection.md
# Configure auto provisioning for agents and extensions from Microsoft Defender for Cloud - Microsoft Defender for Cloud collects data from your resources using the relevant agent or extensions for that resource and the type of data collection you've enabled. Use the procedures below to automatically provision the necessary agents and extensions used by Defender for Cloud to your resources. :::image type="content" source="media/enable-data-collection/auto-provisioning-list-of-extensions.png" alt-text="Screenshot of Microsoft Defender for Cloud's extensions that can be auto provisioned.":::
defender-for-cloud Enable Enhanced Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/enable-enhanced-security.md
# Quickstart: Enable enhanced security features - To learn about the benefits of enhanced security features, see [Microsoft Defender for Cloud's enhanced security features](enhanced-security-features-overview.md). ## Prerequisites
defender-for-cloud Endpoint Protection Recommendations Technical https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/endpoint-protection-recommendations-technical.md
Last updated 03/08/2022
# Endpoint protection assessment and recommendations in Microsoft Defender for Cloud - Microsoft Defender for Cloud provides health assessments of [supported](supported-machines-endpoint-solutions-clouds-servers.md#endpoint-supported) versions of Endpoint protection solutions. This article explains the scenarios that lead Defender for Cloud to generate the following two recommendations: - [Endpoint protection should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/4fb67663-9ab9-475d-b026-8c544cced439)
defender-for-cloud Enhanced Security Features Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/enhanced-security-features-overview.md
# Microsoft Defender for Cloud's enhanced security features - The enhanced security features are free for the first 30 days. At the end of 30 days, if you decide to continue using the service, we'll automatically start charging for usage. You can upgrade from the **Environment settings** page, as described in [Quickstart: Enable enhanced security features](enable-enhanced-security.md). For pricing details in your local currency or region, see the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
defender-for-cloud Exempt Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/exempt-resource.md
Last updated 01/02/2022
# Exempting resources and recommendations from your secure score - A core priority of every security team is to ensure analysts can focus on the tasks and incidents that matter to the organization. Defender for Cloud has many features for customizing the experience and making sure your secure score reflects your organization's security priorities. The **exempt** option is one such feature. When you investigate your security recommendations in Microsoft Defender for Cloud, one of the first pieces of information you review is the list of affected resources.
defender-for-cloud Export To Siem https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/export-to-siem.md
Last updated 04/04/2022
# Stream alerts to a SIEM, SOAR, or IT Service Management solution - Microsoft Defender for Cloud can stream your security alerts into the most popular Security Information and Event Management (SIEM), Security Orchestration Automated Response (SOAR), and IT Service Management (ITSM) solutions. Security alerts are notifications that Defender for Cloud generates when it detects threats on your resources.
defender-for-cloud Features Paas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/features-paas.md
Last updated 02/27/2022
# Feature coverage for Azure PaaS services <a name="paas-services"></a> - The table below shows the availability of Microsoft Defender for Cloud features for the supported Azure PaaS resources. |Service|Recommendations (Free)|Security alerts |Vulnerability assessment|
defender-for-cloud File Integrity Monitoring Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/file-integrity-monitoring-overview.md
Last updated 11/09/2021
# File integrity monitoring in Microsoft Defender for Cloud - Learn how to configure file integrity monitoring (FIM) in Microsoft Defender for Cloud using this walkthrough.
defender-for-cloud File Integrity Monitoring Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/file-integrity-monitoring-usage.md
Last updated 11/09/2021
# Compare baselines using File Integrity Monitoring (FIM) - File Integrity Monitoring (FIM) informs you when changes occur to sensitive areas in your resources, so you can investigate and address unauthorized activity. FIM monitors Windows files, Windows registries, and Linux files. This topic explains how to enable FIM on the files and registries. For more information about FIM, see [File Integrity Monitoring in Microsoft Defender for Cloud](file-integrity-monitoring-overview.md).
defender-for-cloud Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/get-started.md
# Quickstart: Set up Microsoft Defender for Cloud - Defender for Cloud provides unified security management and threat protection across your hybrid and multi-cloud workloads. While the free features offer limited security for your Azure resources only, enabling enhanced security features extends these capabilities to on-premises and other clouds. Defender for Cloud helps you find and fix security vulnerabilities, apply access and application controls to block malicious activity, detect threats using analytics and intelligence, and respond quickly when under attack. You can try the enhanced security features at no cost. To learn more, see the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/). This quickstart section will walk you through all the recommended steps to enable Microsoft Defender for Cloud and the enhanced security features. When you've completed all the quickstart steps, you'll have:
defender-for-cloud Harden Docker Hosts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/harden-docker-hosts.md
Last updated 11/09/2021
# Harden your Docker hosts - Microsoft Defender for Cloud identifies unmanaged containers hosted on IaaS Linux VMs, or other Linux machines running Docker containers. Defender for Cloud continuously assesses the configurations of these containers. It then compares them with the [Center for Internet Security (CIS) Docker Benchmark](https://www.cisecurity.org/benchmark/docker/). Defender for Cloud includes the entire ruleset of the CIS Docker Benchmark and alerts you if your containers don't satisfy any of the controls. When it finds misconfigurations, Defender for Cloud generates security recommendations. Use Defender for Cloud's **recommendations page** to view recommendations and remediate issues.
defender-for-cloud Implement Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/implement-security-recommendations.md
Last updated 11/09/2021
# Implement security recommendations in Microsoft Defender for Cloud - Recommendations give you suggestions on how to better secure your resources. You implement a recommendation by following the remediation steps provided in the recommendation. ## Remediation steps <a name="remediation-steps"></a>
defender-for-cloud Incidents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/incidents.md
Last updated 11/09/2021
# Manage security incidents in Microsoft Defender for Cloud - Triaging and investigating security alerts can be time consuming for even the most skilled security analysts. For many, it's hard to know where to begin. Defender for Cloud uses [analytics](./alerts-overview.md) to connect the information between distinct [security alerts](managing-and-responding-alerts.md). Using these connections, Defender for Cloud can provide a single view of an attack campaign and its related alerts to help you understand the attacker's actions and the affected resources.
defender-for-cloud Information Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/information-protection.md
Last updated 11/09/2021
# Prioritize security actions by data sensitivity - [Microsoft Purview](../purview/overview.md), Microsoft's data governance service, provides rich insights into the *sensitivity of your data*. With automated data discovery, sensitive data classification, and end-to-end data lineage, Microsoft Purview helps organizations manage and govern data in hybrid and multi-cloud environments. Microsoft Defender for Cloud customers using Microsoft Purview can benefit from an additional vital layer of metadata in alerts and recommendations: information about any potentially sensitive data involved. This knowledge helps solve the triage challenge and ensures security professionals can focus their attention on threats to sensitive data.
defender-for-cloud Integration Defender For Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/integration-defender-for-endpoint.md
Last updated 03/22/2022
# Protect your endpoints with Defender for Cloud's integrated EDR solution: Microsoft Defender for Endpoint - Microsoft Defender for Endpoint is a holistic, cloud-delivered, endpoint security solution. Its main features are: - Risk-based vulnerability management and assessment
defender-for-cloud Investigate Resource Health https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/investigate-resource-health.md
Last updated 11/09/2021
# Tutorial: Investigate the health of your resources - > [!NOTE] > The resource health page described in this tutorial is a preview release.
defender-for-cloud Just In Time Access Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/just-in-time-access-overview.md
Last updated 11/09/2021
# Understanding just-in-time (JIT) VM access - This page explains the principles behind Microsoft Defender for Cloud's just-in-time (JIT) VM access feature and the logic behind the recommendation. To learn how to apply JIT to your VMs using the Azure portal (either Defender for Cloud or Azure Virtual Machines) or programmatically, see [How to secure your management ports with JIT](just-in-time-access-usage.md).
defender-for-cloud Just In Time Access Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/just-in-time-access-usage.md
Last updated 01/06/2022
# Secure your management ports with just-in-time access - Lock down inbound traffic to your Azure Virtual Machines with Microsoft Defender for Cloud's just-in-time (JIT) virtual machine (VM) access feature. This reduces exposure to attacks while providing easy access when you need to connect to a VM. For a full explanation about how JIT works and the underlying logic, see [Just-in-time explained](just-in-time-access-overview.md).
defender-for-cloud Kubernetes Workload Protections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/kubernetes-workload-protections.md
Last updated 03/08/2022
# Protect your Kubernetes data plane hardening - This page describes how to use Microsoft Defender for Cloud's set of security recommendations dedicated to Kubernetes data plane hardening. > [!TIP]
defender-for-cloud Management Groups Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/management-groups-roles.md
# Organize subscriptions into management groups and assign roles to users - This page explains how to manage your organization's security posture at scale by applying security policies to all Azure subscriptions linked to your Azure Active Directory tenant. For visibility into the security posture of all subscriptions linked to an Azure AD tenant, you'll need an Azure role with sufficient read permissions assigned on the root management group.
defender-for-cloud Managing And Responding Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/managing-and-responding-alerts.md
Last updated 04/24/2022
# Manage and respond to security alerts in Microsoft Defender for Cloud - This topic shows you how to view and process Defender for Cloud's alerts and protect your resources. Advanced detections that trigger security alerts are only available with Microsoft Defender for Cloud's enhanced security features enabled. A free trial is available. To upgrade, see [Enable enhanced protections](enable-enhanced-security.md).
defender-for-cloud Multi Factor Authentication Enforcement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/multi-factor-authentication-enforcement.md
Last updated 11/09/2021
# Manage multi-factor authentication (MFA) enforcement on your subscriptions - If you're only using passwords to authenticate your users, you're leaving an attack vector open. Users often use weak passwords or reuse them for multiple services. With [MFA](https://www.microsoft.com/security/business/identity/mfa) enabled, your accounts are more secure, and users can still authenticate to almost any application with single sign-on (SSO). There are multiple ways to enable MFA for your Azure Active Directory (AD) users based on the licenses that your organization owns. This page provides the details for each in the context of Microsoft Defender for Cloud.
defender-for-cloud Onboard Management Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/onboard-management-group.md
Last updated 04/25/2022
# Enable Defender for Cloud on all subscriptions in a management group - You can use Azure Policy to enable Microsoft Defender for Cloud on all the Azure subscriptions within the same management group (MG). This is more convenient than accessing them individually from the portal, and works even if the subscriptions belong to different owners. To onboard a management group and all its subscriptions:
defender-for-cloud Os Coverage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/os-coverage.md
Last updated 11/09/2021
# Supported platforms - This page shows the platforms and environments supported by Microsoft Defender for Cloud. ## Combinations of environments <a name="vm-server"></a>
defender-for-cloud Other Threat Protections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/other-threat-protections.md
Last updated 11/09/2021
# Additional threat protections in Microsoft Defender for Cloud - In addition to its built-in [advanced protection plans](defender-for-cloud-introduction.md), Microsoft Defender for Cloud also offers the following threat protection capabilities. > [!TIP]
defender-for-cloud Partner Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/partner-integration.md
Last updated 11/09/2021
# Integrate security solutions in Microsoft Defender for Cloud - This document helps you to manage security solutions already connected to Microsoft Defender for Cloud and add new ones. ## Integrated Azure security solutions
defender-for-cloud Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/permissions.md
Last updated 01/27/2022
# Permissions in Microsoft Defender for Cloud - Defender for Cloud uses [Azure role-based access control (Azure RBAC)](../role-based-access-control/role-assignments-portal.md), which provides [built-in roles](../role-based-access-control/built-in-roles.md) that can be assigned to users, groups, and services in Azure. Defender for Cloud assesses the configuration of your resources to identify security issues and vulnerabilities. In Defender for Cloud, you only see information related to a resource when you are assigned the role of Owner, Contributor, or Reader for the subscription or the resource's resource group.
defender-for-cloud Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/policy-reference.md
# Azure Policy built-in definitions for Microsoft Defender for Cloud - This page is an index of [Azure Policy](../governance/policy/overview.md) built-in policy definitions related to Microsoft Defender for Cloud. The following groupings of policy definitions are available:
defender-for-cloud Powershell Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/powershell-onboarding.md
# Automate onboarding of Microsoft Defender for Cloud using PowerShell - You can secure your Azure workloads programmatically, using the Microsoft Defender for Cloud PowerShell module. Using PowerShell enables you to automate tasks and avoid the human error inherent in manual tasks. This is especially useful in large-scale deployments that involve dozens of subscriptions with hundreds and thousands of resources, all of which must be secured from the beginning. Onboarding Microsoft Defender for Cloud using PowerShell enables you to programmatically automate onboarding and management of your Azure resources and add the necessary security controls.
defender-for-cloud Prevent Misconfigurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/prevent-misconfigurations.md
Last updated 11/09/2021
# Prevent misconfigurations with Enforce/Deny recommendations - Security misconfigurations are a major cause of security incidents. Defender for Cloud can help *prevent* misconfigurations of new resources with regard to specific recommendations. This feature can help keep your workloads secure and stabilize your secure score.
defender-for-cloud Privacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/privacy.md
Last updated 11/09/2021
# Manage user data in Microsoft Defender for Cloud - This article provides information about how you can manage the user data in Microsoft Defender for Cloud. Managing user data includes the ability to access, delete, or export data. [!INCLUDE [gdpr-intro-sentence.md](../../includes/gdpr-intro-sentence.md)]
defender-for-cloud Protect Network Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/protect-network-resources.md
Last updated 11/09/2021
# Protect your network resources - Microsoft Defender for Cloud continuously analyzes the security state of your Azure resources for network security best practices. When Defender for Cloud identifies potential security vulnerabilities, it creates recommendations that guide you through the process of configuring the needed controls to harden and protect your resources. For a full list of the recommendations for Networking, see [Networking recommendations](recommendations-reference.md#recs-networking).
defender-for-cloud Quickstart Automation Alert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-automation-alert.md
Last updated 11/09/2021
# Quickstart: Create an automatic response to a specific security alert using an ARM template - This quickstart describes how to use an Azure Resource Manager template (ARM template) to create a workflow automation that triggers a logic app when specific security alerts are received by Microsoft Defender for Cloud. [!INCLUDE [About Azure Resource Manager](../../includes/resource-manager-quickstart-introduction.md)]
defender-for-cloud Quickstart Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-aws.md
# Connect your AWS accounts to Microsoft Defender for Cloud - With cloud workloads commonly spanning multiple cloud platforms, cloud security services must do the same. Microsoft Defender for Cloud protects workloads in Azure, Amazon Web Services (AWS), and Google Cloud Platform (GCP).
defender-for-cloud Quickstart Onboard Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-gcp.md
# Connect your GCP projects to Microsoft Defender for Cloud - With cloud workloads commonly spanning multiple cloud platforms, cloud security services must do the same. Microsoft Defender for Cloud protects workloads in Azure, Amazon Web Services (AWS), and Google Cloud Platform (GCP).
defender-for-cloud Quickstart Onboard Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-machines.md
# Connect your non-Azure machines to Microsoft Defender for Cloud - Defender for Cloud can monitor the security posture of your non-Azure computers, but first you need to connect them to Azure. You can connect your non-Azure computers in any of the following ways:
defender-for-cloud Regulatory Compliance Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/regulatory-compliance-dashboard.md
Last updated 04/26/2022
# Tutorial: Improve your regulatory compliance - Microsoft Defender for Cloud helps streamline the process for meeting regulatory compliance requirements, using the **regulatory compliance dashboard**. Defender for Cloud continuously assesses your hybrid cloud environment to analyze the risk factors according to the controls and best practices in the standards that you've applied to your subscriptions. The dashboard reflects the status of your compliance with these standards. When you enable Defender for Cloud on an Azure subscription, the [Azure Security Benchmark](/security/benchmark/azure/introduction) is automatically assigned to that subscription. This widely respected benchmark builds on the controls from the [Center for Internet Security (CIS)](https://www.cisecurity.org/benchmark/azure/) and the [National Institute of Standards and Technology (NIST)](https://www.nist.gov/) with a focus on cloud-centric security.
defender-for-cloud Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes-archive.md
Last updated 04/11/2022
# Archive for what's new in Defender for Cloud? - The primary [What's new in Defender for Cloud?](release-notes.md) release notes page contains updates for the last six months, while this page contains older items. This page provides you with information about:
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Last updated 04/26/2022
# What's new in Microsoft Defender for Cloud? - Defender for Cloud is in active development and receives improvements on an ongoing basis. To stay up to date with the most recent developments, this page provides you with information about new features, bug fixes, and deprecated functionality. This page is updated frequently, so revisit it often.
defender-for-cloud Remediate Vulnerability Findings Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/remediate-vulnerability-findings-vm.md
Last updated 11/09/2021
# View and remediate findings from vulnerability assessment solutions on your VMs - When your vulnerability assessment tool reports vulnerabilities to Defender for Cloud, Defender for Cloud presents the findings and related information as recommendations. In addition, the findings include related information such as remediation steps, relevant CVEs, CVSS scores, and more. You can view the identified vulnerabilities for one or more subscriptions, or for a specific VM. ## View findings from the scans of your virtual machines
defender-for-cloud Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/resource-graph-samples.md
# Azure Resource Graph sample queries for Microsoft Defender for Cloud - This page is a collection of [Azure Resource Graph](../governance/resource-graph/overview.md) sample queries for Microsoft Defender for Cloud. For a complete list of Azure Resource Graph samples, see [Resource Graph samples by Category](../governance/resource-graph/samples/samples-by-category.md)
defender-for-cloud Review Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/review-security-recommendations.md
Last updated 04/03/2022
# Review your security recommendations - This article explains how to view and understand the recommendations in Microsoft Defender for Cloud to help you protect your multi-cloud resources. ## View your recommendations <a name="monitor-recommendations"></a>
defender-for-cloud Secure Score Access And Track https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/secure-score-access-and-track.md
Last updated 11/09/2021
# Access and track your secure score - You can find your overall secure score, as well as your score per subscription, through the Azure portal or programmatically as described in the following sections: > [!TIP]
defender-for-cloud Secure Score Security Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/secure-score-security-controls.md
Last updated 04/03/2022
# Security posture for Microsoft Defender for Cloud - ## Introduction to secure score Microsoft Defender for Cloud has two main goals:
defender-for-cloud Security Center Planning And Operations Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/security-center-planning-and-operations-guide.md
Last updated 12/14/2021
# Planning and operations guide - This guide is for information technology (IT) professionals, IT architects, information security analysts, and cloud administrators planning to use Defender for Cloud.
defender-for-cloud Security Center Readiness Roadmap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/security-center-readiness-roadmap.md
Last updated 11/09/2021
# Defender for Cloud readiness roadmap - This document provides a readiness roadmap to help you get started with Defender for Cloud. ## Understanding Defender for Cloud
defender-for-cloud Security Policy Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/security-policy-concept.md
Last updated 11/09/2021
# What are security policies, initiatives, and recommendations? - Microsoft Defender for Cloud applies security initiatives to your subscriptions. These initiatives contain one or more security policies. Each of those policies results in a security recommendation for improving your security posture. This page explains each of these ideas in detail.
defender-for-cloud Sql Information Protection Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/sql-information-protection-policy.md
Last updated 11/09/2021
# SQL information protection policy in Microsoft Defender for Cloud - SQL information protection's [data discovery and classification mechanism](/azure/azure-sql/database/data-discovery-and-classification-overview) provides advanced capabilities for discovering, classifying, labeling, and reporting the sensitive data in your databases. It's built into [Azure SQL Database](/azure/azure-sql/database/sql-database-paas-overview), [Azure SQL Managed Instance](/azure/azure-sql/managed-instance/sql-managed-instance-paas-overview), and [Azure Synapse Analytics](../synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-what-is.md). The classification mechanism is based on the following two elements:
defender-for-cloud Supported Machines Endpoint Solutions Clouds Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/supported-machines-endpoint-solutions-clouds-containers.md
# Defender for Containers feature availability - The **tabs** below show the features that are available, by environment, for Microsoft Defender for Containers. ## Supported features by environment
defender-for-cloud Supported Machines Endpoint Solutions Clouds Servers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/supported-machines-endpoint-solutions-clouds-servers.md
# Feature coverage for machines - The **tabs** below show the features of Microsoft Defender for Cloud that are available for Windows and Linux machines. ## Supported features for virtual machines and servers <a name="vm-server-features"></a>
defender-for-cloud Tenant Wide Permissions Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/tenant-wide-permissions-management.md
Last updated 11/09/2021
# Grant and request tenant-wide visibility - A user with the Azure Active Directory (AD) role of **Global Administrator** might have tenant-wide responsibilities, but lack the Azure permissions to view that organization-wide information in Microsoft Defender for Cloud. Permission elevation is required because Azure AD role assignments don't grant access to Azure resources. ## Grant tenant-wide permissions to yourself
defender-for-cloud Threat Intelligence Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/threat-intelligence-reports.md
Last updated 11/09/2021
# Microsoft Defender for Cloud threat intelligence report - This page explains how Microsoft Defender for Cloud's threat intelligence reports can help you learn more about a threat that triggered a security alert. ## What is a threat intelligence report?
defender-for-cloud Troubleshooting Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/troubleshooting-guide.md
Last updated 12/26/2021
# Microsoft Defender for Cloud Troubleshooting Guide - This guide is for information technology (IT) professionals, information security analysts, and cloud administrators whose organizations need to troubleshoot Defender for Cloud-related issues. Defender for Cloud uses the Log Analytics agent to collect and store data. See [Microsoft Defender for Cloud Platform Migration](./enable-data-collection.md) to learn more. The information in this article represents Defender for Cloud functionality after transition to the Log Analytics agent.
defender-for-cloud Tutorial Protect Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/tutorial-protect-resources.md
Last updated 11/09/2021
# Tutorial: Protect your resources with Microsoft Defender for Cloud - Defender for Cloud limits your exposure to threats by using access and application controls to block malicious activity. Just-in-time (JIT) virtual machine (VM) access reduces your exposure to attacks by enabling you to deny persistent access to VMs. Instead, you provide controlled and audited access to VMs only when needed. Adaptive application controls help harden VMs against malware by controlling which applications can run on your VMs. Defender for Cloud uses machine learning to analyze the processes running in the VM and helps you apply allowlist rules using this intelligence. In this tutorial you'll learn how to:
defender-for-cloud Tutorial Security Incident https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/tutorial-security-incident.md
Last updated 11/09/2021
# Tutorial: Triage, investigate, and respond to security alerts - Microsoft Defender for Cloud continuously analyzes your hybrid cloud workloads using advanced analytics and threat intelligence to alert you about potentially malicious activities in your cloud resources. You can also integrate alerts from other security products and services into Defender for Cloud. Once an alert is raised, swift action is needed to investigate and remediate the potential security issue. In this tutorial, you will learn how to:
defender-for-cloud Tutorial Security Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/tutorial-security-policy.md
Last updated 01/25/2022
# Manage security policies - This page explains how security policies are configured, and how to view them in Microsoft Defender for Cloud. To understand the relationships between initiatives, policies, and recommendations, see [What are security policies, initiatives, and recommendations?](security-policy-concept.md)
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
Title: Important changes coming to Microsoft Defender for Cloud description: Upcoming changes to Microsoft Defender for Cloud that you might need to be aware of and for which you might need to plan Previously updated : 04/11/2022 Last updated : 05/02/2022
# Important upcoming changes to Microsoft Defender for Cloud

> [!IMPORTANT]
> The information on this page relates to pre-release products or features, which may be substantially modified before they are commercially released, if ever. Microsoft makes no commitments or warranties, express or implied, with respect to the information provided here.
If you're looking for the latest release notes, you'll find them in the [What's
| Planned change | Estimated date for change |
|--|--|
| [Changes to recommendations for managing endpoint protection solutions](#changes-to-recommendations-for-managing-endpoint-protection-solutions) | May 2022 |
-| [Multiple changes to identity recommendations](#multiple-changes-to-identity-recommendations) | May 2022 |
| [Changes to vulnerability assessment](#changes-to-vulnerability-assessment) | May 2022 |
| [Key Vault recommendations changed to "audit"](#key-vault-recommendations-changed-to-audit) | May 2022 |
+| [Multiple changes to identity recommendations](#multiple-changes-to-identity-recommendations) | June 2022 |
### Changes to recommendations for managing endpoint protection solutions
Learn more:
- [Defender for Cloud's supported endpoint protection solutions](supported-machines-endpoint-solutions-clouds-servers.md#endpoint-supported)
- [How these recommendations assess the status of your deployed solutions](endpoint-protection-recommendations-technical.md)
-### Multiple changes to identity recommendations
-
-**Estimated date for change:** May 2022
-
-Defender for Cloud includes multiple recommendations for improving the management of users and accounts. In December, we'll be making the changes outlined below.
-
-- **Improved freshness interval** - Currently, the identity recommendations have a freshness interval of 24 hours. This update will reduce that interval to 12 hours.
-
-- **Account exemption capability** - Defender for Cloud has many features for customizing the experience and making sure your secure score reflects your organization's security priorities. The exempt option on security recommendations is one such feature. For a full overview and instructions, see [Exempting resources and recommendations from your secure score](exempt-resource.md). With this update, you'll be able to exempt specific accounts from evaluation by the eight recommendations listed in the following table.
-
- Typically, you'd exempt emergency "break glass" accounts from MFA recommendations, because such accounts are often deliberately excluded from an organization's MFA requirements. Alternatively, you might have external accounts that you'd like to permit access to but which don't have MFA enabled.
-
- > [!TIP]
- > When you exempt an account, it won't be shown as unhealthy and also won't cause a subscription to appear unhealthy.
-
- |Recommendation| Assessment key|
- |-|-|
- |[MFA should be enabled on accounts with owner permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/94290b00-4d0c-d7b4-7cea-064a9554e681)|94290b00-4d0c-d7b4-7cea-064a9554e681|
- |[MFA should be enabled on accounts with read permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/151e82c5-5341-a74b-1eb0-bc38d2c84bb5)|151e82c5-5341-a74b-1eb0-bc38d2c84bb5|
- |[MFA should be enabled on accounts with write permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/57e98606-6b1e-6193-0e3d-fe621387c16b)|57e98606-6b1e-6193-0e3d-fe621387c16b|
- |[External accounts with owner permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/c3b6ae71-f1f0-31b4-e6c1-d5951285d03d)|c3b6ae71-f1f0-31b4-e6c1-d5951285d03d|
- |[External accounts with read permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/a8c6a4ad-d51e-88fe-2979-d3ee3c864f8b)|a8c6a4ad-d51e-88fe-2979-d3ee3c864f8b|
- |[External accounts with write permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/04e7147b-0deb-9796-2e5c-0336343ceb3d)|04e7147b-0deb-9796-2e5c-0336343ceb3d|
- |[Deprecated accounts with owner permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/e52064aa-6853-e252-a11e-dffc675689c2)|e52064aa-6853-e252-a11e-dffc675689c2|
- |[Deprecated accounts should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/00c6d40b-e990-6acf-d4f3-471e747a27c4)|00c6d40b-e990-6acf-d4f3-471e747a27c4|
-
-
-
-- **Recommendations rename** - From this update, we're renaming two recommendations. We're also revising their descriptions. The assessment keys will remain unchanged.
-
- |Property |Current value | From the update|
- ||||
- |Assessment key | e52064aa-6853-e252-a11e-dffc675689c2 | Unchanged|
- |Name |[Deprecated accounts with owner permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/e52064aa-6853-e252-a11e-dffc675689c2) |Subscriptions should be purged of accounts that are blocked in Active Directory and have owner permissions |
- |Description |User accounts that have been blocked from signing in, should be removed from your subscriptions.<br>These accounts can be targets for attackers looking to find ways to access your data without being noticed.|User accounts that have been blocked from signing into Active Directory, should be removed from your subscriptions. These accounts can be targets for attackers looking to find ways to access your data without being noticed.<br>Learn more about securing the identity perimeter in [Azure Identity Management and access control security best practices](../security/fundamentals/identity-management-best-practices.md).|
- |Related policy |[Deprecated accounts with owner permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2febb62a0c-3560-49e1-89ed-27e074e9f8ad) |Subscriptions should be purged of accounts that are blocked in Active Directory and have owner permissions |
--
- |Property |Current value | From the update|
- ||||
- |Assessment key | 00c6d40b-e990-6acf-d4f3-471e747a27c4 | Unchanged|
- |Name |[Deprecated accounts should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/00c6d40b-e990-6acf-d4f3-471e747a27c4)|Subscriptions should be purged of accounts that are blocked in Active Directory and have read and write permissions|
- |Description |User accounts that have been blocked from signing in, should be removed from your subscriptions.<br>These accounts can be targets for attackers looking to find ways to access your data without being noticed.|User accounts that have been blocked from signing into Active Directory, should be removed from your subscriptions. These accounts can be targets for attackers looking to find ways to access your data without being noticed.<br>Learn more about securing the identity perimeter in [Azure Identity Management and access control security best practices](../security/fundamentals/identity-management-best-practices.md).|
- |Related policy |[Deprecated accounts should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f6b1cbf55-e8b6-442f-ba4c-7246b6381474)|Subscriptions should be purged of accounts that are blocked in Active Directory and have read and write permissions|
-
### Changes to vulnerability assessment

**Estimated date for change:** May 2022
The Key Vault recommendations listed here are currently disabled so that they do
| Key Vault secrets should have an expiration date | 14257785-9437-97fa-11ae-898cfb24302b | | Key Vault keys should have an expiration date | 1aabfa0d-7585-f9f5-1d92-ecb40291d9f2 |
+### Multiple changes to identity recommendations
+
+**Estimated date for change:** June 2022
+
+Defender for Cloud includes multiple recommendations for improving the management of users and accounts. In June, we'll be making the changes outlined below.
+
+#### New recommendations in preview
+
+The new release will bring the following capabilities:
+
+- **Extended evaluation scope** - Coverage now extends to identity accounts without MFA and external accounts on Azure resources (instead of subscriptions only), allowing security admins to view role assignments per account.
+
+- **Improved freshness interval** - Currently, the identity recommendations have a freshness interval of 24 hours. This update will reduce that interval to 12 hours.
+
+- **Account exemption capability** - Defender for Cloud has many features you can use to customize your experience and ensure that your secure score reflects your organization's security priorities. For example, you can [exempt resources and recommendations from your secure score](exempt-resource.md).
+
+ This update will allow you to exempt specific accounts from evaluation by the six recommendations listed in the following table.
+
+ Typically, you'd exempt emergency "break glass" accounts from MFA recommendations, because such accounts are often deliberately excluded from an organization's MFA requirements. Alternatively, you might have external accounts that you'd like to permit access to but which don't have MFA enabled.
+
+ > [!TIP]
+ > When you exempt an account, it won't be shown as unhealthy, and it won't cause a subscription to appear unhealthy.
+
+ |Recommendation| Assessment key|
+ |-|-|
+ |MFA should be enabled on accounts with owner permissions on your subscription|94290b00-4d0c-d7b4-7cea-064a9554e681|
+ |MFA should be enabled on accounts with read permissions on your subscription|151e82c5-5341-a74b-1eb0-bc38d2c84bb5|
+ |MFA should be enabled on accounts with write permissions on your subscription|57e98606-6b1e-6193-0e3d-fe621387c16b|
+ |External accounts with owner permissions should be removed from your subscription|c3b6ae71-f1f0-31b4-e6c1-d5951285d03d|
+ |External accounts with read permissions should be removed from your subscription|a8c6a4ad-d51e-88fe-2979-d3ee3c864f8b|
+ |External accounts with write permissions should be removed from your subscription|04e7147b-0deb-9796-2e5c-0336343ceb3d|
+
+#### Recommendations rename
+
+This update will rename two recommendations and revise their descriptions. The assessment keys will remain unchanged.
+
+| Property | Current value | After the update |
+|--|--|--|
+|**First recommendation**| - | - |
+|Assessment key | e52064aa-6853-e252-a11e-dffc675689c2 | No change |
+| Name | [Deprecated accounts with owner permissions should be removed from your subscription](https://ms.portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/e52064aa-6853-e252-a11e-dffc675689c2) | Subscriptions should be purged of accounts that are blocked in Active Directory and have owner permissions. |
+| Description | User accounts that have been blocked from signing in, should be removed from your subscriptions. <br> These accounts can be targets for attackers looking to find ways to access your data without being noticed. | User accounts that have been blocked from signing into Active Directory, should be removed from your subscriptions. These accounts can be targets for attackers looking to find ways to access your data without being noticed. <br> Learn more about securing the identity perimeter in [Azure Identity Management and access control security best practices](../security/fundamentals/identity-management-best-practices.md). |
+| Related policy | [Deprecated accounts with owner permissions should be removed from your subscription](https://ms.portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2febb62a0c-3560-49e1-89ed-27e074e9f8ad) | Subscriptions should be purged of accounts that are blocked in Active Directory and have owner permissions. |
+|**Second recommendation**| - | - |
+| Assessment key | 00c6d40b-e990-6acf-d4f3-471e747a27c4 | No change |
+| Name | [Deprecated accounts should be removed from your subscription](https://ms.portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/00c6d40b-e990-6acf-d4f3-471e747a27c4) | Subscriptions should be purged of accounts that are blocked in Active Directory and have read and write permissions. |
+| Description | User accounts that have been blocked from signing in, should be removed from your subscriptions. <br> These accounts can be targets for attackers looking to find ways to access your data without being noticed. | User accounts that have been blocked from signing into Active Directory, should be removed from your subscriptions. These accounts can be targets for attackers looking to find ways to access your data without being noticed. <br> Learn more about securing the identity perimeter in [Azure Identity Management and access control security best practices](../security/fundamentals/identity-management-best-practices.md). |
+| Related policy | [Deprecated accounts should be removed from your subscription](https://ms.portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f6b1cbf55-e8b6-442f-ba4c-7246b6381474) | Subscriptions should be purged of accounts that are blocked in Active Directory and have read and write permissions. |
## Next steps
defender-for-cloud Update Regulatory Compliance Packages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/update-regulatory-compliance-packages.md
Last updated 11/09/2021
# Customize the set of standards in your regulatory compliance dashboard - Microsoft Defender for Cloud continually compares the configuration of your resources with requirements in industry standards, regulations, and benchmarks. The **regulatory compliance dashboard** provides insights into your compliance posture based on how you're meeting specific compliance requirements. > [!TIP]
defender-for-cloud Windows Admin Center Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/windows-admin-center-integration.md
Last updated 11/09/2021
# Protect Windows Admin Center resources with Microsoft Defender for Cloud - Windows Admin Center is a management tool for your Windows servers. It's a single location for system administrators to access the majority of the most commonly used admin tools. From within Windows Admin Center, you can directly onboard your on-premises servers into Microsoft Defender for Cloud. You can then view a summary of your security recommendations and alerts directly in the Windows Admin Center experience. > [!NOTE]
defender-for-cloud Workflow Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/workflow-automation.md
Last updated 11/09/2021
# Automate responses to Microsoft Defender for Cloud triggers - Every security program includes multiple workflows for incident response. These processes might include notifying relevant stakeholders, launching a change management process, and applying specific remediation steps. Security experts recommend that you automate as many steps of those procedures as you can. Automation reduces overhead. It can also improve your security by ensuring the process steps are done quickly, consistently, and according to your predefined requirements. This article describes the workflow automation feature of Microsoft Defender for Cloud. This feature can trigger Logic Apps on security alerts, recommendations, and changes to regulatory compliance. For example, you might want Defender for Cloud to email a specific user when an alert occurs. You'll also learn how to create Logic Apps using [Azure Logic Apps](../logic-apps/logic-apps-overview.md).
defender-for-cloud Workload Protections Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/workload-protections-dashboard.md
Last updated 11/09/2021
# The workload protections dashboard - This dashboard provides: - Visibility into your Microsoft Defender for Cloud coverage across your different resource types
event-hubs Transport Layer Security Configure Minimum Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/transport-layer-security-configure-minimum-version.md
To configure the minimum TLS version for an Event Hubs namespace with a template
"contentVersion": "1.0.0.0", "parameters": {}, "variables": {
- "serviceBusNamespaceName": "[concat(uniqueString(subscription().subscriptionId), 'tls')]"
+ "eventHubNamespaceName": "[concat(uniqueString(subscription().subscriptionId), 'tls')]"
}, "resources": [ {
- "name": "[variables('serviceBusNamespaceName')]",
+ "name": "[variables('eventHubNamespaceName')]",
"type": "Microsoft.EventHub/namespaces", "apiVersion": "2022-01-01-preview", "location": "westeurope",
resources
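Piecing the diff fragments above back together, a complete minimal template would look roughly like the following sketch. The excerpt doesn't show the `properties` block, so the `minimumTlsVersion` property name and value here are an assumption based on the Event Hubs namespace resource schema; the rest is carried over from the excerpt.

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {},
  "variables": {
    "eventHubNamespaceName": "[concat(uniqueString(subscription().subscriptionId), 'tls')]"
  },
  "resources": [
    {
      "name": "[variables('eventHubNamespaceName')]",
      "type": "Microsoft.EventHub/namespaces",
      "apiVersion": "2022-01-01-preview",
      "location": "westeurope",
      "properties": {
        "minimumTlsVersion": "1.2"
      }
    }
  ]
}
```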
To test that the minimum required TLS version for an Event Hubs namespace forbids calls made with an older version, you can configure a client to use an older version of TLS. For more information about configuring a client to use a specific version of TLS, see [Configure Transport Layer Security (TLS) for a client application](transport-layer-security-configure-client-version.md).
-When a client accesses an Event Hubs namespace using a TLS version that does not meet the minimum TLS version configured for the namespace, Azure Event Hubs returns error code 400 error (Bad Request) and a message indicating that the TLS version that was used is not permitted for making requests against this Event Hubs namespace.
+When a client accesses an Event Hubs namespace using a TLS version that does not meet the minimum TLS version configured for the namespace, Azure Event Hubs returns error code 401 (Unauthorized) and a message indicating that the TLS version that was used is not permitted for making requests against this Event Hubs namespace.
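One way to reproduce that rejection from a test client is to cap the TLS version the client is willing to negotiate. This Python sketch is not from the article; the helper name is illustrative, and it only builds the context (actually sending a request to a namespace would require its endpoint and credentials):

```python
import ssl

def legacy_tls_context() -> ssl.SSLContext:
    """Build a client TLS context capped at TLS 1.1.

    A request made with this context against a namespace whose minimum
    TLS version is 1.2 would be rejected as described above.
    """
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1    # allow older protocols
    ctx.maximum_version = ssl.TLSVersion.TLSv1_1  # never negotiate 1.2+
    return ctx

ctx = legacy_tls_context()
print(ctx.maximum_version.name)  # TLSv1_1
```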
> [!NOTE]
> Due to limitations in the Confluent library, errors coming from an invalid TLS version will not surface when connecting through the Kafka protocol. Instead, a general exception will be shown.
See the following documentation for more information.
- [Enforce a minimum required version of Transport Layer Security (TLS) for requests to an Event Hubs namespace](transport-layer-security-enforce-minimum-version.md)
- [Configure Transport Layer Security (TLS) for an Event Hubs client application](transport-layer-security-configure-client-version.md)
-- [Use Azure Policy to audit for compliance of minimum TLS version for an Event Hubs namespace](transport-layer-security-audit-minimum-version.md)
+- [Use Azure Policy to audit for compliance of minimum TLS version for an Event Hubs namespace](transport-layer-security-audit-minimum-version.md)
iot-dps Virtual Network Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/virtual-network-support.md
For several reasons, customers may wish to restrict connectivity to Azure resour
* Enabling a private connectivity experience from your on-premises network assets, ensuring that your data and traffic are transmitted directly to the Azure backbone network.
-* Preventing exfiltration attacks from sensitive on-premises networks.
+* Preventing exfiltration attacks from sensitive on-premises networks.
* Following established Azure-wide connectivity patterns using [private endpoints](../private-link/private-endpoint-overview.md).
Note the following current limitations for DPS when using private endpoints:
* Enabling one or more private endpoints typically involves [disabling public access](public-network-access.md) to your DPS instance. This means that you can no longer use the Azure portal to manage enrollments. Instead you can manage enrollments using the Azure CLI, PowerShell, or service APIs from machines inside the VNET(s)/private endpoint(s) configured on the DPS instance.
+* When using private endpoints, we recommend deploying DPS in one of the regions that support [Availability Zones](iot-dps-ha-dr.md). Otherwise, DPS instances with private endpoints enabled may see reduced availability in the event of outages.
+ > [!NOTE]
+ > **Data residency consideration:**
+ >
iot-edge Iot Edge For Linux On Windows Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-for-linux-on-windows-support.md
Azure IoT Edge for Linux on Windows supports the following architectures:
Azure IoT Edge for Linux on Windows can run in Windows virtual machines. Using a virtual machine as an IoT Edge device is common when customers want to augment existing infrastructure with edge intelligence. In order to run the EFLOW virtual machine inside a Windows VM, the host VM must support nested virtualization. There are two forms of nested virtualization compatible with Azure IoT Edge for Linux on Windows. Users can choose to deploy through a local VM or Azure VM. For more information, see [EFLOW Nested virtualization](./nested-virtualization.md).
+### VMware virtual machine
+Azure IoT Edge for Linux on Windows supports running inside a Windows virtual machine that runs on top of the [VMware ESXi](https://www.vmware.com/products/esxi-and-esx.html) product family. Specific networking and virtualization configurations are needed to support this scenario. For more information about VMware configuration, see [EFLOW Nested virtualization](./nested-virtualization.md).
## Releases
iot-edge Nested Virtualization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/nested-virtualization.md
This is the baseline approach for any Windows VM that hosts Azure IoT Edge for L
If you're using Windows Server, make sure you [install the Hyper-V role](/windows-server/virtualization/hyper-v/get-started/install-the-hyper-v-role-on-windows-server).
+## Deployment on Windows VM on VMware ESXi
+Both VMware ESXi [6.7](https://docs.vmware.com/en/VMware-vSphere/6.7/vsphere-esxi-67-installation-setup-guide.pdf) and [7.0](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-esxi-vcenter-server-70-release-notes.html) support the nested virtualization needed to host Azure IoT Edge for Linux on Windows on top of a Windows virtual machine.
+
+To set up an Azure IoT Edge for Linux on Windows on a VMware ESXi Windows virtual machine, use the following steps:
+1. Create a Windows virtual machine on the VMware ESXi host. For more information about VMware VM deployment, see [VMware - Deploying Virtual Machines](https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.vm_admin.doc/GUID-39D19B2B-A11C-42AE-AC80-DDA8682AB42C.html).
+>[!NOTE]
+> If you're creating a Windows 11 virtual machine, make sure it meets Microsoft's minimum requirements for running Windows 11. For more information about Windows 11 VM VMware support, see [Installing Windows 11 as a guest OS on VMware](https://kb.vmware.com/s/article/86207).
+1. Turn off the virtual machine created in the previous step.
+1. Select the Windows virtual machine and then **Edit settings**.
+1. Search for _Hardware virtualization_ and turn on _Expose hardware assisted virtualization to the guest OS_.
+1. Select **Save** and start the virtual machine.
+1. Install Hyper-V hypervisor. If you're using Windows client, make sure you [Install Hyper-V on Windows 10](/virtualization/hyper-v-on-windows/quick-start/enable-hyper-v). If you're using Windows Server, make sure you [install the Hyper-V role](/windows-server/virtualization/hyper-v/get-started/install-the-hyper-v-role-on-windows-server).
+
+> [!NOTE]
> For VMware Windows virtual machines, if you plan to use an **external virtual switch** for the EFLOW virtual machine networking, make sure you enable _Promiscuous mode_. For more information, see [Configuring promiscuous mode on a virtual switch or portgroup](https://kb.vmware.com/s/article/1004099). Failing to do so will result in EFLOW installation errors.
+ ## Deployment on Azure VMs Azure IoT Edge for Linux on Windows isn't compatible on an Azure VM running the Server SKU unless a script is executed that brings up a default switch. For more information on how to bring up a default switch, see [Create virtual switch for Linux on Windows](how-to-create-virtual-switch.md).
load-balancer Quickstart Load Balancer Standard Internal Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-internal-bicep.md
+
+ Title: 'Quickstart: Create an internal Azure load balancer using Bicep'
+description: This quickstart shows how to create an internal Azure load balancer using Bicep.
++++++ Last updated : 04/29/2022++
+# Quickstart: Create an internal load balancer to load balance VMs by using Bicep
+
+This quickstart describes how to use Bicep to create an internal Azure load balancer.
++
+## Prerequisites
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+## Review the Bicep file
+
+The Bicep file used in this quickstart is from the [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/2-vms-internal-load-balancer/).
++
+Multiple Azure resources have been defined in the Bicep file:
+
+- [**Microsoft.Storage/storageAccounts**](/azure/templates/microsoft.storage/storageaccounts): Virtual machine storage accounts for boot diagnostics.
+- [**Microsoft.Compute/availabilitySets**](/azure/templates/microsoft.compute/availabilitySets): Availability set for virtual machines.
+- [**Microsoft.Network/virtualNetworks**](/azure/templates/microsoft.network/virtualNetworks): Virtual network for load balancer and virtual machines.
+- [**Microsoft.Network/networkInterfaces**](/azure/templates/microsoft.network/networkInterfaces): Network interfaces for virtual machines.
+- [**Microsoft.Network/loadBalancers**](/azure/templates/microsoft.network/loadBalancers): Internal load balancer.
+- [**Microsoft.Compute/virtualMachines**](/azure/templates/microsoft.compute/virtualMachines): Virtual machines.
+
+## Deploy the Bicep file
+
+1. Save the Bicep file as **main.bicep** to your local computer.
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ az group create --name exampleRG --location eastus
+ az deployment group create --resource-group exampleRG --template-file main.bicep --parameters adminUsername=<admin-user>
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ New-AzResourceGroup -Name exampleRG -Location eastus
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep -adminUsername "<admin-user>"
+ ```
+
+
+
+ > [!NOTE]
+ > Replace **\<admin-user\>** with the admin username. You'll also be prompted to enter **adminPassword**.
+
+ When the deployment finishes, you should see a message indicating the deployment succeeded.
+
+## Review deployed resources
+
+Use the Azure portal, Azure CLI, or Azure PowerShell to list the deployed resources in the resource group.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az resource list --resource-group exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Get-AzResource -ResourceGroupName exampleRG
+```
+++
+## Clean up resources
+
+When no longer needed, use the Azure portal, Azure CLI, or Azure PowerShell to delete the resource group and its resources.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az group delete --name exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name exampleRG
+```
+++
+## Next steps
+
+For a step-by-step tutorial that guides you through the process of creating a Bicep file, see:
+
+> [!div class="nextstepaction"]
+> [Quickstart: Create Bicep files with Visual Studio Code](../azure-resource-manager/bicep/quickstart-create-bicep-use-visual-studio-code.md)
logic-apps Business Continuity Disaster Recovery Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/business-continuity-disaster-recovery-guidance.md
ms.suite: integration Previously updated : 03/24/2021 Last updated : 05/02/2022 # Business continuity and disaster recovery for Azure Logic Apps
Each logic app needs to specify the location that you want to use for deployment
This disaster recovery strategy focuses on setting up your primary logic app to [*failover*](https://en.wikipedia.org/wiki/Failover) onto a standby or backup logic app in an alternate location where Azure Logic Apps is also available. That way, if the primary suffers losses, disruptions, or failures, the secondary can take on the work. This strategy requires that your secondary logic app and dependent resources are already deployed and ready in the alternate location.
-If you follow good DevOps practices, you already use [Azure Resource Manager templates](../azure-resource-manager/management/overview.md) to define and deploy your logic apps and their dependent resources. Resource Manager templates give you the capability to use a single deployment definition and then use parameter files to provide the configuration values to use for each deployment destination. This capability means that you can deploy the same logic app to different environments, for example, development, test, and production. You can also deploy the same logic app to different Azure regions or ISEs, which supports disaster recovery strategies that use [paired-regions](../availability-zones/cross-region-replication-azure.md).
+If you follow good DevOps practices, you already use [Azure Resource Manager templates](../azure-resource-manager/management/overview.md) to define and deploy your logic apps and their dependent resources. Resource Manager templates give you the capability to use a single deployment definition and then use parameter files to provide the configuration values to use for each deployment destination. This capability means that you can deploy the same logic app to different environments, for example, development, test, and production. You can also deploy the same logic app to different Azure regions or ISEs, which support disaster recovery strategies that use [paired-regions](../availability-zones/cross-region-replication-azure.md).
For the failover strategy, your logic apps and locations must meet these requirements:
This example shows the active-passive setup where the primary logic app instance
This example shows a combined setup where the primary location has both active logic app instances, while the secondary location has active-passive logic app instances. If the primary location experiences a disruption or failure, the active logic app in the secondary location, which is already handling a partial workload, can take over the entire workload.
-* In the primary location, an active logic app listens to an Azure Service Bus queue for messages, while another active logic app checks for emails by using a Office 365 Outlook polling trigger.
+* In the primary location, an active logic app listens to an Azure Service Bus queue for messages, while another active logic app checks for emails by using an Office 365 Outlook polling trigger.
* In the secondary location, an active logic app works with the logic app in the primary location by listening and competing for messages from the same Service Bus queue. Meanwhile, a passive logic app waits on standby to check for emails when the primary location becomes unavailable, but is *disabled* to avoid rereading emails.
For this task, in the secondary location, create a watchdog logic app that perfo
To automatically activate the secondary instance, you can create a logic app that calls the management API such as the [Azure Resource Manager connector](/connectors/arm/) to activate the appropriate logic apps in the secondary location. You can expand your watchdog app to call this activation logic app after a specific number of failures happen.
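The watchdog-and-failover flow described above can be sketched in ordinary code. In this hypothetical Python sketch, `check_primary` and `activate_secondary` stand in for the health check against the primary location and the management API call that activates the secondary instance:

```python
def run_watchdog(check_primary, activate_secondary, checks, failure_threshold=3):
    """Trigger failover after a run of consecutive failed health checks.

    check_primary: hypothetical callable returning True when the primary responds.
    activate_secondary: hypothetical callable that enables the standby logic apps.
    Returns True if failover was triggered during this run of checks.
    """
    failures = 0
    for _ in range(checks):
        if check_primary():
            failures = 0  # any success resets the consecutive-failure count
        else:
            failures += 1
            if failures >= failure_threshold:
                activate_secondary()  # e.g., call the management API
                return True
    return False
```

In a real deployment, the loop would be a recurrence-triggered workflow rather than an in-process loop, and the failure threshold would be tuned to your tolerance for false failovers.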
+<a name="availability-zones"></a>
+
+## Zone redundancy with availability zones
+
+In each Azure region, *availability zones* are physically separate locations that are tolerant to local failures. Such failures can range from software and hardware failures to events such as earthquakes, floods, and fires. These zones achieve tolerance through the redundancy and logical isolation of Azure services.
+
+To provide resiliency and distributed availability, at least three separate availability zones exist in any Azure region that supports and enables zone redundancy. The Azure Logic Apps platform distributes logic app workloads across these zones. This capability is a key requirement for enabling resilient architectures and providing high availability if datacenter failures happen in a region.
+
+Currently, this capability is in preview and available for new Consumption logic apps in specific regions. For more information, see the following documentation:
+
+* [Protect Consumption logic apps from region failures with zone redundancy and availability zones](set-up-zone-redundancy-availability-zones.md)
+* [Azure regions and availability zones](../availability-zones/az-overview.md)
+ <a name="collect-diagnostic-data"></a> ## Collect diagnostic data
logic-apps Quickstart Create First Logic App Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/quickstart-create-first-logic-app-workflow.md
ms.suite: integration
Previously updated : 03/02/2022 Last updated : 05/02/2022 #Customer intent: As a developer, I want to create my first automated integration workflow that runs in Azure Logic Apps using the Azure portal.
To create and manage a logic app resource using other tools, review these other
| Plan type | Description |
|--|-|
- | **Consumption** | This logic app type runs in global, multi-tenant Azure Logic Apps and uses the [Consumption billing model](logic-apps-pricing.md#consumption-pricing). |
+ | **Consumption** | This logic app type runs in global, multi-tenant Azure Logic Apps and uses the [Consumption billing model](logic-apps-pricing.md#consumption-pricing). After you select **Consumption**, the **Zone redundancy** section appears. This section offers the choice to enable availability zones for your Consumption logic app. In this example, keep **Enabled** as the setting value. For more information, see [Protect Consumption logic apps from region failures with zone redundancy and availability zones](set-up-zone-redundancy-availability-zones.md). |
| **Standard** | This logic app type is the default selection and runs in single-tenant Azure Logic Apps and uses the [Standard billing model](logic-apps-pricing.md#standard-pricing). | |||
logic-apps Set Up Zone Redundancy Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/set-up-zone-redundancy-availability-zones.md
+
+ Title: Protect logic apps from region failures with zone redundancy
+description: Set up availability zones for logic apps with zone redundancy for business continuity and disaster recovery.
+
+ms.suite: integration
++ Last updated : 05/02/2022+
+#Customer intent: As a developer, I want to protect logic apps from regional failures by setting up availability zones.
++
+# Protect Consumption logic apps from region failures with zone redundancy and availability zones (preview)
+
+In each Azure region, *availability zones* are physically separate locations that are tolerant to local failures. Such failures can range from software and hardware failures to events such as earthquakes, floods, and fires. These zones achieve tolerance through the redundancy and logical isolation of Azure services.
+
+To provide resiliency and distributed availability, at least three separate availability zones exist in any Azure region that supports and enables zone redundancy. The Azure Logic Apps platform distributes logic app workloads across these zones. This capability is a key requirement for enabling resilient architectures and providing high availability if datacenter failures happen in a region. For more information about availability zones and zone redundancy, review [Azure regions and availability zones](../availability-zones/az-overview.md).
+
+This article provides a brief overview about considerations for using availability zones in Azure Logic Apps and how to enable this capability for your Consumption logic app.
+
+## Considerations
+
+During preview, the following considerations apply:
+
+* The following list includes the Azure regions where you can currently enable availability zones; more regions will be added as they become available:
+
+ - Brazil South
+ - Canada Central
+ - France Central
+
+* Azure Logic Apps currently supports the option to enable availability zones *only for new Consumption logic app workflows* that run in multi-tenant Azure Logic Apps.
+
+ * This option is available *only when you create a Consumption logic app using the Azure portal*. No programmatic tool support, such as Azure PowerShell or Azure CLI, currently exists to enable availability zones.
+
+ * This option is unavailable for existing Consumption logic app workflows and for any Standard logic app workflows.
+
+* Existing Consumption logic app workflows are unaffected until mid-May 2022. After this time, the Azure Logic Apps team will gradually start to move existing Consumption logic app workflows towards using availability zones, several Azure regions at a time. The option to enable availability zones on new Consumption logic app workflows remains available during this time.
+
+* If you use a firewall or restricted environment, you have to allow traffic through all the IP addresses required by Azure Logic Apps, managed connectors, and custom connectors in the Azure region where you create your logic app workflows. New IP addresses that support availability zones are already published for Azure Logic Apps, managed connectors, and custom connectors. For more information, review [Prerequisites](#prerequisites).
+
+## Limitations
+
+With HTTP-based actions, certificates exported or created with AES256 encryption won't work when used for client certificate authentication. The same certificates also won't work when used for OAuth authentication.
+
+## Prerequisites
+
+* An Azure account and subscription. If you don't have a subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+* If you have a firewall or restricted environment, you have to allow traffic through all the IP addresses required by Azure Logic Apps, managed connectors, and any custom connectors in the Azure region where you create your logic app workflows. For more information, review the following documentation:
+
+ * [Firewall configuration: IP addresses and service tags](logic-apps-limits-and-config.md#firewall-ip-configuration)
+
+ * [Inbound IP addresses for Azure Logic Apps](logic-apps-limits-and-config.md#inbound)
+
+ * [Outbound IP addresses for Azure Logic Apps](logic-apps-limits-and-config.md#outbound)
+
+ * [Outbound IP addresses for managed connectors and custom connectors](/connectors/common/outbound-ip-addresses)
+
+## Set up availability zones for Consumption logic app workflows
+
+1. In the [Azure portal](https://portal.azure.com), start creating a Consumption logic app. On the **Create Logic App** page, stop after you select **Consumption** as the plan type for your logic app.
+
+ ![Screenshot showing Azure portal, "Create Logic App" page, logic app details, and the "Consumption" plan type selected.](./media/set-up-zone-redundancy-availability-zones/select-consumption-plan.png)
+
+ For a quick tutorial, review [Quickstart: Create your first integration workflow with multi-tenant Azure Logic Apps and the Azure portal](quickstart-create-first-logic-app-workflow.md).
+
+ After you select **Consumption**, the **Zone redundancy** section and options become available.
+
+1. Under **Zone redundancy**, select **Enabled**.
+
+ At this point, your logic app creation experience appears similar to this example:
+
+ ![Screenshot showing Azure portal, "Create Logic App" page, logic app details, and the "Enabled" option under "Zone redundancy" selected.](./media/set-up-zone-redundancy-availability-zones/enable-zone-redundancy.png)
+
+1. Finish creating your logic app.
+
+1. If you use a firewall and haven't set up access for traffic through the required IP addresses, make sure to complete that [requirement](#prerequisites).
+
+## Next steps
+
+* [Business continuity and disaster recovery for Azure Logic Apps](business-continuity-disaster-recovery-guidance.md)
+* [Connectors in Azure Logic Apps](../connectors/apis-list.md)
logic-apps Workflow Definition Language Functions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/workflow-definition-language-functions-reference.md
nthIndexOf('<text>', '<searchText>', <occurrence>)
|--|--|--|--|
| <*text*> | Yes | String | The string that contains the substring to find |
| <*searchText*> | Yes | String | The substring to find |
-| <*ocurrence*> | Yes | Integer | A positive number that specifies the *n*th occurrence of the substring to find. |
+| <*occurrence*> | Yes | Integer | A number that specifies the *n*th occurrence of the substring to find. If *occurrence* is negative, the search starts from the end. |
|||||

| Return value | Type | Description |
Here's the result: `Paris`
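As an illustration of the `nthIndexOf()` semantics described above, here is a hypothetical Python re-implementation (a sketch, not the service's code; returning -1 when no such occurrence exists is an assumption of this sketch):

```python
def nth_index_of(text: str, search_text: str, occurrence: int) -> int:
    """Return the starting index of the nth occurrence of search_text in text.

    A positive occurrence counts matches from the start of text; a negative
    occurrence counts from the end, mirroring the documented behavior.
    Returns -1 if the nth occurrence does not exist (an assumption here).
    """
    if occurrence == 0:
        raise ValueError("occurrence must be nonzero")
    positions = []
    start = text.find(search_text)
    while start != -1:
        positions.append(start)
        start = text.find(search_text, start + 1)
    # Positive occurrences are 1-based from the front; negative index from the back.
    index = occurrence - 1 if occurrence > 0 else occurrence
    if -len(positions) <= index < len(positions):
        return positions[index]
    return -1
```

For example, `nth_index_of("hello world hello", "hello", 2)` and `nth_index_of("hello world hello", "hello", -1)` both locate the second `hello`.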
## Next steps
-Learn about the [Workflow Definition Language](../logic-apps/logic-apps-workflow-definition-language.md)
+Learn about the [Workflow Definition Language](../logic-apps/logic-apps-workflow-definition-language.md)
machine-learning Concept Automated Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-automated-ml.md
--++ Last updated 03/15/2022
machine-learning Concept Data Ingestion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-data-ingestion.md
--++ Last updated 10/21/2021
machine-learning Concept Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-data.md
--++ Last updated 10/21/2021
machine-learning Concept Distributed Training https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-distributed-training.md
description: Learn what type of distributed training Azure Machine Learning supports and the open source framework integrations available for distributed training. --++ Last updated 03/27/2020
machine-learning Concept Manage Ml Pitfalls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-manage-ml-pitfalls.md
--++ Last updated 10/21/2021
machine-learning Concept Ml Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-ml-pipelines.md
--++ Last updated 01/15/2022
machine-learning Concept Mlflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-mlflow.md
Title: MLflow and Azure Machine Learning
description: Learn about MLflow with Azure Machine Learning to log metrics and artifacts from ML models, and deploy your ML models as a web service. --++ Last updated 10/21/2021
machine-learning Concept Model Management And Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-model-management-and-deployment.md
--++ Last updated 11/04/2021
machine-learning Concept Train Model Git Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-train-model-git-integration.md
--++ Last updated 04/05/2022
machine-learning How To Auto Train Forecast https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-forecast.md
Title: Set up AutoML for time-series forecasting
description: Set up Azure Machine Learning automated ML to train time-series forecasting models with the Azure Machine Learning Python SDK. --++
machine-learning How To Configure Auto Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-auto-features.md
Title: Featurization with automated machine learning description: Learn the data featurization settings in Azure Machine Learning and how to customize those features for your automated ML experiments.--++
machine-learning How To Configure Auto Train https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-auto-train.md
Title: Set up AutoML with Python description: Learn how to set up an AutoML training run with the Azure Machine Learning Python SDK using Azure Machine Learning automated ML.--++
machine-learning How To Data Prep Synapse Spark Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-data-prep-synapse-spark-pool.md
--++ Last updated 10/21/2021
machine-learning How To Debug Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-debug-pipelines.md
description: How to troubleshoot when you get errors running a machine learning
--++ Last updated 10/21/2021
machine-learning How To Deploy Azure Container Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-azure-container-instance.md
--++ Last updated 10/21/2021
machine-learning How To Deploy Azure Kubernetes Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-azure-kubernetes-service.md
--++ Last updated 10/21/2021
machine-learning How To Deploy Fpga Web Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-fpga-web-service.md
--++ Last updated 10/21/2021
machine-learning How To Deploy Inferencing Gpus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-inferencing-gpus.md
aks_target.delete()
* [Deploy model on FPGA](how-to-deploy-fpga-web-service.md) * [Deploy model with ONNX](concept-onnx.md#deploy-onnx-models-in-azure)
-* [Train Tensorflow DNN Models](how-to-train-tensorflow.md)
+* [Train TensorFlow DNN Models](how-to-train-tensorflow.md)
machine-learning How To Deploy Mlflow Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-mlflow-models.md
Title: Deploy MLflow models as web services
description: Set up MLflow with Azure Machine Learning to deploy your ML models as an Azure web service. --++ Last updated 10/25/2021
machine-learning How To Deploy Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-pipelines.md
description: Run machine learning workflows with machine learning pipelines and
--++ Last updated 10/21/2021
machine-learning How To Link Synapse Ml Workspaces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-link-synapse-ml-workspaces.md
--++ Last updated 10/21/2021
machine-learning How To Move Data In Out Of Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-move-data-in-out-of-pipelines.md
description: Learn how Azure Machine Learning pipelines ingest data, and how to
--++ Last updated 10/21/2021
machine-learning How To Train Mlflow Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-mlflow-projects.md
Title: Train with MLflow Projects
description: Set up MLflow with Azure Machine Learning to log metrics and artifacts from ML models --++ Last updated 06/16/2021
machine-learning How To Train Scikit Learn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-scikit-learn.md
description: Learn how Azure Machine Learning enables you to scale out a scikit-
--++ Last updated 03/21/2022
machine-learning How To Trigger Published Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-trigger-published-pipeline.md
description: Triggered pipelines allow you to automate routine, time-consuming t
--++ Last updated 10/21/2021
machine-learning How To Troubleshoot Auto Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-auto-ml.md
Title: Troubleshoot automated ML experiments description: Learn how to troubleshoot and resolve issues in your automated machine learning experiments.--++
machine-learning How To Understand Automated Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-understand-automated-ml.md
Title: Evaluate AutoML experiment results
description: Learn how to view and evaluate charts and metrics for each of your automated machine learning experiment runs. --++ Last updated 10/21/2021
machine-learning How To Use Automated Ml For Ml Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-automated-ml-for-ml-models.md
description: Learn how to set up AutoML training runs without a single line of c
--++ Last updated 11/15/2021
machine-learning How To Use Automlstep In Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-automlstep-in-pipelines.md
description: The AutoMLStep allows you to use automated machine learning in your
--++ Last updated 10/21/2021
machine-learning How To Use Labeled Dataset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-labeled-dataset.md
Title: Create and explore datasets with labels description: Learn how to export data labels from your Azure Machine Learning labeling projects and use them for machine learning tasks. --++
machine-learning How To Use Mlflow Azure Databricks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-mlflow-azure-databricks.md
Title: MLflow Tracking for Azure Databricks ML experiments
description: Set up MLflow with Azure Machine Learning to log metrics and artifacts from Azure Databricks ML experiments. --++
machine-learning How To Use Mlflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-mlflow.md
Title: MLflow Tracking for models
description: Set up MLflow Tracking with Azure Machine Learning to log metrics and artifacts from ML models. --++ Last updated 10/21/2021
machine-learning How To Use Synapsesparkstep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-synapsesparkstep.md
description: Link your Azure Synapse Analytics workspace to your Azure machine l
--++ Last updated 10/21/2021
machine-learning How To Version Track Datasets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-version-track-datasets.md
description: Learn how to version machine learning datasets and how versioning w
--++ Last updated 10/21/2021
machine-learning Reference Azure Machine Learning Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-azure-machine-learning-cli.md
--++ Last updated 04/02/2021
machine-learning Tutorial Auto Train Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-auto-train-models.md
--++ Last updated 10/21/2021
machine-learning Tutorial Automated Ml Forecast https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-automated-ml-forecast.md
-+ -+ Last updated 10/21/2021 # Customer intent: As a non-coding data scientist, I want to use automated machine learning to build a demand forecasting model.
machine-learning Tutorial First Experiment Automated Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-first-experiment-automated-ml.md
--++ Last updated 10/21/2021
machine-learning Tutorial Pipeline Python Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-pipeline-python-sdk.md
--++ Last updated 01/28/2022
mariadb Quickstart Create Mariadb Server Database Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/quickstart-create-mariadb-server-database-bicep.md
+
+ Title: 'Quickstart: Create an Azure DB for MariaDB - Bicep'
+description: In this Quickstart article, learn how to create an Azure Database for MariaDB server using Bicep.
++ Last updated : 04/28/2022+++++
+# Quickstart: Use Bicep to create an Azure Database for MariaDB server
+
+Azure Database for MariaDB is a managed service that you use to run, manage, and scale highly available MariaDB databases in the cloud. In this quickstart, you use Bicep to create an Azure Database for MariaDB server, deploying it with Azure CLI or Azure PowerShell.
++
+## Prerequisites
+
+You'll need an Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/).
+
+## Review the Bicep file
+
+You create an Azure Database for MariaDB server with a defined set of compute and storage resources. To learn more, see [Azure Database for MariaDB pricing tiers](concepts-pricing-tiers.md). You create the server within an [Azure resource group](../azure-resource-manager/management/overview.md).
+
+The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/managed-mariadb-with-vnet/).
++
+The Bicep file defines five Azure resources:
+
+* [**Microsoft.Network/virtualNetworks**](/azure/templates/microsoft.network/virtualnetworks)
+* [**Microsoft.Network/virtualNetworks/subnets**](/azure/templates/microsoft.network/virtualnetworks/subnets)
+* [**Microsoft.DBforMariaDB/servers**](/azure/templates/microsoft.dbformariadb/servers)
+* [**Microsoft.DBforMariaDB/servers/virtualNetworkRules**](/azure/templates/microsoft.dbformariadb/servers/virtualnetworkrules)
+* [**Microsoft.DBforMariaDB/servers/firewallRules**](/azure/templates/microsoft.dbformariadb/servers/firewallrules)
+
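As an illustration of what the template contains, a minimal Bicep sketch of just the MariaDB server resource follows. The SKU, storage values, and parameter names here are assumptions for the sketch, not the template's exact contents; the actual quickstart template also declares the virtual network, subnet, virtual network rule, and firewall rule resources listed above.

```bicep
// Hypothetical sketch of the Microsoft.DBforMariaDB/servers resource.
// SKU and storage values are illustrative; see the Azure Quickstart
// Templates repo for the full template used by this quickstart.
param serverName string
param administratorLogin string
@secure()
param administratorLoginPassword string
param location string = resourceGroup().location

resource mariaDbServer 'Microsoft.DBforMariaDB/servers@2018-06-01' = {
  name: serverName
  location: location
  sku: {
    name: 'GP_Gen5_2'        // assumed General Purpose, Gen 5, 2 vCores
    tier: 'GeneralPurpose'
  }
  properties: {
    createMode: 'Default'    // create a new server (not a restore or replica)
    administratorLogin: administratorLogin
    administratorLoginPassword: administratorLoginPassword
    sslEnforcement: 'Enabled'
    storageProfile: {
      storageMB: 51200       // assumed 50 GB
      backupRetentionDays: 7
      geoRedundantBackup: 'Disabled'
    }
  }
}
```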
+## Deploy the Bicep file
+
+1. Save the Bicep file as **main.bicep** to your local computer.
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ az group create --name exampleRG --location eastus
+ az deployment group create --resource-group exampleRG --template-file main.bicep --parameters serverName=<server-name> administratorLogin=<admin-login>
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ New-AzResourceGroup -Name exampleRG -Location eastus
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep -serverName "<server-name>" -administratorLogin "<admin-login>"
+ ```
+
+
+
+ > [!NOTE]
+ > Replace **\<server-name\>** with the name of the server. Replace **\<admin-login\>** with the database administrator login name. The minimum required length is one character. You'll also be prompted to enter **administratorLoginPassword**. The minimum password length is eight characters.
+
+ When the deployment finishes, you should see a message indicating the deployment succeeded.
+
+## Review deployed resources
+
+Use the Azure portal, Azure CLI, or Azure PowerShell to list the deployed resources in the resource group.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az resource list --resource-group exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Get-AzResource -ResourceGroupName exampleRG
+```
+++
+## Clean up resources
+
+When no longer needed, use the Azure portal, Azure CLI, or Azure PowerShell to delete the resource group and its resources.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az group delete --name exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name exampleRG
+```
+++
+## Next steps
+
+For a step-by-step tutorial that guides you through the process of creating a Bicep file using Visual Studio Code, see:
+
+> [!div class="nextstepaction"]
+> [Quickstart: Create Bicep files with Visual Studio Code](../azure-resource-manager/bicep/quickstart-create-bicep-use-visual-studio-code.md)
marketplace Azure App Offer Listing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-app-offer-listing.md
Previously updated : 03/16/2022 Last updated : 05/02/2022 # Configure your Azure application offer listing details
Last updated 03/16/2022
The information you provide on the **Offer listing** page for your Azure Application offer will be displayed in the Microsoft commercial marketplace online stores. This includes the descriptions of your offer, screenshots, and your marketing assets. To see what this looks like, see [Offer listing details](plan-azure-application-offer.md#offer-listing-details). > [!NOTE]
-> Offer listing content (such as the description, documents, screenshots, and terms of use) is not required to be in English if the offer description begins with the phrase, "This application is available only in [non-English language]." It is also acceptable to provide a _Useful Link_ URL to offer content in a language other than the one used in the offer listing content.
+> Offer listing content (such as the description, documents, screenshots, and terms of use) is not required to be in English if the offer description begins with the phrase, "This application is available only in [non-English language]." It is also acceptable to provide a _Useful Link_ URL to offer content in a language other than the one used in the offer listing content if the offer description begins with the phrase "This application is also available in [non-English language]".
## Marketplace details
marketplace Customer Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/customer-dashboard.md
Title: Customers dashboard in Microsoft commercial marketplace analytics on Partner Center, Azure Marketplace, and Microsoft AppSource
+ Title: Customers dashboard in Microsoft commercial marketplace analytics on Partner Center
description: Learn how to access information about your customers, including growth trends, using the Customers dashboard in commercial marketplace analytics.
Previously updated : 04/18/2022 Last updated : 04/26/2022 # Customers dashboard in commercial marketplace analytics
The [Customers dashboard](https://go.microsoft.com/fwlink/?linkid=2166011) displ
1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/home). 1. On the Home page, select the **Insights** tile.
- [ ![Illustrates the Insights tile on the Partner Center Home page.](./media/workspaces/partner-center-insights-tile.png) ](./media/workspaces/partner-center-insights-tile.png#lightbox)
+ ![Screenshot showing the Insights tile on the Partner Center Home page.](./media/workspaces/partner-center-insights-tile.png)
-1. In the left menu, select **Customers**.
+1. In the left-nav menu, select **[Customers](https://partner.microsoft.com/dashboard/insights/commercial-marketplace/analytics/customer)**.
+
+ :::image type="content" source="media/customer-dashboard/menu-customer.png" alt-text="Screenshot showing the Customer option in the left-nav menu.":::
## Elements of the Customers dashboard The following sections describe how to use the Customers dashboard and how to read the data.
+### Download
++
+To download a snapshot of the dashboard, select **Download as PDF**. Alternatively, go to the [Downloads](https://partner.microsoft.com/dashboard/insights/commercial-marketplace/analytics/downloads) dashboard and download the report.
+
+### Share
++
+To email dashboard widget data, select **Share** and provide the email information. Share report URLs using **Copy link** and **Share to Teams**, or **Copy as image** to send a snapshot of chart data.

++
+### What's new
++
+Use this to check on changes and enhancements.
+
+### About data refresh
++
+View the data source and the data refresh details, such as frequency of the data refresh.
+
+### Got feedback?
++
+Submit feedback about the report/dashboard along with an optional screenshot.
++ ### Month range
-You can find a month range selection at the top-right corner of each page. Customize the output of the **Customers** page graphs by selecting a month range based on the past 6, or 12 months, or by selecting a custom month range with a maximum duration of 12 months. The default month range (computation period) is six months.
-[ ![Illustrates the month filters on the Customers page.](./media/customer-dashboard/customers-workspace-filters.png) ](./media/customer-dashboard/customers-workspace-filters.png#lightbox)
+A month range selection is at the top-right corner of each page. Customize the output of graphs by selecting a month range based on the last **six** or **12** months, or by selecting a **custom** month range with a maximum duration of 12 months. The default month range is six months.
-> [!NOTE]
-> All metrics in the visualization widgets and export reports honor the computation period selected by the user.
+### Customer page dashboard filters
++
+These filters are applied at the Customers page level. Select multiple filters to render the chart for what you want to see in the **Detailed orders data** export. Filters are applied on the data extracted for the month range you selected on the upper-right corner of the page.
+
+The page has dashboard-level filters for the following:
+
+- Sales Channel
+- Marketplace Subscription Id
+- Customer Id
+- Customer Name
+- Customer Company Name
+- Country
+
+Each filter is expandable with multiple options that you can select. Filter options are dynamic and based on the selected date range.
### Active and churned customers' trend
-In this section, you will find your customers growth trend for the selected computation period. Metrics and growth trends are represented by a line chart and displays the value for each month by hovering over the line on the chart. The percentage value below **Active customers** in the widget represents the amount of growth during the selected computation period. For example, the following screenshot shows a growth of 0.92% for the selected computation period.
+This widget shows your customers' growth trend for the selected computation period. Metrics and growth trends are represented by a line chart; hover over the line to display the value for each month. The percentage value below **Active customers** in the widget represents the amount of growth during the selected computation period. For example, the following screenshot shows a decline of 1.93% in active customers for the selected computation period.
-[![Illustrates the Customers widget on the Customers page.](./media/customer-dashboard/customers-widget.png)](./media/customer-dashboard/customers-widget.png#lightbox)
+![Screenshot showing the Customers widget on the Customers page.](./media/customer-dashboard/customers-widget.png)
-There are three _customer types_: new, existing, and churned.
+There are three customer types: new, existing, and churned.
-- A new customer has acquired one or more of your offers for the first time within the selected month.-- An existing customer has acquired one or more of your offers prior to the month selected.-- A churned customer has canceled all offers previously purchased. Churned customers are represented in the negative axis in the Trend widget.
+- A **new** customer has acquired one or more of your offers for the first time within the selected month.
+- An **existing** customer has acquired one or more of your offers prior to the month selected.
+- A **churned** customer has canceled all offers previously purchased. Churned customers for VM offers are calculated as customers who didn't show any VM usage for two days or more. Churned customers are represented in the negative axis in the Trend widget.
+
+Select the ellipsis (...) to copy the widget image, download aggregated widget data as a .csv file, or download the image as a PDF.
### Customer growth trend including existing, new, and churned customers
-In this section, you will find trend and count of all customers, including new, existing, and churned, with a month-by-month growth trend.
+This widget shows the trend and count of all customers, including new, existing, and churned, with a month-by-month growth trend.
- The line chart represents the overall customer growth percentages. - The month column represents the count of customers stacked by new, existing, and churned customers.
In this section, you will find trend and count of all customers, including new,
- You can select specific legend items to display more detailed views. For example, select new customers from the legend to display only new customers. - Hovering over a column in the chart displays details for that month only.
-[![Illustrates the Customers trend widget on the Customers page.](./media/customer-dashboard/customers-trend.png)](./media/customer-dashboard/customers-trend.png#lightbox)
+![Screenshot showing the Customers trend widget on the Customers page.](./media/customer-dashboard/customers-trend.png)
-### Customers by orders/usage
+Select the ellipsis (...) to copy the widget image, download aggregated widget data as a .csv file, or download the image as a PDF.
-The **Customers by orders/usage** chart has three tabs: Orders, Normalized usage, and Raw usage. Select the **Orders** tab to show order details.
+### Orders by customer type
-[ ![Illustrates the Orders tab of the Customers by orders and usage widget on the Customers page.](./media/customer-dashboard/customers-by-orders-usage.png) ](./media/customer-dashboard/customers-by-orders-usage.png#lightbox)
+This chart has tabs for Orders, Normalized usage, and Raw usage. Select **Orders** to show order details.
-Note the following:
+![Screenshot showing the Orders tab on the Insights screen of the Customers dashboard.](./media/customer-dashboard/customers-by-orders-usage.png)
- The Leader board presents details of the customers ranked by order count. After selecting a customer, the customer details are presented in the adjoining "Details", "Orders by SKUs" and "SKUs by Seats" sections. - The Customer profile details are displayed in this space when publishers are signed in with an owner role. If publishers are signed in with a contributor role, the details in this section will not be available. - The **Orders by SKUs** donut chart displays the breakdown of orders purchased for plans. The top five plans with the highest order count are displayed, while the rest of the orders are grouped under **Rest all**. - The **SKUs by Seats** donut chart displays the breakdown of seats ordered for plans. The top five plans with the highest seats are displayed, while the rest of the orders are grouped under **Rest All**.
-You can also select the **Normalized usage** or **Raw usage** tab to view usage details.
+Select **Normalized usage** or **Raw usage** for those views.
- The Leader board presents details of the customers ranked by usage hours count. After selecting a customer, the details of the customer are presented in the adjoining "Details", "Normalized Usage by offers" and "Normalized Usage by virtual machine (VM) Size(s)" section. - The Customer profile details are displayed in this space when publishers are logged in with an owner role. If publishers are logged in with a contributor role, the details in this section will not be available. - The **Normalized usage by Offers** donut chart displays the breakdown of usage consumed by the offers. The top five plans with the highest usage count are displayed, while the rest of the offers are grouped under **Rest All**. - The **Normalized usage by VM Size(s)** donut chart displays the breakdown of usage consumed by different VM Size(s). The top five VM Sizes with the highest normalized usage are displayed, while the rest of the usage is grouped under **Rest All**.
+Select the ellipsis (...) to copy the widget image, download aggregated widget data as a .csv file, or download the image as a PDF.
+ ### Top customers percentile The **Top customers percentile** chart has three tabs, "Orders," "Normalized usage," and "Raw usage." The _top customer percentile_ is displayed along the x-axis, as determined by the number of orders. The y-axis displays the customer's order count. The secondary y-axis (line graph) displays the cumulative percentage of the total number of orders. You can display details by hovering over points along the line chart.
-[![Illustrates the Orders tab of the Top Customer Percentile widget on the Customers page.](./media/customer-dashboard/top-customer-percentile.png)](./media/customer-dashboard/top-customer-percentile.png#lightbox)
+![Screenshot showing the Orders tab of the Top Customer Percentile widget on the Customers page.](./media/customer-dashboard/top-customer-percentile.png)
+
+Select the ellipsis (...) to copy the widget image, download aggregated widget data as a .csv file, or download the image as a PDF.
### Customer type by orders and usage
This chart shows the following:
- A new customer has acquired one or more of your offers or reported usage for the first time within the same calendar month (y-axis). - An existing customer has previously acquired an offer from you or reported usage prior to the calendar month reported (on the y-axis).
-[![Illustrates the Orders tab of the Orders by Customer Type widget on the Customers page.](./media/customer-dashboard/orders-by-customer-type.png)](./media/customer-dashboard/orders-by-customer-type.png#lightbox)
+
+Select the ellipsis (...) to copy the widget image, download aggregated widget data as a .csv file, or download the image as a PDF.
### Customers by geography For the selected computation period, the heatmap displays the total number of customers, and the percentage of newly added customers against geography dimension. The light to dark color on the map represents the low to high value of the customer count. Select a record in the table to zoom in on a country or region.
-[![Illustrates the Orders tab of the Orders by geography widget on the Customers page.](./media/customer-dashboard/customers-by-geography.png)](./media/customer-dashboard/customers-by-geography.png#lightbox)
-
-Note the following:
+![Screenshot showing the Orders tab of the Orders by geography widget on the Customers page.](./media/customer-dashboard/customers-by-geography.png)
- You can move the map to view the exact location. - You can zoom into a specific location. - The heatmap has a supplementary grid to view the details of customer count, order count, normalized usage hours in the specific location. - You can search and select a country/region in the grid to zoom to the location in the map. Revert to the original view by selecting the **Home** button in the map.
+Select the ellipsis (...) to copy the widget image, download aggregated widget data as a .csv file, or download the image as a PDF.
+ ### Customer details table The **Customer details** table displays a numbered list of the top 1,000 customers sorted by the date they first acquired one of your offers. You can expand a section by selecting the expansion icon in the details ribbon.
-Note the following:
+![Screenshot showing the Customer Details table on the Insights screen of the Customers dashboard.](./media/customer-dashboard/customer-details-table.png)
- Customer personal information will only be available if the customer has provided consent. You can only view this information if you have signed in with an owner role level of permissions. - Each column in the grid is sortable.
Note the following:
- When an offer is purchased by a protected customer, information in **Customer Detailed Data** will be masked (************). - Customer dimension details such as Company Name, Customer Name, and Customer Email are at an organization ID level, not at Azure Marketplace or Microsoft AppSource transaction level.
+Select the ellipsis (...) to copy the widget image, download aggregated widget data as a .csv file, or download the image as a PDF.
+ _**Table 1: Dictionary of data terms**_ | Column name in<br>user interface | Attribute name | Definition | Column name in programmatic<br>access reports |
_**Table 1: Dictionary of data terms**_
| Customer Postal Code | Customer Postal Code | The postal code provided by the customer. Code could be different than the postal code provided in a customer's Azure subscription. | CustomerPostal Code | | CustomerCommunicationCulture | Customer Communication Language | The language preferred by the customer for communication. | CustomerCommunicationCulture | | CustomerCountryRegion | Customer Country/Region | The country/region name provided by the customer. Country/region could be different than the country/region in a customer's Azure subscription. | CustomerCountryRegion |
-| AzureLicenseType | Azure License Type | The type of licensing agreement used by customers to purchase Azure. Also known as the _channel_. The possible values are:<ul><li>Cloud Solution Provider</li><li>Enterprise</li><li>Enterprise through Reseller</li><li>Pay as You Go</li></ul> | AzureLicenseType |
+| AzureLicenseType | Azure License Type | The type of licensing agreement used by customers to purchase Azure. Also known as the _channel_. The possible values are:<br>- Cloud Solution Provider<br>- Enterprise<br>- Enterprise through Reseller<br>- Pay as You Go | AzureLicenseType |
| PromotionalCustomers | Is Promotional Contact Opt In | The value will let you know if the customer proactively opted in for promotional contact from publishers. At this time, we are not presenting the option to customers, so we have indicated "No" across the board. After this feature is deployed, we will start updating accordingly. | PromotionalCustomers | | CustomerState | Customer State | The state of residence provided by the customer. State could be different than the state provided in a customer's Azure subscription. | CustomerState | | CommerceRootCustomer | Commerce Root Customer | One Billing Account ID can be associated with multiple Customer IDs.<br>One combination of a Billing Account ID and a Customer ID can be associated with multiple commercial marketplace subscriptions.<br>The Commerce Root Customer signifies the name of the subscription's customer. | CommerceRootCustomer |
-| Customer Type | Customer Type | The value of this field signifies the type of the customer. The possible values are:<ul><li>individual</li> <li>organization</li></ul> | CustomerType |
+| Customer Type | Customer Type | The value of this field signifies the type of the customer. The possible values are:<br>- individual<br>- organization | CustomerType |
| OfferName | OfferName | The name of the commercial marketplace offer | OfferName| | PlanID | PlanID | The display name of the plan entered when the offer was created in Partner Center | PlanID | | SKU | SKU | The plan associated with the offer | SKU | | N/A | lastModifiedAt | The latest timestamp for customer purchases. Use this field, via programmatic API access, to pull the latest snapshot of all customer purchase transactions since a specific date | lastModifiedAt |
-### Customers page filters
-
-The Customers page filters are applied at the Customers page level. You can select multiple filters to render the chart for the criteria you choose to view and the data you want to see in the "Detailed orders data" grid / export. Filters are applied on the data extracted for the month range that you selected on the upper-right corner of the Customers page.
-
-> [!TIP]
-> You can use the download icon in the upper-right corner of any widget to download the data. You can provide feedback on each of the widgets by clicking on the "thumbs up" or "thumbs down" icon.
- ## Next steps - For an overview of analytics reports available in the commercial marketplace, see [Access analytic reports for the commercial marketplace in Partner Center](analytics.md).
network-watcher Diagnose Vm Network Traffic Filtering Problem https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/diagnose-vm-network-traffic-filtering-problem.md
documentationcenter: network-watcher
editor: Previously updated : 04/20/2018 Last updated : 05/02/2022 ms.assetid:
Log in to the Azure portal at https://portal.azure.com.
## Create a VM 1. Select **+ Create a resource** found on the upper, left corner of the Azure portal.
-2. Select **Compute**, and then select **Windows Server 2016 Datacenter** or a version of **Ubuntu Server**.
-3. Enter, or select, the following information, accept the defaults for the remaining settings, and then select **OK**:
+1. Select **Compute**, and then select **Windows Server 2019 Datacenter** or a version of **Ubuntu Server**.
+1. Enter, or select, the following information, accept the defaults for the remaining settings, and then select **OK**:
|Setting|Value| |||
- |Name|myVm|
- |User name| Enter a user name of your choosing.|
- |Password| Enter a password of your choosing. The password must be at least 12 characters long and meet the defined complexity requirements.|
|Subscription| Select your subscription.| |Resource group| Select **Create new** and enter **myResourceGroup**.|
- |Location| Select **East US**|
+ |Virtual machine name|myVm|
+ |Region| Select **East US**|
+ |User name| Enter a user name of your choosing.|
+ |Password| Enter a password of your choosing. The password must be at least 12 characters long and meet the defined complexity requirements.|
-4. Select a size for the VM and then select **Select**.
-5. Under **Settings**, accept all the defaults, and select **OK**.
-6. Under **Create** of the **Summary**, select **Create** to start VM deployment. The VM takes a few minutes to deploy. Wait for the VM to finish deploying before continuing with the remaining steps.
+1. Select **Review + create** to start VM deployment. The VM takes a few minutes to deploy. Wait for the VM to finish deploying before continuing with the remaining steps.
## Test network communication
To test network communication with Network Watcher, first enable a network watch
If you already have a network watcher enabled in at least one region, skip to the [Use IP flow verify](#use-ip-flow-verify).
-1. In the portal, select **All services**. In the **Filter box**, enter *Network Watcher*. When **Network Watcher** appears in the results, select it.
-2. Enable a network watcher in the East US region, because that's the region the VM was deployed to in a previous step. Select **Regions**, to expand it, and then select **...** to the right of **East US**, as shown in the following picture:
-
- ![Enable Network Watcher](./media/diagnose-vm-network-traffic-filtering-problem/enable-network-watcher.png)
-
-3. Select **Enable Network Watcher**.
+1. In the **Home** portal, select **More services**. In the **Filter box**, enter *Network Watcher*. When **Network Watcher** appears in the results, select it.
+1. Enable a network watcher in the East US region, because that's the region the VM was deployed to in a previous step. Select **Add** to expand it, and then select **Region** under **Subscription**, as shown in the following picture:
+ :::image type="content" source="./media/diagnose-vm-network-traffic-filtering-problem/enable-network-watcher.png" alt-text="Screenshot of how to Enable Network Watcher.":::
+1. Select your region, and then select **Add**.
### Use IP flow verify When you create a VM, Azure allows and denies network traffic to and from the VM, by default. You might later override Azure's defaults, allowing or denying additional types of traffic.
-1. In the portal, select **All services**. In the **All services** *Filter* box, enter *Network Watcher*. When **Network Watcher** appears in the results, select it.
-2. Select **IP flow verify**, under **NETWORK DIAGNOSTIC TOOLS**.
-3. Select your subscription, enter or select the following values, and then select **Check**, as shown in the picture that follows:
+1. In the **Home** portal, select **More services**. In the **All services** *Filter* box, enter *Network Watcher*. When **Network Watcher** appears in the results, select it.
+1. Select **IP flow verify**, under **Network diagnostic tools**.
+1. Select your subscription, enter or select the following values, and then select **Check**, as shown in the picture that follows:
|Setting |Value | | | |
- | Resource group | Select myResourceGroup |
- | Virtual machine | Select myVm |
+ | Subscription | Select your subscription |
+ | Resource group | Select **myResourceGroup** |
+ | Virtual machine | Select **myVm** |
| Network interface | myvm - The name of the network interface the portal created when you created the VM is different. | | Protocol | TCP | | Direction | Outbound |
When you create a VM, Azure allows and denies network traffic to and from the VM
| Remote IP address | 13.107.21.200 - One of the addresses for <www.bing.com>. | | Remote port | 80 |
- ![IP flow verify](./media/diagnose-vm-network-traffic-filtering-problem/ip-flow-verify-outbound.png)
+ :::image type="content" source="./media/diagnose-vm-network-traffic-filtering-problem/ip-flow-verify-outbound.png" alt-text="Screenshot of values to input in IP flow verify.":::
After a few seconds, the result returned informs you that access is allowed because of a security rule named **AllowInternetOutbound**. When you ran the check, Network Watcher automatically created a network watcher in the East US region, if you had an existing network watcher in a region other than the East US region before you ran the check.
-4. Complete step 3 again, but change the **Remote IP address** to **172.31.0.100**. The result returned informs you that access is denied because of a security rule named **DefaultOutboundDenyAll**.
-5. Complete step 3 again, but change the **Direction** to **Inbound**, the **Local port** to **80** and the **Remote port** to **60000**. The result returned informs you that access is denied because of a security rule named **DefaultInboundDenyAll**.
+1. Complete step 3 again, but change the **Remote IP address** to **172.31.0.100**. The result returned informs you that access is denied because of a security rule named **DenyAllOutBound**.
+1. Complete step 3 again, but change the **Direction** to **Inbound**, the **Local port** to **80** and the **Remote port** to **60000**. **Remote IP address** remains **172.31.0.100**. The result returned informs you that access is denied because of a security rule named **DenyAllInBound**.
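The first-match evaluation that IP flow verify reports can be sketched in Python. This is an illustrative model only, not Azure's implementation; the rule set, the `Internet` destination (modeled here as the 12.0.0.0/6 prefix discussed later in this article), and the function name are assumptions:

```python
import ipaddress

def evaluate(rules, direction, remote_ip, port):
    """Return (rule name, access) of the first matching rule.

    Illustrative model of NSG evaluation: rules are checked in ascending
    priority order (lower number wins) and the first match decides.
    """
    addr = ipaddress.ip_address(remote_ip)
    for rule in sorted(rules, key=lambda r: r["priority"]):
        if (rule["direction"] == direction
                and addr in ipaddress.ip_network(rule["prefix"])
                and port in rule["ports"]):
            return rule["name"], rule["access"]
    return None

# Hypothetical rule set mirroring the outbound checks in steps 3 and 4.
outbound_rules = [
    {"name": "AllowInternetOutbound", "priority": 65001, "direction": "Outbound",
     "prefix": "12.0.0.0/6", "ports": range(0, 65536), "access": "Allow"},
    {"name": "DenyAllOutBound", "priority": 65500, "direction": "Outbound",
     "prefix": "0.0.0.0/0", "ports": range(0, 65536), "access": "Deny"},
]

print(evaluate(outbound_rules, "Outbound", "13.107.21.200", 80))  # allowed
print(evaluate(outbound_rules, "Outbound", "172.31.0.100", 80))   # denied
```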
Now that you know which security rules are allowing or denying traffic to or from a VM, you can determine how to resolve the problems.

## View details of a security rule
-1. To determine why the rules in steps 3-5 of **Use IP flow verify** allow or deny communication, review the effective security rules for the network interface in the VM. In the search box at the top of the portal, enter *myvm*. When the **myvm** (or whatever the name of your network interface is) network interface appears in the search results, select it.
-2. Select **Effective security rules** under **SUPPORT + TROUBLESHOOTING**, as shown in the following picture:
+To determine why the rules in steps 3-5 of **Use IP flow verify** allow or deny communication, review the effective security rules for the network interface in the VM.
- ![Effective security rules](./media/diagnose-vm-network-traffic-filtering-problem/effective-security-rules.png)
+1. In the search box at the top of the portal, enter *myvm*. When the **myvm Regular Network Interface** appears in the search results, select it.
+1. Select **Effective security rules** under **Support + troubleshooting**, as shown in the following picture:
+
+ :::image type="content" source="./media/diagnose-vm-network-traffic-filtering-problem/effective-security-rules.png" alt-text="Screenshot of Effective security rules.":::
- In step 3 of **Use IP flow verify**, you learned that the reason the communication was allowed is because of the **AllowInternetOutbound** rule. You can see in the previous picture that the **DESTINATION** for the rule is **Internet**. It's not clear how 13.107.21.200, the address you tested in step 3 of **Use IP flow verify**, relates to **Internet** though.
-3. Select the **AllowInternetOutBound** rule, and then select **Destination**, as shown in the following picture:
+ In step 3 of **Use IP flow verify**, you learned that the reason the communication was allowed is because of the **AllowInternetOutbound** rule. You can see in the previous picture that the **Destination** for the rule is **Internet**. It's not clear how 13.107.21.200, the address you tested in step 3 of **Use IP flow verify**, relates to **Internet** though.
+1. Select the **AllowInternetOutBound** rule, and then scroll down to **Destination**, as shown in the following picture:
+
+ :::image type="content" source="./media/diagnose-vm-network-traffic-filtering-problem/security-rule-prefixes.png" alt-text="Screenshot of Security rule prefixes.":::
- ![Security rule prefixes](./media/diagnose-vm-network-traffic-filtering-problem/security-rule-prefixes.png)
+ One of the prefixes in the list is **12.0.0.0/6**, which encompasses the 12.0.0.1-15.255.255.254 range of IP addresses. Since 13.107.21.200 is within that address range, the **AllowInternetOutBound** rule allows the outbound traffic. Additionally, there are no higher priority (lower number) rules shown in the picture in step 2 that override this rule. Close the **Address prefixes** box. To deny outbound communication to 13.107.21.200, you could add a security rule with a higher priority, that denies port 80 outbound to the IP address.
- One of the prefixes in the list is **12.0.0.0/6**, which encompasses the 12.0.0.1-15.255.255.254 range of IP addresses. Since 13.107.21.200 is within that address range, the **AllowInternetOutBound** rule allows the outbound traffic. Additionally, there are no higher priority (lower number) rules shown in the picture in step 2 that override this rule. Close the **Address prefixes** box. To deny outbound communication to 13.107.21.200, you could add a security rule with a higher priority, that denies port 80 outbound to the IP address.
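The prefix arithmetic can be verified with Python's standard `ipaddress` module: 12.0.0.0/6 spans 12.0.0.0 through 15.255.255.255 and therefore contains 13.107.21.200, whereas a /8 would cover only the 12.x.x.x range:

```python
import ipaddress

net = ipaddress.ip_network("12.0.0.0/6")
print(net[0], net[-1])  # 12.0.0.0 15.255.255.255

addr = ipaddress.ip_address("13.107.21.200")
print(addr in net)                                 # True
print(addr in ipaddress.ip_network("12.0.0.0/8"))  # False: /8 covers only 12.x.x.x
```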
-4. When you ran the outbound check to 172.131.0.100 in step 4 of **Use IP flow verify**, you learned that the **DefaultOutboundDenyAll** rule denied communication. That rule equates to the **DenyAllOutBound** rule shown in the picture in step 2 that specifies **0.0.0.0/0** as the **DESTINATION**. This rule denies the outbound communication to 172.131.0.100, because the address is not within the **DESTINATION** of any of the other **Outbound rules** shown in the picture. To allow the outbound communication, you could add a security rule with a higher priority, that allows outbound traffic to port 80 for the 172.131.0.100 address.
-5. When you ran the inbound check from 172.131.0.100 in step 5 of **Use IP flow verify**, you learned that the **DefaultInboundDenyAll** rule denied communication. That rule equates to the **DenyAllInBound** rule shown in the picture in step 2. The **DenyAllInBound** rule is enforced because no other higher priority rule exists that allows port 80 inbound to the VM from 172.31.0.100. To allow the inbound communication, you could add a security rule with a higher priority, that allows port 80 inbound from 172.31.0.100.
+1. When you ran the outbound check to 172.31.0.100 in step 4 of **Use IP flow verify**, you learned that the **DenyAllOutBound** rule denied communication. That rule, shown in the picture in step 2, specifies **0.0.0.0/0** as the **Destination**. This rule denies the outbound communication to 172.31.0.100, because the address is not within the **Destination** of any of the other **Outbound rules** shown in the picture. To allow the outbound communication, you could add a security rule with a higher priority, that allows outbound traffic to port 80 for the 172.31.0.100 address.
+1. When you ran the inbound check from 172.31.0.100 in step 5 of **Use IP flow verify**, you learned that the **DenyAllInBound** rule denied communication. That rule is shown in the picture in step 2. The **DenyAllInBound** rule is enforced because no other higher priority rule exists that allows port 80 inbound to the VM from 172.31.0.100. To allow the inbound communication, you could add a security rule with a higher priority, that allows port 80 inbound from 172.31.0.100.
The checks in this quickstart tested Azure configuration. If the checks return expected results and you still have network problems, ensure that you don't have a firewall between your VM and the endpoint you're communicating with and that the operating system in your VM doesn't have a firewall that is allowing or denying communication.
When no longer needed, delete the resource group and all of the resources it contains:

1. Enter *myResourceGroup* in the **Search** box at the top of the portal. When you see **myResourceGroup** in the search results, select it.
-2. Select **Delete resource group**.
-3. Enter *myResourceGroup* for **TYPE THE RESOURCE GROUP NAME:** and select **Delete**.
+1. Select **Delete resource group**.
+1. Enter *myResourceGroup* for **TYPE THE RESOURCE GROUP NAME:** and select **Delete**.
## Next steps
open-datasets How To Create Azure Machine Learning Dataset From Open Dataset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/open-datasets/how-to-create-azure-machine-learning-dataset-from-open-dataset.md
Title: Create datasets with Azure Open Datasets
description: Learn how to create an Azure Machine Learning dataset from Azure Open Datasets. --++ Last updated 08/05/2020
purview Reference Azure Purview Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/reference-azure-purview-glossary.md
Microsoft Purview features that enable customers to view and manage the metadata
## Data curator

A role that provides access to the data catalog to manage assets, configure custom classifications, set up glossary terms, and view insights. Data curators can create, read, modify, move, and delete assets. They can also apply annotations to assets.

## Data map
-A metadata repository that is the foundation of Microsoft Purview. The data map is a graph that describes assets across a data estate and is populated through scans and other data ingestion processes. This graph helps organizations understand and govern their data by providing rich descriptions of assets, representing data lineage, classifying assets, storing relationships between assets, and housing information at both the technical and semantic layers. The data map is an open platform that can be interacted with and accessed through Apache Atlas APIs or the Microsoft Purview Governance Portal.
+A metadata repository that is the foundation of Microsoft Purview. The data map is a graph that describes assets across a data estate and is populated through scans and other data ingestion processes. This graph helps organizations understand and govern their data by providing rich descriptions of assets, representing data lineage, classifying assets, storing relationships between assets, and housing information at both the technical and semantic layers. The data map is an open platform that can be interacted with and accessed through Apache Atlas APIs or the Microsoft Purview governance portal.
## Data map operation

A create, read, update, or delete action performed on an entity in the data map. For example, creating an asset in the data map is considered a data map operation.

## Data owner
purview Register Scan Azure Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-sql-database.md
To scan your data source, you'll need to configure an authentication method in t
The following options are supported:
-* **System-assigned managed identity** (Recommended) - This is an identity associated directly with your Microsoft Purview account that allows you to authenticate directly with other Azure resources without needing to manage a go-between user or credential set. The **system-assigned** managed identity is created when your Microsoft Purview resource is created, is managed by Azure, and uses your Microsoft Purview account's name. The SAMI can't currently be used with a self-hosted integration runtime for Azure SQL. For more information, see the [managed identity overview](/active-directory/managed-identities-azure-resources/overview).
+* **System-assigned managed identity** (Recommended) - This is an identity associated directly with your Microsoft Purview account that allows you to authenticate directly with other Azure resources without needing to manage a go-between user or credential set. The **system-assigned** managed identity is created when your Microsoft Purview resource is created, is managed by Azure, and uses your Microsoft Purview account's name. The SAMI can't currently be used with a self-hosted integration runtime for Azure SQL. For more information, see the [managed identity overview](/azure/active-directory/managed-identities-azure-resources/overview).
* **User-assigned managed identity** (preview) - Similar to a SAMI, a user-assigned managed identity (UAMI) is a credential resource that allows Microsoft Purview to authenticate against Azure Active Directory. The **user-assigned** managed identity is managed by users in Azure, rather than by Azure itself, which gives you more control over security. The UAMI can't currently be used with a self-hosted integration runtime for Azure SQL. For more information, see our [guide for user-assigned managed identities.](manage-credentials.md#create-a-user-assigned-managed-identity)
-* **Service Principal**- A service principal is an application that can be assigned permissions like any other group or user, without being associated directly with a person. Their authentication has an expiration date, and so can be useful for temporary projects. For more information, see the [service principal documenatation](/active-directory/develop/app-objects-and-service-principals).
+* **Service Principal** - A service principal is an application that can be assigned permissions like any other group or user, without being associated directly with a person. Their authentication has an expiration date, and so can be useful for temporary projects. For more information, see the [service principal documentation](/azure/active-directory/develop/app-objects-and-service-principals).
* **SQL Authentication** - Connect to the SQL database with a username and password. For more information about SQL Authentication, you can [follow the SQL authentication documentation](/sql/relational-databases/security/choose-an-authentication-mode#connecting-through-sql-server-authentication). If you need to create a login, follow this [guide to query an Azure SQL database](/azure/azure-sql/database/connect-query-portal), and use [this guide to create a login using T-SQL.](/sql/t-sql/statements/create-login-transact-sql)

> [!NOTE]
role-based-access-control Change History Report https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/change-history-report.md
Here are the basic steps to get started:
1. [Create a Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md).
-1. [Configure the Activity Log Analytics solution](../azure-monitor/essentials/activity-log.md#activity-log-analytics-monitoring-solution) for your workspace.
+1. [Configure the Activity log](../azure-monitor/essentials/activity-log.md) for your workspace.
-1. [View the activity logs](../azure-monitor/essentials/activity-log.md#activity-log-analytics-monitoring-solution). A quick way to navigate to the Activity Log Analytics solution Overview page is to click the **Logs** option.
+1. [View the activity log insights](../azure-monitor/essentials/activity-log.md). A quick way to navigate to the Activity Log Overview page is to click the **Logs** option.
![Azure Monitor logs option in portal](./media/change-history-report/azure-log-analytics-option.png)
route-server Route Injection In Spokes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/route-injection-in-spokes.md
Title: 'Default route injection in spoke VNets' description: Learn about how Azure Route Server injects routes in VNets. -+ Last updated 02/03/2022
Azure Route Server offers a centralized point where Network Virtual Appliances (
The following diagram depicts a simple hub and spoke design with a hub VNet and two spokes. In the hub, a Network Virtual Appliance (NVA) and a Route Server have been deployed. Without Route Server, User-Defined Routes (UDRs) would have to be configured in every spoke (usually containing a default route for 0.0.0.0/0) that send all traffic from the spokes through this NVA, for example to get it inspected for security purposes. However, if the NVA advertises network prefixes via BGP to the Route Server, these prefixes will appear as effective routes in any virtual machine deployed in the hub or in any spoke. For spokes to "learn" the Route Server routes, they need to be peered with the hub VNet with the setting "Use Remote Gateway".
If the NVA is used to provide connectivity to on-premises networks via IPsec VPNs or SD-WAN technologies, the same mechanism can be used to attract traffic from the spokes to the NVA. Additionally, the NVA can dynamically learn the Azure prefixes from the Azure Route Server, and advertise them with a dynamic routing protocol to on-premises. The following diagram describes this setup:

## Connectivity to on-premises through Azure Virtual Network Gateways

If a VPN or an ExpressRoute gateway exists in the same VNet as the Route Server and NVA to provide connectivity to on-premises networks, routes learned by these gateways will be programmed as well in the spoke VNets. These routes would override the default route injected by the Route Server, since they would be more specific (longer network masks). The following diagram describes the previous design, where an ExpressRoute gateway has been added.

You cannot configure the subnets in the spoke VNets to only learn the routes from the Azure Route Server. Disabling "Virtual network gateway route propagation" in a route table associated to a subnet would prevent both types of routes (routes from the Virtual Network Gateway and routes from the Azure Route Server) from being injected on NICs in that subnet.
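The longest-prefix-match behavior described above (gateway routes override the Route Server's default route because they are more specific) can be sketched with Python's `ipaddress` module; the prefixes and next-hop labels here are hypothetical:

```python
import ipaddress

# Hypothetical effective routes: a default route from the NVA/Route Server
# and a more specific on-premises prefix from the virtual network gateway.
routes = {
    "0.0.0.0/0": "NVA (default route injected via Route Server)",
    "10.1.0.0/16": "Virtual network gateway (on-premises prefix)",
}

def next_hop(destination):
    """Pick the matching route with the longest (most specific) prefix."""
    addr = ipaddress.ip_address(destination)
    matches = [p for p in routes if addr in ipaddress.ip_network(p)]
    best = max(matches, key=lambda p: ipaddress.ip_network(p).prefixlen)
    return routes[best]

print(next_hop("10.1.2.3"))  # the /16 gateway route wins over the /0 default
print(next_hop("8.8.8.8"))   # only the default route matches
```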
+Note that Azure Route Server by default will advertise all prefixes learned from the NVA to ExpressRoute too. This might not be desired, for example because of the route limits of ExpressRoute or the Route Server itself. In that case, the NVA can announce its routes to the Route Server including the BGP community `no-advertise` (with value 65535:65282). When Azure Route Server receives routes with this BGP community, it will push them to the subnets, but it will not advertise them to any other BGP peer (like ExpressRoute or VPN gateways, or other NVAs).
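The `no-advertise` community mentioned above is the well-known NO_ADVERTISE community from RFC 1997, a 32-bit value whose high 16 bits are 65535 and low 16 bits are 65282. A quick sanity check of that encoding (the helper function is illustrative):

```python
def encode_community(high, low):
    """Encode a BGP community written as HIGH:LOW into its 32-bit value."""
    return (high << 16) | low

no_advertise = encode_community(65535, 65282)
print(hex(no_advertise))  # 0xffffff02, the well-known NO_ADVERTISE community
```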
+
+## SDWAN coexistence with ExpressRoute and Azure Firewall
+
+A particular case of the previous design is when customers insert the Azure Firewall in the traffic flow to inspect all traffic going to on-premises networks, either via ExpressRoute or via SD-WAN/VPN appliances. In this situation, all spoke subnets have route tables that prevent the spokes from learning any route from ExpressRoute or the Route Server, and have default routes (0.0.0.0/0) with the Azure Firewall as next hop, as the following diagram shows:
+The Azure Firewall subnet will learn the routes coming from both ExpressRoute and the VPN/SDWAN NVA, and will decide whether to send traffic one way or the other. As described in the previous section, if the NVA appliance advertises more than 200 routes to the Azure Route Server, it should send its BGP routes marked with the BGP community `no-advertise`. This way, the SDWAN prefixes will not be injected back to on-premises via ExpressRoute.
+ ## Traffic symmetry

If multiple NVA instances are used in an active/active fashion for better resiliency or scalability, traffic symmetry will be a requirement if the NVAs need to keep the state of the connections. This is, for example, the case with Next Generation Firewalls.
search Search Indexer Securing Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-indexer-securing-resources.md
Your Azure resources could be protected using any number of the network isolatio
| Resource | IP restriction | Private endpoint |
| | | - |
| Azure Storage for text-based indexing (blobs, ADLS Gen 2, files, tables) | Supported only if the storage account and search service are in different regions. | Supported |
-| Azure Storage for AI enrichment (caching, debug sessions, knowledge store) | Supported only if the storage account and search service are in different regions. | Unsupported |
+| Azure Storage for AI enrichment (caching, debug sessions, knowledge store) | Supported only if the storage account and search service are in different regions. | Supported |
| Azure Cosmos DB - SQL API | Supported | Supported |
| Azure Cosmos DB - MongoDB API | Supported | Unsupported |
| Azure Cosmos DB - Gremlin API | Supported | Unsupported |
search Search Query Partial Matching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-query-partial-matching.md
Azure Cognitive Search scans for whole tokenized terms in the index and won't fi
+ [Wildcard operators with prefix matching](query-simple-syntax.md#prefix-search) refers to a generally recognized pattern that includes the beginning of a term, followed by `*` or `?` suffix operators, such as `search=cap*` matching on "Cap'n Jack's Waterfront Inn" or "Gacc Capital". Prefix matching is supported in both simple and full Lucene query syntax.
-+ [Wildcard with infix and suffix matching](query-lucene-syntax.md#bkmk_wildcard) places the `*` and `?` operators inside or at the beginning of a term, and requires regular expression syntax (where the expression is enclosed with forward slashes). For example, the query string (`search=/.*numeric*./`) returns results on "alphanumeric" and "alphanumerical" as suffix and infix matches.
++ [Wildcard with infix and suffix matching](query-lucene-syntax.md#bkmk_wildcard) places the `*` and `?` operators inside or at the beginning of a term, and requires regular expression syntax (where the expression is enclosed with forward slashes). For example, the query string (`search=/.*numeric.*/`) returns results on "alphanumeric" and "alphanumerical" as suffix and infix matches. For regular expression, wildcard, and fuzzy search, analyzers are not used at query time. For these query forms, which the parser detects by the presence of operators and delimiters, the query string is passed to the engine without lexical analysis, and the analyzer specified on the field is ignored.
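The behavior of the infix/suffix expression can be approximated with Python's `re` module; note that Lucene regular expression queries match against the whole term, so `fullmatch` is the closer analogy (this is an illustration, not the Lucene engine itself):

```python
import re

# The expression inside search=/.*numeric.*/
pattern = re.compile(r".*numeric.*")

for term in ["alphanumeric", "alphanumerical", "numeric", "alphabetical"]:
    print(term, bool(pattern.fullmatch(term)))
# "alphanumeric" and "alphanumerical" match as suffix/infix hits;
# "alphabetical" does not contain "numeric".
```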
security Steps Secure Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/steps-secure-identity.md
Previously updated : 06/15/2021 Last updated : 05/01/2022
Azure AD built-in and custom roles operate on concepts similar to roles found in
Both systems contain similarly used role definitions and role assignments. However, Azure AD role permissions can't be used in Azure custom roles and vice versa. As part of deploying your privileged account process, follow the best practice to create at least two emergency accounts to make sure you still have access to Azure AD if you lock yourself out.
-For more information, see the article [Plan a Privileged Identity Management deployment](../../active-directory/privileged-identity-management/pim-deployment-plan.md).
+For more information, see the article [Plan a Privileged Identity Management deployment](../../active-directory/privileged-identity-management/pim-deployment-plan.md) and [securing privileged access](/security/compass/overview).
### Restrict user consent operations
Microsoft recommends restricting user consent to allow end-user consent only for
Make sure users can request admin approval for new applications to reduce user friction, minimize support volume, and prevent users from signing up for applications using non-Azure AD credentials. Once you regulate your consent operations, administrators should audit app and consent permissions regularly.
-For more information, see the article [Azure Active Directory consent framework(../../active-directory/develop/consent-framework.md).
+For more information, see the article [Azure Active Directory consent framework](../../active-directory/develop/consent-framework.md).
## Step 3 - Automate threat response
sentinel Sentinel Solutions Catalog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sentinel-solutions-catalog.md
Title: Microsoft Sentinel content hub catalog | Microsoft Docs description: This article displays and details the currently available Microsoft Sentinel content hub packages.-+ Previously updated : 01/30/2022- Last updated : 04/20/2022+ # Microsoft Sentinel content hub catalog -
-> [!IMPORTANT]
->
-> The Microsoft Sentinel content hub experience is currently in **PREVIEW**, as are all individual solution packages. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
- [Microsoft Sentinel solutions](sentinel-solutions.md) provide a consolidated way to acquire Microsoft Sentinel content - like data connectors, workbooks, analytics, and automation - in your workspace with a single deployment step. This article lists the out-of-the-box (built-in), on-demand, Microsoft Sentinel data connectors and solutions available for you to deploy in your workspace. Deploying a solution makes any included security content, such as data connectors, playbooks, workbooks, or rules, available in the relevant area of Microsoft Sentinel. For more information, see [Centrally discover and deploy Microsoft Sentinel out-of-the-box content and solutions](sentinel-solutions-deploy.md).
+> [!IMPORTANT]
+>
+> The Microsoft Sentinel content hub experience is currently in **PREVIEW**, as are all individual solution packages. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+ ## Domain solutions

|Name |Includes |Categories |Supported by |
| **Microsoft Sentinel Deception** | [Workbooks, analytics rules, watchlists](monitor-key-vault-honeytokens.md) | Security - Threat Protection |Microsoft |
|**Zero Trust** (TIC3.0) |[Analytics rules, playbook, workbooks](/security/zero-trust/integrate/sentinel-solution) |Identity, Security - Others |Microsoft |
+## Apache
+
+|Name |Includes |Categories |Supported by |
+|||||
+|**Tomcat** |Data connector, parser | DevOps, application |Microsoft |
## Arista Networks
|**Arista Networks** (Awake Security) |Data connector, workbooks, analytics rules | Security - Network |[Arista - Awake Security](https://awakesecurity.com/) |
+## Atlassian
+
+|Name |Includes |Categories |Supported by |
+|||||
+|**Atlassian Confluence Audit** |Data connector |IT operations, application |Microsoft|
+|**Atlassian Jira Audit** |Workbook, analytics rules, hunting queries |DevOps |Microsoft|
## Armorblox
For more information, see [Centrally discover and deploy Microsoft Sentinel out-
|**Armorblox - Sentinel** |Data connector | Security - Threat protection |[Armorblox](https://www.armorblox.com/contact/) |

## Azure

|Name |Includes |Categories |Supported by |
|**Microsoft Sentinel Training Lab** |Workbook, analytics rules, playbooks, hunting queries | Training and tutorials |Microsoft |
|**Azure SQL** | [Data connector](data-connectors-reference.md#azure-sql-databases), workbook, analytics, playbooks, hunting queries | Application |Microsoft |
+## Bosch
+|Name |Includes |Categories |Supported by |
+|||||
+|**AIShield AI Security Monitoring**| Data connector, analytics rule, parser | Security - Threat Protection | [Bosch](https://www.bosch-softwaretechnologies.com/en/products-and-solutions/products-and-solutions/aishield/)|
## Box
|||||
|**Box Solution**| Data connector, workbook, analytics rules, hunting queries, parser | Storage, application | Microsoft|

## Check Point

|Name |Includes |Categories |Supported by |
|||||
|**Check Point Microsoft Sentinel Solutions** |[Data connector](data-connectors-reference.md#check-point), playbooks, custom Logic App connector | Security - Automation (SOAR) | [Checkpoint](https://www.checkpoint.com/support-services/contact-support/)|

## Cisco

|Name |Includes |Categories |Supported by |
|**Cisco Umbrella** |[Data connector](data-connectors-reference.md#cisco-umbrella-preview), workbooks, analytics rules, playbooks, hunting queries, parser, custom Logic App connector |Security - Cloud Security |Microsoft |
|**Cisco Web Security Appliance (WSA)** | Data connector, parser|Security - Network |Microsoft |

## Cloudflare

|Name |Includes |Categories |Supported by |
|||||
|**Cloudflare Solution**|Data connector, workbooks, analytics rules, hunting queries, parser| Security - Network, networking |Microsoft |

## Contrast Security

|Name |Includes |Categories |Supported by |
|||||
|**Contrast Protect Microsoft Sentinel Solution**|Data connector, workbooks, analytics rules |Security - Threat protection |Microsoft |
|||||
|**Continuous Threat Monitoring for GitHub** |[Data connector](data-connectors-reference.md#github-preview), parser, workbook, analytics rules |Cloud Provider |Microsoft |

## Google

|Name |Includes |Categories |Supported by |
|**Google Cloud Platform DNS Solution** |Data connector, parser |Cloud Provider, Networking |Microsoft |
|**Google Cloud Platform Cloud Monitoring Solution**|Data connector, parser |Cloud Provider | Microsoft|
|**Google Cloud Platform Identity and Access Management Solution**|Data connector, workbook, analytics rules, playbooks, hunting queries, parser, custom Logic App connector|Cloud Provider, Identity |Microsoft |
+|**Google Workspace Reports**|Workbook, analytics rules, hunting queries|IT Operations |Microsoft |
+## Holm Security
+|Name |Includes |Categories |Supported by |
+|||||
+|**Holm Security**| Data connector| Security - Threat Intelligence |[Holm Security](https://support.holmsecurity.com/hc)|
## HYAS
|||||
|**HYAS Insight for Microsoft Sentinel Solutions Gallery**| Playbooks| Security - Threat Intelligence, Security - Automation (SOAR) |Microsoft |

## Imperva

|Name |Includes |Categories |Supported by |
|||||
|**InfoBlox Threat Defense / InfoBlox Cloud Data Connector**| [Data connector](data-connectors-reference.md#infoblox-network-identity-operating-system-nios-preview), workbook, analytics rules| Security - Threat protection | Microsoft|

## IronNet

|Name |Includes |Categories |Supported by |
|||||
|**IronNet CyberSecurity Iron Defense - Microsoft Sentinel** | |Security - Network |Microsoft |
+## Joshua Cyberisk Vision
-
+|Name |Includes |Categories |Supported by |
+|||||
+|**Joshua Cyberisk Vision**| Playbooks| Security - Threat Intelligence |[Joshua Cyberisk Vision](https://www.cyberiskvision.com/) |
## Juniper
|**Juniper IDP** |Data connector, parser|Security - Network |Microsoft |

## Kaspersky

|Name |Includes |Categories |Supported by |
|||||
|**Kaspersky AntiVirus** |Data connector, parser | Security - Threat protection|Microsoft |
+## Lastpass
+|Name |Includes |Categories |Supported by |
+|||||
+|**Lastpass Enterprise Activity Monitoring** |Data connector, analytics rules, hunting queries, watchlist, workbook | Application|[The Collective Consulting](https://thecollective.eu) |
## Lookout
|Name |Includes |Categories |Supported by |
|||||
-|**Microsoft Sentinel 4 Microsoft Dynamics 365** | [Data connector](data-connectors-reference.md#dynamics-365), workbooks, analytics rules, and hunting queries | Application |Microsoft |
+|**Microsoft Defender for Endpoint** | Hunting queries, parsers | Security - Threat Protection |Microsoft |
+|**Microsoft Sentinel for Microsoft Dynamics 365** | [Data connector](data-connectors-reference.md#dynamics-365), workbooks, analytics rules, and hunting queries | Application |Microsoft |
|**Microsoft Sentinel for Teams** | Analytics rules, playbooks, hunting queries | Application | Microsoft | | **Microsoft Sysmon for Linux** | [Data connector](data-connectors-reference.md#microsoft-sysmon-for-linux-preview) | Platform | Microsoft |
+## NGINX
+|Name |Includes |Categories |Supported by |
+|||||
+|**Nginx** | Data connector, workbooks, analytics rules, hunting queries, parser | Security - Network, Networking, DevOps |Microsoft |
-## Oracle
+## NXLog
+|Name |Includes |Categories |Supported by |
+|||||
+|**NXLog AIX Audit** | Data connector, parser | IT operations |NXLog |
+|**NXLog DNS Logs** | Data connector | Networking |NXLog |
+
+## Oracle
|Name |Includes |Categories |Supported by | ||||| |**Oracle Cloud Infrastructure** |Data connector, parser | Cloud Provider | Microsoft|
-|**Oracle Database Audit Solution** | Data connector, workbook, analytics rules, hunting queries, parser| Application|Microsoft |
-
+|**Oracle Database Audit** | Data connector, workbook, analytics rules, hunting queries, parser| Application|Microsoft |
+|**Oracle WebLogic Server** | Data connector, workbook, analytics rules, hunting queries, parser| IT Operations|Microsoft |
## Palo Alto
For more information, see [Centrally discover and deploy Microsoft Sentinel out-
|**RSA SecurID** |Data connector, parser |Security - Others, Identity |Microsoft | -- ## SAP |Name |Includes |Categories |Supported by | ||||| |**Continuous Threat Monitoring for SAP**|[Data connector](sap-deploy-solution.md), [workbooks, analytics rules, watchlists](sap-solution-security-content.md) | Application |Community | - ## Semperis |Name |Includes |Categories |Supported by |
For more information, see [Centrally discover and deploy Microsoft Sentinel out-
||||| |**Senserva Offer for Microsoft Sentinel** |Data connector, workbooks, analytics rules, hunting queries |Compliance |[Senserva](https://www.senserva.com/support/) |
+## Shadowbytes
+
+|Name |Includes |Categories |Supported by |
+|||||
+|**Shadowbytes ARIA Threat Intelligence** |Data connector, playbook |Security - Threat protection |[Shadowbyte](https://shadowbyte.com/#contact)|
+
+## SIGNL4
+|Name |Includes |Categories |Supported by |
+|||||
+|**SIGNL4 Mobile Alerting** |Data connector, playbook |DevOps, IT Operations |[SIGNL4](https://www.signl4.com) |
## Sonrai Security
For more information, see [Centrally discover and deploy Microsoft Sentinel out-
||||| |**Slack Audit Solution**|Data connector, workbooks, analytics rules, hunting queries, parser |Application| Microsoft| -- ## Sophos |Name |Includes |Categories |Supported by |
For more information, see [Centrally discover and deploy Microsoft Sentinel out-
|**Sophos XG Firewall Solution**| Workbooks, analytics rules, parser |Security - Network |Microsoft | - ## Symantec |Name |Includes |Categories |Supported by |
For more information, see [Centrally discover and deploy Microsoft Sentinel out-
|**Symantec Endpoint**|Data connector, workbook, analytics rules, playbooks, hunting queries, parser| Security - Threat protection|Microsoft | |**Symantec ProxySG Solution**|Workbooks, analytics rules |Security - Network |Symantec | - ## Tenable |Name |Includes |Categories |Supported by | ||||| |**Tenable Nessus Scanner / IO VM reports for cloud** | Data connector, parser| Security - Vulnerability Management| Microsoft | -- ## Trend Micro |Name |Includes |Categories |Supported by | ||||| |**Trend Micro Apex One Solution** | Data connector, hunting queries, parser| Security - Threat protection|Microsoft | ---- ## Ubiquiti |Name |Includes |Categories |Supported by | ||||| |**Ubiquiti UniFi Solution**|Data connector, workbooks, analytics rules, hunting queries, parser |Security - Network |Microsoft | -- ## vArmour |Name |Includes |Categories |Supported by |
For more information, see [Centrally discover and deploy Microsoft Sentinel out-
||||| |**Vectra Stream Solution** |Data connector, hunting queries, parser |Security - Network |Microsoft | -- ## VMware |Name |Includes |Categories |Supported by | ||||| |**VMware Carbon Black Solution**|Workbooks, analytics rules| Security - Threat protection| Microsoft|-
+|**VMware ESXi**|Workbooks, analytics rules, data connectors, hunting queries, parser| IT Operations| Microsoft|
## Zeek Network
For more information, see [Centrally discover and deploy Microsoft Sentinel out-
||||| |**Corelight for Microsoft Sentinel**|Data connector, workbooks, analytics rules, hunting queries, parser | IT Operations, Security - Network | [Zeek Network](https://support.corelight.com/)|
+## Zscaler
+|Name |Includes |Categories |Supported by |
+|||||
+|**Zscaler Private Access**|Data connector, workbook, analytics rules, hunting queries, parser | Security - Network | Microsoft|
## Next steps
service-bus-messaging Transport Layer Security Configure Minimum Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/transport-layer-security-configure-minimum-version.md
resources
To test that the minimum required TLS version for a Service Bus namespace forbids calls made with an older version, you can configure a client to use an older version of TLS. For more information about configuring a client to use a specific version of TLS, see [Configure Transport Layer Security (TLS) for a client application](transport-layer-security-configure-client-version.md).
-When a client accesses a Service Bus namespace using a TLS version that does not meet the minimum TLS version configured for the namespace, Azure Service Bus returns error code 400 error (Bad Request) and a message indicating that the TLS version that was used is not permitted for making requests against this Service Bus namespace.
+When a client accesses a Service Bus namespace using a TLS version that does not meet the minimum TLS version configured for the namespace, Azure Service Bus returns error code 401 (Unauthorized) and a message indicating that the TLS version that was used is not permitted for making requests against this Service Bus namespace.
> [!NOTE] > When you configure a minimum TLS version for a Service Bus namespace, that minimum version is enforced at the application layer. Tools that attempt to determine TLS support at the protocol layer may return TLS versions in addition to the minimum required version when run directly against the Service Bus namespace endpoint.
See the following documentation for more information.
- [Enforce a minimum required version of Transport Layer Security (TLS) for requests to a Service Bus namespace](transport-layer-security-enforce-minimum-version.md) - [Configure Transport Layer Security (TLS) for a Service Bus client application](transport-layer-security-configure-client-version.md)-- [Use Azure Policy to audit for compliance of minimum TLS version for a Service Bus namespace](transport-layer-security-audit-minimum-version.md)
+- [Use Azure Policy to audit for compliance of minimum TLS version for a Service Bus namespace](transport-layer-security-audit-minimum-version.md)
service-fabric Service Fabric Debugging Your Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-debugging-your-application.md
Title: Debug your application in Visual Studio description: Improve the reliability and performance of your services by developing and debugging them in Visual Studio on a local development cluster. Previously updated : 11/02/2017 Last updated : 05/02/2022 # Debug your Service Fabric application by using Visual Studio
## Debug a local Service Fabric application+
+> [!IMPORTANT]
+> Remote debugging is not supported in Visual Studio 2022.
+ You can save time and money by deploying and debugging your Azure Service Fabric application in a local computer development cluster. Visual Studio 2019 or 2015 can deploy the application to the local cluster and automatically connect the debugger to all instances of your application. Visual Studio must be run as Administrator to connect the debugger. 1. Start a local development cluster by following the steps in [Setting up your Service Fabric development environment](service-fabric-get-started.md).
service-fabric Service Fabric Java Rest Api Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-java-rest-api-usage.md
Title: Azure Service Fabric Java Client APIs description: Generate and use Service Fabric Java client APIs using Service Fabric client REST API specification- Last updated 11/27/2017 - # Azure Service Fabric Java Client APIs
service-fabric Service Fabric Migrate Old Javaapp To Use Maven https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-migrate-old-javaapp-to-use-maven.md
Title: Migrate from Java SDK to Maven description: Update the older Java applications which used to use the Service Fabric Java SDK, to fetch Service Fabric Java dependencies from Maven. After completing this setup, your older Java applications would be able to build .- Last updated 08/23/2017 - # Update your previous Java Service Fabric application to fetch Java libraries from Maven Service Fabric Java binaries have moved from the Service Fabric Java SDK to Maven hosting. You can use **mavencentral** to fetch the latest Service Fabric Java dependencies. This guide will help you update existing Java applications created for the Service Fabric Java SDK using either Yeoman template or Eclipse to be compatible with the Maven-based build.
site-recovery Azure To Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-support-matrix.md
Oracle Linux | 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, 6.10, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5,
16.04 LTS | [9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d) | No new 16.04 LTS kernels supported in this release. | 16.04 LTS | [9.44](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094) | 4.4.0-21-generic to 4.4.0-206-generic,<br/>4.8.0-34-generic to 4.8.0-58-generic,<br/>4.10.0-14-generic to 4.10.0-42-generic,<br/>4.11.0-13-generic to 4.11.0-14-generic,<br/>4.13.0-16-generic to 4.13.0-45-generic,<br/>4.15.0-13-generic to 4.15.0-140-generic<br/>4.11.0-1009-azure to 4.11.0-1016-azure,<br/>4.13.0-1005-azure to 4.13.0-1018-azure <br/>4.15.0-1012-azure to 4.15.0-1111-azure| |||
-18.04 LTS |[9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | No new 18.04 LTS kernels supported in this release. |
+18.04 LTS |[9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 4.15.0-1134-azure </br> 4.15.0-1136-azure </br> 4.15.0-173-generic </br> 4.15.0-175-generic </br> 5.4.0-105-generic </br> 5.4.0-1073-azure </br> 5.4.0-1074-azure </br> 5.4.0-107-generic |
18.04 LTS |[9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | 5.4.0-92-generic </br> 4.15.0-166-generic </br> 4.15.0-1129-azure </br> 5.4.0-1065-azure </br> 4.15.0-1130-azure </br> 4.15.0-167-generic </br> 5.4.0-1067-azure </br> 5.4.0-1068-azure </br> 5.4.0-94-generic </br> 5.4.0-96-generic </br> 5.4.0-97-generic </br> 5.4.0-99-generic </br> 4.15.0-1131-azure </br> 4.15.0-169-generic </br> 5.4.0-100-generic </br> 5.4.0-1069-azure </br> 5.4.0-1070-azure | 18.04 LTS |[9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | 4.15.0-1126-azure </br> 4.15.0-1125-azure </br> 4.15.0-1123-azure </br> 5.4.0-1058-azure </br> 4.15.0-162-generic </br> 4.15.0-161-generic </br> 4.15.0-156-generic </br> 5.4.0-1061-azure to 5.4.0-1063-azure </br> 5.4.0-90-generic </br> 5.4.0-89-generic </br> 9.46 hotfix patch** </br> 4.15.0-1127-azure </br> 4.15.0-163-generic </br> 5.4.0-1064-azure </br> 5.4.0-91-generic | 18.04 LTS |[9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d) | 4.15.0-1123-azure </br> 5.4.0-1058-azure </br> 4.15.0-156-generic </br> 4.15.0-1125-azure </br> 4.15.0-161-generic </br> 5.4.0-1061-azure </br> 5.4.0-1062-azure </br> 5.4.0-89-generic | 18.04 LTS | [9.44](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094) | 4.15.0-20-generic to 4.15.0-140-generic </br> 4.18.0-13-generic to 4.18.0-25-generic </br> 5.0.0-15-generic to 5.0.0-65-generic </br> 5.3.0-19-generic to 5.3.0-72-generic </br> 5.4.0-37-generic to 5.4.0-70-generic </br> 4.15.0-1009-azure to 4.15.0-1111-azure </br> 4.18.0-1006-azure to 4.18.0-1025-azure </br> 5.0.0-1012-azure to 5.0.0-1036-azure </br> 5.3.0-1007-azure to 5.3.0-1035-azure </br> 5.4.0-1020-azure to 5.4.0-1043-azure </br> 4.15.0-1114-azure 
</br> 4.15.0-143-generic </br> 5.4.0-1047-azure </br> 5.4.0-73-generic </br> 4.15.0-1115-azure </br> 4.15.0-144-generic </br> 5.4.0-1048-azure </br> 5.4.0-74-generic </br> 4.15.0-1121-azure </br> 4.15.0-151-generic </br> 4.15.0-153-generic </br> 5.3.0-76-generic </br> 5.4.0-1055-azure </br> 5.4.0-80-generic </br> 4.15.0-147-generic </br> 4.15.0-153-generic </br> 5.4.0-1056-azure </br> 5.4.0-81-generic </br> 4.15.0-1122-azure </br> 4.15.0-154-generic | |||
-20.04 LTS |[9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 5.11.0-1007-azure </br> 5.11.0-1012-azure </br> 5.11.0-1013-azure </br> 5.11.0-1015-azure </br> 5.11.0-1017-azure </br> 5.11.0-1019-azure </br> 5.11.0-1020-azure </br> 5.11.0-1021-azure </br> 5.11.0-1022-azure </br> 5.11.0-1023-azure </br> 5.11.0-1025-azure </br> 5.11.0-1027-azure </br> 5.11.0-1028-azure </br> 5.11.0-22-generic </br> 5.11.0-25-generic </br> 5.11.0-27-generic </br> 5.11.0-34-generic </br> 5.11.0-36-generic </br> 5.11.0-37-generic </br> 5.11.0-38-generic </br> 5.11.0-40-generic </br> 5.11.0-41-generic </br> 5.11.0-43-generic </br> 5.11.0-44-generic </br> 5.11.0-46-generic |
+20.04 LTS |[9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 5.4.0-1074-azure </br> 5.4.0-107-generic |
20.04 LTS |[9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | 5.4.0-1065-azure </br> 5.4.0-92-generic </br> 5.8.0-1033-azure </br> 5.8.0-1036-azure </br> 5.8.0-1039-azure </br> 5.8.0-1040-azure </br> 5.8.0-1041-azure </br> 5.8.0-1042-azure </br> 5.8.0-1043-azure </br> 5.8.0-23-generic </br> 5.8.0-25-generic </br> 5.8.0-28-generic </br> 5.8.0-29-generic </br> 5.8.0-31-generic </br> 5.8.0-33-generic </br> 5.8.0-34-generic </br> 5.8.0-36-generic </br> 5.8.0-38-generic </br> 5.8.0-40-generic </br> 5.8.0-41-generic </br> 5.8.0-43-generic </br> 5.8.0-44-generic </br> 5.8.0-45-generic </br> 5.8.0-48-generic </br> 5.8.0-49-generic </br> 5.8.0-50-generic </br> 5.8.0-53-generic </br> 5.8.0-55-generic </br> 5.8.0-59-generic </br> 5.8.0-63-generic </br> 5.4.0-1067-azure </br> 5.4.0-1068-azure </br> 5.4.0-94-generic </br> 5.4.0-96-generic </br> 5.4.0-97-generic </br> 5.4.0-99-generic </br> 5.4.0-100-generic </br> 5.4.0-1069-azure </br> 5.4.0-1070-azure | 20.04 LTS |[9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | 5.4.0-84-generic </br> 5.4.0-1058-azure </br> 5.4.0-1061-azure </br> 5.4.0-1062-azure </br> 5.4.0-1063-azure </br> 5.4.0-89-generic </br> 5.4.0-90-generic </br> 9.46 hotfix patch** </br> 5.4.0-1064-azure </br> 5.4.0-91-generic | 20.04 LTS |[9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d) | 5.4.0-1058-azure </br> 5.4.0-84-generic </br> 5.4.0-1061-azure </br> 5.4.0-1062-azure </br> 5.4.0-89-generic |
Debian 9.1 | [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azu
Debian 9.1 | [9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d) | 4.19.0-0.bpo.18-amd64 </br> 4.19.0-0.bpo.18-cloud-amd64 Debian 9.1 | [9.44](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094) | 4.9.0-1-amd64 to 4.9.0-15-amd64 </br> 4.19.0-0.bpo.1-amd64 to 4.19.0-0.bpo.16-amd64 </br> 4.19.0-0.bpo.1-cloud-amd64 to 4.19.0-0.bpo.16-cloud-amd64 |||
-Debian 10 | [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 5.10.0-0.bpo.11-amd64 </br> 5.10.0-0.bpo.11-cloud-amd64 </br> 5.10.0-0.bpo.7-amd64 </br> 5.10.0-0.bpo.7-cloud-amd64 </br> 5.10.0-0.bpo.9-amd64 </br> 5.10.0-0.bpo.9-cloud-amd64 </br> 5.8.0-0.bpo.2-amd64 </br> 5.8.0-0.bpo.2-cloud-amd64 </br> 5.9.0-0.bpo.2-amd64 </br> 5.9.0-0.bpo.2-cloud-amd64 </br> 5.9.0-0.bpo.5-amd64 </br> 5.9.0-0.bpo.5-cloud-amd64
+Debian 10 | [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 4.19.0-20-amd64 </br> 4.19.0-20-cloud-amd64 |
Debian 10 | [9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | No new Debian 10 kernels supported in this release. Debian 10 | [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | No new Debian 10 kernels supported in this release. Debian 10 | [9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d) | 4.19.0-18-amd64 </br> 4.19.0-18-cloud-amd64
-Debian 10 | [9.44](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094) | 4.19.0-6-amd64 to 4.19.0-16-amd64 </br> 4.19.0-6-cloud-amd64 to 4.19.0-16-cloud-amd64 </br> 5.8.0-0.bpo.2-amd64 </br> 5.8.0-0.bpo.2-cloud-amd64
+Debian 10 | [9.44](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094) | 4.19.0-6-amd64 to 4.19.0-16-amd64 </br> 4.19.0-6-cloud-amd64 to 4.19.0-16-cloud-amd64
**Note: To support latest Linux kernels within 15 days of release, Azure Site Recovery rolls out hot fix patch on top of latest mobility agent version. This fix is rolled out in between two major version releases. To update to latest version of mobility agent (including hot fix patch) follow steps mentioned in [this article](service-updates-how-to.md#azure-vm-disaster-recovery-to-azure). This patch is currently rolled out for mobility agents used in Azure to Azure DR scenario.
SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.44](https://suppo
**Release** | **Mobility service version** | **Kernel version** | | | | SUSE Linux Enterprise Server 15, SP1, SP2 | [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | No new SLES 15 kernels supported in this release.
-SUSE Linux Enterprise Server 15, SP1, SP2 | [9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | By default, all [stock SUSE 15, SP1, SP2 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 5.3.18-36-azure </br> 5.3.18-38.11-azure </br> 5.3.18-38.14-azure </br> 5.3.18-38.17-azure </br> 5.3.18-38.22-azure </br> 5.3.18-38.25-azure </br> 5.3.18-38.28-azure </br> 5.3.18-38.3-azure </br> 5.3.18-38.31-azure </br> 5.3.18-38.8-azure </br> 5.3.18-57-default </br> 5.3.18-59.10-default </br> 5.3.18-59.13-default </br> 5.3.18-59.16-default </br> 5.3.18-59.19-default </br> 5.3.18-59.24-default </br> 5.3.18-59.27-default </br> 5.3.18-59.30-default </br> 5.3.18-59.34-default </br> 5.3.18-59.37-default </br> 5.3.18-59.5-default </br> 5.3.18-150300.38.37-azure:3 </br> 5.3.18-38.34-azure:3 </br> 5.3.18-150300.59.43-default:3 </br> 5.3.18-150300.59.46-default:3 </br> 5.3.18-59.40-default:3 </br> 5.3.18-150300.38.40-azure:3 </br> 5.3.18-150300.59.49-default:3
+SUSE Linux Enterprise Server 15, SP1, SP2 | [9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | By default, all [stock SUSE 15, SP1, SP2 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 5.3.18-38.31-azure </br> 5.3.18-38.8-azure </br> 5.3.18-57-default </br> 5.3.18-59.10-default </br> 5.3.18-59.13-default </br> 5.3.18-59.16-default </br> 5.3.18-59.19-default </br> 5.3.18-59.24-default </br> 5.3.18-59.27-default </br> 5.3.18-59.30-default </br> 5.3.18-59.34-default </br> 5.3.18-59.37-default </br> 5.3.18-59.5-default </br> 5.3.18-38.34-azure:3 </br> 5.3.18-150300.59.43-default:3 </br> 5.3.18-150300.59.46-default:3 </br> 5.3.18-59.40-default:3 </br>
SUSE Linux Enterprise Server 15, SP1, SP2 | [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | By default, all [stock SUSE 15, SP1, SP2 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.12.14-5.5-azure to 4.12.14-5.47-azure </br></br> 4.12.14-8.5-azure to 4.12.14-8.55-azure </br> 5.3.18-16-azure </br> 5.3.18-18.5-azure to 5.3.18-18.58-azure </br> 5.3.18-18.69-azure </br> 5.3.18-18.72-azure </br> 5.3.18-18.75-azure SUSE Linux Enterprise Server 15, SP1, SP2 | [9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d) | By default, all [stock SUSE 15, SP1, SP2 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.12.14-5.5-azure to 4.12.14-5.47-azure </br></br> 4.12.14-8.5-azure to 4.12.14-8.55-azure </br> 5.3.18-16-azure </br> 5.3.18-18.5-azure to 5.3.18-18.58-azure </br> 5.3.18-18.69-azure SUSE Linux Enterprise Server 15, SP1, SP2 | [9.44](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094) | By default, all [stock SUSE 15, SP1, SP2 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.12.14-5.5-azure to 4.12.14-5.47-azure </br></br> 4.12.14-8.5-azure to 4.12.14-8.55-azure </br> 5.3.18-16-azure </br> 5.3.18-18.5-azure to 5.3.18-18.58-azure
Tags | Supported | User-generated tags on NICs are replicated every 24 hours.
## Next steps - Read [networking guidance](./azure-to-azure-about-networking.md) for replicating Azure VMs.-- Deploy disaster recovery by [replicating Azure VMs](./azure-to-azure-quickstart.md).
+- Deploy disaster recovery by [replicating Azure VMs](./azure-to-azure-quickstart.md).
spring-cloud Expose Apps Gateway Tls Termination https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/expose-apps-gateway-tls-termination.md
Title: "Expose applications to the internet using Application Gateway with TLS termination" description: How to expose applications to internet using Application Gateway with TLS termination--++ Last updated 11/09/2021
spring-cloud How To Configure Palo Alto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-configure-palo-alto.md
Title: How to configure Palo Alto for Azure Spring Cloud description: How to configure Palo Alto for Azure Spring Cloud--++ Last updated 09/17/2021
spring-cloud How To Deploy With Custom Container Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-deploy-with-custom-container-image.md
+
+ Title: How to deploy applications in Azure Spring Cloud with a custom container image (Preview)
+description: How to deploy applications in Azure Spring Cloud with a custom container image
++++ Last updated : 4/28/2022++
+# Deploy an application with a custom container image (Preview)
+
+**This article applies to:** ✔️ Standard tier ✔️ Enterprise tier
+
+This article explains how to deploy Spring Boot applications in Azure Spring Cloud using a custom container image. Container deployment supports most of the same features as JAR deployment, and other Java and non-Java applications can also be deployed from a container image.
+
+## Prerequisites
+
+* A container image containing the application.
+* The image pushed to an image registry. For more information, see [Azure Container Registry](/azure/container-instances/container-instances-tutorial-prepare-acr).
+
+> [!NOTE]
+> The web application must listen on port `1025` for Standard tier and on port `8080` for Enterprise tier. How to change the port depends on the application's framework: for example, specify `SERVER_PORT=1025` for Spring Boot applications or `ASPNETCORE_URLS=http://+:1025/` for ASP.NET Core applications. The probe can be disabled for applications that don't listen on any port.
+
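+The port requirement above can be satisfied in the image itself. As a minimal sketch for a Spring Boot app on Standard tier (the base image and jar path below are hypothetical):
+
+```dockerfile
+FROM mcr.microsoft.com/openjdk/jdk:17-mariner
+COPY target/app.jar /app.jar
+# Standard tier expects the web application to listen on port 1025.
+ENV SERVER_PORT=1025
+ENTRYPOINT ["java", "-jar", "/app.jar"]
+```
+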
+## Deploy your application
+
+To deploy an application with a custom container image, use the following steps:
+
+# [Azure CLI](#tab/azure-cli)
+
+To deploy a container image, use one of the following commands:
+
+* To deploy a container image from the public Docker Hub to an app, use the following command:
+
+ ```azurecli
+ az spring-cloud app deploy \
+ --resource-group <your-resource-group> \
+ --name <your-app-name> \
+ --container-image <your-container-image> \
+ --service <your-service-name>
+ ```
+
+* To deploy a container image from ACR to an app, or from another private registry to an app, use the following command:
+
+ ```azurecli
+ az spring-cloud app deploy \
+ --resource-group <your-resource-group> \
+ --name <your-app-name> \
+ --container-image <your-container-image> \
+    --service <your-service-name> \
+    --container-registry <your-container-registry> \
+    --registry-password <your-password> \
+    --registry-username <your-username>
+ ```
+
+To overwrite the entry point of the image, add the following two arguments to any of the above commands:
+
+```azurecli
+ --container-command "java" \
+ --container-args "-jar /app.jar -Dkey=value"
+```
+
+To disable listening on a port for images that aren't web applications, add the following argument to the above commands:
+
+```azurecli
+ --disable-probe true
+```
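+
+Putting these options together, a complete deployment of a non-web image that overrides the entry point and disables the probe might look like the following sketch (all values are placeholders):
+
+```azurecli
+az spring-cloud app deploy \
+    --resource-group <your-resource-group> \
+    --service <your-service-name> \
+    --name <your-app-name> \
+    --container-image <your-container-image> \
+    --container-command "java" \
+    --container-args "-jar /app.jar -Dkey=value" \
+    --disable-probe true
+```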
+
+# [Portal](#tab/azure-portal)
+
+1. Open the [Azure portal](https://portal.azure.com).
+1. Open your existing Spring Cloud service instance.
+1. Select **Apps** from the left menu, then select **Create App**.
+1. Name your app, and in the **Runtime platform** pulldown list, select **Custom Container**.
+
+ :::image type="content" source="media/how-to-deploy-with-custom-container-image/create-app-custom-container.png" alt-text="Azure portal screenshot of Create App page with Runtime platform dropdown showing and Custom Container selected." lightbox="media/how-to-deploy-with-custom-container-image/create-app-custom-container.png":::
+
+1. Select **Edit** under *Image*, then fill in the fields as shown in the following image:
+
+ :::image type="content" source="media/how-to-deploy-with-custom-container-image/custom-image-settings.png" alt-text="Azure portal screenshot showing the Custom Image Settings pane." lightbox="media/how-to-deploy-with-custom-container-image/custom-image-settings.png":::
+
+ > [!NOTE]
+    > The **Commands** and **Arguments** fields are optional; they're used to override the `cmd` and `entrypoint` of the image.
+    >
+    > You also need to specify the **Language Framework**, which is the web framework used by the container image. Currently, only **Spring Boot** is supported; for other Java applications or non-Java (polyglot) applications, select **Polyglot**.
+
+1. Select **Save**, then select **Create** to deploy your application.
+++
+## Feature support matrix
+
+The following matrix shows what features are supported in each application type.
+
+| Feature | Spring Boot Apps - container deployment | Polyglot Apps - container deployment | Notes |
+|||||
+| App lifecycle management | ✔️ | ✔️ | |
+| Support for container registries | ✔️ | ✔️ | |
+| Assign endpoint | ✔️ | ✔️ | |
+| Azure Monitor | ✔️ | ✔️ | |
+| APM integration | ✔️ | ✔️ | Supported by [manual installation](#install-an-apm-into-the-image-manually) |
+| Blue/green deployment | ✔️ | ✔️ | |
+| Custom domain | ✔️ | ✔️ | |
+| Scaling - auto scaling | ✔️ | ✔️ | |
+| Scaling - manual scaling (in/out, up/down) | ✔️ | ✔️ | |
+| Managed Identity | ✔️ | ✔️ | |
+| Spring Cloud Eureka & Config Server | ✔️ | ❌ | |
+| API portal for VMware Tanzu® | ✔️ | ✔️ | Enterprise tier only |
+| Spring Cloud Gateway for VMware Tanzu® | ✔️ | ✔️ | Enterprise tier only |
+| Application Configuration Service for VMware Tanzu® | ✔️ | ❌ | Enterprise tier only |
+| VMware Tanzu® Service Registry | ✔️ | ❌ | Enterprise tier only |
+| VNET | ✔️ | ✔️ | Add registry to [allowlist in NSG or Azure Firewall](#avoid-not-being-able-to-connect-to-the-container-registry-in-a-vnet) |
+| Outgoing IP Address | ✔️ | ✔️ | |
+| E2E TLS | ✔️ | ✔️ | Trusting a self-signed CA is supported by [manual installation](#trust-a-certificate-authority-in-the-image) |
+| Liveness and readiness settings | ✔️ | ✔️ | |
+| Advanced troubleshooting - thread/heap/JFR dump | ✔️ | ❌ | The image must include `bash` and JDK with `PATH` specified. |
+| Bring your own storage | ✔️ | ✔️ | |
+| Integrate service binding with Resource Connector | ✔️ | ❌ | |
+| Availability Zone | ✔️ | ✔️ | |
+| App Lifecycle events | ✔️ | ✔️ | |
+| Reduced app size - 0.5 vCPU and 512 MB | ✔️ | ✔️ | |
+| Automate app deployments with Terraform | ✔️ | ✔️ | |
+| Soft Deletion | ✔️ | ✔️ | |
+| Interactive diagnostic experience (AppLens-based) | ✔️ | ✔️ | |
+| SLA | ✔️ | ✔️ | |
+
+> [!NOTE]
+> Polyglot apps include non-Spring Boot Java, NodeJS, AngularJS, Python, and .NET apps.
+
+## Common points to be aware of when deploying with a custom container
+
+The following points will help you address common situations when deploying with a custom image.
+
+### Trust a Certificate Authority in the image
+
+To trust a CA in the image, update the image as follows, depending on your language or environment:
+
+* For Java applications, import the certificate into the trust store by adding the following lines to your *Dockerfile*:
+
+ ```dockerfile
+ ADD EnterpriseRootCA.crt /opt/
+ RUN keytool -keystore /etc/ssl/certs/java/cacerts -storepass changeit -noprompt -trustcacerts -importcert -alias EnterpriseRootCA -file /opt/EnterpriseRootCA.crt
+ ```
+
+* For Node.js applications, set the `NODE_EXTRA_CA_CERTS` environment variable:
+
+ ```dockerfile
+ ADD EnterpriseRootCA.crt /opt/
+ ENV NODE_EXTRA_CA_CERTS="/opt/EnterpriseRootCA.crt"
+ ```
+
+* For Python, or other languages relying on the system CA store, on Debian or Ubuntu based images, add the following lines to your *Dockerfile*:
+
+ ```dockerfile
+ ADD EnterpriseRootCA.crt /usr/local/share/ca-certificates/
+ RUN /usr/sbin/update-ca-certificates
+ ```
+
+* For Python, or other languages relying on the system CA store, on CentOS or Fedora based images, add the following lines to your *Dockerfile*:
+
+ ```dockerfile
+ ADD EnterpriseRootCA.crt /etc/pki/ca-trust/source/anchors/
+ RUN /usr/bin/update-ca-trust
+ ```
+
+### Avoid unexpected behavior when images change
+
+When your application is restarted or scaled out, the latest image is always pulled. If the image has changed, newly started application instances use the new image while the old instances continue to use the old one. To avoid unexpected application behavior, don't use the `latest` tag, and don't overwrite an image without changing its tag.
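+
+One way to make deployments deterministic (the registry, image name, and tag below are hypothetical) is to push each build under a unique, immutable tag and deploy that tag explicitly:
+
+```azurecli
+# Build and push with a unique tag instead of "latest".
+docker build -t myacr.azurecr.io/myapp:1.0.3 .
+docker push myacr.azurecr.io/myapp:1.0.3
+
+# Deploy that exact tag so restarts and scale-outs pull the same image.
+az spring-cloud app deploy \
+    --resource-group <your-resource-group> \
+    --service <your-service-name> \
+    --name <your-app-name> \
+    --container-image myapp:1.0.3 \
+    --container-registry myacr.azurecr.io
+```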
+
+### Avoid not being able to connect to the container registry in a VNet
+
+If you deployed the instance to a VNet, make sure you allow the network traffic to your container registry in the NSG or Azure Firewall (if used). For more information, see [Customer responsibilities for running in VNet](/azure/spring-cloud/vnet-customer-responsibilities) to add the needed security rules.
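+
+As an illustrative sketch (the rule name, priority, and NSG name are hypothetical; adapt them to your own rules), an outbound NSG rule allowing Azure Container Registry traffic via its service tag could look like this:
+
+```azurecli
+# Allow outbound HTTPS traffic to Azure Container Registry.
+az network nsg rule create \
+    --resource-group <your-resource-group> \
+    --nsg-name <your-nsg-name> \
+    --name AllowAzureContainerRegistry \
+    --priority 200 \
+    --direction Outbound \
+    --access Allow \
+    --protocol Tcp \
+    --destination-port-ranges 443 \
+    --destination-address-prefixes AzureContainerRegistry
+```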
+
+### Install an APM into the image manually
+
+The installation steps vary by APM and language. The following steps are for New Relic with Java applications. Modify the *Dockerfile* using the following steps:
+
+1. Download and install the agent file into the image by adding the following to the *Dockerfile*:
+
+ ```dockerfile
+ ADD newrelic-agent.jar /opt/agents/newrelic/java/newrelic-agent.jar
+ ```
+
+1. Add the environment variables required by the APM:
+
+ ```dockerfile
+ ENV NEW_RELIC_APP_NAME=appName
+ ENV NEW_RELIC_LICENSE_KEY=newRelicLicenseKey
+ ```
+
+1. Modify the image entry point to add `-javaagent:/opt/agents/newrelic/java/newrelic-agent.jar` to the `java` command that starts your application.
+
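Putting the New Relic steps together, a complete *Dockerfile* might look like the following sketch. The base image `eclipse-temurin:11` and the application JAR name `app.jar` are assumptions for illustration, not requirements:

```dockerfile
# Hypothetical base image and app JAR; adjust to your build.
FROM eclipse-temurin:11
ADD app.jar /app.jar

# APM agent and its required environment variables.
ADD newrelic-agent.jar /opt/agents/newrelic/java/newrelic-agent.jar
ENV NEW_RELIC_APP_NAME=appName
ENV NEW_RELIC_LICENSE_KEY=newRelicLicenseKey

# Entry point modified to load the agent.
ENTRYPOINT ["java", "-javaagent:/opt/agents/newrelic/java/newrelic-agent.jar", "-jar", "/app.jar"]
```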
+To install the agents for other languages, refer to the official documentation for each agent:
+
+New Relic:
+
+* Python: [Standard Python agent install](https://docs.newrelic.com/docs/apm/agents/python-agent/installation/standard-python-agent-install/)
+* Node.js: [Install the Node.js agent](https://docs.newrelic.com/docs/apm/agents/nodejs-agent/installation-configuration/install-nodejs-agent/)
+
+Dynatrace:
+
+* Python: [Instrument Python applications with OpenTelemetry](https://www.dynatrace.com/support/help/extend-dynatrace/opentelemetry/opentelemetry-traces/opentelemetry-ingest/opent-python)
+* Node.js: [Instrument Node.js applications with OpenTelemetry](https://www.dynatrace.com/support/help/extend-dynatrace/opentelemetry/opentelemetry-traces/opentelemetry-ingest/opent-nodejs)
+
+AppDynamics:
+
+* Python: [Install the Python Agent](https://docs.appdynamics.com/4.5.x/en/application-monitoring/install-app-server-agents/python-agent/install-the-python-agent)
+* Node.js: [Installing the Node.js Agent](https://docs.appdynamics.com/4.5.x/en/application-monitoring/install-app-server-agents/node-js-agent/install-the-node-js-agent#InstalltheNode.jsAgent-install_nodejsInstallingtheNode.jsAgent)
+
+### View the container logs
+
+To view the console logs of your container application, use the following Azure CLI command:
+
+```azurecli
+az spring-cloud app logs \
+ --resource-group <your-resource-group> \
+ --name <your-app-name> \
+ --service <your-service-name> \
+ --instance <your-instance-name>
+```
+
+To view the container event logs in Azure Monitor, enter the following query:
+
+```kusto
+AppPlatformContainerEventLogs
+| where App == "hw-20220317-1b"
+```
++
+### Scan your image for vulnerabilities
+
+We recommend that you use Microsoft Defender for Cloud with Azure Container Registry (ACR) to scan your images for vulnerabilities. For more information, see [Microsoft Defender for Cloud](/azure/defender-for-cloud/defender-for-containers-introduction?tabs=defender-for-container-arch-aks#scanning-images-in-acr-registries).
+
+### Switch between JAR deployment and container deployment
+
+You can switch the deployment type directly by redeploying with the following command:
+
+```azurecli
+az spring-cloud app deploy \
+ --resource-group <your-resource-group> \
+ --name <your-app-name> \
+ --container-image <your-container-image> \
+ --service <your-service-name>
+```
+
+### Create another deployment with an existing JAR deployment
+
+You can create a new container deployment alongside an existing JAR deployment by using the following command:
+
+```azurecli
+az spring-cloud app deployment create \
+ --resource-group <your-resource-group> \
+ --name <your-deployment-name> \
+ --app <your-app-name> \
+ --container-image <your-container-image> \
+ --service <your-service-name>
+```
+
+> [!NOTE]
+> Automating deployments using Azure Pipelines Tasks or GitHub Actions is not currently supported.
+
+## Next steps
+
+* [How to capture dumps](/azure/spring-cloud/how-to-capture-dumps)
static-web-apps Branch Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/branch-environments.md
You can configure your site to deploy every change made to branches that aren't
To enable stable URL environments, make the following changes to your [configuration file](configuration.md). -- Set the `production_branch` input on the `static-web-apps-deploy` GitHub action to your production branch name. This ensures changes to your production branch are deployed to the production environment, while changes to other branches are deployed to a preview environment.-- List the branches you want to deploy to preview environments in the `on > push > branches` array in your workflow configuration so that changes to those branches also trigger the GitHub Actions deployment.
- - Set this array to `**` if you want to track all branches.
+- Set the `production_branch` input to your production branch name on the `static-web-apps-deploy` GitHub Actions job or the `AzureStaticWebApp` Azure Pipelines task. This ensures changes to your production branch are deployed to the production environment, while changes to other branches are deployed to a preview environment.
+- List the branches you want to deploy to preview environments in the trigger array in your workflow configuration so that changes to those branches also trigger the GitHub Actions or Azure Pipelines deployment.
+ - Set this array to `**` for GitHub Actions or `*` for Azure Pipelines if you want to track all branches.
## Example The following example demonstrates how to enable branch preview environments.
+# [GitHub Actions](#tab/github-actions)
+ ```yml name: Azure Static Web Apps CI/CD
jobs:
... production_branch: "main" ```
+# [Azure Pipelines](#tab/azure-devops)
+
+```yml
+trigger:
+ - main
+ - dev
+ - staging
+
+pool:
+ vmImage: ubuntu-latest
+
+steps:
+ - checkout: self
+ submodules: true
+ - task: AzureStaticWebApp@0
+ inputs:
+ ...
+ production_branch: 'main'
+```
+
+
> [!NOTE] > The `...` denotes code skipped for clarity.
static-web-apps Named Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/named-environments.md
+
+ Title: Create named preview environments in Azure Static Web Apps
+description: Expose stable URLs for named environments to evaluate changes in Azure Static Web Apps
++++ Last updated : 04/27/2022+++
+# Create named preview environments in Azure Static Web Apps
+
+You can configure your site to deploy every change to a named environment. This preview deployment is published at a stable URL that includes the environment name. For example, if the environment is named `release`, then the preview is available at a location like `<DEFAULT_HOST_NAME>-release.<LOCATION>.azurestaticapps.net`.
+
+## Configuration
+
+To enable stable URL environments with a named deployment environment, make the following changes to your [configuration file](configuration.md).
+
+- Set the `deployment_environment` input to a specific name on the `static-web-apps-deploy` GitHub Actions job or the `AzureStaticWebApp` Azure Pipelines task. This ensures all changes to your tracked branches are deployed to the named preview environment.
+- List the branches you want to deploy to preview environments in the trigger array in your workflow configuration so that changes to those branches also trigger the GitHub Actions or Azure Pipelines deployment.
+ - Set this array to `**` for GitHub Actions or `*` for Azure Pipelines if you want to track all branches.
+
+## Example
+
+The following example demonstrates how to enable named preview environments.
+
+# [GitHub Actions](#tab/github-actions)
+
+```yml
+name: Azure Static Web Apps CI/CD
+
+on:
+ push:
+ branches:
+ - "**"
+ pull_request:
+ types: [opened, synchronize, reopened, closed]
+ branches:
+ - main
+
+jobs:
+ build_and_deploy_job:
+ ...
+ name: Build and Deploy Job
+ steps:
+ - uses: actions/checkout@v2
+ with:
+ submodules: true
+ - name: Build And Deploy
+ id: builddeploy
+ uses: Azure/static-web-apps-deploy@v1
+ with:
+ ...
+ deployment_environment: "release"
+```
+# [Azure Pipelines](#tab/azure-devops)
+
+```yml
+trigger:
+ - "*"
+
+pool:
+ vmImage: ubuntu-latest
+
+steps:
+ - checkout: self
+ submodules: true
+ - task: AzureStaticWebApp@0
+ inputs:
+ ...
+ deployment_environment: "release"
+```
+++
+> [!NOTE]
+> The `...` denotes code skipped for clarity.
+
+In this example, changes to all branches will be deployed to the `release` named preview environment.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Review pull requests in pre-production environments](./review-publish-pull-requests.md)
static-web-apps Preview Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/preview-environments.md
The following deployment types are available in Azure Static Web Apps.
- [**Pull requests**](review-publish-pull-requests.md): Pull requests against your production branch deploy to a temporary environment that disappears after the pull request is closed. The URL for this environment includes the PR number as a suffix. For example, if you make your first PR, the preview location looks something like `<DEFAULT_HOST_NAME>-1.<LOCATION>.azurestaticapps.net`. -- [**Branch**](branch-environments.md): You can optionally configure your site to deploy every change made to branches that aren't a production branch. This preview deployment lives for the entire lifetime of the branch and is published at a stable URL that includes the branch name. For example, if the branch is named `dev`, then the environment is available at a location like `<DEFAULT_HOST_NAME>-dev.<LOCATION>.azurestaticapps.net`.
+- [**Branch**](branch-environments.md): You can optionally configure your site to deploy every change made to branches that aren't a production branch. This preview deployment is published at a stable URL that includes the branch name. For example, if the branch is named `dev`, then the environment is available at a location like `<DEFAULT_HOST_NAME>-dev.<LOCATION>.azurestaticapps.net`.
+
+- [**Named environment**](named-environments.md): You can configure your pipeline to deploy all changes to a named environment. This preview deployment is published at a stable URL that includes the environment name. For example, if the deployment environment is named `release`, then the environment is available at a location like `<DEFAULT_HOST_NAME>-release.<LOCATION>.azurestaticapps.net`.
## Next Steps
storage Data Lake Storage Explorer Acl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-explorer-acl.md
After you successfully sign in with an Azure account, the account and the Azure
:::image type="content" alt-text="Screenshot that shows Microsoft Azure Storage Explorer, and highlights the Account Management pane and Open Explorer button." source="./media/data-lake-storage-explorer-acl/storage-explorer-account-panel-sml.png" lightbox="./media/data-lake-storage-explorer-acl/storage-explorer-account-panel-sml.png":::
-When it completes connecting, Azure Storage Explorer loads with the **Explorer** tab shown. This view gives you insight to all of your Azure storage accounts as well as local storage configured through the [Azurite storage emulator](../common/storage-use-azurite.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json), [Cosmos DB](../../cosmos-db/storage-explorer.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) accounts, or [Azure Stack](/azure-stack/user/azure-stack-storage-connect-se?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) environments.
+When it completes connecting, Azure Storage Explorer loads with the **Explorer** tab shown. This view gives you insight to all of your Azure storage accounts as well as local storage configured through the [Azurite storage emulator](../common/storage-use-azurite.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) or [Azure Stack](/azure-stack/user/azure-stack-storage-connect-se?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) environments.
:::image type="content" alt-text="Microsoft Azure Storage Explorer - Connect window" source="./media/data-lake-storage-explorer-acl/storage-explorer-main-page-sml.png" lightbox="./media/data-lake-storage-explorer-acl/storage-explorer-main-page-lrg.png":::
storage Data Lake Storage Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-explorer.md
After you successfully sign in with an Azure account, the account and the Azure
:::image type="content" alt-text="Screenshot that shows Microsoft Azure Storage Explorer, and highlights the Account Management pane and Open Explorer button." source="./media/data-lake-storage-explorer/storage-explorer-account-panel-sml.png" lightbox="./media/data-lake-storage-explorer-acl/storage-explorer-account-panel-sml.png":::
-When it completes connecting, Azure Storage Explorer loads with the **Explorer** tab shown. This view gives you insight to all of your Azure storage accounts as well as local storage configured through the [Azurite storage emulator](../common/storage-use-azurite.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json), [Cosmos DB](../../cosmos-db/storage-explorer.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) accounts, or [Azure Stack](/azure-stack/user/azure-stack-storage-connect-se?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) environments.
+When it completes connecting, Azure Storage Explorer loads with the **Explorer** tab shown. This view gives you insight to all of your Azure storage accounts as well as local storage configured through the [Azurite storage emulator](../common/storage-use-azurite.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) or [Azure Stack](/azure-stack/user/azure-stack-storage-connect-se?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) environments.
:::image type="content" alt-text="Microsoft Azure Storage Explorer - Connect window" source="./media/data-lake-storage-explorer/storage-explorer-main-page-sml.png" lightbox="./media/data-lake-storage-explorer-acl/storage-explorer-main-page-lrg.png":::
storage Quickstart Storage Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/quickstart-storage-explorer.md
After you successfully sign in with an Azure account, the account and the Azure
:::image type="content" source="media/quickstart-storage-explorer/storage-explorer-account-panel-sml.png" alt-text="Select Azure subscriptions" lightbox="media/quickstart-storage-explorer/storage-explorer-account-panel-lrg.png":::
-After Storage Explorer finishes connecting, it displays the **Explorer** tab. This view gives you insight to all of your Azure storage accounts as well as local storage configured through the [Azurite storage emulator](../common/storage-use-azurite.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json), [Cosmos DB](../../cosmos-db/storage-explorer.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) accounts, or [Azure Stack](/azure-stack/user/azure-stack-storage-connect-se?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) environments.
+After Storage Explorer finishes connecting, it displays the **Explorer** tab. This view gives you insight to all of your Azure storage accounts as well as local storage configured through the [Azurite storage emulator](../common/storage-use-azurite.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) or [Azure Stack](/azure-stack/user/azure-stack-storage-connect-se?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) environments.
:::image type="content" source="media/quickstart-storage-explorer/storage-explorer-main-page-sml.png" alt-text="Screenshot showing Storage Explorer main page" lightbox="media/quickstart-storage-explorer/storage-explorer-main-page-lrg.png":::
storage Authorize Data Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/authorize-data-access.md
The following table describes the options that Azure Storage offers for authoriz
| Azure Tables | [Supported](/rest/api/storageservices/authorize-with-shared-key/) | [Supported](storage-sas-overview.md) | [Supported](../tables/authorize-access-azure-active-directory.md) | Not supported | Not supported | Not supported | Each authorization option is briefly described below:
+- **Shared Key authorization** for blobs, files, queues, and tables. A client using Shared Key passes a header with every request that is signed using the storage account access key. For more information, see [Authorize with Shared Key](/rest/api/storageservices/authorize-with-shared-key/).
+
+ Microsoft recommends that you disallow Shared Key authorization for your storage account. When Shared Key authorization is disallowed, clients must use Azure AD or a user delegation SAS to authorize requests for data in that storage account. For more information, see [Prevent Shared Key authorization for an Azure Storage account](shared-key-authorization-prevent.md).
+
+- **Shared access signatures** for blobs, files, queues, and tables. Shared access signatures (SAS) provide limited delegated access to resources in a storage account via a signed URL. The signed URL specifies the permissions granted to the resource and the interval over which the signature is valid. A service SAS or account SAS is signed with the account key, while the user delegation SAS is signed with Azure AD credentials and applies to blobs only. For more information, see [Using shared access signatures (SAS)](storage-sas-overview.md).
+ - **Azure Active Directory (Azure AD) integration** for authorizing requests to blob, queue, and table resources. Microsoft recommends using Azure AD credentials to authorize requests to data when possible for optimal security and ease of use. For more information about Azure AD integration, see the articles for either [blob](../blobs/authorize-access-azure-active-directory.md), [queue](../queues/authorize-access-azure-active-directory.md), or [table](../tables/authorize-access-azure-active-directory.md) resources.
Each authorization option is briefly described below:
- **On-premises Active Directory Domain Services (AD DS, or on-premises AD DS) authentication** for Azure Files. Azure Files supports identity-based authorization over SMB through AD DS. Your AD DS environment can be hosted in on-premises machines or in Azure VMs. SMB access to Files is supported using AD DS credentials from domain joined machines, either on-premises or in Azure. You can use a combination of Azure RBAC for share level access control and NTFS DACLs for directory/file level permission enforcement. For more information about Azure Files authentication using domain services, see the [overview](../files/storage-files-active-directory-overview.md). -- **Shared Key authorization** for blobs, files, queues, and tables. A client using Shared Key passes a header with every request that is signed using the storage account access key. For more information, see [Authorize with Shared Key](/rest/api/storageservices/authorize-with-shared-key/).-
- Microsoft recommends that you disallow Shared Key authorization for your storage account. When Shared Key authorization is disallowed, clients must use Azure AD or a user delegation SAS to authorize requests for data in that storage account. For more information, see [Prevent Shared Key authorization for an Azure Storage account](shared-key-authorization-prevent.md).
--- **Shared access signatures** for blobs, files, queues, and tables. Shared access signatures (SAS) provide limited delegated access to resources in a storage account via a signed URL. The signed URL specifies the permissions granted to the resource and the interval over which the signature is valid. A service SAS or account SAS is signed with the account key, while the user delegation SAS is signed with Azure AD credentials and applies to blobs only. For more information, see [Using shared access signatures (SAS)](storage-sas-overview.md).- - **Anonymous public read access** for containers and blobs. When anonymous access is configured, then clients can read blob data without authorization. For more information, see [Manage anonymous read access to containers and blobs](../blobs/anonymous-read-access-configure.md). You can disallow anonymous public read access for a storage account. When anonymous public read access is disallowed, then users cannot configure containers to enable anonymous access, and all requests must be authorized. For more information, see [Prevent anonymous public read access to containers and blobs](../blobs/anonymous-read-access-prevent.md).
storage Storage Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-network-security.md
By default, storage accounts accept connections from clients on any network. You
- To allow traffic only from specific virtual networks, select **Enabled from selected virtual networks and IP addresses**.
- - To block traffic from all networks, use PowerShell or the Azure CLI. This setting does not yet appear in the Azure Portal.
+ - To block traffic from all networks, select **Disabled**.
4. Select **Save** to apply your changes.
storage Transport Layer Security Configure Minimum Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/transport-layer-security-configure-minimum-version.md
To create a policy with an Audit effect for the minimum TLS version with the Azu
"equals": "Microsoft.Storage/storageAccounts" }, {
- "not": {
- "field": "Microsoft.Storage/storageAccounts/minimumTlsVersion",
- "equals": "TLS1_2"
- }
+ "anyOf": [
+ {
+ "field": "Microsoft.Storage/storageAccounts/minimumTlsVersion",
+ "notEquals": "TLS1_2"
+ },
+ {
+ "field": "Microsoft.Storage/storageAccounts/minimumTlsVersion",
+ "exists": "false"
+ }
+ ]
} ] },
To create a policy with a Deny effect for a minimum TLS version that is less tha
"equals": "Microsoft.Storage/storageAccounts" }, {
- "not": {
- "field": "Microsoft.Storage/storageAccounts/minimumTlsVersion",
- "equals": "TLS1_2"
- }
+ "anyOf": [
+ {
+ "field": "Microsoft.Storage/storageAccounts/minimumTlsVersion",
+ "notEquals": "TLS1_2"
+ },
+ {
+ "field": "Microsoft.Storage/storageAccounts/minimumTlsVersion",
+ "exists": "false"
+ }
+ ]
} ] },
storage Files Nfs Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-nfs-protocol.md
The status of items that appear in this table may change over time as support co
| [Standard tiers (Hot, Cool, and Transaction optimized)](storage-files-planning.md#storage-tiers)| ⛔ | | [POSIX-permissions](https://en.wikipedia.org/wiki/File-system_permissions#Notation_of_traditional_Unix_permissions)| ✔️ | | Root squash| ✔️ |
+| Access same data from Windows and Linux clients| ⛔ |
| [Identity-based authentication](storage-files-active-directory-overview.md) | ⛔ | | [Azure file share soft delete](storage-files-prevent-file-share-deletion.md) | ⛔ | | [Azure File Sync](../file-sync/file-sync-introduction.md)| ⛔ |
storage Storage Files Migration Nas Hybrid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-migration-nas-hybrid.md
As mentioned in the Azure Files [migration overview article](storage-files-migra
## Phase 2: Provision a suitable Windows Server on-premises
-* Create a Windows Server 2019 - at a minimum 2012R2 - as a virtual machine or physical server. A Windows Server fail-over cluster is also supported.
+* Create a Windows Server 2022 or Windows Server 2019 virtual machine, or deploy a physical server. A Windows Server failover cluster is also supported.
* Provision or add Direct Attached Storage (DAS as compared to NAS, which is not supported). The amount of storage you provision can be smaller than what you are currently using on your NAS appliance. This configuration choice requires that you also make use of Azure File Syncs [cloud tiering](../file-sync/file-sync-cloud-tiering-overview.md) feature.
As mentioned in the Azure Files [migration overview article](storage-files-migra
1. Move a set of files that fits onto the disk 2. let file sync and cloud tiering engage
- 3. when more free space is created on the volume, proceed with the next batch of files. Alternatively, review the RoboCopy command in the upcoming [RoboCopy section](#phase-7-robocopy) for use of the new `/LFSM` switch. Using `/LFSM` can significantly simplify your RoboCopy jobs, but it is not compatible with some other RoboCopy switches you might depend on.
+ 3. when more free space is created on the volume, proceed with the next batch of files. Alternatively, review the RoboCopy command in the [RoboCopy section](#phase-7-robocopy) of this article for use of the new `/LFSM` switch. Using `/LFSM` can significantly simplify your RoboCopy jobs, but it is not compatible with some other RoboCopy switches you might depend on. Only use the `/LFSM` switch when the migration destination is local storage. It's not supported when the destination is a remote SMB share.
You can avoid this batching approach by provisioning the equivalent space on the Windows Server that your files occupy on the NAS appliance. Consider deduplication on NAS / Windows. If you don't want to permanently commit this high amount of storage to your Windows Server, you can reduce the volume size after the migration and before you adjust the cloud tiering policies. That creates a smaller on-premises cache of your Azure file shares.
storage Storage Files Scale Targets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-scale-targets.md
There are two main types of storage accounts for Azure Files:
| Maximum request rate (Max IOPS) | <ul><li>20,000, with large file share feature enabled<sup>2</sup></li><li>1,000 or 100 requests per 100 ms, default</li></ul> | <ul><li>Baseline IOPS: 3000 + 1 IOPS per GiB, up to 100,000</li><li>IOPS bursting: Max (10000, 3x IOPS per GiB), up to 100,000</li></ul> | | Throughput (ingress + egress) for a single file share (MiB/sec) | <ul><li>Up to 300 MiB/sec, with large file share feature enabled<sup>2</sup></li><li>Up to 60 MiB/sec, default</li></ul> | 100 + CEILING(0.04 * ProvisionedGiB) + CEILING(0.06 * ProvisionedGiB) | | Maximum number of share snapshots | 200 snapshots | 200 snapshots |
-| Maximum object (directories and files) name length | 2,048 characters | 2,048 characters |
-| Maximum pathname component (in the path \A\B\C\D, each letter is a component) | 255 characters | 255 characters |
+| Maximum object name length (total pathname including all directories and filename) | 2,048 characters | 2,048 characters |
+| Maximum individual pathname component length (in the path \A\B\C\D, each letter represents a directory or file that is an individual component) | 255 characters | 255 characters |
| Hard link limit (NFS only) | N/A | 178 | | Maximum number of SMB Multichannel channels | N/A | 4 | | Maximum number of stored access policies per file share | 5 | 5 |
synapse-analytics How To Discover Connect Analyze Azure Purview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/catalog-and-governance/how-to-discover-connect-analyze-azure-purview.md
In this document, you will learn the type of interactions that you can perform
## Prerequisites -- [Azure Microsoft Purview account](../../purview/create-catalog-portal.md)
+- [Microsoft Purview account](../../purview/create-catalog-portal.md)
- [Synapse workspace](../quickstart-create-workspace.md) - [Connect a Microsoft Purview Account into Synapse](quickstart-connect-azure-purview.md)
synapse-analytics Quickstart Connect Azure Purview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/catalog-and-governance/quickstart-connect-azure-purview.md
You can perform the following tasks in Synapse:
- Execute pipelines and [push lineage information to Microsoft Purview](../../purview/how-to-lineage-azure-synapse-analytics.md) ## Prerequisites -- [Azure Microsoft Purview account](../../purview/create-catalog-portal.md)
+- [Microsoft Purview account](../../purview/create-catalog-portal.md)
- [Synapse workspace](../quickstart-create-workspace.md) ## Permissions for connecting a Microsoft Purview account
synapse-analytics Synapse Spark Sql Pool Import Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/synapse-spark-sql-pool-import-export.md
Azure Active Directory based authentication is an integrated authentication appr
#### Basic authentication
-A basic authentication approach requires user to configure `username` and `password` options. Refer to the section - [Configuration Options](#configuration-options) to learn about relevant configuration parameters for reading from and writing to tables in Azure Synapse Dedicated SQL Pool.
+A basic authentication approach requires the user to configure `username` and `password` options. Refer to the [Configuration options](#configuration-options) section to learn about relevant configuration parameters for reading from and writing to tables in Azure Synapse Dedicated SQL Pool.
### Authorization
Following is the list of configuration options based on usage scenario:
* `Constants.DATA_SOURCE` is a required configuration option. * The connector uses the storage path set on the data source's location parameter in combination with the `location` argument to the `synapsesql` method and derives the absolute path to persist external table data. * If the `location` argument to `synapsesql` method isn't specified, then the connector will derive the location value as `<base_path>/dbName/schemaName/tableName`.
-* **Write using Basic Authentication**
+* **Write using basic authentication**
* Azure Synapse Dedicated SQL End Point * `Constants.SERVER` * `Constants.USER` - SQL User Name.
val dfToReadFromTable:DataFrame = spark.read.
//If `Constants.SERVER` is not provided, the `<database_name>` from the three-part table name argument //to `synapsesql` method is used to infer the Synapse Dedicated SQL End Point. option(Constants.SERVER, "<sql-server-name>.sql.azuresynapse.net").
- //Defaults to storage path defined in the runtime configurations (See section on Configuration Options above).
+ //Defaults to storage path defined in the runtime configurations
option(Constants.TEMP_FOLDER, "abfss://<container_name>@<storage_account_name>.dfs.core.windows.net/<some_base_path_for_temporary_staging_folders>"). //Three-part table name from where data will be read. synapsesql("<database_name>.<schema_name>.<table_name>").
readDF.
if(errorDuringWrite.isDefined) throw errorDuringWrite.get ```
-#### Write using Basic Authentication
+#### Write using basic authentication
Following code snippet replaces the write definition described in the [Write using Azure AD based authentication](#write-using-azure-ad-based-authentication) section, to submit write request using SQL basic authentication approach:
virtual-desktop Environment Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/environment-setup.md
Title: Azure Virtual Desktop environment - Azure
description: Learn about the basic elements of an Azure Virtual Desktop environment, like host pools and app groups. Previously updated : 04/30/2020 Last updated : 05/02/2022
A workspace is a logical grouping of application groups in Azure Virtual Desktop
After you've assigned users to their app groups, they can connect to an Azure Virtual Desktop deployment with any of the Azure Virtual Desktop clients.
+## User sessions
+
+In this section, we'll go over each of the three types of user sessions that end users can have.
+
+### Active user session
+
+A user session is considered "active" when a user signs in and connects to their remote app or desktop resource.
+
+### Disconnected user session
+
+A disconnected user session is an inactive session that the user hasn't signed out of yet. When a user closes the remote session window without signing out, the session becomes disconnected. When a user reconnects to their remote resources, they'll be redirected to their disconnected session on the session host they were working on. At this point, the disconnected session becomes an active session again.
+
+### Pending user session
+
+A pending user session is a placeholder session that reserves a spot on the load-balanced virtual machine for the user. Because the sign-in process can take anywhere from 30 seconds to five minutes depending on the user profile, this placeholder session ensures that the user won't be kicked out of their session if another user completes their sign-in process first.
+ ## Next steps

Learn more about delegated access and how to assign roles to users at [Delegated Access in Azure Virtual Desktop](delegated-access-virtual-desktop.md).
virtual-machines Ddv4 Ddsv4 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ddv4-ddsv4-series.md
Title: Ddv4 and Ddsv4-series description: Specifications for the Dv4, Ddv4, Dsv4 and Ddsv4-series VMs.
virtual-machines Ddv5 Ddsv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ddv5-ddsv5-series.md
Title: Ddv5 and Ddsv5-series - Azure Virtual Machines description: Specifications for the Ddv5 and Ddsv5-series VMs.
virtual-machines Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/delete.md
Depending on how you delete a VM, it may only delete the VM resource, not the ne
1. Select **+ Create a resource**.
1. On the **Create a resource** page, under **Virtual machines**, select **Create**.
1. Make your choices on the **Basics** tab, then select **Next : Disks >**. The **Disks** tab will open.
-1. Under **Disk options**, by default the OS disk is set to **Delete with VM**. If you don't want to delete the OS disk, uncheck the box. If you're using an existing OS disk, the default is to detach the OS disk when the VM is deleted.
+1. Under **Disk options**, by default the OS disk is set to **Delete with VM**. If you don't want to delete the OS disk, clear the checkbox. If you're using an existing OS disk, the default is to detach the OS disk when the VM is deleted.
:::image type="content" source="media/delete/delete-disk.png" alt-text="Screenshot checkbox to choose to have the disk deleted when the VM is deleted.":::
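The behavior described above can be modeled simply: each attached resource carries a delete option, and deleting the VM removes only the resources marked for deletion. A minimal sketch (the function and resource names here are hypothetical, for illustration only):

```python
# Illustrative model of the delete-with-VM behavior described above: each
# attachment carries a delete option of "Delete" or "Detach", and deleting
# the VM removes only the resources marked "Delete".

def resources_deleted_with_vm(attachments: dict) -> list:
    """Return the attached resources removed when the VM is deleted."""
    return [name for name, option in attachments.items() if option == "Delete"]

vm_attachments = {
    "osDisk": "Delete",         # default for an OS disk created with the VM
    "existingOsDisk": "Detach", # default when reusing an existing OS disk
}
resources_deleted_with_vm(vm_attachments)  # ["osDisk"]
```

Clearing the **Delete with VM** checkbox corresponds to switching the option from "Delete" to "Detach", so the disk survives VM deletion.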
virtual-machines Dv2 Dsv2 Series Memory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dv2-dsv2-series-memory.md
Title: Memory optimized Dv2 and Dsv2-series VMs - Azure Virtual Machines description: Specifications for the Dv2 and DSv2-series VMs. Last updated 02/03/2020 # Memory optimized Dv2 and Dsv2-series
virtual-machines Dv2 Dsv2 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dv2-dsv2-series.md
Title: Dv2 and DSv2-series - Azure Virtual Machines description: Specifications for the Dv2 and Dsv2-series VMs. Last updated 02/03/2020 # Dv2 and DSv2-series
virtual-machines Dv3 Dsv3 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dv3-dsv3-series.md
Title: Dv3 and Dsv3-series description: Specifications for the Dv3 and Dsv3-series VMs. Last updated 09/22/2020 # Dv3 and Dsv3-series
virtual-machines Dv4 Dsv4 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dv4-dsv4-series.md
Title: Dv4 and Dsv4-series - Azure Virtual Machines description: Specifications for the Dv4 and Dsv4-series VMs.
virtual-machines Dv5 Dsv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dv5-dsv5-series.md
Title: Dv5 and Dsv5-series - Azure Virtual Machines description: Specifications for the Dv5 and Dsv5-series VMs.
virtual-machines Edv4 Edsv4 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/edv4-edsv4-series.md
Title: Edv4 and Edsv4-series description: Specifications for the Ev4, Edv4, Esv4 and Edsv4-series VMs.
virtual-machines Edv5 Edsv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/edv5-edsv5-series.md
Title: Edv5 and Edsv5-series - Azure Virtual Machines description: Specifications for the Edv5 and Edsv5-series VMs.
virtual-machines Ev3 Esv3 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ev3-esv3-series.md
Title: Ev3-series and Esv3-series description: Specifications for the Ev3 and Esv3-series VMs. Last updated 09/22/2020 # Ev3 and Esv3-series
virtual-machines Ev4 Esv4 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ev4-esv4-series.md
Title: Ev4 and Esv4-series - Azure Virtual Machines description: Specifications for the Ev4 and Esv4-series VMs.
virtual-machines Ev5 Esv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ev5-esv5-series.md
Title: Ev5 and Esv5-series - Azure Virtual Machines description: Specifications for the Ev5 and Esv5-series VMs.
virtual-machines How To Verify Encryption Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/how-to-verify-encryption-status.md
Title: Verify encryption status for Linux - Azure Disk Encryption description: This article provides instructions on verifying the encryption status from the platform and OS levels. Last updated 03/11/2020
virtual-machines Resize Os Disk Gpt Partition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/resize-os-disk-gpt-partition.md
Title: Resize an OS disk that has a GPT partition description: This article provides instructions on how to resize an OS disk that has a GUID Partition Table (GPT) partition in Linux.
ms.devlang: azurecli Last updated 05/03/2020
virtual-machines Sizes B Series Burstable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes-b-series-burstable.md
The B-series comes in the following VM sizes:
<br> <br>
-| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Base CPU Perf of VM | Max CPU Perf of VM | Initial Credits | Credits banked/hour | Max Banked Credits | Max data disks | Max cached and temp storage throughput: IOPS/MBps | Max uncached disk throughput: IOPS/MBps | Max burst uncached disk throughput: IOPS/MBps<sup>1</sup> |Max NICs |
-|||||||||||||||
-| Standard_B1ls<sup>2</sup> | 1 | 0.5 | 4 | 5% | 100% | 30 | 3 | 72 | 2 | 200/10 | 160/10 | 4000/100 | 2 |
-| Standard_B1s | 1 | 1 | 4 | 10% | 100% | 30 | 6 | 144 | 2 | 400/10 | 320/10 | 4000/100 | 2 |
-| Standard_B1ms | 1 | 2 | 4 | 20% | 100% | 30 | 12 | 288 | 2 | 800/10 | 640/10 | 4000/100 | 2 |
-| Standard_B2s | 2 | 4 | 8 | 40% | 200% | 60 | 24 | 576 | 4 | 1600/15 | 1280/15 | 4000/100 | 3 |
-| Standard_B2ms | 2 | 8 | 16 | 60% | 200% | 60 | 36 | 864 | 4 | 2400/22.5 | 1920/22.5 | 4000/100 | 3 |
-| Standard_B4ms | 4 | 16 | 32 | 90% | 400% | 120 | 54 | 1296 | 8 | 3600/35 | 2880/35 | 8000/200 | 4 |
-| Standard_B8ms | 8 | 32 | 64 | 135% | 800% | 240 | 81 | 1944 | 16 | 4320/50 | 4320/50 | 8000/200 | 4 |
-| Standard_B12ms | 12 | 48 | 96 | 202% | 1200% | 360 | 121 | 2909 | 16 | 6480/75 | 4320/50 | 16000/400 | 6 |
-| Standard_B16ms | 16 | 64 | 128 | 270% | 1600% | 480 | 162 | 3888 | 32 | 8640/100 | 4320/50 | 16000/400 | 8 |
-| Standard_B20ms | 20 | 80 | 160 | 337% | 2000% | 600 | 203 | 4860 | 32 | 10800/125 | 4320/50 | 16000/400 | 8 |
+| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Base CPU Perf of VM | Max CPU Perf of VM | Initial Credits | Credits banked/hour | Max Banked Credits | Max data disks | Max uncached disk throughput: IOPS/MBps | Max burst uncached disk throughput: IOPS/MBps<sup>1</sup> |Max NICs |
+||||||||||||||
+| Standard_B1ls<sup>2</sup> | 1 | 0.5 | 4 | 5% | 100% | 30 | 3 | 72 | 2 | 160/10 | 4000/100 | 2 |
+| Standard_B1s | 1 | 1 | 4 | 10% | 100% | 30 | 6 | 144 | 2 | 320/10 | 4000/100 | 2 |
+| Standard_B1ms | 1 | 2 | 4 | 20% | 100% | 30 | 12 | 288 | 2 | 640/10 | 4000/100 | 2 |
+| Standard_B2s | 2 | 4 | 8 | 40% | 200% | 60 | 24 | 576 | 4 | 1280/15 | 4000/100 | 3 |
+| Standard_B2ms | 2 | 8 | 16 | 60% | 200% | 60 | 36 | 864 | 4 | 1920/22.5 | 4000/100 | 3 |
+| Standard_B4ms | 4 | 16 | 32 | 90% | 400% | 120 | 54 | 1296 | 8 | 2880/35 | 8000/200 | 4 |
+| Standard_B8ms | 8 | 32 | 64 | 135% | 800% | 240 | 81 | 1944 | 16 | 4320/50 | 8000/200 | 4 |
+| Standard_B12ms | 12 | 48 | 96 | 202% | 1200% | 360 | 121 | 2909 | 16 | 4320/50 | 16000/400 | 6 |
+| Standard_B16ms | 16 | 64 | 128 | 270% | 1600% | 480 | 162 | 3888 | 32 | 4320/50 | 16000/400 | 8 |
+| Standard_B20ms | 20 | 80 | 160 | 337% | 2000% | 600 | 203 | 4860 | 32 | 4320/50 | 16000/400 | 8 |
<sup>1</sup> B-series VMs can [burst](./disk-bursting.md) their disk performance and get up to their bursting max for up to 30 minutes at a time.
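As a worked example of the credit model in the table above, here is a simplified calculation of the credits available after a VM idles below its baseline. This is a sketch of the banking rule only, not the service's exact accounting:

```python
def banked_credits(hours_idle: float, credits_per_hour: float,
                   initial_credits: float, max_banked: float) -> float:
    """Credits available after idling below baseline for `hours_idle` hours.

    Simplified model based on the table above: the VM starts with its
    initial credits and banks credits each hour, capped at the maximum.
    """
    return min(initial_credits + hours_idle * credits_per_hour, max_banked)

# Standard_B1s: 30 initial credits, banks 6 credits/hour, capped at 144.
banked_credits(5, 6, 30, 144)    # 60 credits after 5 idle hours
banked_credits(100, 6, 30, 144)  # capped at the 144 max banked credits
```

Spending works in reverse: bursting above the baseline (up to the "Max CPU Perf" column) draws the balance down until the bank is empty, after which the VM is throttled back to its baseline performance.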
virtual-machines Planning Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/planning-guide.md
For more documentation, see [this article][vpn-gateway-create-site-to-site-rm-po
#### VNet to VNet Connection

Using Multi-Site VPN, you need to configure a separate Azure Virtual Network in each of the regions. However, you often have the requirement that the software components in the different regions should communicate with each other. Ideally this communication should not be routed from one Azure Region to on-premises and from there to the other Azure Region. As a shortcut, Azure offers the possibility to configure a connection from one Azure Virtual Network in one region to another Azure Virtual Network hosted in another region. This functionality is called VNet-to-VNet connection. More details on this functionality can be found here:
-<https://azure.microsoft.com/documentation/articles/vpn-gateway-vnet-vnet-rm-ps/>.
+[Configure a VNet-to-VNet VPN gateway connection by using the Azure portal](/azure/vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal).
#### Private Connection to Azure ExpressRoute
Microsoft Azure ExpressRoute allows the creation of private connections between
Find more details on Azure ExpressRoute and offerings here:
-* <https://azure.microsoft.com/documentation/services/expressroute/>
-* <https://azure.microsoft.com/pricing/details/expressroute/>
-* <https://azure.microsoft.com/documentation/articles/expressroute-faqs/>
+* [ExpressRoute documentation](https://azure.microsoft.com/documentation/services/expressroute/)
+* [Azure ExpressRoute pricing](https://azure.microsoft.com/pricing/details/expressroute/)
+* [ExpressRoute FAQ](/azure/expressroute/expressroute-faqs)
ExpressRoute enables connecting multiple Azure subscriptions through one ExpressRoute circuit, as documented here:
-* <https://azure.microsoft.com/documentation/articles/expressroute-howto-linkvnet-arm/>
-* <https://azure.microsoft.com/documentation/articles/expressroute-howto-circuit-arm/>
+* [Tutorial: Connect a virtual network to an ExpressRoute circuit](/azure/expressroute/expressroute-howto-linkvnet-arm)
+* [Quickstart: Create and modify an ExpressRoute circuit using Azure PowerShell](/azure/expressroute/expressroute-howto-circuit-arm)
#### Forced tunneling in case of cross-premises

For VMs joining on-premises domains through site-to-site, point-to-site, or ExpressRoute, you need to make sure that the Internet proxy settings are deployed for all the users in those VMs as well. By default, software running in those VMs or users using a browser to access the internet would not go through the company proxy, but would connect straight through Azure to the internet. But even the proxy setting is not a 100% solution to direct the traffic through the company proxy, since it is the responsibility of software and services to check for the proxy. If software running in the VM is not doing that, or an administrator manipulates the settings, traffic to the Internet can be detoured again directly through Azure to the Internet.
-In order to avoid such a direct internet connectivity, you can configure Forced Tunneling with site-to-site connectivity between on-premises and Azure. The detailed description of the Forced Tunneling feature is published here
-<https://azure.microsoft.com/documentation/articles/vpn-gateway-forced-tunneling-rm/>
+In order to avoid such a direct internet connectivity, you can configure Forced Tunneling with site-to-site connectivity between on-premises and Azure. The detailed description of the Forced Tunneling feature is published here:
+[Configure forced tunneling using the classic deployment model](/azure/vpn-gateway/vpn-gateway-about-forced-tunneling)
Forced Tunneling with ExpressRoute is enabled by customers advertising a default route via the ExpressRoute BGP peering sessions.
High Availability and Disaster recovery functionality for DBMS in general as wel
Here are two examples of a complete SAP NetWeaver HA architecture in Azure - one for Windows and one for Linux.
-Unmanaged disks only: The concepts as explained below may need to be compromised a bit when you deploy many SAP systems and the number of VMs deployed are exceeding the maximum limit of Storage Accounts per subscription. In such cases, VHDs of VMs need to be combined within one Storage Account. Usually you would do so by combining VHDs of SAP application layer VMs of different SAP systems. We also combined different VHDs of different DBMS VMs of different SAP systems in one Azure Storage Account. Thereby keeping the IOPS limits of Azure Storage Accounts in mind (<https://azure.microsoft.com/documentation/articles/storage-scalability-targets>)
+Unmanaged disks only: The concepts explained below may need to be compromised a bit when you deploy many SAP systems and the number of VMs deployed exceeds the maximum limit of Storage Accounts per subscription. In such cases, VHDs of VMs need to be combined within one Storage Account. Usually you would do so by combining VHDs of SAP application layer VMs of different SAP systems. We also combined different VHDs of different DBMS VMs of different SAP systems in one Azure Storage Account, keeping in mind the IOPS limits of Azure Storage Accounts (see [Scalability and performance targets for standard storage accounts](/azure/storage/common/scalability-targets-standard-account)).
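As a quick illustration of the sizing check described above, the following sketch assumes the classic targets of roughly 500 IOPS per standard unmanaged VHD and 20,000 IOPS per standard storage account; verify the current figures against the linked scalability-targets article:

```python
# Assumed figures for illustration only -- check the scalability-targets
# article for the current limits.
STANDARD_ACCOUNT_IOPS_LIMIT = 20_000  # classic standard storage account target
IOPS_PER_STANDARD_VHD = 500           # typical target for one standard VHD

def fits_in_one_account(vhd_count: int) -> bool:
    """True if the combined VHD IOPS stay within one storage account's limit."""
    return vhd_count * IOPS_PER_STANDARD_VHD <= STANDARD_ACCOUNT_IOPS_LIMIT

fits_in_one_account(40)  # 40 * 500 = 20,000 IOPS, exactly at the limit
fits_in_one_account(41)  # would exceed the account limit
```

Under these assumptions, around 40 standard VHDs is the practical ceiling per account, which is why VHDs of many SAP systems can only be combined up to that point before another storage account is needed.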
##### ![Windows logo.][Logo_Windows] HA on Windows
Read the articles:
- [Azure Virtual Machines deployment for SAP NetWeaver](./deployment-guide.md)
- [Considerations for Azure Virtual Machines DBMS deployment for SAP workload](./dbms_guide_general.md)
-- [SAP HANA infrastructure configurations and operations on Azure](./hana-vm-operations.md)
+- [SAP HANA infrastructure configurations and operations on Azure](./hana-vm-operations.md)
virtual-machines Sap High Availability Architecture Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-high-availability-architecture-scenarios.md
[sap-ha-partner-information]:https://scn.sap.com/docs/DOC-8541
[azure-sla]:https://azure.microsoft.com/support/legal/sla/
[azure-virtual-machines-manage-availability]:../../windows/manage-availability.md
-[azure-storage-redundancy]:https://azure.microsoft.com/documentation/articles/storage-redundancy/
+[azure-storage-redundancy]:/azure/storage/common/storage-redundancy
[azure-storage-managed-disks-overview]:../../../virtual-machines/managed-disks-overview.md
[planning-guide-figure-100]:media/virtual-machines-shared-sap-planning-guide/100-single-vm-in-azure.png
virtual-network-manager Create Virtual Network Manager Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/create-virtual-network-manager-portal.md
Title: 'Quickstart: Create a mesh network topology with Azure Virtual Network Manager using the Azure portal' description: Use this quickstart to learn how to create a mesh network topology with Virtual Network Manager using the Azure portal. Previously updated : 11/02/2021 Last updated : 04/20/2021
In this quickstart, you'll deploy three virtual networks and use Azure Virtual N
1. On the *Basics* tab, enter or select the following information:
- :::image type="content" source="./media/create-virtual-network-manager-portal/network-manager-basics.png" alt-text="Screenshot of create a Network Manager basics page.":::
+ :::image type="content" source="./media/create-virtual-network-manager-portal/network-manager-basics.png" alt-text="Screenshot of Create a network manager Basics page.":::
| Setting | Value |
| - | -- |
In this quickstart, you'll deploy three virtual networks and use Azure Virtual N
| Name | Enter a name for this Azure Virtual Network Manager instance. This example will use the name **myAVNM**. |
| Region | Select the region for this deployment. Azure Virtual Network Manager can manage virtual networks in any region. The region selected is for where the Virtual Network Manager instance will be deployed. |
| Description | *(Optional)* Provide a description about this Virtual Network Manager instance and the task it will be managing. |
- | [Scope](concept-network-manager-scope.md#scope) | Define the scope for which Azure Virtual Network Manager can manage.
+ | [Scope](concept-network-manager-scope.md#scope) | Define the scope of what Azure Virtual Network Manager can manage. This example will use a subscription-level scope. |
| [Features](concept-network-manager-scope.md#features) | Select the features you want to enable for Azure Virtual Network Manager. Available features are *Connectivity*, *SecurityAdmin*, or *Select All*. </br> Connectivity - Enables the ability to create a full mesh or hub and spoke network topology between virtual networks within the scope. </br> SecurityAdmin - Enables the ability to create global network security rules. |

1. Select **Review + create** and then select **Create** once validation has passed.
- :::image type="content" source="./media/create-virtual-network-manager-portal/network-manager-validation.png" alt-text="Screenshot of validation page for create a Network Manager resource.":::
-
## Create three virtual networks

1. Sign in to the [Azure portal](https://portal.azure.com/).
1. Select **+ Create a resource** and search for **Virtual network**. Then select **Create** to begin configuring the virtual network.
- :::image type="content" source="./media/create-virtual-network-manager-portal/create-vnet.png" alt-text="Screenshot of create a virtual network page.":::
- 1. On the *Basics* tab, enter or select the following information. :::image type="content" source="./media/create-virtual-network-manager-portal/create-mesh-vnet-basic.png" alt-text="Screenshot of create a virtual network basics page.":::
In this quickstart, you'll deploy three virtual networks and use Azure Virtual N
1. Select **Review + create** and then select **Create** once validation has passed to deploy the virtual network.
- :::image type="content" source="./media/create-virtual-network-manager-portal/vnet-validation.png" alt-text="Screenshot of validation page for create a virtual network.":::
- 1. Repeat steps 2-5 to create two more virtual networks with the following information: | Setting | Value |
In this quickstart, you'll deploy three virtual networks and use Azure Virtual N
1. Go to Azure Virtual Network Manager instance you created.
-1. Select **Network Groups** under *Settings*, then select **+ Add**.
+1. Select **Network Groups** under *Settings*, then select **+ Create**.
- :::image type="content" source="./media/create-virtual-network-manager-portal/add-network-group.png" alt-text="Screenshot of add a network group button.":::
+ :::image type="content" source="./media/create-virtual-network-manager-portal/add-network-group-2.png" alt-text="Screenshot of add a network group button.":::
-1. On the *Add a network group* page, enter a **Name** for the network group. This example will use the name **myNetworkGroup**. Select **Next: Static group members >** to begin adding virtual networks to the network group.
+1. On the *Create a network group* page, enter a **Name** for the network group. This example will use the name **myNetworkGroup**. Select **Add** to create the network group.
- :::image type="content" source="./media/create-virtual-network-manager-portal/network-group-basics.png" alt-text="Screenshot of create a network group basics tab.":::
+ :::image type="content" source="./media/create-virtual-network-manager-portal/network-group-basics.png" alt-text="Screenshot of create a network group page.":::
-1. On the *Static group members* tab, select **+ Add virtual networks**.
+1. You'll see the new network group added to the *Network Groups* page.
+ :::image type="content" source="./media/create-virtual-network-manager-portal/network-groups-list.png" alt-text="Screenshot of network group page with list of network groups.":::
- :::image type="content" source="./media/create-virtual-network-manager-portal/add-virtual-networks-button.png" alt-text="Screenshot of add a virtual network button.":::
+1. From the list of network groups, select **myNetworkGroup** and select **Add** under **Static membership** on the *myNetworkGroup* page.
-1. On the *Add virtual networks* page, select all three virtual networks created previously (VNetA, VNetB, and VNetC). Then select **Add** to commit the selection.
+ :::image type="content" source="./media/create-virtual-network-manager-portal/add-static-member.png" alt-text="Screenshot of add a static member button.":::
- :::image type="content" source="./media/create-virtual-network-manager-portal/add-virtual-networks.png" alt-text="Screenshot of add virtual networks to network group page.":::
+1. On the *Add static members* page, select all three virtual networks created previously (VNetA, VNetB, and VNetC). Then select **Add** to add the three virtual networks to the network group.
-1. Select **Review + create**, and then select **Create** once validation has passed.
+ :::image type="content" source="./media/create-virtual-network-manager-portal/add-virtual-networks.png" alt-text="Screenshot of add virtual networks to network group page.":::
- :::image type="content" source="./media/create-virtual-network-manager-portal/review-create.png" alt-text="Screenshot of review and create button for a new network group.":::
+1. Return to the *Network groups* page, and you'll see three members added under **Member virtual network**.
-1. You'll see the new network group added to the *Network Groups* page.
-
- :::image type="content" source="./media/create-virtual-network-manager-portal/network-groups-list.png" alt-text="Screenshot of network groups page with new network group added.":::
+ :::image type="content" source="./media/create-virtual-network-manager-portal/list-network-members.png" alt-text="Screenshot of network manager instance page with three member virtual networks.":::
## Create a connectivity configuration
-1. Select **Configurations** under *Settings*, then select **+ Add a configuration**.
+1. Select **Configurations** under *Settings*, then select **+ Create**.
- :::image type="content" source="./media/create-virtual-network-manager-portal/add-configuration.png" alt-text="Screenshot of add a configuration button for Network Manager.":::
+ :::image type="content" source="./media/create-virtual-network-manager-portal/add-configuration.png" alt-text="Screenshot of configuration creation screen for Network Manager.":::
-1. Select **Connectivity** from the drop-down menu to begin creating a connectivity configuration.
+1. Select **Connectivity configuration** from the drop-down menu to begin creating a connectivity configuration.
:::image type="content" source="./media/create-virtual-network-manager-portal/configuration-menu.png" alt-text="Screenshot of configuration drop-down menu.":::
-1. On the *Add a connectivity configuration* page, enter, or select the following information:
+1. On the *Basics* page, enter the following information, and select **Next: Topology >**.
:::image type="content" source="./media/create-virtual-network-manager-portal/connectivity-configuration.png" alt-text="Screenshot of add a connectivity configuration page.":::
In this quickstart, you'll deploy three virtual networks and use Azure Virtual N
| - | -- |
| Name | Enter a name for this connectivity configuration. |
| Description | *(Optional)* Provide a description about this connectivity configuration. |
- | Topology | Select the type of topology you want to create with this configuration. This example will use the **Mesh** topology. |
-1. Once you select the *Mesh* topology, the **Global Mesh** and **Network Groups** option will appear. *Global Mesh* isn't required for this set up since all the virtual networks are in the same region. Select **+ Add network groups** and then select the network group you created in the last section. Click **Select** to add the network group to the configuration.
+
+1. On the *Topology* tab, select the *Mesh* topology if it isn't already selected, and leave **Enable mesh connectivity across regions** unchecked. Cross-region connectivity isn't required for this setup since all the virtual networks are in the same region.
+ :::image type="content" source="./media/create-virtual-network-manager-portal/topology-configuration.png" alt-text="Screenshot of topology selection for network group connectivity configuration.":::
+
+1. Select **+ Add** and then select the network group you created in the last section. Select **Select** to add the network group to the configuration.
:::image type="content" source="./media/create-virtual-network-manager-portal/add-network-group-configuration.png" alt-text="Screenshot of add a network group to a connectivity configuration.":::
-1. Select **Add** to create the configuration.
+1. Select **Next: Review + Create >** and **Create** to create the configuration.
:::image type="content" source="./media/create-virtual-network-manager-portal/create-connectivity-configuration.png" alt-text="Screenshot of create a connectivity configuration.":::
-1. You'll see the new connectivity configuration added to the *Configuration* page.
+1. Once the deployment completes, select **Refresh** and you'll see the new connectivity configuration added to the *Configurations* page.
:::image type="content" source="./media/create-virtual-network-manager-portal/connectivity-configuration-list.png" alt-text="Screenshot of connectivity configuration list.":::
In this quickstart, you'll deploy three virtual networks and use Azure Virtual N
To have your configurations applied to your environment, you'll need to commit the configuration by deploying it. You'll need to deploy the configuration to the **West US** region where the virtual networks are deployed.
-1. Select **Deployments** under *Settings*, then select **Deploy a configuration**.
+1. Select **Deployments** under *Settings*, then select **Deploy configurations**.
:::image type="content" source="./media/create-virtual-network-manager-portal/deployments.png" alt-text="Screenshot of deployments page in Network Manager.":::
To have your configurations applied to your environment, you'll need to commit t
| Setting | Value |
| - | -- |
- | Configuration type | Select the type of configuration you want to deploy. This example will deploy a **Connectivity** configuration. |
- | Configurations | Select the **myConnectivityConfig** configuration created from the previous section. |
- | Target regions | Select the region to deploy this configuration to. The **West US** region is selected, since all the virtual networks were created in that region. |
+ | Configurations | Select the type of configuration you want to deploy. This example will select **Include connectivity configurations in your goal state**. |
+ | Connectivity configurations | Select the **ConnectivityConfigA** configuration created from the previous section. |
+ | Regions | Select the region to deploy this configuration to. For this example, choose the **West US** region since all the virtual networks were created in that region. |
-1. Select **Deploy** and then select **OK** to confirm you want to overwrite any existing configuration.
+1. Select **Next** and then select **Deploy** to complete the deployment.
:::image type="content" source="./media/create-virtual-network-manager-portal/deployment-confirmation.png" alt-text="Screenshot of deployment confirmation message.":::
-1. You should now see the deployment show up in the list for the selected region. The deployment of the configuration can take about 15-20 minutes to complete.
+1. You should now see the deployment show up in the list for the selected region. The deployment of the configuration can take several minutes to complete.
:::image type="content" source="./media/create-virtual-network-manager-portal/deployment-in-progress.png" alt-text="Screenshot of configuration deployment in progress status."::: ## Confirm configuration deployment
-1. Select **Refresh** on the *Deployment* page to see the updated status of the configuration that you committed.
+1. Select **Refresh** on the *Deployments* page to see the updated status of the configuration that you committed.
:::image type="content" source="./media/create-virtual-network-manager-portal/deployment-status.png" alt-text="Screenshot of refresh button for updated deployment status.":::
If you no longer need Azure Virtual Network Manager, you'll need to make sure al
* All configurations have been deleted. * All network groups have been deleted.
-1. To remove all configurations from a region, deploy a **None** configuration to the target region. Select **Deploy** and then select **OK** to confirm.
+1. To remove all configurations from a region, start in the virtual network manager and select **Deploy configurations**. Select the following settings:
+
+ :::image type="content" source="./media/create-virtual-network-manager-portal/none-configuration.png" alt-text="Screenshot of deploy a none connectivity configuration settings.":::
+
+ | Setting | Value |
+ | - | -- |
+ | Configurations | Select **Include connectivity configurations in your goal state**. |
+ | Connectivity configurations | Select the **None - Remove existing connectivity configurations** configuration. |
+ | Regions | Select **West US** as the deployed region. |
- :::image type="content" source="./media/create-virtual-network-manager-portal/none-configuration.png" alt-text="Screenshot of deploy a none connectivity configuration.":::
+1. Select **Next** and select **Deploy** to complete the deployment removal.
-1. To delete a configuration, select **Configurations** under *Settings* from the left pane of Azure Virtual Network Manager. Select the checkbox next to the configuration you want to remove and then select **Delete** at the top of the resource page.
+1. To delete a configuration, select **Configurations** under *Settings* from the left pane of Azure Virtual Network Manager. Select the checkbox next to the configuration you want to remove and then select **Delete** at the top of the resource page. Select **Yes** to confirm the configuration deletion.
:::image type="content" source="./media/create-virtual-network-manager-portal/delete-configuration.png" alt-text="Screenshot of delete button for a connectivity configuration.":::

1. To delete a network group, select **Network Groups** under *Settings* from the left pane of Azure Virtual Network Manager. Select the checkbox next to the network group you want to remove and then select **Delete** at the top of the resource page.
- :::image type="content" source="./media/create-virtual-network-manager-portal/delete-network-group.png" alt-text="Screenshot of delete button for network group.":::
+ :::image type="content" source="./media/create-virtual-network-manager-portal/delete-network-group.png" alt-text="Screenshot of delete a network group button.":::
-1. Once all network groups have been removed, you can now delete the resource by right-clicking the Azure Virtual Network Manager from the list and selecting **Delete**.
- :::image type="content" source="./media/create-virtual-network-manager-portal/delete-network-manager.png" alt-text="Screenshot of delete button for an Azure Virtual Network Manager.":::
+1. On the **Delete a network group** page, select the following options:
-1. To delete the resource group, locate the resource group and select the **Delete resource group**. Confirm that you want to delete by entering the name of the resource group, then select **Delete**
+ :::image type="content" source="./media/create-virtual-network-manager-portal/ng-delete-options.png" alt-text="Screenshot of Network group to be deleted option selection.":::
+
+ | Setting | Value |
+ | - | -- |
+ | Delete option | Select **Force delete the resource and all dependent resources**. |
+ | Confirm deletion | Enter the name of the network group. In this example, it's **myNetworkGroup**. |
+
+1. Select **Delete**, and then select **Yes** to confirm the network group deletion.
+
+1. Once all network groups have been removed, select **Overview** from the left pane of Azure Virtual Network Manager and select **Delete**.
- :::image type="content" source="./media/create-virtual-network-manager-portal/delete-resource-group.png" alt-text="Screenshot of delete button for a resource group.":::
+1. On the **Delete a network manager** page, select the following options and select **Delete**. Select **Yes** to confirm the deletion.
+
+ :::image type="content" source="./media/create-virtual-network-manager-portal/network-manager-delete.png" alt-text="Screenshot of network manager to be deleted option selection.":::
+
+ | Setting | Value |
+ | - | -- |
+ | Delete option | Select **Force delete the resource and all dependent resources**. |
+ | Confirm deletion | Enter the name of the network manager. In this example, it's **myAVNM**. |
+
+1. To delete the resource group, locate the resource group and select **Delete resource group**. Confirm that you want to delete it by entering the name of the resource group, then select **Delete**.
## Next steps
virtual-network-manager How To Block Network Traffic Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/how-to-block-network-traffic-portal.md
Title: 'How to block network traffic with Azure Virtual Network Manager (Preview) - Azure portal' description: Learn how to block network traffic using security rules in Azure Virtual Network Manager with the Azure portal.--++ Previously updated : 11/02/2021 Last updated : 05/02/2022
Before you start to configure security admin rules, confirm that you've done the
1. Select **Configurations** under *Settings* and then select **+ Add a configuration**.
- :::image type="content" source="./media/how-to-block-network-traffic-portal/create-security-admin.png" alt-text="Screenshot of add a security admin configuration.":::
+ :::image type="content" source="./media/create-virtual-network-manager-portal/add-configuration.png" alt-text="Screenshot of add a security admin configuration.":::
1. Select **SecurityAdmin** from the drop-down menu.

    :::image type="content" source="./media/how-to-block-network-traffic-portal/security-admin-drop-down.png" alt-text="Screenshot of add a configuration drop-down.":::
-1. Enter a *Name* to identify this security configuration.
+1. On the **Basics** tab, enter a *Name* to identify this security configuration and select **Next: Rule collections**.
:::image type="content" source="./media/how-to-block-network-traffic-portal/security-configuration-name.png" alt-text="Screenshot of security configuration name field.":::

## Add a rule collection
-1. Select **+ Add a rule collection** to create your set of rules.
-
- :::image type="content" source="./media/how-to-block-network-traffic-portal/add-rule-collection-button.png" alt-text="Screenshot of add a rule collection button.":::
-
-1. Enter a *Name* to identify this rule collection and then select the *Network groups* you want to apply the set of rules to.
+1. Enter a *Name* to identify this rule collection and then select the *Target network groups* you want to apply the set of rules to.
:::image type="content" source="./media/how-to-block-network-traffic-portal/rule-collection-target.png" alt-text="Screenshot of rule collection name and target network groups.":::

## Add a security rule
-1. Select **+ Add a rule** from the *Add a rule collection page*.
+1. Select **+ Add** from the *Add a rule collection* page.
:::image type="content" source="./media/how-to-block-network-traffic-portal/add-rule-button.png" alt-text="Screenshot of add a rule button.":::
Before you start to configure security admin rules, confirm that you've done the
| Destination service tag | This field will appear when you select the destination type of *Service tag*. Select service tag(s) for services you want to specify as the destination. See [Available service tags](../virtual-network/service-tags-overview.md#available-service-tags) for the list of supported tags. |
| Destination port | Enter a single port number or a port range such as (1024-65535). When defining more than one port or port range, separate them using a comma. To specify any port, enter *. Enter **3389** for this example. |
-1. Repeat steps 1-3 again if you want to add more rule to the rule collection.
+1. Repeat steps 1-3 again if you want to add more rules to the rule collection.
-1. Once you're satisfied with all the rules you wanted to create, select **Save** to add the rule collection to the security admin configuration.
+1. Once you're satisfied with all the rules you want to create, select **Add** to add the rule collection to the security admin configuration.
- :::image type="content" source="./media/how-to-block-network-traffic-portal/save-rule-collection.png" alt-text="Screenshot of a rule in a rule collection.":::
+ :::image type="content" source="./media/how-to-block-network-traffic-portal/save-rule-collection.png" alt-text="Screenshot of a rule collection.":::
-1. Then select **Add** to create the configuration.
+1. Then select **Review + Create** and **Create** to complete the security configuration.
- :::image type="content" source="./media/how-to-block-network-traffic-portal/create-security-configuration.png" alt-text="Screenshot of add button for creating a security configuration.":::
## Deploy the security admin configuration
If you just created a new security admin configuration, make sure to deploy this
:::image type="content" source="./media/how-to-block-network-traffic-portal/deploy-configuration.png" alt-text="Screenshot of deploy a configuration button.":::
-1. Select the configuration type of **SecurityAdmin** and the configuration you created in the last section. Then choose the region(s) you would like to deploy this configuration to and select **Deploy**.
+1. Select the configuration type of **Include security admin in your goal state** and the security configuration you created in the last section. Then choose the region(s) you would like to deploy this configuration to.
:::image type="content" source="./media/how-to-block-network-traffic-portal/deploy-security-configuration.png" alt-text="Screenshot of deploy a security configuration page.":::
-1. Select **OK** to confirm you want to overwrite any existing configuration and deploy the security admin configuration.
-
- :::image type="content" source="./media/how-to-block-network-traffic-portal/confirm-security.png" alt-text="Screenshot of confirmation message for deploying a security configuration.":::
+1. Select **Next** and **Deploy** to deploy the security admin configuration.
## Update existing security admin configuration
virtual-network-manager How To Block Network Traffic Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vs-azure-tools-storage-manage-with-storage-explorer.md
As you enter text in the search box, Storage Explorer displays all resources tha
## Next steps

* [Manage Azure Blob storage resources with Storage Explorer](vs-azure-tools-storage-explorer-blobs.md)
-* [Work with data using Azure Storage Explorer](./cosmos-db/storage-explorer.md)
* [Manage Azure Data Lake Store resources with Storage Explorer](./data-lake-store/data-lake-store-in-storage-explorer.md)

[14]: ./media/vs-azure-tools-storage-manage-with-storage-explorer/get-shared-access-signature-for-storage-explorer.png
web-application-firewall Ag Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/ag-overview.md
description: This article provides an overview of Web Application Firewall (WAF)
Previously updated : 09/02/2021 Last updated : 04/21/2022
Azure Web Application Firewall (WAF) on Azure Application Gateway provides centralized protection of your web applications from common exploits and vulnerabilities. Web applications are increasingly targeted by malicious attacks that exploit commonly known vulnerabilities. SQL injection and cross-site scripting are among the most common attacks.
-WAF on Application Gateway is based on [Core Rule Set (CRS)](https://owasp.org/www-project-modsecurity-core-rule-set/) 3.1, 3.0, or 2.2.9 from the Open Web Application Security Project (OWASP).
+WAF on Application Gateway is based on the [Core Rule Set (CRS)](application-gateway-crs-rulegroups-rules.md) from the Open Web Application Security Project (OWASP).
-All of the WAF features listed below exist inside of a WAF Policy. You can create multiple policies, and they can be associated with an Application Gateway, to individual listeners, or to path-based routing rules on an Application Gateway. This way, you can have separate policies for each site behind your Application Gateway if needed. For more information on WAF Policies, see [Create a WAF Policy](create-waf-policy-ag.md).
+All of the WAF features listed below exist inside of a WAF policy. You can create multiple policies, and they can be associated with an Application Gateway, to individual listeners, or to path-based routing rules on an Application Gateway. This way, you can have separate policies for each site behind your Application Gateway if needed. For more information on WAF policies, see [Create a WAF Policy](create-waf-policy-ag.md).
> [!Note]
> Application Gateway has two versions of the WAF SKU: Application Gateway WAF_v1 and Application Gateway WAF_v2. WAF policy associations are only supported for the Application Gateway WAF_v2 SKU.
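As a sketch, a gateway-level policy association can be expressed in Bicep using the same `firewallPolicy` reference that listeners and path-based rules also accept. This is an illustrative fragment, not a complete deployment: the symbolic names are assumptions, and required gateway properties (IP configurations, listeners, routing rules) are omitted.

```bicep
// Sketch: associate an existing WAF policy with a WAF_v2 application gateway.
// Names are illustrative; gatewayIPConfigurations, httpListeners, and
// requestRoutingRules are omitted for brevity.
resource appGateway 'Microsoft.Network/applicationGateways@2021-05-01' = {
  name: appGatewayName
  location: location
  properties: {
    sku: {
      name: 'WAF_v2'
      tier: 'WAF_v2'
    }
    firewallPolicy: {
      id: wafPolicy.id // gateway-level association
    }
    // ...remaining required gateway properties...
  }
}
```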
This section describes the core benefits that WAF on Application Gateway provide
## Features -- SQL-injection protection.-- Cross-site scripting protection.-- Protection against other common web attacks, such as command injection, HTTP request smuggling, HTTP response splitting, and remote file inclusion.-- Protection against HTTP protocol violations.-- Protection against HTTP protocol anomalies, such as missing host user-agent and accept headers.-- Protection against crawlers and scanners.-- Detection of common application misconfigurations (for example, Apache and IIS).
+- SQL injection protection.
+- Cross-site scripting protection.
+- Protection against other common web attacks, such as command injection, HTTP request smuggling, HTTP response splitting, and remote file inclusion.
+- Protection against HTTP protocol violations.
+- Protection against HTTP protocol anomalies, such as missing host user-agent and accept headers.
+- Protection against crawlers and scanners.
+- Detection of common application misconfigurations (for example, Apache and IIS).
- Configurable request size limits with lower and upper bounds.
- Exclusion lists let you omit certain request attributes from a WAF evaluation. A common example is Active Directory-inserted tokens that are used for authentication or password fields.
- Create custom rules to suit the specific needs of your applications.
A web application delivered by Application Gateway can have a WAF policy associa
### Core rule sets
-Application Gateway supports three rule sets: CRS 3.1, CRS 3.0, and CRS 2.2.9. These rules protect your web applications from malicious activity.
+Application Gateway supports multiple rule sets, including CRS 3.2, CRS 3.1, and CRS 3.0. These rules protect your web applications from malicious activity.
For more information, see [Web application firewall CRS rule groups and rules](application-gateway-crs-rulegroups-rules.md).
The geomatch operator is now available for custom rules. See [geomatch custom ru
For more information on custom rules, see [Custom Rules for Application Gateway.](custom-waf-rules-overview.md)
-### Bot Mitigation
+### Bot mitigation
A managed Bot protection rule set can be enabled for your WAF to block or log requests from known malicious IP addresses, alongside the managed ruleset. The IP addresses are sourced from the Microsoft Threat Intelligence feed. Intelligent Security Graph powers Microsoft threat intelligence and is used by multiple services including Microsoft Defender for Cloud.
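Enabling bot protection alongside the CRS can be sketched as the following Bicep fragment of a WAF policy's `managedRules`. The rule set version shown is an assumption; check the currently supported version for your deployment.

```bicep
// Fragment of a WAF policy's managedRules: CRS plus the managed bot
// protection rule set (version value is an assumption).
managedRules: {
  managedRuleSets: [
    {
      ruleSetType: 'OWASP'
      ruleSetVersion: '3.2'
    }
    {
      ruleSetType: 'Microsoft_BotManagerRuleSet'
      ruleSetVersion: '0.1'
    }
  ]
}
```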
The Application Gateway WAF can be configured to run in the following two modes:
> [!NOTE]
> It is recommended that you run a newly deployed WAF in Detection mode for a short period of time in a production environment. This provides the opportunity to obtain [firewall logs](../../application-gateway/application-gateway-diagnostics.md#firewall-log) and update any exceptions or [custom rules](./custom-waf-rules-overview.md) prior to transitioning to Prevention mode. This can help reduce the occurrence of unexpected blocked traffic.
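In a WAF policy resource, the mode is controlled through the policy settings. A minimal sketch of the relevant fragment follows; the property values shown are illustrative.

```bicep
// Fragment of a WAF policy's properties: start in Detection mode, then
// change mode to 'Prevention' after reviewing the firewall logs.
policySettings: {
  state: 'Enabled'
  mode: 'Detection'
}
```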
+### WAF engines
+
+The Azure web application firewall (WAF) engine is the component that inspects traffic and determines whether a request includes a signature that represents a potential attack. When you use CRS 3.2 or later, your WAF runs the new [WAF engine](waf-engine.md), which gives you higher performance and an improved set of features. When you use earlier versions of the CRS, your WAF runs on an older engine. New features will only be available on the new Azure WAF engine.
+
### Anomaly Scoring mode

OWASP has two modes for deciding whether to block traffic: Traditional mode and Anomaly Scoring mode.
web-application-firewall Application Gateway Crs Rulegroups Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/application-gateway-crs-rulegroups-rules.md
description: This page provides information on web application firewall CRS rule
Previously updated : 04/07/2022 Last updated : 04/28/2022
Application Gateway web application firewall (WAF) protects web applications fro
## Core rule sets
-The Application Gateway WAF comes pre-configured with CRS 3.1 by default. But you can choose to use CRS 3.2, 3.0, or 2.2.9 instead.
+The Application Gateway WAF comes pre-configured with CRS 3.1 by default, but you can choose to use any other supported CRS version.
-
-CRS 3.2 (preview) offers a new engine and new rule sets defending against Java infections, an initial set of file upload checks, fixed false positives, and more.
-
-CRS 3.1 offers reduced false positives compared with CRS 3.0 and 2.2.9. You can also [customize rules to suit your needs](application-gateway-customize-waf-rules-portal.md).
+CRS 3.2 offers a new engine and new rule sets defending against Java infections, an initial set of file upload checks, and fewer false positives compared with earlier versions of CRS. You can also [customize rules to suit your needs](application-gateway-customize-waf-rules-portal.md). Learn more about the new [Azure WAF engine](waf-engine.md).
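As a sketch, pinning a WAF policy to CRS 3.2 can be expressed in Bicep as follows; the resource name and parameters are illustrative.

```bicep
// Sketch: a WAF policy configured to use CRS 3.2.
resource wafPolicy 'Microsoft.Network/ApplicationGatewayWebApplicationFirewallPolicies@2021-05-01' = {
  name: wafPolicyName
  location: location
  properties: {
    managedRules: {
      managedRuleSets: [
        {
          ruleSetType: 'OWASP'
          ruleSetVersion: '3.2'
        }
      ]
    }
  }
}
```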
> [!div class="mx-imgBorder"]
> ![Manages rules](../media/application-gateway-crs-rulegroups-rules/managed-rules-01.png)

The WAF protects against the following web vulnerabilities:
-- SQL-injection attacks
-- Cross-site scripting attacks
-- Other common attacks, such as command injection, HTTP request smuggling, HTTP response splitting, and remote file inclusion
-- HTTP protocol violations
-- HTTP protocol anomalies, such as missing host user-agent and accept headers
-- Bots, crawlers, and scanners
-- Common application misconfigurations (for example, Apache and IIS)
+- SQL-injection attacks
+- Cross-site scripting attacks
+- Other common attacks, such as command injection, HTTP request smuggling, HTTP response splitting, and remote file inclusion
+- HTTP protocol violations
+- HTTP protocol anomalies, such as missing host user-agent and accept headers
+- Bots, crawlers, and scanners
+- Common application misconfigurations (for example, Apache and IIS)
-### OWASP CRS 3.2 (preview)
+### OWASP CRS 3.2
CRS 3.2 includes 14 rule groups, as shown in the following table. Each group contains multiple rules, which can be disabled.

> [!NOTE]
-> CRS 3.2 is only available on the WAF_v2 SKU.
+> CRS 3.2 is only available on the WAF_v2 SKU. Because CRS 3.2 runs on the new Azure WAF engine, you can't downgrade to CRS 3.1 or earlier. If you need to downgrade, [contact Azure Support](https://aka.ms/azuresupportrequest).
|Rule group|Description|
|---|---|
CRS 3.0 includes 13 rule groups, as shown in the following table. Each group con
CRS 2.2.9 includes 10 rule groups, as shown in the following table. Each group contains multiple rules, which can be disabled.
+> [!NOTE]
+> CRS 2.2.9 is no longer supported for new WAF policies. We recommend you upgrade to the latest CRS version.
+
|Rule group|Description|
|---|---|
|**[crs_20_protocol_violations](#crs20)**|Protect against protocol violations (such as invalid characters or a GET with a request body)|
CRS 2.2.9 includes 10 rule groups, as shown in the following table. Each group c
The following rule groups and rules are available when using Web Application Firewall on Application Gateway.
-# [OWASP 3.2 (preview)](#tab/owasp32)
+# [OWASP 3.2](#tab/owasp32)
## <a name="owasp32"></a> 3.2 rule sets
web-application-firewall Application Gateway Waf Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/application-gateway-waf-configuration.md
description: This article provides information on Web Application Firewall exclu
Previously updated : 03/08/2022 Last updated : 04/21/2022
# Web Application Firewall exclusion lists
-The Azure Application Gateway Web Application Firewall (WAF) provides protection for web applications. This article describes the configuration for WAF exclusion lists. These settings are located in the WAF Policy associated to your Application Gateway. To learn more about WAF Policies, see [Azure Web Application Firewall on Azure Application Gateway](ag-overview.md) and [Create Web Application Firewall policies for Application Gateway](create-waf-policy-ag.md)
+The Azure Application Gateway Web Application Firewall (WAF) provides protection for web applications. This article describes the configuration for WAF exclusion lists. These settings are located in the WAF policy associated to your Application Gateway. To learn more about WAF policies, see [Azure Web Application Firewall on Azure Application Gateway](ag-overview.md) and [Create Web Application Firewall policies for Application Gateway](create-waf-policy-ag.md).
-Sometimes Web Application Firewall (WAF) might block a request that you want to allow for your application. WAF exclusion lists allow you to omit certain request attributes from a WAF evaluation. The rest of the request is evaluated as normal.
+Sometimes WAF might block a request that you want to allow for your application. WAF exclusion lists allow you to omit certain request attributes from a WAF evaluation. The rest of the request is evaluated as normal.
-For example, Active Directory inserts tokens that are used for authentication. When used in a request header, these tokens can contain special characters that may trigger a false positive from the WAF rules. By adding the header to an exclusion list, you can configure WAF to ignore the header, but WAF still evaluates the rest of the request.
+For example, Active Directory inserts tokens that are used for authentication. When used in a request header, these tokens can contain special characters that might trigger a false positive detection from the WAF rules. By adding the header to an exclusion list, you can configure WAF to ignore the header, but WAF still evaluates the rest of the request.
-Exclusion lists are global in scope.
+You can configure exclusions to apply when specific WAF rules are evaluated, or to apply globally to the evaluation of all WAF rules. Exclusion rules apply to your whole web application.
-To set exclusion lists in the Azure portal, configure **Exclusions** in the WAF policy resource's **Policy settings** page:
+## Identify request attributes to exclude
+When you configure a WAF exclusion, you must specify the attributes of the request that should be excluded from the WAF evaluation. You can configure a WAF exclusion for the following request attributes:
-## Attributes
-
-The following attributes can be added to exclusion lists by name. The values of the chosen field aren't evaluated against WAF rules, but their names still are (see Example 1 below, the value of the User-Agent header is excluded from WAF evaluation). The exclusion lists remove inspection of the field's value.
-
-* Request Headers
-* Request Cookies
+* Request headers
+* Request cookies
* Request attribute name (args) can be added as an exclusion element, such as:
  * Form field name
  * JSON entity
  * URL query string args
-You can specify an exact request header, body, cookie, or query string attribute match. Or, you can optionally specify partial matches. Exclusion rules are global in scope, and apply to all pages and all rules.
-
-The following are the supported match criteria operators:
+You can specify an exact request header, body, cookie, or query string attribute match. Or, you can specify partial matches. Use the following operators to configure the exclusion:
- **Equals**: This operator is used for an exact match. As an example, for selecting a header named **bearerToken**, use the equals operator with the selector set as **bearerToken**.
- **Starts with**: This operator matches all fields that start with the specified selector value.
The following are the supported match criteria operators:
- **Contains**: This operator matches all request fields that contain the specified selector value.
- **Equals any**: This operator matches all request fields. The selector value is set to `*`.
-In all cases matching is case insensitive and regular expression aren't allowed as selectors.
+In all cases matching is case insensitive. Regular expressions aren't allowed as selectors.
> [!NOTE] > For more information and troubleshooting help, see [WAF troubleshooting](web-application-firewall-troubleshoot.md).
-## Examples
+### Request attributes by keys and values
+
+When you configure an exclusion, you need to determine whether you want to exclude the key or the value from WAF evaluation.
+
+For example, suppose your requests include this header:
+
+```
+My-Header: 1=1
+```
+
+The value of the header (`1=1`) might be detected as an attack by the WAF. But if you know this is a legitimate value for your scenario, you can configure an exclusion for the *value* of the header. To do so, you use the **RequestHeaderValues** request attribute, and select the header name (`My-Header`) with the value that should be ignored.
+
+> [!NOTE]
+> Request attributes by key and values are only available in CRS 3.2 and newer.
+>
+> Request attributes by names work the same way as request attributes by values, and are included for backward compatibility with CRS 3.1 and earlier versions. We recommend you use request attributes by values instead of attributes by names. For example, use **RequestHeaderValues** instead of **RequestHeaderNames**.
+
+In contrast, if your WAF detects the header's name (`My-Header`) as an attack, you could configure an exclusion for the header *key* by using the **RequestHeaderKeys** request attribute. The **RequestHeaderKeys** attribute is only available in CRS 3.2 and newer.
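Using the same WAF policy exclusion schema shown in the Bicep examples in this article, the two exclusions for the hypothetical `My-Header` header can be sketched as follows (the header name is the illustrative one from the example above; CRS 3.2 or newer is assumed).

```bicep
// Fragment of a WAF policy's managedRules.exclusions (CRS 3.2 or newer).
exclusions: [
  {
    // Ignore the *value* of the hypothetical My-Header header;
    // the header name itself is still evaluated.
    matchVariable: 'RequestHeaderValues'
    selectorMatchOperator: 'Equals'
    selector: 'My-Header'
  }
  {
    // Ignore the header *key* (name) itself.
    matchVariable: 'RequestHeaderKeys'
    selectorMatchOperator: 'Equals'
    selector: 'My-Header'
  }
]
```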
+## Exclusion scopes
-The following examples demonstrate the use of exclusions.
+Exclusions can be configured to apply to a specific set of WAF rules, to rulesets, or globally across all rules.
-### Example 1
+> [!TIP]
+> It's a good practice to make exclusions as narrow and specific as possible, to avoid accidentally leaving room for attackers to exploit your system. When you need to add an exclusion rule, use per-rule exclusions wherever possible.
-In this example, you want to exclude the user-agent header. The user-agent request header contains a characteristic string that allows the network protocol peers to identify the application type, operating system, software vendor, or software version of the requesting software user agent. For more information, see [User-Agent](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/User-Agent).
+### Per-rule exclusions
-There can be any number of reasons to disable evaluating this header. There could be a string that the WAF sees and assumes it's malicious. For example, the classic SQL attack "x=x" in a string. In some cases, this can be legitimate traffic. So you might need to exclude this header from WAF evaluation.
+You can configure an exclusion for a specific rule, group of rules, or rule set. You must specify the rule or rules that the exclusion applies to. You also need to specify the request attribute that should be excluded from the WAF evaluation.
-The following Azure PowerShell cmdlet excludes the user-agent header from evaluation:
+Per-rule exclusions are available when you use the OWASP (CRS) ruleset version 3.2 or later.
+
+#### Example
+
+Suppose you want the WAF to ignore the value of the `User-Agent` request header. The `User-Agent` header contains a characteristic string that allows the network protocol peers to identify the application type, operating system, software vendor, or software version of the requesting software user agent. For more information, see [User-Agent](https://developer.mozilla.org/docs/Web/HTTP/Headers/User-Agent).
+
+There can be any number of reasons to disable evaluating this header. There could be a string that the WAF detects and assumes it's malicious. For example, the `User-Agent` header might include the classic SQL injection attack `x=x` in a string. In some cases, this can be legitimate traffic. So you might need to exclude this header from WAF evaluation.
+
+You can use the following approaches to exclude the `User-Agent` header from evaluation by all of the SQL injection rules:
+
+> [!NOTE]
+> As of early May 2022, we are rolling out updates to the Azure portal for these features. If you don't see configuration options in the portal, please use PowerShell, the Azure CLI, Bicep, or ARM templates to configure global or per-rule exclusions.
+
+# [Azure PowerShell](#tab/powershell)
```azurepowershell
-$exclusion1 = New-AzApplicationGatewayFirewallExclusionConfig `
- -MatchVariable "RequestHeaderNames" `
- -SelectorMatchOperator "Equals" `
- -Selector "User-Agent"
+$ruleGroupEntry = New-AzApplicationGatewayFirewallPolicyExclusionManagedRuleGroup `
+ -RuleGroupName 'REQUEST-942-APPLICATION-ATTACK-SQLI'
+
+$exclusionManagedRuleSet = New-AzApplicationGatewayFirewallPolicyExclusionManagedRuleSet `
+ -RuleSetType 'OWASP' `
+ -RuleSetVersion '3.2' `
+ -RuleGroup $ruleGroupEntry
+
+$exclusionEntry = New-AzApplicationGatewayFirewallPolicyExclusion `
+ -MatchVariable "RequestHeaderValues" `
+ -SelectorMatchOperator 'Equals' `
+ -Selector 'User-Agent' `
+ -ExclusionManagedRuleSet $exclusionManagedRuleSet
+
+$wafPolicy = Get-AzApplicationGatewayFirewallPolicy `
+ -Name $wafPolicyName `
+ -ResourceGroupName $resourceGroupName
+$wafPolicy.ManagedRules[0].Exclusions.Add($exclusionEntry)
+$wafPolicy | Set-AzApplicationGatewayFirewallPolicy
+```
+
+# [Azure CLI](#tab/cli)
+
+```azurecli
+az network application-gateway waf-policy managed-rule exclusion rule-set add \
+ --resource-group $resourceGroupName \
+ --policy-name $wafPolicyName \
+ --type OWASP \
+ --version 3.2 \
+ --group-name 'REQUEST-942-APPLICATION-ATTACK-SQLI' \
+ --match-variable 'RequestHeaderValues' \
+ --match-operator 'Equals' \
+ --selector 'User-Agent'
```
-### Example 2
-This example excludes the value in the *user* parameter that is passed in the request via the URL. For example, say it's common in your environment for the user field to contain a string that the WAF views as malicious content, so it blocks it. You can exclude the user parameter in this case so that the WAF doesn't evaluate anything in the field.
+# [Bicep](#tab/bicep)
-The following Azure PowerShell cmdlet excludes the user parameter from evaluation:
+```bicep
+resource wafPolicy 'Microsoft.Network/ApplicationGatewayWebApplicationFirewallPolicies@2021-05-01' = {
+ name: wafPolicyName
+ location: location
+ properties: {
+ managedRules: {
+ managedRuleSets: [
+ {
+ ruleSetType: 'OWASP'
+ ruleSetVersion: '3.2'
+ }
+ ]
+ exclusions: [
+ {
+ matchVariable: 'RequestHeaderValues'
+ selectorMatchOperator: 'Equals'
+ selector: 'User-Agent'
+ exclusionManagedRuleSets: [
+ {
+ ruleSetType: 'OWASP'
+ ruleSetVersion: '3.2'
+ ruleGroups: [
+ {
+ ruleGroupName: 'REQUEST-942-APPLICATION-ATTACK-SQLI'
+ }
+ ]
+ }
+ ]
+ }
+ ]
+ }
+ }
+}
+```
+
+# [ARM template](#tab/armtemplate)
+
+```json
+{
+ "type": "Microsoft.Network/ApplicationGatewayWebApplicationFirewallPolicies",
+ "apiVersion": "2021-05-01",
+ "name": "[parameters('wafPolicyName')]",
+ "location": "[parameters('location')]",
+ "properties": {
+ "managedRules": {
+ "managedRuleSets": [
+ {
+ "ruleSetType": "OWASP",
+ "ruleSetVersion": "3.2"
+ }
+ ],
+ "exclusions": [
+ {
+ "matchVariable": "RequestHeaderValues",
+ "selectorMatchOperator": "Equals",
+ "selector": "User-Agent",
+ "exclusionManagedRuleSets": [
+ {
+ "ruleSetType": "OWASP",
+ "ruleSetVersion": "3.2",
+ "ruleGroups": [
+ {
+ "ruleGroupName": "REQUEST-942-APPLICATION-ATTACK-SQLI"
+ }
+ ]
+ }
+ ]
+ }
+ ]
+ }
+ }
+}
+```
+++
+### Global exclusions
+
+You can configure an exclusion to apply across all WAF rules.
+
+#### Example
+
+Suppose you want to exclude the value in the *user* parameter that is passed in the request via the URL. For example, say it's common in your environment for the `user` query string argument to contain a string that the WAF views as malicious content, so it blocks it. You can exclude all query string arguments where the name begins with the word `user`, so that the WAF doesn't evaluate the field's value.
+
+The following example shows how you can exclude the `user` query string argument from evaluation:
+
+> [!NOTE]
+> As of early May 2022, we are rolling out updates to the Azure portal for these features. If you don't see configuration options in the portal, please use PowerShell, the Azure CLI, Bicep, or ARM templates to configure global or per-rule exclusions.
+
+# [Azure PowerShell](#tab/powershell)
```azurepowershell
-$exclusion2 = New-AzApplicationGatewayFirewallExclusionConfig `
- -MatchVariable "RequestArgNames" `
- -SelectorMatchOperator "StartsWith" `
- -Selector "user"
+$exclusion = New-AzApplicationGatewayFirewallExclusionConfig `
+ -MatchVariable 'RequestArgNames' `
+ -SelectorMatchOperator 'StartsWith' `
+ -Selector 'user'
+```
+
+# [Azure CLI](#tab/cli)
+
+```azurecli
+az network application-gateway waf-policy managed-rule exclusion add \
+ --resource-group $resourceGroupName \
+ --policy-name $wafPolicyName \
+ --match-variable 'RequestArgNames' \
+ --selector-match-operator 'StartsWith' \
+ --selector 'user'
+```
+
+# [Bicep](#tab/bicep)
+
+```bicep
+resource wafPolicy 'Microsoft.Network/ApplicationGatewayWebApplicationFirewallPolicies@2021-05-01' = {
+ name: wafPolicyName
+ location: location
+ properties: {
+ managedRules: {
+ managedRuleSets: [
+ {
+ ruleSetType: 'OWASP'
+ ruleSetVersion: '3.2'
+ }
+ ]
+ exclusions: [
+ {
+ matchVariable: 'RequestArgNames'
+ selectorMatchOperator: 'StartsWith'
+ selector: 'user'
+ }
+ ]
+ }
+ }
+}
```
-So if the URL `http://www.contoso.com/?user%3c%3e=joe` is passed to the WAF, it won't evaluate the string **joe**, but it will still evaluate the parameter name **user%3c%3e**.
+
+# [ARM template](#tab/armtemplate)
+
+```json
+{
+ "type": "Microsoft.Network/ApplicationGatewayWebApplicationFirewallPolicies",
+ "apiVersion": "2021-05-01",
+ "name": "[parameters('wafPolicyName')]",
+ "location": "[parameters('location')]",
+ "properties": {
+ "managedRules": {
+ "managedRuleSets": [
+ {
+ "ruleSetType": "OWASP",
+ "ruleSetVersion": "3.2"
+ }
+ ],
+ "exclusions": [
+ {
+ "matchVariable": "RequestArgNames",
+ "selectorMatchOperator": "StartsWith",
+ "selector": "user"
+ }
+ ]
+ }
+ }
+}
+```
+++
+So if the URL `http://www.contoso.com/?user%3c%3e=joe` is scanned by the WAF, it won't evaluate the string **joe**, but it will still evaluate the parameter name **user%3c%3e**.
## Next steps
web-application-firewall Application Gateway Web Application Firewall Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/application-gateway-web-application-firewall-portal.md
Previously updated : 05/19/2021 Last updated : 04/26/2022 #Customer intent: As an IT administrator, I want to use the Azure portal to set up an application gateway with Web Application Firewall so I can protect my applications.
Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.c
- **Resource group**: Select **myResourceGroupAG** for the resource group. If it doesn't exist, select **Create new** to create it.
- **Application gateway name**: Enter *myAppGateway* for the name of the application gateway.
- **Tier**: Select **WAF V2**.
+ - **WAF Policy**: Select **Create new**, type a name for the new policy, and then select **OK**.
+ This creates a basic WAF policy with a managed Core Rule Set (CRS).
- ![Create new application gateway: Basics](../media/application-gateway-web-application-firewall-portal/application-gateway-create-basics.png)
+ :::image type="content" source="../media/application-gateway-web-application-firewall-portal/application-gateway-create-basics.png" alt-text="Screenshot of Create new application gateway: Basics tab." lightbox="../media/application-gateway-web-application-firewall-portal/application-gateway-create-basics.png":::
2. For Azure to communicate between the resources that you create, it needs a virtual network. You can either create a new virtual network or use an existing one. In this example, you'll create a new virtual network at the same time that you create the application gateway. Application Gateway instances are created in separate subnets. You create two subnets in this example: one for the application gateway, and another for the backend servers.
In this example, you install IIS on the virtual machines only to verify Azure cr
![Install custom extension](../media/application-gateway-web-application-firewall-portal/application-gateway-extension.png)
-2. Set the location parameter for you environment, and then run the following command to install IIS on the virtual machine:
+2. Set the location parameter for your environment, and then run the following command to install IIS on the virtual machine:
```azurepowershell-interactive Set-AzVMExtension `
In this example, you install IIS on the virtual machines only to verify Azure cr
7. Wait for the deployment to complete before proceeding to the next step.
-
-## Create and link a Web Application Firewall policy
-
-All of the WAF customizations and settings are in a separate object, called a WAF Policy. The policy must be associated with your Application Gateway.
-
-Create a basic WAF policy with a managed Default Rule Set (DRS).
-
-1. On the upper left side of the portal, select **Create a resource**. Search for **WAF**, select **Web Application Firewall**, then select **Create**.
-2. On **Create a WAF policy** page, **Basics** tab, enter or select the following information, accept the defaults for the remaining settings, and then select **Review + create**:
-
- |Setting |Value |
- |||
- |Policy for |Regional WAF (Application Gateway)|
- |Subscription |Select your subscription name|
- |Resource group |Select **myResourceGroupAG**|
- |Policy name |Type a unique name for your WAF policy.|
-1. Select **Next : Managed rules**.
-1. Accept the defaults and then select **Next : Policy settings**.
-1. Accept the default, and then select **Next : Custom rules**.
-1. Select **Next : Association**.
-1. Select **Add association** and then select **Application Gateway**.
-1. Select the checkbox for **Apply the Web Application Firewall policy configuration even if it is different from the current configuration**.
-1. Select **Add**.
-
- > [!NOTE]
- > If you assign a policy to your Application Gateway (or listener) that already has a policy in place, the original policy is overwritten and replaced by the new policy.
-4. Select **Review + create**, then select **Create**.
- ## Test the application gateway Although IIS isn't required to create the application gateway, you installed it to verify whether Azure successfully created the application gateway. Use IIS to test the application gateway:
web-application-firewall Bot Protection Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/bot-protection-overview.md
description: This article provides an overview of web application firewall (WAF)
Previously updated : 07/30/2021 Last updated : 04/21/2022
You can enable a managed bot protection rule set for your WAF to block or log re
## Use with OWASP rulesets
-You can use the Bot Protection ruleset alongside any of the OWASP rulesets (2.2.9, 3.0, and 3.1). Only one OWASP ruleset can be used at any given time. The bot protection ruleset contains an additional rule that appears in its own ruleset. It's titled **Microsoft_BotManagerRuleSet_0.1**, and you can enable or disable it like the other OWASP rules.
+You can use the Bot Protection ruleset alongside any of the OWASP rulesets with the Application Gateway WAF v2 SKU. Only one OWASP ruleset can be used at any given time. The bot protection ruleset contains an additional rule that appears in its own ruleset. It's titled **Microsoft_BotManagerRuleSet_0.1**, and you can enable or disable it like the other OWASP rules.
![Bot ruleset](../media/bot-protection-overview/bot-ruleset.png)
web-application-firewall Waf Engine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/waf-engine.md
+
+ Title: WAF engine on Azure Application Gateway
+
+description: This article provides an overview of the Azure WAF engine.
+++ Last updated : 04/29/2022++++
+# WAF engine on Azure Application Gateway
+
+The Azure Web Application Firewall (WAF) engine is the component that inspects traffic, determines whether a request includes a signature that represents a potential attack, and takes the appropriate action depending on the configuration.
+
+## Next generation of WAF engine
+
+The new WAF engine is a high-performance, scalable, proprietary Microsoft engine that offers significant improvements over the previous WAF engine.
+
+The new engine, released with CRS 3.2, provides the following benefits:
+
+* **Improved performance:** Significant improvements in WAF latency, including P99 POST and GET latencies. We observed up to an approximately 8x reduction in P99 tail latency when processing POST requests, and an approximately 4x reduction when processing GET requests.
+* **Increased scale:** Higher requests per second (RPS), using the same compute power and with the ability to process larger request sizes. Our next-generation engine can scale up to 8 times more RPS using the same compute power, and has an ability to process 16 times larger request sizes (up to 2 MB request sizes), which was not possible with the previous engine.
+* **Better protection:** The redesigned engine's efficient regex processing offers better protection against regular expression denial of service (ReDoS) attacks while maintaining a consistent latency experience.
+* **Richer feature set:** New features and future enhancements are available only through the new engine.
+
+## Support for new features
+
+Many new features are supported only on the new WAF engine. These features include:
+
+* [CRS 3.2](application-gateway-crs-rulegroups-rules.md#owasp-crs-32)
+ * Increased request body size limit to 2 MB
+ * Increased file upload limit to 4 GB
+* [WAF v2 metrics](application-gateway-waf-metrics.md#application-gateway-waf-v2-metrics)
+* [Per rule exclusions](application-gateway-waf-configuration.md) and support for exclusion attributes by name
+
+New WAF features will only be released with later versions of CRS on the new WAF engine.
+
+## Request logging for custom rules
+
+There's a difference between how the previous engine and the new WAF engine log requests when a custom rule defines the action type as *Log*.
+
+When your WAF runs in prevention mode, the previous engine logs the request's action type as *Blocked* even though the request is allowed through by the custom rule. In detection mode, the previous engine logs the same request's action type as *Detected*.
+
+In contrast, the new WAF engine logs the request action type as *Log*, whether the WAF is running in prevention or detection mode.
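The difference can be summarized in a small sketch (assumed behavior condensed from the description above, not engine code; the function name and parameters are illustrative):

```python
def logged_action(engine: str, waf_mode: str, rule_action: str) -> str:
    """Action type written to the WAF log when a custom rule matches."""
    if rule_action != "Log":
        return rule_action  # Allow/Block custom rules log their own action
    if engine == "previous":
        # The previous engine reports the WAF mode rather than the
        # custom rule's configured action.
        return "Blocked" if waf_mode == "Prevention" else "Detected"
    # The new engine records the custom rule's actual action in either mode.
    return "Log"
```

For example, a custom *Log* rule matched in prevention mode is logged as `Blocked` by the previous engine but as `Log` by the new engine.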
+
+## Next steps
+
+Learn more about [WAF managed rules](application-gateway-crs-rulegroups-rules.md).