Updates from: 03/28/2023 01:16:47
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Configure Authentication In Sample Node Web App With Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-in-sample-node-web-app-with-api.md
Previously updated : 07/07/2022 Last updated : 03/24/2023
active-directory-b2c Data Residency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/data-residency.md
Data resides in the **United States** for the following locations:
Data resides in **Europe** for the following locations:
-> Algeria (DZ), Austria (AT), Azerbaijan (AZ), Bahrain (BH), Belarus (BY), Belgium (BE), Bulgaria (BG), Croatia (HR), Cyprus (CY), Czech Republic (CZ), Denmark (DK), Egypt (EG), Estonia (EE), Finland (FT), France (FR), Germany (DE), Greece (GR), Hungary (HU), Iceland (IS), Ireland (IE), Israel (IL), Italy (IT), Jordan (JO), Kazakhstan (KZ), Kenya (KE), Kuwait (KW), Latvia (LV), Lebanon (LB), Liechtenstein (LI), Lithuania (LT), Luxembourg (LU), North Macedonia (ML), Malta (MT), Montenegro (ME), Morocco (MA), Netherlands (NL), Nigeria (NG), Norway (NO), Oman (OM), Pakistan (PK), Poland (PL), Portugal (PT), Qatar (QA), Romania (RO), Russia (RU), Saudi Arabia (SA), Serbia (RS), Slovakia (SK), Slovenia (ST), South Africa (ZA), Spain (ES), Sweden (SE), Switzerland (CH), Tunisia (TN), Turkey (TR), Ukraine (UA), United Arab Emirates (AE) and United Kingdom (GB)
+> Algeria (DZ), Austria (AT), Azerbaijan (AZ), Bahrain (BH), Belarus (BY), Belgium (BE), Bulgaria (BG), Croatia (HR), Cyprus (CY), Czech Republic (CZ), Denmark (DK), Egypt (EG), Estonia (EE), Finland (FT), France (FR), Germany (DE), Greece (GR), Hungary (HU), Iceland (IS), Ireland (IE), Israel (IL), Italy (IT), Jordan (JO), Kazakhstan (KZ), Kenya (KE), Kuwait (KW), Latvia (LV), Lebanon (LB), Liechtenstein (LI), Lithuania (LT), Luxembourg (LU), North Macedonia (ML), Malta (MT), Montenegro (ME), Morocco (MA), Netherlands (NL), Nigeria (NG), Norway (NO), Oman (OM), Pakistan (PK), Poland (PL), Portugal (PT), Qatar (QA), Romania (RO), Russia (RU), Saudi Arabia (SA), Serbia (RS), Slovakia (SK), Slovenia (ST), South Africa (ZA), Spain (ES), Sweden (SE), Switzerland (CH), Tunisia (TN), Türkiye (TR), Ukraine (UA), United Arab Emirates (AE) and United Kingdom (GB)
Data resides in **Asia Pacific** for the following locations:
active-directory-b2c Enable Authentication Angular Spa App Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-angular-spa-app-options.md
Previously updated : 03/09/2023 Last updated : 03/23/2023
active-directory-b2c Enable Authentication Angular Spa App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-angular-spa-app.md
Previously updated : 03/09/2023 Last updated : 03/23/2023
active-directory-b2c Enable Authentication Ios App Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-ios-app-options.md
-+ Last updated 07/29/2021
active-directory-b2c Enable Authentication Ios App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-ios-app.md
- Previously updated : 07/29/2021+ Last updated : 03/24/2023
Review the prerequisites and integration instructions in [Configure authenticati
## Create an iOS Swift app project
-If you don't already have an iOS Swift application, set up a new project by doing the following:
+If you don't already have an iOS Swift application, set up a new project by doing the following steps:
1. Open [Xcode](https://developer.apple.com/xcode/), and then select **File** > **New** > **Project**.
1. For iOS apps, select **iOS** > **App**, and then select **Next**.
If you don't already have an iOS Swift application, set up a new project by doin
## Step 1: Install the MSAL library
-1. Use [CocoaPods](https://cocoapods.org/) to install the MSAL library. In the same folder as your project's *.xcodeproj* file, if the *podfile* file doesn't exist, create an empty file called *podfile*. Add the following code to the *podfile* file:
+1. Use [CocoaPods](https://cocoapods.org/) to install the MSAL library. In the same folder as your project's *.xcodeproj* file, if the *podfile* file doesn't exist, create an empty file and name it *podfile*. Add the following code to the *podfile* file:
```
use_frameworks!
```
The [sample code](configure-authentication-sample-ios-app.md#step-4-get-the-ios-
- Contains information about your Azure AD B2C identity provider. The app uses this information to establish a trust relationship with Azure AD B2C.
- Contains the authentication code to authenticate users, acquire tokens, and validate them.
-Choose a `UIViewController` where users will authenticate. In your `UIViewController`, merge the code with the [code that's provided in GitHub](https://github.com/Azure-Samples/active-directory-b2c-ios-swift-native-msal/blob/vNext/MSALiOS/ViewController.swift).
+Choose a `UIViewController` where users authenticate. In your `UIViewController`, merge the code with the [code that's provided in GitHub](https://github.com/Azure-Samples/active-directory-b2c-ios-swift-native-msal/blob/vNext/MSALiOS/ViewController.swift).
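To make the trust relationship concrete, here's a minimal Swift sketch of creating the MSAL application object with a B2C authority. The tenant name, client ID, and user-flow policy below are hypothetical placeholders, not values from the sample:

```swift
import MSAL

// Hypothetical placeholder values; substitute your own tenant,
// app registration (client) ID, and user-flow policy name.
let kTenantName = "contoso.onmicrosoft.com"
let kClientID = "11111111-2222-3333-4444-555555555555"
let kSignupOrSigninPolicy = "B2C_1_signupsignin"

func makeB2CApplication() throws -> MSALPublicClientApplication {
    // B2C authorities take the form https://<tenant>.b2clogin.com/<tenant>/<policy>.
    let authorityURL = URL(string: "https://contoso.b2clogin.com/\(kTenantName)/\(kSignupOrSigninPolicy)")!
    let authority = try MSALB2CAuthority(url: authorityURL)

    let config = MSALPublicClientApplicationConfig(clientId: kClientID,
                                                   redirectUri: nil,
                                                   authority: authority)
    // B2C authorities aren't known to MSAL by default, so declare them explicitly.
    config.knownAuthorities = [authority]
    return try MSALPublicClientApplication(configuration: config)
}
```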
## Step 4: Configure your iOS Swift app
Authorization: Bearer <access-token>
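As a rough Swift sketch of sending that header from the app (`webAPIUrl` and `accessToken` are placeholder names, not values from the article):

```swift
import Foundation

// Minimal sketch: attach the acquired access token as a bearer header.
func callWebAPI(webAPIUrl: URL, accessToken: String) {
    var request = URLRequest(url: webAPIUrl)
    request.setValue("Bearer \(accessToken)", forHTTPHeaderField: "Authorization")

    URLSession.shared.dataTask(with: request) { data, _, _ in
        // Handle the web API response (or error) here.
        if let data = data {
            print(String(decoding: data, as: UTF8.self))
        }
    }.resume()
}
```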
When users [authenticate interactively](#step-62-start-an-interactive-authorization-request), the app gets an access token in the `acquireToken` closure. For subsequent web API calls, use the acquire token silent (`acquireTokenSilent`) method, as described in this section.
-The `acquireTokenSilent` method does the following:
+The `acquireTokenSilent` method does the following actions:
1. It attempts to fetch an access token with the requested scopes from the token cache. If the token is present and hasn't expired, the token is returned.
1. If the token isn't present in the token cache or it has expired, the MSAL library attempts to use the refresh token to acquire a new access token.
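A minimal Swift sketch of this silent-then-interactive pattern follows; `application`, `account`, `scopes`, and `webviewParameters` are placeholder names assumed to come from your app's earlier interactive sign-in:

```swift
import MSAL

// Minimal sketch of the silent-then-interactive pattern described above.
func acquireTokenSilently(application: MSALPublicClientApplication,
                          account: MSALAccount,
                          scopes: [String],
                          webviewParameters: MSALWebviewParameters) {
    let silentParameters = MSALSilentTokenParameters(scopes: scopes, account: account)
    application.acquireTokenSilent(with: silentParameters) { result, error in
        if let result = result {
            // A cached (or silently refreshed) access token was returned.
            print("Access token: \(result.accessToken)")
            return
        }
        if let nsError = error as NSError?,
           nsError.domain == MSALErrorDomain,
           nsError.code == MSALError.interactionRequired.rawValue {
            // Cache lookup and refresh both failed; fall back to interactive sign-in.
            let interactiveParameters = MSALInteractiveTokenParameters(
                scopes: scopes, webviewParameters: webviewParameters)
            application.acquireToken(with: interactiveParameters) { result, _ in
                // Handle the interactive result (or error) here.
            }
        }
    }
}
```

Errors other than `interactionRequired` (for example, network failures) are generally better surfaced to the user than retried interactively.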
active-directory-b2c Enable Authentication Spa App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-spa-app.md
- Previously updated : 06/25/2021+ Last updated : 03/24/2023
app.listen(port, () => {
## Step 4: Create the SPA user interface
-Add the SAP app `https://docsupdatetracker.net/index.html` file. This file implements a user interface that's built with a Bootstrap framework, and it imports script files for configuration, authentication, and web API calls.
+Add the SPA app `https://docsupdatetracker.net/index.html` file. This file implements a user interface that's built with a Bootstrap framework, and it imports script files for configuration, authentication, and web API calls.
The resources referenced by the *https://docsupdatetracker.net/index.html* file are detailed in the following table:
active-directory-b2c Localization String Ids https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/localization-string-ids.md
The following example shows the use of some of the user interface elements in th
<LocalizedString ElementType="UxElement" StringId="error_phone_throttled">You hit the limit on the number of call attempts. Try again shortly.</LocalizedString>
<LocalizedString ElementType="UxElement" StringId="error_throttled">You hit the limit on the number of verification attempts. Try again shortly.</LocalizedString>
<LocalizedString ElementType="UxElement" StringId="error_incorrect_code">The verification code you have entered does not match our records. Please try again, or request a new code.</LocalizedString>
- <LocalizedString ElementType="UxElement" StringId="countryList">{"DEFAULT":"Country/Region","AF":"Afghanistan","AX":"Åland Islands","AL":"Albania","DZ":"Algeria","AS":"American Samoa","AD":"Andorra","AO":"Angola","AI":"Anguilla","AQ":"Antarctica","AG":"Antigua and Barbuda","AR":"Argentina","AM":"Armenia","AW":"Aruba","AU":"Australia","AT":"Austria","AZ":"Azerbaijan","BS":"Bahamas","BH":"Bahrain","BD":"Bangladesh","BB":"Barbados","BY":"Belarus","BE":"Belgium","BZ":"Belize","BJ":"Benin","BM":"Bermuda","BT":"Bhutan","BO":"Bolivia","BQ":"Bonaire","BA":"Bosnia and Herzegovina","BW":"Botswana","BV":"Bouvet Island","BR":"Brazil","IO":"British Indian Ocean Territory","VG":"British Virgin Islands","BN":"Brunei","BG":"Bulgaria","BF":"Burkina Faso","BI":"Burundi","CV":"Cabo Verde","KH":"Cambodia","CM":"Cameroon","CA":"Canada","KY":"Cayman Islands","CF":"Central African Republic","TD":"Chad","CL":"Chile","CN":"China","CX":"Christmas Island","CC":"Cocos (Keeling) Islands","CO":"Colombia","KM":"Comoros","CG":"Congo","CD":"Congo (DRC)","CK":"Cook Islands","CR":"Costa Rica","CI":"Côte d'Ivoire","HR":"Croatia","CU":"Cuba","CW":"Curaçao","CY":"Cyprus","CZ":"Czech Republic","DK":"Denmark","DJ":"Djibouti","DM":"Dominica","DO":"Dominican Republic","EC":"Ecuador","EG":"Egypt","SV":"El Salvador","GQ":"Equatorial Guinea","ER":"Eritrea","EE":"Estonia","ET":"Ethiopia","FK":"Falkland Islands","FO":"Faroe Islands","FJ":"Fiji","FI":"Finland","FR":"France","GF":"French Guiana","PF":"French Polynesia","TF":"French Southern Territories","GA":"Gabon","GM":"Gambia","GE":"Georgia","DE":"Germany","GH":"Ghana","GI":"Gibraltar","GR":"Greece","GL":"Greenland","GD":"Grenada","GP":"Guadeloupe","GU":"Guam","GT":"Guatemala","GG":"Guernsey","GN":"Guinea","GW":"Guinea-Bissau","GY":"Guyana","HT":"Haiti","HM":"Heard Island and McDonald Islands","HN":"Honduras","HK":"Hong Kong SAR","HU":"Hungary","IS":"Iceland","IN":"India","ID":"Indonesia","IR":"Iran","IQ":"Iraq","IE":"Ireland","IM":"Isle of Man","IL":"Israel","IT":"Italy","JM":"Jamaica","JP":"Japan","JE":"Jersey","JO":"Jordan","KZ":"Kazakhstan","KE":"Kenya","KI":"Kiribati","KR":"Korea","KW":"Kuwait","KG":"Kyrgyzstan","LA":"Laos","LV":"Latvia","LB":"Lebanon","LS":"Lesotho","LR":"Liberia","LY":"Libya","LI":"Liechtenstein","LT":"Lithuania","LU":"Luxembourg","MO":"Macao SAR","MK":"North Macedonia","MG":"Madagascar","MW":"Malawi","MY":"Malaysia","MV":"Maldives","ML":"Mali","MT":"Malta","MH":"Marshall Islands","MQ":"Martinique","MR":"Mauritania","MU":"Mauritius","YT":"Mayotte","MX":"Mexico","FM":"Micronesia","MD":"Moldova","MC":"Monaco","MN":"Mongolia","ME":"Montenegro","MS":"Montserrat","MA":"Morocco","MZ":"Mozambique","MM":"Myanmar","NA":"Namibia","NR":"Nauru","NP":"Nepal","NL":"Netherlands","NC":"New Caledonia","NZ":"New Zealand","NI":"Nicaragua","NE":"Niger","NG":"Nigeria","NU":"Niue","NF":"Norfolk Island","KP":"North Korea","MP":"Northern Mariana Islands","NO":"Norway","OM":"Oman","PK":"Pakistan","PW":"Palau","PS":"Palestinian Authority","PA":"Panama","PG":"Papua New Guinea","PY":"Paraguay","PE":"Peru","PH":"Philippines","PN":"Pitcairn Islands","PL":"Poland","PT":"Portugal","PR":"Puerto Rico","QA":"Qatar","RE":"Réunion","RO":"Romania","RU":"Russia","RW":"Rwanda","BL":"Saint Barthélemy","KN":"Saint Kitts and Nevis","LC":"Saint Lucia","MF":"Saint Martin","PM":"Saint Pierre and Miquelon","VC":"Saint Vincent and the Grenadines","WS":"Samoa","SM":"San Marino","ST":"São Tomé and Príncipe","SA":"Saudi Arabia","SN":"Senegal","RS":"Serbia","SC":"Seychelles","SL":"Sierra 
Leone","SG":"Singapore","SX":"Sint Maarten","SK":"Slovakia","SI":"Slovenia","SB":"Solomon Islands","SO":"Somalia","ZA":"South Africa","GS":"South Georgia and South Sandwich Islands","SS":"South Sudan","ES":"Spain","LK":"Sri Lanka","SH":"St Helena, Ascension, Tristan da Cunha","SD":"Sudan","SR":"Suriname","SJ":"Svalbard","SZ":"Swaziland","SE":"Sweden","CH":"Switzerland","SY":"Syria","TW":"Taiwan","TJ":"Tajikistan","TZ":"Tanzania","TH":"Thailand","TL":"Timor-Leste","TG":"Togo","TK":"Tokelau","TO":"Tonga","TT":"Trinidad and Tobago","TN":"Tunisia","TR":"Turkey","TM":"Turkmenistan","TC":"Turks and Caicos Islands","TV":"Tuvalu","UM":"U.S. Outlying Islands","VI":"U.S. Virgin Islands","UG":"Uganda","UA":"Ukraine","AE":"United Arab Emirates","GB":"United Kingdom","US":"United States","UY":"Uruguay","UZ":"Uzbekistan","VU":"Vanuatu","VA":"Vatican City","VE":"Venezuela","VN":"Vietnam","WF":"Wallis and Futuna","YE":"Yemen","ZM":"Zambia","ZW":"Zimbabwe"}</LocalizedString>
+ <LocalizedString ElementType="UxElement" StringId="countryList">{"DEFAULT":"Country/Region","AF":"Afghanistan","AX":"Åland Islands","AL":"Albania","DZ":"Algeria","AS":"American Samoa","AD":"Andorra","AO":"Angola","AI":"Anguilla","AQ":"Antarctica","AG":"Antigua and Barbuda","AR":"Argentina","AM":"Armenia","AW":"Aruba","AU":"Australia","AT":"Austria","AZ":"Azerbaijan","BS":"Bahamas","BH":"Bahrain","BD":"Bangladesh","BB":"Barbados","BY":"Belarus","BE":"Belgium","BZ":"Belize","BJ":"Benin","BM":"Bermuda","BT":"Bhutan","BO":"Bolivia","BQ":"Bonaire","BA":"Bosnia and Herzegovina","BW":"Botswana","BV":"Bouvet Island","BR":"Brazil","IO":"British Indian Ocean Territory","VG":"British Virgin Islands","BN":"Brunei","BG":"Bulgaria","BF":"Burkina Faso","BI":"Burundi","CV":"Cabo Verde","KH":"Cambodia","CM":"Cameroon","CA":"Canada","KY":"Cayman Islands","CF":"Central African Republic","TD":"Chad","CL":"Chile","CN":"China","CX":"Christmas Island","CC":"Cocos (Keeling) Islands","CO":"Colombia","KM":"Comoros","CG":"Congo","CD":"Congo (DRC)","CK":"Cook Islands","CR":"Costa Rica","CI":"Côte d'Ivoire","HR":"Croatia","CU":"Cuba","CW":"Curaçao","CY":"Cyprus","CZ":"Czech Republic","DK":"Denmark","DJ":"Djibouti","DM":"Dominica","DO":"Dominican Republic","EC":"Ecuador","EG":"Egypt","SV":"El Salvador","GQ":"Equatorial Guinea","ER":"Eritrea","EE":"Estonia","ET":"Ethiopia","FK":"Falkland Islands","FO":"Faroe Islands","FJ":"Fiji","FI":"Finland","FR":"France","GF":"French Guiana","PF":"French Polynesia","TF":"French Southern Territories","GA":"Gabon","GM":"Gambia","GE":"Georgia","DE":"Germany","GH":"Ghana","GI":"Gibraltar","GR":"Greece","GL":"Greenland","GD":"Grenada","GP":"Guadeloupe","GU":"Guam","GT":"Guatemala","GG":"Guernsey","GN":"Guinea","GW":"Guinea-Bissau","GY":"Guyana","HT":"Haiti","HM":"Heard Island and McDonald Islands","HN":"Honduras","HK":"Hong Kong SAR","HU":"Hungary","IS":"Iceland","IN":"India","ID":"Indonesia","IR":"Iran","IQ":"Iraq","IE":"Ireland","IM":"Isle of Man","IL":"Israel","IT":"Italy","JM":"Jamaica","JP":"Japan","JE":"Jersey","JO":"Jordan","KZ":"Kazakhstan","KE":"Kenya","KI":"Kiribati","KR":"Korea","KW":"Kuwait","KG":"Kyrgyzstan","LA":"Laos","LV":"Latvia","LB":"Lebanon","LS":"Lesotho","LR":"Liberia","LY":"Libya","LI":"Liechtenstein","LT":"Lithuania","LU":"Luxembourg","MO":"Macao SAR","MK":"North Macedonia","MG":"Madagascar","MW":"Malawi","MY":"Malaysia","MV":"Maldives","ML":"Mali","MT":"Malta","MH":"Marshall Islands","MQ":"Martinique","MR":"Mauritania","MU":"Mauritius","YT":"Mayotte","MX":"Mexico","FM":"Micronesia","MD":"Moldova","MC":"Monaco","MN":"Mongolia","ME":"Montenegro","MS":"Montserrat","MA":"Morocco","MZ":"Mozambique","MM":"Myanmar","NA":"Namibia","NR":"Nauru","NP":"Nepal","NL":"Netherlands","NC":"New Caledonia","NZ":"New Zealand","NI":"Nicaragua","NE":"Niger","NG":"Nigeria","NU":"Niue","NF":"Norfolk Island","KP":"North Korea","MP":"Northern Mariana Islands","NO":"Norway","OM":"Oman","PK":"Pakistan","PW":"Palau","PS":"Palestinian Authority","PA":"Panama","PG":"Papua New Guinea","PY":"Paraguay","PE":"Peru","PH":"Philippines","PN":"Pitcairn Islands","PL":"Poland","PT":"Portugal","PR":"Puerto Rico","QA":"Qatar","RE":"Réunion","RO":"Romania","RU":"Russia","RW":"Rwanda","BL":"Saint Barthélemy","KN":"Saint Kitts and Nevis","LC":"Saint Lucia","MF":"Saint Martin","PM":"Saint Pierre and Miquelon","VC":"Saint Vincent and the Grenadines","WS":"Samoa","SM":"San Marino","ST":"São Tomé and Príncipe","SA":"Saudi Arabia","SN":"Senegal","RS":"Serbia","SC":"Seychelles","SL":"Sierra 
Leone","SG":"Singapore","SX":"Sint Maarten","SK":"Slovakia","SI":"Slovenia","SB":"Solomon Islands","SO":"Somalia","ZA":"South Africa","GS":"South Georgia and South Sandwich Islands","SS":"South Sudan","ES":"Spain","LK":"Sri Lanka","SH":"St Helena, Ascension, Tristan da Cunha","SD":"Sudan","SR":"Suriname","SJ":"Svalbard","SZ":"Swaziland","SE":"Sweden","CH":"Switzerland","SY":"Syria","TW":"Taiwan","TJ":"Tajikistan","TZ":"Tanzania","TH":"Thailand","TL":"Timor-Leste","TG":"Togo","TK":"Tokelau","TO":"Tonga","TT":"Trinidad and Tobago","TN":"Tunisia","TR":"Türkiye","TM":"Turkmenistan","TC":"Turks and Caicos Islands","TV":"Tuvalu","UM":"U.S. Outlying Islands","VI":"U.S. Virgin Islands","UG":"Uganda","UA":"Ukraine","AE":"United Arab Emirates","GB":"United Kingdom","US":"United States","UY":"Uruguay","UZ":"Uzbekistan","VU":"Vanuatu","VA":"Vatican City","VE":"Venezuela","VN":"Vietnam","WF":"Wallis and Futuna","YE":"Yemen","ZM":"Zambia","ZW":"Zimbabwe"}</LocalizedString>
<LocalizedString ElementType="UxElement" StringId="error_448">The phone number you provided is unreachable.</LocalizedString> <LocalizedString ElementType="UxElement" StringId="error_449">User has exceeded the number of retry attempts.</LocalizedString> <LocalizedString ElementType="UxElement" StringId="verification_code_input_placeholder_text">Verification code</LocalizedString>
active-directory Customize Application Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/customize-application-attributes.md
Previously updated : 03/23/2023 Last updated : 03/27/2023
There are four different mapping types supported:
- **Direct** – the target attribute is populated with the value of an attribute of the linked object in Azure AD.
- **Constant** – the target attribute is populated with a specific string you specified.
-- **Expression** - the target attribute is populated based on the result of a script-like expression.
- For more information, see [Writing Expressions for Attribute-Mappings in Azure Active Directory](../app-provisioning/functions-for-customizing-application-data.md).
+- **Expression** - the target attribute is populated based on the result of a script-like expression. For more information about expressions, see [Writing Expressions for Attribute-Mappings in Azure Active Directory](../app-provisioning/functions-for-customizing-application-data.md).
- **None** - the target attribute is left unmodified. However, if the target attribute is ever empty, it's populated with the Default value that you specify.

Along with these four basic types, custom attribute-mappings support the concept of an optional **default** value assignment. The default value assignment ensures that a target attribute is populated with a value if there's not a value in Azure AD or on the target object. The most common configuration is to leave this blank.

### Understanding attribute-mapping properties
-In the previous section, you were already introduced to the attribute-mapping type property.
-Along with this property, attribute-mappings also support the following attributes:
+In the previous section, you were introduced to the attribute-mapping type property.
+Along with this property, attribute-mappings also support the following attributes:
- **Source attribute** - The user attribute from the source system (example: Azure Active Directory).
- **Target attribute** – The user attribute in the target system (example: ServiceNow).
-- **Default value if null (optional)** - The value that is passed to the target system if the source attribute is null. This value is only provisioned when a user is created. The "default value when null" won't be provisioned when updating an existing user. If for example, you provision all existing users in the target system with a particular Job Title (when it's null in the source system), you'll use the following [expression](../app-provisioning/functions-for-customizing-application-data.md): Switch(IsPresent([jobTitle]), "DefaultValue", "True", [jobTitle]). Make sure to replace the "Default Value" with the value to provision when null in the source system.
+- **Default value if null (optional)** - The value that is passed to the target system if the source attribute is null. This value is only provisioned when a user is created. The "default value when null" isn't provisioned when updating an existing user. For example, add a default value for job title, when creating a user, with the expression: `Switch(IsPresent([jobTitle]), "DefaultValue", "True", [jobTitle])`. For more information about expressions, see [Reference for writing expressions for attribute mappings in Azure Active Directory](../app-provisioning/functions-for-customizing-application-data.md).
- **Match objects using this attribute** – Whether this mapping should be used to uniquely identify users between the source and target systems. It's typically set on the userPrincipalName or mail attribute in Azure AD, which is typically mapped to a username field in a target application.
-- **Matching precedence** – Multiple matching attributes can be set. When there are multiple, they're evaluated in the order defined by this field. As soon as a match is found, no further matching attributes are evaluated. While you can set as many matching attributes as you would like, consider whether the attributes you're using as matching attributes are truly unique and need to be matching attributes. Generally customers have 1 or 2 matching attributes in their configuration.
+- **Matching precedence** – Multiple matching attributes can be set. When there are multiple, they're evaluated in the order defined by this field. As soon as a match is found, no further matching attributes are evaluated. While you can set as many matching attributes as you would like, consider whether the attributes you're using as matching attributes are truly unique and need to be matching attributes. Generally customers have one or two matching attributes in their configuration.
- **Apply this mapping**
  - **Always** – Apply this mapping on both user creation and update actions.
  - **Only during creation** - Apply this mapping only on user creation actions.
Along with this property, attribute-mappings also support the following attribut
The Azure AD provisioning service can be deployed in both "green field" scenarios (where users don't exist in the target system) and "brownfield" scenarios (where users already exist in the target system). To support both scenarios, the provisioning service uses the concept of matching attributes. Matching attributes allow you to determine how to uniquely identify a user in the source and match the user in the target. As part of planning your deployment, identify the attribute that can be used to uniquely identify a user in the source and target systems. Things to note:
- **Matching attributes should be unique:** Customers often use attributes such as userPrincipalName, mail, or object ID as the matching attribute.
-- **Multiple attributes can be used as matching attributes:** You can define multiple attributes to be evaluated when matching users and the order in which they're evaluated (defined as matching precedence in the UI). If for example, you define three attributes as matching attributes, and a user is uniquely matched after evaluating the first two attributes, the service won't evaluate the third attribute. The service will evaluate matching attributes in the order specified and stop evaluating when a match is found.
+- **Multiple attributes can be used as matching attributes:** You can define multiple attributes to be evaluated when matching users and the order in which they're evaluated (defined as matching precedence in the UI). If for example, you define three attributes as matching attributes, and a user is uniquely matched after evaluating the first two attributes, the service won't evaluate the third attribute. The service evaluates matching attributes in the order specified and stops evaluating when a match is found.
- **The value in the source and the target don't have to match exactly:** The value in the target can be a function of the value in the source. So, one could have an emailAddress attribute in the source and the userPrincipalName in the target, and match by a function of the emailAddress attribute that replaces some characters with some constant value.
- **Matching based on a combination of attributes isn't supported:** Most applications don't support querying based on two properties. Therefore, it's not possible to match based on a combination of attributes. It's possible to evaluate single properties one after another.
- **All users must have a value for at least one matching attribute:** If you define one matching attribute, all users must have a value for that attribute in the source system. If for example, you define userPrincipalName as the matching attribute, all users must have a userPrincipalName. If you define multiple matching attributes (for example, both extensionAttribute1 and mail), not all users have to have the same matching attribute. One user could have an extensionAttribute1 but not mail while another user could have mail but no extensionAttribute1.
Applications and systems that support customization of the attribute list includ
- ServiceNow
- Workday to Active Directory / Workday to Azure Active Directory
- SuccessFactors to Active Directory / SuccessFactors to Azure Active Directory
-- Azure Active Directory ([Azure AD Graph API default attributes](/previous-versions/azure/ad/graph/api/entity-and-complex-type-reference#user-entity) and custom directory extensions are supported). Learn more about [creating extensions](./user-provisioning-sync-attributes-for-mapping.md) and [known limitations](./known-issues.md).
+- Azure Active Directory ([Azure AD Graph API default attributes](/previous-versions/azure/ad/graph/api/entity-and-complex-type-reference#user-entity) and custom directory extensions are supported). For more information about creating extensions, see [Syncing extension attributes for Azure Active Directory Application Provisioning](./user-provisioning-sync-attributes-for-mapping.md) and [Known issues for provisioning in Azure Active Directory](./known-issues.md).
- Apps that support [SCIM 2.0](https://tools.ietf.org/html/rfc7643)
-- For Azure Active Directory writeback to Workday or SuccessFactors, it's supported to update relevant metadata for supported attributes (XPATH and JSONPath), but isn't supported to add new Workday or SuccessFactors attributes beyond those included in the default schema
+- Azure Active Directory supports writeback to Workday or SuccessFactors for XPATH and JSONPath metadata. Azure Active Directory doesn't support new Workday or SuccessFactors attributes not included in the default schema.
> [!NOTE]
The SCIM RFC defines a core user and group schema, while also allowing for exten
4. Select **Edit attribute list for AppName**.
5. At the bottom of the attribute list, enter information about the custom attribute in the fields provided. Then select **Add Attribute**.
-For SCIM applications, the attribute name must follow the pattern shown in the example below. The "CustomExtensionName" and "CustomAttribute" can be customized per your application's requirements, for example: urn:ietf:params:scim:schemas:extension:CustomExtensionName:2.0:User:CustomAttribute
+For SCIM applications, the attribute name must follow the pattern shown in the example. The "CustomExtensionName" and "CustomAttribute" can be customized per your application's requirements, for example: urn:ietf:params:scim:schemas:extension:CustomExtensionName:2.0:User:CustomAttribute
These instructions are only applicable to SCIM-enabled applications. Applications such as ServiceNow and Salesforce aren't integrated with Azure AD using SCIM, and therefore they don't require this specific namespace when adding a custom attribute.
Custom attributes can't be referential attributes, multi-value or complex-typed
## Provisioning a role to a SCIM app
-Use the steps below to provision roles for a user to your application. Note that the description below is specific to custom SCIM applications. For gallery applications such as Salesforce and ServiceNow, use the pre-defined role mappings. The bullets below describe how to transform the AppRoleAssignments attribute to the format your application expects.
+Use the steps in the example to provision roles for a user to your application. Note that the description is specific to custom SCIM applications. For gallery applications such as Salesforce and ServiceNow, use the predefined role mappings. The bullets describe how to transform the AppRoleAssignments attribute to the format your application expects.
- Mapping an appRoleAssignment in Azure AD to a role in your application requires that you transform the attribute using an [expression](../app-provisioning/functions-for-customizing-application-data.md). The appRoleAssignment attribute **shouldn't be mapped directly** to a role attribute without using an expression to parse the role details.
The request formats in the PATCH and POST differ. To ensure that POST and PATCH
![Add roles](./media/customize-application-attributes/add-roles.png)<br>
- Then use the AppRoleAssignmentsComplex expression to map to the custom role attribute as shown in the image below:
+ Then use the AppRoleAssignmentsComplex expression to map to the custom role attribute as shown in the image:
![Add AppRoleAssignmentsComplex](./media/customize-application-attributes/edit-attribute-approleassignmentscomplex.png)<br>
- **Things to consider**
The request formats in the PATCH and POST differ. To ensure that POST and PATCH
## Provisioning a multi-value attribute
-Certain attributes such as phoneNumbers and emails are multi-value attributes where you may need to specify different types of phone numbers or emails. Use the expression below for multi-value attributes. It allows you to specify the attribute type and map that to the corresponding Azure AD user attribute for the value.
+Certain attributes such as phoneNumbers and emails are multi-value attributes where you may need to specify different types of phone numbers or emails. Use the expression for multi-value attributes. It allows you to specify the attribute type and map that to the corresponding Azure AD user attribute for the value.
* phoneNumbers[type eq "work"].value
* phoneNumbers[type eq "mobile"].value
Selecting this option will effectively force a resynchronization of all users wh
- The attribute IsSoftDeleted is often part of the default mappings for an application. IsSoftDeleted can be true in one of four scenarios (the user is out of scope due to being unassigned from the application, the user is out of scope due to not meeting a scoping filter, the user has been soft deleted in Azure AD, or the property AccountEnabled is set to false on the user). It's not recommended to remove the IsSoftDeleted attribute from your attribute mappings.
- The Azure AD provisioning service doesn't support provisioning null values.
- The primary key, typically "ID", shouldn't be included as a target attribute in your attribute mappings.
-- The role attribute typically needs to be mapped using an expression, rather than a direct mapping. See section above for more details on role mapping.
+- The role attribute typically needs to be mapped using an expression, rather than a direct mapping. For more information about role mapping, see [Provisioning a role to a SCIM app](#provisioning-a-role-to-a-scim-app).
- While you can disable groups from your mappings, disabling users isn't supported.

## Next steps
active-directory Concept Authentication Passwordless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-passwordless.md
The following providers offer FIDO2 security keys of different form factors that
| Fortinet | ![n] | ![y]| ![n]| ![n]| ![n] | https://www.fortinet.com/ |
| Giesecke + Devrient (G+D) | ![y] | ![y]| ![y]| ![y]| ![n] | https://www.gi-de.com/en/identities/enterprise-security/hardware-based-authentication |
| GoTrustID Inc. | ![n] | ![y]| ![y]| ![y]| ![n] | https://www.gotrustid.com/idem-key |
-| HID | ![n] | ![y]| ![y]| ![n]| ![n] | https://www.hidglobal.com/contact-us |
+| HID | ![n] | ![y]| ![y]| ![n]| ![n] | https://www.hidglobal.com/products/crescendo-key |
| Hypersecu | ![n] | ![y]| ![n]| ![n]| ![n] | https://www.hypersecu.com/hyperfido |
+| Hypr | ![y] | ![y]| ![n]| ![y]| ![n] | https://www.hypr.com/true-passwordless-mfa |
| Identiv | ![n] | ![y]| ![y]| ![n]| ![n] | https://www.identiv.com/products/logical-access-control/utrust-fido2-security-keys/nfc |
| IDmelon Technologies Inc. | ![y] | ![y]| ![y]| ![y]| ![n] | https://www.idmelon.com/#idmelon |
| Kensington | ![y] | ![y]| ![n]| ![n]| ![n] | https://www.kensington.com/solutions/product-category/why-biometrics/ |
The following providers offer FIDO2 security keys of different form factors that
| Thales Group | ![n] | ![y]| ![y]| ![n]| ![y] | https://cpl.thalesgroup.com/access-management/authenticators/fido-devices |
| Thetis | ![y] | ![y]| ![y]| ![y]| ![n] | https://thetis.io/collections/fido2 |
| Token2 Switzerland | ![y] | ![y]| ![y]| ![n]| ![n] | https://www.token2.swiss/shop/product/token2-t2f2-alu-fido2-u2f-and-totp-security-key |
+| Token Ring | ![y] | ![n]| ![y]| ![n]| ![n] | https://www.tokenring.com/ |
| TrustKey Solutions | ![y] | ![y]| ![n]| ![n]| ![n] | https://www.trustkeysolutions.com/security-keys/ |
| VinCSS | ![n] | ![y]| ![n]| ![n]| ![n] | https://passwordless.vincss.net |
+| WiSECURE Technologies | ![n] | ![y]| ![n]| ![n]| ![n] | https://wisecure-tech.com/en-us/zero-trust/fido/authtron |
| Yubico | ![y] | ![y]| ![y]| ![n]| ![y] | https://www.yubico.com/solutions/passwordless/ |

<!--Image references-->
[y]: ./media/fido2-compatibility/yes.png
[n]: ./media/fido2-compatibility/no.png
active-directory Concept Fido2 Hardware Vendor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-fido2-hardware-vendor.md
The following table lists partners who are Microsoft-compatible FIDO2 security k
| Fortinet | ![n] | ![y]| ![n]| ![n]| ![n] | https://www.fortinet.com/ |
| Giesecke + Devrient (G+D) | ![y] | ![y]| ![y]| ![y]| ![n] | https://www.gi-de.com/en/identities/enterprise-security/hardware-based-authentication |
| GoTrustID Inc. | ![n] | ![y]| ![y]| ![y]| ![n] | https://www.gotrustid.com/idem-key |
-| HID | ![n] | ![y]| ![y]| ![n]| ![n] | https://www.hidglobal.com/contact-us |
+| HID | ![n] | ![y]| ![y]| ![n]| ![n] | https://www.hidglobal.com/products/crescendo-key |
| Hypersecu | ![n] | ![y]| ![n]| ![n]| ![n] | https://www.hypersecu.com/hyperfido |
+| Hypr | ![y] | ![y]| ![n]| ![y]| ![n] | https://www.hypr.com/true-passwordless-mfa |
+| Identiv | ![n] | ![y]| ![y]| ![n]| ![n] | https://www.identiv.com/products/logical-access-control/utrust-fido2-security-keys/nfc |
| IDmelon Technologies Inc. | ![y] | ![y]| ![y]| ![y]| ![n] | https://www.idmelon.com/#idmelon |
| Kensington | ![y] | ![y]| ![n]| ![n]| ![n] | https://www.kensington.com/solutions/product-category/why-biometrics/ |
| KONA I | ![y] | ![n]| ![y]| ![y]| ![n] | https://konai.com/business/security/fido |
+| Movenda | ![y] | ![n]| ![y]| ![y]| ![n] | https://www.movenda.com/en/authentication/fido2/overview |
| NeoWave | ![n] | ![y]| ![y]| ![n]| ![n] | https://neowave.fr/en/products/fido-range/ |
| Nymi | ![y] | ![n]| ![y]| ![n]| ![n] | https://www.nymi.com/nymi-band |
| Octatco | ![y] | ![y]| ![n]| ![n]| ![n] | https://octatco.com/ |
The following table lists partners who are Microsoft-compatible FIDO2 security k
| Thales Group | ![n] | ![y]| ![y]| ![n]| ![y] | https://cpl.thalesgroup.com/access-management/authenticators/fido-devices |
| Thetis | ![y] | ![y]| ![y]| ![y]| ![n] | https://thetis.io/collections/fido2 |
| Token2 Switzerland | ![y] | ![y]| ![y]| ![n]| ![n] | https://www.token2.swiss/shop/product/token2-t2f2-alu-fido2-u2f-and-totp-security-key |
+| Token Ring | ![y] | ![n]| ![y]| ![n]| ![n] | https://www.tokenring.com/ |
| TrustKey Solutions | ![y] | ![y]| ![n]| ![n]| ![n] | https://www.trustkeysolutions.com/security-keys/ |
| VinCSS | ![n] | ![y]| ![n]| ![n]| ![n] | https://passwordless.vincss.net |
+| WiSECURE Technologies | ![n] | ![y]| ![n]| ![n]| ![n] | https://wisecure-tech.com/en-us/zero-trust/fido/authtron |
| Yubico | ![y] | ![y]| ![y]| ![n]| ![y] | https://www.yubico.com/solutions/passwordless/ |

<!--Image references-->
[y]: ./media/fido2-compatibility/yes.png
[n]: ./media/fido2-compatibility/no.png
active-directory Howto Mfa Getstarted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-getstarted.md
For more information, and additional Azure AD Multi-Factor Authentication report
### Troubleshoot Azure AD Multi-Factor Authentication

See [Troubleshooting Azure AD Multi-Factor Authentication](https://support.microsoft.com/help/2937344/troubleshooting-azure-multi-factor-authentication-issues) for common issues.
+## Guided walkthrough
+
+For a guided walkthrough of many of the recommendations in this article, see the [Microsoft 365 Configure multifactor authentication guided walkthrough](https://go.microsoft.com/fwlink/?linkid=2221401).
+ ## Next steps
[Deploy other identity features](../fundamentals/active-directory-deployment-plans.md)
active-directory Howto Sspr Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-sspr-deployment.md
For more information about pricing, see [Azure Active Directory pricing](https:/
* An account with Global Administrator privileges.
+### Guided walkthrough
+
+For a guided walkthrough of many of the recommendations in this article, see the [Plan your self-service password reset deployment](https://go.microsoft.com/fwlink/?linkid=2221600) guide.
### Training resources
active-directory Concept Continuous Access Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-continuous-access-evaluation.md
Networks and network services used by clients connecting to identity and resourc
CAE only has insight into [IP-based named locations](../conditional-access/location-condition.md#ipv4-and-ipv6-address-ranges). CAE doesn't have insight into other location conditions like [MFA trusted IPs](../authentication/howto-mfa-mfasettings.md#trusted-ips) or country-based locations. When a user comes from an MFA trusted IP, trusted location that includes MFA Trusted IPs, or country location, CAE won't be enforced after that user moves to a different location. In those cases, Azure AD will issue a one-hour access token without instant IP enforcement check. > [!IMPORTANT]
-> If you want your location policies to be enforced in real time by continuous access evaluation, use only the [IP based Conditional Access location condition](../conditional-access/location-condition.md) and configure all IP addresses, **including both IPv4 and IPv6**, that can be seen by your identity provider and resources provider. Do not use country location conditions or the trusted ips feature that is available in Azure AD Multifactor Authentication's service settings page.
+> If you want your location policies to be enforced in real time by continuous access evaluation, use only the [IP based Conditional Access location condition](../conditional-access/location-condition.md) and configure all IP addresses, **including both IPv4 and IPv6**, that can be seen by your identity provider and resources provider. Do not use country/region location conditions or the trusted ips feature that is available in Azure AD Multifactor Authentication's service settings page.
### Named location limitations
active-directory Concept Token Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-token-protection.md
description: Learn how to use token protection in Conditional Access policies.
Previously updated : 03/09/2023 Last updated : 03/24/2023
Token protection creates a cryptographically secure tie between the token and th
> [!IMPORTANT]
> Token protection is currently in public preview. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-With this preview, we're giving you the ability to create a Conditional Access policy to require token protection for sign-in tokens for specific services. We support token protection for sign-in tokens in Conditional Access for desktop applications accessing Exchange Online and SharePoint Online on Windows devices.
+With this preview, we're giving you the ability to create a Conditional Access policy to require token protection for sign-in tokens (refresh tokens) for specific services. We support token protection for sign-in tokens in Conditional Access for desktop applications accessing Exchange Online and SharePoint Online on Windows devices.
+
+> [!NOTE]
+> We may use the terms "sign-in tokens" and "refresh tokens" interchangeably in this content. This preview doesn't currently support access tokens or web cookies.
:::image type="content" source="media/concept-token-protection/complete-policy-components-session.png" alt-text="Screenshot showing a Conditional Access policy requiring token protection as the session control":::
active-directory Location Condition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/location-condition.md
The location found using the public IP address a client provides to Azure Active
## Named locations
-Locations exist in the Azure portal under **Azure Active Directory** > **Security** > **Conditional Access** > **Named locations**. These named network locations may include locations like an organization's headquarters network ranges, VPN network ranges, or ranges that you wish to block. Named locations are defined by IPv4 and IPv6 address ranges or by countries.
+Locations exist in the Azure portal under **Azure Active Directory** > **Security** > **Conditional Access** > **Named locations**. These named network locations may include locations like an organization's headquarters network ranges, VPN network ranges, or ranges that you wish to block. Named locations are defined by IPv4 and IPv6 address ranges or by countries/regions.
![Named locations in the Azure portal](./media/location-condition/new-named-location.png)
Locations such as your organization's public network ranges can be marked as tru
Organizations can determine country location by IP address or GPS coordinates.
-To define a named location by country, you need to provide:
+To define a named location by country/region, you need to provide:
- A **Name** for the location.
- Choose to determine location by IP address or GPS coordinates.
-- Add one or more countries.
+- Add one or more countries/regions.
- Optionally choose to **Include unknown countries/regions**.

![Country as a location in the Azure portal](./media/location-condition/new-named-location-country-region.png)
-If you select **Determine location by IP address**, the system collects the IP address of the device the user is signing into. When a user signs in, Azure AD resolves the user's IPv4 or [IPv6](/troubleshoot/azure/active-directory/azure-ad-ipv6-support) address (starting April 3, 2023) to a country or region, and the mapping updates periodically. Organizations can use named locations defined by countries to block traffic from countries where they don't do business.
+If you select **Determine location by IP address**, the system collects the IP address of the device the user is signing into. When a user signs in, Azure AD resolves the user's IPv4 or [IPv6](/troubleshoot/azure/active-directory/azure-ad-ipv6-support) address (starting April 3, 2023) to a country or region, and the mapping updates periodically. Organizations can use named locations defined by countries/regions to block traffic from countries/regions where they don't do business.
If you select **Determine location by GPS coordinates**, the user needs to have the Microsoft Authenticator app installed on their mobile device. Every hour, the system contacts the user's Microsoft Authenticator app to collect the GPS location of the user's mobile device.
active-directory Multi Service Web App Access Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/multi-service-web-app-access-storage.md
Previously updated : 04/25/2021 Last updated : 03/24/2023 ms.devlang: csharp, javascript
To create a general-purpose v2 storage account in the Azure portal, follow these
1. On the Azure portal menu, select **All services**. In the list of resources, enter **Storage Accounts**. As you begin typing, the list filters based on your input. Select **Storage Accounts**.
-1. In the **Storage Accounts** window that appears, select **Add**.
+1. In the **Storage Accounts** window that appears, select **Create**.
1. Select the subscription in which to create the storage account.
To create a general-purpose v2 storage account in the Azure portal, follow these
1. Select a location for your storage account, or use the default location.
-1. Leave these fields set to their default values:
+1. For **Performance**, select the **Standard** option.
- |Field|Value|
- |--|--|
- |Deployment model|Resource Manager|
- |Performance|Standard|
- |Account kind|StorageV2 (general-purpose v2)|
- |Replication|Read-access geo-redundant storage (RA-GRS)|
- |Access tier|Hot|
+1. For **Redundancy**, select the **Locally-redundant storage (LRS)** option from the dropdown.
-1. Select **Review + Create** to review your storage account settings and create the account.
+1. Select **Review** to review your storage account settings and create the account.
1. Select **Create**.
To create a Blob Storage container in Azure Storage, follow these steps.
1. Go to your new storage account in the Azure portal.
-1. In the left menu for the storage account, scroll to the **Blob service** section, and then select **Containers**.
+1. In the left menu for the storage account, scroll to the **Data storage** section, and then select **Containers**.
1. Select the **+ Container** button.
To create a Blob Storage container in Azure Storage, follow these steps.
1. Set the level of public access to the container. The default level is **Private (no anonymous access)**.
-1. Select **OK** to create the container.
+1. Select **Create** to create the container.
# [PowerShell](#tab/azure-powershell)
You need to grant your web app access to the storage account before you can crea
In the [Azure portal](https://portal.azure.com), go into your storage account to grant your web app access. Select **Access control (IAM)** in the left pane, and then select **Role assignments**. You'll see a list of who has access to the storage account. Now you want to add a role assignment to a robot, the app service that needs access to the storage account. Select **Add** > **Add role assignment** to open the **Add role assignment** page.
-Assign the **Storage Blob Data Contributor** role to the **App Service** at subscription scope. For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
+1. In the **Assignment type** tab, select **Job function type** and then **Next**.
+
+1. In the **Role** tab, select **Storage Blob Data Contributor** role from the dropdown and then select **Next**.
+
+1. In the **Members** tab, select **Assign access to** > **Managed identity** and then select **Members** > **Select members**. In the **Select managed identities** window, find and select the managed identity created for your App Service in the **Managed identity** dropdown. Select the **Select** button.
+
+1. Select **Review and assign** and then select **Review and assign** once more.
+
+For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
Your web app now has access to your storage account.
active-directory Scenario Desktop Acquire Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-desktop-acquire-token.md
Previously updated : 10/21/2022 Last updated : 03/27/2023
There are various ways you can acquire tokens in a desktop application.
- [Device code flow](scenario-desktop-acquire-token-device-code-flow.md) +
+> [!IMPORTANT]
+> If users need to use multi-factor authentication (MFA) to log in to the application, they will be blocked instead.
+ ## Next steps
Move on to the next article in this scenario,
active-directory Single Sign Out Saml Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/single-sign-out-saml-protocol.md
Per section 3.7 of the [SAML 2.0 core specification](http://docs.oasis-open.org/
The `Issuer` element in a `LogoutRequest` must exactly match one of the **ServicePrincipalNames** in the cloud service in Azure AD. Typically, this is set to the **App ID URI** that is specified during application registration.

### NameID
-The value of the `NameID` element must exactly match the `NameID` of the user that is being signed out.
+The value of the `NameID` element must exactly match the `NameID` of the user that is being signed out.
+
+> [!NOTE]
+> During SAML logout request, the `NameID` value is not considered by Azure Active Directory.
+> If a single user session is active, Azure Active Directory will automatically select that session and the SAML logout will proceed.
+> If multiple user sessions are active, Azure Active Directory will enumerate the active sessions for user selection. After user selection, the SAML logout will proceed.
## LogoutResponse

Azure AD sends a `LogoutResponse` in response to a `LogoutRequest` element. The following excerpt shows a sample `LogoutResponse`.
active-directory Supported Accounts Validation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/supported-accounts-validation.md
-# required metadata
Title: Validation differences by supported account types description: Learn about the validation differences of various properties for different supported account types when registering your app with the Microsoft identity platform. Previously updated : 09/29/2021 Last updated : 03/24/2023 -+
If you change this property you may need to change other properties first.
See the following table for the validation differences of various properties for different supported account types.
-| Property | `AzureADMyOrg` | `AzureADMultipleOrgs` | `AzureADandPersonalMicrosoftAccount` and `PersonalMicrosoftAccount` |
-| | | - | |
-| Application ID URI (`identifierURIs`) | Must be unique in the tenant <br><br> urn:// schemes are supported <br><br> Wildcards aren't supported <br><br> Query strings and fragments are supported <br><br> Maximum length of 255 characters <br><br> No limit\* on number of identifierURIs | Must be globally unique <br><br> urn:// schemes are supported <br><br> Wildcards aren't supported <br><br> Query strings and fragments are supported <br><br> Maximum length of 255 characters <br><br> No limit\* on number of identifierURIs | Must be globally unique <br><br> urn:// schemes aren't supported <br><br> Wildcards, fragments, and query strings aren't supported <br><br> Maximum length of 120 characters <br><br> Maximum of 50 identifierURIs |
-| Certificates (`keyCredentials`) | Symmetric signing key | Symmetric signing key | Encryption and asymmetric signing key |
-| Client secrets (`passwordCredentials`) | No limit\* | No limit\* | If liveSDK is enabled: Maximum of two client secrets |
-| Redirect URIs (`replyURLs`) | See [Redirect URI/reply URL restrictions and limitations](reply-url.md) for more info. | | |
-| API permissions (`requiredResourceAccess`) | No more than 50 APIs (resource apps) from the same tenant as the application, no more than 10 APIs from other tenants, and no more than 400 permissions total across all APIs. | No more than 50 APIs (resource apps) from the same tenant as the application, no more than 10 APIs from other tenants, and no more than 400 permissions total across all APIs. | Maximum of 50 resources per application and 30 permissions per resource (for example, Microsoft Graph). Total limit of 200 per application (resources x permissions). |
-| Scopes defined by this API (`oauth2Permissions`) | Maximum scope name length of 120 characters <br><br> No limit\* on the number of scopes defined | Maximum scope name length of 120 characters <br><br> No limit\* on the number of scopes defined | Maximum scope name length of 40 characters <br><br> Maximum of 100 scopes defined |
-| Authorized client applications (`preAuthorizedApplications`) | No limit\* | No limit\* | Total maximum of 500 <br><br> Maximum of 100 client apps defined <br><br> Maximum of 30 scopes defined per client |
-| appRoles | Supported <br> No limit\* | Supported <br> No limit\* | Not supported |
-| Front-channel logout URL | https://localhost is allowed <br><br> `http` scheme isn't allowed <br><br> Maximum length of 255 characters | https://localhost is allowed <br><br> `http` scheme isn't allowed <br><br> Maximum length of 255 characters | https://localhost is allowed, http://localhost fails <br><br> `http` scheme isn't allowed <br><br> Maximum length of 255 characters <br><br> Wildcards aren't supported |
-| Display name | Maximum length of 120 characters | Maximum length of 120 characters | Maximum length of 90 characters |
-| Tags | Individual tag size must be between 1 and 256 characters (inclusive) <br><br> No whitespaces or duplicate tags allowed <br><br> No limit\* on number of tags | Individual tag size must be between 1 and 256 characters (inclusive) <br><br> No whitespaces or duplicate tags allowed <br><br> No limit\* on number of tags | Individual tag size must be between 1 and 256 characters (inclusive) <br><br> No whitespaces or duplicate tags allowed <br><br> No limit\* on number of tags |
+| Property | `AzureADMyOrg` | `AzureADMultipleOrgs` | `AzureADandPersonalMicrosoftAccount` and `PersonalMicrosoftAccount` |
+| -- | | | -- |
+| Application ID URI (`identifierURIs`) | Must be unique in the tenant <br><br> `urn://` schemes are supported <br><br> Wildcards aren't supported <br><br> Query strings and fragments are supported <br><br> Maximum length of 255 characters <br><br> No limit\* on number of identifierURIs | Must be globally unique <br><br> `urn://` schemes are supported <br><br> Wildcards aren't supported <br><br> Query strings and fragments are supported <br><br> Maximum length of 255 characters <br><br> No limit\* on number of identifierURIs | Must be globally unique <br><br> urn:// schemes aren't supported <br><br> Wildcards, fragments, and query strings aren't supported <br><br> Maximum length of 120 characters <br><br> Maximum of 50 identifierURIs |
+| National clouds | Supported | Supported | Not supported |
+| Certificates (`keyCredentials`) | Symmetric signing key | Symmetric signing key | Encryption and asymmetric signing key |
+| Client secrets (`passwordCredentials`) | No limit\* | No limit\* | If liveSDK is enabled: Maximum of two client secrets |
+| Redirect URIs (`replyURLs`) | See [Redirect URI/reply URL restrictions and limitations](reply-url.md) for more info. | | |
+| API permissions (`requiredResourceAccess`) | No more than 50 APIs (resource apps) from the same tenant as the application, no more than 10 APIs from other tenants, and no more than 400 permissions total across all APIs. | No more than 50 APIs (resource apps) from the same tenant as the application, no more than 10 APIs from other tenants, and no more than 400 permissions total across all APIs. | Maximum of 50 resources per application and 30 permissions per resource (for example, Microsoft Graph). Total limit of 200 per application (resources x permissions). |
+| Scopes defined by this API (`oauth2Permissions`) | Maximum scope name length of 120 characters <br><br> No limit\* on the number of scopes defined | Maximum scope name length of 120 characters <br><br> No limit\* on the number of scopes defined | Maximum scope name length of 40 characters <br><br> Maximum of 100 scopes defined |
+| Authorized client applications (`preAuthorizedApplications`) | No limit\* | No limit\* | Total maximum of 500 <br><br> Maximum of 100 client apps defined <br><br> Maximum of 30 scopes defined per client |
+| appRoles | Supported <br> No limit\* | Supported <br> No limit\* | Not supported |
+| Front-channel logout URL | `https://localhost` is allowed <br><br> `http` scheme isn't allowed <br><br> Maximum length of 255 characters | `https://localhost` is allowed <br><br> `http` scheme isn't allowed <br><br> Maximum length of 255 characters | `https://localhost` is allowed, `http://localhost` fails <br><br> `http` scheme isn't allowed <br><br> Maximum length of 255 characters <br><br> Wildcards aren't supported |
+| Display name | Maximum length of 120 characters | Maximum length of 120 characters | Maximum length of 90 characters |
+| Tags | Individual tag size must be between 1 and 256 characters (inclusive) <br><br> No whitespaces or duplicate tags allowed <br><br> No limit\* on number of tags | Individual tag size must be between 1 and 256 characters (inclusive) <br><br> No whitespaces or duplicate tags allowed <br><br> No limit\* on number of tags | Individual tag size must be between 1 and 256 characters (inclusive) <br><br> No whitespaces or duplicate tags allowed <br><br> No limit\* on number of tags |
\* There's a global limit of about 1000 items across all the collection properties on the app object.
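Which column of limits applies to an app registration is determined by its `signInAudience` value, so it can help to read that property together with the collections listed above. A minimal sketch using Microsoft Graph from Python's `requests` library; the token and object ID are placeholders you'd supply from your own tenant:

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
ACCESS_TOKEN = "<access-token>"            # placeholder: acquire via MSAL or similar
APP_OBJECT_ID = "<application-object-id>"  # placeholder: the app's object ID

# Read signInAudience (which column of limits applies) alongside the
# collection properties the table above describes.
resp = requests.get(
    f"{GRAPH}/applications/{APP_OBJECT_ID}",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    params={"$select": "signInAudience,identifierUris,keyCredentials,appRoles"},
)
resp.raise_for_status()
app = resp.json()
print(app["signInAudience"], len(app.get("identifierUris", [])))
```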
active-directory Licensing Service Plan Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-service-plan-reference.md
Previously updated : 02/16/2023 Last updated : 03/23/2023
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
- **Service plans included (friendly names)**: A list of service plans (friendly names) in the product that correspond to the string ID and GUID

>[!NOTE]
->This information last updated on February 16th, 2023.<br/>You can also download a CSV version of this table [here](https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv).
+>This information last updated on March 23rd, 2023.<br/>You can also download a CSV version of this table [here](https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv).
><br/>

| Product name | String ID | GUID | Service plans included | Service plans included (friendly names) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| Power BI Pro for GCC | POWERBI_PRO_GOV | f0612879-44ea-47fb-baf0-3d76d9235576 | EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>BI_AZURE_P_2_GOV (944e9726-f011-4353-b654-5f7d2663db76) | Exchange Foundation for Government (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>Power BI Pro for Government (944e9726-f011-4353-b654-5f7d2663db76) |
| Power Virtual Agent | VIRTUAL_AGENT_BASE | e4e55366-9635-46f4-a907-fc8c3b5ec81f | CDS_VIRTUAL_AGENT_BASE (0a0a23fa-fea1-4195-bb89-b4789cb12f7f)<br/>FLOW_VIRTUAL_AGENT_BASE (4b81a949-69a1-4409-ad34-9791a6ec88aa)<br/>VIRTUAL_AGENT_BASE (f6934f16-83d3-4f3b-ad27-c6e9c187b260) | Common Data Service for Virtual Agent Base (0a0a23fa-fea1-4195-bb89-b4789cb12f7f)<br/>Power Automate for Virtual Agent (4b81a949-69a1-4409-ad34-9791a6ec88aa)<br/>Virtual Agent Base (f6934f16-83d3-4f3b-ad27-c6e9c187b260) |
| Power Virtual Agents Viral Trial | CCIBOTS_PRIVPREV_VIRAL | 606b54a9-78d8-4298-ad8b-df6ef4481c80 | DYN365_CDS_CCI_BOTS (cf7034ed-348f-42eb-8bbd-dddeea43ee81)<br/>CCIBOTS_PRIVPREV_VIRAL (ce312d15-8fdf-44c0-9974-a25a177125ee)<br/>FLOW_CCI_BOTS (5d798708-6473-48ad-9776-3acc301c40af) | Common Data Service for CCI Bots (cf7034ed-348f-42eb-8bbd-dddeea43ee81)<br/>Dynamics 365 AI for Customer Service Virtual Agents Viral (ce312d15-8fdf-44c0-9974-a25a177125ee)<br/>Flow for CCI Bots (5d798708-6473-48ad-9776-3acc301c40af) |
+| Privacy Management - risk | PRIVACY_MANAGEMENT_RISK | e42bc969-759a-4820-9283-6b73085b68e6 | MIP_S_Exchange (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>PRIVACY_MANGEMENT_RISK (f281fb1f-99a7-46ab-9edb-ffd74e260ed3)<br/>PRIVACY_MANGEMENT_RISK_EXCHANGE (ebb17a6e-6002-4f65-acb0-d386480cebc1) | Data Classification in Microsoft 365 (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>Priva - Risk (f281fb1f-99a7-46ab-9edb-ffd74e260ed3)<br/>Priva - Risk (Exchange) (ebb17a6e-6002-4f65-acb0-d386480cebc1) |
+| Privacy Management - risk for EDU | PRIVACY_MANAGEMENT_RISK_EDU | dcdbaae7-d8c9-40cb-8bb1-62737b9e5a86 | MIP_S_Exchange (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>PRIVACY_MANGEMENT_RISK (f281fb1f-99a7-46ab-9edb-ffd74e260ed3)<br/>PRIVACY_MANGEMENT_RISK_EXCHANGE (ebb17a6e-6002-4f65-acb0-d386480cebc1) | Data Classification in Microsoft 365 (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>Priva - Risk (f281fb1f-99a7-46ab-9edb-ffd74e260ed3)<br/>Priva - Risk (Exchange) (ebb17a6e-6002-4f65-acb0-d386480cebc1) |
+| Privacy Management - risk GCC | PRIVACY_MANAGEMENT_RISK_GCC | 046f7d3b-9595-4685-a2e8-a2832d2b26aa | MIP_S_Exchange (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>PRIVACY_MANGEMENT_RISK (f281fb1f-99a7-46ab-9edb-ffd74e260ed3)<br/>PRIVACY_MANGEMENT_RISK_EXCHANGE (ebb17a6e-6002-4f65-acb0-d386480cebc1) | Data Classification in Microsoft 365 (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>Priva - Risk (f281fb1f-99a7-46ab-9edb-ffd74e260ed3)<br/>Priva - Risk (Exchange) (ebb17a6e-6002-4f65-acb0-d386480cebc1) |
+| Privacy Management - risk_USGOV_DOD | PRIVACY_MANAGEMENT_RISK_USGOV_DOD | 83b30692-0d09-435c-a455-2ab220d504b9 | MIP_S_Exchange (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>PRIVACY_MANGEMENT_RISK (f281fb1f-99a7-46ab-9edb-ffd74e260ed3)<br/>PRIVACY_MANGEMENT_RISK_EXCHANGE (ebb17a6e-6002-4f65-acb0-d386480cebc1) | Data Classification in Microsoft 365 (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>Priva - Risk (f281fb1f-99a7-46ab-9edb-ffd74e260ed3)<br/>Priva - Risk (Exchange) (ebb17a6e-6002-4f65-acb0-d386480cebc1) |
+| Privacy Management - risk_USGOV_GCCHIGH | PRIVACY_MANAGEMENT_RISK_USGOV_GCCHIGH | 787d7e75-29ca-4b90-a3a9-0b780b35367c | MIP_S_Exchange (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>PRIVACY_MANGEMENT_RISK (f281fb1f-99a7-46ab-9edb-ffd74e260ed3)<br/>PRIVACY_MANGEMENT_RISK_EXCHANGE (ebb17a6e-6002-4f65-acb0-d386480cebc1) | Data Classification in Microsoft 365 (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>Priva - Risk (f281fb1f-99a7-46ab-9edb-ffd74e260ed3)<br/>Priva - Risk (Exchange) (ebb17a6e-6002-4f65-acb0-d386480cebc1) |
+| Privacy Management - subject rights request (1) | PRIVACY_MANAGEMENT_SUB_RIGHTS_REQ_1_V2 | d9020d1c-94ef-495a-b6de-818cbbcaa3b8 | MIP_S_EXCHANGE_CO (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>PRIVACY_MANGEMENT_DSR_EXCHANGE_1 (93d24177-c2c3-408a-821d-3d25dfa66e7a)<br/>PRIVACY_MANGEMENT_DSR_1 (07a4098c-3f2d-427f-bfe2-5889ed75dd7b) | Data Classification in Microsoft 365 - Company Level (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>Privacy Management - Subject Rights Request (1 - Exchange) (93d24177-c2c3-408a-821d-3d25dfa66e7a)<br/>Privacy Management - Subject Rights Request (1) (07a4098c-3f2d-427f-bfe2-5889ed75dd7b) |
+| Privacy Management - subject rights request (1) for EDU | PRIVACY_MANAGEMENT_SUB_RIGHTS_REQ_1_EDU_V2 | 475e3e81-3c75-4e07-95b6-2fed374536c8 | MIP_S_EXCHANGE_CO (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>PRIVACY_MANGEMENT_DSR_EXCHANGE_1 (93d24177-c2c3-408a-821d-3d25dfa66e7a)<br/>PRIVACY_MANGEMENT_DSR_1 (07a4098c-3f2d-427f-bfe2-5889ed75dd7b) | Data Classification in Microsoft 365 - Company Level (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>Privacy Management - Subject Rights Request (1 - Exchange) (93d24177-c2c3-408a-821d-3d25dfa66e7a)<br/>Privacy Management - Subject Rights Request (1) (07a4098c-3f2d-427f-bfe2-5889ed75dd7b) |
+| Privacy Management - subject rights request (1) GCC | PRIVACY_MANAGEMENT_SUB_RIGHTS_REQ_1_V2_GCC | 017fb6f8-00dd-4025-be2b-4eff067cae72 | MIP_S_EXCHANGE_CO (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>PRIVACY_MANGEMENT_DSR_EXCHANGE_1 (93d24177-c2c3-408a-821d-3d25dfa66e7a)<br/>PRIVACY_MANGEMENT_DSR_1 (07a4098c-3f2d-427f-bfe2-5889ed75dd7b) | Data Classification in Microsoft 365 - Company Level (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>Privacy Management - Subject Rights Request (1 - Exchange) (93d24177-c2c3-408a-821d-3d25dfa66e7a)<br/>Privacy Management - Subject Rights Request (1) (07a4098c-3f2d-427f-bfe2-5889ed75dd7b) |
+| Privacy Management - subject rights request (1) USGOV_DOD | PRIVACY_MANAGEMENT_SUB_RIGHTS_REQ_1_V2_USGOV_DOD | d3c841f3-ea93-4da2-8040-6f2348d20954 | MIP_S_EXCHANGE_CO (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>PRIVACY_MANGEMENT_DSR_EXCHANGE_1 (93d24177-c2c3-408a-821d-3d25dfa66e7a)<br/>PRIVACY_MANGEMENT_DSR_1 (07a4098c-3f2d-427f-bfe2-5889ed75dd7b) | Data Classification in Microsoft 365 - Company Level (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>Privacy Management - Subject Rights Request (1 - Exchange) (93d24177-c2c3-408a-821d-3d25dfa66e7a)<br/>Privacy Management - Subject Rights Request (1) (07a4098c-3f2d-427f-bfe2-5889ed75dd7b) |
+| Privacy Management - subject rights request (1) USGOV_GCCHIGH | PRIVACY_MANAGEMENT_SUB_RIGHTS_REQ_1_V2_USGOV_GCCHIGH | 706d2425-6170-4818-ba08-2ad8f1d2d078 | MIP_S_EXCHANGE_CO (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>PRIVACY_MANGEMENT_DSR_EXCHANGE_1 (93d24177-c2c3-408a-821d-3d25dfa66e7a)<br/>PRIVACY_MANGEMENT_DSR_1 (07a4098c-3f2d-427f-bfe2-5889ed75dd7b) | Data Classification in Microsoft 365 - Company Level (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>Privacy Management - Subject Rights Request (1 - Exchange) (93d24177-c2c3-408a-821d-3d25dfa66e7a)<br/>Privacy Management - Subject Rights Request (1) (07a4098c-3f2d-427f-bfe2-5889ed75dd7b) |
+| Privacy Management - subject rights request (10) | PRIVACY_MANAGEMENT_SUB_RIGHTS_REQ_10_V2 | 78ea43ac-9e5d-474f-8537-4abb82dafe27 | MIP_S_EXCHANGE_CO (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>PRIVACY_MANGEMENT_DSR_EXCHANGE_10 (f0241705-7b44-4401-a6b6-7055062b5b03)<br/>PRIVACY_MANGEMENT_DSR_10 (74853901-d7a9-428e-895d-f4c8687a9f0b) | Data Classification in Microsoft 365 - Company Level (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>Privacy Management - Subject Rights Request (10 - Exchange) (f0241705-7b44-4401-a6b6-7055062b5b03)<br/>Privacy Management - Subject Rights Request (10) (74853901-d7a9-428e-895d-f4c8687a9f0b) |
+| Privacy Management - subject rights request (10) for EDU | PRIVACY_MANAGEMENT_SUB_RIGHTS_REQ_10_EDU_V2 | e001d9f1-5047-4ebf-8927-148530491f83 | MIP_S_EXCHANGE_CO (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>PRIVACY_MANGEMENT_DSR_EXCHANGE_10 (f0241705-7b44-4401-a6b6-7055062b5b03)<br/>PRIVACY_MANGEMENT_DSR_10 (74853901-d7a9-428e-895d-f4c8687a9f0b) | Data Classification in Microsoft 365 - Company Level (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>Privacy Management - Subject Rights Request (10 - Exchange) (f0241705-7b44-4401-a6b6-7055062b5b03)<br/>Privacy Management - Subject Rights Request (10) (74853901-d7a9-428e-895d-f4c8687a9f0b) |
+| Privacy Management - subject rights request (10) GCC | PRIVACY_MANAGEMENT_SUB_RIGHTS_REQ_10_V2_GCC | a056b037-1fa0-4133-a583-d05cff47d551 | MIP_S_EXCHANGE_CO (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>PRIVACY_MANGEMENT_DSR_EXCHANGE_10 (f0241705-7b44-4401-a6b6-7055062b5b03)<br/>PRIVACY_MANGEMENT_DSR_10 (74853901-d7a9-428e-895d-f4c8687a9f0b) | Data Classification in Microsoft 365 - Company Level (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>Privacy Management - Subject Rights Request (10 - Exchange) (f0241705-7b44-4401-a6b6-7055062b5b03)<br/>Privacy Management - Subject Rights Request (10) (74853901-d7a9-428e-895d-f4c8687a9f0b) |
+| Privacy Management - subject rights request (10) USGOV_DOD | PRIVACY_MANAGEMENT_SUB_RIGHTS_REQ_10_V2_USGOV_DOD | ab28dfa1-853a-4f54-9315-f5146975ac9a | MIP_S_EXCHANGE_CO (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>PRIVACY_MANGEMENT_DSR_EXCHANGE_10 (f0241705-7b44-4401-a6b6-7055062b5b03)<br/>PRIVACY_MANGEMENT_DSR_10 (74853901-d7a9-428e-895d-f4c8687a9f0b) | Data Classification in Microsoft 365 - Company Level (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>Privacy Management - Subject Rights Request (10 - Exchange) (f0241705-7b44-4401-a6b6-7055062b5b03)<br/>Privacy Management - Subject Rights Request (10) (74853901-d7a9-428e-895d-f4c8687a9f0b) |
+| Privacy Management - subject rights request (10) USGOV_GCCHIGH | PRIVACY_MANAGEMENT_SUB_RIGHTS_REQ_10_V2_USGOV_GCCHIGH | f6aa3b3d-62f4-4c1d-a44f-0550f40f729c | MIP_S_EXCHANGE_CO (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>PRIVACY_MANGEMENT_DSR_EXCHANGE_10 (f0241705-7b44-4401-a6b6-7055062b5b03)<br/>PRIVACY_MANGEMENT_DSR_10 (74853901-d7a9-428e-895d-f4c8687a9f0b) | Data Classification in Microsoft 365 - Company Level (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>Privacy Management - Subject Rights Request (10 - Exchange) (f0241705-7b44-4401-a6b6-7055062b5b03)<br/>Privacy Management - Subject Rights Request (10) (74853901-d7a9-428e-895d-f4c8687a9f0b) |
+| Privacy Management - subject rights request (50) | PRIVACY_MANAGEMENT_SUB_RIGHTS_REQ_50 | c416b349-a83c-48cb-9529-c420841dedd6 | MIP_S_EXCHANGE_CO (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>PRIVACY_MANGEMENT_DSR (8bbd1fea-6dc6-4aef-8abc-79af22d746e4)<br/>PRIVACY_MANGEMENT_DSR_EXCHANGE (7ca7f875-98db-4458-ab1b-47503826dd73) | Data Classification in Microsoft 365 - Company Level (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>Privacy Management - Subject Rights Request (8bbd1fea-6dc6-4aef-8abc-79af22d746e4)<br/>Privacy Management - Subject Rights Request (Exchange) (7ca7f875-98db-4458-ab1b-47503826dd73) |
+| Privacy Management - subject rights request (50) | PRIVACY_MANAGEMENT_SUB_RIGHTS_REQ_50_V2 | f6c82f13-9554-4da1-bed3-c024cc906e02 | MIP_S_EXCHANGE_CO (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>PRIVACY_MANGEMENT_DSR (8bbd1fea-6dc6-4aef-8abc-79af22d746e4)<br/>PRIVACY_MANGEMENT_DSR_EXCHANGE (7ca7f875-98db-4458-ab1b-47503826dd73) | Data Classification in Microsoft 365 - Company Level (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>Privacy Management - Subject Rights Request (8bbd1fea-6dc6-4aef-8abc-79af22d746e4)<br/>Privacy Management - Subject Rights Request (Exchange) (7ca7f875-98db-4458-ab1b-47503826dd73) |
+| Privacy Management - subject rights request (50) for EDU | PRIVACY_MANAGEMENT_SUB_RIGHTS_REQ_50_EDU_V2 | ed45d397-7d61-4110-acc0-95674917bb14 | MIP_S_EXCHANGE_CO (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>PRIVACY_MANGEMENT_DSR (8bbd1fea-6dc6-4aef-8abc-79af22d746e4)<br/>PRIVACY_MANGEMENT_DSR_EXCHANGE (7ca7f875-98db-4458-ab1b-47503826dd73) | Data Classification in Microsoft 365 - Company Level (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>Privacy Management - Subject Rights Request (8bbd1fea-6dc6-4aef-8abc-79af22d746e4)<br/>Privacy Management - Subject Rights Request (Exchange) (7ca7f875-98db-4458-ab1b-47503826dd73) |
+| Privacy Management - subject rights request (100) | PRIVACY_MANAGEMENT_SUB_RIGHTS_REQ_100_V2 | cf4c6c3b-f863-4940-97e8-1d25e912f4c4 | MIP_S_EXCHANGE_CO (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>PRIVACY_MANGEMENT_DSR_EXCHANGE_100 (5c221cec-2c39-435b-a1e2-7cdd7fac5913)<br/>PRIVACY_MANGEMENT_DSR_100 (500f440d-167e-4030-a3a7-8cd35421fbd8) | Data Classification in Microsoft 365 - Company Level (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>Privacy Management - Subject Rights Request (100 - Exchange) (5c221cec-2c39-435b-a1e2-7cdd7fac5913)<br/>Privacy Management - Subject Rights Request (100) (500f440d-167e-4030-a3a7-8cd35421fbd8) |
+| Privacy Management - subject rights request (100) for EDU | PRIVACY_MANAGEMENT_SUB_RIGHTS_REQ_100_EDU_V2 | 9b85b4f0-92d9-4c3d-b230-041520cb1046 | MIP_S_EXCHANGE_CO (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>PRIVACY_MANGEMENT_DSR_EXCHANGE_100 (5c221cec-2c39-435b-a1e2-7cdd7fac5913)<br/>PRIVACY_MANGEMENT_DSR_100 (500f440d-167e-4030-a3a7-8cd35421fbd8) | Data Classification in Microsoft 365 - Company Level (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>Privacy Management - Subject Rights Request (100 - Exchange) (5c221cec-2c39-435b-a1e2-7cdd7fac5913)<br/>Privacy Management - Subject Rights Request (100) (500f440d-167e-4030-a3a7-8cd35421fbd8) |
+| Privacy Management - subject rights request (100) GCC | PRIVACY_MANAGEMENT_SUB_RIGHTS_REQ_100_V2_GCC | 91bbc479-4c2c-4210-9c88-e5b468c35b83 | MIP_S_EXCHANGE_CO (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>PRIVACY_MANGEMENT_DSR_EXCHANGE_100 (5c221cec-2c39-435b-a1e2-7cdd7fac5913)<br/>PRIVACY_MANGEMENT_DSR_100 (500f440d-167e-4030-a3a7-8cd35421fbd8) | Data Classification in Microsoft 365 - Company Level (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>Privacy Management - Subject Rights Request (100 - Exchange) (5c221cec-2c39-435b-a1e2-7cdd7fac5913)<br/>Privacy Management - Subject Rights Request (100) (500f440d-167e-4030-a3a7-8cd35421fbd8) |
+| Privacy Management - subject rights request (100) USGOV_DOD | PRIVACY_MANAGEMENT_SUB_RIGHTS_REQ_100_V2_USGOV_DOD | ba6e69d5-ba2e-47a7-b081-66c1b8e7e7d4 | MIP_S_EXCHANGE_CO (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>PRIVACY_MANGEMENT_DSR_EXCHANGE_100 (5c221cec-2c39-435b-a1e2-7cdd7fac5913)<br/>PRIVACY_MANGEMENT_DSR_100 (500f440d-167e-4030-a3a7-8cd35421fbd8) | Data Classification in Microsoft 365 - Company Level (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>Privacy Management - Subject Rights Request (100 - Exchange) (5c221cec-2c39-435b-a1e2-7cdd7fac5913)<br/>Privacy Management - Subject Rights Request (100) (500f440d-167e-4030-a3a7-8cd35421fbd8) |
+| Privacy Management - subject rights request (100) USGOV_GCCHIGH | PRIVACY_MANAGEMENT_SUB_RIGHTS_REQ_100_V2_USGOV_GCCHIGH | cee36ce4-cc31-481f-8cab-02765d3e441f | MIP_S_EXCHANGE_CO (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>PRIVACY_MANGEMENT_DSR_EXCHANGE_100 (5c221cec-2c39-435b-a1e2-7cdd7fac5913)<br/>PRIVACY_MANGEMENT_DSR_100 (500f440d-167e-4030-a3a7-8cd35421fbd8) | Data Classification in Microsoft 365 - Company Level (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>Privacy Management - Subject Rights Request (100 - Exchange) (5c221cec-2c39-435b-a1e2-7cdd7fac5913)<br/>Privacy Management - Subject Rights Request (100) (500f440d-167e-4030-a3a7-8cd35421fbd8) |
| Project for Office 365 | PROJECTCLIENT | a10d5e58-74da-4312-95c8-76be4e5b75a0 | PROJECT_CLIENT_SUBSCRIPTION (fafd7243-e5c1-4a3a-9e40-495efcb1d3c3) | PROJECT ONLINE DESKTOP CLIENT (fafd7243-e5c1-4a3a-9e40-495efcb1d3c3) |
| Project Online Essentials | PROJECTESSENTIALS | 776df282-9fc0-4862-99e2-70e561b9909e | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>PROJECT_ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Forms (Plan E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>Office for the web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Project Online Essentials (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SharePoint (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97) |
| Project Online Essentials for Faculty | PROJECTESSENTIALS_FACULTY | e433b246-63e7-4d0b-9efa-7940fa3264d6 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>OFFICE_FORMS_PLAN_2 (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>PROJECT_ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINTENTERPRISE_EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Forms (Plan 2) (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>Office for the Web for Education (e03c7e47-402c-463c-ab25-949079bedb21)<br/>Project Online Essentials (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SharePoint (Plan 2) for Education (63038b2c-28d0-45f6-bc36-33062963b498)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97) |
active-directory Azure Ad Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/azure-ad-account.md
Previously updated : 11/11/2022- Last updated : 03/27/2023
# Add Azure Active Directory (Azure AD) as an identity provider for External Identities
-Azure Active Directory is available as an identity provider option for [B2B collaboration](what-is-b2b.md#integrate-with-identity-providers) by default. If an external guest user has an Azure AD account through work or school, they can redeem your B2B collaboration invitations or complete your sign-up user flows using their Azure AD account.
+Azure Active Directory is available as an identity provider option for B2B collaboration by default. If an external guest user has an Azure AD account through work or school, they can redeem your B2B collaboration invitations or complete your sign-up user flows using their Azure AD account.
## Guest sign-in using Azure Active Directory accounts
-Azure Active Directory is available in the list of External Identities identity providers by default. No further configuration is needed to allow guest users to sign in with their Azure AD account using either the [invitation flow](redemption-experience.md#invitation-redemption-flow) or a [self-service sign-up user flow](self-service-sign-up-overview.md).
+Azure Active Directory is available in the list of External Identities identity providers by default. No further configuration is needed to allow guest users to sign in with their Azure AD account using either the invitation flow or a self-service sign-up user flow.
:::image type="content" source="media/azure-ad-account/azure-ad-account-identity-provider.png" alt-text="Screenshot of Azure AD account in the identity provider list." lightbox="media/azure-ad-account/azure-ad-account-identity-provider.png":::
active-directory Configure Saas Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/configure-saas-apps.md
- Title: Configure SaaS apps for B2B collaboration
-description: Learn how to configure SaaS apps for Azure Active Directory B2B collaboration and view additional available resources.
----- Previously updated : 05/23/2017--------
-# Configure SaaS apps for B2B collaboration
-
-Azure Active Directory (Azure AD) B2B collaboration works with most apps that integrate with Azure AD. In this section, we walk through instructions for configuring some popular SaaS apps for use with Azure AD B2B.
-
-Before you look at app-specific instructions, here are some rules of thumb:
-
-* For most of the apps, user setup needs to happen manually. That is, users must be created manually in the app as well.
-
-* For apps that support automatic setup, such as Dropbox, separate invitations are created from the apps. Users must be sure to accept each invitation.
-
-* In the user attributes, to mitigate any issues with mangled user profile disk (UPD) in guest users, always set **User Identifier** to **user.mail**.
--
-## Dropbox Business
-
-To enable users to sign in using their organization account, you must manually configure Dropbox Business to use Azure AD as a Security Assertion Markup Language (SAML) identity provider. If Dropbox Business has not been configured to do so, it cannot prompt or otherwise allow users to sign in using Azure AD.
-
-1. To add the Dropbox Business app into Azure AD, select **Enterprise applications** in the left pane, and then click **Add**.
-
- ![The "Add" button on the Enterprise applications page](media/configure-saas-apps/add-dropbox.png)
-
-2. In the **Add an application** window, enter **dropbox** in the search box, and then select **Dropbox for Business** in the results list.
-
- ![Search for "dropbox" on the Add an application page](media/configure-saas-apps/add-app-dialog.png)
-
-3. On the **Single sign-on** page, select **Single sign-on** in the left pane, and then enter **user.mail** in the **User Identifier** box. (It's set as UPN by default.)
-
- ![Configuring single sign-on for the app](media/configure-saas-apps/configure-app-sso.png)
-
-4. To download the certificate to use for Dropbox configuration, select **Configure DropBox**, and then select **SAML Single Sign On Service URL** in the list.
-
- ![Downloading the certificate for Dropbox configuration](media/configure-saas-apps/download-certificate.png)
-
-5. Sign in to Dropbox with the sign-on URL from the **Single sign-on** page.
-
- ![Screenshot showing the Dropbox sign-in page](media/configure-saas-apps/sign-in-to-dropbox.png)
-
-6. On the menu, select **Admin Console**.
-
- ![The "Admin Console" link on the Dropbox menu](media/configure-saas-apps/dropbox-menu.png)
-
-7. In the **Authentication** dialog box, select **More**, upload the certificate and then, in the **Sign in URL** box, enter the SAML single sign-on URL.
-
- ![The "More" link in the collapsed Authentication dialog box](media/configure-saas-apps/dropbox-auth-01.png)
-
- ![The "Sign in URL" in the expanded Authentication dialog box](media/configure-saas-apps/paste-single-sign-on-URL.png)
-
-8. To configure automatic user setup in the Azure portal, select **Provisioning** in the left pane, select **Automatic** in the **Provisioning Mode** box, and then select **Authorize**.
-
- ![Configuring automatic user provisioning in the Azure portal](media/configure-saas-apps/set-up-automatic-provisioning.png)
-
-After guest or member users have been set up in the Dropbox app, they receive a separate invitation from Dropbox. To use Dropbox single sign-on, invitees must accept the invitation by clicking a link in it.
-
-## Box
-You can enable users to authenticate Box guest users with their Azure AD account by using federation that's based on the SAML protocol. In this procedure, you upload metadata to Box.com.
-
-1. Add the Box app from the enterprise apps.
-
-2. Configure single sign-on in the following order:
-
- ![Screenshot showing the single sign-on configuration settings](media/configure-saas-apps/configure-box-sso.png)
-
- a. In the **Sign on URL** box, ensure that the sign-on URL is set appropriately for Box in the Azure portal. This URL is the URL of your Box.com tenant. It should follow the naming convention *https://.box.com*.
- The **Identifier** does not apply to this app, but it still appears as a mandatory field.
-
- b. In the **User identifier** box, enter **user.mail** (for SSO for guest accounts).
-
- c. Under **SAML Signing Certificate**, click **Create new certificate**.
-
- d. To begin configuring your Box.com tenant to use Azure AD as an identity provider, download the metadata file and then save it to your local drive.
-
- e. Forward the metadata file to the Box support team, which configures single sign-on for you.
-
-3. For Azure AD automatic user setup, in the left pane, select **Provisioning**, and then select **Authorize**.
-
- ![Authorize Azure AD to connect to Box](media/configure-saas-apps/auth-azure-ad-to-connect-to-box.png)
-
-Like Dropbox invitees, Box invitees must redeem their invitation from the Box app.
-
-## Next steps
-
-See the following articles on Azure AD B2B collaboration:
-- [What is Azure AD B2B collaboration?](what-is-b2b.md)
-- [Dynamic groups and B2B collaboration](use-dynamic-groups.md)
-- [B2B collaboration user claims mapping](claims-mapping.md)
-- [Microsoft 365 external sharing](o365-external-user.md)
-
active-directory Reset Redemption Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/reset-redemption-status.md
ContentType: application/json
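The `ContentType: application/json` fragment above belongs to the Microsoft Graph invitation request this article uses to reset a guest's redemption status. A minimal Python sketch of that call; the token, email address, and user object ID are placeholders:

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
ACCESS_TOKEN = "<access-token>"  # placeholder

# Re-invite an existing guest with resetRedemption set to true so they
# can redeem the invitation again (for example, after an email change).
payload = {
    "invitedUserEmailAddress": "guest@example.com",   # placeholder
    "sendInvitationMessage": False,
    "invitedUser": {"id": "<guest-user-object-id>"},  # placeholder
    "inviteRedirectUrl": "https://myapps.microsoft.com",
    "resetRedemption": True,
}
resp = requests.post(
    f"{GRAPH}/invitations",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=payload,  # requests sets the Content-Type: application/json header
)
resp.raise_for_status()
```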
- [Add Azure Active Directory B2B collaboration users by using PowerShell](customize-invitation-api.md#powershell)
- [Properties of an Azure AD B2B guest user](user-properties.md)
-- [B2B for Azure AD integrated apps](configure-saas-apps.md)
active-directory Concept Secure Remote Workers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/concept-secure-remote-workers.md
The guidance helps:
This guide assumes that your cloud-only or hybrid identities have been established in Azure AD already. For help with choosing your identity type, see the article [Choose the right authentication method for your Azure Active Directory hybrid identity solution](../hybrid/choose-ad-authn.md).
+### Guided walkthrough
+
+For a guided walkthrough of many of the recommendations in this article, see the [Set up Azure AD](https://go.microsoft.com/fwlink/?linkid=2221308) guide.
+ ## Guidance for Azure AD Free, Office 365, or Microsoft 365 customers

There are many recommendations that Azure AD Free, Office 365, or Microsoft 365 app customers should take to protect their user identities. The following table is intended to highlight key actions for the following license subscriptions:
active-directory How To Customize Branding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/how-to-customize-branding.md
Previously updated : 03/01/2023 Last updated : 03/24/2023
When users authenticate into your corporate intranet or web-based applications, Azure Active Directory (Azure AD) provides the identity and access management (IAM) service. You can add company branding that applies to all these sign-in experiences to create a consistent experience for your users.
-The default sign-in experience is the global look and feel that applies across all sign-ins to your tenant. Before you customize any settings, the default Microsoft branding appears in your sign-in pages. You can customize this default experience with a custom background image or color, favicon, layout, header, and footer. You can also upload a custom CSS.
+The default sign-in experience is the global look and feel that applies across all sign-ins to your tenant. Before you customize any settings, the default Microsoft branding appears in your sign-in pages. You can customize this default experience with a custom background image and/or color, favicon, layout, header, and footer. You can also upload a custom CSS.
> [!NOTE]
> Instructions for the legacy company branding customization process can be found in the **[Customize branding](customize-branding.md)** article.<br><br>The updated experience for adding company branding covered in this article is available as an Azure AD preview feature. To opt in and explore the new experience, go to **Azure AD** > **Preview features** and enable the **Enhanced Company Branding** feature. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
>
-## User experience
-
-You can customize the sign-in pages when users access your organization's tenant-specific apps. For Microsoft and SaaS applications (multi-tenant apps) such as <https://myapps.microsoft.com>, or <https://outlook.com> the customized sign-in page appears only after the user types their **Email**, or **Phone**, and select **Next**.
-
-Some of the Microsoft applications support the home realm discovery `whr` query string parameter, or a domain variable. With the home realm discovery and domain parameter, the customized sign-in page appears immediately in the first step.
-
-In the following examples replace the contoso.com with your own tenant name, or verified domain name:
-- For Microsoft Outlook `https://outlook.com/contoso.com`
-- For SharePoint online `https://contoso.sharepoint.com`
-- For my app portal `https://myapps.microsoft.com/?whr=contoso.com`
-- Self-service password reset `https://passwordreset.microsoftonline.com/?whr=contoso.com`
-
-## Role and license requirements
+## License requirements
Adding custom branding requires one of the following licenses:
The **Global Administrator** role is required to customize company branding.
**Use Microsoft Graph with Azure AD company branding.** Company branding can be viewed and managed using Microsoft Graph on the `/beta` endpoint and the `organizationalBranding` resource type. For more information, see the [organizational branding API documentation](/graph/api/resources/organizationalbranding?view=graph-rest-beta&preserve-view=true).
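For instance, the current branding configuration can be read with a single GET against the beta endpoint named above. A minimal sketch in Python; the token and tenant ID are placeholders:

```python
import requests

GRAPH_BETA = "https://graph.microsoft.com/beta"
ACCESS_TOKEN = "<access-token>"  # placeholder
TENANT_ID = "<tenant-id>"        # placeholder: the organization's ID

# Read the default branding object for the organization.
resp = requests.get(
    f"{GRAPH_BETA}/organization/{TENANT_ID}/branding",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
)
resp.raise_for_status()
branding = resp.json()
print(branding.get("signInPageText"), branding.get("backgroundColor"))
```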
+The branding elements are called out in the following example. Text descriptions are provided following the image.
++
+1. **Favicon**: Small icon that appears on the left side of the browser tab.
+1. **Header logo**: Space across the top of the web page, below the web browser navigation area.
+1. **Background image** and **page background color**: The entire space behind the sign-in box.
+1. **Banner logo**: The logo that appears in the upper-left corner of the sign-in box.
+1. **Username hint and text**: The text that appears before a user enters their information.
+1. **Sign-in page text**: Additional text you can add below the username field.
+1. **Self-service password reset**: A link you can add below the sign-in page text for password resets.
+1. **Template**: The layout of the page and sign-in boxes.
+1. **Footer**: Text in the lower-right corner of the page where you can add Terms of use or privacy information.
+
+### User experience
+
+When customizing the sign-in pages that users see when accessing your organization's tenant-specific applications, there are some user experience scenarios you may need to consider.
+
+For Microsoft, Software as a Service (SaaS), and multi-tenant applications such as <https://myapps.microsoft.com>, or <https://outlook.com>, the customized sign-in page appears only after the user types their **Email** or **Phone number** and selects the **Next** button.
+
+Some Microsoft applications support [Home Realm Discovery](../manage-apps/home-realm-discovery-policy.md) for authentication. In these scenarios, when a customer signs in to an Azure AD common sign-in page, Azure AD can use the customer's user name to determine where they should sign in.
+
+For customers who access applications from a custom URL, the `whr` query string parameter, or a domain variable, can be used to apply company branding at the initial sign-in screen, not just after adding the email or phone number. For example, `whr=contoso.com` would appear in the custom URL for the app. With the Home Realm Discovery and domain parameter included, the company branding appears immediately in the first sign-in step. Other domain hints can be included.
+
+In the following examples, replace contoso.com with your own tenant name or verified domain name:
+
+- For Microsoft Outlook `https://outlook.com/contoso.com`
+- For SharePoint online `https://contoso.sharepoint.com`
+- For my app portal `https://myapps.microsoft.com/?whr=contoso.com`
+- Self-service password reset `https://passwordreset.microsoftonline.com/?whr=contoso.com`
+
+> [!NOTE]
+> The settings to manage the 'Stay signed in?' prompt can now be found in the User settings area of Azure AD. Go to **Azure AD** > **Users** > **User settings**.
+>
+> For more information on the 'Stay signed in?' prompt, see [How to manage user profile information](how-to-manage-user-profile-info.md#learn-about-the-stay-signed-in-prompt).
+ ## How to navigate the company branding process

1. Sign in to the [Azure portal](https://portal.azure.com/) using a Global Administrator account for the directory.
The sign-in experience process is grouped into sections. At the end of each sect
- **Favicon**: Select a PNG or JPG of your logo that appears in the web browser tab.
+ ![Screenshot of sample favicons in a web browser.](media/how-to-customize-branding/favicon-example.png)
- **Background image**: Select a PNG or JPG to display as the main image on your sign-in page. This image scales and crops according to the window size, but may be partially blocked by the sign-in prompt.
- **Page background color**: If the background image isn't able to load because of a slower connection, your selected background color appears instead.
The sign-in experience process is grouped into sections. At the end of each sect
- Choose one of two **Templates**: Full-screen or partial-screen background. The full-screen background could obscure your background image, so choose the partial-screen background if your background image is important.
- The details of the **Header** and **Footer** options are set on the next two sections of the process.
+
+ ![Screenshot of the Layout tab.](media/how-to-customize-branding/layout-visual-templates.png)
-- **Custom CSS**: Upload custom CSS to replace the Microsoft default style of the page. [Download the CSS template](https://download.microsoft.com/download/7/2/7/727f287a-125d-4368-a673-a785907ac5ab/custom-styles-template-013023.css).
+- **Custom CSS**: Upload custom CSS to replace the Microsoft default style of the page.
+ - [Download the CSS template](https://download.microsoft.com/download/7/2/7/727f287a-125d-4368-a673-a785907ac5ab/custom-styles-template-013023.css).
+ - View the [CSS template reference guide](reference-company-branding-css-template.md).
## Header

If you haven't enabled the header, go to the **Layout** section and select **Show header**. Once enabled, select a PNG or JPG to display in the header of the sign-in page.
+![Screenshot of the message indicating that the header needs to be enabled.](media/how-to-customize-branding/disabled-header-message.png)
## Footer

If you haven't enabled the footer, go to the **Layout** section and select **Show footer**. Once enabled, adjust the following settings.
If you haven't enabled the footer, go to the **Layout** section and select **Sho
Uncheck this option to hide the default Microsoft link. Optionally provide your own **Display text** and **URL**. The text and links don't have to be related to privacy and cookies.

-- **Show 'Terms of Use'**: This option is also elected by default and displays the [Microsoft 'Terms of Use'](https://www.microsoft.com/servicesagreement/) link.
+- **Show 'Terms of Use'**: This option is also selected by default and displays the [Microsoft 'Terms of Use'](https://www.microsoft.com/servicesagreement/) link.
Uncheck this option to hide the default Microsoft link. Optionally provide your own **Display text** and **URL**. The text and links don't have to be related to your terms of use.
To create an inclusive experience for all of your users, you can customize the s
The process for customizing the experience is the same as the [default sign-in experience](#basics) process, except you must select a language from the dropdown list in the **Basics** section. We recommend adding custom text in the same areas as your default sign-in experience.
+Azure AD supports right-to-left functionality for languages such as Arabic and Hebrew that are read right-to-left. The layout adjusts automatically, based on the user's browser settings.
+
+![Screenshot of the sign-in experience in Hebrew, demonstrating the right-to-left layout.](media/how-to-customize-branding/right-to-left-language-example.png)
+ ## Next steps
+- [View the CSS template reference guide](reference-company-branding-css-template.md).
- [Learn more about default user permissions in Azure AD](../fundamentals/users-default-permissions.md)
-
-- [Manage the 'stay signed in' prompt](active-directory-users-profile-azure-portal.md#learn-about-the-stay-signed-in-prompt)
+- [Manage the 'stay signed in' prompt](how-to-manage-user-profile-info.md#learn-about-the-stay-signed-in-prompt)
active-directory How To Manage User Profile Info https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/how-to-manage-user-profile-info.md
+
+ Title: How to manage user profile information
+description: Instructions about how to manage a user's profile and settings in Azure Active Directory.
++++++++ Last updated : 03/23/2023+++++
+# Add or update a user's profile information and settings
+A user's profile information and settings can be managed on an individual basis and for all users in your directory. When you look at these settings together, you can see how permissions, restrictions, and other connections work together.
+
+This article covers how to add user profile information, such as a profile picture and job-specific information. You can also choose to allow users to connect their LinkedIn accounts or restrict access to the Azure AD administration portal. Some settings may be managed in more than one area of Azure AD. For more information about adding new users, see [How to add or delete users in Azure Active Directory](add-users-azure-active-directory.md).
+
+## Add or change profile information
+When new users are created, only some details are added to their user profile. If your organization needs more details, they can be added after the user is created.
+
+1. Sign in to the [Azure portal](https://portal.azure.com/) in the User Administrator role for the organization.
+
+1. Go to **Azure Active Directory** > **Users** and select a user.
+
+1. There are two ways to edit user profile details. Either select **Edit properties** from the top of the page or select **Properties**.
+
+ ![Screenshot of the overview page for a selected user, with the edit options highlighted.](media/active-directory-users-profile-azure-portal/user-profile-overview.png)
+
+1. After making any changes, select the **Save** button.
+
+If you selected the **Edit properties** option:
+ - The full list of properties appears in edit mode on the **All** category.
+ - To edit properties based on the category, select a category from the top of the page.
+ - Select the **Save** button at the bottom of the page to save any changes.
+
+ ![Screenshot of a selected user's details, with the detail categories and save button highlighted.](media/active-directory-users-profile-azure-portal/user-profile-properties-tabbed-view.png)
+
+If you selected the **Properties** tab option:
+ - The full list of properties appears for you to review.
+ - To edit a property, select the pencil icon next to the category heading.
+ - Select the **Save** button at the bottom of the page to save any changes.
+
+ ![Screenshot of the Properties tab, with the edit options highlighted.](media/active-directory-users-profile-azure-portal/user-profile-properties-single-page-view.png)
+
+### Profile categories
+There are six categories of profile details you may be able to edit.
+
+- **Identity:** Add or update other identity values for the user, such as a married last name. You can set this name independently from the values of First name and Last name. For example, you could use it to include initials, a company name, or to change the sequence of names shown. If you have two users with the same name, such as 'Chris Green,' you could use the Identity string to set their names to 'Chris B. Green' and 'Chris R. Green.'
+
+- **Job information:** Add any job-related information, such as the user's job title, department, or manager.
+
+- **Contact info:** Add any relevant contact information for the user.
+
+- **Parental controls:** For organizations like K-12 school districts, the user's age group may need to be provided. *Minors* are 12 and under, *Not adult* are 13-18 years old, and *Adults* are 18 and over. The combination of age group and consent provided by parent options determine the Legal age group classification. The Legal age group classification may limit the user's access and authority.
+
+- **Settings:** Decide whether the user can sign in to the Azure Active Directory tenant. You can also specify the user's global location.
+
+- **On-premises:** Accounts synced from Windows Server Active Directory include other values not applicable to Azure AD accounts.
+
+ >[!Note]
+ >You must use Windows Server Active Directory to update the identity, contact info, or job info for users whose source of authority is Windows Server Active Directory. After you complete your update, you must wait for the next synchronization cycle to complete before you'll see the changes.
+
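Several of these categories map directly to Microsoft Graph user properties, so they can also be updated programmatically. A minimal sketch for the job-information fields; the token and user identifier are placeholders:

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
ACCESS_TOKEN = "<access-token>"             # placeholder
USER_ID = "<user-id-or-userPrincipalName>"  # placeholder

# Update a few job-information properties on the user's profile.
resp = requests.patch(
    f"{GRAPH}/users/{USER_ID}",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={"jobTitle": "Product Designer", "department": "Design"},
)
resp.raise_for_status()  # a successful PATCH returns 204 No Content
```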
+### Add or edit the profile picture
+On the user's overview page, select the camera icon in the lower-right corner of the user's thumbnail. If no image has been added, the user's initials appear here. This picture appears in Azure Active Directory and on the user's personal pages, such as the myapps.microsoft.com page.
+
+All your changes are saved for the user.
+
+>[!Note]
+> If you're having issues updating a user's profile picture, please ensure that your Office 365 Exchange Online Enterprise App is Enabled for users to sign in.
+
+## Manage settings for all users
+In the **User settings** area of Azure AD, you can adjust several settings that affect all users, such as restricting access to the Azure AD administration portal, how external collaboration is managed, and providing users the option to connect their LinkedIn account. Some settings are managed in a separate area of Azure AD and linked from this page.
+
+Go to **Azure AD** > **User settings**.
+
+### Learn about the 'Stay signed in?' prompt
+
+The **Stay signed in?** prompt appears after a user successfully signs in. This process is known as **Keep me signed in** (KMSI). If a user answers **Yes** to this prompt, a persistent authentication cookie is issued. The cookie must be stored in session for KMSI to work. KMSI won't work with locally stored cookies. If KMSI isn't enabled, a non-persistent cookie is issued and lasts for 24 hours or until the browser is closed.
+
+The following diagram shows the user sign-in flow for a managed tenant and federated tenant using the KMSI prompt. This flow contains smart logic so that the **Stay signed in?** option won't be displayed if the machine learning system detects a high-risk sign-in or a sign-in from a shared device. For federated tenants, the prompt shows after the user successfully authenticates with the federated identity service.
+
+The KMSI setting is available in **User settings**. Some features of SharePoint Online and Office 2010 depend on users being able to choose to remain signed in. If you uncheck the **Show option to remain signed in** option, your users may see other unexpected prompts during the sign-in process.
+
+![Diagram showing the user sign-in flow for a managed vs. federated tenant](media/customize-branding/kmsi-workflow.png)
+
+Configuring the 'keep me signed in' (KMSI) option requires one of the following licenses:
+
+- Azure AD Premium 1
+- Azure AD Premium 2
+- Office 365 (for Office apps)
+- Microsoft 365
+
+#### Troubleshoot 'Stay signed in?' issues
+
+If a user doesn't act on the **Stay signed in?** prompt but abandons the sign-in attempt, a sign-in log entry appears in the Azure AD **Sign-ins** page. The prompt the user sees is called an "interrupt."
+
+![Sample 'Stay signed in?' prompt](media/customize-branding/kmsi-stay-signed-in-prompt.png)
+
+Details about the sign-in error are found in the **Sign-in logs** in Azure AD. Select the impacted user from the list and locate the following error code details in the **Basic info** section.
+
+* **Sign in error code**: 50140
+* **Failure reason**: This error occurred due to "Keep me signed in" interrupt when the user was signing in.
+
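If you'd rather pull these interrupt entries programmatically than browse the portal, the sign-in logs are exposed through Microsoft Graph. A hedged sketch; the `status/errorCode` filter is an assumption to verify against the signIn resource's filterable properties, and the token is a placeholder:

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
ACCESS_TOKEN = "<access-token>"  # placeholder

# List recent sign-in events that failed with the KMSI interrupt code 50140.
resp = requests.get(
    f"{GRAPH}/auditLogs/signIns",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    params={"$filter": "status/errorCode eq 50140", "$top": "10"},
)
resp.raise_for_status()
for event in resp.json()["value"]:
    print(event["userPrincipalName"], event["createdDateTime"])
```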
+You can stop users from seeing the interrupt by setting the **Show option to remain signed in** setting to **No** in the user settings. This setting disables the KMSI prompt for all users in your Azure AD directory.
+
+You also can use the [persistent browser session controls in Conditional Access](../conditional-access/howto-conditional-access-session-lifetime.md) to prevent users from seeing the KMSI prompt. This option allows you to disable the KMSI prompt for a select group of users (such as the global administrators) without affecting sign-in behavior for everyone else in the directory.
+
+To ensure that the KMSI prompt is shown only when it can benefit the user, the KMSI prompt is intentionally not shown in the following scenarios:
+
+* User is signed in via seamless SSO and integrated Windows authentication (IWA)
+* User is signed in via Active Directory Federation Services and IWA
+* User is a guest in the tenant
+* User's risk score is high
+* Sign-in occurs during user or admin consent flow
+* Persistent browser session control is configured in a conditional access policy
+
+## Next steps
+- [Add or delete users](add-users-azure-active-directory.md)
+
+- [Assign roles to users](active-directory-users-assign-role-azure-portal.md)
+
+- [Create a basic group and add members](active-directory-groups-create-azure-portal.md)
+
+- [View Azure AD enterprise user management documentation](../enterprise-users/index.yml).
active-directory Reference Company Branding Css Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/reference-company-branding-css-template.md
+
+ Title: CSS reference guide for customizing company branding - Azure AD
+description: Learn about the CSS template selectors for customizing company branding.
++++++++ Last updated : 03/24/2023+++++
+# CSS template reference guide
+
+Configuring your company branding for the user sign-in process provides a seamless experience in your applications that use Azure Active Directory (Azure AD) as the identity and access management service. Use this CSS reference guide if you're using the [CSS template](https://download.microsoft.com/download/7/2/7/727f287a-125d-4368-a673-a785907ac5ab/custom-styles-template-013023.css) as part of the [customize company branding](how-to-customize-branding.md) process.
++
+## HTML selectors
+
+The following CSS styles become the default body and link styles for the whole page. Styles you apply to more specific links or text elements override these defaults.
+
+- `body` - Styles for the whole page
+- Styles for links:
+ - `a, a:link` - All links
+ - `a:hover` - When the mouse is over the link
+ - `a:focus` - When the link has focus
+ - `a:focus:hover` - When the link has focus *and* the mouse is over the link
+ - `a:active` - When the link is being clicked
+
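For illustration, a minimal fragment in the shape the template expects; the font and colors here are placeholder choices, not values from the template:

```css
/* Illustrative page-wide defaults; replace the values with your brand's. */
body {
    font-family: "Segoe UI", Arial, sans-serif;
    color: #1b1a19;
}

a, a:link {
    color: #0067b8;
    text-decoration: none;
}

a:hover,
a:focus {
    text-decoration: underline;
}
```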
+## Azure AD CSS selectors
+
+Use the following CSS selectors to configure the details of the sign-in experience.
+
+- `.ext-background-image` - Container that includes the background image in the default lightbox template
+- `.ext-header` - Header at the top of the container
+- `.ext-header-logo` - Header logo at the top of the container
+
+ ![Screenshot of the sign-in screen with the .ext-header and .ext-header-logo areas highlighted.](media/reference-company-branding-css-template/ext-header-and-logo.png)
+
+- `.ext-middle` - Style for the full-screen background that aligns the sign-in box vertically to the middle and horizontally to the center
+- `.ext-vertical-split-main-section` - Style for the container of the partial-screen background in the vertical split template that contains both a sign-in box and a background (This style is also known as the Active Directory Federation Services (ADFS) template.)
+- `.ext-vertical-split-background-image-container` - Sign-in box background in the vertical split/ADFS template
+- `.ext-sign-in-box` - Sign-in box container
+
+ ![Screenshot of the sign-in box, with the portion of the box that is styled with the .ext-sign-in-box selector.](media/reference-company-branding-css-template/ext-sign-in-box.png)
+
+- `.ext-title` - Title text
+
+ ![Screenshot of the sign-in box, with the "Sign in" text highlighted.](media/reference-company-branding-css-template/ext-sign-in-text.png)
+
+- `.ext-subtitle` - Subtitle text
+
+- Styles for primary buttons:
+ - `.ext-button.ext-primary` - Primary button default style
+ - `.ext-button.ext-primary:hover` - When the mouse is over the button
+ - `.ext-button.ext-primary:focus` - When the button has focus
+ - `.ext-button.ext-primary:focus:hover` - When the button has focus *and* the mouse is over the button
+ - `.ext-button.ext-primary:active` - When the button is being clicked
+
+ ![Screenshot of the sign-in box with the primary - Next - button highlighted.](media/reference-company-branding-css-template/ext-primary-button.png)
+
+- Styles for secondary buttons:
+ - `.ext-button.ext-secondary` - Secondary buttons
+ - `.ext-button.ext-secondary:hover` - When the mouse is over the button
+ - `.ext-button.ext-secondary:focus` - When the button has focus
+ - `.ext-button.ext-secondary:focus:hover` - When the button has focus *and* the mouse is over the button
+ - `.ext-button.ext-secondary:active` - When the button is being clicked
+
+ ![Screenshot of the sign-in box at the Sign-in options step, with the secondary - Back - button highlighted.](media/reference-company-branding-css-template/ext-secondary-button.png)
+
+- `.ext-error` - Error text
+
+ ![Screenshot of the sign-in box with error text highlighted.](media/reference-company-branding-css-template/ext-error-text.png)
+
+- Styles for text boxes:
+ - `.ext-input.ext-text-box` - Text boxes
+ - `.ext-input.ext-text-box.ext-has-error` - When there's a validation error associated with the text box
+ - `.ext-input.ext-text-box:hover` - When the mouse is over the text box
+ - `.ext-input.ext-text-box:focus` - When the text box has focus
+ - `.ext-input.ext-text-box:focus:hover` - When the text box has focus *and* the mouse is over the text box
+
+ ![Screenshot of the sign-in box with the text box with sample text highlighted.](media/reference-company-branding-css-template/ext-text-box.png)
+
+- `.ext-boilerplate-text` - Custom message text at the bottom of the sign-in box
+
+ ![Screenshot of the sign-in box with the optional boilerplate text area highlighted.](media/reference-company-branding-css-template/ext-boilerplate-text.png)
+
+- `.ext-promoted-fed-cred-box` - Sign-in options text box
+
+ ![Screenshot of the sign-in box with the federated sign-in option highlighted.](media/reference-company-branding-css-template/ext-promoted-federated-credential-box.png)
+
+- Styles for the footer:
+ - `.ext-footer` - Footer area at the bottom of the page
+ - `.ext-footer-links` - Links area in the footer at the bottom of the page
+ - `.ext-footer-item` - Link items (such as "Terms of use" or "Privacy & cookies") in the footer at the bottom of the page
+ - `.ext-debug-item` - Debug details ellipsis in the footer at the bottom of the page
+
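Putting a few of these selectors together, a hedged example of the kind of overrides the template supports; the values are illustrative only:

```css
/* Illustrative overrides for the sign-in box, primary button, and error text. */
.ext-sign-in-box {
    border-radius: 8px;
    box-shadow: 0 2px 6px rgba(0, 0, 0, 0.2);
}

.ext-button.ext-primary {
    background-color: #0067b8;
    border-color: #0067b8;
}

.ext-button.ext-primary:hover {
    background-color: #005494;
}

.ext-error {
    color: #a4262c;
}
```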
active-directory Users Default Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/users-default-permissions.md
The set of default permissions depends on whether the user is a native member of
| | - | -
Users and contacts | <ul><li>Enumerate the list of all users and contacts<li>Read all public properties of users and contacts</li><li>Invite guests<li>Change their own password<li>Manage their own mobile phone number<li>Manage their own photo<li>Invalidate their own refresh tokens</li></ul> | <ul><li>Read their own properties<li>Read display name, email, sign-in name, photo, user principal name, and user type properties of other users and contacts<li>Change their own password<li>Search for another user by object ID (if allowed)<li>Read manager and direct report information of other users</li></ul> | <ul><li>Read their own properties<li>Change their own password</li><li>Manage their own mobile phone number</li></ul>
Groups | <ul><li>Create security groups<li>Create Microsoft 365 groups<li>Enumerate the list of all groups<li>Read all properties of groups<li>Read non-hidden group memberships<li>Read hidden Microsoft 365 group memberships for joined groups<li>Manage properties, ownership, and membership of groups that the user owns<li>Add guests to owned groups<li>Manage dynamic membership settings<li>Delete owned groups<li>Restore owned Microsoft 365 groups</li></ul> | <ul><li>Read properties of non-hidden groups, including membership and ownership (even non-joined groups)<li>Read hidden Microsoft 365 group memberships for joined groups<li>Search for groups by display name or object ID (if allowed)</li></ul> | <ul><li>Read object ID for joined groups<li>Read membership and ownership of joined groups in some Microsoft 365 apps (if allowed)</li></ul>
-Applications | <ul><li>Register (create) new applications<li>Enumerate the list of all applications<li>Read properties of registered and enterprise applications<li>List permissions granted to applications<li>Manage application properties, assignments, and credentials for owned applications<li>Create or delete application passwords for users<li>Delete owned applications<li>Restore owned applications<li>List permissions granted to applications</ul> | <ul><li>Read properties of registered and enterprise applications<li>List permissions granted to applications</ul> | <ul><li>Read properties of registered and enterprise applications</li><li>List permissions granted to applications</li></ul>
+Applications | <ul><li>Register (create) new applications<li>Enumerate the list of all applications<li>Read properties of registered and enterprise applications<li>Manage application properties, assignments, and credentials for owned applications<li>Create or delete application passwords for users<li>Delete owned applications<li>Restore owned applications<li>List permissions granted to applications</ul> | <ul><li>Read properties of registered and enterprise applications<li>List permissions granted to applications</ul> | <ul><li>Read properties of registered and enterprise applications</li><li>List permissions granted to applications</li></ul>
Devices | <ul><li>Enumerate the list of all devices<li>Read all properties of devices<li>Manage all properties of owned devices</li></ul> | No permissions | No permissions
Organization | <ul><li>Read all company information<li>Read all domains<li>Read configuration of certificate-based authentication<li>Read all partner contracts</li></ul> | <ul><li>Read company display name<li>Read all domains<li>Read configuration of certificate-based authentication</li></ul> | <ul><li>Read company display name<li>Read all domains</li></ul>
Roles and scopes | <ul><li>Read all administrative roles and memberships<li>Read all properties and membership of administrative units</li></ul> | No permissions | No permissions
active-directory Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new-archive.md
The What's new in Azure Active Directory? release notes provide information about:
- Deprecated functionality
- Plans for changes
++
+## September 2022
+
+### General Availability - SSPR writeback is now available for disconnected forests using Azure AD Connect cloud sync
+++
+**Type:** New feature
+**Service category:** Azure AD Connect Cloud Sync
+**Product capability:** Identity Lifecycle Management
+
+Azure AD Connect Cloud Sync Password writeback now provides customers the ability to synchronize Azure AD password changes made in the cloud to an on-premises directory in real time. This can be accomplished using the lightweight Azure AD cloud provisioning agent. For more information, see: [Tutorial: Enable cloud sync self-service password reset writeback to an on-premises environment](../authentication/tutorial-enable-cloud-sync-sspr-writeback.md).
+++
+### General Availability - Device-based conditional access on Linux Desktops
+++
+**Type:** New feature
+**Service category:** Conditional Access
+**Product capability:** SSO
+++
+This feature empowers users on Linux clients to register their devices with Azure AD, enroll into Intune management, and satisfy device-based Conditional Access policies when accessing their corporate resources.
+
+- Users can register their Linux devices with Azure AD.
+- Users can enroll in Mobile Device Management (Intune), which can be used to provide compliance decisions based upon policy definitions to allow device-based Conditional Access on Linux desktops.
+- If compliant, users can use the Microsoft Edge browser to enable single sign-on to M365/Azure resources and satisfy device-based Conditional Access policies.
+
+For more information, see:
+
+- [Azure AD registered devices](../devices/concept-azure-ad-register.md)
+- [Plan your Azure Active Directory device deployment](../devices/plan-device-deployment.md)
+++
+### General Availability - Azure AD SCIM Validator
+++
+**Type:** New feature
+**Service category:** Provisioning
+**Product capability:** Outbound to SaaS Applications
+++
+Independent Software Vendors (ISVs) and developers can self-test their SCIM endpoints for compatibility: we've made it easier for ISVs to validate that their endpoints are compatible with the SCIM-based Azure AD provisioning services. This capability is now generally available (GA).
+
+For more information, see: [Tutorial: Validate a SCIM endpoint](../app-provisioning/scim-validator-tutorial.md)
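Before running the full validator, it can help to smoke-test the endpoint yourself. The following PowerShell sketch is illustrative only; the base URL and token are hypothetical placeholders, and it simply checks that a `GET /Users` call answers with the SCIM 2.0 `ListResponse` schema:

```powershell
# Minimal SCIM endpoint smoke test; $baseUrl and $token are placeholders
$baseUrl = "https://scim.contoso.com/scim"   # hypothetical endpoint base URL
$token   = "<bearer-token>"                  # supply a real bearer token

$response = Invoke-RestMethod -Method Get -Uri "$baseUrl/Users?count=1" `
    -Headers @{ Authorization = "Bearer $token" }

# A compliant endpoint returns the SCIM 2.0 ListResponse schema
$response.schemas -contains "urn:ietf:params:scim:api:messages:2.0:ListResponse"
```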
+++
+### General Availability - prevent accidental deletions
+++
+**Type:** New feature
+**Service category:** Provisioning
+**Product capability:** Outbound to SaaS Applications
+++
+Accidental deletion of users in any system could be disastrous. We're excited to announce the general availability of the accidental deletions prevention capability as part of the Azure AD provisioning service. When the number of deletions to be processed in a single provisioning cycle spikes above a customer-defined threshold, the Azure AD provisioning service pauses, provides you with visibility into the potential deletions, and allows you to accept or reject the deletions. This functionality has historically been available for Azure AD Connect and Azure AD Connect Cloud Sync. It's now available across the various provisioning flows, including both HR-driven provisioning and application provisioning.
+
+For more information, see: [Enable accidental deletions prevention in the Azure AD provisioning service](../app-provisioning/accidental-deletions.md)
+++
+### General Availability - Identity Protection Anonymous and Malicious IP for ADFS on-premises logins
+++
+**Type:** New feature
+**Service category:** Identity Protection
+**Product capability:** Identity Security & Protection
+++
+Identity Protection expands its Anonymous and Malicious IP detections to protect AD FS sign-ins. This expansion automatically applies to all customers who have AD Connect Health deployed and enabled, and the detections show up as the existing "Anonymous IP" or "Malicious IP" detections with a token issuer type of "AD Federation Services".
+
+For more information, see: [What is risk?](../identity-protection/concept-identity-protection-risks.md)
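To confirm these detections are surfacing for your federated sign-ins, one option is to query Microsoft Graph. This is a hedged sketch using the Microsoft Graph PowerShell SDK (module and permission names as of this writing; verify against your tenant):

```powershell
# Requires the Microsoft Graph PowerShell SDK (Microsoft.Graph.Identity.SignIns)
Connect-MgGraph -Scopes "IdentityRiskEvent.Read.All"

# Anonymous IP detections; AD FS sign-ins carry the
# "AD Federation Services" token issuer type
Get-MgRiskDetection -Filter "riskEventType eq 'anonymizedIPAddress'" |
    Select-Object DetectedDateTime, UserPrincipalName, TokenIssuerType
```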
++++
+### New Federated Apps available in Azure AD Application gallery - September 2022
+++
+**Type:** New feature
+**Service category:** Enterprise Apps
+**Product capability:** 3rd Party Integration
+++
+In September 2022, we've added the following 15 new applications in our App gallery with Federation support:
+
+[RocketReach SSO](../saas-apps/rocketreach-sso-tutorial.md), [Arena EU](../saas-apps/arena-eu-tutorial.md), [Zola](../saas-apps/zola-tutorial.md), [FourKites SAML2.0 SSO for Tracking](../saas-apps/fourkites-tutorial.md), [Syniverse Customer Portal](../saas-apps/syniverse-customer-portal-tutorial.md), [Rimo](https://rimo.app/), [Q Ware CMMS](https://qware.app/), [Mapiq (OIDC)](https://app.mapiq.com/), [NICE Cxone](../saas-apps/nice-cxone-tutorial.md), [dominKnow|ONE](../saas-apps/dominknowone-tutorial.md), [Waynbo for Azure AD](https://webportal-eu.waynbo.com/Login), [innDex](https://web.inndex.co.uk/azure/authorize), [Profiler Software](https://www.profiler.net.au/), [Trotto go links](https://trot.to/_/auth/login), [AsignetSSOIntegration](../saas-apps/asignet-sso-tutorial.md).
+
+You can also find the documentation for all the applications here: https://aka.ms/AppsTutorial.
+
+To list your application in the Azure AD app gallery, read the details here: https://aka.ms/AzureADAppRequest
+++

## August 2022
IT Admins can start using the new "Hybrid Admin" role as the least privileged ro
In May 2020, we've added the following 36 new applications in our App gallery with Federation support:
-[Moula](https://moula.com.au/pay/merchants), [Surveypal](https://www.surveypal.com/app), [Kbot365](https://www.konverso.ai/), [TackleBox](https://tacklebox.in/), [Powell Teams](https://powell-software.com/en/powell-teams-en/), [Talentsoft Assistant](https://msteams.talent-soft.com/), [ASC Recording Insights](https://teams.asc-recording.app/product), [GO1](https://www.go1.com/), [B-Engaged](https://b-engaged.se/), [Competella Contact Center Workgroup](http://www.competella.com/), [Asite](http://www.asite.com/), [ImageSoft Identity](https://identity.imagesoftinc.com/), [My IBISWorld](https://identity.imagesoftinc.com/), [insuite](../saas-apps/insuite-tutorial.md), [Change Process Management](../saas-apps/change-process-management-tutorial.md), [Cyara CX Assurance Platform](../saas-apps/cyara-cx-assurance-platform-tutorial.md), [Smart Global Governance](../saas-apps/smart-global-governance-tutorial.md), [Prezi](../saas-apps/prezi-tutorial.md), [Mapbox](../saas-apps/mapbox-tutorial.md), [Datava Enterprise Service Platform](../saas-apps/datava-enterprise-service-platform-tutorial.md), [Whimsical](../saas-apps/whimsical-tutorial.md), [Trelica](../saas-apps/trelica-tutorial.md), [EasySSO for Confluence](../saas-apps/easysso-for-confluence-tutorial.md), [EasySSO for BitBucket](../saas-apps/easysso-for-bitbucket-tutorial.md), [EasySSO for Bamboo](../saas-apps/easysso-for-bamboo-tutorial.md), [Torii](../saas-apps/torii-tutorial.md), [Axiad Cloud](../saas-apps/axiad-cloud-tutorial.md), [Humanage](../saas-apps/humanage-tutorial.md), [ColorTokens ZTNA](../saas-apps/colortokens-ztna-tutorial.md), [CCH Tagetik](../saas-apps/cch-tagetik-tutorial.md), [ShareVault](../saas-apps/sharevault-tutorial.md), [Vyond](../saas-apps/vyond-tutorial.md), [TextExpander](../saas-apps/textexpander-tutorial.md), [Anyone Home CRM](../saas-apps/anyone-home-crm-tutorial.md), [askSpoke](../saas-apps/askspoke-tutorial.md), [ice Contact Center](../saas-apps/ice-contact-center-tutorial.md)
+[Moula](https://moula.com.au/pay/merchants), [Surveypal](https://www.surveypal.com/app), [Kbot365](https://www.konverso.ai/), [Powell Teams](https://powell-software.com/en/powell-teams-en/), [Talentsoft Assistant](https://msteams.talent-soft.com/), [ASC Recording Insights](https://teams.asc-recording.app/product), [GO1](https://www.go1.com/), [B-Engaged](https://b-engaged.se/), [Competella Contact Center Workgroup](http://www.competella.com/), [Asite](http://www.asite.com/), [ImageSoft Identity](https://identity.imagesoftinc.com/), [My IBISWorld](https://identity.imagesoftinc.com/), [insuite](../saas-apps/insuite-tutorial.md), [Change Process Management](../saas-apps/change-process-management-tutorial.md), [Cyara CX Assurance Platform](../saas-apps/cyara-cx-assurance-platform-tutorial.md), [Smart Global Governance](../saas-apps/smart-global-governance-tutorial.md), [Prezi](../saas-apps/prezi-tutorial.md), [Mapbox](../saas-apps/mapbox-tutorial.md), [Datava Enterprise Service Platform](../saas-apps/datava-enterprise-service-platform-tutorial.md), [Whimsical](../saas-apps/whimsical-tutorial.md), [Trelica](../saas-apps/trelica-tutorial.md), [EasySSO for Confluence](../saas-apps/easysso-for-confluence-tutorial.md), [EasySSO for BitBucket](../saas-apps/easysso-for-bitbucket-tutorial.md), [EasySSO for Bamboo](../saas-apps/easysso-for-bamboo-tutorial.md), [Torii](../saas-apps/torii-tutorial.md), [Axiad Cloud](../saas-apps/axiad-cloud-tutorial.md), [Humanage](../saas-apps/humanage-tutorial.md), [ColorTokens ZTNA](../saas-apps/colortokens-ztna-tutorial.md), [CCH Tagetik](../saas-apps/cch-tagetik-tutorial.md), [ShareVault](../saas-apps/sharevault-tutorial.md), [Vyond](../saas-apps/vyond-tutorial.md), [TextExpander](../saas-apps/textexpander-tutorial.md), [Anyone Home CRM](../saas-apps/anyone-home-crm-tutorial.md), [askSpoke](../saas-apps/askspoke-tutorial.md), [ice Contact Center](../saas-apps/ice-contact-center-tutorial.md)
You can also find the documentation for all the applications here: https://aka.ms/AppsTutorial.
For more information about group-based licensing, see [What is group-based licen
In November 2018, we've added these 26 new apps with Federation support to the app gallery:
-[CoreStack](https://cloud.corestack.io/site/login), [HubSpot](../saas-apps/hubspot-tutorial.md), [GetThere](../saas-apps/getthere-tutorial.md), [Gra-Pe](../saas-apps/grape-tutorial.md), [eHour](https://getehour.com/try-now), [Consent2Go](../saas-apps/consent2go-tutorial.md), [Appinux](../saas-apps/appinux-tutorial.md), [DriveDollar](https://azuremarketplace.microsoft.com/marketplace/apps/savitas.drivedollar-azuread?tab=Overview), [Useall](../saas-apps/useall-tutorial.md), [Infinite Campus](../saas-apps/infinitecampus-tutorial.md), [Alaya](https://alayagood.com), [HeyBuddy](../saas-apps/heybuddy-tutorial.md), [Wrike SAML](../saas-apps/wrike-tutorial.md), [Drift](../saas-apps/drift-tutorial.md), [Zenegy for Business Central 365](https://accounting.zenegy.com/), [Everbridge Member Portal](../saas-apps/everbridge-tutorial.md), [Ivanti Service Manager (ISM)](../saas-apps/ivanti-service-manager-tutorial.md), [Peakon](../saas-apps/peakon-tutorial.md), [Allbound SSO](../saas-apps/allbound-sso-tutorial.md), [Plex Apps - Classic Test](https://test.plexonline.com/signon), [Plex Apps – Classic](https://www.plexonline.com/signon), [Plex Apps - UX Test](https://test.cloud.plex.com/sso), [Plex Apps – UX](https://cloud.plex.com/sso), [Plex Apps – IAM](https://accounts.plex.com/), [CRAFTS - Childcare Records, Attendance, & Financial Tracking System](https://getcrafts.ca/craftsregistration)
+[CoreStack](https://cloud.corestack.io/site/login), [HubSpot](../saas-apps/hubspot-tutorial.md), [GetThere](../saas-apps/getthere-tutorial.md), [Gra-Pe](../saas-apps/grape-tutorial.md), [eHour](https://getehour.com/try-now), [Consent2Go](../saas-apps/consent2go-tutorial.md), [Appinux](../saas-apps/appinux-tutorial.md), [DriveDollar](https://azuremarketplace.microsoft.com/marketplace/apps/savitas.drivedollar-azuread?tab=Overview), [Useall](../saas-apps/useall-tutorial.md), [Infinite Campus](../saas-apps/infinitecampus-tutorial.md), [Alaya](https://alayagood.com), [HeyBuddy](../saas-apps/heybuddy-tutorial.md), [Wrike SAML](../saas-apps/wrike-tutorial.md), [Drift](../saas-apps/drift-tutorial.md), [Zenegy for Business Central 365](https://accounting.zenegy.com/), [Everbridge Member Portal](../saas-apps/everbridge-tutorial.md), [Ivanti Service Manager (ISM)](../saas-apps/ivanti-service-manager-tutorial.md), [Peakon](../saas-apps/peakon-tutorial.md), [Allbound SSO](../saas-apps/allbound-sso-tutorial.md), [Plex Apps - Classic Test](https://test.plexonline.com/signon), [Plex Apps – Classic](https://www.plexonline.com/signon), [Plex Apps - UX Test](https://test.cloud.plex.com/sso), [Plex Apps – UX](https://cloud.plex.com/sso), [Plex Apps – IAM](https://accounts.plex.com/)
For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../manage-apps/v2-howto-app-gallery-listing.md).
This connector version is gradually being rolled out through November. This new
For more information, see [Understand Azure AD Application Proxy connectors](../app-proxy/application-proxy-connectors.md). -
-## February 2018
-
-### Improved navigation for managing users and groups
-
-**Type:** Plan for change
-**Service category:** Directory Management
-**Product capability:** Directory
-
-The navigation experience for managing users and groups has been streamlined. You can now navigate from the directory overview directly to the list of all users, with easier access to the list of deleted users. You can also navigate from the directory overview directly to the list of all groups, with easier access to group management settings. And also from the directory overview page, you can search for a user, group, enterprise application, or app registration.
---
-### Availability of sign-ins and audit reports in Microsoft Azure operated by 21Vianet (Azure China 21Vianet)
-
-**Type:** New feature
-**Service category:** Azure Stack
-**Product capability:** Monitoring & Reporting
-
-Azure AD Activity log reports are now available in Microsoft Azure operated by 21Vianet (Azure China 21Vianet) instances. The following logs are included:
-- **Sign-ins activity logs** - Includes all the sign-ins logs associated with your tenant.
-- **Self service Password Audit Logs** - Includes all the SSPR audit logs.
-- **Directory Management Audit logs** - Includes all the directory management-related audit logs like User management, App Management, and others.
-
-With these logs, you can gain insights into how your environment is doing. The provided data enables you to:
-- Determine how your apps and services are utilized by your users.
-- Troubleshoot issues preventing your users from getting their work done.
-
-For more information about how to use these reports, see [Azure Active Directory reporting](../reports-monitoring/overview-reports.md).
---
-### Use "Reports Reader" role (non-admin role) to view Azure AD Activity Reports
-
-**Type:** New feature
-**Service category:** Reporting
-**Product capability:** Monitoring & Reporting
-
-As part of customers feedback to enable non-admin roles to have access to Azure AD activity logs, we've enabled the ability for users who are in the "Reports Reader" role to access Sign-ins and Audit activity within the Azure portal as well as using the Microsoft Graph API.
-
-For more information, how to use these reports, see [Azure Active Directory reporting](../reports-monitoring/overview-reports.md).
---
-### EmployeeID claim available as user attribute and user identifier
-
-**Type:** New feature
-**Service category:** Enterprise Apps
-**Product capability:** SSO
-
-You can configure **EmployeeID** as the User identifier and User attribute for member users and B2B guests in SAML-based sign-on applications from the Enterprise application UI.
-
-For more information, see [Customizing claims issued in the SAML token for enterprise applications in Azure Active Directory](../develop/active-directory-saml-claims-customization.md).
---
-### Simplified Application Management using Wildcards in Azure AD Application Proxy
-
-**Type:** New feature
-**Service category:** App Proxy
-**Product capability:** User Authentication
-
-To make application deployment easier and reduce your administrative overhead, we now support the ability to publish applications using wildcards. To publish a wildcard application, you can follow the standard application publishing flow, but use a wildcard in the internal and external URLs.
-
-For more information, see [Wildcard applications in the Azure Active Directory application proxy](../app-proxy/application-proxy-wildcard.md)
---
-### New cmdlets to support configuration of Application Proxy
-
-**Type:** New feature
-**Service category:** App Proxy
-**Product capability:** Platform
-
-The latest release of the AzureAD PowerShell Preview module contains new cmdlets that allow customers to configure Application Proxy Applications using PowerShell.
-
-The new cmdlets are:
-- Get-AzureADApplicationProxyApplication
-- Get-AzureADApplicationProxyApplicationConnectorGroup
-- Get-AzureADApplicationProxyConnector
-- Get-AzureADApplicationProxyConnectorGroup
-- Get-AzureADApplicationProxyConnectorGroupMembers
-- Get-AzureADApplicationProxyConnectorMemberOf
-- New-AzureADApplicationProxyApplication
-- New-AzureADApplicationProxyConnectorGroup
-- Remove-AzureADApplicationProxyApplication
-- Remove-AzureADApplicationProxyApplicationConnectorGroup
-- Remove-AzureADApplicationProxyConnectorGroup
-- Set-AzureADApplicationProxyApplication
-- Set-AzureADApplicationProxyApplicationConnectorGroup
-- Set-AzureADApplicationProxyApplicationCustomDomainCertificate
-- Set-AzureADApplicationProxyApplicationSingleSignOn
-- Set-AzureADApplicationProxyConnector
-- Set-AzureADApplicationProxyConnectorGroup
---
-### New cmdlets to support configuration of groups
-
-**Type:** New feature
-**Service category:** App Proxy
-**Product capability:** Platform
-
-The latest release of the AzureAD PowerShell module contains cmdlets to manage groups in Azure AD. These cmdlets were previously available in the AzureADPreview module and are now added to the AzureAD module
-
-The Group cmdlets that are now release for General Availability are:
-- Get-AzureADMSGroup
-- New-AzureADMSGroup
-- Remove-AzureADMSGroup
-- Set-AzureADMSGroup
-- Get-AzureADMSGroupLifecyclePolicy
-- New-AzureADMSGroupLifecyclePolicy
-- Remove-AzureADMSGroupLifecyclePolicy
-- Add-AzureADMSLifecyclePolicyGroup
-- Remove-AzureADMSLifecyclePolicyGroup
-- Reset-AzureADMSLifeCycleGroup
-- Get-AzureADMSLifecyclePolicyGroup
---
-### A new release of Azure AD Connect is available
-
-**Type:** New feature
-**Service category:** AD Sync
-**Product capability:** Platform
-
-Azure AD Connect is the preferred tool to synchronize data between Azure AD and on premises data sources, including Windows Server Active Directory and LDAP.
-
->[!Important]
->This build introduces schema and sync rule changes. The Azure AD Connect Synchronization Service triggers a Full Import and Full Synchronization steps after an upgrade. For information on how to change this behavior, see [How to defer full synchronization after upgrade](../hybrid/how-to-upgrade-previous-version.md#how-to-defer-full-synchronization-after-upgrade).
-
-This release has the following updates and changes:
-
-**Fixed issues**
-- Fix timing window on background tasks for Partition Filtering page when switching to next page.
-- Fixed a bug that caused Access violation during the ConfigDB custom action.
-- Fixed a bug to recover from sql connection timeout.
-- Fixed a bug where certificates with SAN wildcards fail pre-req check.
-- Fixed a bug that causes miiserver.exe crash during Azure AD connector export.
-- Fixed a bug where a bad password attempt logged on DC when running caused the Azure AD connect wizard to change configuration
-
-**New features and improvements**
-- Application telemetry - Administrators can switch this class of data on/off.
-- Azure AD Health data - Administrators must visit the health portal to control their health settings. Once the service policy has been changed, the agents will read and enforce it.
-- Added device writeback configuration actions and a progress bar for page initialization.
-- Improved general diagnostics with HTML report and full data collection in a ZIP-Text / HTML Report.
-- Improved reliability of auto upgrade and added additional telemetry to ensure the health of the server can be determined.
-- Restrict permissions available to privileged accounts on AD Connector account. For new installations, the wizard restricts the permissions that privileged accounts have on the MSOL account after creating the MSOL account. The changes affect express installations and custom installations with Auto-Create account.
-- Changed the installer to not require SA privilege on clean install of AADConnect.
-- New utility to troubleshoot synchronization issues for a specific object. Currently, the utility checks for the following things:
-
- - UserPrincipalName mismatch between synchronized user object and the user account in Azure AD Tenant.
-
- - If the object is filtered from synchronization due to domain filtering
-
- - If the object is filtered from synchronization due to organizational unit (OU) filtering
-- New utility to synchronize the current password hash stored in the on-premises Active Directory for a specific user account. The utility does not require a password change.
---
-### Applications supporting Intune App Protection policies added for use with Azure AD application-based Conditional Access
-
-**Type:** Changed feature
-**Service category:** Conditional Access
-**Product capability:** Identity Security & Protection
-
-We have added more applications that support application-based Conditional Access. Now, you can get access to Office 365 and other Azure AD-connected cloud apps using these approved client apps.
-
-The following applications will be added by the end of February:
-- Microsoft Power BI
-- Microsoft Launcher
-- Microsoft Invoicing
-
-For more information, see:
-- [Approved client app requirement](../conditional-access/concept-conditional-access-conditions.md#client-apps)
-- [Azure AD app-based Conditional Access](../conditional-access/app-based-conditional-access.md)
---
-### Terms of use update to mobile experience
-
-**Type:** Changed feature
-**Service category:** Terms of use
-**Product capability:** Compliance
-
-When the terms of use are displayed, you can now select **Having trouble viewing? Click here**. Clicking this link opens the terms of use natively on your device. Regardless of the font size in the document or the screen size of device, you can zoom and read the document as needed.
---
-## January 2018
-
-### New Federated Apps available in Azure AD app gallery
-
-**Type:** New feature
-**Service category:** Enterprise Apps
-**Product capability:** 3rd Party Integration
-
-In January 2018, the following new apps with federation support were added in the app gallery:
-
-[IBM OpenPages](../saas-apps/ibmopenpages-tutorial.md), [OneTrust Privacy Management Software](../saas-apps/onetrust-tutorial.md), [Dealpath](../saas-apps/dealpath-tutorial.md), [IriusRisk Federated Directory, and [Fidelity NetBenefits](../saas-apps/fidelitynetbenefits-tutorial.md).
-
-For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md).
-
-For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../manage-apps/v2-howto-app-gallery-listing.md).
---
-### Sign in with additional risk detected
-
-**Type:** New feature
-**Service category:** Identity Protection
-**Product capability:** Identity Security & Protection
-
-The insight you get for a detected risk detection is tied to your Azure AD subscription. With the Azure AD Premium P2 edition, you get the most detailed information about all underlying detections.
-
-With the Azure AD Premium P1 edition, detections that aren't covered by your license appear as the risk detection Sign-in with additional risk detected.
-
-For more information, see [Azure Active Directory risk detections](../identity-protection/overview-identity-protection.md).
---
-### Hide Office 365 applications from end user's access panels
-
-**Type:** New feature
-**Service category:** My Apps
-**Product capability:** SSO
-
-You can now better manage how Office 365 applications show up on your user's access panels through a new user setting. This option is helpful for reducing the number of apps in a user's access panels if you prefer to only show Office apps in the Office portal. The setting is located in the **User Settings** and is labeled, **Users can only see Office 365 apps in the Office 365 portal**.
-
-For more information, see [Hide an application from user's experience in Azure Active Directory](../manage-apps/hide-application-from-user-portal.md).
---
-### Seamless sign into apps enabled for Password SSO directly from app's URL
-
-**Type:** New feature
-**Service category:** My Apps
-**Product capability:** SSO
-
-The My Apps browser extension is now available via a convenient tool that gives you the My Apps single-sign on capability as a shortcut in your browser. After installing, user's will see a waffle icon in their browser that provides them quick access to apps. Users can now take advantage of:
-- The ability to directly sign in to password-SSO based apps from the app's sign-in page
-- Launch any app using the quick search feature
-- Shortcuts to recently used apps from the extension
-- The extension is available for Microsoft Edge, Chrome, and Firefox.
-
-For more information, see [My Apps Secure Sign-in Extension](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510#download-and-install-the-my-apps-secure-sign-in-extension).
---
-### Azure AD administration experience in Azure Classic Portal has been retired
-
-**Type:** Deprecated
-**Service category:** Azure AD
-**Product capability:** Directory
-
-As of January 8, 2018, the Azure AD administration experience in the Azure classic portal has been retired. This took place in conjunction with the retirement of the Azure classic portal itself. In the future, you should use the [Azure portal](https://portal.azure.com) for all your portal-based administration of Azure AD.
---
-### The PhoneFactor web portal has been retired
-
-**Type:** Deprecated
-**Service category:** Azure AD
-**Product capability:** Directory
-
-As of January 8, 2018, the PhoneFactor web portal has been retired. This portal was used for the administration of multi-factor authentication (MFA) server, but those functions have been moved into the Azure portal at portal.azure.com.
-
-The multifactor authentication (MFA) configuration is located at: **Azure Active Directory \> multi-factor authentication (MFA) Server**
---
-### Deprecate Azure AD reports
-
-**Type:** Deprecated
-**Service category:** Reporting
-**Product capability:** Identity Lifecycle Management
--
-With the general availability of the new Azure Active Directory Administration console and new APIs now available for both activity and security reports, the report APIs under "/reports" endpoint have been retired as of end of December 31, 2017.
-
-**What's available?**
-
-As part of the transition to the new admin console, we have made 2 new APIs available for retrieving Azure AD Activity Logs. The new set of APIs provides richer filtering and sorting functionality in addition to providing richer audit and sign-in activities. The data previously available through the security reports can now be accessed through the Identity Protection risk detections API in Microsoft Graph.
-
-For more information, see:
-- [Get started with the Azure Active Directory reporting API](../reports-monitoring/concept-reporting-api.md)
-- [Get started with Azure Active Directory Identity Protection and Microsoft Graph](../identity-protection/howto-identity-protection-graph-api.md)
--
active-directory How To Connect Group Writeback V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-group-writeback-v2.md
To see the default behavior in your environment for newly created groups, use th
You can also use the PowerShell cmdlet [AzureADDirectorySetting](../enterprise-users/groups-settings-cmdlets.md).
-> Example: `(Get-AzureADDirectorySetting | ? { $_.DisplayName -eq "Group.Unified"} | FL *).values`
+> Example: `Get-AzureADDirectorySetting | ? { $_.DisplayName -eq "Group.Unified" } | Select-Object -ExpandProperty Values`
> If nothing is returned, you're using the default directory settings. Newly created Microsoft 365 groups *will automatically* be written back.
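As a fuller sketch (assuming the AzureAD PowerShell module is installed and you've already run `Connect-AzureAD`), the following distinguishes the default-settings case from a tenant with an explicit `Group.Unified` setting object:

```powershell
# Look for an explicit Group.Unified directory setting object
$setting = Get-AzureADDirectorySetting |
    Where-Object { $_.DisplayName -eq "Group.Unified" }

if ($null -eq $setting) {
    # No setting object: default behavior applies, so newly created
    # Microsoft 365 groups are written back automatically
    Write-Output "Default directory settings in effect."
}
else {
    # Inspect the individual name/value pairs
    $setting.Values | Format-Table Name, Value
}
```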
active-directory How To Connect Install Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-install-prerequisites.md
To read more about securing your Active Directory environment, see [Best practic
#### Installation prerequisites -- Azure AD Connect must be installed on a domain-joined Windows Server 2016 or later - **note that Windows Server 2022 is not yet supported**. You can deploy Azure AD Connect on Windows Server 2016 but since Windows Server 2016 is in extended support, you may require [a paid support program](/lifecycle/policies/fixed#extended-support) if you require support for this configuration. We recommend the usage of domain joined Windows Server 2019.
+- Azure AD Connect must be installed on a domain-joined server running Windows Server 2016 or later. You can deploy Azure AD Connect on Windows Server 2016, but because Windows Server 2016 is in extended support, you may need [a paid support program](/lifecycle/policies/fixed#extended-support) for this configuration. We recommend using a domain-joined Windows Server 2022.
- The minimum .NET Framework version required is 4.6.2, and newer versions of .NET are also supported.
- Azure AD Connect can't be installed on Small Business Server or Windows Server Essentials before 2019 (Windows Server Essentials 2019 is supported). The server must be using Windows Server standard or better.
- The Azure AD Connect server must have a full GUI installed. Installing Azure AD Connect on Windows Server Core isn't supported.
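To verify the .NET Framework prerequisite before installing, one option is to read the registry `Release` value. This sketch assumes the documented mapping in which a value of 394802 or higher indicates .NET Framework 4.6.2 or later:

```powershell
# .NET Framework 4.x records its version as a Release DWORD in the registry
$release = (Get-ItemProperty `
    'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full').Release

# 394802 is the minimum Release value that maps to .NET Framework 4.6.2
if ($release -ge 394802) {
    Write-Output ".NET Framework 4.6.2 or later is installed (Release $release)."
}
else {
    Write-Output "Upgrade the .NET Framework before installing Azure AD Connect."
}
```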
active-directory Reference Connect Health Version History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-health-version-history.md
The Azure Active Directory team regularly updates Azure AD Connect Health with n
Azure AD Connect Health for Sync is integrated with Azure AD Connect installation. Read more about [Azure AD Connect release history](./reference-connect-version-history.md) For feature feedback, vote at [Connect Health User Voice channel](https://feedback.azure.com/d365community/forum/22920db1-ad25-ec11-b6e6-000d3a4f0789)
+## 27 March 2023
+**Agent Update**
+
+Azure AD Connect Health ADDS and ADFS Health Agents (version 3.2.2256.26)
+
+- We created a fix so that the agents are FIPS compliant.
+  - The change sets `CloudStorageAccount.UseV1MD5 = false` so that the agent uses only FIPS-compliant cryptography; otherwise, the Azure blob client causes FIPS exceptions to be thrown.
+- Update of Newtonsoft.json library from 12.0.1 to 13.0.1 to resolve a component governance alert.
+- In the ADFS health agent, the TestADFSDuplicateSPN test was disabled because it was unreliable; it generated misleading alerts when the server experienced transient connectivity issues.
## 19 January 2023
**Agent Update**

- Azure AD Connect Health agent for Azure AD Connect (version 3.2.2188.23)
active-directory Reference Connect Sync Attributes Synchronized https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-sync-attributes-synchronized.md
In this case, start with the list of attributes in this topic and identify those
| targetAddress |X |X | | |
| telephoneAssistant |X |X | | |
| telephoneNumber |X |X | | |
-| thumbnailphoto |X |X | |synced only once from Azure AD to Exchange Online after which Exchange Online becomes source of authority for this attribute and any later changes can't be synced from on-premises. See ([KB](https://support.microsoft.com/help/3062745/user-photos-aren-t-synced-from-the-on-premises-environment-to-exchange)) for more.|
+| thumbnailphoto |X |X | |Synced to the Microsoft 365 profile photo periodically. Admins can set the frequency of the sync by changing the Azure AD Connect sync interval. If a user changes their photo both on-premises and in the cloud within a window shorter than that interval, the latest photo isn't guaranteed to be served.|
| title |X |X | | |
| unauthOrig |X |X |X | |
| usageLocation |X | | |mechanical property. The user's country/region. Used for license assignment. |
In this case, start with the list of attributes in this topic and identify those
| targetAddress |X |X | | | | telephoneAssistant |X |X | | | | telephoneNumber |X |X | | |
-| thumbnailphoto |X |X | |synced only once from Azure AD to Exchange Online after which Exchange Online becomes source of authority for this attribute and any later changes can't be synced from on-premises. See ([KB](https://support.microsoft.com/help/3062745/user-photos-aren-t-synced-from-the-on-premises-environment-to-exchange)) for more.|
+| thumbnailphoto |X |X | |Synced to the Microsoft 365 profile photo periodically. Admins can set the frequency of the sync by changing the Azure AD Connect sync interval. If a user changes their photo both on-premises and in the cloud within a window shorter than that interval, the latest photo isn't guaranteed to be served.|
| title |X |X | | |
| unauthOrig |X |X |X | |
| url |X |X | | |
In this case, start with the list of attributes in this topic and identify those
| st |X |X | | |
| streetAddress |X |X | | |
| telephoneNumber |X |X | | |
-| thumbnailphoto |X |X | |synced only once from Azure AD to Exchange Online after which Exchange Online becomes source of authority for this attribute and any later changes can't be synced from on-premises. See ([KB](https://support.microsoft.com/help/3062745/user-photos-aren-t-synced-from-the-on-premises-environment-to-exchange)) for more.|
+| thumbnailphoto |X |X | |Synced to the Microsoft 365 profile photo periodically. Admins can set the frequency of the sync by changing the Azure AD Connect sync interval. If a user changes their photo both on-premises and in the cloud within a window shorter than that interval, the latest photo isn't guaranteed to be served.|
| title |X |X | | |
| usageLocation |X | | |mechanical property. The user's country/region. Used for license assignment. |
| userPrincipalName |X | | |UPN is the login ID for the user. Most often the same as [mail] value. |
active-directory Howto Enforce Signed Saml Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/howto-enforce-signed-saml-authentication.md
If enabled Azure Active Directory will validate the requests against the public
- Key identifier in request is missing and two most recently added certificates don't match with the request signature.
- Request signed but algorithm missing.
- No certificate matching with provided key identifier.
-- Signature algorithm not allowed. Only RSA-SHA256 is supported.
+- Signature algorithm not allowed. Only RSA-SHA256 is supported.
+
+> [!NOTE]
+> A `Signature` element in `AuthnRequest` elements is optional. If `Require Verification certificates` is not checked, Azure AD does not validate signed authentication requests even if a signature is present. Requestor verification is provided by responding only to registered Assertion Consumer Service URLs.
+
+> If `Require Verification certificates` is checked, SAML request signature verification works for SP-initiated (service provider/relying party initiated) authentication requests only. Only the application configured by the service provider has access to the private and public keys for signing the incoming SAML authentication requests from the application. The public key should be uploaded to allow verification of the request, in which case Azure AD has access to only the public key.
+
+> Enabling `Require Verification certificates` doesn't allow IDP-initiated authentication requests (like the SSO testing feature, My Apps, or the M365 app launcher) to be validated, because the IDP wouldn't possess the same private keys as the registered application.
## To configure SAML Request Signature Verification in the Azure portal
active-directory What Is Application Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/what-is-application-management.md
Your Azure AD reporting and monitoring solution depends on your legal, security,
You can clean up access to applications. For example, [removing a user's access](methods-for-removing-user-access.md). You can also [disable how a user signs in](disable-user-sign-in-portal.md). And finally, you can delete the application if it's no longer needed for the organization. For more information on how to delete an enterprise application from your Azure AD tenant, see [Quickstart: Delete an enterprise application](delete-application-portal.md).
+## Guided walkthrough
+
+For a guided walkthrough of many of the recommendations in this article, see the [Microsoft 365 Secure your cloud apps with Single Sign On (SSO) guided walkthrough](https://go.microsoft.com/fwlink/?linkid=2221502).
+ ## Next steps - Get started by adding your first enterprise application with the [Quickstart: Add an enterprise application](add-application-portal.md).
active-directory Concept All Sign Ins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-all-sign-ins.md
Title: Sign-in logs (preview) in Azure Active Directory
-description: Conceptual information about Azure AD sign-in logs, including new features in preview.
+ Title: Sign-in logs (preview)
+description: Conceptual information about sign-in logs, including new features in preview.
Previously updated : 01/12/2023 Last updated : 03/24/2023
You can customize the list view by clicking **Columns** in the toolbar.
![Screenshot customize columns button.](./media/concept-all-sign-ins/sign-in-logs-columns-preview.png)
+#### Considerations for MFA sign-ins
+
+When a user signs in with MFA, several separate MFA events are actually taking place. For example, if a user enters the wrong validation code or doesn't respond in time, additional MFA events are sent to reflect the latest status of the sign-in attempt. These sign-in events appear as one line item in the Azure AD sign-in logs. That same sign-in event in Azure Monitor, however, appears as multiple line items. These events all have the same `correlationId`.
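To pull every underlying event for a single attempt out of the log, you can filter on that shared correlation ID. The following is a hedged sketch with the Microsoft Graph PowerShell SDK; the correlation ID below is a placeholder, not a real value:

```powershell
# Requires the Microsoft Graph PowerShell SDK (Microsoft.Graph.Reports)
Connect-MgGraph -Scopes "AuditLog.Read.All"

# Placeholder: replace with the correlation ID from the sign-in log entry
$correlationId = "00000000-0000-0000-0000-000000000000"

# All events for the sign-in attempt share this correlation ID
Get-MgAuditLogSignIn -Filter "correlationId eq '$correlationId'" |
    Select-Object CreatedDateTime, UserPrincipalName, Status
```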
+ ### Non-interactive user sign-ins
-Like interactive user sign-ins, non-interactive sign-ins are done on behalf of a user. These sign-ins were performed by a client app or OS components on behalf of a user and don't require the user to provide an authentication factor. Instead, the device or client app uses a token or code to authenticate or access a resource on behalf of a user. In general, the user will perceive these sign-ins as happening in the background.
+Like interactive user sign-ins, non-interactive sign-ins are done on behalf of a user. These sign-ins were performed by a client app or OS components on behalf of a user and don't require the user to provide an authentication factor. Instead, the device or client app uses a token or code to authenticate or access a resource on behalf of a user. In general, the user perceives these sign-ins as happening in the background.
**Report size:** Large </br> **Examples:**
You can't customize the fields shown in this report.
To make it easier to digest the data, non-interactive sign-in events are grouped. Clients often create many non-interactive sign-ins on behalf of the same user in a short time period. The non-interactive sign-ins share the same characteristics except for the time the sign-in was attempted. For example, a client may get an access token once per hour on behalf of a user. If the state of the user or client doesn't change, the IP address, resource, and all other information is the same for each access token request. The only state that does change is the date and time of the sign-in.
-When Azure AD logs multiple sign-ins that are identical other than time and date, those sign-ins will be from the same entity and are aggregated into a single row. A row with multiple identical sign-ins (except for date and time issued) will have a value greater than 1 in the *# sign-ins* column. These aggregated sign-ins may also appear to have the same time stamps. The **Time aggregate** filter can set to 1 hour, 6 hours, or 24 hours. You can expand the row to see all the different sign-ins and their different time stamps.
+When Azure AD logs multiple sign-ins that are identical other than time and date, those sign-ins are from the same entity and are aggregated into a single row. A row with multiple identical sign-ins (except for date and time issued) has a value greater than 1 in the *# sign-ins* column. These aggregated sign-ins may also appear to have the same time stamps. The **Time aggregate** filter can be set to 1 hour, 6 hours, or 24 hours. You can expand the row to see all the different sign-ins and their different time stamps.
Sign-ins are aggregated in the non-interactive users when the following data matches:
The IP address of non-interactive sign-ins doesn't match the actual source IP of
### Service principal sign-ins
-Unlike interactive and non-interactive user sign-ins, service principal sign-ins don't involve a user. Instead, they're sign-ins by any non-user account, such as apps or service principals (except managed identity sign-in, which are in included only in the managed identity sign-in log). In these sign-ins, the app or service provides its own credential, such as a certificate or app secret to authenticate or access resources.
+Unlike interactive and non-interactive user sign-ins, service principal sign-ins don't involve a user. Instead, they're sign-ins by any nonuser account, such as apps or service principals (except managed identity sign-ins, which are included only in the managed identity sign-in log). In these sign-ins, the app or service provides its own credential, such as a certificate or app secret, to authenticate or access resources.
**Report size:** Large </br>
Select the **Add filters** option from the top of the table to get started.
![Screenshot of the sign-in logs page with the Add filters option highlighted.](./media/concept-all-sign-ins/sign-in-logs-filter-preview.png)
-There are several filter options to choose from. Below are some notable options and details.
+There are several filter options to choose from:
- **User:** The *user principal name* (UPN) of the user in question.
- **Status:** Options are *Success*, *Failure*, and *Interrupted*.
active-directory Concept Sign Ins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-sign-ins.md
Previously updated : 01/12/2023 Last updated : 03/24/2023
Select the **Add filters** option from the top of the table to get started.
![Screenshot of the sign-in logs page with the Add filters option highlighted.](./media/concept-sign-ins/sign-in-logs-filter.png)
-There are several filter options to choose from. Below are some notable options and details.
+There are several filter options to choose from:
- **User:** The *user principal name* (UPN) of the user in question.
- **Status:** Options are *Success*, *Failure*, and *Interrupted*.
There are several filter options to choose from. Below are some notable options
- *Not applied:* No policy applied to the user and application during sign-in.
- *Success:* One or more CA policies applied to the user and application (but not necessarily the other conditions) during sign-in.
- *Failure:* The sign-in satisfied the user and application condition of at least one CA policy and grant controls are either not satisfied or set to block access.
-- **IP addresses:** There is no definitive connection between an IP address and where the computer with that address is physically located. Mobile providers and VPNs issue IP addresses from central pools that are often far from where the client device is actually used. Currently, converting IP address to a physical location is a best effort based on traces, registry data, reverse lookups and other information.
+- **IP addresses:** There's no definitive connection between an IP address and where the computer with that address is physically located. Mobile providers and VPNs issue IP addresses from central pools that are often far from where the client device is actually used. Currently, converting IP address to a physical location is a best effort based on traces, registry data, reverse lookups and other information.
The following table provides the options and descriptions for the **Client app** filter option.
Now that your sign-in logs table is formatted appropriately, you can more effect
### Sign-in error codes
-If a sign-in failed, you can get more information about the reason in the **Basic info** section of the related log item. The error code and associated failure reason appear in the details. Because of the complexity of some Azure AD environments, we cannot document every possible error code and resolution. Some errors may require [submitting a support request](../fundamentals/how-to-get-support.md) to resolve the issue.
+If a sign-in failed, you can get more information about the reason in the **Basic info** section of the related log item. The error code and associated failure reason appear in the details. Because of the complexity of some Azure AD environments, we can't document every possible error code and resolution. Some errors may require [submitting a support request](../fundamentals/how-to-get-support.md) to resolve the issue.
![Screenshot of a sign-in error code.](./media/concept-sign-ins/error-code.png)
When analyzing authentication details, take note of the following details:
- The **Primary authentication** row isn't initially logged.
- If you're unsure of a detail in the logs, gather the **Request ID** and **Correlation ID** to use for further analyzing or troubleshooting.
+#### Considerations for MFA sign-ins
+
+When a user signs in with MFA, several separate MFA events are actually taking place. For example, if a user enters the wrong validation code or doesn't respond in time, additional MFA events are sent to reflect the latest status of the sign-in attempt. These sign-in events appear as one line item in the Azure AD sign-in logs. That same sign-in event in Azure Monitor, however, appears as multiple line items. These events all have the same `correlationId`.
+ ## Sign-in data used by other services Sign-in data is used by several services in Azure to monitor risky sign-ins and provide insight into application usage.
active-directory Recommendation Migrate Apps From Adfs To Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/recommendation-migrate-apps-from-adfs-to-azure-ad.md
Previously updated : 03/07/2023 Last updated : 03/25/2023 - # Azure AD recommendation: Migrate apps from ADFS to Azure AD
-[Azure AD recommendations](overview-recommendations.md) is a feature that provides you with personalized insights and actionable guidance to align your tenant with recommended best practices.
+[Azure AD recommendations](overview-recommendations.md) provides you with personalized insights and actionable guidance to align your tenant with recommended best practices.
This article covers the recommendation to migrate apps from Active Directory Federated Services (AD FS) to Azure Active Directory (Azure AD). This recommendation is called `adfsAppsMigration` in the recommendations API in Microsoft Graph.
Using Azure AD gives you granular per-application access controls to secure acce
## Action plan 1. [Install Azure AD Connect Health](../hybrid/how-to-connect-install-roadmap.md) on your AD FS server.
+1. [Review the AD FS application activity report](../manage-apps/migrate-adfs-application-activity.md) to get insights about your AD FS applications.
+1. Read the solution guide for [migrating applications to Azure AD](../manage-apps/migrate-adfs-apps-to-azure.md).
+1. Migrate applications to Azure AD. For more information, see the article [Migrate from federation to cloud authentication](../hybrid/migrate-from-federation-to-cloud-authentication.md).
-2. [Review the AD FS application activity report](../manage-apps/migrate-adfs-application-activity.md) to get insights about your AD FS applications.
+### Guided walkthrough
-3. Read the solution guide for [migrating applications to Azure AD](../manage-apps/migrate-adfs-apps-to-azure.md).
+For a guided walkthrough of many of the recommendations in this article, see the migration guide [Migrate from AD FS to Microsoft Azure Active Directory for identity management](https://setup.microsoft.com/azure/migrate-ad-fs-to-microsoft-azure-ad).
-4. Migrate applications to Azure AD. For more information, use [the deployment plan for enabling single sign-on](https://go.microsoft.com/fwlink/?linkid=2110877&amp;clcid=0x409).
-
## Next steps - [Review the Azure AD recommendations overview](overview-recommendations.md)
active-directory Citi Program Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/citi-program-tutorial.md
+
+ Title: Azure Active Directory SSO integration with CITI Program
+description: Learn how to configure single sign-on between Azure Active Directory and CITI Program.
++++++++ Last updated : 03/26/2023++++
+# Azure Active Directory SSO integration with CITI Program
+
+In this article, you learn how to integrate CITI Program with Azure Active Directory (Azure AD). The CITI Program identifies education and training needs in the communities we serve and provides high quality, peer-reviewed, web-based educational materials to meet those needs. When you integrate CITI Program with Azure AD, you can:
+
+* Control in Azure AD who has access to CITI Program.
+* Enable your users to be automatically signed-in to CITI Program with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You configure and test Azure AD single sign-on for CITI Program in a test environment. CITI Program supports **SP** initiated single sign-on and **Just In Time** user provisioning.
+
+> [!NOTE]
+> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
+
+## Prerequisites
+
+To integrate Azure Active Directory with CITI Program, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* CITI Program single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the CITI Program application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add CITI Program from the Azure AD gallery
+
+Add CITI Program from the Azure AD application gallery to configure single sign-on with CITI Program. For more information on how to add application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **CITI Program** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type the URL:
+ `https://www.citiprogram.org/shibboleth`
+
+ b. In the **Reply URL** textbox, type the URL:
+ `https://www.citiprogram.org/Shibboleth.sso/SAML2/POST`
+
+ c. In the **Sign on URL** textbox, type a URL using the following pattern:
+ `https://www.citiprogram.org/Shibboleth.sso/Login?target=https://www.citiprogram.org/Secure/Welcome.cfm?inst=<InstitutionID>&entityID=<EntityID>`
+
+ > [!NOTE]
+ > This value is not real. Update this value with the actual Sign on URL. Contact [CITI Program support team](mailto:shibboleth@citiprogram.org) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. CITI Program application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![Screenshot shows the image of attributes configuration.](common/default-attributes.png "Image")
+
+1. In addition to the above, the CITI Program application expects a few more attributes to be passed back in the SAML response, which are shown below. These attributes are also prepopulated, but you can review them per your requirements.
+
+ | Name | Source Attribute|
+ | | |
+ | urn:oid:1.3.6.1.4.1.5923.1.1.1.6 | user.userprincipalname |
+ | urn:oid:0.9.2342.19200300.100.1.3 | user.userprincipalname |
+ | urn:oid:2.5.4.42 | user.givenname |
+ | urn:oid:2.5.4.4 | user.surname |
+
+1. On the **Set-up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
+
+1. On the **Set up CITI Program** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")
+
+## Configure CITI Program SSO
+
+To configure single sign-on on the **CITI Program** side, you need to send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Azure portal to the [CITI Program support team](mailto:shibboleth@citiprogram.org). They configure this setting so the SAML SSO connection is set properly on both sides.
+
+### Create CITI Program test user
+
+In this section, a user called B.Simon is created in CITI Program. CITI Program supports just-in-time user provisioning, which is enabled by default. There's no action item for you in this section. If a user doesn't already exist in CITI Program, a new one is commonly created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with following options.
+
+* Click on **Test this application** in the Azure portal. This redirects to the CITI Program sign-on URL, where you can initiate the login flow.
+
+* Go to the CITI Program sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the CITI Program tile in My Apps, you're redirected to the CITI Program sign-on URL. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+
+## Next steps
+
+Once you configure CITI Program, you can enforce session control, which protects against the exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Infor Cloudsuite Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/infor-cloudsuite-provisioning-tutorial.md
This section guides you through the steps to configure the Azure AD provisioning
9. Review the user attributes that are synchronized from Azure AD to Infor CloudSuite in the **Attribute Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Infor CloudSuite for update operations. Select the **Save** button to commit any changes.
- ![Infor CloudSuite User Attributes](media/infor-cloudsuite-provisioning-tutorial/userattributes.png)
+ |Attribute|Type|Supported for filtering|Required by Infor CloudSuite|
+ |---|---|---|---|
+ |userName|String|&check;|&check;|
+ |active|Boolean|||
+ |displayName|String|||
+ |externalId|String|||
+ |name.familyName|String|||
+ |name.givenName|String|||
+ |title|String|||
+ |emails[type eq "work"].value|String|||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:manager|String|||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department|String|||
+ |urn:ietf:params:scim:schemas:extension:infor:2.0:User:actorId|String|||
+ |urn:ietf:params:scim:schemas:extension:infor:2.0:User:federationId|String|||
+ |urn:ietf:params:scim:schemas:extension:infor:2.0:User:ifsPersonId|String|||
+ |urn:ietf:params:scim:schemas:extension:infor:2.0:User:inUser|String|||
+ |urn:ietf:params:scim:schemas:extension:infor:2.0:User:userAlias|String|||
+ 10. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Infor CloudSuite**.
This section guides you through the steps to configure the Azure AD provisioning
11. Review the group attributes that are synchronized from Azure AD to Infor CloudSuite in the **Attribute Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Infor CloudSuite for update operations. Select the **Save** button to commit any changes.
- ![Infor CloudSuite Group Attributes](media/infor-cloudsuite-provisioning-tutorial/groupattributes.png)
+ |Attribute|Type|Supported for filtering|Required by Infor CloudSuite|
+ |---|---|---|---|
+ |displayName|String|&check;|&check;|
+ |members|Reference|||
+ |externalId|String|||
12. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
This section guides you through the steps to configure the Azure AD provisioning
![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
-This operation starts the initial synchronization of all users and/or groups defined in **Scope** in the **Settings** section. The initial sync takes longer to perform than subsequent syncs, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running. You can use the **Synchronization Details** section to monitor progress and follow links to provisioning activity report, which describes all actions performed by the Azure AD provisioning service on Infor CloudSuite.
+This operation starts the initial synchronization cycle of all users defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+* If the provisioning configuration seems to be in an unhealthy state, the application goes into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+
-For more information on how to read the Azure AD provisioning logs, see [Reporting on automatic user account provisioning](../app-provisioning/check-status-user-account-provisioning.md).
+## Change log
+02/15/2023 - Added support for custom extension user attributes **urn:ietf:params:scim:schemas:extension:infor:2.0:User:actorId**, **urn:ietf:params:scim:schemas:extension:infor:2.0:User:federationId**, **urn:ietf:params:scim:schemas:extension:infor:2.0:User:ifsPersonId**, **urn:ietf:params:scim:schemas:extension:infor:2.0:User:inUser**, and **urn:ietf:params:scim:schemas:extension:infor:2.0:User:userAlias**.
-## Additional resources
+## More resources
* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
active-directory Intradiem Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/intradiem-tutorial.md
+
+ Title: Azure Active Directory SSO integration with Intradiem
+description: Learn how to configure single sign-on between Azure Active Directory and Intradiem.
++++++++ Last updated : 03/26/2023++++
+# Azure Active Directory SSO integration with Intradiem
+
+In this article, you learn how to integrate Intradiem with Azure Active Directory (Azure AD). Intradiem is an AI-powered productivity solution that integrates with call center and workforce management software to improve savings, productivity, and engagement. When you integrate Intradiem with Azure AD, you can:
+
+* Control in Azure AD who has access to Intradiem.
+* Enable your users to be automatically signed-in to Intradiem with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You configure and test Azure AD single sign-on for Intradiem in a test environment. Intradiem supports only **SP** initiated single sign-on.
+
+## Prerequisites
+
+To integrate Azure Active Directory with Intradiem, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Intradiem single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the Intradiem application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add Intradiem from the Azure AD gallery
+
+Add Intradiem from the Azure AD application gallery to configure single sign-on with Intradiem. For more information on how to add an application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **Intradiem** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a URL using one of the following patterns:
+
+ | **Identifier** |
+ ||
+ | `https://<CustomerName>.intradiem.com/auth/realms/<CustomerName>` |
+ | `https://<CustomerName>auth.intradiem.com/auth/realms/<CustomerName>` |
+
+ b. In the **Reply URL** textbox, type a URL using one of the following patterns:
+
+ | **Reply URL** |
+ ||
+ | `https://<CustomerName>auth.intradiem.com/auth/realms/<CustomerName>/broker/<CustomerName>/endpoint` |
+ | `https://<CustomerName>.intradiem.com/auth/realms/<CustomerName>/broker/<CustomerName>/endpoint` |
+
+ c. In the **Sign on URL** textbox, type a URL using one of the following patterns:
+
+ | **Sign on URL** |
+ |-|
+ | `https://<CustomerName>auth.intradiem.com` |
+ | `https://<CustomerName>.intradiem.com` |
+
+ > [!Note]
+ > These values are not real. Update these values with the actual Identifier, Reply URL, and Sign on URL. Contact the [Intradiem support team](mailto:support@intradiem.com) to get these values. You can also refer to the patterns shown in the Basic SAML Configuration section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, select the copy button to copy the **App Federation Metadata Url**, and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/copy-metadataurl.png "Certificate")
+
+## Configure Intradiem SSO
+
+To configure single sign-on on the **Intradiem** side, you need to send the **App Federation Metadata Url** to the [Intradiem support team](mailto:support@intradiem.com). They use this information to configure the SAML SSO connection properly on both sides.
+
+### Create Intradiem test user
+
+In this section, you create a user called Britta Simon in Intradiem. Work with the [Intradiem support team](mailto:support@intradiem.com) to add the users to the Intradiem platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click **Test this application** in the Azure portal. This redirects to the Intradiem Sign-on URL, where you can initiate the login flow.
+
+* Go to the Intradiem Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the Intradiem tile in My Apps, you're redirected to the Intradiem Sign-on URL. For more information, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+
+## Next steps
+
+Once you configure Intradiem, you can enforce session control, which protects against the exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Lambda Test Single Sign On Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/lambda-test-single-sign-on-tutorial.md
+
+ Title: Azure Active Directory SSO integration with LambdaTest Single Sign on
+description: Learn how to configure single sign-on between Azure Active Directory and LambdaTest Single Sign on.
++++++++ Last updated : 03/26/2023++++
+# Azure Active Directory SSO integration with LambdaTest Single Sign on
+
+In this article, you learn how to integrate LambdaTest Single Sign on with Azure Active Directory (Azure AD). LambdaTest's Single Sign-on application enables you to self-configure SSO with your Azure AD instance. When you integrate LambdaTest Single Sign on with Azure AD, you can:
+
+* Control in Azure AD who has access to LambdaTest Single Sign on.
+* Enable your users to be automatically signed-in to LambdaTest Single Sign on with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You configure and test Azure AD single sign-on for LambdaTest Single Sign on in a test environment. LambdaTest Single Sign on supports both **SP** and **IDP** initiated single sign-on and **Just In Time** user provisioning.
+
+## Prerequisites
+
+To integrate Azure Active Directory with LambdaTest Single Sign on, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* LambdaTest Single Sign on single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the LambdaTest Single Sign on application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add LambdaTest Single Sign on from the Azure AD gallery
+
+Add LambdaTest Single Sign on from the Azure AD application gallery to configure single sign-on with LambdaTest Single Sign on. For more information on how to add an application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **LambdaTest Single Sign on** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a value using the following pattern:
+ `urn:auth0:lambdatest:<CustomerName>`
+
+ b. In the **Reply URL** textbox, type a URL using the following pattern:
+ `https://lambdatest.auth0.com/login/callback?connection=<CustomerName>`
+
+1. If you wish to configure the application in **SP** initiated mode, then perform the following step:
+
+ In the **Sign on URL** textbox, type the URL:
+ `https://accounts.lambdatest.com/auth0/login`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [LambdaTest Single Sign on Client support team](mailto:support@lambdatest.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate")
+
+1. On the **Set up LambdaTest Single Sign on** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")
+
+## Configure LambdaTest Single Sign on SSO
+
+To configure single sign-on on the **LambdaTest Single Sign on** side, you need to send the downloaded **Certificate (Base64)** and the appropriate copied URLs from the Azure portal to the [LambdaTest Single Sign on support team](mailto:support@lambdatest.com). They use this information to configure the SAML SSO connection properly on both sides.
+
+### Create LambdaTest Single Sign on test user
+
+In this section, a user called B.Simon is created in LambdaTest Single Sign on. LambdaTest Single Sign on supports just-in-time user provisioning, which is enabled by default. There's no action item for you in this section. If a user doesn't already exist in LambdaTest Single Sign on, a new one is commonly created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click **Test this application** in the Azure portal. This redirects to the LambdaTest Single Sign on Sign-on URL, where you can initiate the login flow.
+
+* Go to the LambdaTest Single Sign on Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click **Test this application** in the Azure portal, and you should be automatically signed in to the LambdaTest Single Sign on instance for which you set up SSO.
+
+You can also use Microsoft My Apps to test the application in either mode. When you click the LambdaTest Single Sign on tile in My Apps, if configured in SP mode you're redirected to the application sign-on page to initiate the login flow, and if configured in IDP mode you're automatically signed in to the LambdaTest Single Sign on instance for which you set up SSO. For more information, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+
+## Next steps
+
+Once you configure LambdaTest Single Sign on, you can enforce session control, which protects against the exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Sauce Labs Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sauce-labs-tutorial.md
+
+ Title: Azure Active Directory SSO integration with Sauce Labs
+description: Learn how to configure single sign-on between Azure Active Directory and Sauce Labs.
++++++++ Last updated : 03/26/2023++++
+# Azure Active Directory SSO integration with Sauce Labs
+
+In this article, you learn how to integrate Sauce Labs with Azure Active Directory (Azure AD). Sauce Labs provides app integration for single sign-on and automatic account provisioning. When you integrate Sauce Labs with Azure AD, you can:
+
+* Control in Azure AD who has access to Sauce Labs.
+* Enable your users to be automatically signed-in to Sauce Labs with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You configure and test Azure AD single sign-on for Sauce Labs in a test environment. Sauce Labs supports both **SP** and **IDP** initiated single sign-on and **Just In Time** user provisioning.
+
+> [!NOTE]
+> The Identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
+
+## Prerequisites
+
+To integrate Azure Active Directory with Sauce Labs, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Sauce Labs single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the Sauce Labs application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add Sauce Labs from the Azure AD gallery
+
+Add Sauce Labs from the Azure AD application gallery to configure single sign-on with Sauce Labs. For more information on how to add an application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **Sauce Labs** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, you don't need to perform any steps because the app is already pre-integrated with Azure.
+
+1. If you wish to configure the application in **SP** initiated mode, then perform the following step:
+
+ In the **Sign on URL** textbox, type the URL:
+ `https://accounts.saucelabs.com/`
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
+
+1. On the **Set up Sauce Labs** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")
+
+## Configure Sauce Labs SSO
+
+To configure single sign-on on the **Sauce Labs** side, you need to send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Azure portal to the [Sauce Labs support team](mailto:support@saucelabs.com). They use this information to configure the SAML SSO connection properly on both sides.
+
+### Create Sauce Labs test user
+
+In this section, a user called B.Simon is created in Sauce Labs. Sauce Labs supports just-in-time user provisioning, which is enabled by default. There's no action item for you in this section. If a user doesn't already exist in Sauce Labs, a new one is commonly created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click **Test this application** in the Azure portal. This redirects to the Sauce Labs Sign-on URL, where you can initiate the login flow.
+
+* Go to the Sauce Labs Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click **Test this application** in the Azure portal, and you should be automatically signed in to the Sauce Labs instance for which you set up SSO.
+
+You can also use Microsoft My Apps to test the application in either mode. When you click the Sauce Labs tile in My Apps, if configured in SP mode you're redirected to the application sign-on page to initiate the login flow, and if configured in IDP mode you're automatically signed in to the Sauce Labs instance for which you set up SSO. For more information, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+
+## Next steps
+
+Once you configure Sauce Labs, you can enforce session control, which protects against the exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
aks Load Balancer Standard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/load-balancer-standard.md
This article covers integration with a public load balancer on AKS. For internal
## Before you begin
-Azure Load Balancer is available in two SKUs: *Basic* and *Standard*. The *Standard* SKU is used by default when you create an AKS cluster. The *Standard* SKU gives you access to added functionality, such as a larger backend pool, [multiple node pools](use-multiple-node-pools.md), [Availability Zones](availability-zones.md), and is [secure by default][azure-lb]. It's the recommended load balancer SKU for AKS.
-
-For more information on the *Basic* and *Standard* SKUs, see [Azure Load Balancer SKU comparison][azure-lb-comparison].
-
-This article assumes you have an AKS cluster with the *Standard* SKU Azure Load Balancer. If you need an AKS cluster, you can create one [using Azure CLI][aks-quickstart-cli], [Azure PowerShell][aks-quickstart-powershell], or [the Azure portal][aks-quickstart-portal].
+* Azure Load Balancer is available in two SKUs: *Basic* and *Standard*. The *Standard* SKU is used by default when you create an AKS cluster. The *Standard* SKU gives you access to added functionality, such as a larger backend pool, [multiple node pools](use-multiple-node-pools.md), [Availability Zones](availability-zones.md), and is [secure by default][azure-lb]. It's the recommended load balancer SKU for AKS. For more information on the *Basic* and *Standard* SKUs, see [Azure Load Balancer SKU comparison][azure-lb-comparison].
+* This article assumes you have an AKS cluster with the *Standard* SKU Azure Load Balancer. If you need an AKS cluster, you can create one [using Azure CLI][aks-quickstart-cli], [Azure PowerShell][aks-quickstart-powershell], or [the Azure portal][aks-quickstart-portal].
+* AKS manages the lifecycle and operations of agent nodes. Modifying the IaaS resources associated with the agent nodes isn't supported. An example of an unsupported operation is making manual changes to the load balancer resource group.
> [!IMPORTANT]
> If you'd prefer to use your own gateway, firewall, or proxy to provide an outbound connection, you can skip the creation of the load balancer outbound pool and the respective frontend IP by using [**outbound type as UserDefinedRouting (UDR)**](egress-outboundtype.md). The outbound type defines the egress method for a cluster and defaults to type `LoadBalancer`.
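
For example, a minimal sketch of creating such a cluster; the resource names are illustrative, and the subnet's route table is assumed to already send `0.0.0.0/0` to your own gateway, firewall, or proxy:

```bash
# Create an AKS cluster whose egress bypasses the load balancer outbound pool
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --vnet-subnet-id $SUBNET_ID \
    --outbound-type userDefinedRouting
```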
aks Private Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/private-clusters.md
Private cluster is available in public regions, Azure Government, and Azure Chin
* The `aks-preview` extension 0.5.29 or higher.
* If using Azure Resource Manager (ARM) or the Azure REST API, the AKS API version must be 2021-05-01 or higher.
* Azure Private Link service is supported on Standard Azure Load Balancer only. Basic Azure Load Balancer isn't supported.
-* To use a custom DNS server, add the Azure public IP address 168.63.129.16 as the upstream DNS server in the custom DNS server. For more information about the Azure IP address, see [What is IP address 168.63.129.16?][virtual-networks-168.63.129.16]
+* To use a custom DNS server, add the Azure public IP address 168.63.129.16 as the upstream DNS server in the custom DNS server, and make sure to add this public IP address as the *first* DNS server. For more information about the Azure IP address, see [What is IP address 168.63.129.16?][virtual-networks-168.63.129.16]
## Limitations
az aks create -n <private-cluster-name> -g <private-cluster-resource-group> --lo
The API server endpoint has no public IP address. To manage the API server, you'll need to use a VM that has access to the AKS cluster's Azure Virtual Network (VNet). There are several options for establishing network connectivity to the private cluster.
-* Create a VM in the same Azure Virtual Network (VNet) as the AKS cluster.
+* Create a VM in the same Azure Virtual Network (VNet) as the AKS cluster using the [`az vm create`][az-vm-create] command with the `--vnet-name` parameter.
* Use a VM in a separate network and set up [Virtual network peering][virtual-network-peering]. See the section below for more information on this option.
* Use an [Express Route or VPN][express-route-or-VPN] connection.
* Use the [AKS `command invoke` feature][command-invoke].
* Use a [private endpoint][private-endpoint-service] connection.
-Creating a VM in the same VNET as the AKS cluster is the easiest option. Express Route and VPNs add costs and require additional networking complexity. Virtual network peering requires you to plan your network CIDR ranges to ensure there are no overlapping ranges.
+Creating a VM in the same VNet as the AKS cluster is the easiest option. Express Route and VPNs add costs and require additional networking complexity. Virtual network peering requires you to plan your network CIDR ranges to ensure there are no overlapping ranges.
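
For example, a minimal sketch of that first option; the resource group, VM name, and subnet are illustrative, and the VNet name follows the *aks-vnet-\** form described below:

```bash
# Create a jumpbox VM in the same virtual network as the private AKS cluster
az vm create \
    --resource-group myResourceGroup \
    --name myJumpboxVM \
    --image Ubuntu2204 \
    --vnet-name aks-vnet-12345678 \
    --subnet myVmSubnet \
    --admin-username azureuser \
    --generate-ssh-keys
```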
## Virtual network peering
Virtual network peering is one way to access your private cluster. To use virtua
1. In the Azure portal, navigate to the resource group that contains your cluster's virtual network. 1. In the right pane, select the virtual network. The virtual network name is in the form *aks-vnet-\**. 1. In the left pane, select **Peerings**.
-1. Select **Add**, add the virtual network of the VM, and then create the peering.
-1. Go to the virtual network where you have the VM and select **Peerings**. Select the AKS virtual network, and then create the peering. If the address ranges on the AKS virtual network and the VM's virtual network clash, peering fails. For more information, see [Virtual network peering][virtual-network-peering].
+1. Select **Add**, add the virtual network of the VM, and then create the peering. For more information, see [Virtual network peering][virtual-network-peering].
## Hub and spoke with custom DNS
For associated best practices, see [Best practices for network connectivity and
[install-azure-cli]: /cli/azure/install-azure-cli
[private-dns-zone-contributor-role]: ../role-based-access-control/built-in-roles.md#dns-zone-contributor
[network-contributor-role]: ../role-based-access-control/built-in-roles.md#network-contributor
+[az-vm-create]: /cli/azure/vm#az-vm-create
aks Stop Api Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/stop-api-upgrade.md
+
+ Title: Stop cluster upgrades on API breaking changes in Azure Kubernetes Service (AKS) (preview)
+description: Learn how to stop minor version change Azure Kubernetes Service (AKS) cluster upgrades on API breaking changes.
+++ Last updated : 03/24/2023++
+# Stop cluster upgrades on API breaking changes in Azure Kubernetes Service (AKS)
+
+To stay within a supported Kubernetes version, you usually have to upgrade your version at least once per year and prepare for all possible disruptions. These disruptions include ones caused by API breaking changes, API deprecations, and dependencies such as Helm and CSI. It can be difficult to anticipate these disruptions and migrate critical workloads without experiencing any downtime.
+
+Azure Kubernetes Service (AKS) now supports fail fast on minor version change cluster upgrades. This feature alerts you with an error message if it detects usage of deprecated APIs in the goal version.
++
+## Fail fast on control plane minor version manual upgrades in AKS (preview)
+
+AKS fails fast on minor version change cluster manual upgrades if it detects usage of deprecated APIs in the goal version. This happens only if all of the following criteria are true:
+
+- It's a minor version change for the cluster control plane.
+- Your Kubernetes goal version is >= 1.26.0.
+- The PUT MC request uses a preview API version of >= 2023-01-02-preview.
+- The usage is performed within the last 1-12 hours. We record usage hourly, so usage within the last hour isn't guaranteed to appear in the detection.
+
+If the previous criteria are true and you attempt an upgrade, you'll receive an error message similar to the following example:
+
+```
+Bad Request({
+  "code": "ValidationError",
+  "message": "Control Plane upgrade is blocked due to recent usage of a Kubernetes API deprecated in the specified version. Please refer to https://kubernetes.io/docs/reference/using-api/deprecation-guide to migrate the usage. To bypass this error, set IgnoreKubernetesDeprecations in upgradeSettings.overrideSettings. Bypassing this error without migrating usage will result in the deprecated Kubernetes API calls failing. Usage details: 1 error occurred:\n\t* usage has been detected on API flowcontrol.apiserver.k8s.io.prioritylevelconfigurations.v1beta1, and was recently seen at: 2023-03-23 20:57:18 +0000 UTC, which will be removed in 1.26\n\n",
+  "subcode": "UpgradeBlockedOnDeprecatedAPIUsage"
+})
+```
+
+After receiving the error message, you have two options:
+
+- Remove usage on your end and wait 12 hours for the current record to expire.
+- Bypass the validation to ignore API changes.
+
+### Remove usage on API breaking changes
+
+Remove usage of the deprecated APIs using the following steps:
+
+1. Remove usage of the deprecated API listed in the error message. One way to find remaining usage is sketched after these steps.
+2. Wait 12 hours for the current record to expire.
+3. Retry your cluster upgrade.
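+
+One way to find remaining usage of a deprecated API is to query the Kubernetes API server's `apiserver_requested_deprecated_apis` metric. This is a general Kubernetes technique rather than an AKS-specific command, and it assumes your account can read the API server's `/metrics` endpoint:
+
+```bash
+# List deprecated APIs the API server has recently served requests for;
+# each reported series names the group, version, and resource still in use.
+kubectl get --raw /metrics | grep apiserver_requested_deprecated_apis
+```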
+
+### Bypass validation to ignore API changes
+
+To bypass validation and ignore API breaking changes, update the `"properties":` block of the `Microsoft.ContainerService/ManagedClusters` `PUT` operation with the following settings:
+
+> [!NOTE]
+> The date and time you specify for `"until"` has to be in the future. `Z` denotes UTC (zero offset); the following example is in GMT. For more information, see [Combined date and time representations](https://en.wikipedia.org/wiki/ISO_8601#Combined_date_and_time_representations).
+
+```
+{
+ "properties": {
+ "upgradeSettings": {
+ "overrideSettings": {
+ "controlPlaneOverrides": [
+ "IgnoreKubernetesDeprecations"
+ ],
+ "until": "2023-04-01T13:00:00Z"
+ }
+ }
+ }
+}
+```
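+
+As a sketch of one way to send that `PUT` with the Azure CLI: fetch the cluster's current resource body, merge the `upgradeSettings` override above into it, and submit it back. The subscription, resource group, and cluster names are placeholders, and the API version matches the preview version this feature requires:
+
+```bash
+URL="https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ContainerService/managedClusters/<cluster-name>?api-version=2023-01-02-preview"
+
+# Fetch the current cluster resource body
+az rest --method get --url "$URL" > cluster.json
+
+# After merging the upgradeSettings override into cluster.json, submit the PUT
+az rest --method put --url "$URL" --body @cluster.json
+```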
+
+## Next steps
+
+In this article, you learned how AKS detects deprecated APIs before an update is triggered and fails the upgrade operation upfront. To learn more about AKS cluster upgrades, see:
+
+- [Upgrade an AKS cluster][upgrade-cluster]
+- [Use Planned Maintenance to schedule and control upgrades for your AKS clusters (preview)][planned-maintenance-aks]
+
+<!-- INTERNAL LINKS -->
+[upgrade-cluster]: upgrade-cluster.md
+[planned-maintenance-aks]: planned-maintenance.md
aks Use Metrics Server Vertical Pod Autoscaler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-metrics-server-vertical-pod-autoscaler.md
Title: Configure Metrics Server VPA in Azure Kubernetes Service (AKS) description: Learn how to vertically autoscale your Metrics Server pods on an Azure Kubernetes Service (AKS) cluster. Previously updated : 03/21/2023 Last updated : 03/27/2023 # Configure Metrics Server VPA in Azure Kubernetes Service (AKS)
To update the coefficient values, create a ConfigMap in the overlay *kube-system
1. Create a ConfigMap file named *metrics-server-config.yaml* and copy in the following manifest.

   ```yml
- apiVersion: v1
- kind: ConfigMap
- metadata:
- name: metrics-server-config
- namespace: kube-system
- labels:
- kubernetes.io/cluster-service: "true"
- addonmanager.kubernetes.io/mode: EnsureExists
- data:
- NannyConfiguration: |-
- apiVersion: nannyconfig/v1alpha1
- kind: NannyConfiguration
- baseCPU: 100m
- cpuPerNode: 1m
- baseMemory: 100Mi
- memoryPerNode: 8Mi
+ apiVersion: v1
+ kind: ConfigMap
+ metadata:
+ name: metrics-server-config
+ namespace: kube-system
+ labels:
+ kubernetes.io/cluster-service: "true"
+ addonmanager.kubernetes.io/mode: EnsureExists
+ data:
+ NannyConfiguration: |-
+ apiVersion: nannyconfig/v1alpha1
+ kind: NannyConfiguration
+ baseCPU: 100m
+ cpuPerNode: 1m
+ baseMemory: 100Mi
+ memoryPerNode: 8Mi
   ```

   In the ConfigMap example, the resource limit and request are changed to the following:
If you would like to bypass VPA for Metrics Server and manually control its reso
1. Create a ConfigMap file named *metrics-server-config.yaml* and copy in the following manifest.

   ```yml
- apiVersion: v1
- kind: ConfigMap
- metadata:
- name: metrics-server-config
- namespace: kube-system
- labels:
- kubernetes.io/cluster-service: "true"
- addonmanager.kubernetes.io/mode: EnsureExists
- data:
- NannyConfiguration: |-
- apiVersion: nannyconfig/v1alpha1
- kind: NannyConfiguration
- baseCPU: 100m
- cpuPerNode: 0m
- baseMemory: 100Mi
- memoryPerNode: 0Mi
+ apiVersion: v1
+ kind: ConfigMap
+ metadata:
+ name: metrics-server-config
+ namespace: kube-system
+ labels:
+ kubernetes.io/cluster-service: "true"
+ addonmanager.kubernetes.io/mode: EnsureExists
+ data:
+ NannyConfiguration: |-
+ apiVersion: nannyconfig/v1alpha1
+ kind: NannyConfiguration
+ baseCPU: 100m
+ cpuPerNode: 0m
+ baseMemory: 100Mi
+ memoryPerNode: 0Mi
   ```

   In this ConfigMap example, the resource limit and request are changed to the following:
If you would like to bypass VPA for Metrics Server and manually control its reso
   kubectl -n kube-system delete po metrics-server-pod-name
   ```
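
   If you don't know the Metrics Server pod name, one way to find it is by label. This is a sketch that assumes the AKS Metrics Server pods carry the `k8s-app=metrics-server` label:

   ```bash
   # List the Metrics Server pods in kube-system by their label
   kubectl -n kube-system get pods -l k8s-app=metrics-server
   ```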
-4. To verify the updated resources took affect, run the following command to review the Metrics Server VPA log.
+4. To verify the updated resources took effect, run the following command to review the Metrics Server VPA log.
   ```bash
   kubectl -n kube-system logs metrics-server-pod-name -c metrics-server-vpa
   ```
If you would like to bypass VPA for Metrics Server and manually control its reso
1. If you use the following ConfigMap, the Metrics Server VPA customizations aren't applied. You need to add a unit for `baseCPU`.

   ```yml
- apiVersion: v1
- kind: ConfigMap
- metadata:
- name: metrics-server-config
- namespace: kube-system
- labels:
- kubernetes.io/cluster-service: "true"
- addonmanager.kubernetes.io/mode: EnsureExists
- data:
- NannyConfiguration: |-
- apiVersion: nannyconfig/v1alpha1
- kind: NannyConfiguration
- baseCPU: 100
- cpuPerNode: 1m
- baseMemory: 100Mi
- memoryPerNode: 8Mi
+ apiVersion: v1
+ kind: ConfigMap
+ metadata:
+ name: metrics-server-config
+ namespace: kube-system
+ labels:
+ kubernetes.io/cluster-service: "true"
+ addonmanager.kubernetes.io/mode: EnsureExists
+ data:
+ NannyConfiguration: |-
+ apiVersion: nannyconfig/v1alpha1
+ kind: NannyConfiguration
+ baseCPU: 100
+ cpuPerNode: 1m
+ baseMemory: 100Mi
+ memoryPerNode: 8Mi
   ```

   The following example output shows that the updated throttling settings aren't applied.
Metrics Server is a component in the core metrics pipeline. For more information
[metrics-server-api-design]: https://github.com/kubernetes/design-proposals-archive/blob/main/instrumentation/resource-metrics-api.md

<!-- INTERNAL LINKS -->
-[horizontal-pod-autoscaler]: concepts-scale.md#horizontal-pod-autoscaler
+[horizontal-pod-autoscaler]: concepts-scale.md#horizontal-pod-autoscaler
aks Workload Identity Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-overview.md
Title: Use an Azure AD workload identity (preview) on Azure Kubernetes Service (AKS) description: Learn about Azure Active Directory workload identity (preview) for Azure Kubernetes Service (AKS) and how to migrate your application to authenticate using this identity. Previously updated : 03/14/2023 Last updated : 03/27/2023
Azure AD workload identity supports the following mappings related to a service
If you've used [Azure AD pod-managed identity][use-azure-ad-pod-identity], think of a service account as an Azure Identity, except a service account is part of the core Kubernetes API, rather than a [Custom Resource Definition][custom-resource-definition] (CRD). The following describes a list of available labels and annotations that can be used to configure the behavior when exchanging the service account token for an Azure AD access token.
-### Service account labels
-
-|Label |Description |Recommended value |Required |
-|||||
-|`azure.workload.identity/use` |Represents the service account<br> is to be used for workload identity. |true |Yes |
-
### Service account annotations

|Annotation |Description |Default |
If you've used [Azure AD pod-managed identity][use-azure-ad-pod-identity], think
|Label |Description |Recommended value |Required |
|---|---|---|---|
-|`azure.workload.identity/use` | Represents the pod is to be used for workload identity. |true |Yes |
+|`azure.workload.identity/use` | This label is required in the pod template spec. Only pods with this label are mutated by the azure-workload-identity mutating admission webhook to inject the Azure-specific environment variables and the projected service account token volume. |true |Yes |
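
For example, here's a minimal sketch of a pod that opts in via this label; the pod name, namespace, image, and the `workload-identity-sa` service account name are illustrative assumptions, not values from this article:

```bash
# Apply a minimal pod carrying the opt-in label; only labeled pods are
# mutated by the azure-workload-identity webhook.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: sample-workload
  namespace: default
  labels:
    azure.workload.identity/use: "true"
spec:
  serviceAccountName: workload-identity-sa
  containers:
    - name: app
      image: mcr.microsoft.com/azure-cli
      command: ["sleep", "3600"]
EOF
```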
### Pod annotations

|Annotation |Description |Default |
|---|---|---|
-|`azure.workload.identity/use` |Represents the service account<br> is to be used for workload identity. | |
|`azure.workload.identity/service-account-token-expiration` |Represents the `expirationSeconds` field for the projected service account token. It's an optional field that you configure to prevent any downtime caused by errors during service account token refresh. Kubernetes service account token expiry isn't correlated with Azure AD tokens. Azure AD tokens expire in 24 hours after they're issued. <sup>1</sup> |3600<br> Supported range is 3600-86400. |
|`azure.workload.identity/skip-containers` |Represents a semi-colon-separated list of containers to skip adding projected service account token volume. For example `container1;container2`. |By default, the projected service account token volume is added to all containers if the service account is labeled with `azure.workload.identity/use: true`. |
|`azure.workload.identity/inject-proxy-sidecar` |Injects a proxy init container and proxy sidecar into the pod. The proxy sidecar is used to intercept token requests to IMDS and acquire an Azure AD token on behalf of the user with federated identity credential. |true |
api-management Api Management Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-features.md
Previously updated : 02/07/2022 Last updated : 02/06/2023
Each API Management [pricing tier](https://aka.ms/apimpricing) offers a distinct
| -- | -- | -- | -- | -- | -- |
| Azure AD integration<sup>1</sup> | No | Yes | No | Yes | Yes |
| Virtual Network (VNet) support | No | Yes | No | No | Yes |
+| Private endpoint support for inbound connections | No | Yes | Yes | Yes | Yes |
| Multi-region deployment | No | No | No | No | Yes | | Availability zones | No | No | No | No | Yes | | Multiple custom domain names | No | Yes | No | No | Yes |
Each API Management [pricing tier](https://aka.ms/apimpricing) offers a distinct
| Built-in cache | No | Yes | Yes | Yes | Yes |
| Built-in analytics | No | Yes | Yes | Yes | Yes |
| [Self-hosted gateway](self-hosted-gateway-overview.md)<sup>3</sup> | No | Yes | No | No | Yes |
+| [Workspaces](workspaces-overview.md) | No | Yes | No | Yes | Yes |
| [TLS settings](api-management-howto-manage-protocols-ciphers.md) | Yes | Yes | Yes | Yes | Yes |
| [External cache](./api-management-howto-cache-external.md) | Yes | Yes | Yes | Yes | Yes |
| [Client certificate authentication](api-management-howto-mutual-certificates-for-clients.md) | Yes | Yes | Yes | Yes | Yes |
-| [Policies](api-management-howto-policies.md)<sup>4</sup> | Yes | Yes | Yes | Yes | Yes |
+| [Policies](api-management-howto-policies.md)<sup>4</sup> | Yes | Yes | Yes | Yes | Yes |
| [Backup and restore](api-management-howto-disaster-recovery-backup-restore.md) | No | Yes | Yes | Yes | Yes |
| [Management over Git](api-management-configuration-repository-git.md) | No | Yes | Yes | Yes | Yes |
| Direct management API | No | Yes | Yes | Yes | Yes |
Each API Management [pricing tier](https://aka.ms/apimpricing) offers a distinct
<sup>1</sup> Enables the use of Azure AD (and Azure AD B2C) as an identity provider for user sign in on the developer portal.<br/>
<sup>2</sup> Including related functionality such as users, groups, issues, applications, and email templates and notifications.<br/>
<sup>3</sup> See [Gateway overview](api-management-gateways-overview.md#feature-comparison-managed-versus-self-hosted-gateways) for a feature comparison of managed versus self-hosted gateways. In the Developer tier self-hosted gateways are limited to a single gateway node. <br/>
-<sup>4</sup> The following policies aren't available in the Consumption tier: rate limit by key and quota by key. <br/>
+<sup>4</sup> See [Gateway overview](api-management-gateways-overview.md#policies) for differences in policy support in the dedicated, consumption, and self-hosted gateways. <br/>
<sup>5</sup> GraphQL subscriptions aren't supported in the Consumption tier.
api-management Api Management Gateways Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-gateways-overview.md
Previously updated : 08/04/2022 Last updated : 02/06/2023
api-management Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/private-endpoint.md
Title: Set up private endpoint for Azure API Management Preview
-description: Learn how to restrict access to an Azure API Management instance by using an Azure private endpoint and Azure Private Link.
+ Title: Set up inbound private endpoint for Azure API Management
+description: Learn how to restrict inbound access to an Azure API Management instance by using an Azure private endpoint and Azure Private Link.
Previously updated : 03/31/2022 Last updated : 03/20/2023
-# Connect privately to API Management using a private endpoint
+# Connect privately to API Management using an inbound private endpoint
-You can configure a [private endpoint](../private-link/private-endpoint-overview.md) for your API Management instance to allow clients in your private network to securely access the instance over [Azure Private Link](../private-link/private-link-overview.md).
+You can configure an inbound [private endpoint](../private-link/private-endpoint-overview.md) for your API Management instance to allow clients in your private network to securely access the instance over [Azure Private Link](../private-link/private-link-overview.md).
-* The private endpoint uses an IP address from your Azure VNet address space.
+* The private endpoint uses an IP address from an Azure VNet in which it's hosted.
* Network traffic between a client on your private network and API Management traverses over the VNet and a Private Link on the Microsoft backbone network, eliminating exposure from the public internet. * Configure custom DNS settings or an Azure DNS private zone to map the API Management hostname to the endpoint's private IP address. -
-With a private endpoint and Private Link, you can:
-- Create multiple Private Link connections to an API Management instance.
-
-- Use the private endpoint to send inbound traffic on a secure connection.
-
-- Use policy to distinguish traffic that comes from the private endpoint.
-
-- Limit incoming traffic only to private endpoints, preventing data exfiltration.

[!INCLUDE [api-management-private-endpoint](../../includes/api-management-private-endpoint.md)]
With a private endpoint and Private Link, you can:
## Limitations
-* Only the API Management instance's Gateway endpoint currently supports Private Link connections.
-* Each API Management instance currently supports at most 100 Private Link connections.
-* Connections are not supported on the [self-hosted gateway](self-hosted-gateway-overview.md).
+* Only the API Management instance's Gateway endpoint supports inbound Private Link connections.
+* Each API Management instance supports at most 100 Private Link connections.
+* Connections aren't supported on the [self-hosted gateway](self-hosted-gateway-overview.md).
## Prerequisites
When you use the Azure portal to create a private endpoint, as shown in the next
1. In the left-hand menu, select **Network**.
-1. Select **Private endpoint connections** > **+ Add endpoint**.
+1. Select **Inbound private endpoint connections** > **+ Add endpoint**.
:::image type="content" source="media/private-endpoint/add-endpoint-from-instance.png" alt-text="Add a private endpoint using Azure portal":::
When you use the Azure portal to create a private endpoint, as shown in the next
| Subscription | Select your subscription. | | Resource group | Select an existing resource group, or create a new one. It must be in the same region as your virtual network.| | **Instance details** | |
- | Name | Enter a name for the endpoint such as **myPrivateEndpoint**. |
+ | Name | Enter a name for the endpoint such as *myPrivateEndpoint*. |
+ | Network Interface Name | Enter a name for the network interface, such as *myInterface* |
| Region | Select a location for the private endpoint. It must be in the same region as your virtual network. It may differ from the region where your API Management instance is hosted. | 1. Select the **Resource** tab or the **Next: Resource** button at the bottom of the page. The following information about your API Management instance is already populated:
When you use the Azure portal to create a private endpoint, as shown in the next
:::image type="content" source="media/private-endpoint/create-private-endpoint.png" alt-text="Create a private endpoint in Azure portal":::
-1. Select the **Configuration** tab or the **Next: Configuration** button at the bottom of the screen.
+1. Select the **Virtual Network** tab or the **Next: Virtual Network** button at the bottom of the screen.
-1. In **Configuration**, enter or select this information:
+1. In **Networking**, enter or select this information:
| Setting | Value | | - | -- |
- | **Networking** | |
| Virtual network | Select your virtual network. | | Subnet | Select your subnet. |
- | **Private DNS integration** | |
+ | Private IP configuration | In most cases, select **Dynamically allocate IP address.** |
+ | Application security group | Optionally select an [application security group](../virtual-network/application-security-groups.md). |
+
+1. Select the **DNS** tab or the **Next: DNS** button at the bottom of the screen.
+
+1. In **Private DNS integration**, enter or select this information:
+
+ | Setting | Value |
+ | - | -- |
| Integrate with private DNS zone | Leave the default of **Yes**. |
| Subscription | Select your subscription. |
| Resource group | Select your resource group. |
- | Private DNS zones | Leave the default of **(new) privatelink.azure-api.net**.
+ | Private DNS zones | The default value is displayed: **(new) privatelink.azure-api.net**.
-1. Select **Review + create**.
+1. Select the **Tags** tab or the **Next: Tags** button at the bottom of the screen. Optionally, enter tags to organize your Azure resources.
+
+1. Select **Review + create**.
1. Select **Create**.

### List private endpoint connections to the instance
-After the private endpoint is created, it appears in the list on the API Management instance's **Private endpoint connections** page in the portal.
+After the private endpoint is created, it appears in the list on the API Management instance's **Inbound private endpoint connections** page in the portal.
You can also use the [Private Endpoint Connection - List By Service](/rest/api/apimanagement/current-ga/private-endpoint-connection/list-by-service) REST API to list private endpoint connections to the service instance.
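
A sketch of an alternative with the Azure CLI's generic private-link commands is shown below; the resource group and service names are placeholders, and it assumes your CLI version's `az network private-endpoint-connection` command group supports API Management resources:

```bash
# List inbound private endpoint connections for an API Management instance
az network private-endpoint-connection list \
    --resource-group myResourceGroup \
    --name myApimService \
    --type Microsoft.ApiManagement/service
```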
Use the following JSON body:
After the private endpoint is created, confirm its DNS settings in the portal:
-1. In the portal, navigate to the **Private Link Center**.
-1. Select **Private endpoints** and select the private endpoint you created.
+1. Navigate to your API Management service in the [Azure portal](https://portal.azure.com/).
+
+1. In the left-hand menu, select **Network** > **Inbound private endpoint connections**, and select the private endpoint you created.
+
1. In the left-hand navigation, select **DNS configuration**.
+
1. Review the DNS records and IP address of the private endpoint. The IP address is a private address in the address space of the subnet where the private endpoint is configured.

### Test in virtual network
To connect to 'Microsoft.ApiManagement/service/my-apim-service', please use the
## Next steps * Use [policy expressions](api-management-policy-expressions.md#ref-context-request) with the `context.request` variable to identify traffic from the private endpoint.
-* Learn more about [private endpoints](../private-link/private-endpoint-overview.md) and [Private Link](../private-link/private-link-overview.md).
+* Learn more about [private endpoints](../private-link/private-endpoint-overview.md) and [Private Link](../private-link/private-link-overview.md), including [Private Link pricing](https://azure.microsoft.com/pricing/details/private-link/).
* Learn more about [managing private endpoint connections](../private-link/manage-private-endpoint.md).
* [Troubleshoot Azure private endpoint connectivity problems](../private-link/troubleshoot-private-endpoint-connectivity.md).
* Use a [Resource Manager template](https://azure.microsoft.com/resources/templates/api-management-private-endpoint/) to create an API Management instance and a private endpoint with private DNS integration.
api-management Virtual Network Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/virtual-network-concepts.md
Previously updated : 05/26/2022 Last updated : 03/09/2023
API Management provides several options to secure access to your API Management
You can choose one of two integration modes: *external* or *internal*. They differ in whether inbound connectivity to the gateway and other API Management endpoints is allowed from the internet or only from within the virtual network.
-* **Enabling secure and private connectivity** to the API Management gateway using a *private endpoint* (preview).
+* **Enabling secure and private inbound connectivity** to the API Management gateway using a *private endpoint*.
The following table compares virtual networking options. For more information, see later sections of this article and links to detailed guidance.
The following table compares virtual networking options. For more information, s
|||||-|
|**[Virtual network - external](#virtual-network-integration)** | Developer, Premium | Azure portal, gateway, management plane, and Git repository | Inbound and outbound traffic can be allowed to internet, peered virtual networks, Express Route, and S2S VPN connections. | External access to private and on-premises backends |
|**[Virtual network - internal](#virtual-network-integration)** | Developer, Premium | Developer portal, gateway, management plane, and Git repository. | Inbound and outbound traffic can be allowed to peered virtual networks, Express Route, and S2S VPN connections. | Internal access to private and on-premises backends |
-|**[Private endpoint (preview)](#private-endpoint)** | Developer, Basic, Standard, Premium | Gateway only (managed gateway supported, self-hosted gateway not supported). | Only inbound traffic can be allowed from internet, peered virtual networks, Express Route, and S2S VPN connections. | Secure client connection to API Management gateway |
+|**[Inbound private endpoint](#inbound-private-endpoint)** | Developer, Basic, Standard, Premium | Gateway only (managed gateway supported, self-hosted gateway not supported). | Only inbound traffic can be allowed from internet, peered virtual networks, Express Route, and S2S VPN connections. | Secure client connection to API Management gateway |
## Virtual network integration

With Azure virtual networks (VNets), you can place ("inject") your API Management instance in a non-internet-routable network to which you control access. In a virtual network, your API Management instance can securely access other networked Azure resources and also connect to on-premises networks using various VPN technologies. To learn more about Azure VNets, start with the information in the [Azure Virtual Network Overview](../virtual-network/virtual-networks-overview.md).
Some virtual network limitations differ depending on the version (`stv2` or `stv
* A subnet containing API Management instances can't be moved across subscriptions.
* For multi-region API Management deployments configured in internal VNet mode, users own the routing and are responsible for managing the load balancing across multiple regions.
* To import an API to API Management from an [OpenAPI specification](import-and-publish.md), the specification URL must be hosted at a publicly accessible internet address.
-* Due to platform limitations, connectivity between a resource in a globally peered VNet in another region and an API Management service in internal mode won't work. For more information, see the [virtual network documentation](../virtual-network/virtual-network-manage-peering.md#requirements-and-constraints).
+* Due to platform limitations, connectivity between a resource in a globally peered VNet in another region and an API Management service in internal mode doesn't work. For more information, see the [virtual network documentation](../virtual-network/virtual-network-manage-peering.md#requirements-and-constraints).
-## Private endpoint
+## Inbound private endpoint
-API Management supports [private endpoints](../private-link/private-endpoint-overview.md). A private endpoint enables secure client connectivity to your API Management instance using a private IP address from your virtual network and Azure Private Link.
+API Management supports [private endpoints](../private-link/private-endpoint-overview.md) for secure inbound client connections to your API Management instance. Each secure connection uses a private IP address from your virtual network and Azure Private Link.
:::image type="content" source="media/virtual-network-concepts/api-management-private-endpoint.png" alt-text="Diagram showing a secure connection to API Management using private endpoint." lightbox="media/virtual-network-concepts/api-management-private-endpoint.png":::
-With a private endpoint and Private Link, you can:
-
-* Create multiple Private Link connections to an API Management instance.
-* Use the private endpoint to send inbound traffic on a secure connection.
-* Use policy to distinguish traffic that comes from the private endpoint.
-* Limit incoming traffic only to private endpoints, preventing data exfiltration.
- [!INCLUDE [api-management-private-endpoint](../../includes/api-management-private-endpoint.md)]
-For more information, see [Connect privately to API Management using a private endpoint](private-endpoint.md).
+For more information, see [Connect privately to API Management using an inbound private endpoint](private-endpoint.md).
## Advanced networking configurations
api-management Workspaces Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/workspaces-overview.md
Therefore, the following sample scenarios aren't currently supported in workspac
* Specifying API authorization server information (for example, for the developer portal)
+Workspace APIs can't be published to self-hosted gateways.
+ All resources in an API Management service need to have unique names, even if they are located in different workspaces.

## Next steps
app-service Create From Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/create-from-template.md
ms.assetid: 6eb7d43d-e820-4a47-818c-80ff7d3b6f8e Previously updated : 01/20/2023 Last updated : 03/27/2023
If you want to make an ASE, use this Resource Manager template [ASEv2][quickstar
* *existingVirtualNetworkResourceGroup*: This parameter defines the resource group name of the existing virtual network and subnet where ASE will reside.
* *subnetName*: This parameter defines the subnet name of the existing virtual network and subnet where ASE will reside.
* *internalLoadBalancingMode*: In most cases, set this to 3, which means both HTTP/HTTPS traffic on ports 80/443, and the control/data channel ports listened to by the FTP service on the ASE, will be bound to an ILB-allocated virtual network internal address. If this property is set to 2, only the FTP service-related ports (both control and data channels) are bound to an ILB address. If this property is set to 0, the HTTP/HTTPS traffic remains on the public VIP.
-* *dnsSuffix*: This parameter defines the default root domain that's assigned to the ASE. In the public variation of Azure App Service, the default root domain for all web apps is *azurewebsites.net*. Because an ILB ASE is internal to a customer's virtual network, it doesn't make sense to use the public service's default root domain. Instead, an ILB ASE should have a default root domain that makes sense for use within a company's internal virtual network. For example, Contoso Corporation might use a default root domain of *internal-contoso.com* for apps that are intended to be resolvable and accessible only within Contoso's virtual network.
+* *dnsSuffix*: This parameter defines the default root domain that's assigned to the ASE. In the public variation of Azure App Service, the default root domain for all web apps is *azurewebsites.net*. Because an ILB ASE is internal to a customer's virtual network, it doesn't make sense to use the public service's default root domain. Instead, an ILB ASE should have a default root domain that makes sense for use within a company's internal virtual network. For example, Contoso Corporation might use a default root domain of *internal-contoso.com* for apps that are intended to be resolvable and accessible only within Contoso's virtual network. To specify a custom root domain, you must use API version `2018-11-01` or earlier.
* *ipSslAddressCount*: This parameter automatically defaults to a value of 0 in the *azuredeploy.json* file because ILB ASEs only have a single ILB address. There are no explicit IP-SSL addresses for an ILB ASE. Hence, the IP-SSL address pool for an ILB ASE must be set to zero. Otherwise, a provisioning error occurs.

After the *azuredeploy.parameters.json* file is filled in, create the ASE by using the PowerShell code snippet. Change the file paths to match the Resource Manager template-file locations on your machine. Remember to supply your own values for the Resource Manager deployment name and the resource group name:
app-service Create Ilb Ase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/create-ilb-ase.md
description: Learn how to create an App Service environment with an internal loa
ms.assetid: 0f4c1fa4-e344-46e7-8d24-a25e247ae138 Previously updated : 02/28/2023 Last updated : 03/27/2023
To learn more about how to configure your ILB ASE with a WAF device, see [Confi
## ILB ASEs made before May 2019
-ILB ASEs that were made before May 2019 required you to set the domain suffix during ASE creation. They also required you to upload a default certificate that was based on that domain suffix. Also, with an older ILB ASE you can't perform single sign-on to the Kudu console with apps in that ILB ASE. When configuring DNS for an older ILB ASE, you need to set the wildcard A record in a zone that matches to your domain suffix.
+ILB ASEs that were made before May 2019 required you to set the domain suffix during ASE creation. They also required you to upload a default certificate that was based on that domain suffix. Also, with an older ILB ASE you can't perform single sign-on to the Kudu console with apps in that ILB ASE. When configuring DNS for an older ILB ASE, you need to set the wildcard A record in a zone that matches your domain suffix. Creating or changing an ILB ASE with a custom domain suffix requires you to use Azure Resource Manager templates and an API version prior to 2019. The last supported API version is `2018-11-01`.
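+
+A minimal sketch of deploying such a template with the Azure CLI (file and resource names are placeholders):
+
+```azurecli
+# Deploy an ILB ASE template whose hostingEnvironments resource pins apiVersion 2018-11-01
+az deployment group create \
+    --resource-group <resource-group> \
+    --name <deployment-name> \
+    --template-file azuredeploy.json \
+    --parameters @azuredeploy.parameters.json
+```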
## Get started ##
app-service Using https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/using.md
Title: Use an App Service Environment
description: Learn how to use your App Service Environment to host isolated applications. Previously updated : 02/14/2022 Last updated : 03/27/2023
To configure DNS in Azure DNS private zones:
1. Create an A record in that zone that points @ to the inbound IP address.
1. Create an A record in that zone that points *.scm to the inbound IP address.
-The DNS settings for the default domain suffix of your App Service Environment don't restrict your apps to only being accessible by those names. You can set a custom domain name without any validation on your apps in an App Service Environment. If you then want to create a zone named `contoso.net`, you can do so and point it to the inbound IP address. The custom domain name works for app requests, but doesn't work for the `scm` site. The `scm` site is only available at *&lt;appname&gt;.scm.&lt;asename&gt;.appserviceenvironment.net*.
+The DNS settings for the default domain suffix of your App Service Environment don't restrict your apps to only being accessible by those names. You can set a custom domain name without any validation on your apps in an App Service Environment. If you then want to create a zone named `contoso.net`, you can do so and point it to the inbound IP address. The custom domain name works for app requests, and if the custom domain suffix certificate includes a wildcard SAN for `scm`, the custom domain name also works for the `scm` site: create a `*.scm` record and point it to the inbound IP address.
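+
+For example, a sketch of creating those records in an Azure DNS private zone with the Azure CLI (the zone name and IP address are placeholders):
+
+```azurecli
+# Point the zone apex and the scm wildcard at the inbound IP address
+az network private-dns record-set a add-record --resource-group <resource-group> --zone-name contoso.net --record-set-name "@" --ipv4-address <inbound-ip-address>
+az network private-dns record-set a add-record --resource-group <resource-group> --zone-name contoso.net --record-set-name "*.scm" --ipv4-address <inbound-ip-address>
+```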
## Publishing
If you have multiple App Service Environments, you might want some of them to be
- **None**: Azure upgrades in no particular batch. This value is the default.
- **Early**: Upgrade in the first half of the App Service upgrades.
- **Late**: Upgrade in the second half of the App Service upgrades.
+- **Manual**: Get a [15-day window](./how-to-upgrade-preference.md) to deploy the upgrade manually.
Select the value you want, and then select **Save**.
app-service Manage Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-backup.md
The **Backups** page shows you the status of each backup. To get log details reg
| A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server). | Check that the connection string is valid. Allow the app's [outbound IPs](overview-inbound-outbound-ips.md) in the database server settings. |
| Cannot open server "\<name>" requested by the login. The login failed. | Check that the connection string is valid. |
| Missing mandatory parameters for valid Shared Access Signature. | Delete the backup schedule and reconfigure it. |
-| SSL connection is required. Please specify SSL options and retry. when trying to connect. | SSL connectivity to Azure Database for MySQL and Azure Database for PostgreSQL isn't supported for database backups. Use the native backup feature in the respective database instead. |
+| SSL connection is required. Please specify SSL options and retry when trying to connect. | SSL connectivity to Azure Database for MySQL and Azure Database for PostgreSQL isn't supported for database backups. Use the native backup feature in the respective database instead. |
## Automate with scripts
app-service Manage Create Arc Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-create-arc-environment.md
Title: 'Set up Azure Arc for App Service, Functions, and Logic Apps'
description: For your Azure Arc-enabled Kubernetes clusters, learn how to enable App Service apps, function apps, and logic apps. Previously updated : 11/02/2021 Last updated : 03/24/2023

# Set up an Azure Arc-enabled Kubernetes cluster to run App Service, Functions, and Logic Apps (Preview)
The [custom location](../azure-arc/kubernetes/custom-locations.md) in Azure is u
<!-- --kubeconfig ~/.kube/config # needed for non-Azure -->
+ > [!NOTE]
+ > If you experience issues creating a custom location on your cluster, you may need to [enable the custom location feature on your cluster](../azure-arc/kubernetes/custom-locations.md#enable-custom-locations-on-your-cluster). This step is required if you signed in to the CLI with a service principal, or if you're signed in as an Azure Active Directory user with restricted permissions on the cluster resource.
+ >
3. Validate that the custom location is successfully created with the following command. The output should show the `provisioningState` property as `Succeeded`. If not, run it again after a minute.
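
A sketch of such a check with the Azure CLI, assuming the `customlocation` CLI extension is installed (names are placeholders):

```azurecli
# Prints "Succeeded" when the custom location is ready
az customlocation show \
    --resource-group <resource-group> \
    --name <custom-location-name> \
    --query provisioningState \
    --output tsv
```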
app-service Monitor Instances Health Check https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/monitor-instances-health-check.md
In addition to configuring the Health check options, you can also configure the
| App setting name | Allowed values | Description |
|-|-|-|
|`WEBSITE_HEALTHCHECK_MAXPINGFAILURES` | 2 - 10 | The required number of failed requests for an instance to be deemed unhealthy and removed from the load balancer. For example, when set to `2`, your instances will be removed after `2` failed pings. (Default value is `10`) |
-|`WEBSITE_HEALTHCHECK_MAXUNHEALTHYWORKERPERCENT` | 1 - 100 | By default, no more than half of the instances will be excluded from the load balancer at one time to avoid overwhelming the remaining healthy instances. For example, if an App Service Plan is scaled to four instances and three are unhealthy, two will be excluded. The other two instances (one healthy and one unhealthy) will continue to receive requests. In the worst-case scenario where all instances are unhealthy, none will be excluded. <br /> To override this behavior, set app setting to a value between `0` and `100`. A higher value means more unhealthy instances will be removed (default value is `50`). |
+|`WEBSITE_HEALTHCHECK_MAXUNHEALTHYWORKERPERCENT` | 1 - 100 | By default, no more than half of the instances will be excluded from the load balancer at one time to avoid overwhelming the remaining healthy instances. For example, if an App Service Plan is scaled to four instances and three are unhealthy, two will be excluded. The other two instances (one healthy and one unhealthy) will continue to receive requests. In the worst-case scenario where all instances are unhealthy, none will be excluded. <br /> To override this behavior, set the app setting to a value between `1` and `100`. A higher value means more unhealthy instances will be removed (default value is `50`). |
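+
+For example, a sketch of setting both values with the Azure CLI (group and app names are placeholders):
+
+```azurecli
+# Remove an instance after 2 failed pings; allow up to half the instances to be excluded
+az webapp config appsettings set \
+    --resource-group <group-name> \
+    --name <app-name> \
+    --settings WEBSITE_HEALTHCHECK_MAXPINGFAILURES=2 WEBSITE_HEALTHCHECK_MAXUNHEALTHYWORKERPERCENT=50
+```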
#### Authentication and security
app-service Overview Inbound Outbound Ips https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-inbound-outbound-ips.md
az webapp show --resource-group <group_name> --name <app_name> --query possibleO
```

## Get a static outbound IP
-You can control the IP address of outbound traffic from your app by using regional VNet integration together with a virtual network NAT gateway to direct traffic through a static public IP address. [Regional VNet integration](./overview-vnet-integration.md) is available on **Standard**, **Premium**, **PremiumV2** and **PremiumV3** App Service plans. To learn more about this setup, see [NAT gateway integration](./networking/nat-gateway-integration.md).
+You can control the IP address of outbound traffic from your app by using regional VNet integration together with a virtual network NAT gateway to direct traffic through a static public IP address. [Regional VNet integration](./overview-vnet-integration.md) is available on **Basic**, **Standard**, **Premium**, **PremiumV2** and **PremiumV3** App Service plans. To learn more about this setup, see [NAT gateway integration](./networking/nat-gateway-integration.md).
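+
+A rough sketch of that setup with the Azure CLI (resource names are placeholders; see the linked articles for subnet requirements):
+
+```azurecli
+# Create a static public IP and a NAT gateway, attach the gateway to the integration
+# subnet, then connect the app to that subnet with regional VNet integration
+az network public-ip create --resource-group <group-name> --name <public-ip-name> --sku Standard --allocation-method Static
+az network nat gateway create --resource-group <group-name> --name <nat-gateway-name> --public-ip-addresses <public-ip-name>
+az network vnet subnet update --resource-group <group-name> --vnet-name <vnet-name> --name <subnet-name> --nat-gateway <nat-gateway-name>
+az webapp vnet-integration add --resource-group <group-name> --name <app-name> --vnet <vnet-name> --subnet <subnet-name>
+```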
## Next steps

Learn how to restrict inbound traffic by source IP addresses.

> [!div class="nextstepaction"]
-> [Static IP restrictions](app-service-ip-restrictions.md)
+> [Static IP restrictions](app-service-ip-restrictions.md)
app-service Overview Name Resolution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-name-resolution.md
When your app needs to resolve a domain name using DNS, the app sends a name res
The individual app allows you to override the DNS configuration by specifying the `dnsServers` property in the `dnsConfiguration` site property object. You can specify up to five custom DNS servers. You can configure custom DNS servers using the Azure CLI:

```azurecli-interactive
-az resource update --resource-group <group-name> --name <app-name> --resource-type "Microsoft.Web/sites" --set properties.dnsConfiguration.dnsServers="['168.63.169.16','1.1.1.1']"
+az resource update --resource-group <group-name> --name <app-name> --resource-type "Microsoft.Web/sites" --set properties.dnsConfiguration.dnsServers="['168.63.129.16','xxx.xxx.xxx.xxx']"
```

You can still use the existing `WEBSITE_DNS_SERVER` app setting, and you can add custom DNS servers with either setting. If you want to add multiple DNS servers using the app setting, you must separate the servers by commas with no blank spaces added.
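
For example, a sketch of setting two DNS servers through the app setting (addresses are placeholders):

```azurecli
# Comma-separated, with no blank spaces between the servers
az webapp config appsettings set \
    --resource-group <group-name> \
    --name <app-name> \
    --settings WEBSITE_DNS_SERVER="168.63.129.16,xxx.xxx.xxx.xxx"
```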
app-service Troubleshoot Diagnostic Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/troubleshoot-diagnostic-logs.md
This article uses the [Azure portal](https://portal.azure.com) and Azure CLI to
| Failed request tracing | Windows | App Service file system | Detailed tracing information on failed requests, including a trace of the IIS components used to process the request and the time taken in each component. It's useful if you want to improve site performance or isolate a specific HTTP error. One folder is generated for each failed request, which contains the XML log file, and the XSL stylesheet to view the log file with. |
| Deployment logging | Windows, Linux | App Service file system | Logs for when you publish content to an app. Deployment logging happens automatically and there are no configurable settings for deployment logging. It helps you determine why a deployment failed. For example, if you use a [custom deployment script](https://github.com/projectkudu/kudu/wiki/Custom-Deployment-Script), you might use deployment logging to determine why the script is failing. |
+When stored in the App Service file system, logs are subject to the available storage for your pricing tier (see [App Service limits](../azure-resource-manager/management/azure-subscription-service-limits.md#app-service-limits)).
> [!NOTE]
> App Service provides a dedicated, interactive diagnostics tool to help you troubleshoot your application. For more information, see [Azure App Service diagnostics overview](overview-diagnostics.md).
>
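
To turn on file-system logging from the command line, a minimal sketch with the Azure CLI (names are placeholders):

```azurecli
# Enable web server logging and application logging to the App Service file system
az webapp log config \
    --resource-group <group-name> \
    --name <app-name> \
    --web-server-logging filesystem \
    --application-logging filesystem
```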
app-service Tutorial Connect App Access Sql Database As User Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-app-access-sql-database-as-user-dotnet.md
+
+ Title: 'Tutorial - Web app accesses SQL Database as the user'
+description: Secure database connectivity with Azure Active Directory authentication from .NET web app, using the signed-in user. Learn how to apply it to other Azure services.
+++++
+ms.devlang: csharp
+ Last updated : 04/21/2023+
+# Tutorial: Connect an App Service app to SQL Database on behalf of the signed-in user
+
+This tutorial shows you how to enable [built-in authentication](overview-authentication-authorization.md) in an [App Service](overview.md) app using the Azure Active Directory authentication provider, then extend it by connecting it to a back-end Azure SQL Database by impersonating the signed-in user (also known as the [on-behalf-of flow](../active-directory/develop/v2-oauth2-on-behalf-of-flow.md)). This is a more advanced connectivity approach than [Tutorial: Access data with managed identity](tutorial-connect-msi-sql-database.md) and has the following advantages in enterprise scenarios:
+
+- Eliminates connection secrets to back-end services, just like the managed identity approach.
+- Gives the back-end database (or any other Azure service) more control over who can access its data and functionality, and how much access to grant.
+- Lets the app tailor its data presentation to the signed-in user.
+
+In this tutorial, you add Azure Active Directory authentication to the sample web app you deployed in one of the following tutorials:
+
+- [Tutorial: Build an ASP.NET app in Azure with Azure SQL Database](app-service-web-tutorial-dotnet-sqldatabase.md)
+- [Tutorial: Build an ASP.NET Core and Azure SQL Database app in Azure App Service](tutorial-dotnetcore-sqldb-app.md)
+
+When you're finished, your sample app will authenticate users and connect to SQL Database securely on behalf of the signed-in user.
++
+> [!NOTE]
+> The steps covered in this tutorial support the following versions:
+>
+> - .NET Framework 4.8 and higher
+> - .NET 6.0 and higher
+>
+
+What you will learn:
+
+> [!div class="checklist"]
+> * Enable built-in authentication for Azure SQL Database
+> * Disable other authentication options in Azure SQL Database
+> * Enable App Service authentication
+> * Use Azure Active Directory as the identity provider
+> * Access Azure SQL Database on behalf of the signed-in Azure AD user
+
+> [!NOTE]
+>Azure AD authentication is _different_ from [Integrated Windows authentication](/previous-versions/windows/it-pro/windows-server-2003/cc758557(v=ws.10)) in on-premises Active Directory (AD DS). AD DS and Azure AD use completely different authentication protocols. For more information, see [Azure AD Domain Services documentation](../active-directory-domain-services/index.yml).
++
+## Prerequisites
+
+This article continues where you left off in either one of the following tutorials:
+
+- [Tutorial: Build an ASP.NET app in Azure with SQL Database](app-service-web-tutorial-dotnet-sqldatabase.md)
+- [Tutorial: Build an ASP.NET Core and SQL Database app in Azure App Service](tutorial-dotnetcore-sqldb-app.md).
+
+If you haven't already, follow one of the two tutorials first. Alternatively, you can adapt the steps for your own .NET app with SQL Database.
+
+Prepare your environment for the Azure CLI.
++
+## 1. Configure database server with Azure AD authentication
+
+First, enable Azure Active Directory authentication to SQL Database by assigning an Azure AD user as the admin of the server. This user is different from the Microsoft account you used to sign up for your Azure subscription. It must be a user that you created, imported, synced, or invited into Azure AD. For more information on allowed Azure AD users, see [Azure AD features and limitations in SQL Database](/azure/azure-sql/database/authentication-aad-overview#azure-ad-features-and-limitations).
+
+1. If your Azure AD tenant doesn't have a user yet, create one by following the steps at [Add or delete users using Azure Active Directory](../active-directory/fundamentals/add-users-azure-active-directory.md).
+
+1. Find the object ID of the Azure AD user using the [`az ad user list`](/cli/azure/ad/user#az_ad_user_list) command and replace *\<user-principal-name>*. The result is saved to a variable.
+
+ ```azurecli-interactive
+ azureaduser=$(az ad user list --filter "userPrincipalName eq '<user-principal-name>'" --query [].id --output tsv)
+ ```
+
+ > [!TIP]
+ > To see the list of all user principal names in Azure AD, run `az ad user list --query [].userPrincipalName`.
+ >
+
+1. Add this Azure AD user as an Active Directory admin using [`az sql server ad-admin create`](/cli/azure/sql/server/ad-admin#az_sql_server_ad_admin_create) command in the Cloud Shell. In the following command, replace *\<server-name>* with the server name (without the `.database.windows.net` suffix).
+
+ ```azurecli-interactive
+ az sql server ad-admin create --resource-group <group-name> --server-name <server-name> --display-name ADMIN --object-id $azureaduser
+ ```
+
+1. Restrict the database server authentication to Active Directory authentication. This step effectively disables SQL authentication.
+
+ ```azurecli-interactive
+ az sql server ad-only-auth enable --resource-group <group-name> --server-name <server-name>
+ ```
+
+For more information on adding an Active Directory admin, see [Provision Azure AD admin (SQL Database)](/azure/azure-sql/database/authentication-aad-configure#provision-azure-ad-admin-sql-database).
+
+## 2. Enable user authentication for your app
+
+You enable authentication with Azure Active Directory as the identity provider. For more information, see [Configure Azure Active Directory authentication for your App Services application](configure-authentication-provider-aad.md).
+
+1. In the [Azure portal](https://portal.azure.com) menu, select **Resource groups** or search for and select *Resource groups* from any page.
+
+1. In **Resource groups**, find and select your resource group, then select your app.
+
+1. In your app's left menu, select **Authentication**, and then select **Add identity provider**.
+
+1. In the **Add an identity provider** page, select **Microsoft** as the **Identity provider** to sign in Microsoft and Azure AD identities.
+
+1. Accept the default settings and select **Add**.
+
+ :::image type="content" source="./media/tutorial-connect-app-access-sql-database-as-user-dotnet/add-azure-ad-provider.png" alt-text="Screenshot showing the add identity provider page." lightbox="./media/tutorial-connect-app-access-sql-database-as-user-dotnet/add-azure-ad-provider.png":::
+
+> [!TIP]
+> If you run into errors and reconfigure your app's authentication settings, the tokens in the token store may not be regenerated from the new settings. To make sure your tokens are regenerated, you need to sign out and sign back in to your app. An easy way to do this is to use your browser in private mode, closing and reopening it after you change the settings in your apps.
+
+## 3. Configure user impersonation to SQL Database
+
+Currently, your Azure app connects to SQL Database using SQL authentication (username and password) managed as app settings. In this step, you give the app permissions to access SQL Database on behalf of the signed-in Azure AD user.
+
+1. In the **Authentication** page for the app, select your app name under **Identity provider**. This app registration was automatically generated for you. Select **API permissions** in the left menu.
+
+1. Select **Add a permission**, then select **APIs my organization uses**.
+
+1. Type *Azure SQL Database* in the search box and select the result.
+
+1. In the **Request API permissions** page for Azure SQL Database, select **Delegated permissions** and **user_impersonation**, then select **Add permissions**.
+
+ :::image type="content" source="./media/tutorial-connect-app-access-sql-database-as-user-dotnet/select-permission.png" alt-text="Screenshot of the Request API permissions page showing Delegated permissions, user_impersonation, and the Add permission button selected." lightbox="./media/tutorial-connect-app-access-sql-database-as-user-dotnet/select-permission.png":::
+
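+If you prefer to script this step, a rough sketch with the Azure CLI (all IDs are placeholders; the GUIDs come from the first command's output):
+
+```azurecli
+# Look up the Azure SQL Database service principal, its appId, and the
+# user_impersonation delegated scope ID
+az ad sp show --id https://database.windows.net/ --query "{appId:appId, scopes:oauth2PermissionScopes[].{id:id, value:value}}"
+
+# Add and grant the delegated permission on your app registration
+az ad app permission add --id <app-registration-client-id> --api <sql-db-app-id> --api-permissions <user-impersonation-scope-id>=Scope
+az ad app permission grant --id <app-registration-client-id> --api <sql-db-app-id> --scope user_impersonation
+```
+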
+## 4. Configure App Service to return a usable access token
+
+The app registration in Azure Active Directory now has the required permissions to connect to SQL Database by impersonating the signed-in user. Next, you configure your App Service app to give you a usable access token.
+
+In the Cloud Shell, run the following commands on the app to add the `scope` parameter to the authentication setting `identityProviders.azureActiveDirectory.login.loginParameters`.
+
+```azurecli-interactive
+authSettings=$(az webapp auth show --resource-group <group-name> --name <app-name>)
+authSettings=$(echo "$authSettings" | jq '.properties' | jq '.identityProviders.azureActiveDirectory.login += {"loginParameters":["scope=openid profile email offline_access https://database.windows.net/user_impersonation"]}')
+az webapp auth set --resource-group <group-name> --name <app-name> --body "$authSettings"
+```
+
+The commands effectively add a `loginParameters` property with extra custom scopes. Here's an explanation of the requested scopes:
+
+- `openid`, `profile`, and `email` are requested by App Service by default already. For information, see [OpenID Connect Scopes](../active-directory/develop/v2-permissions-and-consent.md#openid-connect-scopes).
+- `https://database.windows.net/user_impersonation` refers to Azure SQL Database. It's the scope that gives you a JWT token that includes SQL Database as a [token audience](https://wikipedia.org/wiki/JSON_Web_Token).
+- [offline_access](../active-directory/develop/v2-permissions-and-consent.md#offline_access) is included here for convenience (in case you want to [refresh tokens](#what-happens-when-access-tokens-expire)).
+
+> [!TIP]
+> To configure the required scopes using a web interface instead, see the Microsoft steps at [Refresh auth tokens](configure-authentication-oauth-tokens.md#refresh-auth-tokens).
+
+Your app is now configured. It can now generate a token that SQL Database accepts.
+
+## 5. Use the access token in your application code
+
+The steps you follow for your project depend on whether you're using [Entity Framework](/ef/ef6/) (default for ASP.NET) or [Entity Framework Core](/ef/core/) (default for ASP.NET Core).
+
+# [Entity Framework](#tab/ef)
+
+1. In Visual Studio, open the Package Manager Console and update Entity Framework:
+
+ ```powershell
+ Update-Package EntityFramework
+ ```
+
+1. In your DbContext object (in *Models/MyDbContext.cs*), add the following code to the default constructor.
+
+    ```csharp
+    // Attach the access token that App Service authentication passes in the
+    // X-MS-TOKEN-AAD-ACCESS-TOKEN request header to the SQL connection
+    var conn = (System.Data.SqlClient.SqlConnection)Database.Connection;
+    conn.AccessToken = System.Web.HttpContext.Current.Request.Headers["X-MS-TOKEN-AAD-ACCESS-TOKEN"];
+    ```
+
+# [Entity Framework Core](#tab/efcore)
+
+In your `DbContext` object (in *Models/MyDbContext.cs*), change the default constructor to the following.
+
+```csharp
+public MyDatabaseContext (DbContextOptions<MyDatabaseContext> options, IHttpContextAccessor accessor)
+ : base(options)
+{
+    // Attach the access token from App Service authentication (passed in the
+    // X-MS-TOKEN-AAD-ACCESS-TOKEN request header) to the SQL connection
+    var conn = Database.GetDbConnection() as SqlConnection;
+    conn.AccessToken = accessor.HttpContext.Request.Headers["X-MS-TOKEN-AAD-ACCESS-TOKEN"];
+}
+```
+
+--
+
+> [!NOTE]
+> The code adds the access token supplied by App Service authentication to the connection object.
+>
+> This code change doesn't work locally. For more information, see [How do I debug locally when using App Service authentication?](#how-do-i-debug-locally-when-using-app-service-authentication).
+
+## 6. Publish your changes
+
+# [ASP.NET](#tab/dotnet)
+
+1. **If you came from [Tutorial: Build an ASP.NET app in Azure with SQL Database](app-service-web-tutorial-dotnet-sqldatabase.md)**, you set a connection string in App Service using SQL authentication, with a username and password. Use the following command to remove the connection secrets, but replace *\<group-name>*, *\<app-name>*, *\<db-server-name>*, and *\<db-name>* with yours.
+
+ ```azurecli-interactive
+ az webapp config connection-string set --resource-group <group-name> --name <app-name> --type SQLAzure --settings MyDbConnection="server=tcp:<db-server-name>.database.windows.net;database=<db-name>;"
+ ```
+
+1. Publish your changes in Visual Studio. In the **Solution Explorer**, right-click your **DotNetAppSqlDb** project and select **Publish**.
+
+ :::image type="content" source="./media/app-service-web-tutorial-dotnet-sqldatabase/solution-explorer-publish.png" alt-text="Screenshot showing how to publish from the Solution Explorer in Visual Studio." lightbox="./media/app-service-web-tutorial-dotnet-sqldatabase/solution-explorer-publish.png":::
+
+1. In the publish page, select **Publish**.
+
+# [ASP.NET Core](#tab/dotnetcore)
+
+1. **If you came from [Tutorial: Build an ASP.NET Core and SQL Database app in Azure App Service](tutorial-dotnetcore-sqldb-app.md)**, you have a connection string called `defaultConnection` in App Service using SQL authentication, with a username and password. Use the following command to remove the connection secrets, but replace *\<group-name>*, *\<app-name>*, *\<db-server-name>*, and *\<db-name>* with yours.
+
+ ```azurecli-interactive
+ az webapp config connection-string set --resource-group <group-name> --name <app-name> --type SQLAzure --settings defaultConnection="server=tcp:<db-server-name>.database.windows.net;database=<db-name>;"
+ ```
+
+1. Make your code changes in your GitHub fork, using Visual Studio Code in the browser. From the left menu, select **Source Control**.
+
+1. Type in a commit message like `OBO connect` and select **Commit**.
+
+ The commit triggers a GitHub Actions deployment to App Service. Wait a few minutes for the deployment to finish.
+
+--
+
+When the new webpage shows your to-do list, your app is connecting to the database on behalf of the signed-in Azure AD user.
+
+![Azure app after Code First Migration](./media/app-service-web-tutorial-dotnet-sqldatabase/this-one-is-done.png)
+
+You should now be able to edit the to-do list as before.
+
+## 7. Clean up resources
+
+In the preceding steps, you created Azure resources in a resource group. If you don't expect to need these resources in the future, delete the resource group by running the following command in the Cloud Shell:
+
+```azurecli-interactive
+az group delete --name <group-name>
+```
+
+This command may take a minute to run.
+
+## Frequently asked questions
+
+- [Why do I get a `Login failed for user '<token-identified principal>'.` error?](#why-do-i-get-a-login-failed-for-user-token-identified-principal-error)
+- [How do I add other Azure AD users or groups in Azure SQL Database?](#how-do-i-add-other-azure-ad-users-or-groups-in-azure-sql-database)
+- [How do I debug locally when using App Service authentication?](#how-do-i-debug-locally-when-using-app-service-authentication)
+- [What happens when access tokens expire?](#what-happens-when-access-tokens-expire)
+
+#### Why do I get a `Login failed for user '<token-identified principal>'.` error?
+
+The most common causes of this error are:
+
+- You're running the code locally, and there's no valid token in the `X-MS-TOKEN-AAD-ACCESS-TOKEN` request header. See [How do I debug locally when using App Service authentication?](#how-do-i-debug-locally-when-using-app-service-authentication).
+- Azure AD authentication isn't configured on your SQL Database.
+- The signed-in user isn't permitted to connect to the database. See [How do I add other Azure AD users or groups in Azure SQL Database?](#how-do-i-add-other-azure-ad-users-or-groups-in-azure-sql-database).
+
+#### How do I add other Azure AD users or groups in Azure SQL Database?
+
+1. Connect to your database server, such as with [sqlcmd](/azure/azure-sql/database/authentication-aad-configure#sqlcmd) or [SSMS](/azure/azure-sql/database/authentication-aad-configure#connect-to-the-database-using-ssms-or-ssdt).
+1. [Create contained users mapped to Azure AD identities](/azure/azure-sql/database/authentication-aad-configure#create-contained-users-mapped-to-azure-ad-identities) in SQL Database documentation.
+
+ The following Transact-SQL example adds an Azure AD identity to SQL Server and gives it some database roles:
+
+ ```sql
+ CREATE USER [<user-or-group-name>] FROM EXTERNAL PROVIDER;
+ ALTER ROLE db_datareader ADD MEMBER [<user-or-group-name>];
+ ALTER ROLE db_datawriter ADD MEMBER [<user-or-group-name>];
+ ALTER ROLE db_ddladmin ADD MEMBER [<user-or-group-name>];
+ GO
+ ```
+
+#### How do I debug locally when using App Service authentication?
+
+Because App Service authentication is a feature in Azure, it's not possible for the same code to work in your local environment. Unlike the app running in Azure, your local code doesn't benefit from the authentication middleware from App Service. You have a few alternatives:
+
+- Connect to SQL Database from your local environment with [`Active Directory Interactive`](/sql/connect/ado-net/sql/azure-active-directory-authentication#using-active-directory-interactive-authentication). The authentication flow doesn't sign in the user to the app itself, but it does connect to the back-end database with the signed-in user, and allows you to test database authorization locally.
+- Manually copy the access token from `https://<app-name>.azurewebsites.net/.auth/me` into your code, in place of the `X-MS-TOKEN-AAD-ACCESS-TOKEN` request header.
+- If you deploy from Visual Studio, use remote debugging of your App Service app.
+
+#### What happens when access tokens expire?
+
+Your access token expires after some time. For information on how to refresh your access tokens without requiring users to reauthenticate with your app, see [Refresh identity provider tokens](configure-authentication-oauth-tokens.md#refresh-auth-tokens).
+
+## Next steps
+
+What you learned:
+
+> [!div class="checklist"]
+> * Enable built-in authentication for Azure SQL Database
+> * Disable other authentication options in Azure SQL Database
+> * Enable App Service authentication
+> * Use Azure Active Directory as the identity provider
+> * Access Azure SQL Database on behalf of the signed-in Azure AD user
+
+> [!div class="nextstepaction"]
+> [Map an existing custom DNS name to Azure App Service](app-service-web-tutorial-custom-domain.md)
+
+> [!div class="nextstepaction"]
+> [Tutorial: Access Microsoft Graph from a secured .NET app as the app](scenario-secure-app-access-microsoft-graph-as-app.md)
+
+> [!div class="nextstepaction"]
+> [Tutorial: Isolate back-end communication with Virtual Network integration](tutorial-networking-isolate-vnet.md)
app-service Tutorial Connect Msi Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-msi-sql-database.md
# Tutorial: Connect to SQL Database from .NET App Service without secrets using a managed identity
-[App Service](overview.md) provides a highly scalable, self-patching web hosting service in Azure. It also provides a [managed identity](overview-managed-identity.md) for your app, which is a turn-key solution for securing access to [Azure SQL Database](/azure/sql-database/) and other Azure services. Managed identities in App Service make your app more secure by eliminating secrets from your app, such as credentials in the connection strings. In this tutorial, you'll add managed identity to the sample web app you built in one of the following tutorials:
+[App Service](overview.md) provides a highly scalable, self-patching web hosting service in Azure. It also provides a [managed identity](overview-managed-identity.md) for your app, which is a turn-key solution for securing access to [Azure SQL Database](/azure/sql-database/) and other Azure services. Managed identities in App Service make your app more secure by eliminating secrets from your app, such as credentials in the connection strings. In this tutorial, you add managed identity to the sample web app you built in one of the following tutorials:
- [Tutorial: Build an ASP.NET app in Azure with Azure SQL Database](app-service-web-tutorial-dotnet-sqldatabase.md) - [Tutorial: Build an ASP.NET Core and Azure SQL Database app in Azure App Service](tutorial-dotnetcore-sqldb-app.md)
The steps you follow for your project depends on whether you're using [Entity Fr
conn.AccessToken = token.Token;
```
- This code uses [Azure.Identity.DefaultAzureCredential](/dotnet/api/azure.identity.defaultazurecredential) to get a useable token for SQL Database from Azure Active Directory and then adds it to the database connection. While you can customize `DefaultAzureCredential`, by default it's already very versatile. When running in App Service, it uses app's system-assigned managed identity. When running locally, it can get a token using the logged-in identity of Visual Studio, Visual Studio Code, Azure CLI, and Azure PowerShell.
+ This code uses [Azure.Identity.DefaultAzureCredential](/dotnet/api/azure.identity.defaultazurecredential) to get a usable token for SQL Database from Azure Active Directory and then adds it to the database connection. While you can customize `DefaultAzureCredential`, by default it's already versatile. When it runs in App Service, it uses the app's system-assigned managed identity. When it runs locally, it can get a token using the logged-in identity of Visual Studio, Visual Studio Code, Azure CLI, and Azure PowerShell.
1. In *Web.config*, find the connection string called `MyDbConnection` and replace its `connectionString` value with `"server=tcp:<server-name>.database.windows.net;database=<db-name>;"`. Replace _\<server-name>_ and _\<db-name>_ with your server name and database name. This connection string is used by the default constructor in *Models/MyDbContext.cs*.
- That's every thing you need to connect to SQL Database. When debugging in Visual Studio, your code uses the Azure AD user you configured in [2. Set up your dev environment](#2-set-up-your-dev-environment). You'll set up SQL Database later to allow connection from the managed identity of your App Service app.
+ That's everything you need to connect to SQL Database. When you debug in Visual Studio, your code uses the Azure AD user you configured in [2. Set up your dev environment](#2-set-up-your-dev-environment). You'll set up SQL Database later to allow connection from the managed identity of your App Service app.
1. Type `Ctrl+F5` to run the app again. The same CRUD app in your browser is now connecting to the Azure SQL Database directly, using Azure AD authentication. This setup lets you run database migrations from Visual Studio.
The steps you follow for your project depends on whether you're using [Entity Fr
> The [Active Directory Default](/sql/connect/ado-net/sql/azure-active-directory-authentication#using-active-directory-default-authentication) authentication type can be used both on your local machine and in Azure App Service. The driver attempts to acquire a token from Azure Active Directory using various means. If the app is deployed, it gets a token from the app's managed identity. If the app is running locally, it tries to get a token from Visual Studio, Visual Studio Code, and Azure CLI. >
- That's everything you need to connect to SQL Database. When debugging in Visual Studio, your code uses the Azure AD user you configured in [2. Set up your dev environment](#2-set-up-your-dev-environment). You'll set up SQL Database later to allow connection from the managed identity of your App Service app. The `DefaultAzureCredential` class caches the token in memory and retrieves it from Azure AD just before expiration. You don't need any custom code to refresh the token.
+ That's everything you need to connect to SQL Database. When you debug in Visual Studio, your code uses the Azure AD user you configured in [2. Set up your dev environment](#2-set-up-your-dev-environment). You'll set up SQL Database later to allow connection from the managed identity of your App Service app. The `DefaultAzureCredential` class caches the token in memory and retrieves it from Azure AD just before expiration. You don't need any custom code to refresh the token.
1. Type `Ctrl+F5` to run the app again. The same CRUD app in your browser is now connecting to the Azure SQL Database directly, using Azure AD authentication. This setup lets you run database migrations from Visual Studio.
What you learned:
> [!div class="nextstepaction"] > [Secure with custom domain and certificate](tutorial-secure-domain-certificate.md)
+> [!div class="nextstepaction"]
+> [Tutorial: Connect an App Service app to SQL Database on behalf of the signed-in user](tutorial-connect-app-access-sql-database-as-user-dotnet.md)
+ > [!div class="nextstepaction"] > [Tutorial: Connect to Azure databases from App Service without secrets using a managed identity](tutorial-connect-msi-azure-database.md)
application-gateway Application Gateway Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-diagnostics.md
Previously updated : 03/21/2022 Last updated : 03/24/2023
You can monitor Azure Application Gateway resources in the following ways:
* [Logs](#diagnostic-logging): Logs allow for performance, access, and other data to be saved or consumed from a resource for monitoring purposes.
-* [Metrics](application-gateway-metrics.md): Application Gateway has several metrics which help you verify that your system is performing as expected.
+* [Metrics](application-gateway-metrics.md): Application Gateway has several metrics that help you verify your system is performing as expected.
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
The access log is generated only if you've enabled it on each Application Gatewa
|httpVersion | HTTP version of the request. |
|receivedBytes | Size of packet received, in bytes. |
|sentBytes| Size of packet sent, in bytes.|
-|clientResponseTime| Length of time (in **seconds**) that it takes for the first byte of a client request to be processed and the first byte sent in the response to the client. |
+|clientResponseTime| Time difference (in **seconds**) between first byte received from the backend to first byte sent to the client. |
|timeTaken| Length of time (in **seconds**) that it takes for the first byte of a client request to be processed and its last-byte sent in the response to the client. It's important to note that the Time-Taken field usually includes the time that the request and response packets are traveling over the network. |
|WAFEvaluationTime| Length of time (in **seconds**) that it takes for the request to be processed by the WAF. |
|WAFMode| Value can be either Detection or Prevention |
The access log is generated only if you've enabled it on each Application Gatewa
|sentBytes| Size of packet sent, in bytes.|
|timeTaken| Length of time (in milliseconds) that it takes for a request to be processed and its response to be sent. This is calculated as the interval from the time when Application Gateway receives the first byte of an HTTP request to the time when the response send operation finishes. It's important to note that the Time-Taken field usually includes the time that the request and response packets are traveling over the network. |
|sslEnabled| Whether communication to the backend pools used TLS/SSL. Valid values are on and off.|
-|host| The hostname with which the request has been sent to the backend server. If backend hostname is being overridden, this name will reflect that.|
+|host| The hostname with which the request has been sent to the backend server. If backend hostname is being overridden, this name reflects that.|
|originalHost| The hostname with which the request was received by the Application Gateway from the client.|

```json
application-gateway Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/features.md
Previously updated : 03/17/2023 Last updated : 03/24/2023
For more information, see [Application Gateway redirect overview](redirect-overv
## Session affinity
-The cookie-based session affinity feature is useful when you want to keep a user session on the same server. By using gateway-managed cookies, the Application Gateway can direct subsequent traffic from a user session to the same server for processing. This is important in cases where session state is saved locally on the server for a user session.
+The cookie-based session affinity feature is useful when you want to keep a user session on the same server. Using gateway-managed cookies, the Application Gateway can direct subsequent traffic from a user session to the same server for processing. This is important in cases where session state is saved locally on the server for a user session.
For more information, see [How an application gateway works](how-application-gateway-works.md#modifications-to-the-request).
For more information, see [WebSocket support](application-gateway-websocket.md)
## Connection draining
-Connection draining helps you achieve graceful removal of backend pool members during planned service updates or problems with backend health. This setting is enabled via the [Backend Setting](configuration-http-settings.md) and is applied to all backend pool members during rule creation. Once enabled, the aplication gateway ensures all deregistering instances of a backend pool don't receive any new requests while allowing existing requests to complete within a configured time limit. It applies to cases where backend instances are
-- explicitly removed from the backend pool after a configuration change by a user,
+Connection draining helps you achieve graceful removal of backend pool members during planned service updates or problems with backend health. This setting is enabled via the [Backend Setting](configuration-http-settings.md) and is applied to all backend pool members during rule creation. Once enabled, the application gateway ensures all deregistering instances of a backend pool don't receive any new requests while allowing existing requests to complete within a configured time limit. It applies to cases where backend instances are:
+- explicitly removed from the backend pool after a configuration change by a user
- reported as unhealthy by the health probes, or
-- removed during a scale-in operation.
+- removed during a scale-in operation
-The only exception is when requests continue to be proxied to the deregistering instances because of gateway-managed session affinity.
+The only exception is when requests continue to be proxied to the deregistering instances because of gateway-managed session affinity.
-The connection draining is honored for WebSocket connections as well. For information on time limits, see [Backend Settings configuration](configuration-http-settings.md#connection-draining).
+The connection draining is honored for WebSocket connections as well. Connection draining is invoked for every single update to the gateway. To prevent connection loss to existing members of the backend pool, make sure to enable connection draining.
+
+For information on time limits, see [Backend Settings configuration](configuration-http-settings.md#connection-draining).
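+
+For example, a sketch of setting a drain window on an existing backend setting with the Azure CLI (names are placeholders):
+
+```azurecli
+# Give in-flight requests up to 60 seconds to complete before an instance is deregistered
+az network application-gateway http-settings update \
+    --resource-group <resource-group> \
+    --gateway-name <gateway-name> \
+    --name <backend-setting-name> \
+    --connection-draining-timeout 60
+```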
## Custom error pages
HTTP headers allow the client and server to pass additional information with the
- Removing response header fields that can reveal sensitive information.
- Stripping port information from X-Forwarded-For headers.
-Application Gateway and WAF v2 SKU supports the capability to add, remove, or update HTTP request and response headers, while the request and response packets move between the client and backend pools. You can also rewrite URLs, query string parameters and host name. With URL rewrite and URL path-based routing, you can choose to either route requests to one of the backend pools based on the original path or the rewritten path, using the re-evaluate path map option.
+Application Gateway and WAF v2 SKU supports the capability to add, remove, or update HTTP request and response headers, while the request and response packets move between the client and backend pools. You can also rewrite URLs, query string parameters and host name. With URL rewrite and URL path-based routing, you can choose to either route requests to one of the backend pools based on the original path or the rewritten path, using the reevaluate path map option.
It also provides you with the capability to add conditions to ensure the specified headers or URL are rewritten only when certain conditions are met. These conditions are based on the request and response information.
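
For example, a sketch of removing a response header that could reveal server details, using the Azure CLI (names are placeholders):

```azurecli
# Create a rewrite rule set and a rule that strips the Server response header
az network application-gateway rewrite-rule set create --resource-group <resource-group> --gateway-name <gateway-name> --name <rule-set-name>
az network application-gateway rewrite-rule create --resource-group <resource-group> --gateway-name <gateway-name> --rule-set-name <rule-set-name> --name stripServerHeader --response-headers "Server="
```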
applied-ai-services Concept Custom Classifier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-custom-classifier.md
recommendations: false
**This article applies to:** ![Form Recognizer v3.0 checkmark](media/yes-icon.png) **Form Recognizer v3.0**.
+> [!IMPORTANT]
+>
+> Custom classification model is currently in public preview. Features, approaches, and processes may change, prior to General Availability (GA), based on user feedback.
+>
+
Custom classification models are deep-learning-model types that combine layout and language features to accurately detect and identify documents you process within your application. Custom classification models can classify each page in an input file to identify the document(s) within and can also identify multiple documents or multiple instances of a single document within an input file.

## Model capabilities
applied-ai-services Concept Invoice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-invoice.md
See how data, including customer information, vendor details, and line items, is
| Supported languages | Details |
|:-|:|
-| &bullet; English (en) | United States (us), Australia (-au), Canada (-ca), Great Britain (-gb), India (-in)|
+| &bullet; English (en) | United States (us), Australia (-au), Canada (-ca), United Kingdom (-uk), India (-in)|
| &bullet; Spanish (es) |Spain (es)|
| &bullet; German (de) | Germany (de)|
| &bullet; French (fr) | France (fr) |
applied-ai-services Build A Custom Classifier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/how-to-guides/build-a-custom-classifier.md
monikerRange: 'form-recog-3.0.0'
recommendations: false
-# Build and train a custom classification model
+# Build and train a custom classification model (preview)
[!INCLUDE [applies to v3.0](../includes/applies-to-v3-0.md)]
+> [!IMPORTANT]
+>
+> Custom classification model is currently in public preview. Features, approaches, and processes may change, prior to General Availability (GA), based on user feedback.
+>
+
Custom classification models can classify each page in an input file to identify the document(s) within. Classifier models can also identify multiple documents or multiple instances of a single document in the input file. Form Recognizer custom models require as few as five training documents per document class to get started. To get started training a custom classification model, you need at least **five documents** for each class and **two classes** of documents.

## Custom classification model input requirements
The Form Recognizer Studio provides and orchestrates all the API calls required
:::image type="content" source="../media/how-to/studio-select-storage.png" alt-text="Screenshot showing how to select the Form Recognizer resource.":::
-1. Training a custom classifier requires the output from the Layout model for each document in your dataset. Run layout on all documents as an optional step to speed up the model training process.
+1. **Training a custom classifier requires the output from the Layout model for each document in your dataset**. Run layout on all documents prior to the model training process.
1. Finally, review your project settings and select **Create Project** to create a new project. You should now be in the labeling window and see the files in your dataset listed.
Once the model training is complete, you can test your model by selecting the mo
Congratulations, you've trained a custom classification model in the Form Recognizer Studio! Your model is ready for use with the REST API or the SDK to analyze documents.
+## Troubleshoot
+
+The [classification model](../concept-custom-classifier.md) requires results from the [layout model](../concept-layout.md) for each training document. If you haven't provided the layout results, the Studio attempts to run the layout model for each document prior to training the classifier. This process is throttled and can result in a 429 response.
+
+In the Studio, before training the classification model, run the [layout model](https://formrecognizer.appliedai.azure.com/studio/layout) on each document and upload the result to the same location as the original document. Once the layout results are added, you can train the classifier model with your documents.
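For example, you could pre-run layout against the REST API with a short PowerShell sketch like the following. This is a hedged sketch, not the documented procedure: the endpoint, key, file names, the `2022-08-31` API version, and the `.ocr.json` output naming are all assumptions to adapt to your setup.

```powershell
# Hypothetical endpoint and key -- substitute values from your Form Recognizer resource.
$endpoint = "https://<your-resource>.cognitiveservices.azure.com"
$key      = "<your-key>"

# Start a layout analysis for one training document.
$resp = Invoke-WebRequest -Method Post `
    -Uri "$endpoint/formrecognizer/documentModels/prebuilt-layout:analyze?api-version=2022-08-31" `
    -Headers @{ "Ocp-Apim-Subscription-Key" = $key } `
    -ContentType "application/pdf" `
    -InFile "document.pdf" `
    -UseBasicParsing

# Poll the Operation-Location header until the analysis finishes.
$opUrl = [string]$resp.Headers["Operation-Location"]
do {
    Start-Sleep -Seconds 2
    $result = Invoke-RestMethod -Uri $opUrl -Headers @{ "Ocp-Apim-Subscription-Key" = $key }
} while ($result.status -eq "notStarted" -or $result.status -eq "running")

# Save the layout output beside the original document before uploading both to your container.
$result | ConvertTo-Json -Depth 100 | Set-Content "document.pdf.ocr.json"
```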
+ ## Next steps

> [!div class="nextstepaction"]
applied-ai-services Resource Customer Stories https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/resource-customer-stories.md
The following customers and partners have adopted Form Recognizer across a wide
||-|-|
| **Acumatica** | [**Acumatica**](https://www.acumatica.com/) is a technology provider that develops cloud and browser-based enterprise resource planning (ERP) software for small and medium-sized businesses (SMBs). To bring expense claims into the modern age, Acumatica incorporated Form Recognizer into its native application. The Form Recognizer's prebuilt-receipt API and machine learning capabilities are used to automatically extract data from receipts. Acumatica's customers can file multiple, error-free claims in a matter of seconds, freeing up more time to focus on other important tasks. | [Customer story](https://customers.microsoft.com/story/762684-acumatica-partner-professional-services-azure) |
| **Air Canada** | In September 2021, [**Air Canada**](https://www.aircanada.com/) was tasked with verifying the COVID-19 vaccination status of thousands of worldwide employees in only two months. After realizing manual verification would be too costly and complex within the time constraint, Air Canada turned to its internal AI team for an automated solution. The AI team partnered with Microsoft and used Form Recognizer to roll out a fully functional, accurate solution within weeks. This partnership met the government mandate on time and saved thousands of hours of manual work. | [Customer story](https://customers.microsoft.com/story/1505667713938806113-air-canada-travel-transportation-azure-form-recognizer)|
-|**Arkas Logistics** | [**Arkas Logistics**](http://www.arkaslojistik.com.tr/) is operates under the umbrella of Arkas Holding, Turkey's leading holding institution and operating in 23 countries. During the COVID-19 crisis, the company has been able to provide outstanding, complete logistical services thanks to its focus on contactless operation and digitalization steps. Form Recognizer powers a solution that maintains the continuity of the supply chain and allows for uninterrupted service. | [Customer story](https://customers.microsoft.com/story/842149-arkas-logistics-transportation-azure-en-turkey ) |
+|**Arkas Logistics** | [**Arkas Logistics**](http://www.arkaslojistik.com.tr/) operates under the umbrella of Arkas Holding, Türkiye's leading holding institution operating in 23 countries. During the COVID-19 crisis, the company has been able to provide outstanding, complete logistical services thanks to its focus on contactless operation and digitalization steps. Form Recognizer powers a solution that maintains the continuity of the supply chain and allows for uninterrupted service. | [Customer story](https://customers.microsoft.com/story/842149-arkas-logistics-transportation-azure-en-turkey ) |
|**Automation Anywhere**| [**Automation Anywhere**](https://www.automationanywhere.com/) is on a singular and unwavering mission to democratize automation by liberating teams from mundane, repetitive tasks, and allowing more time for innovation and creativity with cloud-native robotic process automation (RPA) software. To protect the citizens of the United Kingdom, healthcare providers must process tens of thousands of COVID-19 tests daily, each one accompanied by a form for the World Health Organization (WHO). Manually completing and processing these forms would potentially slow testing and divert resources away from patient care. In response, Automation Anywhere built an AI-powered bot to help a healthcare provider automatically process and submit the COVID-19 test forms at scale. | [Customer story](https://customers.microsoft.com/story/811346-automation-anywhere-partner-professional-services-azure-cognitive-services) |
|**AvidXchange**| [**AvidXchange**](https://www.avidxchange.com/) has developed an accounts payable automation solution applying Form Recognizer. AvidXchange partners with Azure Cognitive Services to deliver an accounts payable automation solution for the middle market. Customers benefit from faster invoice processing times and increased accuracy to ensure their suppliers are paid the right amount, at the right time. | [Blog](https://techcommunity.microsoft.com/t5/azure-ai/form-recognizer-now-reads-more-languages-processes-ids-and/ba-p/2179428)|
|**Blue Prism**| [**Blue Prism**](https://www.blueprism.com/) Decipher is an AI-powered document processing capability that's directly embedded into the company's connected-RPA platform. Decipher works with Form Recognizer to help organizations process forms faster and with less human effort. One of Blue Prism's customers has been testing the solution to automate invoice handling as part of its procurement process. | [Customer story](https://customers.microsoft.com/story/737482-blue-prism-partner-professional-services-azure) |
applied-ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/whats-new.md
Form Recognizer service is updated on an ongoing basis. Bookmark this page to st
* Portuguese - Brazil (pt-BR)
* Prebuilt invoice model - added languages supported. The invoice model now supports these added languages and locales
- * English - United States (en-US), Australia (en-AU), Canada (en-CA), Great Britain (en-GB), India (en-IN)
+ * English - United States (en-US), Australia (en-AU), Canada (en-CA), United Kingdom (en-UK), India (en-IN)
* Spanish - Spain (es-ES)
* French - France (fr-FR)
* Italian - Italy (it-IT)
Form Recognizer service is updated on an ongoing basis. Bookmark this page to st
The **prebuilt invoice model** now has added support for the following languages:
- * English - Australia (en-AU), Canada (en-CA), Great Britain (en-GB), India (en-IN)
+ * English - Australia (en-AU), Canada (en-CA), United Kingdom (en-UK), India (en-IN)
* Portuguese - Brazil (pt-BR)

The **prebuilt invoice model** now has added support for the following field extractions:
applied-ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/immersive-reader/language-support.md
This article lists supported human languages for Immersive Reader features.
| Thai | th |
| Thai (Thailand) | th-TH |
| Turkish | tr |
-| Turkish (Turkey) | tr-TR |
+| Turkish (Türkiye) | tr-TR |
| Ukrainian | uk |
| Ukrainian (Ukraine) | uk-UA |
| Urdu | ur |
This article lists supported human languages for Immersive Reader features.
| Tigrinya | ti |
| Tongan | to |
| Turkish | tr |
-| Turkish (Turkey) | tr-TR |
+| Turkish (Türkiye) | tr-TR |
| Turkmen | tk |
| Ukrainian | uk |
| UpperSorbian | hsb |
This article lists supported human languages for Immersive Reader features.
| Thai | th |
| Thai (Thailand) | th-TH |
| Turkish | tr |
-| Turkish (Turkey) | tr-TR |
+| Turkish (Türkiye) | tr-TR |
| Ukrainian | uk |
| Vietnamese | vi |
| Vietnamese (Vietnam) | vi-VN |
This article lists supported human languages for Immersive Reader features.
| Swedish | sv |
| Swedish (Sweden) | sv-SE |
| Turkish | tr |
-| Turkish (Turkey) | tr-TR |
+| Turkish (Türkiye) | tr-TR |
| Ukrainian | uk |
| Welsh | cy |
automation Automation Graphical Authoring Intro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-graphical-authoring-intro.md
Title: Author graphical runbooks in Azure Automation
description: This article tells how to author a graphical runbook without working with code. Previously updated : 10/21/2021 Last updated : 03/07/2023
The following example uses output from an activity called `Get Twitter Connectio
## Authenticate to Azure resources
-Runbooks in Azure Automation that manage Azure resources require authentication to Azure. The [Run As account](./automation-security-overview.md), also referred to as a service principal, is the default mechanism that an Automation runbook uses to access Azure Resource Manager resources in your subscription. You can add this functionality to a graphical runbook by adding the `AzureRunAsConnection` connection asset, which uses the PowerShell `Get-AutomationConnection` cmdlet. This scenario is illustrated in the following example.
+Runbooks in Azure Automation that manage Azure resources require authentication to Azure. A [managed identity](enable-managed-identity-for-automation.md) is the default mechanism that an Automation runbook uses to access Azure Resource Manager resources in your subscription. You can add this functionality to a graphical runbook by importing the following runbook into the Automation account. The runbook uses the system-assigned managed identity of the Automation account to authenticate and access Azure resources.
-![Run As Authentication Activities](media/automation-graphical-authoring-intro/authenticate-run-as-account.png)
-
-The `Get Run As Connection` activity, or `Get-AutomationConnection`, is configured with a constant value data source named `AzureRunAsConnection`.
-
-![Run As Connection Configuration](media/automation-graphical-authoring-intro/authenticate-runas-parameterset.png)
-
-The next activity, `Connect-AzAccount`, adds the authenticated Run As account for use in the runbook.
-
-![Connect-AzAccount Parameter Set](media/automation-graphical-authoring-intro/authenticate-conn-to-azure-parameter-set.png)
-
->[!NOTE]
->For PowerShell runbooks, `Add-AzAccount` and `Add-AzureRMAccount` are aliases for `Connect-AzAccount`. Note that these aliases are not available for your graphical runbooks. A graphical runbook can only use `Connect-AzAccount` itself.
-
-For the parameter fields **APPLICATIONID**, **CERTIFICATETHUMBPRINT**, and **TENANTID**, specify the name of the property for the field path, since the activity outputs an object with multiple properties. Otherwise, when the runbook executes, it fails while attempting to authenticate. This is what you need at a minimum to authenticate your runbook with the Run As account.
-
-Some subscribers create an Automation account using an [Azure AD user account](./shared-resources/credentials.md) to manage Azure classic deployment or for Azure Resource Manager resources. To maintain backward compatibility for these subscribers, the authentication mechanism to use in your runbook is the `Add-AzureAccount` cmdlet with a [credential asset](./shared-resources/credentials.md). The asset represents an Active Directory user with access to the Azure account.
-
-You can enable this functionality for your graphical runbook by adding a credential asset to the canvas, followed by an `Add-AzureAccount` activity that uses the credential asset for its input. See the following example.
-
-![Authentication activities](media/automation-graphical-authoring-intro/authentication-activities.png)
-
-The runbook must authenticate at its start and after each checkpoint. Thus you must use an `Add-AzureAccount` activity after any `Checkpoint-Workflow` activity. You do not need to use an additional credential activity.
-
-![Activity output](media/automation-graphical-authoring-intro/authentication-activity-output.png)
+```powershell-interactive
+wget https://raw.githubusercontent.com/azureautomation/runbooks/master/Utility/AzMI/AzureAutomationTutorialWithIdentityGraphical.graphrunbook -outfile AzureAutomationTutorialWithIdentityGraphical.graphrunbook
+```
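Once downloaded, the runbook can be imported into your Automation account. A minimal sketch with the Az PowerShell module follows; the resource group and account names are placeholders:

```powershell
# Import the downloaded graphical runbook into an existing Automation account.
Import-AzAutomationRunbook -ResourceGroupName "myResourceGroup" `
    -AutomationAccountName "myAutomationAccount" `
    -Name "AzureAutomationTutorialWithIdentityGraphical" `
    -Path ".\AzureAutomationTutorialWithIdentityGraphical.graphrunbook" `
    -Type GraphicalPowerShell

# Publish it so the runbook can be started.
Publish-AzAutomationRunbook -ResourceGroupName "myResourceGroup" `
    -AutomationAccountName "myAutomationAccount" `
    -Name "AzureAutomationTutorialWithIdentityGraphical"
```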
## Export a graphical runbook
automation Automation Hybrid Runbook Worker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-hybrid-runbook-worker.md
Title: Azure Automation Hybrid Runbook Worker overview
description: Know about Hybrid Runbook Worker. How to install and run the runbooks on machines in your local datacenter or cloud provider. Previously updated : 03/15/2023 Last updated : 03/21/2023
There are two types of Runbook Workers - system and user. The following table de
|Type | Description |
|--|-|
|**System** |Supports a set of hidden runbooks used by the Update Management feature that are designed to install user-specified updates on Windows and Linux machines.<br> This type of Hybrid Runbook Worker isn't a member of a Hybrid Runbook Worker group, and therefore doesn't run runbooks that target a Runbook Worker group. |
-|**User** |Supports user-defined runbooks intended to run directly on the Windows and Linux machine that are members of one or more Runbook Worker groups. |
+|**User** |Supports user-defined runbooks intended to run directly on the Windows and Linux machines. |
Agent-based (V1) Hybrid Runbook Workers rely on the [Log Analytics agent](../azure-monitor/agents/log-analytics-agent.md) reporting to an Azure Monitor [Log Analytics workspace](../azure-monitor/logs/log-analytics-workspace-overview.md). The workspace isn't only to collect monitoring data from the machine, but also to download the components required to install the agent-based Hybrid Runbook Worker.
automation Automation Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-security-overview.md
description: This article provides an overview of Azure Automation account authe
keywords: automation security, secure automation; automation authentication Previously updated : 11/05/2021 Last updated : 03/07/2023
For details on using managed identities, see [Enable managed identity for Azure
## Run As accounts
+> [!IMPORTANT]
+> Azure Automation Run As accounts will retire on September 30, 2023, and will be replaced with managed identities. Before that date, start migrating your runbooks to use [managed identities](automation-security-overview.md#managed-identities). For more information, see [Migrate from an existing Run As account to managed identities](https://learn.microsoft.com/azure/automation/migrate-run-as-accounts-managed-identity?tabs=run-as-account#sample-scripts).
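In a PowerShell runbook, the managed identity equivalent of the Run As sign-in is typically a single call. A minimal sketch, assuming the system-assigned identity is enabled on the Automation account and has been granted an appropriate Azure role:

```powershell
# Sign in with the Automation account's system-assigned managed identity.
Connect-AzAccount -Identity

# Commands after this point run as the managed identity, for example:
Get-AzResourceGroup | Select-Object ResourceGroupName, Location
```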
+ Run As accounts in Azure Automation provide authentication for managing Azure Resource Manager resources or resources deployed on the classic deployment model. There are two types of Run As accounts in Azure Automation:

- Azure Run As Account
- Azure Classic Run As Account
To create or renew a Run As account, permissions are needed at three levels:
- Azure Active Directory (Azure AD), and
- Automation account
-> [!NOTE]
-> Azure Automation does not automatically create the Run As account, it has been replaced by using managed identities. However, we continue to support a RunAs account for existing and new Automation accounts. You can [create a Run As account](create-run-as-account.md) in your Automation account from the Azure portal or by using PowerShell.
### Subscription permissions
When you create a Run As account, it performs the following tasks:
### Azure Classic Run As account
+> [!IMPORTANT]
+> Azure Automation Run As accounts will retire on September 30, 2023, and will be replaced with managed identities. Before that date, start migrating your runbooks to use [managed identities](automation-security-overview.md#managed-identities). For more information, see [Migrate from an existing Run As account to managed identities](https://learn.microsoft.com/azure/automation/migrate-run-as-accounts-managed-identity?tabs=run-as-account#sample-scripts).
+ When you create an Azure Classic Run As account, it performs the following tasks:

> [!NOTE]
automation Automation Solution Vm Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-solution-vm-management.md
# Start/Stop VMs during off-hours overview

> [!NOTE]
-> Start/Stop VM during off-hours version 1 is unavailable in the marketplace now as it will retire by 30 September 2023. We recommend you start using [version 2](../azure-functions/start-stop-vms/overview.md), which is now generally available. The new version offers all existing capabilities and provides new features, such as multi-subscription support from a single Start/Stop instance. If you have the version 1 solution already deployed, you can still use the feature, and we will provide support until 30 September 2023. The details of the announcement will be shared soon.
+> Start/Stop VM during off-hours version 1 is unavailable in the marketplace now as it will retire by 30 September 2023. We recommend you start using [version 2](../azure-functions/start-stop-vms/overview.md) which is now generally available. The new version offers all existing capabilities and provides new features, such as multi-subscription support from a single Start/Stop instance. If you have the version 1 solution already deployed, you can still use the feature, and we will provide support until 30 September 2023. The details of the announcement will be shared soon.
The Start/Stop VMs during off-hours feature starts or stops enabled Azure VMs. It starts or stops machines on user-defined schedules, provides insights through Azure Monitor logs, and sends optional emails by using [action groups](../azure-monitor/alerts/action-groups.md). The feature can be enabled on both Azure Resource Manager and classic VMs for most scenarios.
automation Migrate Existing Agent Based Hybrid Worker To Extension Based Workers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/migrate-existing-agent-based-hybrid-worker-to-extension-based-workers.md
The purpose of the Extension-based approach is to simplify the installation and
### Supported operating systems
-| Windows | Linux (x64)|
+| Windows | Linux |
|||
-| &#9679; Windows Server 2022 (including Server Core) <br> &#9679; Windows Server 2019 (including Server Core) <br> &#9679; Windows Server 2016, version 1709 and 1803 (excluding Server Core) <br> &#9679; Windows Server 2012, 2012 R2 <br> &#9679; Windows 10 Enterprise (including multi-session) and Pro| &#9679; Debian GNU/Linux 10 and 11 <br> &#9679; Ubuntu 22.04 LTS <br> &#9679; SUSE Linux Enterprise Server 15.2, and 15.3 <br> &#9679; Red Hat Enterprise Linux Server 7 and 8 |
+| &#9679; Windows Server 2022 (including Server Core) <br> &#9679; Windows Server 2019 (including Server Core) <br> &#9679; Windows Server 2016, version 1709 and 1803 (excluding Server Core) <br> &#9679; Windows Server 2012, 2012 R2 <br> &#9679; Windows 10 Enterprise (including multi-session) and Pro| &#9679; Debian GNU/Linux 8, 9, 10, and 11 <br> &#9679; Ubuntu 18.04 LTS, 20.04 LTS, and 22.04 LTS <br> &#9679; SUSE Linux Enterprise Server 15.2, and 15.3 <br> &#9679; Red Hat Enterprise Linux Server 7, and 8 <br> *Hybrid Worker extension would follow support timelines of the OS vendor.* |
### Other Requirements
-| Windows | Linux (x64)|
+| Windows | Linux |
|||
| Windows PowerShell 5.1 (download WMF 5.1). PowerShell Core isn't supported.| Linux Hardening must not be enabled. |
| .NET Framework 4.6.2 or later. | |
automation Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/shared-resources/certificates.md
The following example shows how to access certificates in Python 2 runbooks.
```python
# get a reference to the Azure Automation certificate
-cert = automationassets.get_automation_certificate("AzureRunAsCertificate")
+cert = automationassets.get_automation_certificate("MyCertificate")
# returns the binary cert content
print cert
The following example shows how to access certificates in Python 3 runbooks (pre
```python
# get a reference to the Azure Automation certificate
-cert = automationassets.get_automation_certificate("AzureRunAsCertificate")
+cert = automationassets.get_automation_certificate("MyCertificate")
# returns the binary cert content
print (cert)
azure-cache-for-redis Cache Best Practices Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-best-practices-scale.md
description: Learn how to scale your Azure Cache for Redis.
Previously updated : 04/06/2022 Last updated : 03/28/2023
If you're using TLS and you have a high number of connections, consider scaling
## Scaling and memory
-You can scale your cache instances in the Azure portal. Also, you can programatically scale your cache using PowerShell cmdlets, Azure CLI, and by using the Microsoft Azure Management Libraries (MAML).
+You can scale your cache instances in the Azure portal. Also, you can programmatically scale your cache using PowerShell cmdlets, Azure CLI, and by using the Microsoft Azure Management Libraries (MAML).
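For example, a programmatic scale operation with Az PowerShell might look like the following sketch; the cache name, resource group, and target size are placeholders:

```powershell
# Scale an existing Premium cache to the P2 (13-GB) size.
Set-AzRedisCache -ResourceGroupName "myResourceGroup" -Name "myCache" -Size "P2"
```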
When you scale a cache up or down in the portal, both `maxmemory-reserved` and `maxfragmentationmemory-reserved` settings automatically scale in proportion to the cache size. For example, if `maxmemory-reserved` is set to 3 GB on a 6-GB cache, and you scale to 12-GB cache, the settings automatically updated to 6 GB during scaling. When you scale down, the reverse happens. When you scale a cache up or down programmatically, using PowerShell, CLI or Rest API, any `maxmemory-reserved` or `maxfragmentationmemory-reserved` are ignored as part of the update request. Only your scaling change is honored. You can update these memory settings after the scaling operation has completed.
-For more information on scaling and memory, see [How to automate a scaling operation](cache-how-to-scale.md#how-to-automate-a-scaling-operation).
+For more information on scaling and memory, depending on your tier see either:
+- [How to scale - Basic, Standard, and Premium tiers](cache-how-to-scale.md#how-to-scalebasic-standard-and-premium-tiers), or
+- [How to scale up and out - Enterprise and Enterprise Flash tiers](cache-how-to-scale.md#how-to-scale-up-and-outenterprise-and-enterprise-flash-tiers).
> [!NOTE]
> When you scale a cache up or down programmatically, any `maxmemory-reserved` or `maxfragmentationmemory-reserved` are ignored as part of the update request. Only your scaling change is honored. You can update these memory settings after the scaling operation has completed.
azure-cache-for-redis Cache How To Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-encryption.md
+
+ Title: Configure active encryption for Enterprise Azure Cache for Redis instances
+description: Learn about encryption for your Azure Cache for Redis Enterprise instances across Azure regions.
++++ Last updated : 03/24/2023++++
+# Configure disk encryption for Azure Cache for Redis instances using customer managed keys (preview)
+
+In this article, you learn how to configure disk encryption using customer managed keys (CMK). The Enterprise and Enterprise Flash tiers of Azure Cache for Redis offer the ability to encrypt the OS and data persistence disks with customer-managed key encryption. Platform-managed keys (PMKs), also known as Microsoft-managed keys (MMKs), are used to encrypt the data. However, customer managed keys can also be used to wrap the MMKs to control access to these keys. This makes the CMK a _key encryption key_ (KEK). For more information, see [key management in Azure](/azure/security/fundamentals/key-management).
+
+Data in a Redis server is stored in memory by default. This data isn't encrypted. You can implement your own encryption on the data before writing it to the cache. In some cases, data can reside on-disk, either due to the operations of the operating system, or because of deliberate actions to persist data using [export](cache-how-to-import-export-data.md) or [data persistence](cache-how-to-premium-persistence.md).
+
+> [!NOTE]
+> Operating system disk encryption is more important on the Premium tier because open-source Redis can page cache data to disk. The Enterprise tiers don't page cache data to disk, which is an advantage of the Enterprise and Enterprise Flash tiers.
+>
+
+## Scope of availability for CMK disk encryption
+
+| Tier | Basic, Standard, Premium | Enterprise, Enterprise Flash |
+|--|--|--|
+|Microsoft managed keys (MMK) | Yes | Yes |
+|Customer managed keys (CMK) | No | Yes (preview) |
+
+> [!NOTE]
+> By default, all Azure Cache for Redis tiers use Microsoft managed keys to encrypt disks mounted to cache instances. However, in the Basic and Standard tiers, the C0 and C1 SKUs do not support any disk encryption.
+>
+
+> [!IMPORTANT]
+> On the Premium tier, data persistence streams data directly to Azure Storage, so disk encryption is less important. Azure Storage offers a [variety of encryption methods](../storage/common/storage-service-encryption.md) to be used instead.
+>
+
+## Encryption coverage
+
+### Enterprise tiers
+
+In the **Enterprise** tier, disk encryption is used to encrypt the persistence disk, temporary files, and the OS disk:
+
+- persistence disk: holds persisted RDB or AOF files as part of [data persistence](cache-how-to-premium-persistence.md)
+- temporary files used in _export_: temporary data used during export is encrypted. When you [export](cache-how-to-import-export-data.md) data, the encryption of the final exported data is controlled by settings in the storage account.
+- the OS disk
+
+MMK is used to encrypt these disks by default, but CMK can also be used.
+
+In the **Enterprise Flash** tier, keys and values are also partially stored on-disk using nonvolatile memory express (NVMe) flash storage. However, this disk isn't the same as the one used for persisted data. Instead, it's ephemeral, and data isn't persisted after the cache is stopped, deallocated, or rebooted. Only MMK is supported on this disk because the data it holds is transient and ephemeral.
+
+| Data stored |Disk |Encryption Options |
+|-||-|
+|Persistence files | Persistence disk | MMK or CMK |
+|RDB files waiting to be exported | OS disk and Persistence disk | MMK or CMK |
+|Keys & values (Enterprise Flash tier only) | Transient NVMe disk | MMK |
+
+### Other tiers
+
+In the **Basic, Standard, and Premium** tiers, the OS disk is encrypted using MMK. There's no persistence disk mounted and Azure Storage is used instead.
+
+## Prerequisites and limitations
+
+### General prerequisites and limitations
+
+- Disk encryption isn't available in the Basic and Standard tiers for the C0 or C1 SKUs
+- Only user assigned managed identity is supported to connect to Azure Key Vault
+- Changing between MMK and CMK on an existing cache instance triggers a long-running maintenance operation. We don't recommend this for production use because a service disruption occurs.
+
+### Azure Key Vault prerequisites and limitations
+
+- The Azure Key Vault resource containing the customer managed key must be in the same region as the cache resource.
+- [Purge protection and soft-delete](../key-vault/general/soft-delete-overview.md) must be enabled in the Azure Key Vault instance. Purge protection isn't enabled by default.
+- When you use firewall rules in the Azure Key Vault, the Key Vault instance must be configured to [allow trusted services](/azure/key-vault/general/network-security).
+- Only RSA keys are supported.
+- The user assigned managed identity must be given the permissions _Get_, _Unwrap Key_, and _Wrap Key_ in the Key Vault access policies, or the equivalent permissions within Azure role-based access control, as shown in the sketch after this list. A recommended built-in role definition with the least privileges needed for this scenario is called [KeyVault Crypto Service Encryption User](../role-based-access-control/built-in-roles.md#key-vault-crypto-service-encryption-user).
+
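For example, granting those key permissions with an access policy might look like the following sketch; the vault name and the identity's principal ID are placeholders:

```powershell
# Grant the user assigned managed identity the minimum key permissions it needs.
Set-AzKeyVaultAccessPolicy -VaultName "myKeyVault" `
    -ObjectId "<managed-identity-principal-id>" `
    -PermissionsToKeys get, wrapKey, unwrapKey
```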
+## How to configure CMK encryption on Enterprise caches
+
+### Use the portal to create a new cache with CMK enabled
+
+1. Sign in to the [Azure portal](https://portal.azure.com) and start the [Create a Redis Enterprise cache](quickstart-create-redis-enterprise.md) quickstart guide.
+
+1. On the **Advanced** page, go to the section titled **Customer-managed key encryption at rest** and enable the **Use a customer-managed key** option.
+
+ :::image type="content" source="media/cache-how-to-encryption/cache-use-key-encryption.png" alt-text="Screenshot of the advanced settings with customer-managed key encryption checked and in a red box.":::
+
+1. Select **Add** to assign a [user assigned managed identity](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md) to the resource. This managed identity is used to connect to the [Azure Key Vault](../key-vault/general/overview.md) instance that holds the customer managed key.
+
+ :::image type="content" source="media/cache-how-to-encryption/cache-managed-identity-user-assigned.png" alt-text="Screenshot showing user managed identity in the working pane.":::
+
+1. Select your chosen user assigned managed identity, and then choose the key input method to use.
+
+1. If using the **Select Azure key vault and key** input method, choose the Key Vault instance that holds your customer managed key. This instance must be in the same region as your cache.
+
+ > [!NOTE]
+ > For instructions on how to set up an Azure Key Vault instance, see the [Azure Key Vault quickstart guide](../key-vault/secrets/quick-create-portal.md). You can also select the _Create a key vault_ link beneath the Key Vault selection to create a new Key Vault instance.
+
+1. Choose the specific key and version using the **Customer-managed key (RSA)** and **Version** drop-downs.
+
+ :::image type="content" source="media/cache-how-to-encryption/cache-managed-identity-version.png" alt-text="Screenshot showing the select identity and key fields completed.":::
+
+1. If using the **URI** input method, enter the Key Identifier URI for your chosen key from Azure Key Vault.
+
+1. When you've entered all the information for your cache, select **Review + create**.
+
+### Add CMK encryption to an existing Enterprise cache
+
+1. Go to **Encryption** in the Resource menu of your cache instance. If CMK is already set up, you see the key information.
+
+1. If you haven't set up CMK, or if you want to change the CMK settings, select **Change encryption settings**.
+ :::image type="content" source="media/cache-how-to-encryption/cache-encryption-existing-use.png" alt-text="Screenshot encryption selected in the Resource menu for an Enterprise tier cache.":::
+
+1. Select **Use a customer-managed key** to see your configuration options.
+
+1. Select **Add** to assign a [user assigned managed identity](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md) to the resource. This managed identity is used to connect to the [Azure Key Vault](../key-vault/general/overview.md) instance that holds the customer managed key.
+
+1. Select your chosen user assigned managed identity, and then choose which key input method to use.
+
+1. If using the **Select Azure key vault and key** input method, choose the Key Vault instance that holds your customer managed key. This instance must be in the same region as your cache.
+
+ > [!NOTE]
+ > For instructions on how to set up an Azure Key Vault instance, see the [Azure Key Vault quickstart guide](../key-vault/secrets/quick-create-portal.md). You can also select the _Create a key vault_ link beneath the Key Vault selection to create a new Key Vault instance.
+
+1. Choose the specific key using the **Customer-managed key (RSA)** drop-down. If there are multiple versions of the key to choose from, use the **Version** drop-down.
+ :::image type="content" source="media/cache-how-to-encryption/cache-encryption-existing-key.png" alt-text="Screenshot showing the select identity and key fields completed for Encryption.":::
+
+1. If using the **URI** input method, enter the Key Identifier URI for your chosen key from Azure Key Vault.
+
+1. Select **Save**
+
+## Next steps
+
+Learn more about Azure Cache for Redis features:
+
+- [Data persistence](cache-how-to-premium-persistence.md)
+- [Import/Export](cache-how-to-import-export-data.md)
azure-cache-for-redis Cache How To Import Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-import-export-data.md
Title: Import and Export data in Azure Cache for Redis description: Learn how to import and export data to and from blob storage with your premium Azure Cache for Redis instances + Previously updated : 03/10/2023 Last updated : 03/24/2023 # Import and Export data in Azure Cache for Redis
-Import/Export is an Azure Cache for Redis data management operation. It allows you to import data into a cache instance or export data from a cache instance. You import and export an Azure Cache for Redis Database (RDB) snapshot from a cache to a blob in an Azure Storage Account. Import/Export is supported in the Premium, Enterprise, and Enterprise Flash tiers.
+Use the import and export functionality in Azure Cache for Redis as a data management operation. You import data into your cache instance or export data from a cache instance using an Azure Cache for Redis Database (RDB) snapshot. The snapshots are imported or exported using a blob in an Azure Storage Account.
-- **Export** - you can export your Azure Cache for Redis RDB snapshots to a Page Blob.
-- **Import** - you can import your Azure Cache for Redis RDB snapshots from either a Page Blob or a Block Blob.
+Import/Export is supported in the Premium, Enterprise, and Enterprise Flash tiers:
+- _Export_ - you can export your Azure Cache for Redis RDB snapshots to a Page Blob (Premium tier) or Block Blob (Enterprise tiers).
+- _Import_ - you can import your Azure Cache for Redis RDB snapshots from either a Page Blob or a Block Blob.
-Import/Export enables you to migrate between different Azure Cache for Redis instances or populate the cache with data before use.
+You can use Import/Export to migrate between different Azure Cache for Redis instances or populate the cache with data before use.
This article provides a guide for importing and exporting data with Azure Cache for Redis and provides the answers to commonly asked questions.
-For information on which Azure Cache for Redis tiers support import and export, see [feature comparison](cache-overview.md#feature-comparison).
+## Scope of availability
+
+|Tier | Basic, Standard | Premium |Enterprise, Enterprise Flash |
+|||||
+|Available | No | Yes | Yes |
+
+## Compatibility
+
+- Data is exported as an RDB page blob in the _Premium_ tier. In the _Enterprise_ and _Enterprise Flash_ tiers, data is exported as a .gz block blob.
+- Caches running Redis 4.0 support RDB version 8 and below. Caches running Redis 6.0 support RDB version 9 and below.
+- Exported backups from newer versions of Redis (for example, Redis 6.0) can't be imported into older versions of Redis (for example, Redis 4.0)
+- RDB files from _Premium_ tier caches can be imported into _Enterprise_ and _Enterprise Flash_ tier caches.
## Import
Use import to bring Redis compatible RDB files from any Redis server running in
:::image type="content" source="./media/cache-how-to-import-export-data/cache-import-blobs.png" alt-text="Screenshot showing the Import button to select to begin the import."::: You can monitor the progress of the import operation by following the notifications from the Azure portal, or by viewing the events in the [audit log](../azure-monitor/essentials/activity-log.md).
+
+ > [!IMPORTANT]
+ > Audit log support is not yet available in the Enterprise tiers.
+ >
:::image type="content" source="./media/cache-how-to-import-export-data/cache-import-data-import-complete.png" alt-text="Screenshot showing the import progress in the notifications area.":::
Export allows you to export the data stored in Azure Cache for Redis to Redis co
This section contains frequently asked questions about the Import/Export feature.

-- [What pricing tiers can use Import/Export?](#what-pricing-tiers-can-use-importexport)
+- [Which tiers support Import/Export?](#which-tiers-support-importexport)
- [Can I import data from any Redis server?](#can-i-import-data-from-any-redis-server)
- [What RDB versions can I import?](#what-rdb-versions-can-i-import)
- [Is my cache available during an Import/Export operation?](#is-my-cache-available-during-an-importexport-operation)
This section contains frequently asked questions about the Import/Export feature
- [I got an error when exporting my data to Azure Blob Storage. What happened?](#i-got-an-error-when-exporting-my-data-to-azure-blob-storage-what-happened)
- [How to export if I have firewall enabled on my storage account?](#how-to-export-if-i-have-firewall-enabled-on-my-storage-account)
-### What pricing tiers can use Import/Export?
+### Which tiers support Import/Export?
-Import/Export is available in the Premium, Enterprise and Enterprise Flash tiers.
+The _import_ and _export_ features are available only in the _Premium_, _Enterprise_, and _Enterprise Flash_ tiers.
### Can I import data from any Redis server?
-Yes, you can import data that was exported from Azure Cache for Redis instances. You can import RDB files from any Redis server running in any cloud or environment. The environments include Linux, Windows, or cloud providers such as Amazon Web Services. To do import this data, upload the RDB file from the Redis server you want into a page or block blob in an Azure Storage Account. Then, import it into your premium Azure Cache for Redis instance. For example, you might want to export the data from your production cache and import it into a cache that is used as part of a staging environment for testing or migration.
+Yes, you can import data that was exported from Azure Cache for Redis instances. You can import RDB files from any Redis server running in any cloud or environment. The environments include Linux, Windows, or cloud providers such as Amazon Web Services. To import this data, upload the RDB file from the Redis server you want into a page or block blob in an Azure Storage Account. Then, import it into your premium Azure Cache for Redis instance.
+
+For example, you might want to:
+
+1. Export the data from your production cache.
+
+1. Then, import it into a cache that is used as part of a staging environment for testing or migration.
> [!IMPORTANT]
> To successfully import data exported from Redis servers other than Azure Cache for Redis when using a page blob, the page blob size must be aligned on a 512 byte boundary. For sample code to perform any required byte padding, see [Sample page blob upload](https://github.com/JimRoberts-MS/SamplePageBlobUpload).
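Continuing that example, once the RDB blob is uploaded, a Premium tier import might look like the following sketch; the resource names and the SAS URI are placeholders:

```powershell
# Import a previously uploaded RDB blob into a Premium cache.
Import-AzRedisCache -ResourceGroupName "myResourceGroup" -Name "myCache" `
    -Files @("https://mystorageaccount.blob.core.windows.net/backups/mycache.rdb?<sas-token>") `
    -Force
```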
Yes, you can import data that was exported from Azure Cache for Redis instances.
### What RDB versions can I import?
-Azure Cache for Redis supports RDB import up through RDB version 7.
+For more information on supported RDB versions used with import, see the [compatibility section](#compatibility).
### Is my cache available during an Import/Export operation?
Some pricing tiers have different [databases limits](cache-configure.md#database
### How is Import/Export different from Redis persistence?
-Azure Cache for Redis persistence allows you to persist data stored in Redis to Azure Storage. When persistence is configured, Azure Cache for Redis persists a snapshot the cache data in a Redis binary format to disk based on a configurable backup frequency. If a catastrophic event occurs that disables both the primary and replica cache, the cache data is restored automatically using the most recent snapshot. For more information, see [How to configure data persistence for a Premium Azure Cache for Redis](cache-how-to-premium-persistence.md).
+The Azure Cache for Redis _persistence_ feature is primarily a data durability feature. Conversely, the _import/export_ functionality is designed as a method to make periodic data backups for point-in-time recovery.
+When _persistence_ is configured, your cache persists a snapshot of the data to disk, based on a configurable backup frequency. The data is written in a Redis-proprietary binary format. If a catastrophic event occurs that disables both the primary and the replica caches, the cache data is restored automatically using the most recent snapshot.
+
+Data persistence is designed for disaster recovery. It isn't intended as a point-in-time recovery mechanism.
+
+- On the Premium tier, the data persistence file is stored in Azure Storage, but the file can't be imported into a different cache.
+- On the Enterprise tiers, the data persistence file is stored in a mounted disk that isn't user-accessible.
-Import/ Export allows you to bring data into or export from Azure Cache for Redis. It doesn't configure backup and restore using Redis persistence.
+If you want to make periodic data backups for point-in-time recovery, we recommend using the _import/export_ functionality. For more information, see [How to configure data persistence for Azure Cache for Redis](cache-how-to-premium-persistence.md).
### Can I automate Import/Export using PowerShell, CLI, or other management clients?
-Yes, for PowerShell instructions see [To import an Azure Cache for Redis](cache-how-to-manage-redis-cache-powershell.md#to-import-an-azure-cache-for-redis) and [To export an Azure Cache for Redis](cache-how-to-manage-redis-cache-powershell.md#to-export-an-azure-cache-for-redis).
+Yes, see the following instructions for the _Premium_ tier (a short example follows these lists):
+
+- PowerShell instructions [to import Redis data](cache-how-to-manage-redis-cache-powershell.md#to-import-an-azure-cache-for-redis) and [to export Redis data](cache-how-to-manage-redis-cache-powershell.md#to-export-an-azure-cache-for-redis).
+- Azure CLI instructions to [import Redis data](/cli/azure/redis#az-redis-import) and [export Redis data](/cli/azure/redis#az-redis-export)
+
+For the _Enterprise_ and _Enterprise Flash_ tiers:
+
+- PowerShell instructions [to import Redis data](/powershell/module/az.redisenterprisecache/import-azredisenterprisecache) and [to export Redis data](/powershell/module/az.redisenterprisecache/export-azredisenterprisecache).
+- Azure CLI instructions to [import Redis data](/cli/azure/redisenterprise/database#az-redisenterprise-database-import) and [export Redis data](/cli/azure/redisenterprise/database#az-redisenterprise-database-export)
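As one example on the _Premium_ tier, a scripted export might look like this sketch; the cache name, resource group, container SAS URI, and prefix are placeholders:

```powershell
# Export a Premium cache to blob storage; Prefix names the resulting RDB file(s).
Export-AzRedisCache -ResourceGroupName "myResourceGroup" -Name "myCache" `
    -Prefix "nightly-backup" `
    -Container "https://mystorageaccount.blob.core.windows.net/backups?<sas-token>"
```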
### I received a timeout error during my Import/Export operation. What does it mean?
azure-cache-for-redis Cache How To Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-monitor.md
In contrast, for clustered caches, we recommend using the metrics with the suffi
- The total number of commands processed per second by the cache server during the specified reporting interval. This value maps to "instantaneous_ops_per_sec" from the Redis INFO command.
- Server Load
  - The percentage of cycles in which the Redis server is busy processing and not waiting idle for messages. If this counter reaches 100, the Redis server has hit a performance ceiling, and the CPU can't process work any faster. If you're seeing high Redis Server Load, then you see timeout exceptions in the client. In this case, you should consider scaling up or partitioning your data into multiple caches.
+
+> [!CAUTION]
+> The Server Load metric can present incorrect data for Enterprise and Enterprise Flash tier caches. Sometimes Server Load is represented as being over 100. We are investigating this issue. We recommend using the CPU metric instead in the meantime.
++
- Sets
  - The number of set operations to the cache during the specified reporting interval. This value is the sum of the following values from the Redis INFO all command: `cmdstat_set`, `cmdstat_hset`, `cmdstat_hmset`, `cmdstat_hsetnx`, `cmdstat_lset`, `cmdstat_mset`, `cmdstat_msetnx`, `cmdstat_setbit`, `cmdstat_setex`, `cmdstat_setrange`, and `cmdstat_setnx`.
- Total Keys
azure-cache-for-redis Cache How To Premium Persistence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-premium-persistence.md
Previously updated : 03/23/2023 Last updated : 03/24/2023+
-# Configure data persistence for a Premium Azure Cache for Redis instance
+# Configure data persistence for an Azure Cache for Redis instance
-[Redis persistence](https://redis.io/topics/persistence) allows you to persist data stored in Redis. You can also take snapshots and back up the data. If there's a hardware failure, you load the data. The ability to persist data is a huge advantage over the Basic or Standard tiers where all the data is stored in memory. Data loss is possible if a failure occurs where Cache nodes are down.
+[Redis persistence](https://redis.io/topics/persistence) allows you to persist the data stored in your cache instance. If there's a hardware failure, the cache instance is rehydrated with data from the persistence file when it comes back online. The ability to persist data is an important way to boost the durability of a cache instance because all cache data is stored in memory. Data loss is possible if a failure occurs when cache nodes are down. Persistence should be a key part of your [high availability and disaster recovery](cache-high-availability.md) strategy with Azure Cache for Redis.
-> [!IMPORTANT]
+> [!WARNING]
>
-> Check to see if your storage account has soft delete enabled before using the data persistence feature. Using data persistence with soft delete causes very high storage costs. For more information, see [should I enable soft delete?](#how-frequently-does-rdb-and-aof-persistence-write-to-my-blobs-and-should-i-enable-soft-delete).
+> If you are using persistence on the Premium tier, check to see if your storage account has soft delete enabled before using the data persistence feature. Using data persistence with soft delete causes very high storage costs. For more information, see [should I enable soft delete?](#how-frequently-does-rdb-and-aof-persistence-write-to-my-blobs-and-should-i-enable-soft-delete).
>
-Azure Cache for Redis offers Redis persistence using the Redis database (RDB) and Append only File (AOF):
+## Scope of availability
-- **RDB persistence** - When you use RDB persistence, Azure Cache for Redis persists a snapshot of your cache in a binary format. The snapshot is saved in an Azure Storage account. The configurable backup frequency determines how often to persist the snapshot. If a catastrophic event occurs that disables both the primary and replica cache, the cache is reconstructed using the most recent snapshot. Learn more about the [advantages](https://redis.io/topics/persistence#rdb-advantages) and [disadvantages](https://redis.io/topics/persistence#rdb-disadvantages) of RDB persistence.
-- **AOF persistence** - When you use AOF persistence, Azure Cache for Redis saves every write operation to a log. The log is saved at least once per second into an Azure Storage account. If a catastrophic event occurs that disables both the primary and replica cache, the cache is reconstructed using the stored write operations. Learn more about the [advantages](https://redis.io/topics/persistence#aof-advantages) and [disadvantages](https://redis.io/topics/persistence#aof-disadvantages) of AOF persistence.
+|Tier | Basic, Standard | Premium |Enterprise, Enterprise Flash |
+|||||
+|Available | No | Yes | Yes (preview) |
-Azure Cache for Redis persistence features are intended to be used to restore data to the same cache after data loss and the RDB/AOF persisted data files can't be imported to a new cache.
+## Types of data persistence in Redis
-To move data across caches, use the Import/Export feature. For more information, see [Import and Export data in Azure Cache for Redis](cache-how-to-import-export-data.md).
+You have two options for persistence with Azure Cache for Redis: the _Redis database_ (RDB) format and _Append only File_ (AOF) format:
-To generate any backups of data that can be added to a new cache, you can write automated scripts using PowerShell or CLI to export data periodically.
+- _RDB persistence_ - When you use RDB persistence, Azure Cache for Redis persists a snapshot of your cache in a binary format. The snapshot is saved in an Azure Storage account. The configurable backup frequency determines how often to persist the snapshot. If a catastrophic event occurs that disables both the primary and replica cache, the cache is reconstructed using the most recent snapshot. Learn more about the [advantages](https://redis.io/topics/persistence#rdb-advantages) and [disadvantages](https://redis.io/topics/persistence#rdb-disadvantages) of RDB persistence.
+- _AOF persistence_ - When you use AOF persistence, Azure Cache for Redis saves every write operation to a log. The log is saved at least once per second in an Azure Storage account. If a catastrophic event occurs that disables both the primary and replica caches, the cache is reconstructed using the stored write operations. Learn more about the [advantages](https://redis.io/topics/persistence#aof-advantages) and [disadvantages](https://redis.io/topics/persistence#aof-disadvantages) of AOF persistence.
-> [!NOTE]
-> Persistence features are intended to be used to restore data to the same cache after data loss.
->
-> - RDB/AOF persisted data files cannot be imported to a new cache.
-> - Use the Import/Export feature to move data across caches.
-> - Write automated scripts using PowerShell or CLI to create a backup of data that can be added to a new cache.
+Azure Cache for Redis persistence features are intended to be used to restore data to the same cache after data loss. The RDB/AOF persisted data files can't be imported to a new cache. To move data across caches, use the _Import and Export_ feature. For more information, see [Import and Export data in Azure Cache for Redis](cache-how-to-import-export-data.md).
-Persistence writes Redis data into an Azure Storage account that you own and manage. You configure the **New Azure Cache for Redis** on the left during cache creation. For existing premium caches, use the **Resource menu**.
+To generate any backups of data that can be added to a new cache, you can write automated scripts using PowerShell or CLI that export data periodically.
-> [!NOTE]
+## Prerequisites and limitations
+
+Persistence features are intended to be used to restore data to the same cache after data loss.
+
+- RDB/AOF persisted data files can't be imported to a new cache. Use the [Import/Export](cache-how-to-import-export-data.md) feature instead.
+- Persistence isn't supported with caches using [passive geo-replication](cache-how-to-geo-replication.md) or [active geo-replication](cache-how-to-active-geo-replication.md).
+- On the _Premium_ tier, AOF persistence isn't supported with [multiple replicas](cache-how-to-multi-replicas.md).
+- On the _Premium_ tier, data must be persisted to a storage account in the same region as the cache instance.
+
+## Differences between persistence in the Premium and Enterprise tiers
+
+On the **Premium** tier, data is persisted directly to an [Azure Storage](../storage/common/storage-introduction.md) account that you own and manage. Azure Storage automatically encrypts data when it's persisted, but you can also use your own keys for the encryption. For more information, see [Customer-managed keys for Azure Storage encryption](../storage/common/customer-managed-keys-overview.md).
+
+> [!WARNING]
+>
+> If you are using persistence on the Premium tier, check to see if your storage account has soft delete enabled before using the data persistence feature. Using data persistence with soft delete causes very high storage costs. For more information, see [should I enable soft delete?](#how-frequently-does-rdb-and-aof-persistence-write-to-my-blobs-and-should-i-enable-soft-delete).
>
-> Azure Storage automatically encrypts data when it is persisted. You can use your own keys for the encryption. For more information, see [Customer-managed keys with Azure Key Vault](../storage/common/storage-service-encryption.md).
-## Set up data persistence
+On the **Enterprise** and **Enterprise Flash** tiers, data is persisted to a managed disk attached directly to the cache instance. The location isn't configurable, and it isn't accessible to the user. Using a managed disk increases the performance of persistence. The disk is encrypted using Microsoft managed keys (MMK) by default, but customer managed keys (CMK) can also be used. For more information, see [managing data encryption](#managing-data-encryption).
+
+## How to set up data persistence using the Azure portal
+
+### [Using the portal (Premium tier)](#tab/premium)
-1. To create a premium cache, sign in to the [Azure portal](https://portal.azure.com) and select **Create a resource**. You can create caches in the Azure portal. You can also create them using Resource Manager templates, PowerShell, or Azure CLI. For more information about creating an Azure Cache for Redis, see [Create a cache](cache-dotnet-how-to-use-azure-redis-cache.md#create-a-cache).
+1. To create a Premium cache, sign in to the [Azure portal](https://portal.azure.com) and select **Create a resource**. You can create caches in the Azure portal. You can also create them using Resource Manager templates, PowerShell, or Azure CLI. For more information about creating an Azure Cache for Redis, see [Create a cache](cache-dotnet-how-to-use-azure-redis-cache.md#create-a-cache).
:::image type="content" source="media/cache-how-to-premium-persistence/create-resource.png" alt-text="Screenshot that shows a form to create an Azure Cache for Redis resource.":::
Persistence writes Redis data into an Azure Storage account that you own and man
| Setting | Suggested value | Description |
| ------- | --------------- | ----------- |
- | **DNS name** | Enter a globally unique name. | The cache name must be a string between 1 and 63 characters that contain only numbers, letters, or hyphens. The name must start and end with a number or letter, and can't contain consecutive hyphens. Your cache instance's *host name* will be *\<DNS name>.redis.cache.windows.net*. |
+ | **DNS name** | Enter a globally unique name. | The cache name must be a string between 1 and 63 characters that contains only numbers, letters, or hyphens. The name must start and end with a number or letter, and can't contain consecutive hyphens. Your cache instance's *host name* is `<DNS name>.redis.cache.windows.net`. |
| **Subscription** | Drop-down and select your subscription. | The subscription under which to create this new Azure Cache for Redis instance. |
| **Resource group** | Drop-down and select a resource group, or select **Create new** and enter a new resource group name. | Name for the resource group in which to create your cache and other resources. By putting all your app resources in one resource group, you can easily manage or delete them together. |
- | **Location** | Drop-down and select a location. | Select a [region](https://azure.microsoft.com/regions/) near other services that will use your cache. |
+ | **Location** | Drop-down and select a location. | Select a [region](https://azure.microsoft.com/regions/) near other services that use your cache. |
| **Cache type** | Drop-down and select a premium cache to configure premium features. For details, see [Azure Cache for Redis pricing](https://azure.microsoft.com/pricing/details/cache/). | The pricing tier determines the size, performance, and features that are available for the cache. For more information, see [Azure Cache for Redis Overview](cache-overview.md). |

4. Select the **Networking** tab or select the **Networking** button at the bottom of the page.
Persistence writes Redis data into an Azure Storage account that you own and man
| Setting | Suggested value | Description |
| ------- | --------------- | ----------- |
| **Backup Frequency** | Drop-down and select a backup interval. Choices include **15 Minutes**, **30 minutes**, **60 minutes**, **6 hours**, **12 hours**, and **24 hours**. | This interval starts counting down after the previous backup operation successfully completes. When it elapses, a new backup starts. |
- | **Storage Account** | Drop-down and select your storage account. | Choose a storage account in the same region and subscription as the cache. A **Premium Storage** account is recommended because it has higher throughput. Also, using the soft delete feature on the storage account is strongly discouraged as it leads to increased storage costs. For more information, see [Pricing and billing](../storage/blobs/soft-delete-blob-overview.md). |
+ | **Storage Account** | Drop-down and select your storage account. | Choose a storage account in the same region and subscription as the cache. A **Premium Storage** account is recommended because it has higher throughput. Also, we _strongly_ recommend that you disable the soft delete feature on the storage account because soft delete leads to increased storage costs. For more information, see [Pricing and billing](../storage/blobs/soft-delete-blob-overview.md). |
| **Storage Key** | Drop-down and choose either the **Primary key** or **Secondary key** to use. | If the storage key for your persistence account is regenerated, you must reconfigure the key from the **Storage Key** drop-down. |

The first backup starts once the backup frequency interval elapses.
Persistence writes Redis data into an Azure Storage account that you own and man
| Setting | Suggested value | Description |
| | - | -- |
- | **First Storage Account** | Drop-down and select your storage account. | Choose a storage account in the same region and subscription as the cache. A **Premium Storage** account is recommended because it has higher throughput. Also, using the soft delete feature on the storage account is strongly discouraged as it leads to increased storage costs. For more information, see [Pricing and billing](../storage/blobs/soft-delete-blob-overview.md). |
+ | **First Storage Account** | Drop-down and select your storage account. | Choose a storage account in the same region and subscription as the cache. A **Premium Storage** account is recommended because it has higher throughput. Also, we _strongly_ recommend that you disable the soft delete feature on the storage account because soft delete leads to increased storage costs. For more information, see [Pricing and billing](/azure/storage/blobs/soft-delete-blob-overview). |
| **First Storage Key** | Drop-down and choose either the **Primary key** or **Secondary key** to use. | If the storage key for your persistence account is regenerated, you must reconfigure the key from the **Storage Key** drop-down. |
| **Second Storage Account** | (Optional) Drop-down and select your secondary storage account. | You can optionally configure another storage account. If a second storage account is configured, the writes to the replica cache are written to this second storage account. |
| **Second Storage Key** | (Optional) Drop-down and choose either the **Primary key** or **Secondary key** to use. | If the storage key for your persistence account is regenerated, you must reconfigure the key from the **Storage Key** drop-down. |
Persistence writes Redis data into an Azure Storage account that you own and man
It takes a while for the cache to be created. You can monitor progress on the Azure Cache for Redis **Overview** page. When **Status** shows as **Running**, the cache is ready to use.
+### [Using the portal (Enterprise tiers)](#tab/enterprise)
+
+1. Sign in to the [Azure portal](https://portal.azure.com) and start following the instructions in the [Enterprise tier quickstart guide](quickstart-create-redis-enterprise.md).
+
+1. When you reach the **Advanced** tab, select either the _RDB_ or _AOF_ option in the **(PREVIEW) Data Persistence** section.
+
+ :::image type="content" source="media/cache-how-to-premium-persistence/cache-advanced-persistence.png" alt-text="Screenshot that shows the Enterprise tier Advanced tab and Data persistence is highlighted with a red box.":::
+
+1. To enable RDB persistence, select **RDB** and configure the settings.
+
+ | Setting | Suggested value | Description |
+ | | - | -- |
+ | **Backup Frequency** | Use the drop-down and select a backup interval. Choices include **60 minutes**, **6 hours**, and **12 hours**. | This interval starts counting down after the previous backup operation successfully completes. When it elapses, a new backup starts. |
+
+1. To enable AOF persistence, select **AOF** and configure the settings.
+
+ | Setting | Suggested value | Description |
+ | | - | -- |
+ | **Backup Frequency** | Drop-down and select a backup interval. Choices include **Write every second** and **Always write**. | The _Always write_ option appends new entries to the AOF file after every write to the cache. This choice offers the best durability but lowers cache performance. |
+
+1. Finish creating the cache by following the rest of the instructions in the [Enterprise tier quickstart guide](quickstart-create-redis-enterprise.md).
+
+> [!NOTE]
+> You can add persistence to a previously created Enterprise tier cache at any time by navigating to the **Advanced settings** in the Resource menu.
+>
+++
+## How to set up data persistence using PowerShell and Azure CLI
+
+### [Using PowerShell (Premium tier)](#tab/premium)
+
+The [New-AzRedisCache](/powershell/module/az.rediscache/new-azrediscache) command can be used to create a new Premium-tier cache with data persistence enabled. See examples for [RDB persistence](/powershell/module/az.rediscache/new-azrediscache#example-5-configure-data-persistence-for-a-premium-azure-cache-for-redis) and [AOF persistence](/powershell/module/az.rediscache/new-azrediscache#example-6-configure-data-persistence-for-a-premium-azure-cache-for-redis-aof-backup-enabled).
+
+Existing caches can be updated using the [Set-AzRedisCache](/powershell/module/az.rediscache/set-azrediscache) command. See examples of [adding persistence to an existing cache](/powershell/module/az.rediscache/set-azrediscache#example-3-modify-azure-cache-for-redis-if-you-want-to-add-data-persistence-after-azure-redis-cache-created).
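+
+As an example, here's a minimal sketch that enables RDB persistence on an existing Premium cache. The cache name, resource group, and connection string are placeholders, and the `RedisConfiguration` keys are the same RDB settings used elsewhere in this article:
+
+```powershell-interactive
+# Sketch: enable RDB backups every 60 minutes on an existing Premium cache.
+# All names and the connection string are placeholders.
+Set-AzRedisCache -ResourceGroupName "MyGroup" -Name "MyCache" -RedisConfiguration @{
+    "rdb-backup-enabled" = "true"
+    "rdb-backup-frequency" = "60"
+    "rdb-backup-max-snapshot-count" = "1"
+    "rdb-storage-connection-string" = "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>"
+}
+```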
++
+### [Using PowerShell (Enterprise tier)](#tab/enterprise)
+
+The [New-AzRedisEnterpriseCache](/powershell/module/az.redisenterprisecache/new-azredisenterprisecache) command can be used to create a new Enterprise-tier cache with data persistence enabled. Use the `RdbPersistenceEnabled`, `RdbPersistenceFrequency`, `AofPersistenceEnabled`, and `AofPersistenceFrequency` parameters to configure the persistence setup. This example creates a new E10 Enterprise tier cache using RDB persistence with a one-hour frequency:
+
+```powershell-interactive
+New-AzRedisEnterpriseCache -Name "MyCache" -ResourceGroupName "MyGroup" -Location "West US" -Sku "Enterprise_E10" -RdbPersistenceEnabled -RdbPersistenceFrequency "1h"
+```
+
+Existing caches can be updated using the [Update-AzRedisEnterpriseCacheDatabase](/powershell/module/az.redisenterprisecache/update-azredisenterprisecachedatabase) command. This example adds RDB persistence with a 12-hour frequency to an existing cache instance:
+
+```powershell-interactive
+Update-AzRedisEnterpriseCacheDatabase -Name "MyCache" -ResourceGroupName "MyGroup" -RdbPersistenceEnabled -RdbPersistenceFrequency "12h"
+```
+++
+### [Using Azure CLI (Premium tier)](#tab/premium)
+
+The [az redis create](/cli/azure/redis#az-redis-create) command can be used to create a new Premium-tier cache with data persistence enabled. For instance:
+
+```azurecli
+az redis create --location westus2 --name MyRedisCache --resource-group MyResourceGroup --sku Premium --vm-size p1 --redis-configuration @"config_rdb.json"
+```
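+
+The `config_rdb.json` file referenced above isn't shown in this article. As a sketch, it might contain the same RDB keys used in the update example that follows; the connection string is a placeholder:
+
+```json
+{
+  "rdb-backup-enabled": "true",
+  "rdb-backup-frequency": "60",
+  "rdb-backup-max-snapshot-count": "1",
+  "rdb-storage-connection-string": "<storage-account-connection-string>"
+}
+```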
+
+Existing caches can be updated using the [az redis update](/cli/azure/redis#az-redis-update) command. For instance:
+
+```azurecli
+az redis update --name MyRedisCache --resource-group MyResourceGroup --set "redisConfiguration.rdb-storage-connection-string"="BlobEndpoint=https://..." "redisConfiguration.rdb-backup-enabled"="true" "redisConfiguration.rdb-backup-frequency"="15" "redisConfiguration.rdb-backup-max-snapshot-count"="1"
+```
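+
+To confirm the resulting settings, one option (a sketch, reusing the same placeholder names) is to query the cache's `redisConfiguration` section:
+
+```azurecli
+az redis show --name MyRedisCache --resource-group MyResourceGroup --query redisConfiguration
+```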
+
+### [Using Azure CLI (Enterprise tier)](#tab/enterprise)
+
+The [az redisenterprise create](/cli/azure/redisenterprise#az-redisenterprise-create) command can be used to create a new Enterprise-tier cache with data persistence enabled. Use the `rdb-enabled`, `rdb-frequency`, `aof-enabled`, and `aof-frequency` parameters to configure the persistence setup. This example creates a new E10 Enterprise tier cache using RDB persistence with a one-hour frequency:
+
+```azurecli
+az redisenterprise create --cluster-name "cache1" --resource-group "rg1" --location "East US" --sku "Enterprise_E10" --persistence rdb-enabled=true rdb-frequency="1h"
+```
+
+Existing caches can be updated using the [az redisenterprise database update](/cli/azure/redisenterprise/database#az-redisenterprise-database-update) command. This example adds RDB persistence with a 12-hour frequency to an existing cache instance:
+
+```azurecli
+az redisenterprise database update --cluster-name "cache1" --resource-group "rg1" --persistence rdb-enabled=true rdb-frequency="12h"
+```
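+
+To verify the persistence settings on an Enterprise cache, one option (a sketch, reusing the same placeholder names) is to query the database resource:
+
+```azurecli
+az redisenterprise database show --cluster-name "cache1" --resource-group "rg1" --query persistence
+```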
+++
+## Managing data encryption
+Because Redis persistence creates data at rest, encrypting this data is an important concern for many users. Encryption options vary based on the tier of Azure Cache for Redis being used.
+
+With the **Premium** tier, data is streamed directly from the cache instance to Azure Storage when persistence is initiated. Various encryption methods can be used with Azure Storage, including Microsoft-managed keys, customer-managed keys, and customer-provided keys. For information on encryption methods, see [Azure Storage encryption for data at rest](../storage/common/storage-service-encryption.md).
+
+With the **Enterprise** and **Enterprise Flash** tiers, data is stored on a managed disk mounted to the cache instance. By default, the disk holding the persistence data, and the OS disk are encrypted using Microsoft-managed keys. A customer-managed key (CMK) can also be used to control data encryption. See [Encryption on Enterprise tier caches](cache-how-to-encryption.md) for instructions.
## Persistence FAQ

The following list contains answers to commonly asked questions about Azure Cache for Redis persistence.
The following list contains answers to commonly asked questions about Azure Cach
### AOF persistence

- [When should I use a second storage account?](#when-should-i-use-a-second-storage-account)
-- [Does AOF persistence affect throughout, latency, or performance of my cache?](#does-aof-persistence-affect-throughout-latency-or-performance-of-my-cache)
+- [Does AOF persistence affect throughput, latency, or performance of my cache?](#does-aof-persistence-affect-throughput-latency-or-performance-of-my-cache)
- [How can I remove the second storage account?](#how-can-i-remove-the-second-storage-account)
- [What is a rewrite and how does it affect my cache?](#what-is-a-rewrite-and-how-does-it-affect-my-cache)
- [What should I expect when scaling a cache with AOF enabled?](#what-should-i-expect-when-scaling-a-cache-with-aof-enabled)
The following list contains answers to commonly asked questions about Azure Cach
### Can I enable persistence on a previously created cache?
-Yes, Redis persistence can be configured both at cache creation and on existing premium caches.
+Yes, persistence can be configured both at cache creation and on existing Premium, Enterprise, or Enterprise Flash caches.
### Can I enable AOF and RDB persistence at the same time?
No, you can enable RDB or AOF, but not both at the same time.
### How does persistence work with geo-replication?
-If you enable data persistence, geo-replication can't be enabled for your premium cache.
+If you enable data persistence, geo-replication can't be enabled for your cache.
### Which persistence model should I choose?
AOF persistence saves every write to a log, which has a significant effect on th
- Learn more about the [advantages](https://redis.io/topics/persistence#rdb-advantages) and [disadvantages](https://redis.io/topics/persistence#rdb-disadvantages) of RDB persistence.
- Learn more about the [advantages](https://redis.io/topics/persistence#aof-advantages) and [disadvantages](https://redis.io/topics/persistence#aof-disadvantages) of AOF persistence.
-For more information on performance when using AOF persistence, see [Does AOF persistence affect throughout, latency, or performance of my cache?](#does-aof-persistence-affect-throughout-latency-or-performance-of-my-cache)
+For more information on performance when using AOF persistence, see [Does AOF persistence affect throughput, latency, or performance of my cache?](#does-aof-persistence-affect-throughput-latency-or-performance-of-my-cache)
+
+### Does AOF persistence affect throughput, latency, or performance of my cache?
+
+AOF persistence does affect throughput. Because AOF runs on both the primary and replica processes, you see higher CPU and Server Load for a cache with AOF persistence than for an identical cache without it. AOF offers the best consistency with the data in memory because each write and delete is persisted with only a few seconds of delay. The trade-off is that AOF is more compute intensive.
+
+As long as CPU and Server Load are both less than 90%, there's a throughput penalty, but the cache otherwise operates normally. Above 90% CPU and Server Load, the throughput penalty can get much higher, and the latency of all commands processed by the cache increases. This happens because AOF persistence runs on both the primary and replica processes, increasing the load on the node in use, and putting persistence on the critical path of data.
### What happens if I've scaled to a different size and a backup is restored that was made before the scaling operation?
For both RDB and AOF persistence:
- If you've scaled to a larger size, there's no effect.
- If you've scaled to a smaller size, and you have a custom [databases](cache-configure.md#databases) setting that is greater than the [databases limit](cache-configure.md#databases) for your new size, data in those databases isn't restored. For more information, see [Is my custom databases setting affected during scaling?](cache-how-to-scale.md#is-my-custom-databases-setting-affected-during-scaling)
-- If you've scaled to a smaller size, and there isn't enough room in the smaller size to hold all of the data from the last backup, keys are evicted during the restore process. Typically, keys are evicted using the [allkeys-lru](https://redis.io/topics/lru-cache) eviction policy.
+- If you've scaled to a smaller size, and there isn't enough room in the smaller size to hold all of the data from the last backup, keys are evicted during the restore process. Typically, keys are evicted using the [allkeys-lru](https://redis.io/topics/lru-cache) eviction policy.
### Can I use the same storage account for persistence across two different caches?

Yes, you can use the same storage account for persistence across two different caches.
-### Will I be charged for the storage being used in Data Persistence?
+### Will I be charged for the storage being used in data persistence?
-Yes, you'll be charged for the storage being used as per the pricing model of the storage account being used.
+- For **Premium** caches, you're charged for the storage used, per the pricing model of the storage account.
+- For **Enterprise** and **Enterprise Flash** caches, you aren't charged for the managed disk storage. It's included in the price.
### How frequently does RDB and AOF persistence write to my blobs, and should I enable soft delete?
-Enabling soft delete on storage accounts is strongly discouraged when used with Azure Cache for Redis data persistence. RDB and AOF persistence can write to your blobs as frequently as every hour, every few minutes, or every second. Also, enabling soft delete on a storage account means Azure Cache for Redis can't minimize storage costs by deleting the old backup data. Soft delete quickly becomes expensive with the typical data sizes of a cache and write operations every second. For more information on soft delete costs, see [Pricing and billing](../storage/blobs/soft-delete-blob-overview.md).
+We recommend that you avoid enabling soft delete on storage accounts used for Azure Cache for Redis data persistence in the Premium tier. RDB and AOF persistence can write to your blobs as frequently as every hour, every few minutes, or every second. Also, enabling soft delete on a storage account means Azure Cache for Redis can't minimize storage costs by deleting the old backup data.
+
+Soft delete quickly becomes expensive with the typical data sizes of a cache that also performs write operations every second. For more information on soft delete costs, see [Pricing and billing](../storage/blobs/soft-delete-blob-overview.md).
### Can I change the RDB backup frequency after I create the cache?
-Yes, you can change the backup frequency for RDB persistence on the **Data persistence** on the left. For instructions, see Configure Redis persistence.
+Yes, you can change the backup frequency for RDB persistence using the Azure portal, CLI, or PowerShell.
### Why is there more than 60 minutes between backups when I have an RDB backup frequency of 60 minutes?
The RDB persistence backup frequency interval doesn't start until the previous b
### What happens to the old RDB backups when a new backup is made?
-All RDB persistence backups, except for the most recent one, are automatically deleted. This deletion might not happen immediately, but older backups aren't persisted indefinitely. If soft delete is turned on for your storage account, the soft delete setting applies and existing backups continue to reside in the soft delete state.
+All RDB persistence backups, except for the most recent one, are automatically deleted. This deletion might not happen immediately, but older backups aren't persisted indefinitely. If you're using the Premium tier for persistence, and soft delete is turned on for your storage account, the soft delete setting applies, and existing backups continue to reside in the soft delete state.
### When should I use a second storage account?
-Use a second storage account for AOF persistence when you believe you've higher than expected set operations on the cache. Setting up the secondary storage account helps ensure your cache doesn't reach storage bandwidth limits.
+Use a second storage account for AOF persistence when you expect a higher than normal volume of set operations on the cache. Setting up the secondary storage account helps ensure your cache doesn't reach storage bandwidth limits. This option is only available for Premium tier caches.
-### Does AOF persistence affect throughout, latency, or performance of my cache?
-AOF persistence affects throughput by about 15% ΓÇô 20% when the cache is below maximum load (CPU and Server Load both under 90%). There shouldn't be latency issues when the cache is within these limits. However, the cache does reach these limits sooner with AOF enabled.
### How can I remove the second storage account?
-You can remove the AOF persistence secondary storage account by setting the second storage account to be the same as the first storage account. For existing caches, the **Data persistence** on the left is accessed from the **Resource menu** for your cache. To disable AOF persistence, select **Disabled**.
+You can remove the AOF persistence secondary storage account by setting the second storage account to be the same as the first storage account. For existing caches, access **Data persistence** from the **Resource menu** for your cache. To disable AOF persistence, select **Disabled**.
### What is a rewrite and how does it affect my cache?
-When the AOF file becomes large enough, a rewrite is automatically queued on the cache. The rewrite resizes the AOF file with the minimal set of operations needed to create the current data set. During rewrites, you can expect to reach performance limits sooner, especially when dealing with large datasets. Rewrites occur less often as the AOF file becomes larger, but will take a significant amount of time when it happens.
+When the AOF file becomes large enough, a rewrite is automatically queued on the cache. The rewrite resizes the AOF file with the minimal set of operations needed to create the current data set. During rewrites, you can expect to reach performance limits sooner, especially when dealing with large datasets. Rewrites occur less often as the AOF file becomes larger, but take a significant amount of time when they happen.
### What should I expect when scaling a cache with AOF enabled?
-If the AOF file at the time of scaling is large, then expect the scale operation to take longer than expected because it will be reloading the file after scaling has finished.
+If the AOF file at the time of scaling is large, expect the scale operation to take longer than usual because it reloads the file after scaling has finished.
For more information on scaling, see [What happens if I've scaled to a different size and a backup is restored that was made before the scaling operation?](#what-happens-if-ive-scaled-to-a-different-size-and-a-backup-is-restored-that-was-made-before-the-scaling-operation)

### How is my AOF data organized in storage?
-Data stored in AOF files is divided into multiple page blobs per node to increase performance of saving the data to storage. The following table displays how many page blobs are used for each pricing tier:
+When you use the Premium tier, data stored in AOF files is divided into multiple page blobs per node to increase performance of saving the data to storage. The following table displays how many page blobs are used for each pricing tier:
| Premium tier | Blobs |
|--|-|
When clustering is enabled, each shard in the cache has its own set of page blob
After a rewrite, two sets of AOF files exist in storage. Rewrites occur in the background and append to the first set of files. Set operations, sent to the cache during the rewrite, append to the second set. A backup is temporarily stored during rewrites if there's a failure. The backup is promptly deleted after a rewrite finishes. If soft delete is turned on for your storage account, the soft delete setting applies and existing backups continue to stay in the soft delete state.
-### Will having firewall exceptions on the storage account affect persistence
+### Will having firewall exceptions on the storage account affect persistence?
-Using managed identity adds the cache instance to the [trusted services list](../storage/common/storage-network-security.md?tabs=azure-portal), making firewall exceptions easier to carry out. If you aren't using managed identity and instead authorizing to a storage account using a key, then having firewall exceptions on the storage account tends to break the persistence process.
+Using managed identity adds the cache instance to the [trusted services list](../storage/common/storage-network-security.md?tabs=azure-portal), making firewall exceptions easier to carry out. If you aren't using managed identity and instead authorize to a storage account using a key, then having firewall exceptions on the storage account tends to break the persistence process. This only applies to persistence in the Premium tier.
### Can I have AOF persistence enabled if I have more than one replica?
-No, you can't use Append-only File (AOF) persistence with multiple replicas (more than one replica).
+With the Premium tier, you can't use Append-only File (AOF) persistence with multiple replicas. In the Enterprise and Enterprise Flash tiers, replica architecture is more complicated, but AOF persistence is supported when Enterprise caches are used in zone redundant deployment.
### How do I check if soft delete is enabled on my storage account?
azure-cache-for-redis Cache How To Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-scale.md
Previously updated : 03/22/2022 Last updated : 03/24/2023 ms.devlang: csharp
# Scale an Azure Cache for Redis instance
-Azure Cache for Redis has different cache offerings that provide flexibility in the choice of cache size and features. For a Basic, Standard or Premium cache, you can change its size and tier after creating it to match your application needs. This article shows you how to scale your cache using the Azure portal, and tools such as Azure PowerShell, and Azure CLI.
+Azure Cache for Redis has different tier offerings that provide flexibility in the choice of cache size and features. Through scaling, you can change the size, tier, and number of nodes after creating a cache instance to match your application needs. This article shows you how to scale your cache using the Azure portal, plus tools such as Azure PowerShell and Azure CLI.
-## When to scale
+## Types of scaling
-You can use the [monitoring](cache-how-to-monitor.md) features of Azure Cache for Redis to monitor the health and performance of your cache. Use that information determine when to scale the cache.
+There are fundamentally two ways to scale an Azure Cache for Redis Instance:
-You can monitor the following metrics to help determine if you need to scale.
+- _Scaling up_ increases the size of the Virtual Machine (VM) running the Redis server, adding more memory, Virtual CPUs (vCPUs), and network bandwidth. Scaling up is also called _vertical scaling_. The opposite of scaling up is _scaling down_.
-- Redis Server Load
- - Redis server is a single threaded process. High Redis server load means that the server is unable to keep pace with the requests from all the client connections. In such situations, it helps to enable clustering or increase shard count so overhead functions are distributed across multiple Redis processes. Clustering and larger shard counts distribute TLS encryption and decryption, and distribute TLS connection and disconnection.
- - For more information, see [Set up clustering](cache-how-to-premium-clustering.md#set-up-clustering).
-- Memory Usage
- - High memory usage indicates that your data size is too large for the current cache size. Consider scaling to a cache size with larger memory.
-- Client connections
- - Each cache size has a limit to the number of client connections it can support. If your client connections are close to the limit for the cache size, consider scaling up to a larger tier. Scaling out using clustering does not increase the number of supported client connections.
- - For more information on connection limits by cache size, see [Azure Cache for Redis Pricing](https://azure.microsoft.com/pricing/details/cache/).
-- Network Bandwidth
- - If the Redis server exceeds the available bandwidth, clients requests could time out because the server can't push data to the client fast enough. Check "Cache Read" and "Cache Write" metrics to see how much server-side bandwidth is being used. If your Redis server is exceeding available network bandwidth, you should consider scaling up to a larger cache size with higher network bandwidth.
- - For more information on network available bandwidth by cache size, see [Azure Cache for Redis planning FAQs](./cache-planning-faq.yml).
+- _Scaling out_ divides the cache instance into more nodes of the same size, increasing memory, vCPUs, and network bandwidth through parallelization. Scaling out is also referred to as _horizontal scaling_ or _sharding_. The opposite of scaling out is _scaling in_. In the Redis community, scaling out is frequently called [_clustering_](https://redis.io/docs/management/scaling/).
-If you determine your cache is no longer meeting your application's requirements, you can scale to an appropriate cache pricing tier for your application. You can choose a larger or smaller cache to match your needs.
-For more information on determining the cache pricing tier to use, see [Choosing the right tier](cache-overview.md#choosing-the-right-tier) and [Azure Cache for Redis planning FAQs](./cache-planning-faq.yml).
+## Scope of availability
-## Scale a cache
+|Tier | Basic and Standard | Premium | Enterprise and Enterprise Flash |
+||||-|
+|Scale Up | Yes | Yes | Yes (preview) |
+|Scale Down | Yes | Yes | No |
+|Scale Out | No | Yes | Yes (preview) |
+|Scale In | No | Yes | No |
-1. To scale your cache, [browse to the cache](cache-configure.md#configure-azure-cache-for-redis-settings) in the [Azure portal](https://portal.azure.com) and select **Scale** on the left.
+## When to scale
- :::image type="content" source="media/cache-how-to-scale/scale-a-cache.png" alt-text="scale on the resource menu":::
+You can use the [monitoring](cache-how-to-monitor.md) features of Azure Cache for Redis to monitor the health and performance of your cache. Use that information to determine when to scale the cache.
-1. Choose a pricing tier on the right and then choose **Select**.
-
- :::image type="content" source="media/cache-how-to-scale/select-a-tier.png" alt-text="Azure Cache for Redis tiers":::
+You can monitor the following metrics to determine if you need to scale.
+
+- **Redis Server Load**
+ - High Redis server load means that the server is unable to keep pace with requests from all the clients. Because a Redis server is a single-threaded process, it's typically more helpful to _scale out_ rather than _scale up_. Scaling out by enabling clustering helps distribute overhead functions across multiple Redis processes. Scaling out also helps distribute TLS encryption/decryption and connection/disconnection, speeding up cache instances using TLS.
+ - Scaling up can still be helpful in reducing server load because background tasks can take advantage of the additional vCPUs and free up the thread for the main Redis server process.
+ - The Enterprise and Enterprise Flash tiers use Redis Enterprise rather than open source Redis. One of the advantages of these tiers is that the Redis server process can take advantage of multiple vCPUs. Because of that, both scaling up and scaling out in these tiers can be helpful in reducing server load. For more information, see [Best Practices for the Enterprise and Enterprise Flash tiers of Azure Cache for Redis](cache-best-practices-enterprise-tiers.md).
+ - For more information, see [Set up clustering](cache-how-to-premium-clustering.md#set-up-clustering).
+- **Memory Usage**
+ - High memory usage indicates that your data size is too large for the current cache size. Consider scaling to a cache size with larger memory. Either _scaling up_ or _scaling out_ is effective here.
+- **Client connections**
+ - Each cache size has a limit to the number of client connections it can support. If your client connections are close to the limit for the cache size, consider _scaling up_ to a larger tier. _Scaling out_ doesn't increase the number of supported client connections.
+ - For more information on connection limits by cache size, see [Azure Cache for Redis Pricing](https://azure.microsoft.com/pricing/details/cache/).
+- **Network Bandwidth**
+ - If the Redis server exceeds the available bandwidth, client requests could time out because the server can't push data to the client fast enough. Check "Cache Read" and "Cache Write" metrics to see how much server-side bandwidth is being used; you can also pull these metrics from the command line, as sketched after this list. If your Redis server is exceeding available network bandwidth, you should consider scaling out or scaling up to a larger cache size with higher network bandwidth.
+ - For Enterprise tier caches using the _Enterprise cluster policy_, scaling out doesn't increase network bandwidth.
+ - For more information on network available bandwidth by cache size, see [Azure Cache for Redis planning FAQs](./cache-planning-faq.yml).
+
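+The following is a minimal sketch of pulling one of these metrics from the command line. The subscription, resource group, and cache names are placeholders, and it assumes the `serverLoad` metric name:
+
+```azurecli
+# Sketch: average Server Load at five-minute granularity for a cache instance.
+# All names in the resource ID are placeholders.
+az monitor metrics list \
+    --resource "/subscriptions/<subscription-id>/resourceGroups/myGroup/providers/Microsoft.Cache/Redis/myCache" \
+    --metric serverLoad \
+    --interval PT5M \
+    --aggregation Average
+```
+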
+For more information on determining the cache pricing tier to use, see [Choosing the right tier](cache-overview.md#choosing-the-right-tier) and [Azure Cache for Redis planning FAQs](./cache-planning-faq.yml).
> [!NOTE]
-> Scaling is currently not available with Enterprise Tier.
+> For more information on how to optimize the scaling process, see the [best practices for scaling guide](cache-best-practices-scale.md).
>
-You can scale to a different pricing tier with the following restrictions:
+## Prerequisites/limitations of scaling Azure Cache for Redis
+
+You can scale up/down to a different pricing tier with the following restrictions:
- You can't scale from a higher pricing tier to a lower pricing tier.
+ - You can't scale from an **Enterprise** or **Enterprise Flash** cache down to any other tier.
- You can't scale from a **Premium** cache down to a **Standard** or a **Basic** cache.
- You can't scale from a **Standard** cache down to a **Basic** cache.
- You can scale from a **Basic** cache to a **Standard** cache but you can't change the size at the same time. If you need a different size, you can later do a scaling operation to the wanted size.
- You can't scale from a **Basic** cache directly to a **Premium** cache. First, scale from **Basic** to **Standard** in one scaling operation, and then from **Standard** to **Premium** in the next scaling operation.
- You can't scale from a larger size down to the **C0 (250 MB)** size. However, you can scale down to any other size within the same pricing tier. For example, you can scale down from C5 Standard to C1 Standard.
+- You can't scale from a **Premium**, **Standard**, or **Basic** cache up to an **Enterprise** or **Enterprise Flash** cache.
+- You can't scale between **Enterprise** and **Enterprise Flash**.
-While the cache is scaling to the new tier, a **Scaling Redis Cache** notification is displayed.
+You can scale out/in with the following restrictions:
+- _Scale out_ is only supported on the **Premium**, **Enterprise**, and **Enterprise Flash** tiers.
+- _Scale in_ is only supported on the **Premium** tier.
+- On the **Premium** tier, clustering must be enabled before scaling in or out.
+- Only the **Enterprise** and **Enterprise Flash** tiers can scale up and scale out simultaneously.
-When scaling is complete, the status changes from **Scaling** to **Running**.
+## How to scale - Basic, Standard, and Premium tiers
-## How to automate a scaling operation
+### [Scale up and down with Basic, Standard, and Premium](#tab/scale-up-and-down-with-basic-standard-and-premium)
-You can scale your cache instances in the Azure portal. And, you can scale using PowerShell cmdlets, Azure CLI, and by using the Microsoft Azure Management Libraries (MAML).
+#### Scale up and down using the Azure portal
-When you scale a cache up or down, both `maxmemory-reserved` and `maxfragmentationmemory-reserved` settings automatically scale in proportion to the cache size. For example, if `maxmemory-reserved` is set to 3 GB on a 6-GB cache, and you scale to 12-GB cache, the settings automatically get updated to 6 GB during scaling. When you scale down, the reverse happens.
+1. To scale your cache, [browse to the cache](cache-configure.md#configure-azure-cache-for-redis-settings) in the [Azure portal](https://portal.azure.com) and select **Scale** from the Resource menu.
-> [!NOTE]
-> When you scale a cache up or down programmatically, any `maxmemory-reserved` or `maxfragmentationmemory-reserved` are ignored as part of the update request. Only your scaling change is honored. You can update these memory settings after the scaling operation has completed.
+ :::image type="content" source="media/cache-how-to-scale/scale-a-cache.png" alt-text="Screenshot showing Scale on the resource menu.":::
+1. Choose a pricing tier in the working pane and then choose **Select**.
+
+ :::image type="content" source="media/cache-how-to-scale/select-a-tier.png" alt-text="Screenshot showing the Azure Cache for Redis tiers.":::
-- [Scale using PowerShell](#scale-using-powershell)-- [Scale using Azure CLI](#scale-using-azure-cli)-- [Scale using MAML](#scale-using-maml)
+1. While the cache is scaling to the new tier, a **Scaling Redis Cache** notification is displayed.
-### Scale using PowerShell
+ :::image type="content" source="media/cache-how-to-scale/scaling-notification.png" alt-text="Screenshot showing the notification of scaling.":::
+
+1. When scaling is complete, the status changes from **Scaling** to **Running**.
+
+ > [!NOTE]
+ > When you scale a cache up or down using the portal, both `maxmemory-reserved` and `maxfragmentationmemory-reserved` settings automatically scale in proportion to the cache size.
+ > For example, if `maxmemory-reserved` is set to 3 GB on a 6-GB cache, and you scale to 12-GB cache, the settings automatically get updated to 6 GB during scaling.
+ > When you scale down, the reverse happens.
+ >
+
+#### Scale up and down using PowerShell
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
-You can scale your Azure Cache for Redis instances with PowerShell by using the [Set-AzRedisCache](/powershell/module/az.rediscache/set-azrediscache) cmdlet when the `Size`, `Sku`, or `ShardCount` properties are modified. The following example shows how to scale a cache named `myCache` to a 2.5-GB cache.
+You can scale your Azure Cache for Redis instances with PowerShell by using the [Set-AzRedisCache](/powershell/module/az.rediscache/set-azrediscache) cmdlet when the `Size` or `Sku` properties are modified. The following example shows how to scale a cache named `myCache` to a 6-GB cache in the same tier.
```powershell
- Set-AzRedisCache -ResourceGroupName myGroup -Name myCache -Size 2.5GB
+ Set-AzRedisCache -ResourceGroupName myGroup -Name myCache -Size 6GB
+```
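+
+The `Sku` parameter switches tiers. As a sketch (reusing the same placeholder cache and resource group names), scaling the cache to the Premium tier at 6 GB might look like this:
+
+```powershell
+    # Sketch: move the cache to the Premium tier; names are placeholders.
+    Set-AzRedisCache -ResourceGroupName myGroup -Name myCache -Sku Premium -Size 6GB
+```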
+For more information on scaling with PowerShell, see [To scale an Azure Cache for Redis using PowerShell](cache-how-to-manage-redis-cache-powershell.md#scale).
+
+#### Scale up and down using Azure CLI
+
+To scale your Azure Cache for Redis instances using Azure CLI, call the [az redis update](/cli/azure/redis#az-redis-update) command. Use the `sku.capacity` property to scale within a tier, for example from a Standard C0 to a Standard C1 cache:
+
+```azurecli
+az redis update --cluster-name myCache --resource-group myGroup --set "sku.capacity"="2"
+```
+
+Use the `sku.name` and `sku.family` properties to scale up to a different tier, for instance from a Standard C1 cache to a Premium P1 cache:
+
+```azurecli
+az redis update --cluster-name myCache --resource-group myGroup --set "sku.name"="Premium" "sku.capacity"="1" "sku.family"="P"
+```
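+
+To confirm the new SKU after either operation, a quick check (sketch) is:
+
+```azurecli
+az redis show --name myCache --resource-group myGroup --query sku
+```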
+
+For more information on scaling with Azure CLI, see [Change settings of an existing Azure Cache for Redis](cache-manage-cli.md#scale).
+
+> [!NOTE]
+> When you scale a cache up or down programmatically (for example, using PowerShell or Azure CLI), any `maxmemory-reserved` or `maxfragmentationmemory-reserved` settings are ignored as part of the update request. Only your scaling change is honored. You can update these memory settings after the scaling operation has completed, as sketched after this note.
+>
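+
+For instance, here's a sketch (the 200-MB value and names are placeholders) that restores a custom `maxmemory-reserved` setting once scaling completes:
+
+```azurecli
+az redis update --name myCache --resource-group myGroup --set "redisConfiguration.maxmemory-reserved"="200"
+```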
+
+### [Scale out and in - Premium only](#tab/scale-out-and-inpremium-only)
+
+#### Create a new cache that is scaled out using clustering
+
+Clustering is enabled in the working pane when you create a new Azure Cache for Redis instance.
+
+1. Use the [_Create an open-source Redis cache_ quickstart guide](quickstart-create-redis.md) to start creating a new cache using the Azure portal.
+
+1. In the **Advanced** tab for a **Premium** cache instance, configure the settings for non-TLS port, clustering, and data persistence. To enable clustering, select **Enable**.
+
+ :::image type="content" source="media/cache-how-to-premium-clustering/redis-cache-clustering.png" alt-text="Clustering toggle.":::
+
+ You can have up to 10 shards in the cluster. After selecting **Enable**, slide the slider or type a number between 1 and 10 for **Shard count** and select **OK**.
+
+ Each shard is a primary/replica cache pair managed by Azure. The total size of the cache is calculated by multiplying the number of shards by the cache size selected in the pricing tier.
+
+ :::image type="content" source="media/cache-how-to-premium-clustering/redis-cache-clustering-selected.png" alt-text="Clustering toggle selected.":::
+
+ Once the cache is created, you connect to it and use it just like a nonclustered cache. Redis distributes the data throughout the Cache shards. If diagnostics is [enabled](cache-how-to-monitor.md#use-a-storage-account-to-export-cache-metrics), metrics are captured separately for each shard, and can be [viewed](cache-how-to-monitor.md) in Azure Cache for Redis using the Resource menu.
+
+1. Finish creating the cache using the [quickstart guide](quickstart-create-redis.md).
+
+It takes a while for the cache to be created. You can monitor progress on the Azure Cache for Redis **Overview** page. When **Status** shows as **Running**, the cache is ready to use.
+
+> [!NOTE]
+>
+> There are some minor differences required in your client application when clustering is configured. For more information, see [Do I need to make any changes to my client application to use clustering?](#do-i-need-to-make-any-changes-to-my-client-application-to-use-clustering)
+>
++
+For sample code on working with clustering with the StackExchange.Redis client, see the [clustering.cs](https://github.com/rustd/RedisSamples/blob/master/HelloWorld/Clustering.cs) portion of the [Hello World](https://github.com/rustd/RedisSamples/tree/master/HelloWorld) sample.
+
+#### Scale a running Premium cache in or out
+
+To change the cluster size on a Premium cache that you created earlier and that is already running with clustering enabled, select **Cluster size** from the Resource menu.
++
+To change the cluster size, use the slider or type a number between 1 and 10 in the **Shard count** text box. Then, select **OK** to save.
+
+Increasing the cluster size increases maximum throughput and cache size. Increasing the cluster size doesn't increase the maximum number of connections available to clients.
+
+#### Scale out and in using PowerShell
+
+You can scale out your Azure Cache for Redis instances with PowerShell by using the [Set-AzRedisCache](/powershell/module/az.rediscache/set-azrediscache) cmdlet when the `ShardCount` property is modified. The following example shows how to scale a cache named `myCache` out to three shards (that is, scale out by a factor of three):
+
+```powershell
+ Set-AzRedisCache -ResourceGroupName myGroup -Name myCache -ShardCount 3
```

For more information on scaling with PowerShell, see [To scale an Azure Cache for Redis using PowerShell](cache-how-to-manage-redis-cache-powershell.md#scale).
-### Scale using Azure CLI
+#### Scale out and in using Azure CLI
+
+To scale your Azure Cache for Redis instances using Azure CLI, call the [az redis update](/cli/azure/redis#az-redis-update) command and use the `shard-count` property. The following example shows how to scale out a cache named `myCache` to use three shards (that is, scale out by a factor of three).
-To scale your Azure Cache for Redis instances using Azure CLI, call the `azure rediscache set` command and pass in the configuration changes you want that include a new size, sku, or cluster size, depending on the scaling operation you wish.
+```azurecli
+az redis update --name myCache --resource-group myGroup --set shard-count=3
+```
For more information on scaling with Azure CLI, see [Change settings of an existing Azure Cache for Redis](cache-manage-cli.md#scale).
-### Scale using MAML
+> [!NOTE]
+> When you scale a cache up or down programmatically (for example, using PowerShell or Azure CLI), any `maxmemory-reserved` or `maxfragmentationmemory-reserved` settings are ignored as part of the update request. Only your scaling change is honored. You can update these memory settings after the scaling operation has completed.
+>
+
+> [!NOTE]
+> Scaling a cluster runs the [MIGRATE](https://redis.io/commands/migrate) command, which is an expensive command. For minimal impact, consider running this operation during non-peak hours. During the migration process, you see a spike in server load. Scaling a cluster is a long-running process, and the amount of time taken depends on the number of keys and the size of the values associated with those keys.
+>
+>
+
+## How to scale up and out - Enterprise and Enterprise Flash tiers
+
+The Enterprise and Enterprise Flash tiers are able to scale up and scale out in one operation. Other tiers require separate operations for each action.
+
+> [!CAUTION]
+> The Enterprise and Enterprise Flash tiers do not yet support _scale down_ or _scale in_ operations.
+>
++
+### Scale using the Azure portal
+
+1. To scale your cache, [browse to the cache](cache-configure.md#configure-azure-cache-for-redis-settings) in the [Azure portal](https://portal.azure.com) and select **Scale** from the Resource menu.
+
+ :::image type="content" source="media/cache-how-to-scale/cache-enterprise-scale.png" alt-text="Screenshot showing Scale selected in the Resource menu for an Enterprise cache.":::
+
+1. To scale up, choose a different **Cache type** and then choose **Save**.
+ > [!IMPORTANT]
+ > You can only scale up at this time. You cannot scale down.
+
+ :::image type="content" source="media/cache-how-to-scale/cache-enterprise-scale-up.png" alt-text="Screenshot showing the Enterprise tiers in the working pane.":::
+
+1. To scale out, increase the **Capacity** slider. Capacity increases in increments of two. This number reflects how many underlying Redis Enterprise nodes are being added. This number is always a multiple of two to reflect nodes being added for both primary and replica shards.
+ > [!IMPORTANT]
+ > You can only scale out, increasing capacity, at this time. You cannot scale in.
+
+   :::image type="content" source="media/cache-how-to-scale/cache-enterprise-capacity.png" alt-text="Screenshot showing Capacity in the working pane with a red box around it.":::
+
+1. While the cache is scaling to the new tier, a **Scaling Redis Cache** notification is displayed.
+
+ :::image type="content" source="media/cache-how-to-scale/cache-enterprise-notifications.png" alt-text="Screenshot showing notification of scaling an Enterprise cache.":::
+
-To scale your Azure Cache for Redis instances using the [Microsoft Azure Management Libraries (MAML)](https://azure.microsoft.com/updates/management-libraries-for-net-release-announcement/), call the `IRedisOperations.CreateOrUpdate` method and pass in the new size for the `RedisProperties.SKU.Capacity`.
+1. When scaling is complete, the status changes from **Scaling** to **Running**.
-```csharp
- static void Main(string[] args)
- {
- // For instructions on getting the access token, see
- // https://azure.microsoft.com/documentation/articles/cache-configure/#access-keys
- string token = GetAuthorizationHeader();
- TokenCloudCredentials creds = new TokenCloudCredentials(subscriptionId,token);
+### Scale using PowerShell
- RedisManagementClient client = new RedisManagementClient(creds);
- var redisProperties = new RedisProperties();
- // To scale, set a new size for the redisSKUCapacity parameter.
- redisProperties.Sku = new Sku(redisSKUName,redisSKUFamily,redisSKUCapacity);
- redisProperties.RedisVersion = redisVersion;
- var redisParams = new RedisCreateOrUpdateParameters(redisProperties, redisCacheRegion);
- client.Redis.CreateOrUpdate(resourceGroupName,cacheName, redisParams);
- }
+You can scale your Azure Cache for Redis instances with PowerShell by using the [Update-AzRedisEnterpriseCache](/powershell/module/az.redisenterprisecache/update-azredisenterprisecache) cmdlet. Modify the `Sku` property to scale the instance up, or the `Capacity` property to scale it out. The following example shows how to scale a cache named `myCache` to an Enterprise E20 (25 GB) instance with a capacity of 4.
+
+```powershell
+ Update-AzRedisEnterpriseCache -ResourceGroupName myGroup -Name myCache -Sku Enterprise_E20 -Capacity 4
```
-For more information, see the [Manage Azure Cache for Redis using MAML](https://github.com/rustd/RedisSamples/tree/master/ManageCacheUsingMAML) sample.
+### Scale using Azure CLI
+
+To scale your Azure Cache for Redis instances using Azure CLI, call the [az redisenterprise update](/cli/azure/redisenterprise#az-redisenterprise-update) command. Modify the `sku` property to scale the instance up, or the `capacity` property to scale it out. The following example shows how to scale a cache named `myCache` to an Enterprise E20 (25 GB) instance with a capacity of 4.
+
+```azurecli
+az redisenterprise update --cluster-name "myCache" --resource-group "myGroup" --sku "Enterprise_E20" --capacity 4
+```
## Scaling FAQ
The following list contains answers to commonly asked questions about Azure Cach
- [Can I scale to, from, or within a Premium cache?](#can-i-scale-to-from-or-within-a-premium-cache)
- [After scaling, do I have to change my cache name or access keys?](#after-scaling-do-i-have-to-change-my-cache-name-or-access-keys)
- [How does scaling work?](#how-does-scaling-work)
-- [Will I lose data from my cache during scaling?](#will-i-lose-data-from-my-cache-during-scaling)
+- [Do I lose data from my cache during scaling?](#do-i-lose-data-from-my-cache-during-scaling)
- [Is my custom databases setting affected during scaling?](#is-my-custom-databases-setting-affected-during-scaling)
-- [Will my cache be available during scaling?](#will-my-cache-be-available-during-scaling)
+- [Is my cache available during scaling?](#is-my-cache-available-during-scaling)
- [Are there scaling limitations with geo-replication?](#are-there-scaling-limitations-with-geo-replication)
- [Operations that aren't supported](#operations-that-arent-supported)
- [How long does scaling take?](#how-long-does-scaling-take)
- [How can I tell when scaling is complete?](#how-can-i-tell-when-scaling-is-complete)
+- [Do I need to make any changes to my client application to use clustering?](#do-i-need-to-make-any-changes-to-my-client-application-to-use-clustering)
+- [How are keys distributed in a cluster?](#how-are-keys-distributed-in-a-cluster)
+- [What is the largest cache size I can create?](#what-is-the-largest-cache-size-i-can-create)
+- [Do all Redis clients support clustering?](#do-all-redis-clients-support-clustering)
+- [How do I connect to my cache when clustering is enabled?](#how-do-i-connect-to-my-cache-when-clustering-is-enabled)
+- [Can I directly connect to the individual shards of my cache?](#can-i-directly-connect-to-the-individual-shards-of-my-cache)
+- [Can I configure clustering for a previously created cache?](#can-i-configure-clustering-for-a-previously-created-cache)
+- [Can I configure clustering for a basic or standard cache?](#can-i-configure-clustering-for-a-basic-or-standard-cache)
+- [Can I use clustering with the Redis ASP.NET Session State and Output Caching providers?](#can-i-use-clustering-with-the-redis-aspnet-session-state-and-output-caching-providers)
+- [I'm getting MOVE exceptions when using StackExchange.Redis and clustering, what should I do?](#im-getting-move-exceptions-when-using-stackexchangeredis-and-clustering-what-should-i-do)
+- [What is the difference between OSS Clustering and Enterprise Clustering on Enterprise-tier caches?](#what-is-the-difference-between-oss-clustering-and-enterprise-clustering-on-enterprise-tier-caches)
+- [How many shards do Enterprise tier caches use?](#how-many-shards-do-enterprise-tier-caches-use)
### Can I scale to, from, or within a Premium cache?

- You can't scale from a **Premium** cache down to a **Basic** or **Standard** pricing tier.
- You can scale from one **Premium** cache pricing tier to another.
- You can't scale from a **Basic** cache directly to a **Premium** cache. First, scale from **Basic** to **Standard** in one scaling operation, and then from **Standard** to **Premium** in a later scaling operation.
+- You can't scale from a **Premium** cache to an **Enterprise** or **Enterprise Flash** cache.
- If you enabled clustering when you created your **Premium** cache, you can [change the cluster size](cache-how-to-premium-clustering.md#set-up-clustering). If your cache was created without clustering enabled, you can configure clustering at a later time.
-For more information, see [How to configure clustering for a Premium Azure Cache for Redis](cache-how-to-premium-clustering.md).
- ### After scaling, do I have to change my cache name or access keys? No, your cache name and keys are unchanged during a scaling operation. ### How does scaling work? -- When you scale a **Basic** cache to a different size, it's shut down and a new cache is provisioned using the new size. During this time, the cache is unavailable and all data in the cache is lost.
+- When you scale a **Basic** cache to a different size, it's shut down, and a new cache is provisioned using the new size. During this time, the cache is unavailable and all data in the cache is lost.
- When you scale a **Basic** cache to a **Standard** cache, a replica cache is provisioned and the data is copied from the primary cache to the replica cache. The cache remains available during the scaling process.
-- When you scale a **Standard** cache to a different size or to a **Premium** cache, one of the replicas is shut down and reprovisioned to the new size and the data transferred over, and then the other replica does a failover before it's reprovisioned, similar to the process that occurs during a failure of one of the cache nodes.
+- When you scale a **Standard**, **Premium**, **Enterprise**, or **Enterprise Flash** cache to a different size, one of the replicas is shut down and reprovisioned to the new size and the data transferred over, and then the other replica does a failover before it's reprovisioned, similar to the process that occurs during a failure of one of the cache nodes.
- When you scale out a clustered cache, new shards are provisioned and added to the Redis server cluster. Data is then resharded across all shards.
- When you scale in a clustered cache, data is first resharded and then cluster size is reduced to required shards.
-- In some cases, such as scaling or migrating your cache to a different cluster, the underlying IP address of the cache can change. The DNS record for the cache changes and is transparent to most applications. However, if you use an IP address to configure the connection to your cache, or to configure NSGs, or firewalls allowing traffic to the cache, your application might have trouble connecting sometime after that the DNS record updates.
+- In some cases, such as scaling or migrating your cache to a different cluster, the underlying IP address of the cache can change. The DNS record for the cache changes and is transparent to most applications. However, if you use an IP address to configure the connection to your cache, or to configure NSGs, or firewalls allowing traffic to the cache, your application might have trouble connecting sometime after the DNS record updates.
-### Will I lose data from my cache during scaling?
+### Do I lose data from my cache during scaling?
- When you scale a **Basic** cache to a new size, all data is lost and the cache is unavailable during the scaling operation.
- When you scale a **Basic** cache to a **Standard** cache, the data in the cache is typically preserved.
-- When you scale a **Standard** cache to a larger size or tier, or a **Premium** cache is scaled to a larger size, all data is typically preserved. When you scale a Standard or Premium cache to a smaller size, data can be lost if the data size exceeds the new smaller size when it's scaled down. If data is lost when scaling down, keys are evicted using the [allkeys-lru](https://redis.io/topics/lru-cache) eviction policy.
+- When you scale a **Standard**, **Premium**, **Enterprise**, or **Enterprise Flash** cache to a larger size, all data is typically preserved. When you scale a Standard or Premium cache to a smaller size, data can be lost if the data size exceeds the new smaller size when it's scaled down. If data is lost when scaling down, keys are evicted using the [allkeys-lru](https://redis.io/topics/lru-cache) eviction policy.
### Is my custom databases setting affected during scaling?
If you configured a custom value for the `databases` setting during cache creati
- If you're using a custom number of `databases` that exceeds the limits of the new tier, the `databases` setting is lowered to the limits of the new tier and all data in the removed databases is lost.
- When you scale to a pricing tier with the same or higher `databases` limit than the current tier, your `databases` setting is kept and no data is lost.
-While Standard and Premium caches have a 99.9% SLA for availability, there's no SLA for data loss.
+While Standard, Premium, Enterprise, and Enterprise Flash caches have an SLA for availability, there's no SLA for data loss.
-### Will my cache be available during scaling?
+### Is my cache available during scaling?
-- **Standard** and **Premium** caches remain available during the scaling operation. However, connection blips can occur while scaling Standard and Premium caches, and also while scaling from Basic to Standard caches. These connection blips are expected to be small and redis clients can generally re-establish their connection instantly.
+- **Standard**, **Premium**, **Enterprise**, and **Enterprise Flash** caches remain available during the scaling operation. However, connection blips can occur while scaling these caches, and also while scaling from **Basic** to **Standard** caches. These connection blips are expected to be small, and Redis clients can generally re-establish their connection instantly.
+- For Enterprise and Enterprise Flash caches using active geo-replication, scaling only a subset of linked caches can introduce issues over time in some cases. We recommend scaling all caches in the geo-replication group together where possible.
- **Basic** caches are offline during scaling operations to a different size. Basic caches remain available when scaling from **Basic** to **Standard** but might experience a small connection blip. If a connection blip occurs, Redis clients can generally re-establish their connection instantly. ### Are there scaling limitations with geo-replication?
-With geo-replication configured, you might notice that you can't scale a cache or change the shards in a cluster. A geo-replication link between two caches prevents you from scaling operation or changing the number of shards in a cluster. You must unlink the cache to issue these commands. For more information, see [Configure Geo-replication](cache-how-to-geo-replication.md).
+With [passive geo-replication](cache-how-to-geo-replication.md) configured, you might notice that you can't scale a cache or change the shards in a cluster. A geo-replication link between two caches prevents scaling operations or changing the number of shards in a cluster. You must unlink the cache to issue these commands. For more information, see [Configure Geo-replication](cache-how-to-geo-replication.md).
+
+With [active geo-replication](cache-how-to-active-geo-replication.md) configured, you can't scale a cache. All caches in a geo-replication group must be the same size and capacity.
### Operations that aren't supported
With geo-replication configured, you might notice that you can't scale a cache
- You can't scale from a **Standard** cache down to a **Basic** cache. - You can scale from a **Basic** cache to a **Standard** cache but you can't change the size at the same time. If you need a different size, you can do a scaling operation to the size you want at a later time. - You can't scale from a **Basic** cache directly to a **Premium** cache. First scale from **Basic** to **Standard** in one scaling operation, and then scale from **Standard** to **Premium** in a later operation.
+- You can't scale from a **Premium** cache to an **Enterprise** or **Enterprise Flash** cache.
- You can't scale from a larger size down to the **C0 (250 MB)** size. If a scaling operation fails, the service tries to revert the operation, and the cache will revert to the original size.
Generally, when you scale a cache with no data, it takes approximately 20 minute
### How can I tell when scaling is complete? In the Azure portal, you can see the scaling operation in progress. When scaling is complete, the status of the cache changes to **Running**.+
+### Do I need to make any changes to my client application to use clustering?
+
+* When clustering is enabled, only database 0 is available. If your client application uses multiple databases and it tries to read or write to a database other than 0, the following exception is thrown: `Unhandled Exception: StackExchange.Redis.RedisConnectionException: ProtocolFailure on GET >` `StackExchange.Redis.RedisCommandException: Multiple databases are not supported on this server; cannot switch to database: 6`
+
+ For more information, see [Redis Cluster Specification - Implemented subset](https://redis.io/topics/cluster-spec#implemented-subset).
+* If you're using [StackExchange.Redis](https://www.nuget.org/packages/StackExchange.Redis/), you must use 1.0.481 or later. You connect to the cache using the same [endpoints, ports, and keys](cache-configure.md#properties) that you use when connecting to a cache where clustering is disabled. The only difference is that all reads and writes must be done to database 0 (see the sketch after this list).
+
+ Other clients may have different requirements. See [Do all Redis clients support clustering?](#do-all-redis-clients-support-clustering)
+* If your application uses multiple key operations batched into a single command, all keys must be located in the same shard. To locate keys in the same shard, see [How are keys distributed in a cluster?](#how-are-keys-distributed-in-a-cluster)
+* If you're using Redis ASP.NET Session State provider, you must use 2.0.1 or higher. See [Can I use clustering with the Redis ASP.NET Session State and Output Caching providers?](#can-i-use-clustering-with-the-redis-aspnet-session-state-and-output-caching-providers)
+
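As a minimal sketch (the cache host name and access key are placeholders, not real values), connecting with StackExchange.Redis to a clustered cache looks the same as connecting to a non-clustered one; the only client-side difference is that all reads and writes go to database 0:

```csharp
using System;
using StackExchange.Redis;

class ClusteredCacheDemo
{
    static void Main()
    {
        // <cache-name> and <access-key> are placeholders.
        var muxer = ConnectionMultiplexer.Connect(
            "<cache-name>.redis.cache.windows.net:6380,password=<access-key>,ssl=true,abortConnect=false");

        // With clustering enabled, only database 0 exists.
        IDatabase db = muxer.GetDatabase(0);
        db.StringSet("greeting", "hello");
        Console.WriteLine(db.StringGet("greeting"));

        // Issuing commands against any database other than 0 throws the
        // RedisCommandException shown in the first bullet above.
    }
}
```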
+> [!IMPORTANT]
+> When using the Enterprise or Enterprise Flash tiers, you are given the choice of _OSS Cluster Mode_ or _Enterprise Cluster Mode_. OSS Cluster Mode is the same as clustering on the Premium tier and follows the open source clustering specification. Enterprise Cluster Mode can be less performant, but uses Redis Enterprise clustering, which doesn't require any client changes to use. For more information, see [Clustering on Enterprise](cache-best-practices-enterprise-tiers.md#clustering-on-enterprise).
+>
+>
+
+### How are keys distributed in a cluster?
+
+Per the Redis documentation on [Keys distribution model](https://redis.io/topics/cluster-spec#keys-distribution-model): The key space is split into 16,384 slots. Each key is hashed and assigned to one of these slots, which are distributed across the nodes of the cluster. Using hash tags, you can configure which part of the key is hashed, to ensure that multiple keys are located in the same shard.
+
+* Keys with a hash tag - if any part of the key is enclosed in `{` and `}`, only that part of the key is hashed for the purposes of determining the hash slot of a key. For example, the following three keys would be located in the same shard: `{key}1`, `{key}2`, and `{key}3` since only the `key` part of the name is hashed. For a complete list of keys hash tag specifications, see [Keys hash tags](https://redis.io/topics/cluster-spec#keys-hash-tags).
+* Keys without a hash tag - the entire key name is used for hashing, resulting in a statistically even distribution across the shards of the cache.
+
+For best performance and throughput, we recommend distributing the keys evenly. If you're using keys with a hash tag, it's the application's responsibility to ensure the keys are distributed evenly.
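To make the distribution concrete, here's a sketch of the slot calculation the cluster specification describes: the shard-selecting portion of the key (the hash tag if present, otherwise the whole key) is run through CRC16 (XModem variant) and taken modulo 16,384. This is an illustration of the spec, not code an application needs, since the client library computes slots for you:

```csharp
using System;
using System.Text;

static class HashSlot
{
    // CRC16/XMODEM: polynomial 0x1021, initial value 0, per the Redis cluster spec.
    static ushort Crc16(byte[] data)
    {
        ushort crc = 0;
        foreach (byte b in data)
        {
            crc ^= (ushort)(b << 8);
            for (int i = 0; i < 8; i++)
                crc = (crc & 0x8000) != 0 ? (ushort)((crc << 1) ^ 0x1021) : (ushort)(crc << 1);
        }
        return crc;
    }

    public static int For(string key)
    {
        // If the key contains a non-empty {hash tag}, only the tag is hashed.
        int open = key.IndexOf('{');
        if (open >= 0)
        {
            int close = key.IndexOf('}', open + 1);
            if (close > open + 1)
                key = key.Substring(open + 1, close - open - 1);
        }
        return Crc16(Encoding.UTF8.GetBytes(key)) % 16384;
    }

    public static void Main()
    {
        // All three keys share the tag "key", so they map to the same slot (and shard).
        Console.WriteLine(For("{key}1") == For("{key}2") && For("{key}2") == For("{key}3")); // True
    }
}
```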
+
+For more information, see [Keys distribution model](https://redis.io/topics/cluster-spec#keys-distribution-model), [Redis Cluster data sharding](https://redis.io/topics/cluster-tutorial#redis-cluster-data-sharding), and [Keys hash tags](https://redis.io/topics/cluster-spec#keys-hash-tags).
+
+For sample code about working with clustering and locating keys in the same shard with the StackExchange.Redis client, see the [clustering.cs](https://github.com/rustd/RedisSamples/blob/master/HelloWorld/Clustering.cs) portion of the [Hello World](https://github.com/rustd/RedisSamples/tree/master/HelloWorld) sample.
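As a shorter inline sketch (key names and connection values are illustrative placeholders), a multi-key read is guaranteed to succeed when the keys share a hash tag, because they all land in the same shard:

```csharp
using System;
using StackExchange.Redis;

class HashTagDemo
{
    static void Main()
    {
        var muxer = ConnectionMultiplexer.Connect(
            "<cache-name>.redis.cache.windows.net:6380,password=<access-key>,ssl=true");
        IDatabase db = muxer.GetDatabase();

        // All three keys hash on "user:42", so they live in the same shard.
        db.StringSet("{user:42}:name", "Ada");
        db.StringSet("{user:42}:email", "ada@example.com");

        // A multi-key GET is legal because the keys share one hash slot.
        RedisValue[] values = db.StringGet(
            new RedisKey[] { "{user:42}:name", "{user:42}:email" });
        Console.WriteLine(string.Join(", ", values));

        // Mixing keys without a shared tag (for example "user:1" and "user:2")
        // in one multi-key command can fail with a cross-slot error.
    }
}
```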
+
+### What is the largest cache size I can create?
+
+The largest cache size you can have is 4.5 TB. This size is a clustered F1500 cache with capacity 9. For more information, see [Azure Cache for Redis Pricing](https://azure.microsoft.com/pricing/details/cache/).
+
+### Do all Redis clients support clustering?
+
+Many client libraries support Redis clustering, but not all. Check the documentation for your library to verify that the library and version you're using support clustering. StackExchange.Redis is one library that supports clustering, in its newer versions. For more information on other clients, see the [Playing with the cluster](https://redis.io/topics/cluster-tutorial#playing-with-the-cluster) section of the [Redis cluster tutorial](https://redis.io/topics/cluster-tutorial).
+
+The Redis clustering protocol requires each client to connect to each shard directly in clustering mode, and also defines new error responses such as 'MOVED' and 'CROSSSLOT'. When you attempt to use a client library that doesn't support clustering with a cluster mode cache, the result can be many [MOVED redirection exceptions](https://redis.io/topics/cluster-spec#moved-redirection), or a broken application if you're doing cross-slot multi-key requests.
+
+> [!NOTE]
+> If you're using StackExchange.Redis as your client, verify that you're using [StackExchange.Redis](https://www.nuget.org/packages/StackExchange.Redis/) version 1.0.481 or later for clustering to work correctly. For more information on any issues with move exceptions, see [move exceptions](#im-getting-move-exceptions-when-using-stackexchangeredis-and-clustering-what-should-i-do).
+>
+### How do I connect to my cache when clustering is enabled?
+
+You can connect to your cache using the same [endpoints](cache-configure.md#properties), [ports](cache-configure.md#properties), and [keys](cache-configure.md#access-keys) that you use when connecting to a cache that doesn't have clustering enabled. Redis manages the clustering on the backend so you don't have to manage it from your client.
+
+### Can I directly connect to the individual shards of my cache?
+
+The clustering protocol requires the client to make the correct shard connections, so the client library should make the shard connections for you. With that said, each shard consists of a primary/replica cache pair, collectively known as a cache instance. You can connect to these cache instances using the redis-cli utility in the [unstable](https://redis.io/download) branch of the Redis repository at GitHub. This version implements basic support when started with the `-c` switch. For more information, see [Playing with the cluster](https://redis.io/topics/cluster-tutorial#playing-with-the-cluster) on [https://redis.io](https://redis.io) in the [Redis cluster tutorial](https://redis.io/topics/cluster-tutorial).
+
+You need to use the `-p` switch to specify the correct port to connect to. Use the [CLUSTER NODES](https://redis.io/commands/cluster-nodes/) command to determine the exact ports used for the primary and replica nodes (see the sketch after the following list). The following port ranges are used:
+
+- For non-TLS Premium tier caches, ports are available in the `130XX` range
+- For TLS enabled Premium tier caches, ports are available in the `150XX` range
+- For Enterprise and Enterprise Flash caches using OSS clustering, the initial connection is through port 10000. Connecting to individual nodes can be done using ports in the 85XX range. The 85XX ports will change over time and shouldn't be hardcoded into your application.
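As a sketch, you can also retrieve the node list programmatically by issuing `CLUSTER NODES` through StackExchange.Redis (the endpoint and key below are placeholders; the output format comes from the Redis command itself):

```csharp
using System;
using StackExchange.Redis;

class ClusterNodesDemo
{
    static void Main()
    {
        var muxer = ConnectionMultiplexer.Connect(
            "<cache-name>.redis.cache.windows.net:6380,password=<access-key>,ssl=true");

        // CLUSTER NODES returns one line per node: id, host:port, flags
        // (master/replica), and slot assignments.
        var nodes = muxer.GetDatabase().Execute("CLUSTER", "NODES");
        Console.WriteLine(nodes.ToString());
    }
}
```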
+
+### Can I configure clustering for a previously created cache?
+
+Yes. First, ensure that your cache is in the Premium tier by scaling it up. Next, you see the cluster configuration options, including an option to enable clustering. You can change the cluster size after the cache is created, or after you have enabled clustering for the first time.
+
+>[!IMPORTANT]
+>You can't undo enabling clustering. A cache with clustering enabled and only one shard behaves *differently* from a cache of the same size with *no* clustering.
+
+All Enterprise and Enterprise Flash tier caches are always clustered.
+
+### Can I configure clustering for a basic or standard cache?
+
+Clustering is only available for Premium, Enterprise, and Enterprise Flash caches.
+
+### Can I use clustering with the Redis ASP.NET Session State and Output Caching providers?
+
+* **Redis Output Cache provider** - no changes required.
+* **Redis Session State provider** - to use clustering, you must use [RedisSessionStateProvider](https://www.nuget.org/packages/Microsoft.Web.RedisSessionStateProvider) 2.0.1 or higher; otherwise, an exception is thrown, which is a breaking change. For more information, see [v2.0.0 Breaking Change Details](https://github.com/Azure/aspnet-redis-providers/wiki/v2.0.0-Breaking-Change-Details).
+
+### I'm getting MOVE exceptions when using StackExchange.Redis and clustering, what should I do?
+If you're using StackExchange.Redis and receive `MOVE` exceptions when using clustering, ensure that you're using [StackExchange.Redis 1.1.603](https://www.nuget.org/packages/StackExchange.Redis/) or later. For instructions on configuring your .NET applications to use StackExchange.Redis, see [Configure the cache clients](cache-dotnet-how-to-use-azure-redis-cache.md#configure-the-cache-client).
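On a version with cluster support, the multiplexer handles MOVED redirections for you. A minimal configuration sketch, with placeholder endpoint and key values:

```csharp
using StackExchange.Redis;

static class CacheConnection
{
    public static ConnectionMultiplexer Connect()
    {
        var options = new ConfigurationOptions
        {
            EndPoints = { "<cache-name>.redis.cache.windows.net:6380" }, // placeholder
            Password = "<access-key>",                                   // placeholder
            Ssl = true,
            AbortOnConnectFail = false // retry in the background instead of failing startup
        };
        return ConnectionMultiplexer.Connect(options);
    }
}
```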
+
+### What is the difference between OSS Clustering and Enterprise Clustering on Enterprise tier caches?
+
+OSS Cluster Mode is the same as clustering on the Premium tier and follows the open source clustering specification. Enterprise Cluster Mode can be less performant, but uses Redis Enterprise clustering, which doesn't require any client changes to use. For more information, see [Clustering on Enterprise](cache-best-practices-enterprise-tiers.md#clustering-on-enterprise).
+
+### How many shards do Enterprise tier caches use?
+
+Unlike Basic, Standard, and Premium tier caches, Enterprise and Enterprise Flash caches can take advantage of multiple shards on a single node. For more information, see [Sharding and CPU utilization](cache-best-practices-enterprise-tiers.md#sharding-and-cpu-utilization).
+
+## Next steps
+
+- [Configure your maxmemory-reserved setting](cache-best-practices-memory-management.md#configure-your-maxmemory-reserved-setting)
+- [Best practices for scaling](cache-best-practices-scale.md)
azure-cache-for-redis Cache Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-overview.md
Previously updated : 03/15/2022 Last updated : 03/28/2023 # About Azure Cache for Redis
Azure Cache for Redis improves application performance by supporting common appl
| | -- | | [Data cache](cache-web-app-cache-aside-leaderboard.md) | Databases are often too large to load directly into a cache. It's common to use the [cache-aside](/azure/architecture/patterns/cache-aside) pattern to load data into the cache only as needed. When the system makes changes to the data, the system can also update the cache, which is then distributed to other clients. Additionally, the system can set an expiration on data, or use an eviction policy to trigger data updates into the cache.| | [Content cache](cache-aspnet-output-cache-provider.md) | Many web pages are generated from templates that use static content such as headers, footers, banners. These static items shouldn't change often. Using an in-memory cache provides quick access to static content compared to backend datastores. This pattern reduces processing time and server load, allowing web servers to be more responsive. It can allow you to reduce the number of servers needed to handle loads. Azure Cache for Redis provides the Redis Output Cache Provider to support this pattern with ASP.NET.|
-| [Session store](cache-aspnet-session-state-provider.md) | This pattern is commonly used with shopping carts and other user history data that a web application might associate with user cookies. Storing too much in a cookie can have a negative effect on performance as the cookie size grows and is passed and validated with every request. A typical solution uses the cookie as a key to query the data in a database. Using an in-memory cache, like Azure Cache for Redis, to associate information with a user is much faster than interacting with a full relational database. |
+| [Session store](cache-aspnet-session-state-provider.md) | This pattern is commonly used with shopping carts and other user history data that a web application might associate with user cookies. Storing too much in a cookie can have a negative effect on performance as the cookie size grows and is passed and validated with every request. A typical solution uses the cookie as a key to query the data in a database. Using an in-memory cache, like Azure Cache for Redis, to associate information with a user is faster than interacting with a full relational database. |
| Job and message queuing | Applications often add tasks to a queue when the operations associated with the request take time to execute. Longer running operations are queued to be processed in sequence, often by another server. This method of deferring work is called task queuing. Azure Cache for Redis provides a distributed queue to enable this pattern in your application.| | Distributed transactions | Applications sometimes require a series of commands against a backend data-store to execute as a single atomic operation. All commands must succeed, or all must be rolled back to the initial state. Azure Cache for Redis supports executing a batch of commands as a single [transaction](https://redis.io/topics/transactions). |
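The cache-aside row in the table above is essentially a three-step read path: check the cache, load from the system of record on a miss, then populate the cache with an expiration. A minimal sketch with StackExchange.Redis, where `LoadFromDatabase` is a hypothetical stand-in for the real data-access call:

```csharp
using System;
using StackExchange.Redis;

static class CacheAside
{
    // Returns the value for a key, loading and caching it on a miss.
    public static string GetValue(IDatabase cache, string key)
    {
        // 1. Try the cache first.
        RedisValue cached = cache.StringGet(key);
        if (cached.HasValue)
            return cached;

        // 2. On a miss, load from the system of record (hypothetical helper)...
        string value = LoadFromDatabase(key);

        // 3. ...and populate the cache with an expiration so entries age out.
        cache.StringSet(key, value, TimeSpan.FromMinutes(5));
        return value;
    }

    // Stand-in for a real data-access call.
    static string LoadFromDatabase(string key) => $"value-for-{key}";
}
```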
Azure Cache for Redis is available in these tiers:
| Tier | Description | |||
-| Basic | An OSS Redis cache running on a single VM. This tier has no service-level agreement (SLA) and is ideal for development/test and non-critical workloads. |
+| Basic | An OSS Redis cache running on a single VM. This tier has no service-level agreement (SLA) and is ideal for development/test and noncritical workloads. |
| Standard | An OSS Redis cache running on two VMs in a replicated configuration. | | Premium | High-performance OSS Redis caches. This tier offers higher throughput, lower latency, better availability, and more features. Premium caches are deployed on more powerful VMs compared to the VMs for Basic or Standard caches. | | Enterprise | High-performance caches powered by Redis Inc.'s Redis Enterprise software. This tier supports Redis modules including RediSearch, RedisBloom, RedisJSON, and RedisTimeSeries. Also, it offers even higher availability than the Premium tier. |
-| Enterprise Flash | Cost-effective large caches powered by Redis Inc.'s Redis Enterprise software. This tier extends Redis data storage to non-volatile memory, which is cheaper than DRAM, on a VM. It reduces the overall per-GB memory cost. |
+| Enterprise Flash | Cost-effective large caches powered by Redis Inc.'s Redis Enterprise software. This tier extends Redis data storage to nonvolatile memory, which is cheaper than DRAM, on a VM. It reduces the overall per-GB memory cost. |
### Feature comparison
Consider the following options when choosing an Azure Cache for Redis tier:
- **Memory**: The Basic and Standard tiers offer 250 MB – 53 GB; the Premium tier 6 GB - 1.2 TB; the Enterprise tiers 12 GB - 14 TB. To create a Premium tier cache larger than 120 GB, you can use Redis OSS clustering. For more information, see [Azure Cache for Redis Pricing](https://azure.microsoft.com/pricing/details/cache/). For more information, see [How to configure clustering for a Premium Azure Cache for Redis](cache-how-to-premium-clustering.md). - **Performance**: Caches in the Premium and Enterprise tiers are deployed on hardware that has faster processors, giving better performance compared to the Basic or Standard tier. Premium tier Caches have higher throughput and lower latencies. For more information, see [Azure Cache for Redis performance](./cache-planning-faq.yml#azure-cache-for-redis-performance).-- **Dedicated core for Redis server**: All caches except C0 run dedicated VM cores. Redis, by design, uses only one thread for command processing. Azure Cache for Redis uses other cores for I/O processing. Having more cores improves throughput performance even though it may not produce linear scaling. Furthermore, larger VM sizes typically come with higher bandwidth limits than smaller ones. That helps you avoid network saturation, which will cause timeouts in your application.
+- **Dedicated core for Redis server**: All caches except C0 run dedicated VM cores. Redis, by design, uses only one thread for command processing. Azure Cache for Redis uses other cores for I/O processing. Having more cores improves throughput performance even though it may not produce linear scaling. Furthermore, larger VM sizes typically come with higher bandwidth limits than smaller ones. That helps you avoid network saturation that causes timeouts in your application.
- **Network performance**: If you have a workload that requires high throughput, the Premium or Enterprise tier offers more bandwidth compared to Basic or Standard. Also within each tier, larger size caches have more bandwidth because of the underlying VM that hosts the cache. For more information, see [Azure Cache for Redis performance](./cache-planning-faq.yml#azure-cache-for-redis-performance). - **Maximum number of client connections**: The Premium and Enterprise tiers offer the maximum numbers of clients that can connect to Redis, offering higher numbers of connections for larger sized caches. Clustering increases the total amount of network bandwidth available for a clustered cache. - **High availability**: Azure Cache for Redis provides multiple [high availability](cache-high-availability.md) options. It guarantees that a Standard, Premium, or Enterprise cache is available according to our [SLA](https://azure.microsoft.com/support/legal/sla/cache/v1_0/). The SLA only covers connectivity to the cache endpoints. The SLA doesn't cover protection from data loss. We recommend using the Redis data persistence feature in the Premium and Enterprise tiers to increase resiliency against data loss.
Consider the following options when choosing an Azure Cache for Redis tier:
- **Network isolation**: Azure Private Link and Virtual Network (VNET) deployments provide enhanced security and traffic isolation for your Azure Cache for Redis. VNET allows you to further restrict access through network access control policies. For more information, see [Azure Cache for Redis with Azure Private Link](cache-private-link.md) and [How to configure Virtual Network support for a Premium Azure Cache for Redis](cache-how-to-premium-vnet.md). - **Redis Modules**: Enterprise tiers support [RediSearch](https://docs.redis.com/latest/modules/redisearch/), [RedisBloom](https://docs.redis.com/latest/modules/redisbloom/), [RedisTimeSeries](https://docs.redis.com/latest/modules/redistimeseries/), and [RedisJSON](https://docs.redis.com/latest/modules/redisjson/) (preview). These modules add new data types and functionality to Redis.
-You can scale your cache from the Basic tier up to Premium after it has been created. Scaling down to a lower tier isn't supported currently. For step-by-step scaling instructions, see [How to Scale Azure Cache for Redis](cache-how-to-scale.md) and [How to automate a scaling operation](cache-how-to-scale.md#how-to-automate-a-scaling-operation).
+You can scale your cache from the Basic tier up to Premium after it has been created. Scaling down to a lower tier isn't supported currently. For step-by-step scaling instructions, see [How to Scale Azure Cache for Redis](cache-how-to-scale.md) and [How to scale - Basic, Standard, and Premium tiers](cache-how-to-scale.md#how-to-scalebasic-standard-and-premium-tiers).
### Special considerations for Enterprise tiers
azure-cache-for-redis Cache Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-private-link.md
You can restrict public access to the private endpoint of your cache by disablin
>[!Important] > There is a `publicNetworkAccess` flag, which is `Disabled` by default. > You can set the value to `Disabled` or `Enabled`. When set to `Enabled`, this flag allows both public and private endpoint access to the cache. When set to `Disabled`, it allows only private endpoint access. For more information on how to change the value, see the [FAQ](#how-can-i-change-my-private-endpoint-to-be-disabled-or-enabled-from-public-network-access).+
+>[!Important]
+> Private endpoints are supported on the Basic, Standard, Premium, and Enterprise tiers. We recommend using private endpoints instead of VNets. Private endpoints are easy to set up or remove, are supported on all tiers, and can connect your cache to multiple VNets at once.
> >
azure-functions Create First Function Cli Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-cli-node.md
zone_pivot_groups: functions-nodejs-model
In this article, you use command-line tools to create a JavaScript function that responds to HTTP requests. After testing the code locally, you deploy it to the serverless environment of Azure Functions.
->[!NOTE]
->The v4 programming model for authoring Functions in Node.js is currently in Preview. Compared to the current v3 model, the v4 model is designed to have a more idiomatic and intuitive experience for JavaScript and TypeScript developers. To learn more, see the [Developer Reference Guide](functions-reference-node.md).
-Use the selector at the top to choose the programming model of your choice for completing this quickstart. Note that completion will incur a small cost of a few USD cents or less in your Azure account.
+Note that completion will incur a small cost of a few USD cents or less in your Azure account.
There is also a [Visual Studio Code-based version](create-first-function-vs-code-node.md) of this article.
Before you begin, you must have the following:
::: zone-end ::: zone pivot="nodejs-model-v4"
-+ The [Azure Functions Core Tools](./functions-run-local.md#v2) version v4.0.5085 or above
++ The [Azure Functions Core Tools](./functions-run-local.md#v2) version v4.0.5095 or above ::: zone-end + One of the following tools for creating Azure resources:
Verify your prerequisites, which depend on whether you are using Azure CLI or Az
::: zone-end ::: zone pivot="nodejs-model-v4"
-+ In a terminal or command window, run `func --version` to check that the Azure Functions Core Tools are version v4.0.4915 or above.
++ In a terminal or command window, run `func --version` to check that the Azure Functions Core Tools are version v4.0.5095 or above. ::: zone-end + Run `az --version` to check that the Azure CLI version is 2.4 or later.
Verify your prerequisites, which depend on whether you are using Azure CLI or Az
::: zone-end ::: zone pivot="nodejs-model-v4"
-+ In a terminal or command window, run `func --version` to check that the Azure Functions Core Tools are version v4.0.4915 or above.
++ In a terminal or command window, run `func --version` to check that the Azure Functions Core Tools are version v4.0.5095 or above. ::: zone-end + Run `(Get-Module -ListAvailable Az).Version` and verify version 5.0 or later.
azure-functions Create First Function Cli Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-cli-typescript.md
zone_pivot_groups: functions-nodejs-model
In this article, you use command-line tools to create a TypeScript function that responds to HTTP requests. After testing the code locally, you deploy it to the serverless environment of Azure Functions.
->[!NOTE]
->The v4 programming model for authoring Functions in Node.js is currently in Preview. Compared to the current v3 model, the v4 model is designed to have a more idiomatic and intuitive experience for JavaScript and TypeScript developers. To learn more, see the [Developer Reference Guide](functions-reference-node.md).
-Use the selector at the top to choose the programming model of your choice for completing this quickstart. Note that completion will incur a small cost of a few USD cents or less in your Azure account.
+Note that completion will incur a small cost of a few USD cents or less in your Azure account.
There's also a [Visual Studio Code-based version](create-first-function-vs-code-typescript.md) of this article.
Before you begin, you must have the following:
+ The [Azure Functions Core Tools](./functions-run-local.md#v2) version 4.x. ::: zone-end ::: zone pivot="nodejs-model-v4"
-+ The [Azure Functions Core Tools](./functions-run-local.md#v2) version v4.0.5085 or above
++ The [Azure Functions Core Tools](./functions-run-local.md#v2) version v4.0.5095 or above ::: zone-end + One of the following tools for creating Azure resources:
Verify your prerequisites, which depend on whether you're using Azure CLI or Azu
::: zone-end ::: zone pivot="nodejs-model-v4"
-+ In a terminal or command window, run `func --version` to check that the Azure Functions Core Tools are version v4.0.4915 or above.
++ In a terminal or command window, run `func --version` to check that the Azure Functions Core Tools are version v4.0.5095 or above. ::: zone-end + Run `az --version` to check that the Azure CLI version is 2.4 or later.
Verify your prerequisites, which depend on whether you're using Azure CLI or Azu
::: zone-end ::: zone pivot="nodejs-model-v4"
-+ In a terminal or command window, run `func --version` to check that the Azure Functions Core Tools are version v4.0.4915 or above.
++ In a terminal or command window, run `func --version` to check that the Azure Functions Core Tools are version v4.0.5095 or above. ::: zone-end + Run `(Get-Module -ListAvailable Az).Version` and verify version 5.0 or later.
azure-functions Create First Function Vs Code Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-node.md
zone_pivot_groups: functions-nodejs-model
Use Visual Studio Code to create a JavaScript function that responds to HTTP requests. Test the code locally, then deploy it to the serverless environment of Azure Functions.
->[!NOTE]
->The v4 programming model for authoring Functions in Node.js is currently in Preview. Compared to the current v3 model, the v4 model is designed to have a more idiomatic and intuitive experience for JavaScript and TypeScript developers. To learn more, see the [Developer Reference Guide](functions-reference-node.md).
-Use the selector at the top to choose the programming model of your choice for completing this quickstart. Note that completion will incur a small cost of a few USD cents or less in your Azure account.
+Note that completion will incur a small cost of a few USD cents or less in your Azure account.
There's also a [CLI-based version](create-first-function-cli-node.md) of this article.
azure-functions Create First Function Vs Code Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-typescript.md
zone_pivot_groups: functions-nodejs-model
In this article, you use Visual Studio Code to create a TypeScript function that responds to HTTP requests. After testing the code locally, you deploy it to the serverless environment of Azure Functions.
->[!NOTE]
->The v4 programming model for authoring Functions in Node.js is currently in Preview. Compared to the current v3 model, the v4 model is designed to have a more idiomatic and intuitive experience for JavaScript and TypeScript developers. To learn more, see the [Developer Reference Guide](functions-reference-node.md).
-Use the selector at the top to choose the programming model of your choice for completing this quickstart. Note that completion will incur a small cost of a few USD cents or less in your Azure account.
+Note that completion will incur a small cost of a few USD cents or less in your Azure account.
There's also a [CLI-based version](create-first-function-cli-typescript.md) of this article.
Before you get started, make sure you have the following requirements in place:
+ [Azure Functions Core Tools 4.x](functions-run-local.md#install-the-azure-functions-core-tools). ::: zone-end ::: zone pivot="nodejs-model-v4"
-+ [Azure Functions Core Tools v4.0.5085 or above](functions-run-local.md#install-the-azure-functions-core-tools).
++ [Azure Functions Core Tools v4.0.5095 or above](functions-run-local.md#install-the-azure-functions-core-tools). ::: zone-end ## <a name="create-an-azure-functions-project"></a>Create your local project
azure-functions Durable Functions Cloud Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-cloud-backup.md
*Fan-out/fan-in* refers to the pattern of executing multiple functions concurrently and then performing some aggregation on the results. This article explains a sample that uses [Durable Functions](durable-functions-overview.md) to implement a fan-in/fan-out scenario. The sample is a durable function that backs up all or some of an app's site content into Azure Storage.
-> [!NOTE]
-> The new programming model for authoring Functions in Node.js (V4) is currently in preview. Compared to the current model, the new experience is designed to be more idiomatic and intuitive for JavaScript and TypeScript developers. To learn more, see the Azure Functions Node.js [developer guide](../functions-reference-node.md?pivots=nodejs-model-v4).
->
-> In the following code snippets, JavaScript (PM4) denotes programming model V4, the new experience.
[!INCLUDE [durable-functions-prerequisites](../../../includes/durable-functions-prerequisites.md)]
azure-functions Durable Functions Error Handling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-error-handling.md
ms.devlang: csharp, javascript, powershell, python, java
Durable Function orchestrations are implemented in code and can use the programming language's built-in error-handling features. There really aren't any new concepts you need to learn to add error handling and compensation into your orchestrations. However, there are a few behaviors that you should be aware of.
-> [!NOTE]
-> The new programming model for authoring Functions in Node.js (V4) is currently in preview. Compared to the current model, the new experience is designed to be more idiomatic and intuitive for JavaScript and TypeScript developers. To learn more, see the Azure Functions Node.js [developer guide](../functions-reference-node.md?pivots=nodejs-model-v4).
->
-> In the following code snippets, JavaScript (PM4) denotes programming model V4, the new experience.
## Errors in activity functions
azure-functions Durable Functions Orchestrations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-orchestrations.md
When an orchestration function is given more work to do (for example, a response
The event-sourcing behavior of the Durable Task Framework is closely coupled with the orchestrator function code you write. Suppose you have an activity-chaining orchestrator function, like the following orchestrator function:
-> [!NOTE]
-> The new programming model for authoring Functions in Node.js (V4) is currently in preview. Compared to the current model, the new experience is designed to be more idiomatic and intuitive for JavaScript and TypeScript developers. To learn more, see the Azure Functions Node.js [developer guide](../functions-reference-node.md?pivots=nodejs-model-v4).
->
-> In the following code snippets, JavaScript (PM4) denotes programming model V4, the new experience.
# [C# (InProc)](#tab/csharp-inproc)
azure-functions Durable Functions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-overview.md
Durable Functions is designed to work with all Azure Functions programming langu
| Java | Functions 4.0+ | Java 8+ | 4.x bundles | > [!NOTE]
-> The new programming models for authoring Functions in Python (V2) and Node.js (V4) are currently in preview. Compared to the current models, the new experiences are designed to be more idiomatic and intuitive for Python and JavaScript/TypeScript developers. To learn more, see Azure Functions [Python developer guide](../functions-reference-python.md?pivots=python-mode-decorators) and [Node.js developer guide](../functions-reference-node.md?pivots=nodejs-model-v4).
+> The new programming models for authoring Functions in Python (V2) and Node.js (V4) are currently in preview. Compared to the current models, the new experiences are designed to be more flexible and intuitive for Python and JavaScript/TypeScript developers. Learn more about the differences between the models in the [Python developer guide](../functions-reference-python.md?pivots=python-mode-decorators) and [Node.js upgrade guide](../functions-node-upgrade-v4.md).
> > In the following code snippets, Python (PM2) denotes programming model V2, and JavaScript (PM4) denotes programming model V4, the new experiences.
azure-functions Durable Functions Phone Verification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-phone-verification.md
This sample demonstrates how to build a [Durable Functions](durable-functions-ov
This sample implements an SMS-based phone verification system. These types of flows are often used when verifying a customer's phone number or for multi-factor authentication (MFA). It's a powerful example because the entire implementation is done using a couple of small functions. No external data store, such as a database, is required.
-> [!NOTE]
-> The new programming model for authoring Functions in Node.js (V4) is currently in preview. Compared to the current model, the new experience is designed to be more idiomatic and intuitive for JavaScript and TypeScript developers. To learn more, see the Azure Functions Node.js [developer guide](../functions-reference-node.md?pivots=nodejs-model-v4).
->
-> In the following code snippets, JavaScript (PM4) denotes programming model V4, the new experience.
[!INCLUDE [durable-functions-prerequisites](../../../includes/durable-functions-prerequisites.md)]
azure-functions Durable Functions Sequence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-sequence.md
Function chaining refers to the pattern of executing a sequence of functions in
[!INCLUDE [durable-functions-prerequisites](../../../includes/durable-functions-prerequisites.md)]
-> [!NOTE]
-> The new programming model for authoring Functions in Node.js (V4) is currently in preview. Compared to the current model, the new experience is designed to be more idiomatic and intuitive for JavaScript and TypeScript developers. To learn more, see the Azure Functions Node.js [developer guide](../functions-reference-node.md?pivots=nodejs-model-v4).
->
-> In the following code snippets, JavaScript (PM4) denotes programming model V4, the new experience.
## The functions
azure-functions Durable Functions Sub Orchestrations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-sub-orchestrations.md
Sub-orchestrator functions behave just like activity functions from the caller's
> [!NOTE] > Sub-orchestrations are not yet supported in PowerShell.
-> [!NOTE]
-> The new programming model for authoring Functions in Node.js (V4) is currently in preview. Compared to the current model, the new experience is designed to be more idiomatic and intuitive for JavaScript and TypeScript developers. To learn more, see the Azure Functions Node.js [developer guide](../functions-reference-node.md?pivots=nodejs-model-v4).
->
-> In the following code snippets, JavaScript (PM4) denotes programming model V4, the new experience.
## Example
azure-functions Quickstart Js Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/quickstart-js-vscode.md
zone_pivot_groups: functions-nodejs-model
In this article, you learn how to use the Visual Studio Code Azure Functions extension to locally create and test a "hello world" durable function. This function will orchestrate and chain together calls to other functions. You then publish the function code to Azure. -
->[!NOTE]
->The v4 programming model for authoring Functions in Node.js is currently in preview. Compared to the current v3 model, the v4 model is designed to have a more idiomatic and intuitive experience for JavaScript and TypeScript developers. To learn more, see the [Developer Reference Guide](../functions-reference-node.md).
->
->Use the selector at the top to choose the programming model of your choice for completing this quickstart.
![Screenshot of an Edge window. The window shows the output of invoking a simple durable function in Azure.](./media/quickstart-js-vscode/functions-vs-code-complete.png)
To complete this tutorial:
* Make sure you have the latest version of the [Azure Functions Core Tools](../functions-run-local.md). ::: zone-end ::: zone pivot="nodejs-model-v4"
-* Make sure you have [Azure Functions Core Tools](../functions-run-local.md) version `v4.0.5085` or above.
+* Make sure you have [Azure Functions Core Tools](../functions-run-local.md) version `v4.0.5095` or above.
::: zone-end * Durable Functions require an Azure storage account. You need an Azure subscription.
azure-functions Quickstart Ts Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/quickstart-ts-vscode.md
zone_pivot_groups: functions-nodejs-model
In this article, you learn how to use the Visual Studio Code Azure Functions extension to locally create and test a "hello world" durable function. This function will orchestrate and chain together calls to other functions. You then publish the function code to Azure. -
->[!NOTE]
->The v4 programming model for authoring Functions in Node.js is currently in Preview. Compared to the current v3 model, the v4 model is designed to have a more idiomatic and intuitive experience for JavaScript and TypeScript developers. To learn more, see the [Developer Reference Guide](../functions-reference-node.md).
->
->Use the selector at the top to choose the programming model of your choice for completing this quickstart.
![Screenshot of an Edge window. The window shows the output of invoking a simple durable function in Azure.](./media/quickstart-js-vscode/functions-vs-code-complete.png)
To complete this tutorial:
* Make sure you have the latest version of the [Azure Functions Core Tools](../functions-run-local.md). ::: zone-end ::: zone pivot="nodejs-model-v4"
-* Make sure you have [Azure Functions Core Tools](../functions-run-local.md) version `v4.0.5085` or above.
+* Make sure you have [Azure Functions Core Tools](../functions-run-local.md) version `v4.0.5095` or above.
::: zone-end * Durable Functions require an Azure storage account. You need an Azure subscription.
azure-functions Functions Bindings Twilio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-twilio.md
You can add the extension to your project by explicitly installing the [NuGet pa
::: zone-end --- ## Example Unless otherwise noted, these examples are specific to version 2.x and later version of the Functions runtime.
public static async Task Run(string myQueueItem, IAsyncCollector<CreateMessageOp
- ::: zone-end ::: zone pivot="programming-language-javascript" The following example shows a Twilio output binding in a *function.json* file and a [JavaScript function](functions-reference-node.md) that uses the binding.
In version 2.x, you set the `to` value in your code.
> [Learn more about Azure functions triggers and bindings](functions-triggers-bindings.md) [extension bundle]: ./functions-bindings-register.md#extension-bundles
-[Update your extensions]: ./functions-bindings-register.md
+[Update your extensions]: ./functions-bindings-register.md
azure-functions Functions How To Use Azure Function App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-how-to-use-azure-function-app-settings.md
You can use either the Azure portal or Azure CLI commands to migrate a function
+ Migration isn't supported on Linux. + The source plan and the target plan must be in the same resource group and geographical region. For more information, see [Move an app to another App Service plan](../app-service/app-service-plan-manage.md#move-an-app-to-another-app-service-plan). + The specific CLI commands depend on the direction of the migration.++ Downtime in your function executions occurs as the function app is migrated between plans.++ State and other app-specific content is maintained, since the same Azure Files share is used by the app both before and after migration. ### Migration in the portal
azure-functions Functions Node Upgrade V4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-node-upgrade-v4.md
Version 4 was designed with the following goals in mind:
Version 4 of the Node.js programming model requires the following minimum versions: -- [`@azure/functions`](https://www.npmjs.com/package/@azure/functions) npm package v4.0.0-alpha.8+
+- [`@azure/functions`](https://www.npmjs.com/package/@azure/functions) npm package v4.0.0-alpha.9+
- [Node.js](https://nodejs.org/en/download/releases/) v18+ - [TypeScript](https://www.typescriptlang.org/) v4+ - [Azure Functions Runtime](./functions-versions.md) v4.16+-- [Azure Functions Core Tools](./functions-run-local.md) v4.0.4915+ (if running locally)
+- [Azure Functions Core Tools](./functions-run-local.md) v4.0.5095+ (if running locally)
+
+## Enable v4 programming model
+
+The following application setting is required to run the v4 programming model while it is in preview:
+- Name: `AzureWebJobsFeatureFlags`
+- Value: `EnableWorkerIndexing`
+
+If you're running locally using [Azure Functions Core Tools](functions-run-local.md), you should add this setting to your `local.settings.json` file. If you're running in Azure, follow these steps with the tool of your choice:
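For the local case, a minimal `local.settings.json` carrying the flag might look like the following sketch (the `node` worker runtime value is an assumption based on the app's language):

```json
{
  "IsEncrypted": false,
  "Values": {
    "FUNCTIONS_WORKER_RUNTIME": "node",
    "AzureWebJobsFeatureFlags": "EnableWorkerIndexing"
  }
}
```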
+
+# [Azure CLI](#tab/azure-cli-set-indexing-flag)
+
+Replace `<FUNCTION_APP_NAME>` and `<RESOURCE_GROUP_NAME>` with the name of your function app and resource group, respectively.
+
+```azurecli
+az functionapp config appsettings set --name <FUNCTION_APP_NAME> --resource-group <RESOURCE_GROUP_NAME> --settings AzureWebJobsFeatureFlags=EnableWorkerIndexing
+```
+
+# [Azure PowerShell](#tab/azure-powershell-set-indexing-flag)
+
+Replace `<FUNCTION_APP_NAME>` and `<RESOURCE_GROUP_NAME>` with the name of your function app and resource group, respectively.
+
+```azurepowershell
+Update-AzFunctionAppSetting -Name <FUNCTION_APP_NAME> -ResourceGroupName <RESOURCE_GROUP_NAME> -AppSetting @{"AzureWebJobsFeatureFlags" = "EnableWorkerIndexing"}
+```
+
+# [VS Code](#tab/vs-code-set-indexing-flag)
+
+1. Make sure you have the [Azure Functions extension for VS Code](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) installed.
+1. Press <kbd>F1</kbd> to open the command palette. In the command palette, search for and select `Azure Functions: Add New Setting...`.
+1. Choose your subscription and function app when prompted.
+1. For the name, type `AzureWebJobsFeatureFlags` and press <kbd>Enter</kbd>.
+1. For the value, type `EnableWorkerIndexing` and press <kbd>Enter</kbd>.
++ ## Include the npm package
The http request and response types are now a subset of the [fetch standard](htt
## Troubleshooting
-If you see the following error, make sure you [set the `EnableWorkerIndexing` flag](./functions-reference-node.md#enable-v4-programming-model) and you're using the minimum version of all [requirements](#requirements):
+If you see the following error, make sure you [set the `EnableWorkerIndexing` flag](#enable-v4-programming-model) and you're using the minimum version of all [requirements](#requirements):
> No job functions found. Try making your job classes and methods public. If you're using binding extensions (e.g. Azure Storage, ServiceBus, Timers, etc.) make sure you've called the registration method for the extension(s) in your startup code (e.g. builder.AddAzureStorage(), builder.AddServiceBus(), builder.AddTimers(), etc.).
azure-functions Functions Reference Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-node.md
The following table shows each version of the Node.js programming model along wi
| [Programming Model Version](https://www.npmjs.com/package/@azure/functions?activeTab=versions) | Support Level | [Functions Runtime Version](./functions-versions.md) | [Node.js Version](https://github.com/nodejs/release#release-schedule) | Description | | - | - | | | |
-| 4.x | Preview | 4.x | 18.x | Supports a flexible file structure and code-centric approach to triggers and bindings. |
+| 4.x | Preview | 4.16+ | 18.x | Supports a flexible file structure and code-centric approach to triggers and bindings. |
| 3.x | GA | 4.x | 18.x, 16.x, 14.x | Requires a specific file structure with your triggers and bindings declared in a "function.json" file | | 2.x | GA (EOL) | 3.x | 14.x, 12.x, 10.x | Reached end of life (EOL) on December 13, 2022. See [Functions Versions](./functions-versions.md) for more info. | | 1.x | GA (EOL) | 2.x | 10.x, 8.x | Reached end of life (EOL) on December 13, 2022. See [Functions Versions](./functions-versions.md) for more info. |
At the root of the project, there's a shared [host.json](functions-host-json.md)
::: zone pivot="nodejs-model-v4"
-## Enable v4 programming model
-
-The following application setting is required to run the v4 programming model while it is in preview:
-- Name: `AzureWebJobsFeatureFlags`-- Value: `EnableWorkerIndexing`-
-If you're running locally using [Azure Functions Core Tools](functions-run-local.md), you should add this setting to your `local.settings.json` file. If you're running in Azure, follow these steps with the tool of your choice:
-
-# [Azure CLI](#tab/azure-cli-set-indexing-flag)
-
-Replace `<FUNCTION_APP_NAME>` and `<RESOURCE_GROUP_NAME>` with the name of your function app and resource group, respectively.
-
-```azurecli
-az functionapp config appsettings set --name <FUNCTION_APP_NAME> --resource-group <RESOURCE_GROUP_NAME> --settings AzureWebJobsFeatureFlags=EnableWorkerIndexing
-```
-
-# [Azure PowerShell](#tab/azure-powershell-set-indexing-flag)
-
-Replace `<FUNCTION_APP_NAME>` and `<RESOURCE_GROUP_NAME>` with the name of your function app and resource group, respectively.
-
-```azurepowershell
-Update-AzFunctionAppSetting -Name <FUNCTION_APP_NAME> -ResourceGroupName <RESOURCE_GROUP_NAME> -AppSetting @{"AzureWebJobsFeatureFlags" = "EnableWorkerIndexing"}
-```
-
-# [VS Code](#tab/vs-code-set-indexing-flag)
-
-1. Make sure you have the [Azure Functions extension for VS Code](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) installed
-1. Press <kbd>F1</kbd> to open the command palette. In the command palette, search for and select `Azure Functions: Add New Setting...`.
-1. Choose your subscription and function app when prompted
-1. For the name, type `AzureWebJobsFeatureFlags` and press <kbd>Enter</kbd>.
-1. For the value, type `EnableWorkerIndexing` and press <kbd>Enter</kbd>.
--- ## Folder structure The recommended folder structure for a JavaScript project looks like the following example:
azure-maps Geocoding Coverage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/geocoding-coverage.md
The [Search service] supports geocoding, which means that your API request can have search terms, like an address or the name of a place, and returns the result as latitude and longitude coordinates. For example, [Get Search Address] receives queries that contain location information, and returns results as latitude and longitude coordinates.
-However, the [Search service] doesn't have the same level of information and accuracy for all regions and countries. Use this article to determine what kind of locations you can reliably search for in each region.
+However, the [Search service] doesn't have the same level of information and accuracy for all countries/regions. Use this article to determine what kind of locations you can reliably search for in each country/region.
The ability to geocode in a country/region is dependent upon the road data coverage and geocoding precision of the geocoding service. The following categorizations are used to specify the level of geocoding support in each country/region.
The ability to geocode in a country/region is dependent upon the road data cover
| Sweden | | ✓ | ✓ | ✓ | ✓ | | Switzerland | ✓ | ✓ | ✓ | ✓ | ✓ | | Tajikistan | | | ✓ | ✓ | ✓ |
-| Turkey | ✓ | ✓ | ✓ | ✓ | ✓ |
+| Türkiye | ✓ | ✓ | ✓ | ✓ | ✓ |
| Turkmenistan | | | | ✓ | ✓ | | Ukraine | ✓ | ✓ | ✓ | ✓ | ✓ | | United Kingdom | ✓ | ✓ | ✓ | ✓ | ✓ |
azure-maps Render Coverage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/render-coverage.md
The render coverage tables below list the countries that support Azure Maps road
| Spain | ✓ | | Sweden | ✓ | | Switzerland | ✓ |
-| Turkey | ✓ |
+| Türkiye | ✓ |
| Ukraine | ✓ | | United Kingdom | ✓ | | Vatican City | ✓ |
azure-maps Routing Coverage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/routing-coverage.md
The following tables provide coverage information for Azure Maps routing.
| Sweden | ✓ | ✓ | ✓ | | Switzerland | ✓ | ✓ | ✓ | | Tajikistan | ✓ | | |
-| Turkey | ✓ | ✓ | ✓ |
+| Türkiye | ✓ | ✓ | ✓ |
| Turkmenistan | ✓ | | | | Ukraine | ✓ | ✓ | | | United Kingdom | ✓ | ✓ | ✓ |
azure-maps Traffic Coverage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/traffic-coverage.md
The following tables provide information about what kind of traffic information
| Spain | ✓ | ✓ | | Sweden | ✓ | ✓ | | Switzerland | ✓ | ✓ |
-| Turkey | ✓ | ✓ |
+| Türkiye | ✓ | ✓ |
| Ukraine | ✓ | ✓ | | United Kingdom | ✓ | ✓ |
azure-maps Weather Coverage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/weather-coverage.md
Radar tiles, showing areas of rain, snow, ice and mixed conditions, are returned
| Svalbard | ✓ | | | ✓ | | Sweden | ✓ | ✓ | ✓ | ✓ | | Switzerland | ✓ | ✓ | ✓ | ✓ |
-| Turkey | ✓ | ✓ | | ✓ |
+| Türkiye | ✓ | ✓ | | ✓ |
| Ukraine | ✓ | ✓ | | ✓ | | United Kingdom | ✓ | ✓ | ✓ | ✓ | | Vatican City | ✓ | | ✓ | ✓ |
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md
description: Overview of the Azure Monitor Agent, which collects monitoring data
Previously updated : 2/21/2023 Last updated : 3/24/2023
In addition to the generally available data collection listed above, Azure Monit
| Azure Monitor feature | Current support | Other extensions installed | More information | | : | : | : | : |
-| [VM insights](../vm/vminsights-overview.md) | Public preview | Dependency Agent extension, if you're using the Map Services feature | [Enable VM Insights overview](../vm/vminsights-enable-overview.md) |
+| [VM insights](../vm/vminsights-overview.md) | Public preview | Dependency Agent extension, if you're using the Map Services feature | [Enable VM Insights](../vm/vminsights-enable-overview.md) |
+| [Container insights](../containers/container-insights-overview.md) | Public preview | Containerized Azure Monitor agent | [Enable Container Insights](../containers/container-insights-onboard.md) |
+| [Container insights](../containers/container-insights-overview.md) | Public preview | Containerized Azure Monitor agent | [Enable Container Insights](../containers/container-insights-onboard.md) |
In addition to the generally available data collection listed above, Azure Monitor Agent also supports these Azure services in preview:
In addition to the generally available data collection listed above, Azure Monit
| [Change Tracking](../../automation/change-tracking/overview.md) | Public preview | Change Tracking extension | [Change Tracking and Inventory using Azure Monitor Agent](../../automation/change-tracking/overview-monitoring-agent.md) | | [Update Management](../../automation/update-management/overview.md) (available without Azure Monitor Agent) | Use Update Management v2 - Public preview | None | [Update management center (Public preview) documentation](../../update-center/index.yml) | | [Network Watcher](../../network-watcher/network-watcher-monitoring-overview.md) | Connection Monitor: Public preview | Azure NetworkWatcher extension | [Monitor network connectivity by using Azure Monitor Agent](../../network-watcher/azure-monitor-agent-with-connection-monitor.md) |
+| Azure Stack HCI Insights | Private preview | No additional extension installed | [Sign up here](https://aka.ms/amadcr-privatepreviews) |
+| Azure Virtual Desktop (AVD) Insights | Private preview | No additional extension installed | [Sign up here](https://aka.ms/amadcr-privatepreviews) |
> [!NOTE] > Features and services listed above in preview **may not be available in Azure Government and China clouds**. They typically become available within a month *after* the features/services become generally available.
In addition to the generally available data collection listed above, Azure Monit
## Supported regions
-Azure Monitor Agent is available in all public regions and Azure Government clouds, for generally available features. It's not yet supported in air-gapped clouds. For more information, see [Product availability by region](https://azure.microsoft.com/global-infrastructure/services/?products=monitor&rar=true&regions=all).
+Azure Monitor Agent is available in all public regions and in the Azure Government and China clouds, for generally available features. It's not yet supported in air-gapped clouds. For more information, see [Product availability by region](https://azure.microsoft.com/global-infrastructure/services/?products=monitor&rar=true&regions=all).
## Costs
-There's no cost for the Azure Monitor Agent, but you might incur charges for the data ingested. For information on Log Analytics data collection and retention and for customer metrics, see [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/).
+There's no cost for the Azure Monitor Agent, but you might incur charges for the data ingested and stored. For information on Log Analytics data collection and retention and for customer metrics, see [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/).
## Compare to legacy agents
azure-monitor Alerts Manage Alert Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-manage-alert-rules.md
Manage your alert rules in the Azure portal, or using the CLI or PowerShell.
## Manage alert rules in the Azure portal 1. In the [portal](https://portal.azure.com/), select **Monitor**, then **Alerts**.
-1. From the top command bar, select **Alert rules**. The page shows all your alert rules across on all subscriptions.
+1. From the top command bar, select **Alert rules**. The page shows all your alert rules on all subscriptions.
:::image type="content" source="media/alerts-managing-alert-instances/alerts-rules-page.png" alt-text="Screenshot of alerts rules page.":::
Manage your alert rules in the Azure portal, or using the CLI or PowerShell.
> [!NOTE] > If you filter on a `target resource type` scope, the alerts rules list doesn't include resource health alert rules. To see the resource health alert rules, remove the `Target resource type` filter, or filter the rules based on the `Resource group` or `Subscription`.
-1. Select the alert rule that you want to edit. You can select multiple alert rules and enable or disable them. Multi-selecting rules can be useful when you want to perform maintenance on specific resources.
-1. Edit any of the fields in the following sections. You can't edit the **Alert Rule Name**, or the **Signal type** of an existing alert rule.
+1. Select an alert rule or use the checkboxes on the left to select multiple alert rules.
+1. If you select multiple alert rules, you can enable or disable the selected rules. Selecting multiple rules can be useful when you want to perform maintenance on specific resources.
+1. If you select a single alert rule, you can edit, disable, duplicate, or delete the rule in the alert rule pane.
+
+ :::image type="content" source="media/alerts-managing-alert-instances/alerts-rules-pane.png" alt-text="Screenshot of alerts rules pane.":::
+
+1. To edit an alert rule, select **Edit**, and then edit any of the fields in the following sections. You can't edit the **Alert Rule Name**, or the **Signal type** of an existing alert rule.
- **Scope**. You can edit the scope for all alert rules **other than**: - Log alert rules - Metric alert rules that monitor a custom metric
azure-monitor Itsmc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-overview.md
Depending on your integration, start connecting to your ITSM tool with these ste
- For ServiceNow ITSM, use the ITSM action: 1. Connect to your ITSM. For more information, see the [ServiceNow connection instructions](./itsmc-connections-servicenow.md).
- 1. (Optional) Set up the IP ranges. To list the ITSM IP addresses to allow ITSM connections from partner ITSM tools, list the whole public IP range of an Azure region where the Log Analytics workspace belongs. For more information, see the [Microsoft Download Center](https://www.microsoft.com/en-us/download/details.aspx?id=56519). For regions EUS/WEU/EUS2/WUS2/US South Central, the customer can list the ActionGroup network tag only.
+ 1. (Optional) Set up the IP ranges. To list the ITSM IP addresses to allow ITSM connections from partner ITSM tools, list the whole public IP range of an Azure region where the Log Analytics workspace belongs. For more information, see the [Microsoft Download Center](https://www.microsoft.com/en-us/download/details.aspx?id=56519). For regions EUS/WEU/WUS2/US South Central, the customer can list the ActionGroup network tag only.
1. [Configure your Azure ITSM solution and create the ITSM connection](./itsmc-definition.md#install-it-service-management-connector). 1. [Configure an action group to use the ITSM connector](./itsmc-definition.md#define-a-template).
azure-monitor App Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-insights-overview.md
Application Insights is an extension of [Azure Monitor](../overview.md) and prov
1. *Proactively* understand how an application is performing. 1. *Reactively* review application execution data to determine the cause of an incident. + In addition to collecting [Metrics](standard-metrics.md) and application [Telemetry](data-model-complete.md) data, which describe application activities and health, Application Insights can also be used to collect and store application [trace logging data](asp-net-trace-logs.md). The [log trace](asp-net-trace-logs.md) is associated with other telemetry to give a detailed view of the activity. Adding trace logging to existing apps only requires providing a destination for the logs; the logging framework rarely needs to be changed. - Application Insights provides other features including, but not limited to: - [Live Metrics](live-stream.md) – observe activity from your deployed application in real time with no effect on the host environment
azure-monitor Ilogger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/ilogger.md
In this article, you'll learn how to capture logs with Application Insights in .
## ASP.NET Core applications
-To add Application Insights logging to ASP.NET Core applications, use the `Microsoft.Extensions.Logging.ApplicationInsights` NuGet provider package.
+To add Application Insights logging to ASP.NET Core applications:
-1. Install the [`Microsoft.Extensions.Logging.ApplicationInsights`][nuget-ai] NuGet package.
+1. Install the [`Microsoft.Extensions.Logging.ApplicationInsights`][nuget-ai].
1. Add `ApplicationInsightsLoggerProvider`:
- ```csharp
- using Microsoft.AspNetCore.Hosting;
- using Microsoft.Extensions.DependencyInjection;
- using Microsoft.Extensions.Hosting;
- using Microsoft.Extensions.Logging;
- using Microsoft.Extensions.Logging.ApplicationInsights;
-
- namespace WebApplication
+# [.NET 6.0+](#tab/dotnet6)
+
+```csharp
+using Microsoft.Extensions.Logging.ApplicationInsights;
+
+var builder = WebApplication.CreateBuilder(args);
+
+// Add services to the container.
+
+builder.Services.AddControllers();
+// Learn more about configuring Swagger/OpenAPI at https://aka.ms/aspnetcore/swashbuckle
+builder.Services.AddEndpointsApiExplorer();
+builder.Services.AddSwaggerGen();
+
+builder.Logging.AddApplicationInsights(
+ configureTelemetryConfiguration: (config) =>
+ config.ConnectionString = builder.Configuration.GetConnectionString("APPLICATIONINSIGHTS_CONNECTION_STRING"),
+ configureApplicationInsightsLoggerOptions: (options) => { }
+ );
+
+builder.Logging.AddFilter<ApplicationInsightsLoggerProvider>("your-category", LogLevel.Trace);
+
+var app = builder.Build();
+
+// Configure the HTTP request pipeline.
+if (app.Environment.IsDevelopment())
+{
+ app.UseSwagger();
+ app.UseSwaggerUI();
+}
+
+app.UseHttpsRedirection();
+
+app.UseAuthorization();
+
+app.MapControllers();
+
+app.Run();
+```
+
+# [.NET 5.0](#tab/dotnet5)
+
+```csharp
+using Microsoft.AspNetCore.Hosting;
+using Microsoft.Extensions.DependencyInjection;
+using Microsoft.Extensions.Hosting;
+using Microsoft.Extensions.Logging;
+using Microsoft.Extensions.Logging.ApplicationInsights;
+
+namespace WebApplication
+{
+ public class Program
{
- public class Program
+ public static void Main(string[] args)
{
- public static void Main(string[] args)
- {
- var host = CreateHostBuilder(args).Build();
-
- var logger = host.Services.GetRequiredService<ILogger<Program>>();
- logger.LogInformation("From Program, running the host now.");
-
- host.Run();
- }
-
- public static IHostBuilder CreateHostBuilder(string[] args) =>
- Host.CreateDefaultBuilder(args)
- .ConfigureWebHostDefaults(webBuilder =>
- {
- webBuilder.UseStartup<Startup>();
- })
- .ConfigureLogging((context, builder) =>
- {
- builder.AddApplicationInsights(
- configureTelemetryConfiguration: (config) => config.ConnectionString = context.Configuration["APPLICATIONINSIGHTS_CONNECTION_STRING"],
- configureApplicationInsightsLoggerOptions: (options) => { }
- );
-
- // Capture all log-level entries from Startup
- builder.AddFilter<ApplicationInsightsLoggerProvider>(
- typeof(Startup).FullName, LogLevel.Trace);
- });
+ var host = CreateHostBuilder(args).Build();
+
+ var logger = host.Services.GetRequiredService<ILogger<Program>>();
+ logger.LogInformation("From Program, running the host now.");
+
+ host.Run();
        }
+
+ public static IHostBuilder CreateHostBuilder(string[] args) =>
+ Host.CreateDefaultBuilder(args)
+ .ConfigureWebHostDefaults(webBuilder =>
+ {
+ webBuilder.UseStartup<Startup>();
+ })
+ .ConfigureLogging((context, builder) =>
+ {
+ builder.AddApplicationInsights(
+ configureTelemetryConfiguration: (config) => config.ConnectionString = context.Configuration["APPLICATIONINSIGHTS_CONNECTION_STRING"],
+ configureApplicationInsightsLoggerOptions: (options) => { }
+ );
+
+ // Capture all log-level entries from Startup
+ builder.AddFilter<ApplicationInsightsLoggerProvider>(
+ typeof(Startup).FullName, LogLevel.Trace);
+ });
}
- ```
+}
+```
++ With the NuGet package installed, and the provider being registered with dependency injection, the app is ready to log. With constructor injection, either <xref:Microsoft.Extensions.Logging.ILogger> or the generic-type alternative <xref:Microsoft.Extensions.Logging.ILogger%601> is required. When these implementations are resolved, `ApplicationInsightsLoggerProvider` will provide them. Logged messages or exceptions will be sent to Application Insights.
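As a brief illustration of that injection pattern, here's a minimal sketch; the controller name, route, and log message are hypothetical and not part of the original sample:

```csharp
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;

// Hypothetical controller illustrating constructor injection of ILogger<T>.
// Messages logged here flow to Application Insights through the registered
// ApplicationInsightsLoggerProvider.
[ApiController]
[Route("[controller]")]
public class ValuesController : ControllerBase
{
    private readonly ILogger<ValuesController> _logger;

    public ValuesController(ILogger<ValuesController> logger)
    {
        _logger = logger;
    }

    [HttpGet]
    public IActionResult Get()
    {
        _logger.LogInformation("Handling GET at {Time}", DateTimeOffset.UtcNow);
        return Ok();
    }
}
```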
For more information, see [Logging in ASP.NET Core](/aspnet/core/fundamentals/lo
## Console application
-To add Application Insights logging to console applications, first install the [`Microsoft.Extensions.Logging.ApplicationInsights`][nuget-ai] NuGet provider package.
+To add Application Insights logging to console applications, first install the following NuGet packages:
+
+* [`Microsoft.Extensions.Logging.ApplicationInsights`][nuget-ai]
+* [`Microsoft.Extensions.DependencyInjection`](https://www.nuget.org/packages/Microsoft.Extensions.DependencyInjection)
The following example uses the Microsoft.Extensions.Logging.ApplicationInsights package and demonstrates the default behavior for a console application. The Microsoft.Extensions.Logging.ApplicationInsights package should be used in a console application or whenever you want a bare minimum implementation of Application Insights without the full feature set such as metrics, distributed tracing, sampling, and telemetry initializers.
-Here are the installed packages:
+# [.NET 6.0+](#tab/dotnet6)
+
+```csharp
+using Microsoft.ApplicationInsights.Channel;
+using Microsoft.ApplicationInsights.Extensibility;
+using Microsoft.Extensions.DependencyInjection;
+using Microsoft.Extensions.Logging;
+
+using var channel = new InMemoryChannel();
+
+try
+{
+ IServiceCollection services = new ServiceCollection();
+ services.Configure<TelemetryConfiguration>(config => config.TelemetryChannel = channel);
+ services.AddLogging(builder =>
+ {
+ // Only Application Insights is registered as a logger provider
+ builder.AddApplicationInsights(
+ configureTelemetryConfiguration: (config) => config.ConnectionString = "<YourConnectionString>",
+ configureApplicationInsightsLoggerOptions: (options) => { }
+ );
+ });
+
+ IServiceProvider serviceProvider = services.BuildServiceProvider();
+ ILogger<Program> logger = serviceProvider.GetRequiredService<ILogger<Program>>();
-```xml
-<ItemGroup>
- <PackageReference Include="Microsoft.Extensions.DependencyInjection" Version="5.0.0" />
- <PackageReference Include="Microsoft.Extensions.Logging.ApplicationInsights" Version="2.17.0"/>
-</ItemGroup>
+ logger.LogInformation("Logger is working...");
+}
+finally
+{
+ // Explicitly call Flush() followed by Delay, as required in console apps.
+ // This ensures that even if the application terminates, telemetry is sent to the back end.
+ channel.Flush();
+
+ await Task.Delay(TimeSpan.FromMilliseconds(1000));
+}
```
+# [.NET 5.0](#tab/dotnet5)
+ ```csharp using Microsoft.ApplicationInsights.Channel; using Microsoft.ApplicationInsights.Extensibility;
namespace ConsoleApp
``` ++ ## Frequently asked questions ### Why do some ILogger logs not have the same properties as others?
azure-monitor Ip Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/ip-collection.md
Content-Length: 54
## Telemetry initializer
-If you need a more flexible alternative than `DisableIpMasking`, you can use a [telemetry initializer](./api-filtering-sampling.md#addmodify-properties-itelemetryinitializer) to copy all or part of the IP address to a custom field.
-
-# [.NET](#tab/net)
-
-### ASP.NET or ASP.NET Core
+If you need a more flexible alternative than `DisableIpMasking`, you can use a [telemetry initializer](./api-filtering-sampling.md#addmodify-properties-itelemetryinitializer) to copy all or part of the IP address to a custom field. The code for this class is the same across .NET versions.
```csharp using Microsoft.ApplicationInsights.Channel;
namespace MyWebApp
> [!NOTE] > If you can't access `ISupportProperties`, make sure you're running the latest stable release of the Application Insights SDK. `ISupportProperties` is intended for high cardinality values. `GlobalProperties` is more appropriate for low cardinality values like region name and environment name.
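The body of the initializer class is elided in this digest. As a non-authoritative sketch, an initializer along these lines (the namespace matches the `CustomInitializer.Telemetry` import used in the registration snippets below) copies the client IP into a custom property:

```csharp
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;

namespace CustomInitializer.Telemetry
{
    // Copies the client IP address into a custom property before the
    // telemetry item is sent, so it survives the default IP masking.
    public class CloneIPAddress : ITelemetryInitializer
    {
        public void Initialize(ITelemetry telemetry)
        {
            if (telemetry is ISupportProperties propTelemetry &&
                !propTelemetry.Properties.ContainsKey("client-ip"))
            {
                propTelemetry.Properties.Add("client-ip", telemetry.Context.Location.Ip);
            }
        }
    }
}
```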
-### Enable the telemetry initializer for ASP.NET
+
+# [.NET 6.0+](#tab/dotnet6)
```csharp
-using Microsoft.ApplicationInsights.Extensibility;
+ using Microsoft.ApplicationInsights.Extensibility;
+ using CustomInitializer.Telemetry;
+
+builder.Services.AddSingleton<ITelemetryInitializer, CloneIPAddress>();
+```
+# [.NET 5.0](#tab/dotnet5)
+
+```csharp
+ using Microsoft.ApplicationInsights.Extensibility;
+ using CustomInitializer.Telemetry;
+
+ public void ConfigureServices(IServiceCollection services)
+{
+ services.AddSingleton<ITelemetryInitializer, CloneIPAddress>();
+}
+```
+
+# [ASP.NET Framework](#tab/framework)
+
+```csharp
+using Microsoft.ApplicationInsights.Extensibility;
namespace MyWebApp {
namespace MyWebApp
```
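The ASP.NET Framework registration body is elided above. A minimal sketch, assuming the initializer is registered at application startup in a hypothetical `Global.asax` code-behind:

```csharp
using Microsoft.ApplicationInsights.Extensibility;
using CustomInitializer.Telemetry;

namespace MyWebApp
{
    public class MvcApplication : System.Web.HttpApplication
    {
        protected void Application_Start()
        {
            // Register the initializer with the active telemetry configuration.
            TelemetryConfiguration.Active.TelemetryInitializers.Add(new CloneIPAddress());
        }
    }
}
```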
-### Enable the telemetry initializer for ASP.NET Core
-
-You can create your telemetry initializer the same way for ASP.NET Core as for ASP.NET. To enable the initializer, use the following example for reference:
-
-```csharp
- using Microsoft.ApplicationInsights.Extensibility;
- using CustomInitializer.Telemetry;
- public void ConfigureServices(IServiceCollection services)
-{
- services.AddSingleton<ITelemetryInitializer, CloneIPAddress>();
-}
-```
+ # [Node.js](#tab/nodejs)
azure-monitor Activity Log Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/activity-log-schema.md
This category contains the record of all create, update, delete, and action oper
| eventName | Friendly name of the Administrative event. | | category | Always "Administrative" | | httpRequest |Blob describing the Http Request. Usually includes the "clientRequestId", "clientIpAddress" and "method" (HTTP method. For example, PUT). |
-| level |Level of the event. One of the following values: "Critical", "Error", "Warning", and "Informational" |
+| level |[Severity level](#severity-level) of the event. |
| resourceGroupName |Name of the resource group for the impacted resource. | | resourceProviderName |Name of the resource provider for the impacted resource | | resourceType | The type of resource that was affected by an Administrative event. |
This category contains the record of any resource health events that have occurr
| eventDataId |Unique identifier of the alert event. | | category | Always "ResourceHealth" | | eventTimestamp |Timestamp when the event was generated by the Azure service processing the request corresponding the event. |
-| level |Level of the event. One of the following values: "Critical", or "Informational" (other levels are not supported) |
+| level |[Severity level](#severity-level) of the event. |
| operationId |A GUID shared among the events that correspond to a single operation. | | operationName |Name of the operation. | | resourceGroupName |Name of the resource group that contains the resource. |
This category contains the record of all activations of classic Azure alerts. An
| description |Static text description of the alert event. | | eventDataId |Unique identifier of the alert event. | | category | Always "Alert" |
-| level |Level of the event. One of the following values: "Critical", "Error", "Warning", and "Informational" |
+| level |[Severity level](#severity-level) of the event. |
| resourceGroupName |Name of the resource group for the impacted resource if it is a metric alert. For other alert types, it is the name of the resource group that contains the alert itself. | | resourceProviderName |Name of the resource provider for the impacted resource if it is a metric alert. For other alert types, it is the name of the resource provider for the alert itself. | | resourceId | Name of the resource ID for the impacted resource if it is a metric alert. For other alert types, it is the resource ID of the alert resource itself. |
This category contains the record of any events related to the operation of the
| correlationId | A GUID in the string format. | | description |Static text description of the autoscale event. | | eventDataId |Unique identifier of the autoscale event. |
-| level |Level of the event. One of the following values: "Critical", "Error", "Warning", and "Informational" |
+| level |[Severity level](#severity-level) of the event. |
| resourceGroupName |Name of the resource group for the autoscale setting. | | resourceProviderName |Name of the resource provider for the autoscale setting. | | resourceId |Resource ID of the autoscale setting. |
This category contains the record any alerts generated by Microsoft Defender for
| eventName |Friendly name of the security event. | | category | Always "Security" | | ID |Unique resource identifier of the security event. |
-| level |Level of the event. One of the following values: "Critical", "Error", "Warning", or "Informational" |
+| level |[Severity level](#severity-level) of the event.|
| resourceGroupName |Name of the resource group for the resource. | | resourceProviderName |Name of the resource provider for Microsoft Defender for Cloud. Always "Microsoft.Security". | | resourceType |The type of resource that generated the security event, such as "Microsoft.Security/locations/alerts" |
This category contains the record of any new recommendations that are generated
| eventDataId | Unique identifier of the recommendation event. | | category | Always "Recommendation" | | ID |Unique resource identifier of the recommendation event. |
-| level |Level of the event. One of the following values: "Critical", "Error", "Warning", or "Informational" |
+| level |[Severity level](#severity-level) of the event.|
| operationName |Name of the operation. Always "Microsoft.Advisor/generateRecommendations/action"| | resourceGroupName |Name of the resource group for the resource. | | resourceProviderName |Name of the resource provider for the resource that this recommendation applies to, such as "MICROSOFT.COMPUTE" |
This category contains records of all effect action operations performed by [Azu
| category | Declares the activity log event as belonging to "Policy". | | eventTimestamp | Timestamp when the event was generated by the Azure service processing the request corresponding the event. | | ID | Unique identifier of the event on the specific resource. |
-| level | Level of the event. Audit uses "Warning" and Deny uses "Error". An auditIfNotExists or deployIfNotExists error can generate "Warning" or "Error" depending on severity. All other Policy events use "Informational". |
+| level | [Severity level](#severity-level) of the event. Audit uses "Warning" and Deny uses "Error". An auditIfNotExists or deployIfNotExists error can generate "Warning" or "Error" depending on severity. All other Policy events use "Informational". |
| operationId | A GUID shared among the events that correspond to a single operation. | | operationName | Name of the operation and directly correlates to the Policy effect. | | resourceGroupName | Name of the resource group for the evaluated resource. |
Following is an example of an event using this schema:
"records": [ { "time": "2019-01-21T22:14:26.9792776Z",
- "resourceId": "/subscriptions/s1/resourceGroups/MSSupportGroup/providers/microsoft.support/supporttickets/115012112305841",
+ "resourceId": "/subscriptions/s1/resourceGroups/MSSupportGroup/providers/microsoft.support/supporttickets/123456112305841",
"operationName": "microsoft.support/supporttickets/write", "category": "Write", "resultType": "Success",
Following is an example of an event using this schema:
"callerIpAddress": "111.111.111.11", "correlationId": "c776f9f4-36e5-4e0e-809b-c9b3c3fb62a8", "identity": {
- "authorization": {
- "scope": "/subscriptions/s1/resourceGroups/MSSupportGroup/providers/microsoft.support/supporttickets/115012112305841",
- "action": "microsoft.support/supporttickets/write",
- "evidence": {
- "role": "Subscription Admin"
- }
- },
+ "authorization": {
+ "scope": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-001/providers/Microsoft.Storage/storageAccounts/ msftstorageaccount",
+ "action": "Microsoft.Storage/storageAccounts/listAccountSas/action",
+ "evidence": {
+ "role": "Azure Eventhubs Service Role",
+ "roleAssignmentScope": "/subscriptions/00000000-0000-0000-0000-000000000000",
+ "roleAssignmentId": "123abc2a6c314b0ab03a891259123abc",
+ "roleDefinitionId": "123456789de042a6a64b29b123456789",
+ "principalId": "abcdef038c6444c18f1c31311fabcdef",
+ "principalType": "ServicePrincipal"
+ }
+ },
"claims": { "aud": "https://management.core.windows.net/",
- "iss": "https://sts.windows.net/72f988bf-86f1-41af-91ab-2d7cd011db47/",
+ "iss": "https://sts.windows.net/abcde123-86f1-41af-91ab-abcde1234567/",
"iat": "1421876371", "nbf": "1421876371", "exp": "1421880271", "ver": "1.0", "http://schemas.microsoft.com/identity/claims/tenantid": "00000000-0000-0000-0000-000000000000", "http://schemas.microsoft.com/claims/authnmethodsreferences": "pwd",
- "http://schemas.microsoft.com/identity/claims/objectidentifier": "2468adf0-8211-44e3-95xq-85137af64708",
+ "http://schemas.microsoft.com/identity/claims/objectidentifier": "123abc45-8211-44e3-95xq-85137af64708",
"http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn": "admin@contoso.com", "puid": "20030000801A118C",
- "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier": "9vckmEGF7zDKk1YzIY8k0t1_EAPaXoeHyPRn6f413zM",
+ "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier": "9876543210DKk1YzIY8k0t1_EAPaXoeHyPRn6f413zM",
"http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname": "John", "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname": "Smith", "name": "John Smith",
- "groups": "cacfe77c-e058-4712-83qw-f9b08849fd60,7f71d11d-4c41-4b23-99d2-d32ce7aa621c,31522864-0578-4ea0-9gdc-e66cc564d18c",
+ "groups": "12345678-cacfe77c-e058-4712-83qw-f9b08849fd60,12345678-4c41-4b23-99d2-d32ce7aa621c,12345678-0578-4ea0-9gdc-e66cc564d18c",
"http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name": " admin@contoso.com",
- "appid": "c44b4083-3bq0-49c1-b47d-974e53cbdf3c",
+ "appid": "12345678-3bq0-49c1-b47d-974e53cbdf3c",
"appidacr": "2", "http://schemas.microsoft.com/identity/claims/scope": "user_impersonation", "http://schemas.microsoft.com/claims/authnclassreference": "1"
Following is an example of an event using this schema:
"location": "global", "properties": { "statusCode": "Created",
- "serviceRequestId": "50d5cddb-8ca0-47ad-9b80-6cde2207f97c"
+ "serviceRequestId": "12345678-8ca0-47ad-9b80-6cde2207f97c"
} } ]
Following is an example of an event using this schema:
-- ## Next steps * [Learn more about the Activity Log](./platform-logs-overview.md) * [Create a diagnostic setting to send Activity Log to Log Analytics workspace, Azure storage, or event hubs](./diagnostic-settings.md)
azure-monitor Diagnostic Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/diagnostic-settings.md
This section discusses requirements and limitations.
### Time before telemetry gets to destination
-Once you have set up a diagnostic setting, data should start flowing to your selected destination(s) with 90 minutes. If you get no information within 24 hours, then either
+Once you have set up a diagnostic setting, data should start flowing to your selected destination(s) within 90 minutes. If you get no information within 24 hours, then either
- no logs are being generated, or
- something is wrong in the underlying routing mechanism.

Try disabling the configuration and then reenabling it. Contact Azure support through the Azure portal if you continue to have issues.
azure-monitor Custom Logs Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/custom-logs-migrate.md
If all of these conditions aren't true, then you can use DCR-based log collectio
## Migration procedure If the table that you're targeting with DCR-based log collection fits the criteria above, then you must perform the following steps:
-1. Configure your data collection rule (DCR) following procedures at [Send custom logs to Azure Monitor Logs using Resource Manager templates](tutorial-logs-ingestion-api.md) or [Add transformation in workspace data collection rule to Azure Monitor using resource manager templates](tutorial-workspace-transformations-api.md).
+1. Configure your data collection rule (DCR) following procedures at [Send data to Azure Monitor using Logs ingestion API (Resource Manager templates)](tutorial-logs-ingestion-api.md) or [Add transformation in workspace data collection rule to Azure Monitor using Resource Manager templates](tutorial-workspace-transformations-api.md).
-1. If using the Logs ingestion API, also [configure the data collection endpoint (DCE)](tutorial-logs-ingestion-api.md#create-a-data-collection-endpoint) and the agent or component that will be sending data to the API.
+1. If using the Logs ingestion API, also [configure the data collection endpoint (DCE)](tutorial-logs-ingestion-api.md#create-data-collection-endpoint) and the agent or component that will be sending data to the API.
1. Issue the following API call against your table. This call is idempotent, so there will be no effect if the table has already been migrated.
azure-monitor Logs Dedicated Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-dedicated-clusters.md
eligible for commitment tier discount.
Availability zones aren't currently supported in all regions. New clusters you create in supported regions have availability zones enabled by default. ## Cluster pricing model
-Log Analytics Dedicated Clusters use a commitment tier pricing model of at least 500 GB/day. Any usage above the tier level incurs charges based on the per-GB rate of that commitment tier. See [Azure Monitor Logs pricing details](cost-logs.md#dedicated-clusters) for pricing details for dedicated clusters.
+Log Analytics Dedicated Clusters use a commitment tier pricing model of at least 500 GB/day. Any usage above the tier level incurs charges based on the per-GB rate of that commitment tier. See [Azure Monitor Logs pricing details](cost-logs.md#dedicated-clusters) for pricing details for dedicated clusters. The commitment tiers have a 31-day commitment period from the time a commitment tier is selected.
+ ## Required permissions To perform cluster-related actions, you need these permissions:
The same as for 'clusters in a resource group', but in subscription scope.
## Update commitment tier in cluster
-When the data volume to your linked workspaces changes over time, you can update the Commitment Tier level appropriately. The tier is specified in units of GB and can have values of 500, 1000, 2000 or 5000 GB/day. You don't have to provide the full REST request body, but you must include the sku.
+When the data volume to linked workspaces changes over time, you can update the Commitment Tier level appropriately to optimize cost. The tier is specified in units of Gigabytes (GB) and can have values of 500, 1000, 2000 or 5000 GB per day. You don't have to provide the full REST request body, but you must include the sku.
+
+During the commitment period, you can change to a higher commitment tier, which restarts the 31-day commitment period. You can't move back to pay-as-you-go or to a lower commitment tier until after you finish the commitment period.
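For illustration only, a tier change is an update of the cluster resource's `sku`. Here's a hedged C# sketch of such a REST call; the subscription, resource group, cluster name, token, and `api-version` value are placeholders, and the CLI, PowerShell, and REST tabs that follow remain the authoritative commands:

```csharp
using System.Net.Http.Headers;
using System.Text;

// Placeholder resource path and api-version; substitute your own values.
var uri = "https://management.azure.com/subscriptions/<subscription-id>" +
          "/resourceGroups/<resource-group>/providers/Microsoft.OperationalInsights" +
          "/clusters/<cluster-name>?api-version=2022-10-01";

// Only the sku is required in the request body.
var body = @"{ ""sku"": { ""name"": ""CapacityReservation"", ""capacity"": 1000 } }";

using var http = new HttpClient();
http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", "<token>");

var response = await http.PatchAsync(uri, new StringContent(body, Encoding.UTF8, "application/json"));
Console.WriteLine(response.StatusCode);
```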
#### [CLI](#tab/cli)
Content-type: application/json
### Unlink a workspace from cluster
-You can unlink a workspace from a cluster at any time. The workspace pricing tier is changed to per-GB, data ingested to cluster before the unlink operation remains in the cluster, and new data to workspace get ingested to Log Analytics. You can query data as usual and the service performs cross-cluster queries seamlessly. If cluster was configured with Customer-managed key (CMK), data remains encrypted with your key and accessible, while your key and permissions to Key Vault remain.
+You can unlink a workspace from a cluster at any time. The workspace pricing tier is changed to per-GB, data ingested to the cluster before the unlink operation remains in the cluster, and new data to the workspace gets ingested to Log Analytics.
+
+> [!WARNING]
+> Unlinking a workspace does not move workspace data out of the cluster. Any data collected for the workspace while it was linked to the cluster remains in the cluster for the retention period defined in the workspace, and is accessible as long as the cluster isn't deleted.
+
+Queries aren't affected when a workspace is unlinked, and the service performs cross-cluster queries seamlessly. If the cluster was configured with a customer-managed key (CMK), data ingested to the workspace while it was linked remains encrypted with your key and accessible, as long as your key and permissions to the Key Vault remain.
> [!NOTE]
-> There is a limit of two link operations for a specific workspace within a month to prevent data distribution across clusters. Contact support if you reach limit.
+> - There is a limit of two link operations for a specific workspace within a month to prevent data distribution across clusters. Contact support if you reach the limit.
+> - Unlinked workspaces are moved to the Pay-As-You-Go pricing tier.
Use the following commands to unlink a workspace from cluster:
N/A
You need to have *write* permissions on the cluster resource.
-When deleting a cluster, you're losing access to all data, which was ingested from workspaces that were linked to it. This operation isn't reversible.
-The cluster's billing stops when cluster is deleted, regardless of the 30-days commitment tier defined in cluster.
+Delete a cluster with caution; the operation is non-recoverable. All data ingested to the cluster from linked workspaces is permanently deleted.
+
+The cluster's billing stops when the cluster is deleted, regardless of the 31-day commitment period defined for the cluster.
-If you delete your cluster while workspaces are linked, workspaces get automatically unlinked from the cluster before the cluster delete, and new data to workspaces gets ingested to Log Analytics clusters instead. You can query workspace for the time range before it was linked to the cluster, and after the unlink, and the service performs cross-cluster queries seamlessly.
+If you delete a cluster that has linked workspaces, the workspaces are automatically unlinked from the cluster and moved to the Pay-As-You-Go pricing tier, and new workspace data is ingested to Log Analytics instead. You can query the workspace for the time range before it was linked to the cluster and after the unlink, and the service performs cross-cluster queries seamlessly.
> [!NOTE] > - There is a limit of seven clusters per subscription and region, five active, plus two that were deleted in past two weeks.
-> - Cluster's name remain reserved for 14 days after deletion, and can't be used for creating a new cluster.
+> - The cluster name remains reserved for two weeks after deletion and can't be used for creating a new cluster.
Use the following commands to delete a cluster:
Authorization: Bearer <token>
- If you create a cluster and get an error "region-name doesn't support Double Encryption for clusters.", you can still create the cluster without Double encryption by adding `"properties": {"isDoubleEncryptionEnabled": false}` in the REST request body. - Double encryption setting can't be changed after the cluster has been created. -- Deleting a linked workspace is permitted while linked to cluster. If you decide to [recover](./delete-workspace.md#recover-a-workspace) a workspace during the [soft-delete](./delete-workspace.md#soft-delete-behavior) period, it returns to previous state and remains linked to cluster.
+- Deleting a workspace is permitted while it's linked to a cluster. If you decide to [recover](./delete-workspace.md#recover-a-workspace) the workspace during the [soft-delete](./delete-workspace.md#soft-delete-behavior) period, the workspace returns to its previous state and remains linked to the cluster.
+
+- During the commitment period, you can change to a higher commitment tier, which restarts the 31-day commitment period. You can't move back to pay-as-you-go or to a lower commitment tier until after you finish the commitment period.
## Troubleshooting
azure-monitor Logs Ingestion Api Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-ingestion-api-overview.md
Title: Logs Ingestion API in Azure Monitor
-description: Send data to a Log Analytics workspace by using a REST API.
+description: Send data to a Log Analytics workspace using REST API or client libraries.
Last updated 06/27/2022 # Logs Ingestion API in Azure Monitor
+The Logs Ingestion API in Azure Monitor lets you send data to a Log Analytics workspace using either a [REST API call](#rest-api-call) or [client libraries](#client-libraries). By using this API, you can send data to [supported Azure tables](#supported-tables) or to [custom tables that you create](../logs/create-custom-table.md#create-a-custom-table). You can even [extend the schema of Azure tables with custom columns](../logs/create-custom-table.md#add-or-delete-a-custom-column) to accept additional data.
-The Logs Ingestion API in Azure Monitor lets you send data to a Log Analytics workspace from any REST API client. By using this API, you can send data from almost any source to [supported Azure tables](#supported-tables) or to [custom tables that you create](../logs/create-custom-table.md#create-a-custom-table). You can even [extend the schema of Azure tables with custom columns](../logs/create-custom-table.md#add-or-delete-a-custom-column).
-
-> [!NOTE]
-> The Logs Ingestion API was previously referred to as the custom logs API.
## Basic operation Your application sends data to a [data collection endpoint (DCE)](../essentials/data-collection-endpoint-overview.md), which is a unique connection point for your subscription. The payload of your API call includes the source data formatted in JSON. The call: - Specifies a [data collection rule (DCR)](../essentials/data-collection-rule-overview.md) that understands the format of the source data.-- Potentially filters and transforms it for the target table.-- Directs it to a specific table in a specific workspace.
+- Potentially filters and transforms the data for the target table.
+- Directs the data to a specific table in a specific workspace.
-You can modify the target table and workspace by modifying the DCR without any change to the REST API call or source data.
+You can modify the target table and workspace by modifying the DCR without any change to the API call or source data.
:::image type="content" source="media/data-ingestion-api-overview/data-ingestion-api-overview.png" lightbox="media/data-ingestion-api-overview/data-ingestion-api-overview.png" alt-text="Diagram that shows an overview of logs ingestion API."::: > [!NOTE] > To migrate solutions from the [Data Collector API](data-collector-api.md), see [Migrate from Data Collector API and custom fields-enabled tables to DCR-based custom logs](custom-logs-migrate.md).
-## Supported tables
-
-### Custom tables
+## Components
-The Logs Ingestion API can send data to any custom table that you create and to certain Azure tables in your Log Analytics workspace. The target table must exist before you can send data to it. Custom tables must have the `_CL` suffix.
+The Logs ingestion API requires the following components to be created before you can send data. All of these components must be located in the same region.
-### Azure tables
+| Component | Description |
+|:|:|
+| Data collection endpoint (DCE) | The DCE provides an endpoint for the application to send to. A single DCE can support multiple DCRs. |
+| Data collection rule (DCR) | [Data collection rules](../essentials/data-collection-rule-overview.md) define data collected by Azure Monitor and specify how and where that data should be sent or stored. The API call must specify a DCR to use. The DCR must understand the structure of the input data and the structure of the target table. If the two don't match, it can include a [transformation](../essentials/data-collection-transformations.md) to convert the source data to match the target table. You can also use the transformation to filter source data and perform any other calculations or conversions. |
+| Log Analytics workspace | The Log Analytics workspace contains the tables that will receive the data. The target tables are specified in the DCR. See [Supported tables](#supported-tables) for the tables that the ingestion API can send to. |
-The Logs Ingestion API can send data to the following Azure tables. Other tables may be added to this list as support for them is implemented.
+## Supported tables
+The following tables can receive data from the ingestion API.
-- [CommonSecurityLog](/azure/azure-monitor/reference/tables/commonsecuritylog)-- [SecurityEvents](/azure/azure-monitor/reference/tables/securityevent)-- [Syslog](/azure/azure-monitor/reference/tables/syslog)-- [WindowsEvents](/azure/azure-monitor/reference/tables/windowsevent)
+| Tables | Description |
+|:|:|
+| Custom tables | The Logs Ingestion API can send data to any custom table that you create in your Log Analytics workspace. The target table must exist before you can send data to it. Custom tables must have the `_CL` suffix. |
+| Azure tables | The Logs Ingestion API can send data to the following Azure tables. Other tables may be added to this list as support for them is implemented.<br><br>- [CommonSecurityLog](/azure/azure-monitor/reference/tables/commonsecuritylog)<br>- [SecurityEvents](/azure/azure-monitor/reference/tables/securityevent)<br>- [Syslog](/azure/azure-monitor/reference/tables/syslog)<br>- [WindowsEvents](/azure/azure-monitor/reference/tables/windowsevent)
> [!NOTE] > Column names must start with a letter and can consist of up to 45 alphanumeric characters and the characters `_` and `-`. The following are reserved column names: `Type`, `TenantId`, `resource`, `resourceid`, `resourcename`, `resourcetype`, `subscriptionid`, `tenanted`. Custom columns you add to an Azure table must have the suffix `_CF`.
Authentication for the Logs Ingestion API is performed at the DCE, which uses st
The source data sent by your application is formatted in JSON and must match the structure expected by the DCR. It doesn't necessarily need to match the structure of the target table because the DCR can include a [transformation](../essentials//data-collection-transformations.md) to convert the data to match the table's structure.
-## Data collection rule
-
-[Data collection rules](../essentials/data-collection-rule-overview.md) define data collected by Azure Monitor and specify how and where that data should be sent or stored. The REST API call must specify a DCR to use. A single DCE can support multiple DCRs, so you can specify a different DCR for different sources and target tables.
+## Client libraries
+You can use the following client libraries to send data to the Logs ingestion API.
-The DCR must understand the structure of the input data and the structure of the target table. If the two don't match, it can use a [transformation](../essentials/data-collection-transformations.md) to convert the source data to match the target table. You can also use the transformation to filter source data and perform any other calculations or conversions.
+- [.NET](/dotnet/api/overview/azure/Monitor.Ingestion-readme)
+- [Java](/java/api/overview/azure/monitor-ingestion-readme)
+- [JavaScript](/javascript/api/overview/azure/monitor-ingestion-readme)
+- [Python](/python/api/overview/azure/monitor-ingestion-readme)
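To make the flow concrete, here's a hedged sketch using the .NET client library (`Azure.Monitor.Ingestion`); the endpoint, DCR immutable ID, stream name, and entry shape are placeholders, not values from this article:

```csharp
using Azure;
using Azure.Identity;
using Azure.Monitor.Ingestion;

// Placeholder values; substitute your DCE URI, DCR immutable ID, and stream name.
var endpoint = new Uri("https://<dce-name>.<region>.ingest.monitor.azure.com");
var ruleId = "dcr-00000000000000000000000000000000";
var streamName = "Custom-MyTable_CL";

var client = new LogsIngestionClient(endpoint, new DefaultAzureCredential());

// Each entry must match the shape expected by the DCR's stream declaration.
var entries = new List<object>
{
    new { TimeGenerated = DateTimeOffset.UtcNow, AdditionalContext = "sample data" }
};

Response response = await client.UploadAsync(ruleId, streamName, entries);
Console.WriteLine(response.Status);
```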
-## Send data
-To send data to Azure Monitor with the Logs Ingestion API, make a POST call to the DCE over HTTP. Details of the call are described in the following sections.
+## REST API call
+To send data to Azure Monitor with a REST API call, make a POST call to the DCE over HTTP. Details of the call are described in the following sections.
### Endpoint URI
The endpoint URI uses the following format, where the `Data Collection Endpoint` and `DCR Immutable ID` identify the DCE and DCR. `Stream Name` refers to the [stream](../essentials/data-collection-rule-structure.md#custom-logs) in the DCR that should handle the custom data. ```
The endpoint URI uses the following format, where the `Data Collection Endpoint`
The body of the call includes the custom data to be sent to Azure Monitor. The shape of the data must be a JSON object or array with a structure that matches the format expected by the stream in the DCR. Additionally, it is important to ensure that the request body is properly encoded in UTF-8 to prevent any issues with data transmission.
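As a non-authoritative sketch of such a call from C# (the DCE, DCR immutable ID, stream name, `api-version` value, and token are all placeholders; see the tutorials linked under Next steps for an end-to-end sample):

```csharp
using System.Net.Http.Headers;
using System.Text;

// Placeholder values; the bearer token must be acquired from Azure AD
// for the Azure Monitor resource (see the linked tutorials).
var dce = "https://<dce-name>.<region>.ingest.monitor.azure.com";
var dcrImmutableId = "dcr-00000000000000000000000000000000";
var streamName = "Custom-MyTable_CL";
var uri = $"{dce}/dataCollectionRules/{dcrImmutableId}/streams/{streamName}?api-version=2021-11-01-preview";

// A UTF-8 encoded JSON array shaped to match the DCR's stream declaration.
var body = @"[{ ""TimeGenerated"": ""2023-03-24T00:00:00Z"", ""AdditionalContext"": ""sample"" }]";

using var http = new HttpClient();
http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", "<token>");

var response = await http.PostAsync(uri, new StringContent(body, Encoding.UTF8, "application/json"));
Console.WriteLine(response.StatusCode);
```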
-## Sample call
-For sample data and an API call using the Logs Ingestion API, see either [Send custom logs to Azure Monitor Logs using the Azure portal](tutorial-logs-ingestion-portal.md) or [Send custom logs to Azure Monitor Logs using Resource Manager templates](tutorial-logs-ingestion-api.md).
## Limits and restrictions
For limits related to the Logs Ingestion API, see [Azure Monitor service limits]
## Next steps -- [Walk through a tutorial sending custom logs using the Azure portal](tutorial-logs-ingestion-portal.md)
+- [Walk through a tutorial configuring the Logs ingestion API using the Azure portal](tutorial-logs-ingestion-portal.md)
- [Walk through a tutorial sending custom logs using Resource Manager templates and REST API](tutorial-logs-ingestion-api.md) - Get guidance on using the client libraries for the Logs ingestion API for [.NET](/dotnet/api/overview/azure/Monitor.Ingestion-readme), [Java](/java/api/overview/azure/monitor-ingestion-readme), [JavaScript](/javascript/api/overview/azure/monitor-ingestion-readme), or [Python](/python/api/overview/azure/monitor-ingestion-readme).
azure-monitor Manage Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/manage-access.md
In addition to using the built-in roles for a Log Analytics workspace, you can c
## Set table-level read access
-To create a [custom role](../../role-based-access-control/custom-roles.md) that lets specific users or groups read data from specific tables in a workspace:
-
-1. Create a custom role that grants users permission to execute queries in the Log Analytics workspace, based on the built-in Azure Monitor Logs **Reader** role:
-
- 1. Navigate to your workspace and select **Access control (IAM)** > **Roles**.
-
- 1. Right-click the **Reader** role and select **Clone**.
-
- :::image type="content" source="media/manage-access/access-control-clone-role.png" alt-text="Screenshot that shows the Roles tab of the Access control screen with the clone button highlighted for the Reader role." lightbox="media/manage-access/access-control-clone-role.png":::
-
- This opens the **Create a custom role** screen.
-
- 1. On the **Basics** tab of the screen, enter a **Custom role name** value and, optionally, provide a description.
-
- :::image type="content" source="media/manage-access/manage-access-create-custom-role.png" alt-text="Screenshot that shows the Basics tab of the Create a custom role screen with the Custom role name and Description fields highlighted." lightbox="media/manage-access/manage-access-create-custom-role.png":::
-
- 1. Select the **JSON** tab > **Edit**::
-
- 1. In the `"actions"` section, add:
-
- - `Microsoft.OperationalInsights/workspaces/read`
- - `Microsoft.OperationalInsights/workspaces/query/read`
- - `Microsoft.OperationalInsights/workspaces/analytics/query/action`
- - `Microsoft.OperationalInsights/workspaces/search/action`
-
- 1. In the `"not actions"` section, add `Microsoft.OperationalInsights/workspaces/sharedKeys/read`.
-
- :::image type="content" source="media/manage-access/manage-access-create-custom-role-json.png" alt-text="Screenshot that shows the JSON tab of the Create a custom role screen with the actions section of the JSON file highlighted." lightbox="media/manage-access/manage-access-create-custom-role-json.png":::
-
- 1. Select **Save** > **Review + Create** at the bottom of the screen, and then **Create** on the next page.
-
-1. Assign your custom role to the relevant users or groups:
- 1. Select **Access control (AIM)** > **Add** > **Add role assignment**.
-
- :::image type="content" source="media/manage-access/manage-access-add-role-assignment-button.png" alt-text="Screenshot that shows the Access control screen with the Add role assignment button highlighted." lightbox="media/manage-access/manage-access-add-role-assignment-button.png":::
-
- 1. Select the custom role you created and select **Next**.
-
- :::image type="content" source="media/manage-access/manage-access-add-role-assignment-screen.png" alt-text="Screenshot that shows the Add role assignment screen with a custom role and the Next button highlighted." lightbox="media/manage-access/manage-access-add-role-assignment-screen.png":::
-
-
- This opens the **Members** tab of the **Add custom role assignment** screen.
-
- 1. Click **+ Select members** to open the **Select members** screen.
-
- :::image type="content" source="media/manage-access/manage-access-add-role-assignment-select-members.png" alt-text="Screenshot that shows the Select members screen." lightbox="media/manage-access/manage-access-add-role-assignment-select-members.png":::
-
- 1. Search for and select the relevant user or group and click **Select**.
- 1. Select **Review and assign**.
-
-1. Grant the users or groups read access to specific tables in a workspace by calling the `https://management.azure.com/batch?api-version=2020-06-01` POST API and sending the following details in the request body:
-
- ```json
- {
- "requests": [
- {
- "content": {
- "Id": "<GUID_1>",
- "Properties": {
- "PrincipalId": "<user_object_ID>",
- "PrincipalType": "User",
- "RoleDefinitionId": "/providers/Microsoft.Authorization/roleDefinitions/acdd72a7-3385-48ef-bd42-f606fba81ae7",
- "Scope": "/subscriptions/<subscription_ID>/resourceGroups/<resource_group_name>/providers/Microsoft.OperationalInsights/workspaces/<workspace_name>/Tables/<table_name>",
- "Condition": null,
- "ConditionVersion": null
- }
- },
- "httpMethod": "PUT",
- "name": "<GUID_2>",
- "requestHeaderDetails": {
- "commandName": "Microsoft_Azure_AD."
- },
- "url": "/subscriptions/<subscription_ID>/resourceGroups/<resource_group_name>/providers/Microsoft.OperationalInsights/workspaces/<workspace_name>/Tables/<table_name>/providers/Microsoft.Authorization/roleAssignments/<GUID_1>?api-version=2020-04-01-preview"
- }
- ]
- }
- ```
-
- Where:
- - You can generate a GUID for `<GUID 1>` and `<GUID 2>` using any GUID generator.
- - `<user_object_ID>` is the object ID of the user to which you want to grant table read access.
- - `<subscription_ID>` is the ID of the subscription related to the workspace.
- - `<resource_group_name>` is the resource group of the workspace.
- - `<workspace_name>` is the name of the workspace.
- - `<table_name>` is the name of the table to which you want to assign the user or group permission to read data from.
-
-### Legacy method of setting table-level read access
-
-[Azure custom roles](../../role-based-access-control/custom-roles.md) let you grant access to specific tables in the workspace, although we recommend defining [table-level read access](#set-table-level-read-access) as described above.
-
-Azure custom roles apply to workspaces with either workspace-context or resource-context [access control modes](#access-control-mode) regardless of the user's [access mode](#access-mode).
+[Azure custom roles](../../role-based-access-control/custom-roles.md) let you grant specific users or groups access to specific tables in the workspace. Azure custom roles apply to workspaces with either workspace-context or resource-context [access control modes](#access-control-mode) regardless of the user's [access mode](#access-mode).
To define access to a particular table, create a [custom role](../../role-based-access-control/custom-roles.md):
To define access to a particular table, create a [custom role](../../role-based-
* Use `Microsoft.OperationalInsights/workspaces/query/*` to grant access to all tables. * To exclude access to specific tables when you use a wildcard in **Actions**, list the excluded tables in the **NotActions** section of the role definition.
-#### Examples
+### Examples
Here are examples of custom role actions to grant and deny access to specific tables.
Grant access to all tables except the _SecurityAlert_ table:
], ```
-#### Custom tables
+### Custom tables
- Custom tables store data you collect from data sources such as [text logs](../agents/data-sources-custom-logs.md) and the [HTTP Data Collector API](data-collector-api.md). To identify the table type, [view table information in Log Analytics](./log-analytics-tutorial.md#view-table-information).
+Custom tables store data you collect from data sources such as [text logs](../agents/data-sources-custom-logs.md) and the [HTTP Data Collector API](data-collector-api.md). To identify the table type, [view table information in Log Analytics](./log-analytics-tutorial.md#view-table-information).
> [!NOTE] > Tables created by the [Logs ingestion API](../essentials/../logs/logs-ingestion-api-overview.md) don't yet support table-level RBAC.
- You can't grant access to individual custom log tables, but you can grant access to all custom logs. To create a role with access to all custom log tables, create a custom role by using the following actions:
+You can't grant access to individual custom log tables at the table level, but you can grant access to all custom log tables. To create a role with access to all custom log tables, create a custom role by using the following actions:
``` "Actions": [
Some custom logs come from sources that aren't directly associated to a specific
For example, if a specific firewall is sending custom logs, create a resource group called *MyFireWallLogs*. Make sure that the API requests contain the resource ID of *MyFireWallLogs*. The firewall log records are then accessible only to users who were granted access to *MyFireWallLogs* or those users with full workspace access.
-#### Considerations
+### Considerations
- If a user is granted global read permission with the standard Reader or Contributor roles that include the _\*/read_ action, it will override the per-table access control and give them access to all log data. - If a user is granted per-table access but no other permissions, they can access log data from the API but not from the Azure portal. To provide access from the Azure portal, use Log Analytics Reader as its base role.
azure-monitor Tutorial Logs Ingestion Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-logs-ingestion-api.md
Title: 'Tutorial: Send data to Azure Monitor Logs using REST API (Resource Manager templates)'
-description: Tutorial on how to send data to a Log Analytics workspace in Azure Monitor by using the REST API Azure Resource Manager template version.
+ Title: 'Tutorial: Send data to Azure Monitor Logs with Logs ingestion API (Resource Manager templates)'
+description: Tutorial on how to send data to a Log Analytics workspace in Azure Monitor using the Logs ingestion API, with the supporting components configured using Resource Manager templates.
Previously updated : 02/01/2023 Last updated : 03/20/2023
-# Tutorial: Send data to Azure Monitor Logs using REST API (Resource Manager templates)
-The [Logs Ingestion API](logs-ingestion-api-overview.md) in Azure Monitor allows you to send external data to a Log Analytics workspace with a REST API. This tutorial uses Azure Resource Manager templates (ARM templates) to walk through configuration of a new table and a sample application to send log data to Azure Monitor.
+# Tutorial: Send data to Azure Monitor using Logs ingestion API (Resource Manager templates)
+The [Logs Ingestion API](logs-ingestion-api-overview.md) in Azure Monitor allows you to send custom data to a Log Analytics workspace. This tutorial uses Azure Resource Manager templates (ARM templates) to walk through configuration of the components required to support the API and then provides a sample application using both the REST API and client libraries for [.NET](/dotnet/api/overview/azure/Monitor.Ingestion-readme), [Java](/java/api/overview/azure/monitor-ingestion-readme), [JavaScript](/javascript/api/overview/azure/monitor-ingestion-readme), and [Python](/python/api/overview/azure/monitor-ingestion-readme).
> [!NOTE]
-> This tutorial uses ARM templates and a REST API to configure custom logs. For a similar tutorial using the Azure portal, see [Tutorial: Send data to Azure Monitor Logs using REST API (Azure portal)](tutorial-logs-ingestion-portal.md).
->
+> This tutorial uses ARM templates to configure the components required to support the Logs ingestion API. See [Tutorial: Send data to Azure Monitor Logs with Logs ingestion API (Azure portal)](tutorial-logs-ingestion-portal.md) for a similar tutorial that uses the Azure portal to configure these components.
-In this tutorial, you learn to:
-
-> [!div class="checklist"]
-> * Create a custom table in a Log Analytics workspace.
-> * Create a data collection endpoint (DCE) to receive data over HTTP.
-> * Create a data collection rule (DCR) that transforms incoming data to match the schema of the target table.
-> * Create a sample application to send custom data to Azure Monitor.
-
-> [!NOTE]
-> This tutorial uses PowerShell from Azure Cloud Shell to make REST API calls by using the Azure Monitor **Tables** API and the Azure portal to install ARM templates. You can use any other method to make these calls.
->
-> See [.NET](/dotnet/api/overview/azure/Monitor.Ingestion-readme), [Java](/java/api/overview/azure/monitor-ingestion-readme), [JavaScript](/javascript/api/overview/azure/monitor-ingestion-readme), or [Python](/python/api/overview/azure/monitor-ingestion-readme) for guidance on using the Logs ingestion API client libraries for other languages.
+The steps required to configure the Logs ingestion API are as follows:
+1. [Create an Azure AD application](#create-azure-ad-application) to authenticate against the API.
+2. [Create a data collection endpoint (DCE)](#create-data-collection-endpoint) to receive data.
+3. [Create a custom table in a Log Analytics workspace](#create-new-table-in-log-analytics-workspace). This is the table you'll be sending data to.
+4. [Create a data collection rule (DCR)](#create-data-collection-rule) to direct the data to the target table.
+5. [Give the Azure AD application access to the DCR](#assign-permissions-to-a-dcr).
+6. See [Sample code to send data to Azure Monitor using Logs ingestion API](tutorial-logs-ingestion-code.md) for sample code that sends data using the Logs ingestion API.
## Prerequisites To complete this tutorial, you need:
To complete this tutorial, you need:
- A Log Analytics workspace where you have at least [contributor rights](manage-access.md#azure-rbac). - [Permissions to create DCR objects](../essentials/data-collection-rule-overview.md#permissions) in the workspace. + ## Collect workspace details Start by gathering information that you'll need from your workspace.
Go to your workspace in the **Log Analytics workspaces** menu in the Azure porta
:::image type="content" source="media/tutorial-logs-ingestion-api/workspace-resource-id.png" lightbox="media/tutorial-logs-ingestion-api/workspace-resource-id.png" alt-text="Screenshot that shows the workspace resource ID.":::
-## Configure an application
+## Create Azure AD application
Start by registering an Azure Active Directory application to authenticate against the API. Any Resource Manager authentication scheme is supported, but this tutorial follows the [Client Credential Grant Flow scheme](../../active-directory/develop/v2-oauth2-client-creds-grant-flow.md). 1. On the **Azure Active Directory** menu in the Azure portal, select **App registrations** > **New registration**.
Start by registering an Azure Active Directory application to authenticate again
:::image type="content" source="media/tutorial-logs-ingestion-portal/new-app-secret-value.png" lightbox="media/tutorial-logs-ingestion-portal/new-app-secret-value.png" alt-text="Screenshot that shows the secret value for the new app.":::
-## Create a new table in a Log Analytics workspace
-The custom table must be created before you can send data to it. The table for this tutorial will include three columns, as described in the following schema. The `name`, `type`, and `description` properties are mandatory for each column. The properties `isHidden` and `isDefaultDisplay` both default to `false` if not explicitly specified. Possible data types are `string`, `int`, `long`, `real`, `boolean`, `dateTime`, `guid`, and `dynamic`.
-
-Use the **Tables - Update** API to create the table with the following PowerShell code.
-
-> [!IMPORTANT]
-> Custom tables must use a suffix of `_CL`.
-
-1. Select the **Cloud Shell** button in the Azure portal and ensure the environment is set to **PowerShell**.
-
- :::image type="content" source="media/tutorial-workspace-transformations-api/open-cloud-shell.png" lightbox="media/tutorial-workspace-transformations-api/open-cloud-shell.png" alt-text="Screenshot that shows opening Cloud Shell.":::
-
-1. Copy the following PowerShell code and replace the **Path** parameter with the appropriate values for your workspace in the `Invoke-AzRestMethod` command. Paste it into the Cloud Shell prompt to run it.
-
- ```PowerShell
- $tableParams = @'
- {
- "properties": {
- "schema": {
- "name": "MyTable_CL",
- "columns": [
- {
- "name": "TimeGenerated",
- "type": "datetime",
- "description": "The time at which the data was generated"
- },
- {
- "name": "AdditionalContext",
- "type": "dynamic",
- "description": "Additional message properties"
- },
- {
- "name": "CounterName",
- "type": "string",
- "description": "Name of the counter"
- },
- {
- "name": "CounterValue",
- "type": "real",
- "description": "Value collected for the counter"
- }
- ]
- }
- }
- }
- '@
-
- Invoke-AzRestMethod -Path "/subscriptions/{subscription}/resourcegroups/{resourcegroup}/providers/microsoft.operationalinsights/workspaces/{workspace}/tables/MyTable_CL?api-version=2021-12-01-preview" -Method PUT -payload $tableParams
- ```
-
-## Create a data collection endpoint
-A [DCE](../essentials/data-collection-endpoint-overview.md) is required to accept the data being sent to Azure Monitor. After you configure the DCE and link it to a DCR, you can send data over HTTP from your application. The DCE must be located in the same region as the Log Analytics workspace where the data will be sent.
+## Create data collection endpoint
+A [DCE](../essentials/data-collection-endpoint-overview.md) is required to accept the data being sent to Azure Monitor. After you configure the DCE and link it to a DCR, you can send data over HTTP from your application. The DCE must be located in the same region as the DCR and the Log Analytics workspace where the data will be sent.
1. In the Azure portal's search box, enter **template** and then select **Deploy a custom template**.
A [DCE](../essentials/data-collection-endpoint-overview.md) is required to accep
"location": { "type": "string", "defaultValue": "westus2",
- "allowedValues": [
- "westus2",
- "eastus2",
- "eastus2euap"
- ],
"metadata": {
- "description": "Specifies the location in which to create the Data Collection Endpoint."
+ "description": "Specifies the location for the Data Collection Endpoint."
} } },
A [DCE](../essentials/data-collection-endpoint-overview.md) is required to accep
1. Select **Review + create** and then select **Create** after you review the details.
-1. After the DCE is created, select it so that you can view its properties. Note the **Logs ingestion URI** because you'll need it in a later step.
+1. Select **JSON View** to view other details for the DCE. Copy the **Resource ID** and the **logsIngestion endpoint**, which you'll need in a later step.
- :::image type="content" source="media/tutorial-logs-ingestion-api/data-collection-endpoint-overview.png" lightbox="media/tutorial-logs-ingestion-api/data-collection-endpoint-overview.png" alt-text="Screenshot that shows the DCE URI.":::
+ :::image type="content" source="media/tutorial-logs-ingestion-api/data-collection-endpoint-json.png" lightbox="media/tutorial-logs-ingestion-api/data-collection-endpoint-json.png" alt-text="Screenshot that shows the DCE resource ID.":::
-1. Select **JSON View** to view other details for the DCE. Copy the **Resource ID** because you'll need it in a later step.
- :::image type="content" source="media/tutorial-logs-ingestion-api/data-collection-endpoint-json.png" lightbox="media/tutorial-logs-ingestion-api/data-collection-endpoint-json.png" alt-text="Screenshot that shows the DCE resource ID.":::
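If you prefer to collect these values from the command line instead of the portal's JSON view, a sketch like the following works with `Invoke-AzRestMethod`; the subscription, resource group, and DCE names are placeholders.

```powershell
# Sketch: read the DCE's resource ID and logs ingestion endpoint from the REST API.
$dcePath = "/subscriptions/{subscription}/resourceGroups/{resourcegroup}/providers/Microsoft.Insights/dataCollectionEndpoints/{dce-name}?api-version=2022-06-01"
$dce = (Invoke-AzRestMethod -Path $dcePath -Method GET).Content | ConvertFrom-Json

$dce.id                                # Resource ID used when creating the DCR
$dce.properties.logsIngestion.endpoint # Logs ingestion URI used by the sample code
```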
+## Create new table in Log Analytics workspace
+The custom table must be created before you can send data to it. The table for this tutorial includes the five columns shown in the schema below. The `name`, `type`, and `description` properties are mandatory for each column. The properties `isHidden` and `isDefaultDisplay` both default to `false` if not explicitly specified. Possible data types are `string`, `int`, `long`, `real`, `boolean`, `dateTime`, `guid`, and `dynamic`.
-## Create a data collection rule
-The [DCR](../essentials/data-collection-rule-overview.md) defines the schema of data that's being sent to the HTTP endpoint and the [transformation](../essentials/data-collection-transformations.md) that will be applied to it before it's sent to the workspace. The DCR also defines the destination workspace and table the transformed data will be sent to.
+> [!NOTE]
+> This tutorial uses PowerShell from Azure Cloud Shell to make REST API calls by using the Azure Monitor **Tables** API. You can use any other valid method to make these calls.
+
+> [!IMPORTANT]
+> Custom tables must use a suffix of `_CL`.
+
+1. Select the **Cloud Shell** button in the Azure portal and ensure the environment is set to **PowerShell**.
+
+ :::image type="content" source="media/tutorial-workspace-transformations-api/open-cloud-shell.png" lightbox="media/tutorial-workspace-transformations-api/open-cloud-shell.png" alt-text="Screenshot that shows opening Cloud Shell.":::
+
+1. Copy the following PowerShell code and replace the variables in the **Path** parameter with the appropriate values for your workspace in the `Invoke-AzRestMethod` command. Paste it into the Cloud Shell prompt to run it.
+
+ ```PowerShell
+ $tableParams = @'
+ {
+ "properties": {
+ "schema": {
+ "name": "MyTable_CL",
+ "columns": [
+ {
+ "name": "TimeGenerated",
+ "type": "datetime",
+ "description": "The time at which the data was generated"
+ },
+ {
+ "name": "Computer",
+ "type": "string",
+ "description": "The computer that generated the data"
+ },
+ {
+ "name": "AdditionalContext",
+ "type": "dynamic",
+ "description": "Additional message properties"
+ },
+ {
+ "name": "CounterName",
+ "type": "string",
+ "description": "Name of the counter"
+ },
+ {
+ "name": "CounterValue",
+ "type": "real",
+ "description": "Value collected for the counter"
+ }
+ ]
+ }
+ }
+ }
+ '@
+
+ Invoke-AzRestMethod -Path "/subscriptions/{subscription}/resourcegroups/{resourcegroup}/providers/microsoft.operationalinsights/workspaces/{workspace}/tables/MyTable_CL?api-version=2021-12-01-preview" -Method PUT -payload $tableParams
+ ```
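To confirm that the table was created, you can optionally issue a GET against the same path, using the same placeholder values as above.

```powershell
# Optional check: a 200 response containing the table schema confirms the table exists.
$response = Invoke-AzRestMethod -Path "/subscriptions/{subscription}/resourcegroups/{resourcegroup}/providers/microsoft.operationalinsights/workspaces/{workspace}/tables/MyTable_CL?api-version=2021-12-01-preview" -Method GET
$response.StatusCode
$response.Content
```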
+
+## Create data collection rule
+The [DCR](../essentials/data-collection-rule-overview.md) defines how the data will be handled once it's received. This includes:
+
+- Schema of data that's being sent to the endpoint
+- [Transformation](../essentials/data-collection-transformations.md) that will be applied to the data before it's sent to the workspace
+- Destination workspace and table the transformed data will be sent to
1. In the Azure portal's search box, enter **template** and then select **Deploy a custom template**. :::image type="content" source="media/tutorial-workspace-transformations-api/deploy-custom-template.png" lightbox="media/tutorial-workspace-transformations-api/deploy-custom-template.png" alt-text="Screenshot that shows how to deploy a custom template.":::
-1. Select **Build your own template in the editor**.
+2. Select **Build your own template in the editor**.
:::image type="content" source="media/tutorial-workspace-transformations-api/build-custom-template.png" lightbox="media/tutorial-workspace-transformations-api/build-custom-template.png" alt-text="Screenshot that shows how to build a template in the editor.":::
-1. Paste the following ARM template into the editor and then select **Save**.
+3. Paste the following ARM template into the editor and then select **Save**.
:::image type="content" source="media/tutorial-workspace-transformations-api/edit-template.png" lightbox="media/tutorial-workspace-transformations-api/edit-template.png" alt-text="Screenshot that shows how to edit an ARM template."::: Notice the following details in the DCR defined in this template:
- - `dataCollectionEndpointId`: Identifies the Resource ID of the data collection endpoint.
- - `streamDeclarations`: Defines the columns of the incoming data.
- - `destinations`: Specifies the destination workspace.
+ - `dataCollectionEndpointId`: Resource ID of the data collection endpoint.
+ - `streamDeclarations`: Column definitions of the incoming data.
+ - `destinations`: Destination workspace.
- `dataFlows`: Matches the stream with the destination workspace and specifies the transformation query and the destination table. The output of the destination query is what will be sent to the destination table. ```json
The [DCR](../essentials/data-collection-rule-overview.md) defines the schema of
"logAnalytics": [ { "workspaceResourceId": "[parameters('workspaceResourceId')]",
- "name": "clv2ws1"
+ "name": "myworkspace"
} ] },
The [DCR](../essentials/data-collection-rule-overview.md) defines the schema of
"Custom-MyTableRawData" ], "destinations": [
- "clv2ws1"
+ "myworkspace"
], "transformKql": "source | extend jsonContext = parse_json(AdditionalContext) | project TimeGenerated = Time, Computer, AdditionalContext = jsonContext, CounterName=tostring(jsonContext.CounterName), CounterValue=toreal(jsonContext.CounterValue)", "outputStream": "Custom-MyTable_CL"
The [DCR](../essentials/data-collection-rule-overview.md) defines the schema of
} ```
-1. On the **Custom deployment** screen, specify a **Subscription** and **Resource group** to store the DCR. Then provide values defined in the template. The values include a **Name** for the DCR and the **Workspace Resource ID** that you collected in a previous step. The **Location** should be the same location as the workspace. The **Region** will already be populated and will be used for the location of the DCR.
+4. On the **Custom deployment** screen, specify a **Subscription** and **Resource group** to store the DCR. Then provide values defined in the template. The values include a **Name** for the DCR and the **Workspace Resource ID** that you collected in a previous step. The **Location** should be the same location as the workspace. The **Region** will already be populated and will be used for the location of the DCR.
:::image type="content" source="media/tutorial-workspace-transformations-api/custom-deployment-values.png" lightbox="media/tutorial-workspace-transformations-api/custom-deployment-values.png" alt-text="Screenshot that shows how to edit custom deployment values.":::
-1. Select **Review + create** and then select **Create** after you review the details.
+5. Select **Review + create** and then select **Create** after you review the details.
-1. When the deployment is complete, expand the **Deployment details** box and select your DCR to view its details. Select **JSON View**.
+6. When the deployment is complete, expand the **Deployment details** box and select your DCR to view its details. Select **JSON View**.
:::image type="content" source="media/tutorial-workspace-transformations-api/data-collection-rule-details.png" lightbox="media/tutorial-workspace-transformations-api/data-collection-rule-details.png" alt-text="Screenshot that shows DCR details.":::
After the DCR has been created, the application needs to be given permission to
:::image type="content" source="media/tutorial-logs-ingestion-portal/add-role-assignment-save.png" lightbox="media/tutorial-logs-ingestion-portal/add-role-assignment-save.png" alt-text="Screenshot that shows saving the DCR role assignment.":::
-## Send sample data
-The following PowerShell code sends data to the endpoint by using HTTP REST fundamentals.
-
-> [!NOTE]
-> This tutorial uses commands that require PowerShell v7.0 or later. Make sure your local installation of PowerShell is up to date or execute this script by using Azure Cloud Shell.
-
-1. Run the following PowerShell command, which adds a required assembly for the script.
-
- ```powershell
- Add-Type -AssemblyName System.Web
- ```
-
-1. Replace the parameters in the **Step 0** section with values from the resources that you created. You might also want to replace the sample data in the **Step 2** section with your own.
-
- ```powershell
- ##################
- ### Step 0: Set parameters required for the rest of the script.
- ##################
- #information needed to authenticate to AAD and obtain a bearer token
- $tenantId = "00000000-0000-0000-0000-000000000000"; #Tenant ID the data collection endpoint resides in
- $appId = "00000000-0000-0000-0000-000000000000"; #Application ID created and granted permissions
- $appSecret = "00000000000000000000000"; #Secret created for the application
-
- #information needed to send data to the DCR endpoint
- $dcrImmutableId = "dcr-000000000000000"; #the immutableId property of the DCR object
- $dceEndpoint = "https://my-dcr-name.westus2-1.ingest.monitor.azure.com"; #the endpoint property of the Data Collection Endpoint object
- $streamName = "Custom-MyTableRawData"; #name of the stream in the DCR that represents the destination table
-
- ##################
- ### Step 1: Obtain a bearer token used later to authenticate against the DCE.
- ##################
- $scope= [System.Web.HttpUtility]::UrlEncode("https://monitor.azure.com//.default")
- $body = "client_id=$appId&scope=$scope&client_secret=$appSecret&grant_type=client_credentials";
- $headers = @{"Content-Type"="application/x-www-form-urlencoded"};
- $uri = "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/token"
-
- $bearerToken = (Invoke-RestMethod -Uri $uri -Method "Post" -Body $body -Headers $headers).access_token
-
- ##################
- ### Step 2: Load up some sample data.
- ##################
- $currentTime = Get-Date ([datetime]::UtcNow) -Format O
- $staticData = @"
- [
- {
- "Time": "$currentTime",
- "Computer": "Computer1",
- "AdditionalContext": {
- "InstanceName": "user1",
- "TimeZone": "Pacific Time",
- "Level": 4,
- "CounterName": "AppMetric1",
- "CounterValue": 15.3
- }
- },
- {
- "Time": "$currentTime",
- "Computer": "Computer2",
- "AdditionalContext": {
- "InstanceName": "user2",
- "TimeZone": "Central Time",
- "Level": 3,
- "CounterName": "AppMetric1",
- "CounterValue": 23.5
- }
- }
- ]
- "@;
-
- ##################
- ### Step 3: Send the data to Log Analytics via the DCE.
- ##################
- $body = $staticData;
- $headers = @{"Authorization"="Bearer $bearerToken";"Content-Type"="application/json"};
- $uri = "$dceEndpoint/dataCollectionRules/$dcrImmutableId/streams/$($streamName)?api-version=2021-11-01-preview"
-
- $uploadResponse = Invoke-RestMethod -Uri $uri -Method "Post" -Body $body -Headers $headers
- ```
-
- > [!NOTE]
- > If you receive an `Unable to find type [System.Web.HttpUtility].` error, run the last line in section 1 of the script for a fix and execute it. Executing it uncommented as part of the script won't resolve the issue. The command must be executed separately.
-
-1. After you execute this script, you should see an `HTTP - 204` response. In a few minutes, the data arrives to your Log Analytics workspace.
-
-## Troubleshooting
-This section describes different error conditions you might receive and how to correct them.
-
-### Script returns error code 403
-Ensure that you have the correct permissions for your application to the DCR. You might also need to wait up to 30 minutes for permissions to propagate.
-
-### Script returns error code 413 or warning of TimeoutExpired with the message ReadyBody_ClientConnectionAbort in the response
-The message is too large. The maximum message size is currently 1 MB per call.
-
-### Script returns error code 429
-API limits have been exceeded. For information on the current limits, see [Service limits for the Logs Ingestion API](../service-limits.md#logs-ingestion-api).
-
-### Script returns error code 503
-Ensure that you have the correct permissions for your application to the DCR. You might also need to wait up to 30 minutes for permissions to propagate.
-### You don't receive an error, but data doesn't appear in the workspace
-The data might take some time to be ingested, especially if this is the first time data is being sent to a particular table. It shouldn't take longer than 15 minutes.
+## Sample code
+See [Sample code to send data to Azure Monitor using Logs ingestion API](tutorial-logs-ingestion-code.md) for sample code using the components created in this tutorial.
-### IntelliSense in Log Analytics doesn't recognize the new table
-The cache that drives IntelliSense might take up to 24 hours to update.
## Next steps
azure-monitor Tutorial Logs Ingestion Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-logs-ingestion-code.md
+
+ Title: 'Sample code to send data to Azure Monitor using Logs ingestion API'
+description: Sample code using REST API and client libraries for Logs ingestion API in Azure Monitor.
+ Last updated : 03/21/2023++
+# Sample code to send data to Azure Monitor using Logs ingestion API
+This article provides sample code using the [Logs ingestion API](logs-ingestion-api-overview.md). Each sample requires the following components to be created before the code is run. See [Tutorial: Send data to Azure Monitor using Logs ingestion API (Resource Manager templates)](tutorial-logs-ingestion-api.md) for a complete walkthrough of creating and configuring these components to support each of these samples.
++
+- Custom table in a Log Analytics workspace
+- Data collection endpoint (DCE) to receive data
+- Data collection rule (DCR) to direct the data to the target table
+- AD application with access to the DCR
+
+## Sample code
+
+## [PowerShell](#tab/powershell)
+
+The following PowerShell code sends data to the endpoint by using HTTP REST fundamentals.
+
+> [!NOTE]
+> This sample requires PowerShell v7.0 or later.
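You can check the version you're running before you start:

```powershell
# Should report 7.0 or later
$PSVersionTable.PSVersion
```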
+
+1. Run the following sample PowerShell command, which adds a required assembly for the script.
+
+ ```powershell
+ Add-Type -AssemblyName System.Web
+ ```
+
+1. Replace the parameters in the **Step 0** section with values from your application, DCE, and DCR. You might also want to replace the sample data in the **Step 2** section with your own.
+
+ ```powershell
+ ### Step 0: Set variables required for the rest of the script.
+
+ # information needed to authenticate to AAD and obtain a bearer token
+ $tenantId = "00000000-0000-0000-0000-000000000000" #Tenant ID the data collection endpoint resides in
+ $appId = "00000000-0000-0000-0000-000000000000" #Application ID created and granted permissions
+ $appSecret = "0000000000000000000000000000000000000000" #Secret created for the application
+
+ # information needed to send data to the DCR endpoint
+ $dceEndpoint = "https://logs-ingestion-rzmk.eastus2-1.ingest.monitor.azure.com" #the endpoint property of the Data Collection Endpoint object
+ $dcrImmutableId = "dcr-00000000000000000000000000000000" #the immutableId property of the DCR object
+ $streamName = "Custom-MyTableRawData" #name of the stream in the DCR that represents the destination table
+
+
+ ### Step 1: Obtain a bearer token used later to authenticate against the DCE.
+
+ $scope= [System.Web.HttpUtility]::UrlEncode("https://monitor.azure.com//.default")
+ $body = "client_id=$appId&scope=$scope&client_secret=$appSecret&grant_type=client_credentials";
+ $headers = @{"Content-Type"="application/x-www-form-urlencoded"};
+ $uri = "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/token"
+
+ $bearerToken = (Invoke-RestMethod -Uri $uri -Method "Post" -Body $body -Headers $headers).access_token
+
+
+ ### Step 2: Create some sample data.
+
+ $currentTime = Get-Date ([datetime]::UtcNow) -Format O
+ $staticData = @"
+ [
+ {
+ "Time": "$currentTime",
+ "Computer": "Computer1",
+ "AdditionalContext": {
+ "InstanceName": "user1",
+ "TimeZone": "Pacific Time",
+ "Level": 4,
+ "CounterName": "AppMetric1",
+ "CounterValue": 15.3
+ }
+ },
+ {
+ "Time": "$currentTime",
+ "Computer": "Computer2",
+ "AdditionalContext": {
+ "InstanceName": "user2",
+ "TimeZone": "Central Time",
+ "Level": 3,
+ "CounterName": "AppMetric1",
+ "CounterValue": 23.5
+ }
+ }
+ ]
+ "@;
+
+
+ ### Step 3: Send the data to the Log Analytics workspace via the DCE.
+
+ $body = $staticData;
+ $headers = @{"Authorization"="Bearer $bearerToken";"Content-Type"="application/json"};
+ $uri = "$dceEndpoint/dataCollectionRules/$dcrImmutableId/streams/$($streamName)?api-version=2021-11-01-preview"
+
+ $uploadResponse = Invoke-RestMethod -Uri $uri -Method "Post" -Body $body -Headers $headers
+ ```
+
+ > [!NOTE]
+ > If you receive an `Unable to find type [System.Web.HttpUtility].` error, run the `Add-Type -AssemblyName System.Web` command from step 1 on its own and then run the script again. Running the command as part of the script won't resolve the issue; it must be executed separately.
+
+3. Execute the script, and you should see an `HTTP - 204` response. The data should arrive in your Log Analytics workspace within a few minutes.
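Once the data has arrived, you can optionally verify ingestion from the same session. This sketch assumes the Az.OperationalInsights module; the workspace ID is the workspace's customer ID (a placeholder here).

```powershell
# Optional check: query the destination table for recently ingested rows.
Invoke-AzOperationalInsightsQuery -WorkspaceId "00000000-0000-0000-0000-000000000000" `
    -Query "MyTable_CL | top 10 by TimeGenerated" |
    Select-Object -ExpandProperty Results
```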
++
+## [Python](#tab/python)
+
+The following sample code uses the [Azure Monitor Ingestion client library for Python](/python/api/overview/azure/monitor-ingestion-readme).
++
+1. Use [pip](https://pypi.org/project/pip/) to install the Azure Monitor Ingestion and Azure Identity client libraries for Python. The Azure Identity library is required for the authentication used in this sample.
+
+ ```bash
+ pip install azure-monitor-ingestion
+ pip install azure-identity
+ ```
+
+2. Create the following environment variables with values for your Azure AD application. These values are used by `DefaultAzureCredential` in the Azure Identity library.
+
+ - AZURE_TENANT_ID
+ - AZURE_CLIENT_ID
+ - AZURE_CLIENT_SECRET
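
   One way to set these for the current session, sketched in PowerShell with placeholder values:

   ```powershell
   # Placeholder values for the service principal used by DefaultAzureCredential
   $env:AZURE_TENANT_ID = "00000000-0000-0000-0000-000000000000"
   $env:AZURE_CLIENT_ID = "00000000-0000-0000-0000-000000000000"
   $env:AZURE_CLIENT_SECRET = "your-app-secret"
   ```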
+
+3. Replace the variables in the following sample code with values from your DCE and DCR. You might also want to replace the sample data with your own.
++
+ ```python
+ # Import required modules
+ from azure.identity import DefaultAzureCredential
+ from azure.monitor.ingestion import LogsIngestionClient
+ from azure.core.exceptions import HttpResponseError
+
+ # information needed to send data to the DCR endpoint
+ dce_endpoint = "https://logs-ingestion-rzmk.eastus2-1.ingest.monitor.azure.com" # ingestion endpoint of the Data Collection Endpoint object
+ dcr_immutableid = "dcr-00000000000000000000000000000000" # immutableId property of the Data Collection Rule
+ stream_name = "Custom-MyTableRawData" # name of the stream in the DCR that represents the destination table
+
+ credential = DefaultAzureCredential()
+ client = LogsIngestionClient(endpoint=dce_endpoint, credential=credential, logging_enable=True)
+
+ body = [
+ {
+ "Time": "2023-03-12T15:04:48.423211Z",
+ "Computer": "Computer1",
+ "AdditionalContext": {
+ "InstanceName": "user1",
+ "TimeZone": "Pacific Time",
+ "Level": 4,
+ "CounterName": "AppMetric2",
+ "CounterValue": 35.3
+ }
+ },
+ {
+ "Time": "2023-03-12T15:04:48.794972Z",
+ "Computer": "Computer2",
+ "AdditionalContext": {
+ "InstanceName": "user2",
+ "TimeZone": "Central Time",
+ "Level": 3,
+ "CounterName": "AppMetric2",
+ "CounterValue": 43.5
+ }
+ }
+ ]
+
+ try:
+ client.upload(rule_id=dcr_immutableid, stream_name=stream_name, logs=body)
+ except HttpResponseError as e:
+ print(f"Upload failed: {e}")
+ ```
+
+4. Execute the code, and the data should arrive in your Log Analytics workspace within a few minutes.
+
+## [JavaScript](#tab/javascript)
+
+The following sample code uses the [Azure Monitor Ingestion client library for JavaScript](/javascript/api/overview/azure/monitor-ingestion-readme).
++
+1. Use [npm](https://www.npmjs.com/) to install the Azure Monitor Ingestion and Azure Identity client libraries for JavaScript. The Azure Identity library is required for the authentication used in this sample.
++
+ ```bash
+ npm install --save @azure/monitor-ingestion
+ npm install --save @azure/identity
+ ```
+
+2. Create the following environment variables with values for your Azure AD application. These values are used by `DefaultAzureCredential` in the Azure Identity library.
+
+ - AZURE_TENANT_ID
+ - AZURE_CLIENT_ID
+ - AZURE_CLIENT_SECRET
+
+3. Replace the variables in the following sample code with values from your DCE and DCR. You might also want to replace the sample data with your own.
+
+ ```javascript
+ const { DefaultAzureCredential } = require("@azure/identity");
+ const { LogsIngestionClient, isAggregateLogsUploadError } = require("@azure/monitor-ingestion");
+
+ require("dotenv").config();
+
+ async function main() {
+ const logsIngestionEndpoint = "https://logs-ingestion-rzmk.eastus2-1.ingest.monitor.azure.com";
+ const ruleId = "dcr-00000000000000000000000000000000";
+ const streamName = "Custom-MyTableRawData";
+ const credential = new DefaultAzureCredential();
+ const client = new LogsIngestionClient(logsIngestionEndpoint, credential);
+ const logs = [
+ {
+ Time: "2021-12-08T23:51:14.1104269Z",
+ Computer: "Computer1",
+ AdditionalContext: {
+ "InstanceName": "user1",
+ "TimeZone": "Pacific Time",
+ "Level": 4,
+ "CounterName": "AppMetric2",
+ "CounterValue": 35.3
+ }
+ },
+ {
+ Time: "2021-12-08T23:51:14.1104269Z",
+ Computer: "Computer2",
+ AdditionalContext: {
+ "InstanceName": "user2",
+ "TimeZone": "Pacific Time",
+ "Level": 4,
+ "CounterName": "AppMetric2",
+ "CounterValue": 43.5
+ }
+ },
+ ];
+ try{
+ await client.upload(ruleId, streamName, logs);
+ }
+ catch(e){
+ let aggregateErrors = isAggregateLogsUploadError(e) ? e.errors : [];
+ if (aggregateErrors.length > 0) {
+ console.log("Some logs have failed to complete ingestion");
+ for (const error of aggregateErrors) {
+ console.log(`Error - ${JSON.stringify(error.cause)}`);
+ console.log(`Log - ${JSON.stringify(error.failedLogs)}`);
+ }
+ } else {
+ console.log(e);
+ }
+ }
+ }
+
+ main().catch((err) => {
+ console.error("The sample encountered an error:", err);
+ process.exit(1);
+ });
+ ```
+
+4. Execute the code, and the data should arrive in your Log Analytics workspace within a few minutes.
+
+## [Java](#tab/java)
+The following sample code uses the [Azure Monitor Ingestion client library for Java](/java/api/overview/azure/monitor-ingestion-readme).
++
+1. Include the Logs ingestion package and the `azure-identity` package from the [Azure Identity library](https://github.com/Azure/azure-sdk-for-java/tree/azure-monitor-ingestion_1.0.1/sdk/identity/azure-identity). The Azure Identity library is required for the authentication used in this sample.
+
+ > [!NOTE]
+ > See the Maven repositories for [Microsoft Azure Client Library For Identity](https://mvnrepository.com/artifact/com.azure/azure-identity) and [Microsoft Azure SDK For Azure Monitor Data Ingestion](https://mvnrepository.com/artifact/com.azure/azure-monitor-ingestion) for the latest versions.
+
+ ```xml
+ <dependency>
+     <groupId>com.azure</groupId>
+     <artifactId>azure-monitor-ingestion</artifactId>
+     <version>{get-latest-version}</version>
+ </dependency>
+ <dependency>
+     <groupId>com.azure</groupId>
+     <artifactId>azure-identity</artifactId>
+     <version>{get-latest-version}</version>
+ </dependency>
+ ```
++
+2. Create the following environment variables with values for your Azure AD application. These values are used by `DefaultAzureCredential` in the Azure Identity library.
+
+ - AZURE_TENANT_ID
+ - AZURE_CLIENT_ID
+ - AZURE_CLIENT_SECRET
+
+3. Replace the variables in the following sample code with values from your DCE and DCR. You may also want to replace the sample data with your own.
+
+ ```java
+ import com.azure.identity.DefaultAzureCredentialBuilder;
+ import com.azure.monitor.ingestion.LogsIngestionClient;
+ import com.azure.monitor.ingestion.LogsIngestionClientBuilder;
+ import com.azure.monitor.ingestion.models.LogsUploadException;
+
+ import java.time.OffsetDateTime;
+ import java.util.Arrays;
+ import java.util.List;
+
+ public class LogsUploadSample {
+ public static void main(String[] args) {
+
+ LogsIngestionClient client = new LogsIngestionClientBuilder()
+ .endpoint("https://logs-ingestion-rzmk.eastus2-1.ingest.monitor.azure.com")
+ .credential(new DefaultAzureCredentialBuilder().build())
+ .buildClient();
+
+ List<Object> dataList = Arrays.asList(
+ new Object() {
+ OffsetDateTime time = OffsetDateTime.now();
+ String computer = "Computer1";
+ Object additionalContext = new Object() {
+ String instanceName = "user4";
+ String timeZone = "Pacific Time";
+ int level = 4;
+ String counterName = "AppMetric1";
+ double counterValue = 15.3;
+ };
+ },
+ new Object() {
+ OffsetDateTime time = OffsetDateTime.now();
+ String computer = "Computer2";
+ Object additionalContext = new Object() {
+ String instanceName = "user2";
+ String timeZone = "Central Time";
+ int level = 3;
+ String counterName = "AppMetric2";
+ double counterValue = 43.5;
+ };
+ });
+
+ try {
+ client.upload("dcr-00000000000000000000000000000000", "Custom-MyTableRawData", dataList);
+ System.out.println("Logs uploaded successfully");
+ } catch (LogsUploadException exception) {
+ System.out.println("Failed to upload logs ");
+ exception.getLogsUploadErrors()
+ .forEach(httpError -> System.out.println(httpError.getMessage()));
+ }
+ }
+ }
+ ```
+
+4. Execute the code, and the data should arrive in your Log Analytics workspace within a few minutes.
++
+## [.NET](#tab/net)
+
+The following script uses the [Azure Monitor Ingestion client library for .NET](/dotnet/api/overview/azure/Monitor.Ingestion-readme).
+
+1. Install the Azure Monitor Ingestion client library and the Azure Identity library. The Azure Identity library is required for the authentication used in this sample.
+
+ ```dotnetcli
+ dotnet add package Azure.Identity
+ dotnet add package Azure.Monitor.Ingestion
+ ```
+
+2. Create the following environment variables with values for your Azure AD application. These values are used by `DefaultAzureCredential` in the Azure Identity library.
+
+ - AZURE_TENANT_ID
+ - AZURE_CLIENT_ID
+ - AZURE_CLIENT_SECRET
+
+3. Replace the variables in the following sample code with values from your DCE and DCR. You may also want to replace the sample data with your own.
+
+ ```csharp
+ using Azure;
+ using Azure.Core;
+ using Azure.Identity;
+ using Azure.Monitor.Ingestion;
+
+ // Initialize variables
+ var endpoint = new Uri("https://logs-ingestion-rzmk.eastus2-1.ingest.monitor.azure.com");
+ var ruleId = "dcr-00000000000000000000000000000000";
+ var streamName = "Custom-MyTableRawData";
+
+ // Create credential and client
+ var credential = new DefaultAzureCredential();
+ LogsIngestionClient client = new(endpoint, credential);
+
+ DateTimeOffset currentTime = DateTimeOffset.UtcNow;
+
+ // Use BinaryData to serialize instances of an anonymous type into JSON
+ BinaryData data = BinaryData.FromObjectAsJson(
+     new[] {
+         new
+         {
+             Time = currentTime,
+             Computer = "Computer1",
+             AdditionalContext = new
+             {
+                 InstanceName = "user1",
+                 TimeZone = "Pacific Time",
+                 Level = 4,
+                 CounterName = "AppMetric1",
+                 CounterValue = 15.3
+             }
+         },
+         new
+         {
+             Time = currentTime,
+             Computer = "Computer2",
+             AdditionalContext = new
+             {
+                 InstanceName = "user2",
+                 TimeZone = "Central Time",
+                 Level = 3,
+                 CounterName = "AppMetric1",
+                 CounterValue = 23.5
+             }
+         },
+     });
+
+ // Upload logs
+ try
+ {
+     Response response = client.Upload(ruleId, streamName, RequestContent.Create(data));
+ }
+ catch (Exception ex)
+ {
+     Console.WriteLine("Upload failed with Exception " + ex.Message);
+ }
+
+ // Logs can also be uploaded in a List
+ var entries = new List<Object>();
+ for (int i = 0; i < 10; i++)
+ {
+     entries.Add(
+         new {
+             Time = currentTime,
+             Computer = "Computer" + i.ToString(),
+             AdditionalContext = i
+         }
+     );
+ }
+
+ // Make the request
+ LogsUploadOptions options = new LogsUploadOptions();
+ bool isTriggered = false;
+ options.UploadFailed += Options_UploadFailed;
+ await client.UploadAsync(ruleId, streamName, entries, options).ConfigureAwait(false);
+
+ Task Options_UploadFailed(LogsUploadFailedEventArgs e)
+ {
+     isTriggered = true;
+     Console.WriteLine(e.Exception);
+     foreach (var log in e.FailedLogs)
+     {
+         Console.WriteLine(log);
+     }
+     return Task.CompletedTask;
+ }
+ ```
+
+4. Execute the code, and the data should arrive in your Log Analytics workspace within a few minutes.
+++++
+## Troubleshooting
+This section describes different error conditions you might receive and how to correct them.
+
+### Script returns error code 403
+Ensure that you have the correct permissions for your application to the DCR. You might also need to wait up to 30 minutes for permissions to propagate.
+
+### Script returns error code 413 or warning of TimeoutExpired with the message ReadyBody_ClientConnectionAbort in the response
+The message is too large. The maximum message size is currently 1 MB per call.
+
+### Script returns error code 429
+API limits have been exceeded. The limits are currently set to 500 MB of data per minute for both compressed and uncompressed data and 300,000 requests per minute. Retry after the duration listed in the `Retry-After` header in the response.
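For the PowerShell sample, a minimal retry sketch that honors the header might look like the following, reusing the `$uri`, `$body`, and `$headers` variables from the sample above and assuming PowerShell 7, where the caught response is an `HttpResponseMessage`.

```powershell
# Sketch: retry once after a 429, waiting for the Retry-After duration (default 60 seconds).
try {
    $uploadResponse = Invoke-RestMethod -Uri $uri -Method "Post" -Body $body -Headers $headers
}
catch {
    $response = $_.Exception.Response
    if ($response -and [int]$response.StatusCode -eq 429) {
        $delay = 60
        if ($response.Headers.RetryAfter -and $response.Headers.RetryAfter.Delta) {
            $delay = [int]$response.Headers.RetryAfter.Delta.TotalSeconds
        }
        Start-Sleep -Seconds $delay
        $uploadResponse = Invoke-RestMethod -Uri $uri -Method "Post" -Body $body -Headers $headers
    }
    else { throw }
}
```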
+
+### Script returns error code 503
+Ensure that you have the correct permissions for your application to the DCR. You might also need to wait up to 30 minutes for permissions to propagate.
+
+### You don't receive an error, but data doesn't appear in the workspace
+The data might take some time to be ingested, especially if this is the first time data is being sent to a particular table. It shouldn't take longer than 15 minutes.
+
+### IntelliSense in Log Analytics doesn't recognize the new table
+The cache that drives IntelliSense might take up to 24 hours to update.
+
+## Next steps
+
+- [Learn more about data collection rules](../essentials/data-collection-rule-overview.md)
+- [Learn more about writing transformation queries](../essentials/data-collection-transformations.md)
+
azure-monitor Tutorial Logs Ingestion Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-logs-ingestion-portal.md
Title: 'Tutorial: Send data to Azure Monitor Logs by using a REST API (Azure portal)'
-description: Tutorial on how to send data to a Log Analytics workspace in Azure Monitor by using a REST API (Azure portal version).
+ Title: 'Tutorial: Send data to Azure Monitor Logs with Logs ingestion API (Azure portal)'
+description: Tutorial on how to send data to a Log Analytics workspace in Azure Monitor by using the Logs ingestion API, with the supporting components configured by using the Azure portal.
+ Last updated : 03/20/2023 - Previously updated : 07/15/2022+
-# Tutorial: Send data to Azure Monitor Logs by using a REST API (Azure portal)
-The [Logs Ingestion API](logs-ingestion-api-overview.md) in Azure Monitor allows you to send external data to a Log Analytics workspace with a REST API. This tutorial uses the Azure portal to walk through configuration of a new table and a sample application to send log data to Azure Monitor.
+# Tutorial: Send data to Azure Monitor Logs with Logs ingestion API (Azure portal)
+The [Logs Ingestion API](logs-ingestion-api-overview.md) in Azure Monitor allows you to send external data to a Log Analytics workspace with a REST API. This tutorial uses the Azure portal to walk through configuration of a new table and a sample application to send log data to Azure Monitor. The sample application collects entries from a text file and sends them to the Logs ingestion API.
> [!NOTE]
-> This tutorial uses the Azure portal. For a similar tutorial that uses Azure Resource Manager templates, see [Tutorial: Send data to Azure Monitor Logs by using a REST API (Resource Manager templates)](tutorial-logs-ingestion-api.md).
+> This tutorial uses the Azure portal to configure the components to support the Logs ingestion API. See [Tutorial: Send data to Azure Monitor using Logs ingestion API (Resource Manager templates)](tutorial-logs-ingestion-api.md) for a similar tutorial that uses Azure Resource Manager templates to configure these components and that has sample code for client libraries for [.NET](/dotnet/api/overview/azure/Monitor.Ingestion-readme), [Java](/java/api/overview/azure/monitor-ingestion-readme), [JavaScript](/javascript/api/overview/azure/monitor-ingestion-readme), and [Python](/python/api/overview/azure/monitor-ingestion-readme).
-In this tutorial, you learn to:
-> [!div class="checklist"]
-> * Create a custom table in a Log Analytics workspace.
-> * Create a data collection endpoint (DCE) to receive data over HTTP.
-> * Create a data collection rule (DCR) that transforms incoming data to match the schema of the target table.
-> * Create a sample application to send custom data to Azure Monitor.
+The steps required to configure the Logs ingestion API are as follows:
+
+1. [Create an Azure AD application](#create-azure-ad-application) to authenticate against the API.
+2. [Create a data collection endpoint (DCE)](#create-data-collection-endpoint) to receive data.
+3. [Create a custom table in a Log Analytics workspace](#create-new-table-in-log-analytics-workspace). This is the table you'll be sending data to. As part of this process, you'll create a data collection rule (DCR) to direct the data to the target table.
+4. [Give the AD application access to the DCR](#assign-permissions-to-the-dcr).
+5. [Use sample code to send data using the Logs ingestion API](#send-sample-data).
-> [!NOTE]
-> This tutorial uses PowerShell to call the Logs ingestion API. See [.NET](/dotnet/api/overview/azure/Monitor.Ingestion-readme), [Java](/java/api/overview/azure/monitor-ingestion-readme), [JavaScript](/javascript/api/overview/azure/monitor-ingestion-readme), or [Python](/python/api/overview/azure/monitor-ingestion-readme) for guidance on using the client libraries for other languages.
## Prerequisites To complete this tutorial, you need:
In this tutorial, you'll use a PowerShell script to send sample Apache access lo
After the configuration is finished, you'll send sample data from the command line, and then inspect the results in Log Analytics.
-## Configure the application
+## Create Azure AD application
Start by registering an Azure Active Directory application to authenticate against the API. Any Resource Manager authentication scheme is supported, but this tutorial will follow the [Client Credential Grant Flow scheme](../../active-directory/develop/v2-oauth2-client-creds-grant-flow.md). 1. On the **Azure Active Directory** menu in the Azure portal, select **App registrations** > **New registration**.
Start by registering an Azure Active Directory application to authenticate again
:::image type="content" source="media/tutorial-logs-ingestion-portal/new-app-secret-value.png" lightbox="media/tutorial-logs-ingestion-portal/new-app-secret-value.png" alt-text="Screenshot that shows the secret value for the new app.":::
-## Create a data collection endpoint
-A [data collection endpoint](../essentials/data-collection-endpoint-overview.md) is required to accept the data from the script. After you configure the DCE and link it to a DCR, you can send data over HTTP from your application. The DCE must be located in the same region as the VM being associated, but it does not need to be in the same region as the Log Analytics workspace where the data will be sent or the data collection rule being used.
+## Create data collection endpoint
+A [data collection endpoint](../essentials/data-collection-endpoint-overview.md) is required to accept the data from the script. After you configure the DCE and link it to a DCR, you can send data over HTTP from your application. The DCE does not need to be in the same region as the Log Analytics workspace where the data will be sent or the data collection rule being used.
1. To create a new DCE, go to the **Monitor** menu in the Azure portal. Select **Data Collection Endpoints** and then select **Create**.
A [data collection endpoint](../essentials/data-collection-endpoint-overview.md)
:::image type="content" source="media/tutorial-logs-ingestion-portal/data-collection-endpoint-uri.png" lightbox="media/tutorial-logs-ingestion-portal/data-collection-endpoint-uri.png" alt-text="Screenshot that shows DCE URI.":::
-## Generate sample data
-> [!IMPORTANT]
-> You must be using PowerShell version 7.2 or later.
-
-The following PowerShell script generates sample data to configure the custom table and sends sample data to the logs ingestion API to test the configuration.
-
-1. Run the following PowerShell command, which adds a required assembly for the script:
-
- ```powershell
- Add-Type -AssemblyName System.Web
- ```
-
-1. Update the values of `$tenantId`, `$appId`, and `$appSecret` with the values you noted for **Directory (tenant) ID**, **Application (client) ID**, and secret **Value**. Then save it with the file name *LogGenerator.ps1*.
-
- ``` PowerShell
- param ([Parameter(Mandatory=$true)] $Log, $Type="file", $Output, $DcrImmutableId, $DceURI, $Table)
- ################
- ##### Usage
- ################
- # LogGenerator.ps1
- # -Log <String> - Log file to be forwarded
- # [-Type "file|API"] - Whether the script should generate sample JSON file or send data via
- # API call. Data will be written to a file by default.
- # [-Output <String>] - Path to resulting JSON sample
- # [-DcrImmutableId <string>] - DCR immutable ID
- # [-DceURI] - Data collection endpoint URI
- # [-Table] - The name of the custom log table, including "_CL" suffix
--
- ##### >>>> PUT YOUR VALUES HERE <<<<<
- # Information needed to authenticate to Azure Active Directory and obtain a bearer token
- $tenantId = "<put tenant ID here>"; #the tenant ID in which the Data Collection Endpoint resides
- $appId = "<put application ID here>"; #the app ID created and granted permissions
- $appSecret = "<put secret value here>"; #the secret created for the above app - never store your secrets in the source code
- ##### >>>> END <<<<<
--
- $file_data = Get-Content $Log
- if ("file" -eq $Type) {
- ############
- ## Convert plain log to JSON format and output to .json file
- ############
- # If not provided, get output file name
- if ($null -eq $Output) {
- $Output = Read-Host "Enter output file name"
- };
-
- # Form file payload
- $payload = @();
- $records_to_generate = [math]::min($file_data.count, 500)
- for ($i=0; $i -lt $records_to_generate; $i++) {
- $log_entry = @{
- # Define the structure of log entry, as it will be sent
- Time = Get-Date ([datetime]::UtcNow) -Format O
- Application = "LogGenerator"
- RawData = $file_data[$i]
- }
- $payload += $log_entry
- }
- # Write resulting payload to file
- New-Item -Path $Output -ItemType "file" -Value ($payload | ConvertTo-Json) -Force
-
- } else {
- ############
- ## Send the content to the data collection endpoint
- ############
- if ($null -eq $DcrImmutableId) {
- $DcrImmutableId = Read-Host "Enter DCR Immutable ID"
- };
-
- if ($null -eq $DceURI) {
- $DceURI = Read-Host "Enter data collection endpoint URI"
- }
- if ($null -eq $Table) {
- $Table = Read-Host "Enter the name of custom log table"
- }
-
- ## Obtain a bearer token used to authenticate against the data collection endpoint
- $scope = [System.Web.HttpUtility]::UrlEncode("https://monitor.azure.com//.default")
- $body = "client_id=$appId&scope=$scope&client_secret=$appSecret&grant_type=client_credentials";
- $headers = @{"Content-Type" = "application/x-www-form-urlencoded" };
- $uri = "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/token"
- $bearerToken = (Invoke-RestMethod -Uri $uri -Method "Post" -Body $body -Headers $headers).access_token
-
- ## Generate and send some data
- foreach ($line in $file_data) {
- # We are going to send log entries one by one with a small delay
- $log_entry = @{
- # Define the structure of log entry, as it will be sent
- Time = Get-Date ([datetime]::UtcNow) -Format O
- Application = "LogGenerator"
- RawData = $line
- }
- # Sending the data to Log Analytics via the DCR!
- $body = $log_entry | ConvertTo-Json -AsArray;
- $headers = @{"Authorization" = "Bearer $bearerToken"; "Content-Type" = "application/json" };
- $uri = "$DceURI/dataCollectionRules/$DcrImmutableId/streams/Custom-$Table"+"?api-version=2021-11-01-preview";
- $uploadResponse = Invoke-RestMethod -Uri $uri -Method "Post" -Body $body -Headers $headers;
-
- # Let's see how the response looks
- Write-Output $uploadResponse
- Write-Output ""
-
- # Pausing for 1 second before processing the next entry
- Start-Sleep -Seconds 1
- }
- }
- ```
-
-1. Copy the sample log data from [sample data](#sample-data) or copy your own Apache log data into a file called `sample_access.log`.
-
-1. To read the data in the file and create a JSON file called `data_sample.json` that you can send to the logs ingestion API, run:
-
- ```PowerShell
- .\LogGenerator.ps1 -Log "sample_access.log" -Type "file" -Output "data_sample.json"
- ```
-
-## Add a custom log table
+## Create new table in Log Analytics workspace
Before you can send data to the workspace, you need to create the custom table where the data will be sent. 1. Go to the **Log Analytics workspaces** menu in the Azure portal and select **Tables**. The tables in the workspace will appear. Select **Create** > **New custom log (DCR based)**.
The final step is to give the application permission to use the DCR. Any applica
:::image type="content" source="media/tutorial-logs-ingestion-portal/add-role-assignment-save.png" lightbox="media/tutorial-logs-ingestion-portal/add-role-assignment-save.png" alt-text="Screenshot that shows saving the DCR role assignment.":::
+## Generate sample data
+
+The following PowerShell script generates sample data to configure the custom table and sends sample data to the logs ingestion API to test the configuration.
+
+1. Run the following PowerShell command, which adds a required assembly for the script:
+
+ ```powershell
+ Add-Type -AssemblyName System.Web
+ ```
+
+1. Update the values of `$tenantId`, `$appId`, and `$appSecret` with the values you noted for **Directory (tenant) ID**, **Application (client) ID**, and secret **Value**. Then save it with the file name *LogGenerator.ps1*.
+
+ ``` PowerShell
+ param ([Parameter(Mandatory=$true)] $Log, $Type="file", $Output, $DcrImmutableId, $DceURI, $Table)
+ ################
+ ##### Usage
+ ################
+ # LogGenerator.ps1
+ # -Log <String> - Log file to be forwarded
+ # [-Type "file|API"] - Whether the script should generate sample JSON file or send data via
+ # API call. Data will be written to a file by default.
+ # [-Output <String>] - Path to resulting JSON sample
+ # [-DcrImmutableId <string>] - DCR immutable ID
+ # [-DceURI] - Data collection endpoint URI
+ # [-Table] - The name of the custom log table, including "_CL" suffix
++
+ ##### >>>> PUT YOUR VALUES HERE <<<<<
+ # Information needed to authenticate to Azure Active Directory and obtain a bearer token
+ $tenantId = "<put tenant ID here>"; #the tenant ID in which the Data Collection Endpoint resides
+ $appId = "<put application ID here>"; #the app ID created and granted permissions
+ $appSecret = "<put secret value here>"; #the secret created for the above app - never store your secrets in the source code
+ ##### >>>> END <<<<<
++
+ $file_data = Get-Content $Log
+ if ("file" -eq $Type) {
+ ############
+ ## Convert plain log to JSON format and output to .json file
+ ############
+ # If not provided, get output file name
+ if ($null -eq $Output) {
+ $Output = Read-Host "Enter output file name"
+ };
+
+ # Form file payload
+ $payload = @();
+ $records_to_generate = [math]::min($file_data.count, 500)
+ for ($i=0; $i -lt $records_to_generate; $i++) {
+ $log_entry = @{
+ # Define the structure of log entry, as it will be sent
+ Time = Get-Date ([datetime]::UtcNow) -Format O
+ Application = "LogGenerator"
+ RawData = $file_data[$i]
+ }
+ $payload += $log_entry
+ }
+ # Write resulting payload to file
+ New-Item -Path $Output -ItemType "file" -Value ($payload | ConvertTo-Json) -Force
+
+ } else {
+ ############
+ ## Send the content to the data collection endpoint
+ ############
+ if ($null -eq $DcrImmutableId) {
+ $DcrImmutableId = Read-Host "Enter DCR Immutable ID"
+ };
+
+ if ($null -eq $DceURI) {
+ $DceURI = Read-Host "Enter data collection endpoint URI"
+ }
+
+ if ($null -eq $Table) {
+ $Table = Read-Host "Enter the name of custom log table"
+ }
+
+ ## Obtain a bearer token used to authenticate against the data collection endpoint
+ $scope = [System.Web.HttpUtility]::UrlEncode("https://monitor.azure.com//.default")
+ $body = "client_id=$appId&scope=$scope&client_secret=$appSecret&grant_type=client_credentials";
+ $headers = @{"Content-Type" = "application/x-www-form-urlencoded" };
+ $uri = "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/token"
+ $bearerToken = (Invoke-RestMethod -Uri $uri -Method "Post" -Body $body -Headers $headers).access_token
+
+ ## Generate and send some data
+ foreach ($line in $file_data) {
+ # We are going to send log entries one by one with a small delay
+ $log_entry = @{
+ # Define the structure of log entry, as it will be sent
+ Time = Get-Date ([datetime]::UtcNow) -Format O
+ Application = "LogGenerator"
+ RawData = $line
+ }
+ # Sending the data to Log Analytics via the DCR!
+ $body = $log_entry | ConvertTo-Json -AsArray;
+ $headers = @{"Authorization" = "Bearer $bearerToken"; "Content-Type" = "application/json" };
+ $uri = "$DceURI/dataCollectionRules/$DcrImmutableId/streams/Custom-$Table"+"?api-version=2021-11-01-preview";
+ $uploadResponse = Invoke-RestMethod -Uri $uri -Method "Post" -Body $body -Headers $headers;
+
+ # Let's see how the response looks
+ Write-Output $uploadResponse
+ Write-Output ""
+
+ # Pausing for 1 second before processing the next entry
+ Start-Sleep -Seconds 1
+ }
+ }
+ ```
+
+1. Copy the sample log data from [sample data](#sample-data) or copy your own Apache log data into a file called `sample_access.log`.
+
+1. To read the data in the file and create a JSON file called `data_sample.json` that you can send to the logs ingestion API, run:
+
+ ```PowerShell
+ .\LogGenerator.ps1 -Log "sample_access.log" -Type "file" -Output "data_sample.json"
+ ```
++ ## Send sample data Allow at least 30 minutes for the configuration to take effect. You might also experience increased latency for the first few entries, but this activity should normalize.
Allow at least 30 minutes for the configuration to take effect. You might also e
1. From Log Analytics, query your newly created table to verify that data arrived and that it's transformed properly. ## Troubleshooting
-This section describes different error conditions you might receive and how to correct them.
-
-### Script returns error code 403
-Ensure that you have the correct permissions for your application to the DCR. You might also need to wait up to 30 minutes for permissions to propagate.
-
-### Script returns error code 413 or warning of TimeoutExpired with the message ReadyBody_ClientConnectionAbort in the response
-The message is too large. The maximum message size is currently 1 MB per call.
-
-### Script returns error code 429
-API limits have been exceeded. The limits are currently set to 500 MB of data per minute for both compressed and uncompressed data and 300,000 requests per minute. Retry after the duration listed in the `Retry-After` header in the response.
-
-### Script returns error code 503
-Ensure that you have the correct permissions for your application to the DCR. You might also need to wait up to 30 minutes for permissions to propagate.
-
-### You don't receive an error, but data doesn't appear in the workspace
-The data might take some time to be ingested, especially if this is the first time data is being sent to a particular table. It shouldn't take longer than 15 minutes.
-
-### IntelliSense in Log Analytics doesn't recognize the new table
-The cache that drives IntelliSense might take up to 24 hours to update.
+See the [Troubleshooting](tutorial-logs-ingestion-code.md#troubleshooting) section of the sample code article if your code doesn't work as expected.
## Sample data You can use the following sample data for the tutorial. Alternatively, you can use your own data if you have your own Apache access logs.
azure-monitor Profiler Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-azure-functions.md
In this article, you'll use the Azure portal to:
||-|
|APPINSIGHTS_PROFILERFEATURE_VERSION | 1.0.0 |
|DiagnosticServices_EXTENSION_VERSION | ~3 |
+|APPINSIGHTS_INSTRUMENTATIONKEY | Unique value from your App Insights resource. |
## Add app settings to your Azure Functions app
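If you manage the function app from PowerShell rather than the portal, the same settings can be applied in one call. A sketch assuming the Az.Functions module, with placeholder app and resource group names:

```powershell
# Sketch: apply the Profiler app settings to a function app.
Update-AzFunctionAppSetting -Name "my-function-app" -ResourceGroupName "my-resource-group" -AppSetting @{
    "APPINSIGHTS_PROFILERFEATURE_VERSION"  = "1.0.0"
    "DiagnosticServices_EXTENSION_VERSION" = "~3"
    "APPINSIGHTS_INSTRUMENTATIONKEY"       = "<instrumentation key from your Application Insights resource>"
}
```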
azure-netapp-files Large Volumes Requirements Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/large-volumes-requirements-considerations.md
na Previously updated : 03/15/2023 Last updated : 03/27/2023 # Requirements and considerations for large volumes (preview)
To enroll in the preview for large volumes, use the [large volumes preview sign-
* You can't create a large volume with application volume groups. * Large volumes aren't currently supported with cross-zone replication. * The SDK for large volumes isn't currently available.
-* Large volumes aren't currently supported with cool access tier.
+* Currently, large volumes are not suited for database (HANA, Oracle, SQL Server, etc.) data and log volumes. For database workloads requiring more than a single volume's throughput limit, consider deploying multiple regular volumes.
* Throughput ceilings for the three performance tiers (Standard, Premium, and Ultra) of large volumes are based on the existing 100-TiB maximum capacity targets. You're able to grow to 500 TiB with the throughput ceiling per the following table: | Capacity tier | Volume size (TiB) | Throughput (MiB/s) |
azure-netapp-files Performance Linux Mount Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/performance-linux-mount-options.md
For example, [Deploy a SAP HANA scale-out system with standby node on Azure VMs
``` sudo vi /etc/fstab # Add the following entries
-10.23.1.5:/HN1-data-mnt00001 /hana/data/HN1/mnt00001 nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0
-10.23.1.6:/HN1-data-mnt00002 /hana/data/HN1/mnt00002 nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0
-10.23.1.4:/HN1-log-mnt00001 /hana/log/HN1/mnt00001 nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0
-10.23.1.6:/HN1-log-mnt00002 /hana/log/HN1/mnt00002 nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0
-10.23.1.4:/HN1-shared/shared /hana/shared nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0
+10.23.1.5:/HN1-data-mnt00001 /hana/data/HN1/mnt00001 nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,noatime,_netdev,sec=sys 0 0
+10.23.1.6:/HN1-data-mnt00002 /hana/data/HN1/mnt00002 nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,noatime,_netdev,sec=sys 0 0
+10.23.1.4:/HN1-log-mnt00001 /hana/log/HN1/mnt00001 nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,noatime,_netdev,sec=sys 0 0
+10.23.1.6:/HN1-log-mnt00002 /hana/log/HN1/mnt00002 nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,noatime,_netdev,sec=sys 0 0
+10.23.1.4:/HN1-shared/shared /hana/shared nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,noatime,_netdev,sec=sys 0 0
```

For example, SAS Viya recommends 256-KiB read and write sizes, and [SAS GRID](https://communities.sas.com/t5/Administration-and-Deployment/Azure-NetApp-Files-A-shared-file-system-to-use-with-SAS-Grid-on/m-p/606973/highlight/true#M17740) limits the `r/wsize` to 64 KiB while augmenting read performance with increased read-ahead for the NFS mounts. See [NFS read-ahead best practices for Azure NetApp Files](performance-linux-nfs-read-ahead.md) for details.
azure-resource-manager Concepts Custom Role Definition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/concepts-custom-role-definition.md
- Title: Overview of custom role definitions
-description: Describes the concept of creating custom role definitions for managed applications.
--- Previously updated : 09/16/2019--
-# Custom role definition artifact in Azure Managed Applications
-
-Custom role definition is an optional artifact in managed applications. It's used to determine what permissions the managed application needs to perform its functions.
-
-This article provides an overview of the custom role definition artifact and its capabilities.
-
-## Custom role definition artifact
-
-You need to name the custom role definition artifact customRoleDefinition.json. Place it at the same level as createUiDefinition.json and mainTemplate.json in the .zip package that creates a managed application definition. To learn how to create the .zip package and publish a managed application definition, see [Publish a managed application definition.](publish-service-catalog-app.md)
-
-## Custom role definition schema
-
-The customRoleDefinition.json file has a top-level `roles` property that's an array of roles. These roles are the permissions that the managed application needs to function. Currently, only built-in roles are allowed, but you can specify multiple roles. A role can be referenced by the ID of the role definition or by the role name.
-
-Sample JSON for custom role definition:
-
-```json
-{
- "contentVersion": "0.0.0.1",
- "roles": [
- {
- "properties": {
- "roleName": "Contributor"
- }
- },
- {
- "id": "acdd72a7-3385-48ef-bd42-f606fba81ae7"
- },
- {
- "id": "/providers/Microsoft.Authorization/roledefinitions/9980e02c-c2be-4d73-94e8-173b1dc7cf3c"
- }
- ]
-}
-```
-
-## Roles
-
-A role is composed of either a `$.properties.roleName` or an `id`:
-
-```json
-{
- "id": null,
- "properties": {
- "roleName": "Contributor"
- }
-}
-```
-
-> [!NOTE]
-> You can use either the `id` or `roleName` field. Only one is required. These fields are used to look up the role definition that should be applied. If both are supplied, the `id` field will be used.
-
-|Property|Required?|Description|
-||||
-|id|Yes|The ID of the built-in role. You can use the full ID or just the GUID.|
-|roleName|Yes|The name of the built-in role.|
azure-web-pubsub Quickstart Use Client Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/quickstart-use-client-sdk.md
Title: Quickstart - Pub-sub using Azure Web PubSub client SDK
+ Title: Quickstart - Create a client using the Azure Web PubSub client SDK (preview)
description: Quickstart showing how to use the Azure Web PubSub client SDK Previously updated : 02/7/2023 Last updated : 03/15/2023 ms.devlang: azurecli
-# Quickstart: Pub-sub using Web PubSub client SDK
+# Quickstart: Create a client using the Azure Web PubSub client SDK (preview)
+
+Get started with the Azure Web PubSub client SDK for .NET or JavaScript to create a Web PubSub client
+that:
+
+* connects to a Web PubSub service instance.
+* subscribes to a Web PubSub group.
+* publishes a message to the Web PubSub group.
+
+[API reference documentation](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/web-pubsub/web-pubsub-client) | [Library source code](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/web-pubsub/web-pubsub-client/src) | [Package (JavaScript npm)](https://www.npmjs.com/package/@azure/web-pubsub-client) | [Samples](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/web-pubsub/web-pubsub-client/samples-dev/helloworld.ts)
+
+[API reference documentation](https://github.com/Azure/azure-sdk-for-net#azure-sdk-for-net) | [Library source code](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/webpubsub/Azure.Messaging.WebPubSub.Client/src) | [Package (NuGet)](https://www.nuget.org/packages/Azure.Messaging.WebPubSub.Client) | [Samples](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/webpubsub/Azure.Messaging.WebPubSub.Client/samples)
-This quickstart guide demonstrates how to construct a project using the Web PubSub client SDK, connect to the Web PubSub, subscribe to messages from groups and publish a message to the group.
> [!NOTE]
-> The client SDK is still in preview version. The interface may change in later versions
+> The client SDK is still in preview. The interface may change in later versions.
## Prerequisites -- A Web PubSub instance. If you haven't created one, you can follow the guidance: [Create a Web PubSub instance from Azure portal](./howto-develop-create-instance.md)
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
- A file editor such as Visual Studio Code.
-Install the dependencies for the language you're using:
+## Setting up
+
+### Create an Azure Web PubSub service instance
+
+1. In the Azure portal **Home** page, select **Create a resource**.
+1. In the **Search the Marketplace** box, enter *Web PubSub*.
+1. Select **Web PubSub** from the results.
+1. Select **Create**.
+1. Create a new resource group:
+ 1. Select **Create new**.
+ 1. Enter the name and select **OK**.
+1. Enter a **Resource Name** for the service instance.
+1. Select **Pricing tier**. You can choose **Free** for testing.
+1. Select **Create**, then **Create** again to confirm the new service instance.
+1. Once deployment is complete, select **Go to resource**.
+
+### Generate the client URL
+
+A client uses a Client Access URL to connect and authenticate with the service. The URL follows this pattern: `wss://<service_name>.webpubsub.azure.com/client/hubs/<hub_name>?access_token=<token>`.
+
+To give the client permission to send messages to and join a specific group, you must generate a Client Access URL with the **Send To Groups** and **Join/Leave Groups** permissions.
+
+1. In the Azure portal, go to your Web PubSub service resource page.
+1. Select **Keys** from the menu.
+1. In the **Client URL Generator** section:
+ 1. Select **Send To Groups**.
+ 1. Select **Allow Sending To Specific Groups**.
+ 1. Enter *group1* in the **Group Name** field and select **Add**.
+ 1. Select **Join/Leave Groups**.
+ 1. Select **Allow Joining/Leaving Specific Groups**.
+ 1. Enter *group1* in the **Group Name** field and select **Add**.
+ 1. Copy and save the **Client Access URL** for use later in this article.
++
+### Install the programming language
+
+This quickstart uses the Azure Web PubSub client SDK for JavaScript or C#. Open a terminal window and install the dependencies for the language you're using.
# [JavaScript](#tab/javascript)
Install both the .NET Core SDK and dotnet runtime.
-## Add the Web PubSub client SDK
+### Install the package
+
+Install the Azure Web PubSub client SDK for the language you're using.
# [JavaScript](#tab/javascript)
-The SDK is available as an [npm module](https://www.npmjs.com/package/@azure/web-pubsub-client)
+The SDK is available as an [npm module](https://www.npmjs.com/package/@azure/web-pubsub-client).
+
+Open a terminal window and install the Web PubSub client SDK using the following command.
```bash
npm install @azure/web-pubsub-client
```
+ # [C#](#tab/csharp)
-The SDK is available as an [NuGet packet](https://www.nuget.org/packages/Azure.Messaging.WebPubSub.Client)
+Open a terminal window to create your project and install the Web PubSub client SDK.
```bash
+# create project directory
+mkdir webpubsub-client
+
+# change to the project directory
+cd webpubsub-client
+ # Add a new .NET project
dotnet new console
dotnet add package Azure.Messaging.WebPubSub.Client --prerelease
```
+Note that the SDK is available as a [NuGet package](https://www.nuget.org/packages/Azure.Messaging.WebPubSub.Client).
+
-## Connect to Web PubSub
+## Code examples
-A client uses a Client Access URL to connect and authenticate with the service, which follows a pattern of `wss://<service_name>.webpubsub.azure.com/client/hubs/<hub_name>?access_token=<token>`. A client can have a few ways to obtain the Client Access URL. For this quick start, you can copy and paste one from Azure portal shown as the following diagram.
-![The diagram shows how to get client access url.](./media/howto-websocket-connect/generate-client-url.png)
+### Create and connect to the Web PubSub service
-As shown in the diagram above, the client has the permissions to send messages to and join a specific group named `group1`.
+This code example creates a Web PubSub client that connects to the Web PubSub service instance. A client uses a Client Access URL to connect and authenticate with the service. It's a best practice not to hard-code the Client Access URL. In production, you usually set up an app server that returns this URL on demand.
+For this example, you can use the Client Access URL you generated in the portal.
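In production, the client typically fetches the Client Access URL from your own app server instead of reading a hard-coded value. Here's a minimal JavaScript sketch of that pattern, assuming a hypothetical `/negotiate` endpoint on your app server that returns `{ "url": "<Client Access URL>" }`:

```javascript
const { WebPubSubClient } = require("@azure/web-pubsub-client");

// Hypothetical negotiate endpoint on your own app server.
async function getClientAccessUrl() {
  const res = await fetch("https://example.com/negotiate");
  const { url } = await res.json();
  return url;
}

// The client also accepts a credential object that supplies the URL on demand.
const client = new WebPubSubClient({ getClientAccessUrl });
```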
# [JavaScript](#tab/javascript)
-Add a file with name `index.js` and add following codes:
+In the terminal window, create a new directory for your project and change to that directory.
+
+```bash
+mkdir webpubsub-client
+cd webpubsub-client
+```
+
+Create a file named `index.js` and enter the following code:
```javascript
const { WebPubSubClient } = require("@azure/web-pubsub-client");
-// Instantiates the client object. <client-access-url> is copied from Azure portal mentioned above.
-const client = new WebPubSubClient("<client-access-url>");
+// Instantiate the client object.
+// process.env.WebPubSubClientURL is the Client Access URL from the Azure portal.
+const client = new WebPubSubClient(process.env.WebPubSubClientURL);
```

# [C#](#tab/csharp)
-Edit the `Program.cs` file and add following codes:
+Edit the `Program.cs` file and add the following code:
```csharp
using Azure.Messaging.WebPubSub.Clients;
-// Instantiates the client object. <client-access-uri> is copied from Azure portal mentioned above.
-var client = new WebPubSubClient(new Uri("<client-access-uri>"));
+// Client Access URL from the Azure portal
+var clientURL = Environment.GetEnvironmentVariable("WebPubSubClientURL");
+// Instantiate the client object.
+var client = new WebPubSubClient(new Uri(clientURL));
```
-## Subscribe to a group
+### Subscribe to a group
-To receive message from groups, you need to add a callback to handle messages you receive from the group, and you must join the group before you can receive messages from it. The following code subscribes the client to a group called `group1`.
+To receive messages from a group, you need to subscribe to the group and add a callback to handle the messages you receive. The following code subscribes the client to a group called `group1`.
# [JavaScript](#tab/javascript)
+Add the following code to the `index.js` file:
```javascript
// callback to group messages.
client.on("group-message", (e) => {
client.joinGroup("group1");
# [C#](#tab/csharp)
+Add the following code to the `Program.cs` file:
```csharp
// callback to group messages.
client.GroupMessageReceived += eventArgs =>
await client.StartAsync();
// join a group to subscribe to messages from the group
await client.JoinGroupAsync("group1");
```
+
-## Publish a message to a group
+### Publish a message to a group
-Then you can send messages to the group and as the client has joined the group before, you can receive the message you've sent.
+After your client has subscribed to the group, it can send messages to the group and receive messages from it.
# [JavaScript](#tab/javascript)
+Add the following code to the `index.js` file:
```javascript
client.sendToGroup("group1", "Hello World", "text");
```

# [C#](#tab/csharp)
+Add the following code to the `Program.cs` file:
```csharp
await client.SendToGroupAsync("group1", BinaryData.FromString("Hello World"), WebPubSubDataType.Text);
```
-## Repository and Samples
+## Run the code
+
+Run the client in your terminal. To verify the client is sending and receiving messages, you can open a second terminal and start the client from the same directory. You can see the message you sent from the second client in the first client's terminal window.
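For reference, here's what the assembled `index.js` might look like. It's a minimal sketch combining the snippets above; the logging in the callback body is an illustrative assumption, not part of the original sample:

```javascript
const { WebPubSubClient } = require("@azure/web-pubsub-client");

// Client Access URL from the Azure portal, read from the environment.
const client = new WebPubSubClient(process.env.WebPubSubClientURL);

async function main() {
  // Log every message received from the group.
  client.on("group-message", (e) => {
    console.log(`Received: ${e.message.data}`);
  });

  await client.start();             // connect to the service
  await client.joinGroup("group1"); // subscribe to the group
  await client.sendToGroup("group1", "Hello World", "text");
}

main();
```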
+
+# [JavaScript](#tab/javascript)
+
+To start the client, go to the terminal and run the following commands. Replace `<Client Access URL>` with the Client Access URL you copied from the portal.
+
+```bash
+export WebPubSubClientURL="<Client Access URL>"
+node index.js
+```
+
+# [C#](#tab/csharp)
+
+To start the client, run the following commands in your terminal. Replace `<Client Access URL>` with the Client Access URL you copied from the portal:
+
+```bash
+export WebPubSubClientURL="<Client Access URL>"
+dotnet run
+```
+++
+## Clean up resources
+
+To delete the resources you created in this quickstart, you can delete the resource group you created. Go to the Azure portal, select your resource group, and select **Delete resource group**.
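Equivalently, you can delete the resource group programmatically with the Azure SDK. A sketch, assuming the `@azure/arm-resources` and `@azure/identity` packages; the subscription ID and resource group name are placeholders:

```javascript
const { DefaultAzureCredential } = require("@azure/identity");
const { ResourceManagementClient } = require("@azure/arm-resources");

// Subscription ID and resource group name are placeholders.
const client = new ResourceManagementClient(
  new DefaultAzureCredential(),
  "<subscription-id>"
);

// Deletes the resource group and everything in it.
client.resourceGroups
  .beginDeleteAndWait("<resource-group-name>")
  .then(() => console.log("resource group deleted"));
```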
+
+## Next steps
+
+To learn more about the Web PubSub client SDKs, see the following resources:
# [JavaScript](#tab/javascript)
await client.SendToGroupAsync("group1", BinaryData.FromString("Hello World"), We
[.NET SDK repository on GitHub](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/webpubsub/Azure.Messaging.WebPubSub.Client) [Log streaming sample](https://github.com/Azure/azure-webpubsub/tree/main/samples/csharp/logstream/sdk)---
-## Next steps
-
-This quickstart provides you with a basic idea of how to connect to the Web PubSub with client SDK and how to subscribe to group messages and publish messages to groups.
-
azure-web-pubsub Quickstart Use Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/quickstart-use-sdk.md
Azure Web PubSub helps you manage WebSocket clients. This quickstart shows you h
## Prerequisites - An Azure subscription, if you don't have one, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- a Bash and PowerShell command shell. The Python, Javascript and Java samples require a Bash command shell.
+- A Bash and PowerShell command shell. The Python, JavaScript and Java samples require a Bash command shell.
- A file editor such as VSCode. - Azure CLI: [install the Azure CLI](/cli/azure/install-azure-cli)
Install both the .NET Core SDK and the `aspnetcore` and dotnet runtime.
## 1. Setup
-To sign in to Azure from the CLI, run the following command and follow the prompts to complete the authentication process. If you are using Cloud Shell it is not necessary to sign in.
+To sign in to Azure from the CLI, run the following command and follow the prompts to complete the authentication process. If you're using Cloud Shell, it isn't necessary to sign in.
```azurecli
az login
The connection to the Web PubSub service is established when you see a JSON mess
## 4. Publish messages using service SDK You'll use the Azure Web PubSub SDK to publish a message to all the clients connected to the hub.
-You can choose between C#, JavaScript, Python and Java. The dependencies for each language are installed in the steps for that language. Note that Python, JavaScript and Java require a bash shell to run the commands in this quickstart.
+You can choose between C#, JavaScript, Python and Java. The dependencies for each language are installed in the steps for that language. Python, JavaScript and Java require a bash shell to run the commands in this quickstart.
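For the JavaScript path, publishing boils down to creating a service client from the connection string and broadcasting to the hub. A minimal sketch, assuming the `@azure/web-pubsub` service SDK; the environment variable name and the hub name (`myHub1`) are assumptions:

```javascript
const { WebPubSubServiceClient } = require("@azure/web-pubsub");

// Connection string saved in an earlier step; hub name is an assumption.
const serviceClient = new WebPubSubServiceClient(
  process.env.WebPubSubConnectionString,
  "myHub1"
);

// Broadcast a plain-text message to every client connected to the hub.
serviceClient
  .sendToAll("Hello World", { contentType: "text/plain" })
  .then(() => console.log("message sent"));
```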
### Set up the project to publish messages 1. Open a new command shell for this project.
-1. Save the connection string from the client shell:
+1. Save the connection string from the client shell. Replace the `<your_connection_string>` placeholder with the connection string you displayed in an earlier step.
# [Bash](#tab/bash)

```azurecli
- Connection_String="<your_connection_string>"
+ connection_string="<your_connection_string>"
```

# [Azure PowerShell](#tab/azure-powershell)
backup Backup Center Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-center-actions.md
To stop protection, navigate to the Backup center and select the **Backup Instan
![Stop protection](./media/backup-center-actions/backup-center-stop-protection.png) - [Learn more](backup-azure-manage-vms.md#stop-protecting-a-vm) about stopping backup for Azure Virtual Machines.-- [Learn more](manage-azure-managed-disks.md#stop-protection-preview) about stopping backup for a disk.
+- [Learn more](manage-azure-managed-disks.md#stop-protection) about stopping backup for a disk.
- [Learn more](manage-azure-database-postgresql.md#stop-protection) about stopping backup for Azure Database for PostgreSQL Server. ## Resume backup
backup Blob Backup Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/blob-backup-overview.md
You won't incur any management charges or instance fee when using operational ba
### Vaulted backup (preview)
-You won't incur backup storage charges or instance fees during the preview. However, you'll incur the source side cost, [associated with Object replication](/storage/blobs/object-replication-overview#billing), on the backed-up source account.
+You won't incur backup storage charges or instance fees during the preview. However, you'll incur the source side cost, [associated with Object replication](../storage/blobs/object-replication-overview.md#billing), on the backed-up source account.
## Next steps
backup Manage Azure Managed Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/manage-azure-managed-disks.md
Title: Manage Azure Managed Disks description: Learn about managing Azure Managed Disk from the Azure portal. Previously updated : 01/20/2023 Last updated : 03/27/2023
After you trigger the restore operation, the backup service creates a job for tr
This section describes several Azure Backup supported management operations that make it easy to manage Azure Managed disks.
-### Stop Protection (Preview)
+### Stop Protection
There are three ways by which you can stop protecting an Azure Disk:
There are three ways by which you can stop protecting an Azure Disk:
1. From the list of disk backup instances, select the instance that you want to retain.
-1. Select **Stop Backup (Preview)**.
+1. Select **Stop Backup**.
:::image type="content" source="./media/manage-azure-managed-disks/select-disk-backup-instance-to-stop-inline.png" alt-text="Screenshot showing the selection of the Azure disk backup instance to be stopped." lightbox="./media/manage-azure-managed-disks/select-disk-backup-instance-to-stop-expanded.png":::
cdn Cdn Pop Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-pop-locations.md
This article lists current metros containing point-of-presence (POP) locations,
| Africa | Johannesburg, South Africa <br/> Nairobi, Kenya | South Africa | | Middle East | Muscat, Oman<br />Fujirah, United Arab Emirates | Qatar<br />United Arab Emirates | | India | Bengaluru (Bangalore), India<br />Chennai, India<br />Mumbai, India<br />New Delhi, India<br /> | India |
-| Asia | Hong Kong<br />Jakarta, Indonesia<br />Osaka, Japan<br />Tokyo, Japan<br />Singapore<br />Kaohsiung, Taiwan<br />Taipei, Taiwan <br />Manila, Philippines | Hong Kong<br />Indonesia<br />Israel<br />Japan<br />Macau<br />Malaysia<br />Philippines<br />Singapore<br />South Korea<br />Taiwan<br />Thailand<br />Turkey<br />Vietnam |
+| Asia | Hong Kong<br />Jakarta, Indonesia<br />Osaka, Japan<br />Tokyo, Japan<br />Singapore<br />Kaohsiung, Taiwan<br />Taipei, Taiwan <br />Manila, Philippines | Hong Kong<br />Indonesia<br />Israel<br />Japan<br />Macau<br />Malaysia<br />Philippines<br />Singapore<br />South Korea<br />Taiwan<br />Thailand<br />Türkiye<br />Vietnam |
| Australia and New Zealand | Melbourne, Australia<br />Sydney, Australia<br />Auckland, New Zealand | Australia<br />New Zealand | ## Next steps
cdn Cdn Restrict Access By Country Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-restrict-access-by-country-region.md
In the country/region filtering rules table, select the delete icon next to a ru
* Only one rule can be applied to the same relative path. That is, you can't create multiple country/region filters that point to the same relative path. However, because country/region filters are recursive, a folder can have multiple country/region filters. In other words, a subfolder of a previously configured folder can be assigned a different country/region filter.
-* The geo-filtering feature uses [country/region codes](microsoft-pop-abbreviations.md) codes to define the countries/regions from which a request is allowed or blocked for a secured directory. **Azure CDN from Verizon** and **Azure CDN from Akamai** profiles use ISO 3166-1 alpha-2 country codes to define the countries from which a request are allowed or blocked for a secured directory.
+* The geo-filtering feature uses [country/region codes](microsoft-pop-abbreviations.md) to define the countries/regions from which a request is allowed or blocked for a secured directory. **Azure CDN from Verizon** and **Azure CDN from Akamai** profiles use ISO 3166-1 alpha-2 country codes to define the countries/regions from which a request is allowed or blocked for a secured directory.
cloud-services Cloud Services Guestos Msrc Releases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-msrc-releases.md
The following tables show the Microsoft Security Response Center (MSRC) updates
| MS16-139 |[3199720] |Security Update for Windows Kernel |2.57 |Nov 8.2016 | | MS16-140 |[3193479] |Security Update For Boot Manager |5.3, 4.38, 3.45 |Nov 8, 2016 | | MS16-142 |[3198467] |Cumulative Security Update for Internet Explorer |2.57, 4.38, 5.3 |Nov 8, 2016 |
-| N/A |[3192321] |Turkey ends DST observance |5.3, 4.38, 3.45, 2.57 |Nov 8, 2016 |
+| N/A |[3192321] |Türkiye ends DST observance |5.3, 4.38, 3.45, 2.57 |Nov 8, 2016 |
| N/A |[3185330] |October 2016 security monthly quality rollup for Windows 7 SP1 and Windows Server 2008 R2 SP1 |2.57 |Nov 8, 2016 | | N/A |[3192403] |October 2016 Preview of Monthly Quality Rollup for Windows 7 SP1 and Windows Server 2008 R2 SP1 |2.57 |Nov 8, 2016 | | N/A |[3177467] |Servicing stack update for Windows 7 SP1 and Windows Server 2008 R2 SP1: September 20, 2016 |2.57 |Nov 8, 2016 |
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Custom-Search/language-support.md
The `Accept-Language` header and the `setLang` query parameter are mutually excl
|Sweden|SE| |Switzerland|CH| |Taiwan|TW|
-|Turkey|TR|
+|Türkiye|TR|
|United Kingdom|GB| |United States|US|
The `Accept-Language` header and the `setLang` query parameter are mutually excl
|Switzerland|French|fr-CH| |Switzerland|German|de-CH| |Taiwan|Traditional Chinese|zh-TW|
-|Turkey|Turkish|tr-TR|
+|Türkiye|Turkish|tr-TR|
|United Kingdom|English|en-GB| |United States|English|en-US| |United States|Spanish|es-US|
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Image-Search/language-support.md
Alternatively, you can specify the country/region using the `cc` query parameter
|Sweden|SE| |Switzerland|CH| |Taiwan|TW|
-|Turkey|TR|
+|Türkiye|TR|
|United Kingdom|GB| |United States|US|
Alternatively, you can specify the country/region using the `cc` query parameter
|Switzerland|French|fr-CH| |Switzerland|German|de-CH| |Taiwan|Traditional Chinese|zh-TW|
-|Turkey|Turkish|tr-TR|
+|Türkiye|Turkish|tr-TR|
|United Kingdom|English|en-GB| |United States|English|en-US| |United States|Spanish|es-US|
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-News-Search/language-support.md
For a list of country/region codes that you may specify in the `cc` query parame
|Brazil|Portuguese|pt-BR| |Russia|Russian|ru-RU| |Sweden|Swedish|sv-SE|
-|Turkey|Turkish|tr-TR|
+|Türkiye|Turkish|tr-TR|
## Supported markets for news endpoint For the `/news` endpoint, the following table lists the market code values that you may use to specify the `mkt` query parameter. Bing returns content for only these markets. The list is subject to change.
The following are the country/region codes that you may specify in the `cc` quer
|Sweden|SE| |Switzerland|CH| |Taiwan|TW|
-|Turkey|TR|
+|Türkiye|TR|
|United Kingdom|GB| |United States|US|
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Web-Search/language-support.md
Alternatively, you can specify the market with the `mkt` query parameter, and a
|Sweden|SE| |Switzerland|CH| |Taiwan|TW|
-|Turkey|TR|
+|Türkiye|TR|
|United Kingdom|GB| |United States|US|
Alternatively, you can specify the market with the `mkt` query parameter, and a
|Switzerland|French|fr-CH| |Switzerland|German|de-CH| |Taiwan|Traditional Chinese|zh-TW|
-|Turkey|Turkish|tr-TR|
+|Türkiye|Turkish|tr-TR|
|United Kingdom|English|en-GB| |United States|English|en-US| |United States|Spanish|es-US|
cognitive-services Batch Transcription Audio Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/batch-transcription-audio-data.md
You could otherwise specify individual files in the container. You must generate
- [Batch transcription overview](batch-transcription.md) - [Create a batch transcription](batch-transcription-create.md) - [Get batch transcription results](batch-transcription-get.md)
+- [See batch transcription code samples on GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/batch/)
cognitive-services Batch Transcription Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/batch-transcription-create.md
The [Trusted Azure services security mechanism](batch-transcription-audio-data.m
- [Batch transcription overview](batch-transcription.md) - [Locate audio files for batch transcription](batch-transcription-audio-data.md)-- [Get batch transcription results](batch-transcription-get.md)
+- [Get batch transcription results](batch-transcription-get.md)
+- [See batch transcription code samples on GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/batch/)
cognitive-services Batch Transcription Get https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/batch-transcription-get.md
Depending in part on the request parameters set when you created the transcripti
- [Batch transcription overview](batch-transcription.md) - [Locate audio files for batch transcription](batch-transcription-audio-data.md) - [Create a batch transcription](batch-transcription-create.md)
+- [See batch transcription code samples on GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/batch/)
cognitive-services Batch Transcription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/batch-transcription.md
Batch transcription jobs are scheduled on a best-effort basis. You can't estimat
- [Locate audio files for batch transcription](batch-transcription-audio-data.md) - [Create a batch transcription](batch-transcription-create.md) - [Get batch transcription results](batch-transcription-get.md)
+- [See batch transcription code samples on GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/batch/)
cognitive-services Custom Neural Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/custom-neural-voice.md
Previously updated : 10/27/2022 Last updated : 03/27/2023
Custom Neural Voice (CNV) is a text-to-speech feature that lets you create a one
> [!IMPORTANT] > Custom Neural Voice access is [limited](/legal/cognitive-services/speech-service/custom-neural-voice/limited-access-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext) based on eligibility and usage criteria. Request access on the [intake form](https://aka.ms/customneural).
+>
+> Access to [Custom Neural Voice (CNV) Lite](custom-neural-voice-lite.md) is available for anyone to demo and evaluate CNV before investing in professional recordings to create a higher-quality voice.
Out of the box, [text-to-speech](text-to-speech.md) can be used with prebuilt neural voices for each [supported language](language-support.md?tabs=tts). The prebuilt neural voices work very well in most text-to-speech scenarios if a unique voice isn't required.
cognitive-services How To Migrate To Prebuilt Neural Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-migrate-to-prebuilt-neural-voice.md
More than 75 prebuilt standard voices are available in over 45 languages and loc
| Tamil (India) | `ta-IN` | Male | `ta-IN-Valluvar`| | Telugu (India) | `te-IN` | Female | `te-IN-Chitra`| | Thai (Thailand) | `th-TH` | Male | `th-TH-Pattara`|
-| Turkish (Turkey) | `tr-TR` | Female | `tr-TR-SedaRUS`|
+| Turkish (Türkiye) | `tr-TR` | Female | `tr-TR-SedaRUS`|
| Vietnamese (Vietnam) | `vi-VN` | Male | `vi-VN-An` | > [!IMPORTANT]
cognitive-services Create Sas Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/how-to-guides/create-sas-tokens.md
Previously updated : 12/17/2022 Last updated : 03/24/2023 # Create SAS tokens for your storage containers
-In this article, you'll learn how to create user delegation, shared access signature (SAS) tokens, using the Azure portal or Azure Storage Explorer. User delegation SAS tokens are secured with Azure AD credentials. SAS tokens provide secure, delegated access to resources in your Azure storage account.
+In this article, you learn how to create user delegation shared access signature (SAS) tokens by using the Azure portal or Azure Storage Explorer. User delegation SAS tokens are secured with Azure AD credentials. SAS tokens provide secure, delegated access to resources in your Azure storage account.
+
+>[!TIP]
+>
+> [Managed identities](create-use-managed-identities.md) provide an alternate method for you to grant access to your storage data without the need to include SAS tokens with your HTTP requests. *See* [Managed identities for Document Translation](create-use-managed-identities.md).
+>
+> * You can use managed identities to grant access to any resource that supports Azure AD authentication, including your own applications.
+> * Using managed identities replaces the requirement for you to include shared access signature tokens (SAS) with your source and target URLs.
+> * There's no added cost to use managed identities in Azure.
At a high level, here's how SAS tokens work:
Azure Blob Storage offers three resource types:
## Prerequisites
-To get started, you'll need the following resources:
+To get started, you need the following resources:
* An active [Azure account](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [create a free account](https://azure.microsoft.com/free/). * A [Translator](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) resource.
-* A **standard performance** [Azure Blob Storage account](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM). You'll create containers to store and organize your files within your storage account. If you don't know how to create an Azure storage account with a storage container, follow these quickstarts:
+* A **standard performance** [Azure Blob Storage account](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM). You also need to create containers to store and organize your files within your storage account. If you don't know how to create an Azure storage account with a storage container, follow these quickstarts:
* [Create a storage account](../../../../storage/common/storage-account-create.md). When you create your storage account, select **Standard** performance in the **Instance details** > **Performance** field. * [Create a container](../../../../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container). When you create your container, set **Public access level** to **Container** (anonymous read access for containers and files) in the **New Container** window.
Go to the [Azure portal](https://portal.azure.com/#home) and navigate to your co
1. Specify the signed key **Start** and **Expiry** times. * When you create a shared access signature (SAS), the default duration is 48 hours. After 48 hours, you'll need to create a new token.
- * Consider setting a longer duration period for the time you'll be using your storage account for Translator Service operations.
+ * Consider setting a longer duration period for the time you're using your storage account for Translator Service operations.
* The value for the expiry time is a maximum of seven days from the creation of the SAS token.
-1. The **Allowed IP addresses** field is optional and specifies an IP address or a range of IP addresses from which to accept requests. If the request IP address doesn't match the IP address or address range specified on the SAS token, it won't be authorized.
+1. The **Allowed IP addresses** field is optional and specifies an IP address or a range of IP addresses from which to accept requests. If the request IP address doesn't match the IP address or address range specified on the SAS token, authorization fails.
1. The **Allowed protocols** field is optional and specifies the protocol permitted for a request made with the SAS. The default value is HTTPS. 1. Review then select **Generate SAS token and URL**.
-1. The **Blob SAS token** query string and **Blob SAS URL** will be displayed in the lower area of window.
+1. The **Blob SAS token** query string and **Blob SAS URL** are displayed in the lower area of the window.
1. **Copy and paste the Blob SAS token and URL values in a secure location. They'll only be displayed once and cannot be retrieved once the window is closed.**
Go to the [Azure portal](https://portal.azure.com/#home) and navigate to your co
Azure Storage Explorer is a free standalone app that enables you to easily manage your Azure cloud storage resources from your desktop.
-* You'll need the [**Azure Storage Explorer**](../../../../vs-azure-tools-storage-manage-with-storage-explorer.md) app installed in your Windows, macOS, or Linux development environment.
+* You need the [**Azure Storage Explorer**](../../../../vs-azure-tools-storage-manage-with-storage-explorer.md) app installed in your Windows, macOS, or Linux development environment.
* After the Azure Storage Explorer app is installed, [connect it to the storage account](../../../../vs-azure-tools-storage-manage-with-storage-explorer.md?tabs=windows#connect-to-a-storage-account-or-service) you're using for Document Translation. Follow these steps to create tokens for a storage container or specific blob file:
Azure Storage Explorer is a free standalone app that enables you to easily manag
* Define your container **Permissions** by checking and/or clearing the appropriate check box. * Review and select **Create**.
-1. A new window will appear with the **Container** name, **URI**, and **Query string** for your container.
+1. A new window appears with the **Container** name, **URI**, and **Query string** for your container.
1. **Copy and paste the container, URI, and query string values in a secure location. They'll only be displayed once and can't be retrieved once the window is closed.** 1. To [construct a SAS URL](#use-your-sas-url-to-grant-access), append the SAS token (URI) to the URL for a storage service.
Azure Storage Explorer is a free standalone app that enables you to easily manag
* Select **key1** or **key2**. * Review and select **Create**.
-1. A new window will appear with the **Blob** name, **URI**, and **Query string** for your blob.
+1. A new window appears with the **Blob** name, **URI**, and **Query string** for your blob.
1. **Copy and paste the blob, URI, and query string values in a secure location. They will only be displayed once and cannot be retrieved once the window is closed.** 1. To [construct a SAS URL](#use-your-sas-url-to-grant-access), append the SAS token (URI) to the URL for a storage service.
Azure Storage Explorer is a free standalone app that enables you to easily manag
### Use your SAS URL to grant access
-The SAS URL includes a special set of [query parameters](/rest/api/storageservices/create-user-delegation-sas#assign-permissions-with-rbac). Those parameters indicate how the resources may be accessed by the client.
+The SAS URL includes a special set of [query parameters](/rest/api/storageservices/create-user-delegation-sas#assign-permissions-with-rbac). Those parameters indicate how the client accesses the resources.
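To construct a SAS URL, you append the SAS token to the URL of the storage resource. A minimal sketch; the account, container, and token values are placeholders:

```javascript
// Values copied from the portal or Storage Explorer; both are placeholders.
const containerUrl = "https://mystorageaccount.blob.core.windows.net/mycontainer";
const sasToken = "sv=2022-11-02&ss=b&srt=co&sp=rl&sig=<signature>";

// The SAS URL is the resource URL plus the token as its query string.
const sasUrl = `${containerUrl}?${sasToken}`;
```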
You can include your SAS URL with REST API requests in two ways:
cognitive-services Create Use Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/how-to-guides/create-use-managed-identities.md
Previously updated : 03/17/2023 Last updated : 03/24/2023 # Managed identities for Document Translation
-Managed identities for Azure resources are service principals that create an Azure Active Directory (Azure AD) identity and specific permissions for Azure managed resources. Managed identities are a safer way to grant access to data without having SAS tokens included with your HTTP requests.
+ Managed identities for Azure resources are service principals that create an Azure Active Directory (Azure AD) identity and specific permissions for Azure managed resources. Managed identities are a safer way to grant access to data without the need to include SAS tokens with your HTTP requests.
:::image type="content" source="../media/managed-identity-rbac-flow.png" alt-text="Screenshot of managed identity flow (RBAC).":::
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/overview.md
Previously updated : 07/13/2022 Last updated : 03/24/2023 recommendations: false
recommendations: false
Document Translation is a cloud-based feature of the [Azure Translator](../translator-overview.md) service and is part of the Azure Cognitive Service family of REST APIs. The Document Translation API can be used to translate multiple and complex documents across all [supported languages and dialects](../../language-support.md), while preserving original document structure and data format.
-This documentation contains the following article types:
-
-* [**Quickstarts**](get-started-with-document-translation.md) are getting-started instructions to guide you through making requests to the service.
-* [**How-to guides**](create-sas-tokens.md) contain instructions for using the feature in more specific or customized ways.
-* [**Reference**](reference/rest-api-guide.md) provide REST API settings, values, keywords, and configuration.
- ## Document Translation key features | Feature | Description |
You can add Document Translation to your applications using the REST API or a cl
## Get started
-In our how-to guide, you'll learn how to quickly get started using Document Translation. To begin, you'll need an active [Azure account](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [create a free account](https://azure.microsoft.com/free).
+In our quickstart, you learn how to quickly get started with Document Translation. To begin, you need an active [Azure account](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [create a free account](https://azure.microsoft.com/free).
> [!div class="nextstepaction"] > [Start here](get-started-with-document-translation.md "Learn how to use Document Translation with HTTP REST") ## Supported document formats
-The following document file types are supported by Document Translation:
+Document Translation supports the following document file types:
| File type| File extension|Description| |||--|
-|Adobe PDF|pdf|Portable document file format. Document Translation uses optical character recognition (OCR) technology to extract and translate text in scanned PDF document while retaining the original layout.|
-|Comma-Separated Values |csv| A comma-delimited raw-data file used by spreadsheet programs.|
-|HTML|html, htm|Hyper Text Markup Language.|
+|Adobe PDF|`pdf`|Portable document file format. Document Translation uses optical character recognition (OCR) technology to extract and translate text in scanned PDF document while retaining the original layout.|
+|Comma-Separated Values |`csv`| A comma-delimited raw-data file used by spreadsheet programs.|
+|HTML|`html`, `htm`|Hyper Text Markup Language.|
|Localization Interchange File Format|xlf| A parallel document format, export of Translation Memory systems. The languages used are defined inside the file.|
-|Markdown| markdown, mdown, mkdn, md, mkd, mdwn, mdtxt, mdtext, rmd| A lightweight markup language for creating formatted text.|
-|MHTML|mthml, mht| A web page archive format used to combine HTML code and its companion resources.|
-|Microsoft Excel|xls, xlsx|A spreadsheet file for data analysis and documentation.|
-|Microsoft Outlook|msg|An email message created or saved within Microsoft Outlook.|
-|Microsoft PowerPoint|ppt, pptx| A presentation file used to display content in a slideshow format.|
-|Microsoft Word|doc, docx| A text document file.|
-|OpenDocument Text|odt|An open-source text document file.|
-|OpenDocument Presentation|odp|An open-source presentation file.|
-|OpenDocument Spreadsheet|ods|An open-source spreadsheet file.|
-|Rich Text Format|rtf|A text document containing formatting.|
-|Tab Separated Values/TAB|tsv/tab| A tab-delimited raw-data file used by spreadsheet programs.|
-|Text|txt| An unformatted text document.|
+|Markdown| `markdown`, `mdown`, `mkdn`, `md`, `mkd`, `mdwn`, `mdtxt`, `mdtext`, `rmd`| A lightweight markup language for creating formatted text.|
+|M&#8203;HTML|`mhtml`, `mht`| A web page archive format used to combine HTML code and its companion resources.|
+|Microsoft Excel|`xls`, `xlsx`|A spreadsheet file for data analysis and documentation.|
+|Microsoft Outlook|`msg`|An email message created or saved within Microsoft Outlook.|
+|Microsoft PowerPoint|`ppt`, `pptx`| A presentation file used to display content in a slideshow format.|
+|Microsoft Word|`doc`, `docx`| A text document file.|
+|OpenDocument Text|`odt`|An open-source text document file.|
+|OpenDocument Presentation|`odp`|An open-source presentation file.|
+|OpenDocument Spreadsheet|`ods`|An open-source spreadsheet file.|
+|Rich Text Format|`rtf`|A text document containing formatting.|
+|Tab Separated Values/TAB|`tsv`/`tab`| A tab-delimited raw-data file used by spreadsheet programs.|
+|Text|`txt`| An unformatted text document.|
### Legacy file types
-Source file types will be preserved during the document translation with the following **exceptions**:
+Source file types are preserved during the document translation with the following **exceptions**:
| Source file extension | Translated file extension| | | |
Source file types will be preserved during the document translation with the fol
## Supported glossary formats
-The following glossary file types are supported by Document Translation:
+Document Translation supports the following glossary file types:
| File type| File extension|Description| |||--|
-|Comma-Separated Values| csv |A comma-delimited raw-data file used by spreadsheet programs.|
-|Localization Interchange File Format| xlf , xliff| A parallel document format, export of Translation Memory systems The languages used are defined inside the file.|
-|Tab-Separated Values/TAB|tsv, tab| A tab-delimited raw-data file used by spreadsheet programs.|
+|Comma-Separated Values| `csv` |A comma-delimited raw-data file used by spreadsheet programs.|
+|Localization Interchange File Format| `xlf`, `xliff`| A parallel document format, export of Translation Memory systems. The languages used are defined inside the file.|
+|Tab-Separated Values/TAB|`tsv`, `tab`| A tab-delimited raw-data file used by spreadsheet programs.|
## Next steps
cognitive-services V3 0 Languages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/reference/v3-0-languages.md
# Translator 3.0: Languages
-Gets the set of languages currently supported by other operations of the Translator.
+Gets the set of languages currently supported by other operations of the Translator.
## Request URL
Request parameters passed on the query string are:
</tr> <tr> <td>scope</td>
- <td>*Optional parameter*.<br/>A comma-separated list of names defining the group of languages to return. Allowed group names are: `translation`, `transliteration` and `dictionary`. If no scope is given, then all groups are returned, which is equivalent to passing `scope=translation,transliteration,dictionary`. To decide which set of supported languages is appropriate for your scenario, see the description of the [response object](#response-body).</td>
+ <td>*Optional parameter*.<br/>A comma-separated list of names defining the group of languages to return. Allowed group names are: `translation`, `transliteration` and `dictionary`. If no scope is given, then all groups are returned, which is equivalent to passing `scope=translation,transliteration,dictionary`.</td>
</tr>
-</table>
+</table>
+
+*See* [response body](#response-body).
Request headers are:
Request headers are:
<th>Description</th> <tr> <td>Accept-Language</td>
- <td>*Optional request header*.<br/>The language to use for user interface strings. Some of the fields in the response are names of languages or names of regions. Use this parameter to define the language in which these names are returned. The language is specified by providing a well-formed BCP 47 language tag. For instance, use the value `fr` to request names in French or use the value `zh-Hant` to request names in Chinese Traditional.<br/>Names are provided in the English language when a target language is not specified or when localization is not available.
+ <td>*Optional request header*.<br/>The language to use for user interface strings. Some of the fields in the response are names of languages or names of regions. Use this parameter to define the language in which these names are returned. The language is specified by providing a well-formed BCP 47 language tag. For instance, use the value `fr` to request names in French or use the value `zh-Hant` to request names in Chinese Traditional.<br/>Names are provided in the English language when a target language isn't specified or when localization isn't available.
</td> </tr> <tr> <td>X-ClientTraceId</td> <td>*Optional request header*.<br/>A client-generated GUID to uniquely identify the request.</td> </tr>
-</table>
+</table>
Authentication isn't required to get language resources.
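Since no authentication is required, you can try the endpoint directly. A minimal JavaScript sketch using the global `fetch` API (Node.js 18+ or a browser), requesting only the `translation` scope:

```javascript
// Fetch the set of languages supported for translation.
fetch("https://api.cognitive.microsofttranslator.com/languages?api-version=3.0&scope=translation")
  .then((res) => res.json())
  .then((languages) => {
    // `translation` maps language codes to { name, nativeName, dir }.
    console.log(languages.translation["fr"]);
  });
```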
The value for each property is as follows.
* `dir`: Directionality, which is `rtl` for right-to-left languages or `ltr` for left-to-right languages. An example is:
-
+ ```json { "translation": {
The value for each property is as follows.
* `nativeName`: Display name of the target language in the locale native for the target language. * `dir`: Directionality, which is `rtl` for right-to-left languages or `ltr` for left-to-right languages.
-
+ * `code`: Language code identifying the target language. An example is:
The value for each property is as follows.
}, ```
-The structure of the response object will not change without a change in the version of the API. For the same version of the API, the list of available languages may change over time because Microsoft Translator continually extends the list of languages supported by its services.
+The structure of the response object doesn't change without a change in the version of the API. For the same version of the API, the list of available languages may change over time because Microsoft Translator continually extends the list of languages supported by its services.
-The list of supported languages will not change frequently. To save network bandwidth and improve responsiveness, a client application should consider caching language resources and the corresponding entity tag (`ETag`). Then, the client application can periodically (for example, once every 24 hours) query the service to fetch the latest set of supported languages. Passing the current `ETag` value in an `If-None-Match` header field will allow the service to optimize the response. If the resource has not been modified, the service will return status code 304 and an empty response body.
+The list of supported languages doesn't change frequently. To save network bandwidth and improve responsiveness, a client application should consider caching language resources and the corresponding entity tag (`ETag`). Then, the client application can periodically (for example, once every 24 hours) query the service to fetch the latest set of supported languages. Passing the current `ETag` value in an `If-None-Match` header field allows the service to optimize the response. If the resource hasn't been modified, the service returns status code 304 and an empty response body.
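A minimal sketch of that caching pattern, assuming a hypothetical `cache` object that you maintain with `etag` and `body` fields:

```javascript
// Revalidate a cached copy of the languages resource.
async function refreshLanguages(cache) {
  const res = await fetch(
    "https://api.cognitive.microsofttranslator.com/languages?api-version=3.0",
    { headers: { "If-None-Match": cache.etag } }
  );
  if (res.status === 304) {
    return cache.body; // not modified: reuse the cached copy
  }
  cache.etag = res.headers.get("ETag");
  cache.body = await res.json();
  return cache.body;
}
```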
## Response headers
The list of supported languages will not change frequently. To save network band
</tr> <tr> <td>X-RequestId</td>
- <td>Value generated by the service to identify the request. It is used for troubleshooting purposes.</td>
+ <td>Value generated by the service to identify the request. It's used for troubleshooting purposes.</td>
</tr>
-</table>
+</table>
## Response status codes
-The following are the possible HTTP status codes that a request returns.
+The following are the possible HTTP status codes that a request returns.
<table width="100%"> <th width="20%">Status Code</th>
The following are the possible HTTP status codes that a request returns.
</tr> <tr> <td>304</td>
- <td>The resource has not been modified since the version specified by request headers `If-None-Match`.</td>
+ <td>The resource hasn't been modified since the version specified by request headers `If-None-Match`.</td>
</tr> <tr> <td>400</td>
The following are the possible HTTP status codes that a request returns.
<td>503</td> <td>Server temporarily unavailable. Retry the request. If the error persists, report it with: date and time of the failure, request identifier from response header `X-RequestId`, and client identifier from request header `X-ClientTraceId`.</td> </tr>
-</table>
+</table>
-If an error occurs, the request will also return a JSON error response. The error code is a 6-digit number combining the 3-digit HTTP status code followed by a 3-digit number to further categorize the error. Common error codes can be found on the [v3 Translator reference page](./v3-0-reference.md#errors).
+If an error occurs, the request also returns a JSON error response. The error code is a 6-digit number combining the 3-digit HTTP status code followed by a 3-digit number to further categorize the error. Common error codes can be found on the [v3 Translator reference page](./v3-0-reference.md#errors).
## Examples
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/bing-visual-search/language-support.md
Alternatively, you can specify the country/region using the `cc` query parameter
|Sweden|SE| |Switzerland|CH| |Taiwan|TW|
-|Turkey|TR|
+|T├╝rkiye|TR|
|United Kingdom|GB| |United States|US|
Alternatively, you can specify the country/region using the `cc` query parameter
|Switzerland|French|fr-CH| |Switzerland|German|de-CH| |Taiwan|Traditional Chinese|zh-TW|
-|Turkey|Turkish|tr-TR|
+|T├╝rkiye|Turkish|tr-TR|
|United Kingdom|English|en-GB| |United States|English|en-US| |United States|Spanish|es-US|
cognitive-services Tutorial Visual Search Image Upload https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/bing-visual-search/tutorial-visual-search-image-upload.md
This application has an option to change these values. Add the following `<div>`
<option value="fr-CH">Switzerland (French)</option> <option value="de-CH">Switzerland (German)</option> <option value="zh-TW">Taiwan (Traditional Chinese)</option>
- <option value="tr-TR">Turkey (Turkish)</option>
+ <option value="tr-TR">Türkiye (Turkish)</option>
<option value="en-GB">United Kingdom (English)</option> <option value="en-US" selected>United States (English)</option> <option value="es-US">United States (Spanish)</option>
communication-services European Union Data Boundary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/european-union-data-boundary.md
Azure Communication Services complies with European Union Data Boundary (EUDB) [announced by Microsoft Dec 15, 2022](https://blogs.microsoft.com/eupolicy/2022/12/15/eu-data-boundary-cloud-rollout/).
-This boundary defines data residency and processing rules for resources based on the data location selected when creating a new communication resource. When a data location for a resource is one of the European countries in scope of EUDB, then all processing and storage of personal data remain within the European Union. The EU Data Boundary consists of the countries in the European Union (EU) and the European Free Trade Association (EFTA). The EU Countries are Austria, Belgium, Bulgaria, Croatia, Cyprus, Czechia, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Ireland, Italy, Latvia, Lithuania, Luxembourg, Malta, Netherlands, Poland, Portugal, Romania, Slovakia, Slovenia, Spain, and Sweden; and the EFTA countries are Liechtenstein, Iceland, Norway, and Switzerland.
+This boundary defines data residency and processing rules for resources based on the data location selected when creating a new communication resource. When a data location for a resource is one of the European countries/regions in scope of EUDB, then all processing and storage of personal data remain within the European Union. The EU Data Boundary consists of the countries/regions in the European Union (EU) and the European Free Trade Association (EFTA). The EU countries/regions are Austria, Belgium, Bulgaria, Croatia, Cyprus, Czechia, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Ireland, Italy, Latvia, Lithuania, Luxembourg, Malta, Netherlands, Poland, Portugal, Romania, Slovakia, Slovenia, Spain, and Sweden; and the EFTA countries/regions are Liechtenstein, Iceland, Norway, and Switzerland.
## Calling
communication-services Sub Eligibility Number Capability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/sub-eligibility-number-capability.md
More details on eligible subscription types are as follows:
| Toll-Free and Local (Geographic) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement*, Pay-As-You-Go |
| Short-Codes | Modern Customer Agreement (Field Led), Enterprise Agreement**, Pay-As-You-Go |
-\* In some countries, number purchases are only allowed for own use. Reselling or suballcoating to another parties is not allowed. Due to this, purchases for CSP and LSP customers is not allowed.
+\* In some countries/regions, number purchases are only allowed for your own use. Reselling or suballocating to other parties isn't allowed. Due to this, purchases for CSP and LSP customers aren't allowed.
\** Applications from all other subscription types will be reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
communication-services Pstn Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/pstn-pricing.md
All prices shown below are in USD.
### Usage charges

|Number type |To make calls* |To receive calls|
|--|--|--|
-|Geographic |Starting at USD 0165/min |USD 0.0072/min |
-|Toll-free |Starting at USD 0165/min | USD 0.2200/min |
+|Geographic |Starting at USD 0.165/min |USD 0.0072/min |
+|Toll-free |Starting at USD 0.165/min | USD 0.2200/min |
\* For destination-specific pricing for making outbound calls, please refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
communication-services Plan Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/plan-solution.md
The table below summarizes these phone number types:
| Local (Geographic) | +1 (local area code) XXX XX XX | US* | Calling (Outbound) | Assigning phone numbers to users in your applications |
| Toll-Free | +1 (toll-free area code) XXX XX XX | US* | Calling (Outbound), SMS (Inbound/Outbound)| Assigning phone numbers to Interactive Voice Response (IVR) systems/Bots, SMS applications |
-*To find all countries where telephone numbers are available, please refer to [subscription eligibility and number capabilities page](../numbers/sub-eligibility-number-capability.md).
+*To find all countries/regions where telephone numbers are available, please refer to [subscription eligibility and number capabilities page](../numbers/sub-eligibility-number-capability.md).
### Phone number capabilities in Azure Communication Services
communication-services Mute Participants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/call-automation/mute-participants.md
+
+ Title: Mute participants during a call
+
+description: Provides a how-to guide for muting participants during a call.
++++ Last updated : 03/19/2023+++
+zone_pivot_groups: acs-csharp-java
++
+# Mute participants during a call
+
+>[!IMPORTANT]
+>Functionality described in this document is currently in private preview. Private preview includes access to SDKs and documentation for testing purposes that aren't yet publicly available.
+>Apply to become an early adopter by filling out the form for [preview access to Azure Communication Services](https://aka.ms/acs-tap-invite).
+
+With the Azure Communication Services Call Automation SDK, developers can now mute participants through server-based API requests. This feature is useful when you want your application to mute participants after they've joined a meeting, to avoid interruptions or distractions during the meeting.
+
+If you're interested in allowing participants to mute or unmute themselves on a call they've joined with the Azure Communication Services client libraries, you can use the [mute/unmute function](../../../communication-services/how-tos/calling-sdk/manage-calls.md) provided through our Calling library.
+
+## Common use cases
+
+### Contact center supervisor call monitoring
+
+In a typical contact center, there may be times when a supervisor needs to join an ongoing call to monitor it and provide guidance to agents after the call on how they could improve their assistance. The supervisor joins muted so as not to disturb the ongoing call with any extra background noise.
+
+*This guide helps you learn how to mute participants by using the mute action provided through Azure Communication Services Call Automation SDK.*
+++++
+## Clean up resources
+
+If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../../quickstarts/create-communication-resource.md#clean-up-resources).
+
+## Next steps
+
+Learn more about [Call Automation](../../concepts/call-automation/call-automation.md).
communication-services Bring Your Own Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/call-automation/call-recording/bring-your-own-storage.md
This quickstart gets you started with BYOS (Bring your own storage) for Call Rec
![Diagram showing a communication service resource with managed identity disabled](../media/byos-managed-identity-1.png)

1. Open your Azure Communication Services resource. Navigate to *Identity* on the left.
-2. System Assigned Managed Identity is disabled by default. Enable it and click of *Save*
+2. System Assigned Managed Identity is disabled by default. Enable it and click on *Save*
3. Once completed, you're able to see the Object principal ID of the newly created identity.

![Diagram showing a communication service resource with managed identity enabled](../media/byos-managed-identity-2.png)
confidential-computing Quick Create Confidential Vm Azure Cli Amd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/quick-create-confidential-vm-azure-cli-amd.md
Make a note of the `publicIpAddress` to use later.
Create a confidential [disk encryption set](../virtual-machines/linux/disks-enable-customer-managed-keys-cli.md) using [Azure Key Vault](../key-vault/general/quick-create-cli.md) or [Azure Key Vault managed Hardware Security Module (HSM)](../key-vault/managed-hsm/quick-create-cli.md). Based on your security and compliance needs you can choose either option. The following example uses Azure Key Vault Premium.
-1. Create an Azure Key Vault using the [az keyvault create](/cli/azure/keyvault) command. For the pricing tier, select Premium (includes support for HSM backed keys). Make sure that you have an owner role in this key vault.
+1. Add the confidential VM service principal `Confidential VM Orchestrator` to your tenant.
+For this step, you need to be a Global Admin or have the User Access Administrator RBAC role.
+ ```azurepowershell
+ Connect-AzureAD -Tenant "your tenant ID"
+ New-AzureADServicePrincipal -AppId bf7b6499-ff71-4aa2-97a4-f372087be7f0 -DisplayName "Confidential VM Orchestrator"
+ ```
+2. Create an Azure Key Vault using the [az keyvault create](/cli/azure/keyvault) command. For the pricing tier, select Premium (includes support for HSM backed keys). Make sure that you have an owner role in this key vault.
    ```azurecli-interactive
    az keyvault create -n keyVaultName -g myResourceGroup --enabled-for-disk-encryption true --sku premium --enable-purge-protection true
    ```
-2. Create a key in the key vault using [az keyvault key create](/cli/azure/keyvault). For the key type, use RSA-HSM.
+3. Grant `Confidential VM Orchestrator` permissions to `get` and `release` keys in the key vault.
+ ```azurepowershell
+ $cvmAgent = az ad sp show --id "bf7b6499-ff71-4aa2-97a4-f372087be7f0" | Out-String | ConvertFrom-Json
+ az keyvault set-policy --name keyVaultName --object-id $cvmAgent.objectId --key-permissions get release
+ ```
+4. Create a key in the key vault using [az keyvault key create](/cli/azure/keyvault). For the key type, use RSA-HSM.
    ```azurecli-interactive
    az keyvault key create --name mykey --vault-name keyVaultName --default-cvm-policy --exportable --kty RSA-HSM
    ```
-3. Create the disk encryption set using [az disk-encryption-set create](/cli/azure/disk-encryption-set). Set the encryption type to `ConfidentialVmEncryptedWithCustomerKey`.
+5. Create the disk encryption set using [az disk-encryption-set create](/cli/azure/disk-encryption-set). Set the encryption type to `ConfidentialVmEncryptedWithCustomerKey`.
    ```azurecli-interactive
    $keyVaultKeyUrl=(az keyvault key show --vault-name keyVaultName --name mykey --query [key.kid] -o tsv)
    az disk-encryption-set create --resource-group myResourceGroup --name diskEncryptionSetName --key-url $keyVaultKeyUrl --encryption-type ConfidentialVmEncryptedWithCustomerKey
    ```
-4. Grant the disk encryption set resource access to the key vault using [az key vault set-policy](/cli/azure/keyvault).
+6. Grant the disk encryption set resource access to the key vault using [az keyvault set-policy](/cli/azure/keyvault).
    ```azurecli-interactive
    $desIdentity=(az disk-encryption-set show -n diskEncryptionSetName -g myResourceGroup --query [identity.principalId] -o tsv)
    az keyvault set-policy -n keyVaultName -g myResourceGroup --object-id $desIdentity --key-permissions wrapkey unwrapkey get
    ```
-5. Use the disk encryption set ID to create the VM.
+7. Use the disk encryption set ID to create the VM.
    ```azurecli-interactive
    $diskEncryptionSetID=(az disk-encryption-set show -n diskEncryptionSetName -g myResourceGroup --query [id] -o tsv)
    ```
-6. Create a VM with the [az vm create](/cli/azure/vm) command. Choose `DiskWithVMGuestState` for OS disk confidential encryption with a customer-managed key. Enabling secure boot is optional, but recommended. For more information, see [secure boot and vTPM](../virtual-machines/trusted-launch.md). For more information on disk encryption, see [confidential OS disk encryption](confidential-vm-overview.md).
+8. Create a VM with the [az vm create](/cli/azure/vm) command. Choose `DiskWithVMGuestState` for OS disk confidential encryption with a customer-managed key. Enabling secure boot is optional, but recommended. For more information, see [secure boot and vTPM](../virtual-machines/trusted-launch.md). For more information on disk encryption, see [confidential OS disk encryption](confidential-vm-overview.md).
    ```azurecli-interactive
    az vm create \
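        --resource-group myResourceGroup \
        --name myConfidentialVM \
        --size Standard_DC4as_v5 \
        --image "<os-image-name>" \
        --admin-username <azure-username> \
        --generate-ssh-keys \
        --enable-secure-boot true \
        --enable-vtpm true \
        --security-type ConfidentialVM \
        --os-disk-security-encryption-type DiskWithVMGuestState \
        --os-disk-secure-vm-disk-encryption-set $diskEncryptionSetID
    # The flags above are an illustrative completion (the original command is truncated
    # here); values in angle brackets are placeholders, not the verbatim original.
    ```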
echo -n $JWT | cut -d "." -f 2 | base64 -d 2> /dev/null | jq .
## Next steps

> [!div class="nextstepaction"]
-> [Create a confidential VM on AMD with an ARM template](quick-create-confidential-vm-arm-amd.md)
+> [Create a confidential VM on AMD with an ARM template](quick-create-confidential-vm-arm-amd.md)
container-apps Azure Arc Enable Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/azure-arc-enable-cluster.md
Previously updated : 3/20/2023 Last updated : 3/24/2023
The [custom location](../azure-arc/kubernetes/custom-locations.md) is an Azure l
+ > [!NOTE]
+ > If you experience issues creating a custom location on your cluster, you may need to [enable the custom location feature on your cluster](../azure-arc/kubernetes/custom-locations.md#enable-custom-locations-on-your-cluster). This is required if you're logged in to the CLI using a service principal, or if you're logged in with an Azure Active Directory user that has restricted permissions on the cluster resource.
+ >
+ 1. Validate that the custom location is successfully created with the following command. The output should show the `provisioningState` property as `Succeeded`. If not, rerun the command after a minute.

    ```azurecli
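    az customlocation show \
        --resource-group <ResourceGroup> \
        --name <CustomLocationName> \
        --query "provisioningState"
    # Illustrative sketch: the original command is truncated in this digest; this form
    # assumes the "customlocation" CLI extension is installed.
    ```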
container-apps Log Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/log-options.md
Container Apps application logs consist of two different categories:
- Container console output (`stdout`/`stderr`) messages.
- System logs generated by Azure Container Apps.
+- Spring App console logs.
You can choose between these log destinations:
container-apps Log Streaming https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/log-streaming.md
description: View your container app's log stream.
- Previously updated : 08/30/2022 Last updated : 03/24/2023 # View log streams in Azure Container Apps
-While developing and troubleshooting your container app, it's important to see a container's logs in real-time. Container Apps lets you view a stream of your container's `stdout` and `stderr` log messages through the Azure portal or the Azure CLI.
+While developing and troubleshooting your container app, it's essential to see the [logs](logging.md) for your container app in real time. Azure Container Apps lets you stream:
-## Azure portal
+- [system logs](logging.md#system-logs) from the Container Apps environment and your container app.
+- container [console logs](logging.md#container-console-logs) from your container app.
-View a container app's log stream in the Azure portal with these steps.
+Log streams are accessible through the Azure portal or the Azure CLI.
-1. Navigate to your container app in the Azure portal.
+## View log streams via the Azure portal
+
+You can view system logs and console logs in the Azure portal. System logs are generated by the container app's runtime. Console logs are generated by your container app.
+
+### Environment system log stream
+
+To troubleshoot issues in your container app environment, you can view the system log stream from your environment page. The log stream displays the system logs for the Container Apps service and the apps actively running in the environment:
+
+1. Go to your environment in the Azure portal.
1. Select **Log stream** under the *Monitoring* section on the sidebar menu.
-1. If you have multiple revisions, replicas, or containers, you can select from the pull-down menus to choose a container. If your app has only one container, you can skip this step.
-After a container is selected, the log stream is displayed in the viewing pane.
+ :::image type="content" source="media/observability/system-log-streaming-env.png" alt-text="Screenshot of Container Apps environment system log stream page.":::
+### Container app log stream
+
+You can view a log stream of your container app's system or console logs from your container app page.
+
+1. Go to your container app in the Azure portal.
+1. Select **Log stream** under the *Monitoring* section on the sidebar menu.
+1. To view the console log stream, select **Console**.
+ 1. If you have multiple revisions, replicas, or containers, you can select from the drop-down menus to choose a container. If your app has only one container, you can skip this step.
-## Azure CLI
+ :::image type="content" source="media/observability/screenshot-log-stream-console-app.png" alt-text="Screenshot of Container Apps console log stream from app page.":::
-You can view a container's log stream from the Azure CLI with the `az containerapp logs show` command. You can use these arguments to:
+1. To view the system log stream, select **System**. The system log stream displays the system logs for all running containers in your container app.
-- View previous log entries with the `--tail` argument.-- View a live stream with the `--follow`argument.
+ :::image type="content" source="media/observability/screenshot-log-stream-system-app.png" alt-text="Screenshot of Container Apps system log stream from app page.":::
-Use `Ctrl/Cmd-C` to stop the live stream.
+## View log streams via the Azure CLI
-For example, you can list the last 50 container log entries in a container app with a single container using the following command.
+You can view your container app's log streams from the Azure CLI with the `az containerapp logs show` command or your container app's environment system log stream with the `az containerapp env logs show` command.
-This example live streams a container's log entries.
+Control the log stream with the following arguments:
+
+- `--tail` (Default) View the last n log messages. Values are 0-300 messages. The default is 20.
+- `--follow` View a continuous live stream of the log messages.
+
+### Stream container app logs
+
+You can stream the system or console logs for your container app. To stream the container app system logs, use the `--type` argument with the value `system`. To stream the container console logs, use the `--type` argument with the value `console`. The default is `console`.
+
+#### View container app system log stream
+
+This example uses the `--tail` argument to display the last 50 system log messages from the container app. Replace the \<placeholders\> with your container app's values.
# [Bash](#tab/bash)
This example live streams a container's log entries.
az containerapp logs show \
  --name <ContainerAppName> \
  --resource-group <ResourceGroup> \
+ --type system \
  --tail 50
```
az containerapp logs show \
az containerapp logs show `
  --name <ContainerAppName> `
  --resource-group <ResourceGroup> `
+ --type system `
  --tail 50
```
-To connect to a container console in a container app with multiple revisions, replicas, and containers include the following parameters in the `az containerapp logs show` command.
+This example displays a continuous live stream of system log messages from the container app using the `--follow` argument. Replace the \<placeholders\> with your container app's values.
+
+# [Bash](#tab/bash)
+
+```azurecli
+az containerapp logs show \
+ --name <ContainerAppName> \
+ --resource-group <ResourceGroup> \
+ --type system \
+ --follow
+```
+
+# [PowerShell](#tab/powershell)
+
+```azurecli
+az containerapp logs show `
+ --name <ContainerAppName> `
+ --resource-group <ResourceGroup> `
+ --type system `
+ --follow
+```
+++
+Use `Ctrl-C` or `Cmd-C` to stop the live stream.
+
+### View container console log stream
+
+To connect to a container's console log stream in a container app with multiple revisions, replicas, and containers, include the following parameters in the `az containerapp logs show` command.
| Argument | Description |
|---|---|
-| `--revision` | The revision name of the container to connect to. |
-| `--replica` | The replica name of the container to connect to. |
-| `--container` | The container name of the container to connect to. |
+| `--revision` | The revision name. |
+| `--replica` | The replica name in the revision. |
+| `--container` | The container name to connect to. |
-You can get the revision names with the `az containerapp revision list` command. Replace the \<placeholders\> with your container app's values.
+You can get the revision names with the `az containerapp revision list` command. Replace the \<placeholders\> with your container app's values.
# [Bash](#tab/bash)
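
For example, a minimal sketch assuming the standard `--name` and `--resource-group` parameters:

```azurecli
az containerapp revision list \
  --name <ContainerAppName> \
  --resource-group <ResourceGroup> \
  -o table
```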
az containerapp replica list `
-Stream the container logs with the `az container app show` command. Replace the \<placeholders\> with your container app's values.
-
+Live stream the container console using the `az containerapp logs show` command with the `--follow` argument. Replace the \<placeholders\> with your container app's values.
# [Bash](#tab/bash)
az containerapp logs show \
  --revision <RevisionName> \
  --replica <ReplicaName> \
  --container <ContainerName> \
+ --type console \
  --follow
```
az containerapp logs show `
  --revision <RevisionName> `
  --replica <ReplicaName> `
  --container <ContainerName> `
+ --type console `
+ --follow
+```
+++
+Use `Ctrl-C` or `Cmd-C` to stop the live stream.
+
+View the last 50 console log messages using the `az containerapp logs show` command with the `--tail` argument. Replace the \<placeholders\> with your container app's values.
+
+# [Bash](#tab/bash)
+
+```azurecli
+az containerapp logs show \
+ --name <ContainerAppName> \
+ --resource-group <ResourceGroup> \
+ --revision <RevisionName> \
+ --replica <ReplicaName> \
+ --container <ContainerName> \
+ --type console \
+ --tail 50
+```
+
+# [PowerShell](#tab/powershell)
+
+```azurecli
+az containerapp logs show `
+ --name <ContainerAppName> `
+ --resource-group <ResourceGroup> `
+ --revision <RevisionName> `
+ --replica <ReplicaName> `
+ --container <ContainerName> `
+ --type console `
+ --tail 50
+```
+++
+### View environment system log stream
+
+Use the following command with the `--follow` argument to view the live system log stream from the Container Apps environment. Replace the \<placeholders\> with your environment values.
+
+# [Bash](#tab/bash)
+
+```azurecli
+az containerapp env logs show \
+ --name <ContainerAppEnvironmentName> \
+ --resource-group <ResourceGroup> \
+ --follow
+```
+
+# [PowerShell](#tab/powershell)
+
+```azurecli
+az containerapp env logs show `
+ --name <ContainerAppEnvironmentName> `
+ --resource-group <ResourceGroup> `
  --follow
```
+Use `Ctrl-C` or `Cmd-C` to stop the live stream.
-Enter **Ctrl-C** to stop the log stream.
+This example uses the `--tail` argument to display the last 50 environment system log messages. Replace the \<placeholders\> with your environment values.
+
+# [Bash](#tab/bash)
+
+```azurecli
+az containerapp env logs show \
+  --name <ContainerAppEnvironmentName> \
+ --resource-group <ResourceGroup> \
+ --tail 50
+```
+
+# [PowerShell](#tab/powershell)
+
+```azurecli
+az containerapp env logs show `
+  --name <ContainerAppEnvironmentName> `
+ --resource-group <ResourceGroup> `
+ --tail 50
+```
++ > [!div class="nextstepaction"]
-> [View log streams from the Azure portal](log-streaming.md)
+> [Log storage and monitoring options in Azure Container Apps](log-monitoring.md)
container-apps Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/logging.md
Azure Container Apps provides two types of application logging categories:
-- [Container console logs](#container-console-logs): Log streams from your container console.
-- [System logs](#system-logs): Logs generated by the Azure Container Apps service.
+- [Container console logs](#container-console-logs): Log streams from your container console.
+- [System logs](#system-logs): Logs generated by the Azure Container Apps service.
+You can view the [log streams](log-streaming.md) in near real-time in the Azure portal or CLI. For more options to store and monitor your logs, see [Logging options](log-options.md).
## Container console Logs
-Container console logs are written by your application to the `stdout` and `stderr` output streams of the application's container. By implementing detailed logging in your application, you'll be able to troubleshoot issues and monitor the health of your application.
-
-You can view your container console logs through [Logs streaming](log-streaming.md). For other options to store and monitoring your log data, see [Logging options](log-options.md).
+Container Apps captures the `stdout` and `stderr` output streams from your application containers and displays them as console logs. When you implement logging in your application, you can troubleshoot problems and monitor the health of your app.
## System logs
-System logs are generated by the Azure Container Apps to inform you for the status of service level events. Log messages include the following information:
+Container Apps generates system logs to inform you of the status of service level events. Log messages include the following information:
- Successfully created dapr component
- Successfully updated dapr component
System logs are generated by the Azure Container Apps to inform you for the stat
- Successfully mounted volume
- Error mounting volume
- Successfully bound Domain
-- Auth enabled on app. Creating authentication config
+- Auth enabled on app
+- Creating authentication config
- Auth config created successfully
-- Setting a traffic weight
+- Setting a traffic weight
- Creating a new revision:
  - Successfully provisioned revision
  - Deactivating Old revisions
  - Error provisioning revision
-The system log data can be stored and monitored through the Container Apps logging options. For more information, see [Logging options](log-options.md).
-
## Next steps

> [!div class="nextstepaction"]
container-apps Observability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/observability.md
These features include:
|Feature |Description |
|---|---|
-|[Log streaming](log-streaming.md) | View streaming console logs from a container in near real-time. |
+|[Log streaming](log-streaming.md) | View streaming system and console logs from a container in near real-time. |
|[Container console](container-console.md) | Connect to the Linux console in your containers to debug your application from inside the container. |
|[Azure Monitor metrics](metrics.md)| View and analyze your application's compute and network usage through metric data. |
|[Application logging](logging.md) | Monitor, analyze and debug your app using log data.|
cosmos-db Analytical Store Change Data Capture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/analytical-store-change-data-capture.md
+
+ Title: Change data capture in analytical store
+
+description: Change data capture (CDC) in Azure Cosmos DB analytical store allows you to efficiently consume a continuous and incremental feed of changed data.
+++++ Last updated : 03/23/2023++
+# Change Data Capture in Azure Cosmos DB analytical store
++
+Change data capture (CDC) in [Azure Cosmos DB analytical store](analytical-store-introduction.md) allows you to efficiently consume a continuous and incremental feed of changed (inserted, updated, and deleted) data from analytical store. The change data capture feature of the analytical store is seamlessly integrated with Azure Synapse and Azure Data Factory, providing you with a scalable no-code experience for high data volume. As the change data capture feature is based on analytical store, it [doesn't consume provisioned RUs, doesn't affect your transactional workloads](analytical-store-introduction.md#decoupled-performance-for-analytical-workloads), provides lower latency, and has lower TCO.
++
+In addition to providing incremental data feed from analytical store to diverse targets, change data capture supports the following capabilities:
+
+- Supports applying filters, projections and transformations on the Change feed via source query
+- Supports capturing deletes and intermediate updates
+- Ability to filter the change feed for a specific type of operation (**Insert** | **Update** | **Delete** | **TTL**)
+- Each change in the container appears exactly once in the change data capture feed, and the checkpoints are managed internally for you
+- Changes can be synchronized from "the Beginning" or "from a given timestamp" or "from now"
+- There's no fixed data retention period limiting how long changes are available
+- Multiple change feeds on the same container can be consumed simultaneously
+
+## Features
+
+Change data capture in Azure Cosmos DB analytical store supports the following key features.
+
+### Capturing deletes and intermediate updates
+
+The change data capture feature for the analytical store captures deleted records and the intermediate updates. The captured deletes and updates can be applied on sinks that support delete and update operations. The `_rid` value uniquely identifies the records, so by specifying `_rid` as the key column on the sink side, the update and delete operations are reflected on the sink.
+
+### Filter the change feed for a specific type of operation
+
+You can filter the change data capture feed for a specific type of operation. For example, you can selectively capture the insert and update operations only, thereby ignoring the user-delete and TTL-delete operations.
+
+### Applying filters, projections, and transformations on the Change feed via source query
+
+You can optionally use a source query to specify filter(s), projection(s), and transformation(s), which would all be pushed down to the columnar analytical store. Here's a sample source-query that would only capture incremental records with the filter `Category = 'Urban'`. This sample query projects only five fields and applies a simple transformation:
+
+```sql
+SELECT ProductId, Product, Segment, concat(Manufacturer, '-', Category) as ManufacturerCategory
+FROM c
+WHERE Category = 'Urban'
+```
+
+> [!NOTE]
+> If you would like to enable source-query based change data capture on Azure Data Factory data flows during preview, please email [cosmosdbsynapselink@microsoft.com](mailto:cosmosdbsynapselink@microsoft.com) and share your **subscription Id** and **region**. This is not necessary to enable source-query based change data capture on an Azure Synapse data flow.
+
+### Throughput isolation, lower latency and lower TCO
+
+Operations on the Azure Cosmos DB analytical store don't consume provisioned RUs, so they don't affect your transactional workloads. Change data capture with the analytical store also has lower latency and lower TCO: the analytical store enables better parallelism for data processing, which lowers latency and reduces the overall cost.
+
+## Scenarios
+
+Here are common scenarios where you could use change data capture and the analytical store.
+
+### Consuming incremental data from Cosmos DB
+
+You can use analytical store change data capture, if you're currently using or planning to use:
+
+- Incremental data capture using Azure Data Factory Data Flows or Copy activity.
+- One time batch processing using Azure Data Factory.
+- Streaming Cosmos DB data
+  - The analytical store has up to 2 minutes of latency to sync transactional store data. You can schedule Data Flows in Azure Data Factory every minute.
+ - If you need to stream without the above latency, we recommend using the change feed feature of the transactional store.
+- Capturing deletes, incremental changes, and applying filters on Cosmos DB data.
+  - If you're using Azure Functions triggers or any other option with the change feed and would like to capture deletes and incremental changes or apply transformations, we recommend change data capture over the analytical store.
+
+### Incremental feed to analytical platform of your choice
+
+The change data capture capability enables an end-to-end analytical solution, providing you with the flexibility to use Azure Cosmos DB data seamlessly on the analytical platform of your choice. It also enables you to bring Cosmos DB data into a centralized data lake and join it with data from diverse data sources. For more information, see [supported sink types](../data-factory/data-flow-sink.md#supported-sinks). You can flatten the data and apply more transformations in either Azure Synapse Analytics or Azure Data Factory.
+
+## Change data capture on Azure Cosmos DB for MongoDB containers
+
+The linked service interface for the API for MongoDB isn't available within Azure Data Factory data flows yet. You can use your API for MongoDB account's endpoint with the **Azure Cosmos DB for NoSQL** linked service interface as a workaround until the Mongo linked service is directly supported.
+
+In the interface for a new NoSQL linked service, select **Enter Manually** to provide the Azure Cosmos DB account information. Here, use the account's NoSQL document endpoint (for example, `https://<account-name>.documents.azure.com:443/`) instead of the MongoDB endpoint (for example, `mongodb://<account-name>.mongo.cosmos.azure.com:10255/`).
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Get started with change data capture in the analytical store](get-started-change-data-capture.md)
cosmos-db Get Started Change Data Capture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/get-started-change-data-capture.md
+
+ Title: Get started with change data capture in analytical store
+
+description: Enable change data capture in Azure Cosmos DB analytical store for an existing account to consume a continuous and incremental feed of changed data.
+++++ Last updated : 03/23/2023++
+# Get started with change data capture in the analytical store for Azure Cosmos DB
++
+Use Change data capture (CDC) in Azure Cosmos DB analytical store as a source to [Azure Data Factory](../data-factory/index.yml) or [Azure Synapse Analytics](../synapse-analytics/index.yml) to capture specific changes to your data.
+
+## Prerequisites
+
+- An existing Azure Cosmos DB account.
+ - If you have an Azure subscription, [create a new account](nosql/how-to-create-account.md?tabs=azure-portal).
+ - If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+ - Alternatively, you can [try Azure Cosmos DB free](try-free.md) before you commit.
+
+## Enable analytical store
+
+First, enable Azure Synapse Link at the account level and then enable analytical store for the containers that are appropriate for your workload.
+
+1. Enable Azure Synapse Link: [Enable Azure Synapse Link for an Azure Cosmos DB account](configure-synapse-link.md#enable-synapse-link)
+
+1. Enable analytical store for your containers (a CLI sketch follows the table below):
+
+ | Option | Guide |
+ | | |
+ | **Enable for a specific new container** | [Enable Azure Synapse Link for your new containers](configure-synapse-link.md#new-container) |
+ | **Enable for a specific existing container** | [Enable Azure Synapse Link for your existing containers](configure-synapse-link.md#existing-container) |
+
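+As a minimal sketch, both settings can also be enabled from the Azure CLI, assuming an existing API for NoSQL account (all names are placeholders):
+
+```azurecli
+# Enable Azure Synapse Link (analytical storage) at the account level
+az cosmosdb update \
+    --resource-group <resource-group-name> \
+    --name <account-name> \
+    --enable-analytical-storage true
+
+# Enable analytical store on an existing container; a TTL of -1 retains analytical data indefinitely
+az cosmosdb sql container update \
+    --resource-group <resource-group-name> \
+    --account-name <account-name> \
+    --database-name <database-name> \
+    --name <container-name> \
+    --analytical-storage-ttl -1
+```
+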
+## Create a target Azure resource using data flows
+
+The change data capture feature of the analytical store is available through the data flow feature of [Azure Data Factory](../data-factory/concepts-data-flow-overview.md) or [Azure Synapse Analytics](../synapse-analytics/concepts-data-flow-overview.md). For this guide, use Azure Data Factory.
+
+> [!IMPORTANT]
+> You can alternatively use Azure Synapse Analytics. First, [create an Azure Synapse workspace](../synapse-analytics/quickstart-create-workspace.md), if you don't already have one. Within the newly created workspace, select the **Develop** tab, select **Add new resource**, and then select **Data flow**.
+
+1. [Create an Azure Data Factory](../data-factory/quickstart-create-data-factory.md), if you don't already have one.
+
+ > [!TIP]
+ > If possible, create the data factory in the same region where your Azure Cosmos DB account resides.
+
+1. Launch the newly created data factory.
+
+1. In the data factory, select the **Data flows** tab, and then select **New data flow**.
+
+1. Give the newly created data flow a unique name. In this example, the data flow is named `cosmoscdc`.
+
+    :::image type="content" source="media/get-started-change-data-capture/data-flow-name.png" lightbox="media/get-started-change-data-capture/data-flow-name.png" alt-text="Screenshot of a new data flow with the name cosmoscdc.":::
+
+## Configure source settings for the analytical store container
+
+Now create and configure a source to flow data from the Azure Cosmos DB account's analytical store.
+
+1. Select **Add Source**.
+
+ :::image type="content" source="media/get-started-change-data-capture/add-source.png" alt-text="Screenshot of the add source menu option.":::
+
+1. In the **Output stream name** field, enter **cosmos**.
+
+ :::image type="content" source="media/get-started-change-data-capture/source-name.png" alt-text="Screenshot of naming the newly created source cosmos.":::
+
+1. In the **Source type** section, select **Inline**.
+
+ :::image type="content" source="media/get-started-change-data-capture/inline-source-type.png" alt-text="Screenshot of selecting the inline source type.":::
+
+1. In the **Dataset** field, select **Azure - Azure Cosmos DB for NoSQL**.
+
+ :::image type="content" source="media/get-started-change-data-capture/dataset-type-cosmos.png" alt-text="Screenshot of selecting Azure Cosmos DB for NoSQL as the dataset type.":::
+
+1. Create a new linked service for your account named **cosmoslinkedservice**. Select your existing Azure Cosmos DB for NoSQL account in the **New linked service** popup dialog and then select **Ok**. In this example, we select a pre-existing Azure Cosmos DB for NoSQL account named `msdocs-cosmos-source` and a database named `cosmicworks`.
+
+ :::image type="content" source="media/get-started-change-data-capture/new-linked-service.png" alt-text="Screenshot of the New linked service dialog with an Azure Cosmos DB account selected.":::
+
+1. Select **Analytical** for the store type.
+
+ :::image type="content" source="media/get-started-change-data-capture/linked-service-analytical.png" alt-text="Screenshot of the analytical option selected for a linked service.":::
+
+1. Select the **Source options** tab.
+
+1. Within **Source options**, select your target container and enable **Data flow debug**. In this example, the container is named `products`.
+
+ :::image type="content" source="media/get-started-change-data-capture/container-name.png" alt-text="Screenshot of a source container selected named products.":::
+
+1. Select **Data flow debug**. In the **Turn on data flow debug** popup dialog, retain the default options and then select **Ok**.
+
+ :::image type="content" source="media/get-started-change-data-capture/enable-data-flow-debug.png" alt-text="Screenshot of the toggle option to enable data flow debug.":::
+
+1. The **Source options** tab also contains other options you may wish to enable. This table describes those options:
+
+| Option | Description |
+| | |
+| Capture intermediate updates | Enable this option if you would like to capture the history of changes to items including the intermediate changes between change data capture reads. |
+| Capture Deletes | Enable this option to capture user-deleted records and apply them on the Sink. Deletes can't be applied on Azure Data Explorer and Azure Cosmos DB Sinks. |
+| Capture Transactional store TTLs | Enable this option to capture Azure Cosmos DB transactional store (time-to-live) TTL deleted records and apply on the Sink. TTL-deletes can't be applied on Azure Data Explorer and Azure Cosmos DB sinks. |
+| Batchsize in bytes | Specify the size in bytes if you would like to batch the change data capture feeds |
+| Extra Configs | Extra Azure Cosmos DB analytical store configs and their values. (ex: `spark.cosmos.allowWhiteSpaceInFieldNames -> true`) |
+
+## Create and configure sink settings for update and delete operations
+
+First, create a straightforward [Azure Blob Storage](../storage/blobs/index.yml) sink and then configure the sink to filter data to only specific operations.
+
+1. [Create an Azure Blob Storage](../data-factory/quickstart-create-data-factory.md) account and container, if you don't already have one. For the next examples, we'll use an account named `msdocsblobstorage` and a container named `output`.
+
+ > [!TIP]
+ > If possible, create the storage account in the same region where your Azure Cosmos DB account resides.
+
+1. Back in Azure Data Factory, create a new sink for the change data captured from your `cosmos` source.
+
+ :::image type="content" source="media/get-started-change-data-capture/add-sink.png" alt-text="Screenshot of adding a new sink that's connected to the existing source.":::
+
+1. Give the sink a unique name. In this example, the sink is named `storage`.
+
+ :::image type="content" source="media/get-started-change-data-capture/sink-name.png" alt-text="Screenshot of naming the newly created sink storage.":::
+
+1. In the **Sink type** section, select **Inline**. In the **Dataset** field, select **Delta**.
+
+    :::image type="content" source="media/get-started-change-data-capture/sink-dataset-type.png" alt-text="Screenshot of selecting an Inline Delta dataset type for the sink.":::
+
+1. Create a new linked service for your account using **Azure Blob Storage** named **storagelinkedservice**. Select your existing Azure Blob Storage account in the **New linked service** popup dialog and then select **Ok**. In this example, we select a pre-existing Azure Blob Storage account named `msdocsblobstorage`.
+
+ :::image type="content" source="media/get-started-change-data-capture/new-linked-service-sink-type.png" alt-text="Screenshot of the service type options for a new Delta linked service.":::
+
+ :::image type="content" source="media/get-started-change-data-capture/new-linked-service-sink-config.png" alt-text="Screenshot of the New linked service dialog with an Azure Blob Storage account selected.":::
+
+1. Select the **Settings** tab.
+
+1. Within **Settings**, set the **Folder path** to the name of the blob container. In this example, the container's name is `output`.
+
+ :::image type="content" source="media/get-started-change-data-capture/sink-container-name.png" alt-text="Screenshot of the blob container named output set as the sink target.":::
+
+1. Locate the **Update method** section and change the selections to only allow **delete** and **update** operations. Also, specify the **Key columns** as a **List of columns** using the field `_rid` as the unique identifier.
+
+ :::image type="content" source="media/get-started-change-data-capture/sink-methods-columns.png" alt-text="Screenshot of update methods and key columns being specified for the sink.":::
+
+1. Select **Validate** to ensure you haven't made any errors or omissions. Then, select **Publish** to publish the data flow.
+
+ :::image type="content" source="media/get-started-change-data-capture/validate-publish-data-flow.png" alt-text="Screenshot of the option to validate and then publish the current data flow.":::
+
+## Schedule change data capture execution
+
+After a data flow has been published, you can add a new pipeline to move and transform your data.
+
+1. Create a new pipeline. Give the pipeline a unique name. In this example, the pipeline is named `cosmoscdcpipeline`.
+
+ :::image type="content" source="media/get-started-change-data-capture/new-pipeline.png" alt-text="Screenshot of the new pipeline option within the resources section.":::
+
+1. In the **Activities** section, expand the **Move &amp; transform** option and then select **Data flow**.
+
+ :::image type="content" source="media/get-started-change-data-capture/data-flow-activity.png" alt-text="Screenshot of the data flow activity option within the activities section.":::
+
+1. Give the data flow activity a unique name. In this example, the activity is named `cosmoscdcactivity`.
+
+1. In the **Settings** tab, select the data flow named `cosmoscdc` you created earlier in this guide. Then, select a compute size based on the data volume and required latency for your workload.
+
+ :::image type="content" source="media/get-started-change-data-capture/data-flow-settings.png" alt-text="Screenshot of the configuration settings for both the data flow and compute size for the activity.":::
+
+ > [!TIP]
+ > For incremental data sizes greater than 100 GB, we recommend the **Custom** size with a core count of 32 (+16 driver cores).
+
+1. Select **Add trigger**. Schedule this pipeline to execute at a cadence that makes sense for your workload. In this example, the pipeline is configured to execute every five minutes.
+
+ :::image type="content" source="media/get-started-change-data-capture/add-trigger.png" alt-text="Screenshot of the add trigger button for a new pipeline.":::
+
+ :::image type="content" source="media/get-started-change-data-capture/trigger-configuration.png" alt-text="Screenshot of a trigger configuration based on a schedule, starting in the year 2023, that runs every five minutes.":::
+
+ > [!NOTE]
+ > The minimum recurrence window for change data capture executions is one minute.
+
+1. Select **Validate** to ensure you haven't made any errors or omissions. Then, select **Publish** to publish the pipeline.
+
+1. Observe the data placed into the Azure Blob Storage container as an output of the data flow using Azure Cosmos DB analytical store change data capture.
+
+    :::image type="content" source="media/get-started-change-data-capture/output-files.png" alt-text="Screenshot of the output files from the pipeline in the Azure Blob Storage container.":::
+
+ > [!NOTE]
+ > The initial cluster startup time may take up to three minutes. To avoid cluster startup time in the subsequent change data capture executions, configure the Dataflow cluster **Time to live** value. For more information about the integration runtime and TTL, see [integration runtime in Azure Data Factory](../data-factory/concepts-integration-runtime.md).
+
+## Next steps
+
+- Review the [overview of Azure Cosmos DB analytical store](analytical-store-introduction.md)
cosmos-db Compatibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/compatibility.md
Azure Cosmos DB for MongoDB vCore supports the following aggregation pipeline fe
| Command | Supported |
|---|---|
| `$mergeObjects` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `$objectToArray` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
+| `$objectToArray` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
| `$setField` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |

## Data types
cosmos-db Periodic Backup Modify Interval Retention https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/periodic-backup-modify-interval-retention.md
+
+ Title: Modify periodic backup interval and retention period
+
+description: Learn how to modify the interval and retention period for periodic backup in Azure Cosmos DB accounts.
+++++ Last updated : 03/21/2023+++
+# Modify periodic backup interval and retention period in Azure Cosmos DB
++
+Azure Cosmos DB automatically takes a full backup of your data every 4 hours, and at any point in time only the latest two backups are stored. This configuration is the default option and it's offered without any extra cost. You can change the default backup interval and retention period during Azure Cosmos DB account creation or after the account is created. The backup configuration is set at the Azure Cosmos DB account level, and you need to configure it on each account. After you configure the backup options for an account, they're applied to all the containers within that account. You can modify these settings using the Azure portal, Azure PowerShell, or the Azure CLI.
+
+## Prerequisites
+
+- An existing Azure Cosmos DB account.
+ - If you have an Azure subscription, [create a new account](nosql/how-to-create-account.md?tabs=azure-portal).
+ - If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+ - Alternatively, you can [try Azure Cosmos DB free](try-free.md) before you commit.
+
+## Before you start
+
+If you've accidentally deleted or corrupted your data, **before you create a support request to restore the data, make sure to increase the backup retention for your account to at least seven days. It's best to increase your retention within 8 hours of this event.** This way, the Azure Cosmos DB team has enough time to restore your account.
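+
+For example, seven days is 168 hours. Using the same Azure CLI command shown later in this article, you can raise the retention like this (resource names are placeholders):
+
+```azurecli
+az cosmosdb update \
+    --resource-group <resource-group-name> \
+    --name <account-name> \
+    --backup-retention 168
+```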
+
+## Modify backup options for an existing account
+
+Use the following steps to change the default backup options for an existing Azure Cosmos DB account.
+
+### [Azure portal](#tab/azure-portal)
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. Navigate to your Azure Cosmos DB account and open the **Backup & Restore** pane. Update the backup interval and the backup retention period as required.
+
+  - **Backup Interval** - It's the interval at which Azure Cosmos DB attempts to take a backup of your data. Backup takes a nonzero amount of time, and in some cases it could fail due to downstream dependencies. Azure Cosmos DB tries its best to take a backup at the configured interval; however, it doesn't guarantee that the backup completes within that time interval. You can configure this value in hours or minutes. The backup interval can't be less than 1 hour or greater than 24 hours. When you change this interval, the new interval takes effect starting from the time when the last backup was taken.
+
+  - **Backup Retention** - It represents the period for which each backup is retained. You can configure it in hours or days. The minimum retention period can't be less than two times the backup interval (in hours), and it can't be greater than 720 hours. For example, with an 8-hour backup interval, the minimum retention is 16 hours.
+
+  - **Copies of data retained** - By default, two backup copies of your data are offered free of charge. There's an extra charge if you need more than two copies. See the Consumed Storage section in the [pricing page](https://azure.microsoft.com/pricing/details/cosmos-db/) for the exact price of extra copies.
+
+  - **Backup storage redundancy** - Choose the required storage redundancy option. For more information, see [backup storage redundancy](periodic-backup-storage-redundancy.md). By default, your existing periodic backup mode accounts have geo-redundant storage if the region where the account is provisioned supports it. Otherwise, the account falls back to the highest redundancy option available. You can choose other storage, such as locally redundant, to ensure the backup isn't replicated to another region. Changes made to an existing account are applied only to future backups. After the backup storage redundancy of an existing account is updated, it may take up to twice the backup interval time for the changes to take effect, and **you immediately lose the ability to restore older backups.**
+
+ > [!NOTE]
+    > You must have the [Azure Cosmos DB Operator](../role-based-access-control/built-in-roles.md#cosmos-db-operator) role assigned at the subscription level to configure backup storage redundancy.
+
+ :::image type="content" source="./media/periodic-backup-modify-interval-retention/configure-existing-account-portal.png" lightbox="./media/periodic-backup-modify-interval-retention/configure-existing-account-portal.png" alt-text="Screenshot of configuration options including backup interval, retention, and storage redundancy for an existing Azure Cosmos DB account.":::
+
+### [Azure CLI](#tab/azure-cli)
+
+Use the [`az cosmosdb update`](/cli/azure/cosmosdb#az-cosmosdb-update) command to update the periodic backup options for an existing account.
+
+```azurecli-interactive
+az cosmosdb update \
+ --resource-group <resource-group-name> \
+ --name <account-name> \
+ --backup-interval 480 \
+ --backup-retention 24
+```
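+
+The backup storage redundancy can also be changed from the CLI. As a sketch, this assumes the `--backup-redundancy` parameter, which accepts `Geo`, `Local`, or `Zone`:
+
+```azurecli
+az cosmosdb update \
+    --resource-group <resource-group-name> \
+    --name <account-name> \
+    --backup-redundancy Local
+```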
+
+### [Azure PowerShell](#tab/azure-powershell)
+
+Use the [`Update-AzCosmosDBAccount`](/powershell/module/az.cosmosdb/update-azcosmosdbaccount) cmdlet to update the periodic backup options for an existing account.
+
+```azurepowershell-interactive
+$parameters = @{
+ ResourceGroupName = "<resource-group-name>"
+ Name = "<account-name>"
+ BackupIntervalInMinutes = 480
+ BackupRetentionIntervalInHours = 24
+}
+Update-AzCosmosDBAccount @parameters
+```
+
+### [Azure Resource Manager template](#tab/azure-resource-manager-template)
+
+Use the following Azure Resource Manager JSON template to update the periodic backup options for an existing account.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "newAccountName": {
+ "type": "string",
+ "defaultValue": "[format('nosql-{0}', toLower(uniqueString(resourceGroup().id)))]",
+ "metadata": {
+ "description": "Name of the existing Azure Cosmos DB account."
+ }
+ },
+ "location": {
+ "type": "string",
+ "defaultValue": "[resourceGroup().location]",
+ "metadata": {
+ "description": "Location for the Azure Cosmos DB account."
+ }
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.DocumentDB/databaseAccounts",
+ "apiVersion": "2022-05-15",
+ "name": "[parameters('newAccountName')]",
+ "location": "[parameters('location')]",
+ "kind": "GlobalDocumentDB",
+ "properties": {
+ "databaseAccountOfferType": "Standard",
+ "locations": [
+ {
+ "locationName": "[parameters('location')]"
+ }
+ ],
+ "backupPolicy": {
+ "type": "Periodic",
+ "periodicModeProperties": {
+ "backupIntervalInMinutes": 480,
+ "backupRetentionIntervalInHours": 24,
+ "backupStorageRedundancy": "Local"
+ }
+ }
+ }
+ }
+ ]
+}
+```
+
+Alternatively, you can use the Bicep variant of the same template.
+
+```bicep
+@description('Name of the existing Azure Cosmos DB account.')
+param newAccountName string = 'nosql-${toLower(uniqueString(resourceGroup().id))}'
+
+@description('Location for the Azure Cosmos DB account.')
+param location string = resourceGroup().location
+
+resource account 'Microsoft.DocumentDB/databaseAccounts@2022-05-15' = {
+ name: newAccountName
+ location: location
+ kind: 'GlobalDocumentDB'
+ properties: {
+ databaseAccountOfferType: 'Standard'
+ locations: [
+ {
+ locationName: location
+ }
+ ]
+    backupPolicy: {
+      type: 'Periodic'
+      periodicModeProperties: {
+        backupIntervalInMinutes: 480
+        backupRetentionIntervalInHours: 24
+        backupStorageRedundancy: 'Local'
+      }
+    }
+ }
+}
+```
+++
+## Configure backup options for a new account
+
+Use these steps to change the default backup options for a new Azure Cosmos DB account.
+
+> [!NOTE]
+> For illustrative purposes, these examples assume that you are creating an [Azure Cosmos DB for NoSQL](nosql/index.yml) account. The steps are very similar for accounts using other APIs.
+
+### [Azure portal](#tab/azure-portal)
+
+When provisioning a new account, from the **Backup Policy** tab, select the **Periodic** backup policy. The periodic policy allows you to configure the backup interval, backup retention, and backup storage redundancy. For example, you can choose the **locally redundant backup storage** or **Zone redundant backup storage** options to prevent backup data replication outside your region.
++
+### [Azure CLI](#tab/azure-cli)
+
+Use the [`az cosmosdb create`](/cli/azure/cosmosdb#az-cosmosdb-create) command to create a new account with the specified periodic backup options.
+
+```azurecli-interactive
+az cosmosdb create \
+ --resource-group <resource-group-name> \
+ --name <account-name> \
+ --locations regionName=<azure-region> \
+ --backup-interval 360 \
+ --backup-retention 12
+```
+
+### [Azure PowerShell](#tab/azure-powershell)
+
+Use the [`New-AzCosmosDBAccount`](/powershell/module/az.cosmosdb/new-azcosmosdbaccount) cmdlet to create a new account with the specified periodic backup options.
+
+```azurepowershell-interactive
+$parameters = @{
+ ResourceGroupName = "<resource-group-name>"
+ Name = "<account-name>"
+ Location = "<azure-region>"
+ BackupPolicyType = "Periodic"
+ BackupIntervalInMinutes = 360
+ BackupRetentionIntervalInHours = 12
+}
+New-AzCosmosDBAccount @parameters
+```
+
+### [Azure Resource Manager template](#tab/azure-resource-manager-template)
+
+Use the following Azure Resource Manager JSON template to create a new account with the specified periodic backup options.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "newAccountName": {
+ "type": "string",
+ "defaultValue": "[format('nosql-{0}', toLower(uniqueString(resourceGroup().id)))]",
+ "metadata": {
+ "description": "New Azure Cosmos DB account name. Max length is 44 characters."
+ }
+ },
+ "location": {
+ "type": "string",
+ "defaultValue": "[resourceGroup().location]",
+ "metadata": {
+ "description": "Location for the new Azure Cosmos DB account."
+ }
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.DocumentDB/databaseAccounts",
+ "apiVersion": "2022-05-15",
+ "name": "[parameters('newAccountName')]",
+ "location": "[parameters('location')]",
+ "kind": "GlobalDocumentDB",
+ "properties": {
+ "databaseAccountOfferType": "Standard",
+ "locations": [
+ {
+ "locationName": "[parameters('location')]"
+ }
+ ],
+ "backupPolicy": {
+ "type": "Periodic",
+ "periodicModeProperties": {
+ "backupIntervalInMinutes": 360,
+ "backupRetentionIntervalInHours": 12,
+ "backupStorageRedundancy": "Zone"
+ }
+ }
+ }
+ }
+ ]
+}
+```
+
+Alternatively, you can use the Bicep variant of the same template.
+
+```bicep
+@description('New Azure Cosmos DB account name. Max length is 44 characters.')
+param newAccountName string = 'sql-${toLower(uniqueString(resourceGroup().id))}'
+
+@description('Location for the new Azure Cosmos DB account.')
+param location string = resourceGroup().location
+
+resource account 'Microsoft.DocumentDB/databaseAccounts@2022-05-15' = {
+ name: newAccountName
+ location: location
+ kind: 'GlobalDocumentDB'
+ properties: {
+ databaseAccountOfferType: 'Standard'
+ locations: [
+ {
+ locationName: location
+ }
+ ]
+    backupPolicy: {
+      type: 'Periodic'
+      periodicModeProperties: {
+        backupIntervalInMinutes: 360
+        backupRetentionIntervalInHours: 12
+        backupStorageRedundancy: 'Zone'
+      }
+    }
+ }
+}
+```
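+
+Either template variant can then be deployed with a standard deployment command. For example, with Azure CLI (`<template-file-name>` is a placeholder for your saved `.json` or `.bicep` file):
+
+```azurecli-interactive
+az deployment group create \
+    --resource-group <resource-group-name> \
+    --template-file <template-file-name>
+```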
+++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Request data restoration from a backup](periodic-backup-request-data-restore.md)
cosmos-db Periodic Backup Request Data Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/periodic-backup-request-data-restore.md
+
+ Title: Request data restoration from a backup
+
+description: Request the restoration of your Azure Cosmos DB data from a backup if you've lost or accidentally deleted a database or container.
+++++ Last updated : 03/21/2023+++
+# Request data restoration from an Azure Cosmos DB backup
++
+If you accidentally delete your database or a container, you can [file a support ticket](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade) or [call Azure support](https://azure.microsoft.com/support/options/) to restore the data from automatic online backups. Azure support is available only for selected plans, such as **Standard**, **Developer**, and plans higher than those tiers. Azure support isn't available with the **Basic** plan. To learn about different support plans, see the [Azure support plans](https://azure.microsoft.com/support/plans/) page.
+
+To restore a specific snapshot of the backup, Azure Cosmos DB requires that the data is available during the backup cycle for that snapshot.
+You should have the following details before requesting a restore:
+
+- Have your subscription ID ready.
+- Based on how your data was accidentally deleted or modified, be prepared to provide additional information. Having this information ready ahead of time minimizes back-and-forth with the support team, which can be costly in time-sensitive cases.
+- If the entire Azure Cosmos DB account is deleted, provide the name of the deleted account. If you've created another account with the same name as the deleted one, share that with the support team, because it helps to determine the right account to restore. It's recommended to file a separate support ticket for each deleted account, because it minimizes confusion about the state of each restore.
+- If one or more databases are deleted, provide the Azure Cosmos DB account name and the database names, and specify whether a new database with the same name exists.
+- If one or more containers are deleted, provide the Azure Cosmos DB account name, the database names, and the container names, and specify whether a container with the same name exists.
+- If you've accidentally deleted or corrupted your data, you should contact [Azure support](https://azure.microsoft.com/support/options/) within 8 hours so that the Azure Cosmos DB team can help you restore the data from the backups. **Before you create a support request to restore the data, make sure to [increase the backup retention](periodic-backup-modify-interval-retention.md) for your account to at least seven days. It's best to increase your retention within 8 hours of this event.** This way the Azure Cosmos DB support team has enough time to restore your account.
+
+In addition to the Azure Cosmos DB account name, database names, and container names, you should specify the point in time to use for the data restoration. It's important to be as precise as possible to help us determine the best available backups at that time. **It is also important to specify the time in UTC.**
+
+The following screenshot illustrates how to create a support request for a container (collection, graph, or table) to restore data by using the Azure portal. Provide other details, such as the type of data, the purpose of the restore, and the time when the data was deleted, to help us prioritize the request.
++
+## Considerations for restoring the data from a backup
+
+You may accidentally delete or modify your data in one of the following scenarios:
+
+- Delete the entire Azure Cosmos DB account.
+
+- Delete one or more Azure Cosmos DB databases.
+
+- Delete one or more Azure Cosmos DB containers.
+
+- Delete or modify the Azure Cosmos DB items (for example, documents) within a container. This specific case is typically referred to as data corruption.
+
+- A shared offer database or containers within a shared offer database are deleted or corrupted.
+
+Azure Cosmos DB can restore data in all the above scenarios. A new Azure Cosmos DB account is created to hold the restored data when restoring from a backup. The name of the new account, if it's not specified, has the format `<Azure_Cosmos_account_original_name>-restored1`. The last digit is incremented when multiple restores are attempted. You can't restore data to a precreated Azure Cosmos DB account.
+
+When you accidentally delete an Azure Cosmos DB account, we can restore the data into a new account with the same name, if the account name isn't in use. So, we recommend that you don't re-create the account after deleting it. Re-creating the account not only prevents the restored data from using the same name, but also makes it difficult to discover the right account to restore from.
+
+When you accidentally delete an Azure Cosmos DB database, we can restore the whole database or a subset of the containers within that database. It's also possible to select specific containers across databases and restore them to a new Azure Cosmos DB account.
+
+When you accidentally delete or modify one or more items within a container (the data corruption case), you need to specify the time to restore to. Time is important if there's data corruption, because the container is live and backups are still being taken; if you wait beyond the retention period (the default is eight hours), the backups are overwritten. To prevent the backups from being overwritten, increase the backup retention for your account to at least seven days. It's best to increase your retention within 8 hours of the data corruption.
+
+If you've accidentally deleted or corrupted your data, you should contact [Azure support](https://azure.microsoft.com/support/options/) within 8 hours so that the Azure Cosmos DB team can help you restore the data from the backups. This way the Azure Cosmos DB support team has enough time to restore your account.
+
+> [!NOTE]
+> After you restore the data, not all the source capabilities or settings are carried over to the restored account. The following settings are not carried over to the new account:
+>
+> - VNET access control lists
+> - Stored procedures, triggers and user-defined functions
+> - Multi-region settings
+> - Managed identity settings
+>
+
+If you assign throughput at the database level, the backup and restore process happens at the level of the entire database, not at the level of individual containers. In such cases, you can't select a subset of containers to restore.
+
+## Get the restore details from the restored account
+
+After the restore operation completes, you may want to know the source account details from which you restored or the restore time. You can get these details from the Azure portal, PowerShell, or CLI.
+
+### [Azure portal](#tab/azure-portal)
+
+Use the following steps to get the restore details from the Azure portal:
+
+1. Sign into the [Azure portal](https://portal.azure.com/) and navigate to the restored account.
+
+1. Open the **Tags** page.
+
+1. The **Tags** page should have the tags **restoredAtTimestamp** and **restoredSourceDatabaseAccountName**. These tags describe the timestamp and the source account name that were used for the periodic restore.
+
+### [Azure CLI](#tab/azure-cli)
+
+Run the following command to get the restore details. The `restoreSourceAccountName` and `restoreTimestamp` fields are within the `tags` field:
+
+```azurecli-interactive
+az cosmosdb show \
+ --resource-group <resource-group-name> \
+ --name <account-name>
+```
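+
+To return only the restore-related metadata, you can optionally filter the output with a JMESPath query. For example:
+
+```azurecli-interactive
+az cosmosdb show \
+    --resource-group <resource-group-name> \
+    --name <account-name> \
+    --query "tags"
+```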
+
+### [Azure PowerShell](#tab/azure-powershell)
+
+Import the Az.CosmosDB module and run the following command to get the restore details. The `restoreSourceAccountName` and `restoreTimestamp` are within the `tags` field:
+
+```powershell-interactive
+$parameters = @{
+ ResourceGroupName = "<resource-group-name>"
+ Name = "<account-name>"
+}
+Get-AzCosmosDBAccount @parameters
+```
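+
+To view only the tags, you can optionally index into the `Tags` property of the returned account object (a minimal sketch, assuming the property name used by the Az.CosmosDB module):
+
+```azurepowershell-interactive
+(Get-AzCosmosDBAccount @parameters).Tags
+```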
+++
+## Post-restore actions
+
+The primary goal of the data restore is to recover the data that you've accidentally deleted or modified. So, we recommend that you first inspect the content of the recovered data to ensure it contains what you expect. If everything looks good, you can migrate the data back to the primary account. Although it's possible to use the restored account as your new active account, we don't recommend that option if you have production workloads.
+
+After you restore the data, you get a notification with the name of the new account (it's typically in the format `<original-name>-restored1`) and the time that the account was restored to. The restored account has the same provisioned throughput and indexing policies, and it's in the same region as the original account. A user who is the subscription admin or a coadmin can see the restored account.
+
+### Migrate data to the original account
+
+The following are different ways to migrate data back to the original account:
+
+- Use the [Azure Data Factory](../data-factory/connector-azure-cosmos-db.md).
+- Use the [change feed](change-feed.md) in Azure Cosmos DB.
+- You can write your own custom code.
+
+We advise that you delete the restored container or database immediately after you migrate the data. If you don't delete the restored databases or containers, they incur costs for request units, storage, and egress.
+
+## Next steps
+
+- Learn more about [periodic backup and restore](periodic-backup-restore-introduction.md)
+- Learn more about [continuous backup](continuous-backup-restore-introduction.md)
cosmos-db Periodic Backup Restore Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/periodic-backup-restore-introduction.md
Title: Configure periodic backup
+ Title: Periodic backup/restore introduction
-description: Configure Azure Cosmos DB accounts with periodic backup and retention at a specified interval through the portal or a support ticket.
+description: Learn about Azure Cosmos DB accounts with periodic backup retention and restoration capabilities at a specified interval.
++ - Previously updated : 03/16/2023-- Last updated : 03/21/2023+
-# Configure Azure Cosmos DB account with periodic backup
+# Periodic backup and restore in Azure Cosmos DB
[!INCLUDE[NoSQL, MongoDB, Cassandra, Gremlin, Table](includes/appliesto-nosql-mongodb-cassandra-gremlin-table.md)]
-Azure Cosmos DB automatically takes backups of your data at regular intervals. The automatic backups are taken without affecting the performance or availability of the database operations. All the backups are stored separately in a storage service, and those backups are globally replicated for resiliency against regional disasters. With Azure Cosmos DB, not only your data, but also the backups of your data are highly redundant and resilient to regional disasters. The following steps show how Azure Cosmos DB performs data backup:
+Azure Cosmos DB automatically takes backups of your data at regular intervals. The automatic backups are taken without affecting the performance or availability of the database operations. All the backups are stored separately in a storage service, and those backups are globally replicated for resiliency against regional disasters. With Azure Cosmos DB, not only your data, but also the backups of your data are highly redundant and resilient to regional disasters.
+
+## How Azure Cosmos DB performs data backup
+
+The following steps show how Azure Cosmos DB performs data backup:
- Azure Cosmos DB automatically takes a full backup of your database every 4 hours and at any point of time, only the latest two backups are stored by default. If the default intervals aren't sufficient for your workloads, you can change the backup interval and the retention period from the Azure portal. You can change the backup configuration during or after the Azure Cosmos DB account is created. If the container or database is deleted, Azure Cosmos DB retains the existing snapshots of a given provisioned throughput container or shared throughput database for 30 days. If throughput is provisioned at the database level, the backup and restore process happens across the entire database scope.
Azure Cosmos DB automatically takes backups of your data at regular intervals. T
The following image shows an Azure Cosmos DB container with three primary physical partitions in West US. The container is backed up in a remote Azure Blob Storage account in West US and then replicated to East US:
- :::image type="content" source="./media/configure-periodic-backup-restore/automatic-backup.png" alt-text="Diagram of periodic full backups taken of multiple Azure Cosmos DB entities in geo-redundant Azure Storage." lightbox="./media/configure-periodic-backup-restore/automatic-backup.png" border="false":::
+ :::image type="content" source="./media/periodic-backup-restore-introduction/automatic-backup.png" alt-text="Diagram of periodic full backups taken of multiple Azure Cosmos DB entities in geo-redundant Azure Storage." lightbox="./media/periodic-backup-restore-introduction/automatic-backup.png" border="false":::
- The backups are taken without affecting the performance or availability of your application. Azure Cosmos DB performs data backup in the background without consuming any extra provisioned throughput (RUs) or affecting the performance and availability of your database.
-> [!NOTE]
-> For Azure Synapse Link enabled accounts, analytical store data isn't included in the backups and restores. When Synapse Link is enabled, Azure Cosmos DB will continue to automatically take backups of your data in the transactional store at a scheduled backup interval. Automatic backup and restore of your data in the analytical store is not supported at this time.
-
-## Backup storage redundancy
-
-By default, Azure Cosmos DB stores periodic mode backup data in geo-redundant [blob storage](../storage/common/storage-redundancy.md) that is replicated to a [paired region](../availability-zones/cross-region-replication-azure.md). You can update this default value using Azure PowerShell or CLI and define an Azure policy to enforce a specific storage redundancy option. To learn more, see [update backup storage redundancy](periodic-backup-update-storage-redundancy.md) article.
-
-Change the default geo-redundant backup storage to ensure that your backup data stays within the same region where your Azure Cosmos DB account is provisioned. You can configure the geo-redundant backup to use either locally redundant or zone-redundant storage. Storage redundancy mechanisms store multiple copies of your backups so that it's protected from planned and unplanned events. These events can include transient hardware failure, network or power outages, or massive natural disasters.
-
-You can configure storage redundancy for periodic backup mode at the time of account creation or update it for an existing account. You can use the following three data redundancy options in periodic backup mode:
--- **Geo-redundant backup storage:** This option copies your data asynchronously across the paired region.--- **Zone-redundant backup storage:** This option copies your data synchronously across three Azure availability zones in the primary region. For more information, see [Zone-redundant storage.](../storage/common/storage-redundancy.md#redundancy-in-the-primary-region)--- **Locally-redundant backup storage:** This option copies your data synchronously three times within a single physical location in the primary region. For more information, see [locally redundant storage.](../storage/common/storage-redundancy.md#redundancy-in-the-primary-region)-
-> [!NOTE]
-> Zone-redundant storage is currently available only in [specific regions](../availability-zones/az-region.md). Depending on the region you select for a new account or the region you have for an existing account; the zone-redundant option will not be available.
->
-> Updating backup storage redundancy will not have any impact on backup storage pricing.
-
-## Modify the backup interval and retention period
-
-Azure Cosmos DB automatically takes a full backup of your data for every 4 hours and at any point of time, the latest two backups are stored. This configuration is the default option and it's offered without any extra cost. You can change the default backup interval and retention period during the Azure Cosmos DB account creation or after the account is created. The backup configuration is set at the Azure Cosmos DB account level and you need to configure it on each account. After you configure the backup options for an account, it's applied to all the containers within that account. You can modify these settings using the Azure portal as described later in this article, or via [PowerShell](periodic-backup-restore-introduction.md#modify-backup-options-using-azure-powershell) or the [Azure CLI](periodic-backup-restore-introduction.md#modify-backup-options-using-azure-cli).
-
-If you've accidentally deleted or corrupted your data, **before you create a support request to restore the data, make sure to increase the backup retention for your account to at least seven days. It's best to increase your retention within 8 hours of this event.** This way, the Azure Cosmos DB team has enough time to restore your account.
-
-### Modify backup options using Azure portal - Existing account
-
-Use the following steps to change the default backup options for an existing Azure Cosmos DB account:
-
-1. Sign into the [Azure portal.](https://portal.azure.com/)
-
-1. Navigate to your Azure Cosmos DB account and open the **Backup & Restore** pane. Update the backup interval and the backup retention period as required.
-
- **Backup Interval** - It's the interval at which Azure Cosmos DB attempts to take a backup of your data. Backup takes a nonzero amount of time and in some case it could potentially fail due to downstream dependencies. Azure Cosmos DB tries its best to take a backup at the configured interval, however, it doesn't guarantee that the backup completes within that time interval. You can configure this value in hours or minutes. Backup Interval can't be less than 1 hour and greater than 24 hours. When you change this interval, the new interval takes into effect starting from the time when the last backup was taken.
-
- **Backup Retention** - It represents the period where each backup is retained. You can configure it in hours or days. The minimum retention period can't be less than two times the backup interval (in hours) and it can't be greater than 720 hours.
-
- - **Copies of data retained** - By default, two backup copies of your data are offered at free of charge. There's an extra charge if you need more than two copies. See the Consumed Storage section in the [Pricing page](https://azure.microsoft.com/pricing/details/cosmos-db/) to know the exact price for extra copies.
-
- - **Backup storage redundancy** - Choose the required storage redundancy option, see the [Backup storage redundancy](#backup-storage-redundancy) section for available options. By default, your existing periodic backup mode accounts have geo-redundant storage if the region where the account is being provisioned supports it. Otherwise, the account fallback to the highest redundancy option available. You can choose other storage such as locally redundant to ensure the backup isn't replicated to another region. The changes made to an existing account are applied to only future backups. After the backup storage redundancy of an existing account is updated, it may take up to twice the backup interval time for the changes to take effect, and **you will lose access to restore the older backups immediately.**
-
- > [!NOTE]
- > You must have the Azure [Azure Cosmos DB Operator role](../role-based-access-control/built-in-roles.md#cosmos-db-operator) role assigned at the subscription level to configure backup storage redundancy.
-
- :::image type="content" source="./media/configure-periodic-backup-restore/configure-backup-options-existing-accounts.png" alt-text="Screenshot of configuration options including backup interval, retention, and storage redundancy for an existing Azure Cosmos DB account." border="true":::
-
-### Modify backup options using Azure portal - New account
-
-When provisioning a new account, from the **Backup Policy** tab, select **Periodic*** backup policy. The periodic policy allows you to configure the backup interval, backup retention, and backup storage redundancy. For example, you can choose **locally redundant backup storage** or **Zone redundant backup storage** options to prevent backup data replication outside your region.
--
-### Modify backup options using Azure PowerShell
-
-Use the following PowerShell cmdlet to update the periodic backup options:
-
-```azurepowershell-interactive
-Update-AzCosmosDBAccount -ResourceGroupName "resourceGroupName" `
- -Name "accountName" `
- -BackupIntervalInMinutes 480 `
- -BackupRetentionIntervalInHours 16
-```
-
-### Modify backup options using Azure CLI
-
-Use the following CLI command to update the periodic backup options:
-
-```azurecli-interactive
-az cosmosdb update --resource-group "resourceGroupName" \
- --name "accountName" \
- --backup-interval 240 \
- --backup-retention 8
-```
-
-### Modify backup options using Resource Manager template
-
-When deploying the Resource Manager template, change the periodic backup options within the `backupPolicy` object:
+## Azure Cosmos DB backup with Azure Synapse Link
-```json
- "backupPolicy": {
- "type": "Periodic",
- "periodicModeProperties": {
- "backupIntervalInMinutes": 240,
- "backupRetentionIntervalInHours": 8,
- "backupStorageRedundancy": "Zone"
- }
-}
-```
+For Azure Synapse Link enabled accounts, analytical store data isn't included in the backups and restores. When Azure Synapse Link is enabled, Azure Cosmos DB continues to automatically take backups of your data in the transactional store at a scheduled backup interval. Automatic backup and restore of your data in the analytical store isn't supported at this time.
-## Request data restore from a backup
+## Understanding the cost of backups
-If you accidentally delete your database or a container, you can [file a support ticket](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade) or [call the Azure support](https://azure.microsoft.com/support/options/) to restore the data from automatic online backups. Azure support is available for selected plans only such as **Standard**, **Developer**, and plans higher than those tiers. Azure support isn't available with **Basic** plan. To learn about different support plans, see the [Azure support plans](https://azure.microsoft.com/support/plans/) page.
+Two backups are provided free and extra backups are charged according to the region-based pricing for backup storage described in [backup storage pricing](https://azure.microsoft.com/pricing/details/cosmos-db/).
-To restore a specific snapshot of the backup, Azure Cosmos DB requires that the data is available during the backup cycle for that snapshot.
-You should have the following details before requesting a restore:
+For example, consider a scenario where Backup Retention is configured to **240 hours** (or **10 days**) and Backup Interval is configured to **24 hours**. This configuration implies that there are **10** copies of the backup data. Because the first two copies are free, you're charged for **8** copies. If you have **1 TB** of data in an Azure region with an illustrative backup storage rate of $0.12 per GB, the cost for backup storage in a given month would be `0.12 * 1000 * 8`, or $960.
-- Have your subscription ID ready.-- Based on how your data was accidentally deleted or modified, you should prepare to have additional information. It's advised that you have the information available ahead to minimize the back-and-forth that can be detrimental in some time sensitive cases.-- If the entire Azure Cosmos DB account is deleted, you need to provide the name of the deleted account. If you create another account with the same name as the deleted account, share that with the support team because it helps to determine the right account to choose. It's recommended to file different support tickets for each deleted account because it minimizes the confusion for the state of restore.-- If one or more databases are deleted, you should provide the Azure Cosmos DB account, and the Azure Cosmos DB database names and specify if a new database with the same name exists.-- If one or more containers are deleted, you should provide the Azure Cosmos DB account name, database names, and the container names. And specify if a container with the same name exists.-- If you've accidentally deleted or corrupted your data, you should contact [Azure support](https://azure.microsoft.com/support/options/) within 8 hours so that the Azure Cosmos DB team can help you restore the data from the backups. **Before you create a support request to restore the data, make sure to [increase the backup retention](#modify-the-backup-interval-and-retention-period) for your account to at least seven days. ItΓÇÖs best to increase your retention within 8 hours of this event.** This way the Azure Cosmos DB support team has enough time to restore your account.-
-In addition to Azure Cosmos DB account name, database names, container names, you should specify the point in time to use for data restoration. It's important to be as precise as possible to help us determine the best available backups at that time. **It is also important to specify the time in UTC.**
-
-The following screenshot illustrates how to create a support request for a container(collection/graph/table) to restore data by using Azure portal. Provide other details such as type of data, purpose of the restore, time when the data was deleted to help us prioritize the request.
--
-## Considerations for restoring the data from a backup
-
-You may accidentally delete or modify your data in one of the following scenarios:
--- Delete the entire Azure Cosmos DB account.--- Delete one or more Azure Cosmos DB databases.--- Delete one or more Azure Cosmos DB containers.--- Delete or modify the Azure Cosmos DB items (for example, documents) within a container. This specific case is typically referred to as data corruption.--- A shared offer database or containers within a shared offer database are deleted or corrupted.-
-Azure Cosmos DB can restore data in all the above scenarios. A new Azure Cosmos DB account is created to hold the restored data when restoring from a backup. The name of the new account, if it's not specified, has the format `<Azure_Cosmos_account_original_name>-restored1`. The last digit is incremented when multiple restores are attempted. You can't restore data to a precreated Azure Cosmos DB account.
-
-When you accidentally delete an Azure Cosmos DB account, we can restore the data into a new account with the same name, if the account name isn't in use. So, we recommend that you don't re-create the account after deleting it. Because it not only prevents the restored data to use the same name, but also makes discovering the right account to restore from difficult.
-
-When you accidentally delete an Azure Cosmos DB database, we can restore the whole database or a subset of the containers within that database. It's also possible to select specific containers across databases and restore them to a new Azure Cosmos DB account.
-
-When you accidentally delete or modify one or more items within a container (the data corruption case), you need to specify the time to restore to. Time is important if there's data corruption. Because the container is live, the backup is still running, so if you wait beyond the retention period (the default is eight hours) the backups would be overwritten. In order to prevent the backup from being overwritten, increase the backup retention for your account to at least seven days. It's best to increase your retention within 8 hours from the data corruption.
-
-If you've accidentally deleted or corrupted your data, you should contact [Azure support](https://azure.microsoft.com/support/options/) within 8 hours so that the Azure Cosmos DB team can help you restore the data from the backups. This way the Azure Cosmos DB support team has enough time to restore your account.
-
-> [!NOTE]
-> After you restore the data, not all the source capabilities or settings are carried over to the restored account. The following settings are not carried over to the new account:
->
-> - VNET access control lists
-> - Stored procedures, triggers and user-defined functions
-> - Multi-region settings
-> - Managed identity settings
->
-
-If you assign throughput at the database level, the backup and restore process in this case happen at the entire database level, and not at the individual containers level. In such cases, you can't select a subset of containers to restore.
-
-## Required permissions to change retention or restore from the portal
+## Required permissions to manage retention or restoration
Principals who are part of the role [CosmosdbBackupOperator](../role-based-access-control/built-in-roles.md#cosmosbackupoperator), owner, or contributor are allowed to request a restore or change the retention period.
-## Understanding Costs of extra backups
-
-Two backups are provided free and extra backups are charged according to the region-based pricing for backup storage described in [backup storage pricing](https://azure.microsoft.com/pricing/details/cosmos-db/). For example, consider a scenario where Backup Retention is configured to **240 hrs** (or 10 days) and Backup Interval is configured to **24** hrs. This configuration implies that there are 10 copies of the backup data. If you have **1 TB** of data in the West US 2 region, the cost would be `0.12 * 1000 * 8` for backup storage in given month.
-
-## Get the restore details from the restored account
-
-After the restore operation completes, you may want to know the source account details from which you restored or the restore time. You can get these details from the Azure portal, PowerShell, or CLI.
-
-### Use Azure portal
-
-Use the following steps to get the restore details from Azure portal:
-
-1. Sign into the [Azure portal](https://portal.azure.com/) and navigate to the restored account.
-
-1. Open the **Tags** page. This page should have the tags **restoredAtTimestamp** and **restoredSourceDatabaseAccountName**. These tags describe the timestamp and the source account name that were used for the periodic restore.
-
-### Use Azure CLI
-
-Run the following command to get the restore details. The `restoreSourceAccountName` and `restoreTimestamp` fields are within the `tags` field:
-
-```azurecli-interactive
-az cosmosdb show --name MyCosmosDBDatabaseAccount --resource-group MyResourceGroup
-```
-
-### Use PowerShell
-
-Import the Az.CosmosDB module and run the following command to get the restore details. The `restoreSourceAccountName` and `restoreTimestamp` are within the `tags` field:
-
-```powershell-interactive
-Get-AzCosmosDBAccount -ResourceGroupName MyResourceGroup -Name MyCosmosDBDatabaseAccount
-```
-
-## Options to manage your own backups
+## Manually managing periodic backups in Azure Cosmos DB
With Azure Cosmos DB API for NoSQL accounts, you can also maintain your own backups by using one of the following approaches: -- Use [Azure Data Factory](../data-factory/connector-azure-cosmos-db.md) to move data periodically to a storage solution of your choice.--- Use Azure Cosmos DB [change feed](change-feed.md) to read data periodically for full backups or for incremental changes, and store it in your own storage.-
-## Post-restore actions
-
-The primary goal of the data restore is to recover the data that you've accidentally deleted or modified. So, we recommend that you first inspect the content of the recovered data to ensure it contains what you are expecting. If everything looks good, you can migrate the data back to the primary account. Although it's possible to use the restored account as your new active account, it's not a recommended option if you have production workloads.
-
-After you restore the data, you get a notification about the name of the new account (it's typically in the format `<original-name>-restored1`) and the time when the account was restored to. The restored account has the same provisioned throughput, indexing policies and it is in same region as the original account. A user who is the subscription admin or a coadmin can see the restored account.
-
-### Migrate data to the original account
+### Azure Data Factory
-The following are different ways to migrate data back to the original account:
+Use [Azure Data Factory](../data-factory/connector-azure-cosmos-db.md) to move data periodically to a storage solution of your choice.
-- Use the [Azure Data Factory](../data-factory/connector-azure-cosmos-db.md).-- Use the [change feed](change-feed.md) in Azure Cosmos DB.-- You can write your own custom code.
+### Azure Cosmos DB change feed
-It's advised that you delete the container or database immediately after migrating the data. If you don't delete the restored databases or containers, they incur cost for request units, storage, and egress.
+Use Azure Cosmos DB [change feed](change-feed.md) to read data periodically for full backups or for incremental changes, and store it in your own storage.
## Next steps -- To make a restore request, contact Azure Support by [filing a ticket in the Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade).-- [Create account with continuous backup](provision-account-continuous-backup.md).-- [Restore continuous backup account](restore-account-continuous-backup.md).
+> [!div class="nextstepaction"]
+> [Periodic backup storage redundancy](periodic-backup-storage-redundancy.md)
cosmos-db Periodic Backup Storage Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/periodic-backup-storage-redundancy.md
+
+ Title: Periodic backup storage redundancy
+
+description: Learn how to configure Azure Storage-based data redundancy for periodic backup in Azure Cosmos DB accounts.
+++++ Last updated : 03/21/2023+++
+# Periodic backup storage redundancy in Azure Cosmos DB
++
+By default, Azure Cosmos DB stores periodic mode backup data in geo-redundant [Azure Blob Storage](../storage/common/storage-redundancy.md). The blob storage is then, by default, replicated to a [paired region](../availability-zones/cross-region-replication-azure.md). You can update this default value using Azure PowerShell or Azure CLI and define an Azure policy to enforce a specific storage redundancy option. For more information, see [update backup storage redundancy](periodic-backup-update-storage-redundancy.md).
+
+## Best practices
+
+Change the default geo-redundant backup storage to ensure that your backup data stays within the same region where your Azure Cosmos DB account is provisioned. You can configure the geo-redundant backup to use either locally redundant or zone-redundant storage. Storage redundancy mechanisms store multiple copies of your backups so that your backup data is protected from planned and unplanned events. These events can include transient hardware failures, network or power outages, or massive natural disasters.
+
+## Redundancy options
+
+You can configure storage redundancy for periodic backup mode at the time of account creation or update it for an existing account. You can use the following three data redundancy options in periodic backup mode:
+
+- **Geo-redundant backup storage:** This option copies your data asynchronously across the paired region.
+
+- **Zone-redundant backup storage:** This option copies your data synchronously across three Azure availability zones in the primary region. For more information, see [Zone-redundant storage.](../storage/common/storage-redundancy.md#redundancy-in-the-primary-region)
+
+- **Locally-redundant backup storage:** This option copies your data synchronously three times within a single physical location in the primary region. For more information, see [locally redundant storage.](../storage/common/storage-redundancy.md#redundancy-in-the-primary-region)
+
+> [!NOTE]
+> Zone-redundant storage is currently available only in [specific regions](../availability-zones/az-region.md). Depending on the region you select for a new account, or the region of an existing account, the zone-redundant option might not be available.
+>
+> Updating backup storage redundancy will not have any impact on backup storage pricing.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Update the redundancy of backup storage](periodic-backup-update-storage-redundancy.md)
cosmos-db Periodic Backup Update Storage Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/periodic-backup-update-storage-redundancy.md
Title: Update backup storage redundancy for Azure Cosmos DB periodic backup accounts
-description: Learn how to update the backup storage redundancy using Azure CLI, and PowerShell. You can also configure an Azure policy on your accounts to enforce the required storage redundancy.
+ Title: Update periodic backup storage redundancy
+
+description: Update the backup storage redundancy using Azure CLI or Azure PowerShell and enforce a minimum storage redundancy using Azure Policy.
++ - Previously updated : 12/03/2021-- Last updated : 03/21/2023+
-# Update backup storage redundancy for Azure Cosmos DB periodic backup accounts
+# Update periodic backup storage redundancy for Azure Cosmos DB
+ [!INCLUDE[NoSQL, MongoDB, Cassandra, Gremlin, Table](includes/appliesto-nosql-mongodb-cassandra-gremlin-table.md)] By default, Azure Cosmos DB stores periodic mode backup data in geo-redundant [blob storage](../storage/common/storage-redundancy.md) that is replicated to a [paired region](../availability-zones/cross-region-replication-azure.md). You can override the default backup storage redundancy. This article explains how to update the backup storage redundancy using Azure CLI and PowerShell. It also shows how to configure an Azure policy on your accounts to enforce the required storage redundancy.
-## Update using Azure portal
+## Prerequisites
-Use the following steps to update backup storage redundancy from the Azure portal:
+- An existing Azure Cosmos DB account.
+ - If you have an Azure subscription, [create a new account](nosql/how-to-create-account.md?tabs=azure-portal).
+ - If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+ - Alternatively, you can [try Azure Cosmos DB free](try-free.md) before you commit.
+
+## Update storage redundancy
+
+Use the following steps to update backup storage redundancy.
+
+### [Azure portal](#tab/azure-portal)
1. Sign into the [Azure portal](https://portal.azure.com/) and navigate to your Azure Cosmos DB account.
-1. Open the **Backup & Restore** pane, update the backup storage redundancy and select **Submit**. It takes few minutes for the operation to complete:
+1. Open the **Backup & Restore** pane, update the backup storage redundancy, and select **Submit**. It takes a few minutes for the operation to complete.
- :::image type="content" source="./media/update-backup-storage-redundancy/update-backup-storage-redundancy-portal.png" alt-text="Update backup storage redundancy from the Azure portal":::
+ :::image type="content" source="./media/periodic-backup-update-storage-redundancy/update-existing-account-portal.png" lightbox="./media/periodic-backup-update-storage-redundancy/update-existing-account-portal.png" alt-text="Screenshot of the update backup storage redundancy page from the Azure portal.":::
-## Update using CLI
+### [Azure CLI](#tab/azure-cli)
-Use the following steps to update the backup storage redundancy on an existing account using Azure CLI:
+1. Ensure that you have Azure CLI version **2.30.0** or later installed. If you have the `cosmosdb-preview` extension installed, make sure to remove it.
-1. Install the latest version if Azure CLI or a version higher than or equal to 2.30.0. If you have the "cosmosdb-preview" extension installed, make sure to remove it.
+1. Use the [`az cosmosdb locations show`](/cli/azure/cosmosdb/locations#az-cosmosdb-locations-show) command to get the backup redundancy options available in the regions where your account exists.
-1. Use the following command to get the backup redundancy options available in the regions where your account exists:
+ ```azurecli-interactive
+ az cosmosdb locations show \
+ --location <region-name>
+ ```
- ```azurecli-interactive
- az cosmosdb locations show --location <region_Name>
- ```
+ The output should include JSON similar to this example:
- ```bash
+ ```json
{
- "id": "subscriptionId/<Subscription_ID>/providers/Microsoft.DocumentDB/locations/eastus/",
- "name": "East US",
- "properties": {
- "backupStorageRedundancies": [
- "Geo",
- "Zone",
- "Local"
- ],
- "isResidencyRestricted": false,
- "supportsAvailabilityZone": true
- },
- "type": "Microsoft.DocumentDB/locations"
+ "id": "subscriptionId/<Subscription_ID>/providers/Microsoft.DocumentDB/locations/eastus/",
+ "name": "East US",
+ "properties": {
+ "backupStorageRedundancies": [
+ "Geo",
+ "Zone",
+ "Local"
+ ],
+ "isResidencyRestricted": false,
+ "supportsAvailabilityZone": true
+ },
+ "type": "Microsoft.DocumentDB/locations"
}
- ```
+ ```
- The previous command shows a list of backup redundancies available in the specific region. Supported values are displayed in the `backupStorageRedundancies` property. For example some regions such as "East US" support three redundancy options "Geo", "Zone", and "Local" whereas some regions like "UAE North" support only "Geo" and "Local" redundancy options. Before updating, choose the backup storage redundancy option that is supported in all the regions where your account exists.
+ > [!NOTE]
+    > The previous command shows a list of backup redundancies available in the specified region. Supported values are displayed in the `backupStorageRedundancies` property. For example, some regions may support up to three redundancy options: **Geo**, **Zone**, and **Local**. Other regions may support a subset of these options. Before updating, choose a backup storage redundancy option that is supported in all the regions your Azure Cosmos DB account uses.
-1. Run the following command with the chosen backup redundancy option to update the backup redundancy on an existing account:
+1. Use the [`az cosmosdb update`](/cli/azure/cosmosdb#az-cosmosdb-update) command with the chosen backup redundancy option to update the backup redundancy on an existing account.
- ```azurecli-interactive
- az cosmosdb update -n <account_Name> -g <resource_Group> --backup-redundancy "Geo"
- ```
+ ```azurecli-interactive
+ az cosmosdb update \
+ --resource-group <resource-group-name> \
+ --name <account_name> \
+ --backup-redundancy Zone
+ ```
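+
+    You can optionally confirm the updated setting by querying the account. This check is a sketch that assumes the redundancy is surfaced at the `backupPolicy.periodicModeProperties.backupStorageRedundancy` path of the account JSON:
+
+    ```azurecli-interactive
+    az cosmosdb show \
+        --resource-group <resource-group-name> \
+        --name <account-name> \
+        --query "backupPolicy.periodicModeProperties.backupStorageRedundancy"
+    ```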
-1. Run the following command to create a new account with the chosen backup redundancy option:
+1. Alternatively, use the [`az cosmosdb create`](/cli/azure/cosmosdb#az-cosmosdb-create) command to create a new account with the chosen backup redundancy option.
- ```azurecli-interactive
- az cosmosdb create -n <account_Name> -g <resource_Group> --backup-redundancy "Geo" --locations regionName=westus
- ```
+ ```azurecli-interactive
+ az cosmosdb create \
+ --resource-group <resource-group-name> \
+ --name <account-name> \
+ --backup-redundancy Geo \
+ --locations regionName=<azure-region>
+ ```
-## Update using PowerShell
+### [Azure PowerShell](#tab/azure-powershell)
-1. Install the latest version of Azure PowerShell or a version higher than or equal to 1.4.0
+1. Install version **1.4.0** or later of the Az.CosmosDB module for Azure PowerShell.
- ```powershell-interactive
- Install-Module -Name Az.CosmosDB -RequiredVersion 1.4.0
- ```
+ ```azurepowershell-interactive
+ $parameters = @{
+ Name = "Az.CosmosDB"
+ RequiredVersion = "1.4.0"
+ }
+ Install-Module @parameters
+ ```
+
+1. Use the [`Get-AzCosmosDBLocation`](/powershell/module/az.cosmosdb/get-azcosmosdblocation) cmdlet to get the backup redundancy options available in the regions where your account exists.
-1. Use the following command to get the backup redundancy options available in the regions where your account exists:
+ ```azurepowershell-interactive
+ $parameters = @{
+ Location = "<azure-region>"
+ }
+ (Get-AzCosmosDBLocation @parameters).Properties
+ ```
- ```powershell-interactive
- $location = Get-AzCosmosDBLocation -Location <region_Name>
- $location.Properties.BackupStorageRedundancies
- ```
+ The output should include content similar to this example:
- The previous command shows a list of backup redundancies available in the specific region. Supported values are displayed in the `backupStorageRedundancies` property. For example some regions such as "East US" support three redundancy options "Geo", "Zone", and "Local" whereas some regions like "UAE North" support only "Geo" and "Local" redundancy options. Before updating, choose the backup storage redundancy option that is supported in all the regions where your account exists.
+ ```azurepowershell
+ SupportsAvailabilityZone IsResidencyRestricted BackupStorageRedundancies
+    ------------------------ --------------------- -------------------------
+ True False {Geo, Zone, Local}
+ ```
-1. Run the following command with the chosen backup redundancy option to update the backup redundancy on an existing account:
+ > [!NOTE]
+    > The previous command shows a list of backup redundancies available in the specified region. Supported values are displayed in the `BackupStorageRedundancies` property. For example, some regions may support up to three redundancy options: **Geo**, **Zone**, and **Local**. Other regions may support a subset of these options. Before updating, choose a backup storage redundancy option that is supported in all the regions your Azure Cosmos DB account uses.
- ```powershell-interactive
- Update-AzCosmosDBAccount `
- -Name <account_Name> `
- -ResourceGroupName <resource_Group> `
- -BackupStorageRedundancy "Geo"
- ```
+1. Use the [`Update-AzCosmosDBAccount`](/powershell/module/az.cosmosdb/update-azcosmosdbaccount) cmdlet with the chosen backup redundancy option to update the backup redundancy on an existing account:
-1. Run the following command to create a new account with the chosen backup redundancy option:
+ ```azurepowershell-interactive
+ $parameters = @{
+        ResourceGroupName = "<resource-group-name>"
+ Name = "<account-name>"
+ BackupStorageRedundancy = "Zone"
+ }
+ Update-AzCosmosDBAccount @parameters
+ ```
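+
+    You can optionally confirm the updated setting on the returned account object (a sketch, assuming the `BackupPolicy` property exposed by `Get-AzCosmosDBAccount`):
+
+    ```azurepowershell-interactive
+    $parameters = @{
+        ResourceGroupName = "<resource-group-name>"
+        Name = "<account-name>"
+    }
+    (Get-AzCosmosDBAccount @parameters).BackupPolicy
+    ```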
+
+1. Alternatively, use the [`New-AzCosmosDBAccount`](/powershell/module/az.cosmosdb/new-azcosmosdbaccount) cmdlet to create a new account with the chosen backup redundancy option:
+
+ ```azurepowershell-interactive
+ $parameters = @{
+ ResourceGroupName = "<resource-group-name>"
+ Name = "<account-name>"
+ Location = "<azure-region>"
+ BackupPolicyType = "Periodic"
+ BackupStorageRedundancy = "Geo"
+ }
+ New-AzCosmosDBAccount @parameters
+ ```
- ```powershell-interactive
- New-AzCosmosDBAccount `
- -Name <account_Name> `
- -ResourceGroupName <resource_Group> `
- -Location <region_Name> `
- -BackupPolicyType Periodic`
- -BackupStorageRedundancy "Geo"
+
- ```
+## Add an Azure Policy for backup storage redundancy
-## Add a policy for the backup storage redundancy
+Azure Policy helps you to enforce organizational standards and to assess compliance at-scale. For more information, see [what is Azure Policy?](../governance/policy/overview.md).
-Azure Policy helps you to enforce organizational standards and to assess compliance at-scale. The following sample shows how to add an Azure policy for the database accounts to have a backup redundancy of type "Zone".
+The following sample shows how to add an Azure Policy definition that audits Azure Cosmos DB accounts whose backup redundancy is configured to `Local`.
```json "parameters": {},
- "policyRule": {
- "if": {
- "allOf": [
- {
- "field": "Microsoft.DocumentDB/databaseAccounts/backupPolicy.periodicModeProperties.backupStorageRedundancy",
- "match": "Zone"
- }
- ]
- },
- "then": {
- "effect": "audit"
+"policyRule": {
+ "if": {
+ "allOf": [
+ {
+ "field": "Microsoft.DocumentDB/databaseAccounts/backupPolicy.periodicModeProperties.backupStorageRedundancy",
+ "match": "Local"
}
- }
+ ]
+ },
+ "then": {
+ "effect": "audit"
+ }
+}
``` ## Next steps
-* Provision an Azure Cosmos DB account with [periodic backup mode](periodic-backup-restore-introduction.md).
-* Provision an account with continuous backup using [Azure portal](provision-account-continuous-backup.md#provision-portal), [PowerShell](provision-account-continuous-backup.md#provision-powershell), [CLI](provision-account-continuous-backup.md#provision-cli), or [Azure Resource Manager](provision-account-continuous-backup.md#provision-arm-template).
-* Restore continuous backup account using [Azure portal](restore-account-continuous-backup.md#restore-account-portal), [PowerShell](restore-account-continuous-backup.md#restore-account-powershell), [CLI](restore-account-continuous-backup.md#restore-account-cli), or [Azure Resource Manager](restore-account-continuous-backup.md#restore-arm-template).
+> [!div class="nextstepaction"]
+> [Modify the backup interval and retention period](periodic-backup-modify-interval-retention.md)
cosmos-db Concepts Compute Start Stop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/concepts-compute-start-stop.md
+
+ Title: Start and stop cluster compute - Azure Cosmos DB for PostgreSQL
+description: Learn about how to start and stop compute on the cluster nodes
+++++ Last updated : 3/22/2023+
+# Start and stop compute on cluster nodes
++
+Azure Cosmos DB for PostgreSQL allows you to stop compute on all nodes in a cluster. Compute billing is paused when the cluster is stopped and resumes when compute is started again.
+
+> [!NOTE]
+> Billing for provisioned storage on all cluster nodes continues while the cluster's compute is stopped.
+
+## Managing compute state on cluster nodes
+
+You can stop compute on a cluster for as long as you need.
+
+You can perform management operations, such as compute or storage scaling, adding a worker node, or updating networking settings, only on clusters whose compute is started.
+
+If a cluster has [high availability (HA)](./concepts-high-availability.md) enabled, the start and stop operations apply to compute on all primary and standby nodes in the cluster. You can start and stop compute on the primary cluster and on any of its [read replicas](./concepts-read-replicas.md) independently.
+
+## Next steps
+
+- Learn how to start and stop [cluster compute in Azure Cosmos DB for PostgreSQL](./how-to-start-stop-cluster.md)
+- Learn about [pricing options in Azure Cosmos DB for PostgreSQL](./resources-pricing.md)
+
cosmos-db How To Start Stop Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/how-to-start-stop-cluster.md
+
+ Title: Start and stop cluster - Azure Cosmos DB for PostgreSQL
+description: How to start and stop compute on the cluster nodes
+++++ Last updated : 3/03/2023+
+# Start and stop compute on a cluster
++
+Azure Cosmos DB for PostgreSQL allows you to stop and start compute on all nodes in a cluster.
+
+## Stop a running cluster
+
+1. In the [Azure portal](https://portal.azure.com/), choose the Azure Cosmos DB for PostgreSQL cluster that you want to stop.
+
+2. From the **Overview** page, click the **Stop** button in the toolbar.
+
+> [!NOTE]
+> Once the cluster is stopped, other management operations are not available for the cluster.
++
+## Start a stopped cluster
+
+1. In the [Azure portal](https://portal.azure.com/), choose the Azure Cosmos DB for PostgreSQL cluster that you want to start.
+
+2. From the **Overview** page, click the **Start** button in the toolbar.
+
+> [!NOTE]
+> Once the cluster is started, all management operations are now available for the cluster.
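+
+The portal is the documented way to start and stop compute. As an alternative, you may be able to invoke the underlying management REST operations with `az rest`. This sketch assumes the `Microsoft.DBforPostgreSQL/serverGroupsv2` resource type, its `stop` and `start` actions, and the `2022-11-08` API version apply to your cluster:
+
+```azurecli-interactive
+# Stop compute on a running cluster (assumed resource type, action names, and API version).
+az rest --method post \
+    --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.DBforPostgreSQL/serverGroupsv2/<cluster-name>/stop?api-version=2022-11-08"
+
+# Start compute on a stopped cluster.
+az rest --method post \
+    --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.DBforPostgreSQL/serverGroupsv2/<cluster-name>/start?api-version=2022-11-08"
+```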
+
+## Next steps
+
+- Learn more about [compute start and stop in Azure Cosmos DB for PostgreSQL](./concepts-compute-start-stop.md).
+
cosmos-db Howto Ingest Azure Data Factory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-ingest-azure-data-factory.md
new to Data Factory, here's a quick guide on how to get started:
6. Configure **Sink**. 1. On the **Activities** page, select the **Sink** tab. Select **New** to create a sink dataset.
- 2. In the **New Dataset** dialog box, select **Azure Cosmos DB for PostgreSQL**, and then select **Continue**.
+ 2. In the **New Dataset** dialog box, select **Azure Database for PostgreSQL**, and then select **Continue**.
3. On the **Set properties** page, under **Linked service**, select **New**. 4. On the **New linked service** page, enter a name for the linked service, and select your cluster from the **Server name** list. Add connection details and test the connection.
cosmos-db Product Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/product-updates.md
Title: Product updates for Azure Cosmos DB for PostgreSQL
-description: New features and features in preview
+description: Release notes, new features and features in preview
Previously updated : 02/25/2023 Last updated : 03/23/2023 # Product updates for Azure Cosmos DB for PostgreSQL
Updates that donΓÇÖt directly affect the internals of a cluster are rolled out g
Updates that change cluster internals, such as installing a [new minor PostgreSQL version](https://www.postgresql.org/developer/roadmap/), are delivered to existing clusters as part of the next [scheduled maintenance](concepts-maintenance.md) event. Such updates are available immediately to newly created clusters.
+### March 2023
+
+* General availability: Cluster compute [start / stop functionality](./concepts-compute-start-stop.md) is now supported across all configurations.
+
### February 2023 * General availability: 4 TiB, 8 TiB, and 16 TiB storage per node is now supported for [multi-node configurations](resources-compute.md#multi-node-cluster) in addition to previously supported 0.5 TiB, 1 TiB, and 2 TiB storage sizes. * See cost details for your region in 'Multi-node' section of [the Azure Cosmos DB for PostgreSQL pricing page](https://azure.microsoft.com/pricing/details/cosmos-db/postgresql/).
-* General availability: [Latest minor PostgreSQL version updates](reference-versions.md#postgresql-versions) (11.19, 12.14, 13.10, 14.7, and 15.2) are now available in all supported regions.
- * Existing clusters will get minor Postgres version update with [the next maintenance](concepts-maintenance.md)
- * Major Postgres and minor Citus [version upgrades](concepts-upgrade.md) can be performed in-place.
- ### January 2023
cost-management-billing Automate Budget Creation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/automate/automate-budget-creation.md
Languages supported by a culture code:
| zh-tw | Chinese (Traditional, Taiwan) | | cs-cz | Czech (Czech Republic) | | pl-pl | Polish (Poland) |
-| tr-tr | Turkish (Turkey) |
+| tr-tr | Turkish (Türkiye) |
| da-dk | Danish (Denmark) | | en-gb | English (United Kingdom) | | hu-hu | Hungarian (Hungary) |
cost-management-billing Understand Usage Details Fields https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/automate/understand-usage-details-fields.md
description: This article describes the fields in the usage data files. Previously updated : 12/09/2022 Last updated : 03/27/2023
The following table describes the important terms used in the latest version of
| ConsumedService | All | Name of the service the charge is associated with. | | CostCenter¹ | EA, MCA | The cost center defined for the subscription for tracking costs (only available in open billing periods for MCA accounts). | | Cost | EA, pay-as-you-go | See CostInBillingCurrency. |
+| CostAllocationRuleName | EA, MCA | Name of the Cost Allocation rule that's applicable to the record. |
| CostInBillingCurrency | MCA | Cost of the charge in the billing currency before credits or taxes. | | CostInPricingCurrency | MCA | Cost of the charge in the pricing currency before credits or taxes. | | Currency | EA, pay-as-you-go | See `BillingCurrency`. |
The following table describes the important terms used in the latest version of
| ResourceLocation┬╣ | All | Datacenter location where the resource is running. See `Location`. | | ResourceName | EA, pay-as-you-go | Name of the resource. Not all charges come from deployed resources. Charges that don't have a resource type will be shown as null/empty, **Others** , or **Not applicable**. | | ResourceType | MCA | Type of resource instance. Not all charges come from deployed resources. Charges that don't have a resource type will be shown as null/empty, **Others** , or **Not applicable**. |
+| RoundingAdjustment | EA, MCA | Rounding adjustment represents the quantization that occurs during cost calculation. When the calculated costs are converted to the invoiced total, small rounding errors can occur. The rounding errors are represented as `rounding adjustment` to ensure that the costs shown in Cost Management align to the invoice. |
| ServiceFamily | MCA | Service family that the service belongs to. | | ServiceInfo¹ | All | Service-specific metadata. | | ServiceInfo2 | All | Legacy field with optional service-specific metadata. |
The following table describes the important terms used in the latest version of
| Term | All | Displays the term for the validity of the offer. For example: In case of reserved instances, it displays 12 months as the Term. For one-time purchases or recurring purchases, Term is one month (SaaS, Marketplace Support). Not applicable for Azure consumption. | | UnitOfMeasure | All | The unit of measure for billing for the service. For example, compute services are billed per hour. | | UnitPrice | EA, pay-as-you-go | The price per unit for the charge. |
-| CostAllocationRuleName | EA, MCA | Name of the Cost Allocation rule that's applicable to the record. |
+ ¹ Fields used to build a unique ID for a single cost record. Every record in your cost details file should be considered unique.
cost-management-billing Manage Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/manage-automation.md
Languages supported by a culture code:
| zh-tw | Chinese (Traditional, Taiwan) | | cs-cz | Czech (Czech Republic) | | pl-pl | Polish (Poland) |
-| tr-tr | Turkish (Turkey) |
+| tr-tr | Turkish (Türkiye) |
| da-dk | Danish (Denmark) | | en-gb | English (United Kingdom) | | hu-hu | Hungarian (Hungary) |
cost-management-billing Ea Portal Administration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/ea-portal-administration.md
Make sure that you have the user's email address and preferred authentication me
#### If you're not an enterprise administrator
-If you're not an enterprise administrator, contact an enterprise administrator to request that they add you to an enrollment. The enterprise administrator uses the preceding steps to add you as an enterprise administrator. After you're added to an enrollment, you receive an activation email.
+If you're not an enterprise administrator, contact an enterprise administrator to request that they add you to an enrollment. The enterprise administrator uses the preceding steps to add you as an enterprise administrator. After you're added to an enrollment, you receive an activation email. After the account is registered, it's activated in about 5 to 10 minutes.
#### If your enterprise administrator can't help you
cost-management-billing Manage Tax Information https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/manage-tax-information.md
Customers in the following countries or regions can add their Tax IDs.
|Spain | Sweden | |Switzerland | Taiwan | |Tajikistan | Thailand |
-|Turkey | Ukraine |
+|Türkiye | Ukraine |
|United Arab Emirates | United Kingdom | |Uzbekistan | Vietnam | |Zimbabwe | |
cost-management-billing Programmatically Create Subscription Enterprise Agreement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/programmatically-create-subscription-enterprise-agreement.md
Previously updated : 06/06/2022 Last updated : 03/27/2023
A user must have an Owner role on an Enrollment Account to create a subscription
To use a service principal (SPN) to create an EA subscription, an Owner of the Enrollment Account must [grant that service principal the ability to create subscriptions](/rest/api/billing/2019-10-01-preview/enrollmentaccountroleassignments/put).
-When using an SPN to create subscriptions, use the ObjectId of the Azure AD Application Registration as the Service Principal ObjectId using [Azure Active Directory PowerShell](/powershell/module/azuread/get-azureadserviceprincipal?view=azureadps-2.0&preserve-view=true ) or [Azure CLI](/cli/azure/ad/sp#az-ad-sp-list). You can also use the steps at [Find your SPN and tenant ID](assign-roles-azure-service-principals.md#find-your-spn-and-tenant-id) to find the object ID in the Azure portal for an existing SPN.
+When using an SPN to create subscriptions, use the ObjectId of the Azure AD Enterprise application as the Service Principal ID using [Azure Active Directory PowerShell](/powershell/module/azuread/get-azureadserviceprincipal?view=azureadps-2.0&preserve-view=true ) or [Azure CLI](/cli/azure/ad/sp#az-ad-sp-list). You can also use the steps at [Find your SPN and tenant ID](assign-roles-azure-service-principals.md#find-your-spn-and-tenant-id) to find the object ID in the Azure portal for an existing SPN.
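To look up that object ID, here's a minimal Azure CLI sketch; the app ID shown is a placeholder, and note that Azure CLI versions before the Microsoft Graph migration expose the property as `objectId` rather than `id`:

```azurecli
# Placeholder app ID; replace with the application (client) ID of your app registration.
# Returns the service principal's object ID. On older Azure CLI versions,
# query "objectId" instead of "id".
az ad sp show --id "00000000-0000-0000-0000-000000000000" --query id --output tsv
```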
For more information about the EA role assignment API request, see [Assign roles to Azure Enterprise Agreement service principal names](assign-roles-azure-service-principals.md). The article includes a list of roles (and role definition IDs) that can be assigned to an SPN.
cost-management-billing Programmatically Create Subscription Microsoft Customer Agreement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/programmatically-create-subscription-microsoft-customer-agreement.md
Previously updated : 11/30/2022 Last updated : 03/27/2023
When you create an Azure subscription programmatically, that subscription is gov
You must have an owner, contributor, or Azure subscription creator role on an invoice section or owner or contributor role on a billing profile or a billing account to create subscriptions. You can also give the same role to a service principal name (SPN). For more information about roles and assigning permission to them, see [Subscription billing roles and tasks](understand-mca-roles.md#subscription-billing-roles-and-tasks).
-If you're using an SPN to create subscriptions, use the ObjectId of the Azure AD Application Registration as the Service Principal ObjectId using [Azure Active Directory PowerShell](/powershell/module/azuread/get-azureadserviceprincipal?view=azureadps-2.0&preserve-view=true) or [Azure CLI](/cli/azure/ad/sp#az-ad-sp-list).
+If you're using an SPN to create subscriptions, use the ObjectId of the Azure AD Enterprise application as the Principal ID using [Azure Active Directory PowerShell](/powershell/module/azuread/get-azureadserviceprincipal?view=azureadps-2.0&preserve-view=true) or [Azure CLI](/cli/azure/ad/sp#az-ad-sp-list).
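If you only know the application's display name, here's a hedged sketch that lists matching service principals with their object IDs (the display name is a placeholder):

```azurecli
# Placeholder display name; lists matching service principals and their object IDs.
az ad sp list --display-name "MyAppName" --query "[].{name:displayName, objectId:id}" --output table
```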
> [!NOTE] > Permissions differ between the legacy API (api-version=2018-03-01-preview) and the latest API (api-version=2020-05-01). Although you may have a role sufficient to use the legacy API, you might need an EA admin to delegate you a role to use the latest API.
cost-management-billing Understand Ea Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/understand-ea-roles.md
Each account requires a unique work, school, or Microsoft account. For more info
There can be only one account owner per account. However, there can be multiple accounts in an EA enrollment. Each account has a unique account owner.
+For different Azure AD accounts, it can take more than 30 minutes for permission settings to take effect.
+ ### Service administrator The service administrator role has permissions to manage services in the Azure portal and assign users to the coadministrator role.
cost-management-billing Reservation Exchange Policy Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/reservation-exchange-policy-changes.md
Previously updated : 02/14/2023 Last updated : 03/27/2023 # Changes to the Azure reservation exchange policy
You can continue to exchange VM sizes (with instance size flexibility). However,
The current cancellation policy for reserved instances isn't changing. The total canceled commitment can't exceed 50,000 USD in a 12-month rolling window for a billing profile or single enrollment.
-You may [trade in](../savings-plan/reservation-trade-in.md) your Azure reserved instances for compute for a savings plan. You can continue to use and purchase reservations for those predictable, stable workloads where you know the specific configuration and need or want more savings.
+Exchanging one compute reservation for another is similar to, but not the same as, a reservation [trade-in](../savings-plan/reservation-trade-in.md) for a savings plan. The difference is that you can always trade in your Azure reserved instances for compute for a savings plan. There's no time limit for trade-ins.
+
+You can continue to use and purchase reservations for those predictable, stable workloads where you know the specific configuration and need or want more savings.
Learn more about [Azure savings plan for compute](../savings-plan/index.yml) and how it works with reservations.
The following examples describe scenarios that might represent your situation.
### Scenario 1
-You purchase a one-year compute reservation between October 2022 and January 2024. The compute reservation can be exchanged one more time through the end of its term, even after January 2024. Before January 2024, you can exchange it under current policy. However, when the reservation is exchanged after January 2024, the reservation is no longer exchangeable because exchanges are processed as a cancellation and new purchase. You can still trade in the reservation for a savings plan.
+You purchase a one-year compute reservation between October 2022 and the end of December 2023. You can exchange the compute reservation one more time through the end of its term, even after December 2023. Before January 2024, you can exchange it under the current policy. However, if the reservation is exchanged after the end of December 2023, it's no longer exchangeable because exchanges are processed as a cancellation, refund, and new purchase.
+
+You can always trade in the reservation for a savings plan. There's no time limit for trade-ins.
### Scenario 2
-You purchase a three-year compute reservation before January 2024. You exchange the compute reservation after January 2024. Because an exchange is processed as a cancellation and new purchase, the reservation is no longer exchangeable. However, you can still trade in the reservation for a savings plan.
+You purchase a three-year compute reservation before January 2024. You exchange the compute reservation on or after January 1, 2024. Because an exchange is processed as a cancellation, refund, and new purchase, the reservation is no longer exchangeable.
+
+You can always trade in the reservation for a savings plan. There's no time limit for trade-ins.
### Scenario 3
-You purchase a one-year compute reservation after January 2024. It can't be exchanged. However, you can trade in the reservation for a savings plan.
+You purchase a one-year compute reservation on or after January 1, 2024. It can't be exchanged.
+
+You can always trade in the reservation for a savings plan. There's no time limit for trade-ins.
### Scenario 4
-You purchase a three-year compute reservation after January 2024. It can't be exchanged. However, you can trade in the reservation for a savings plan.
+You purchase a three-year compute reservation on or after January 1, 2024. It can't be exchanged.
+
+You can always trade in the reservation for a savings plan. There's no time limit for trade-ins.
## Next steps
cost-management-billing Pay Bill https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/pay-bill.md
If your default payment method is wire transfer, check your invoice for payment
> - [Trinidad and Tobago](/legal/pay/trinidad-and-tobago) > - [Turkmenistan](/legal/pay/turkmenistan) > - [Tunisia](/legal/pay/tunisia)
-> - [Turkey](/legal/pay/turkey)
+> - [Türkiye](/legal/pay/turkey)
> - [Uganda](/legal/pay/uganda) > - [Ukraine](/legal/pay/ukraine) > - [United Arab Emirates](/legal/pay/united-arab-emirates)
data-factory Create Self Hosted Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/create-self-hosted-integration-runtime.md
Here are details of the application's actions and arguments:
|`-EnableLocalFolderPathValidation`|| Enable security validation to disable access to file system of the local machine. | |`-eesp`,<br/>`-EnableExecuteSsisPackage`|| Enable SSIS package execution on self-hosted IR node.| |`-desp`,<br/>`-DisableExecuteSsisPackage`|| Disable SSIS package execution on self-hosted IR node.|
-|`-gesp`,<br/>`-GetExecuteSsisPackage`|| Get the value if ExecuteSsisPackage option is enabled on self-hosted IR node.|
+|`-gesp`,<br/>`-GetExecuteSsisPackage`|| Get the value that indicates whether the ExecuteSsisPackage option is enabled on the self-hosted IR node.<br/> If the returned value is true, ExecuteSsisPackage is enabled; if the returned value is false or null, ExecuteSsisPackage is disabled.|
## Install and register a self-hosted IR from Microsoft Download Center
databox-online Azure Stack Edge Gpu Troubleshoot Virtual Machine Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-troubleshoot-virtual-machine-provisioning.md
Previously updated : 03/23/2023 Last updated : 03/24/2023 # Troubleshoot VM deployment in Azure Stack Edge Pro GPU
The following issues are the top causes of VM provisioning timeouts:
- The default gateway and DNS server couldn't be reached from the guest VM. [Learn more](#gateway-dns-server-couldnt-be-reached-from-guest-vm) - During a `cloud init` installation, `cloud init` either didn't run or there were issues while it was running. (Linux VMs only) [Learn more](#cloud-init-issues-linux-vms) - For a Linux VM deployed using a custom VM image, the Provisioning flags in the /etc/waagent.conf file are not correct. (Linux VMs only) [Learn more](#provisioning-flags-set-incorrectly-linux-vms)-- Primary NIC attached to a SRIOV enabled vSwitch [Learn more](#primary-nic-attached-to-a-sriov-enabled-vswitch)
+- Primary network interface attached to a SRIOV enabled virtual switch [Learn more](#primary-network-interface-attached-to-a-sriov-enabled-virtual-switch)
### IP assigned to the VM is already in use
To check for some of the most common issues that prevent `cloud init` from runni
| Enable provisioning | `Provisioning.Enabled=n` | | Rely on cloud-init to provision | `Provisioning.UseCloudInit=y` |
-### Primary NIC attached to a SRIOV enabled vSwitch
+### Primary network interface attached to a SRIOV enabled virtual switch
**Error description:** The primary network interface attached to a single root I/O virtualization (SRIOV)-enabled virtual switch caused network traffic to bypass Hyper-V, so the host couldn't receive DHCP requests from the VM, resulting in a provisioning timeout.
To check for some of the most common issues that prevent `cloud init` from runni
- Connect the VM primary network interface to a virtual switch without enabling accelerated networking. -- On an Azure Stack Edge Pro 1 device, virtual switches created on Port 1 to Port 4 do not enable accelerated networking. On Port 5 or Port 6, virtual switches will enable accelerated networking by default. 
+- On an Azure Stack Edge Pro 1 device, virtual switches created on Port 1 to Port 4 do not enable accelerated networking. On Port 5 or Port 6, virtual switches will enable accelerated networking by default.
+- On an Azure Stack Edge Pro 2 device, virtual switches created on Port 1 or Port 2 do not enable accelerated networking. On Port 3 or Port 4, virtual switches will enable accelerated networking by default.
## Network interface creation issues
databox-online Azure Stack Edge Technical Specifications Power Cords Regional https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-technical-specifications-power-cords-regional.md
Use the following table to find the correct cord specifications for your region:
|Thailand|250|10|H05VV-F 3x0.75|TI16S3|C13|1829| |Trinidad and Tobago|125|10|SVE 18/3|NEMA 5-15P|C13|1830| |Tunisia|250|10|H05Z1Z1 3x0.75|CEE 7|C13|1830|
-|Turkey|250|10|H05Z1Z1 3x0.75|CEE 7|C13|1830|
+|Türkiye|250|10|H05Z1Z1 3x0.75|CEE 7|C13|1830|
|Turkmenistan|250|10|H05Z1Z1 3x0.75|CEE 7|C13|1830| |Uganda|250|5|H05VV-F 3x0.75|BS 1363 / SS145/A|C13|1800| |Ukraine|250|10|H05Z1Z1 3x0.75|CEE 7|C13|1830|
defender-for-cloud Deploy Vulnerability Assessment Defender Vulnerability Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/deploy-vulnerability-assessment-defender-vulnerability-management.md
You can learn more by watching this video from the Defender for Cloud in the Fie
- [Microsoft Defender for Servers](episode-five.md)
-## Availability
+## Availability
|Aspect|Details| |-|:-|
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
If you're looking for the latest release notes, you'll find them in the [What's
| [Deprecation of App Service language monitoring policies](#deprecation-of-app-service-language-monitoring-policies) | April 2023 | | [Deprecation of legacy compliance standards across cloud environments](#deprecation-of-legacy-compliance-standards-across-cloud-environments) | April 2023 | | [Multiple changes to identity recommendations](#multiple-changes-to-identity-recommendations) | April 2023 |
+| [New Azure Active Directory authentication-related recommendations for Azure Data Services](#new-azure-active-directory-authentication-related-recommendations-for-azure-data-services) | April 2023 |
### Changes in the recommendation "Machines should be configured securely"
We are announcing the full deprecation of support of [`PCI DSS`](/azure/complian
Legacy PCI DSS v3.2.1 and legacy SOC TSP are set to be fully deprecated and replaced by [SOC 2 Type 2](/azure/compliance/offerings/offering-soc-2) initiative and [`PCI DSS v4`](/azure/compliance/offerings/offering-pci-dss) initiative. Learn how to [Customize the set of standards in your regulatory compliance dashboard](update-regulatory-compliance-packages.md).
+### New Azure Active Directory authentication-related recommendations for Azure Data Services
+
+**Estimated date for change: April 2023**
+
+| Recommendation Name | Recommendation Description | Policy |
+|--|--|--|
+| Azure SQL Managed Instance authentication mode should be Azure Active Directory Only | Disabling local authentication methods and allowing only Azure Active Directory Authentication improves security by ensuring that Azure SQL Managed Instances can exclusively be accessed by Azure Active Directory identities. Learn more at: aka.ms/adonlycreate | [Azure SQL Managed Instance should have Azure Active Directory Only Authentication enabled](https://portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f78215662-041e-49ed-a9dd-5385911b3a1f) |
+| Azure Synapse Workspace authentication mode should be Azure Active Directory Only | Azure Active Directory (AAD)-only authentication methods improve security by ensuring that Synapse Workspaces exclusively require AAD identities for authentication. Learn more at: https://aka.ms/Synapse | [Synapse Workspaces should use only Azure Active Directory identities for authentication](https://portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f2158ddbe-fefa-408e-b43f-d4faef8ff3b8) |
+| Azure Database for MySQL should have an Azure Active Directory administrator provisioned | Provision an Azure AD administrator for your Azure Database for MySQL to enable Azure AD authentication. Azure AD authentication enables simplified permission management and centralized identity management of database users and other Microsoft services | Based on policy: [An Azure Active Directory administrator should be provisioned for MySQL servers](https://portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f146412e9-005c-472b-9e48-c87b72ac229e) |
+| Azure Database for PostgreSQL should have an Azure Active Directory administrator provisioned | Provision an Azure AD administrator for your Azure Database for PostgreSQL to enable Azure AD authentication. Azure AD authentication enables simplified permission management and centralized identity management of database users and other Microsoft services | Based on policy: [An Azure Active Directory administrator should be provisioned for PostgreSQL servers](https://portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fb4dec045-250a-48c2-b5cc-e0c4eec8b5b4) |
++ ## Next steps For all recent changes to Defender for Cloud, see [What's new in Microsoft Defender for Cloud?](release-notes.md).
defender-for-iot Detect Windows Endpoints Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/detect-windows-endpoints-script.md
In addition to detecting OT devices on your network, use Defender for IoT to discover Microsoft Windows workstations and servers. Like other detected devices, detected Windows workstations and servers are displayed in the Device inventory. The **Device inventory** pages on the sensor and on-premises management console show enriched data about Windows devices, including data about the Windows operating system and applications installed, patch-level data, open ports, and more.
-This article describes how to configure Defender for IoT to detect Windows workstations and servers with local surveying, performed by distributing and running a script on each device. While you can use active scanning and scheduled WMI scans to obtain this data, working with local scripts bypasses the risks of running WMI polling on an endpoint. Running a local script is also useful for regulated networks that have waterfalls and one-way elements.
+This article describes how to use a Defender for IoT Windows-based WMI tool to get extended information from Windows devices, such as workstations, servers, and more. Run the WMI script on your Windows devices to get extended information, increasing your device inventory and security coverage. While you can also use [scheduled WMI scans](configure-windows-endpoint-monitoring.md) to obtain this data, scripts can be run locally for regulated networks with waterfalls and one-way elements if WMI connectivity isn't possible.
+
+The script described in this article returns the following details about each detected device:
+
+- IP address
+- MAC address
+- Operating system
+- Service pack
+- Installed programs
+- Last knowledge base update
For more information, see [Configure Windows Endpoint Monitoring](configure-windows-endpoint-monitoring.md).
dms Known Issues Troubleshooting Dms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-troubleshooting-dms.md
When you try to connect to source in the Azure Database Migration service projec
| - | - | | When using [ExpressRoute](https://azure.microsoft.com/services/expressroute/), Azure Database Migration Service [requires](./tutorial-sql-server-to-azure-sql.md) provisioning three service endpoints on the Virtual Network subnet associated with the service:<br> -- Service Bus endpoint<br> -- Storage endpoint<br> -- Target database endpoint (e.g. SQL endpoint, Azure Cosmos DB endpoint)<br><br><br><br><br> | [Enable](./tutorial-sql-server-to-azure-sql.md) the required service endpoints for ExpressRoute connectivity between source and Azure Database Migration Service. <br><br><br><br><br><br><br><br> |
-## Lock wait timeout error when migrating a MySQL database to Azure DB for MySQL
+## Lock wait timeout error when migrating a MySQL database to Azure Database for MySQL
When you migrate a MySQL database to an Azure Database for MySQL instance via Azure Database Migration Service, the migration fails with following lock wait timeout error:
event-grid Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/concepts.md
Title: Azure Event Grid concepts description: Describes Azure Event Grid and its concepts. Defines several key components of Event Grid. Previously updated : 02/16/2022 Last updated : 03/24/2023 # Concepts in Azure Event Grid
event-grid Configure Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/configure-firewall.md
Title: Configure IP firewall for Azure Event Grid topics or domains
description: This article describes how to configure firewall settings for Event Grid topics or domains. Previously updated : 03/07/2022 Last updated : 03/24/2023 # Configure IP firewall for Azure Event Grid topics or domains
This section shows you how to enable public or private network access for an Eve
:::image type="content" source="./media/configure-firewall/networking-link.png" alt-text="Screenshot showing the selection of Networking link at the bottom of the page. "::: 1. If you want to allow clients to connect to the topic endpoint via a public IP address, keep the **Public access** option selected.
+ You can restrict access to the topic from specific IP addresses by specifying values for the **Address range** field. Specify a single IPv4 address or a range of IP addresses in Classless inter-domain routing (CIDR) notation.
+ :::image type="content" source="./media/configure-firewall/networking-page-public-access.png" alt-text="Screenshot showing the selection of Public access option on the Networking page of the Create topic wizard. "::: 1. To allow access to the Event Grid topic via a private endpoint, select the **Private access** option.
This section shows you how to enable public or private network access for an Eve
1. Follow instructions in the [Add a private endpoint using Azure portal](configure-private-endpoints.md#use-azure-portal) section to create a private endpoint. ### For an existing topic
-1. In the [Azure portal](https://portal.azure.com), Navigate to your event grid topic or domain, and switch to the **Networking** tab.
+1. In the [Azure portal](https://portal.azure.com), navigate to your Event Grid topic or domain, and switch to the **Networking** tab.
2. Select **Public networks** to allow all networks, including the internet, to access the resource.
- You can restrict the traffic using IP firewall rules. Specify a single IPv4 address or a range of IP addresses in Classless inter-domain routing (CIDR) notation.
+ You can restrict access to the topic from specific IP addresses by specifying values for the **Address range** field. Specify a single IPv4 address or a range of IP addresses in Classless inter-domain routing (CIDR) notation.
:::image type="content" source="./media/configure-firewall/public-networks-page.png" alt-text="Screenshot that shows the Public network access page with Public networks selected."::: 3. Select **Private endpoints only** to allow only private endpoint connections to access this resource. Use the **Private endpoint connections** tab on this page to manage connections.
az eventgrid topic update \
``` ### Create a topic with single inbound ip rule
-The following sample CLI command creates an event grid topic with inbound IP rules.
+The following sample CLI command creates an Event Grid topic with inbound IP rules.
```azurecli-interactive az eventgrid topic create \
az eventgrid topic create \
### Create a topic with multiple inbound ip rules
-The following sample CLI command creates an event grid topic two inbound IP rules in one step:
+The following sample CLI command creates an Event Grid topic with two inbound IP rules in one step:
```azurecli-interactive az eventgrid topic create \
az eventgrid topic create \
``` ### Update an existing topic to add inbound IP rules
-This example creates an event grid topic first and then adds inbound IP rules for the topic in a separate command. It also updates the inbound IP rules that were set in the second command.
+This example creates an Event Grid topic first and then adds inbound IP rules for the topic in a separate command. It also updates the inbound IP rules that were set in the second command.
```azurecli-interactive
New-AzEventGridTopic -ResourceGroupName MyResourceGroupName -Name Topic1 -Locati
> When public network access is disabled for a topic or domain, traffic over public internet isn't allowed. Only private endpoint connections will be allowed to access these resources. ### Create a topic with public network access and inbound ip rules
-The following sample CLI command creates an event grid topic with public network access and inbound IP rules.
+The following sample PowerShell command creates an Event Grid topic with public network access and inbound IP rules.
```azurepowershell-interactive New-AzEventGridTopic -ResourceGroupName MyResourceGroupName -Name Topic1 -Location eastus -PublicNetworkAccess enabled -InboundIpRule @{ "10.0.0.0/8" = "Allow"; "10.2.0.0/8" = "Allow" } ``` ### Update an existing a topic with public network access and inbound ip rules
-The following sample CLI command updates an existing event grid topic with inbound IP rules.
+The following sample PowerShell command updates an existing Event Grid topic with inbound IP rules.
```azurepowershell-interactive Set-AzEventGridTopic -ResourceGroupName MyResourceGroupName -Name Topic1 -PublicNetworkAccess enabled -InboundIpRule @{ "10.0.0.0/8" = "Allow"; "10.2.0.0/8" = "Allow" } -Tag @{}
event-grid Configure Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/configure-private-endpoints.md
Title: Configure private endpoints for Azure Event Grid topics or domains description: This article describes how to configure private endpoints for Azure Event Grid custom topics or domain. Previously updated : 12/06/2022 Last updated : 03/24/2023
This section shows you how to enable private network access for an Event Grid to
2. On the **Basics** page, follow these steps: 1. Select an **Azure subscription** in which you want to create the private endpoint. 2. Select an **Azure resource group** for the private endpoint.
- 3. Enter a **name** for the endpoint.
- 4. Select the **region** for the endpoint. Your private endpoint must be in the same region as your virtual network, but can in a different region from the private link resource (in this example, an Event Grid topic).
- 5. Then, select **Next: Resource >** button at the bottom of the page.
+ 3. Enter a **name** for the **endpoint**.
+ 1. Update the **name** for the **network interface** if needed.
+ 1. Select the **region** for the endpoint. Your private endpoint must be in the same region as your virtual network, but can be in a different region from the private link resource (in this example, an Event Grid topic).
+ 1. Then, select **Next: Resource >** button at the bottom of the page.
:::image type="content" source="./media/configure-private-endpoints/basics-page.png" alt-text="Screenshot showing the Basics page of the Create a private endpoint wizard.":::
-3. On the **Resource** page, follow these steps:
- 1. For connection method, if you select **Connect to an Azure resource in my directory**, follow these steps. This example shows how to connect to an Azure resource in your directory.
- 1. Select the **Azure subscription** in which your **topic/domain** exists.
- 1. For **Resource type**, Select **Microsoft.EventGrid/topics** or **Microsoft.EventGrid/domains** for the **Resource type**.
- 2. For **Resource**, select an topic/domain from the drop-down list.
- 3. Confirm that the **Target subresource** is set to **topic** or **domain** (based on the resource type you selected).
- 4. Select **Next: Virtual Network >** button at the bottom of the page.
-
- :::image type="content" source="./media/configure-private-endpoints/resource-page.png" alt-text="Screenshot showing the Resource page of the Create a private endpoint wizard.":::
- 2. If you select **Connect to a resource using a resource ID or an alias**, follow these steps:
- 1. Enter the ID of the resource. For example: `/subscriptions/<AZURE SUBSCRIPTION ID>/resourceGroups/<RESOURCE GROUP NAME>/providers/Microsoft.EventGrid/topics/<EVENT GRID TOPIC NAME>`.
- 2. For **Resource**, enter **topic** or **domain**.
- 3. (optional) Add a request message.
- 4. Select **Next: Virtual Network >** button at the bottom of the page.
-
- :::image type="content" source="./media/configure-private-endpoints/connect-azure-resource-id.png" alt-text="Screenshot showing the Resource page with resource ID specified.":::
+3. On the **Resource** page, confirm that **topic** is selected for **Target sub-resource**, and then select the **Next: Virtual Network >** button at the bottom of the page.
+
+ :::image type="content" source="./media/configure-private-endpoints/resource-page.png" alt-text="Screenshot showing the Resource page of the Create a private endpoint wizard.":::
4. On the **Virtual Network** page, you select the subnet in a virtual network to where you want to deploy the private endpoint. 1. Select a **virtual network**. Only virtual networks in the currently selected subscription and location are listed in the drop-down list. 2. Select a **subnet** in the virtual network you selected.
- 3. Select **Next: Tags >** button at the bottom of the page.
+ 1. Specify whether you want the **IP address** to be allocated statically or dynamically.
+ 1. Select an existing **application security group** or create one and then associate with the private endpoint.
+ 1. Select **Next: DNS >** button at the bottom of the page.
+
+ :::image type="content" source="./media/configure-private-endpoints/configuration-page.png" alt-text="Screenshot showing the Networking page of the Creating a private endpoint wizard.":::
+5. On the **DNS** page, select whether you want the private endpoint to be integrated with a **private DNS zone**, and then select **Next: Tags** at the bottom of the page.
- :::image type="content" source="./media/configure-private-endpoints/configuration-page.png" alt-text="Screenshot showing the Networking page of the Creating a private endpoint wizard":::
-5. On the **Tags** page, create any tags (names and values) that you want to associate with the private endpoint resource. Then, select **Review + create** button at the bottom of the page.
-6. On the **Review + create**, review all the settings, and select **Create** to create the private endpoint.
+ :::image type="content" source="./media/configure-private-endpoints/dns-zone-page.png" alt-text="Screenshot showing the DNS page of the Creating a private endpoint wizard.":::
+1. On the **Tags** page, create any tags (names and values) that you want to associate with the private endpoint resource. Then, select **Review + create** button at the bottom of the page.
+1. On the **Review + create**, review all the settings, and select **Create** to create the private endpoint.
### Manage private link connection
expressroute Expressroute Howto Macsec https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-macsec.md
To start the configuration, sign in to your Azure account and select the subscri
> CKN must be an even-length string up to 64 hexadecimal digits (0-9, A-F). > > CAK length depends on cipher suite specified:
- > * For GcmAes128, the CAK must be an even-length string up to 32 hexadecimal digits (0-9, A-F).
- > * For GcmAes256, the CAK must be an even-length string up to 64 hexadecimal digits (0-9, A-F).
+ > * For GcmAes128 and GcmAesXpn128, the CAK must be an even-length string with 32 hexadecimal digits (0-9, A-F).
+ > * For GcmAes256 and GcmAesXpn256, the CAK must be an even-length string with 64 hexadecimal digits (0-9, A-F).
> 1. Assign the GET permission to the user identity.
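As a rough sketch of where these values end up, here's a hedged Azure CLI example that generates a GcmAes256-sized CAK and points an ExpressRoute Direct port link at the CKN/CAK secrets stored in Key Vault; all names and secret identifiers are placeholders:

```azurecli
# Generate a 64-hex-digit CAK, the length required for GcmAes256/GcmAesXpn256.
openssl rand -hex 32

# Sketch: configure MACsec on a port link using Key Vault secret identifiers.
az network express-route port link update \
  --resource-group "<resource-group>" \
  --port-name "<port-name>" \
  --name "link1" \
  --macsec-cipher GcmAes256 \
  --macsec-ckn-secret-identifier "<key-vault-CKN-secret-identifier>" \
  --macsec-cak-secret-identifier "<key-vault-CAK-secret-identifier>"
```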
governance Effects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/effects.md
related resources to match.
- Allowed values are _Subscription_ and _ResourceGroup_. - Sets the scope of where to fetch the related resource to match from. - Doesn't apply if **type** is a resource that would be underneath the **if** condition resource.
- - For _ResourceGroup_, would limit to the **if** condition resource's resource group or the
- resource group specified in **ResourceGroupName**.
+ - For _ResourceGroup_, would limit to the resource group in **ResourceGroupName** if specified. If **ResourceGroupName** isn't specified, would limit to the **if** condition resource's resource group, which is the default behavior.
- For _Subscription_, queries the entire subscription for the related resource. Assignment scope should be set at subscription or higher for proper evaluation. - Default is _ResourceGroup_. - **EvaluationDelay** (optional)
healthcare-apis Customer Managed Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/customer-managed-key.md
You can also enter the key URI here:
> Ensure all permissions for Azure Key Vault are set appropriately. For more information, see [Add an access policy to your Azure Key Vault instance](../../cosmos-db/how-to-setup-cmk.md#add-access-policy). Additionally, ensure that the soft delete is enabled in the properties of the Key Vault. Not completing these steps will result in a deployment error. For more information, see [Verify if soft delete is enabled on a key vault and enable soft delete](../../key-vault/general/key-vault-recovery.md?tabs=azure-portal#verify-if-soft-delete-is-enabled-on-a-key-vault-and-enable-soft-delete).
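As a quick check, here's a hedged Azure CLI sketch that confirms soft delete on the vault (the vault name is a placeholder):

```azurecli
# Returns true when soft delete is enabled on the vault.
az keyvault show --name "<your-key-vault>" --query "properties.enableSoftDelete"
```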
+> [!NOTE]
+> Using customer-managed keys in the Brazil South and East Asia regions requires an Enterprise Application ID generated by Microsoft. You can request an Enterprise Application ID by creating a one-time support ticket through the Azure portal. After you receive the Application ID, follow [the instructions to register the application](/azure/cosmos-db/how-to-setup-cross-tenant-customer-managed-keys?tabs=azure-portal#the-customer-grants-the-service-providers-app-access-to-the-key-in-the-key-vault).
++ For existing FHIR accounts, you can view the key encryption choice (**Service-managed key** or **Customer-managed key**) in the **Database** blade as shown below. The configuration option can't be modified once it's selected. However, you can modify and update your key. :::image type="content" source="media/bring-your-own-key/bring-your-own-key-database.png" alt-text="Database":::
healthcare-apis Configure Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/configure-export-data.md
The FHIR service supports the `$export` operation [specified by HL7](https://hl7.org/fhir/uv/bulkdata/export/https://docsupdatetracker.net/index.html) for exporting FHIR data from a FHIR server. In the FHIR service implementation, calling the `$export` endpoint causes the FHIR service to export data into a pre-configured Azure storage account.
-There are three steps in setting up the `$export` operation for the FHIR service:
+Ensure that you're granted the 'FHIR Data Exporter' application role before you configure export. To learn more about application roles, see [Authentication and Authorization for FHIR service](https://learn.microsoft.com/azure/healthcare-apis/authentication-authorization).
+
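For example, here's a minimal sketch that grants the role with the Azure CLI; the assignee object ID and scope are placeholders:

```azurecli
# Sketch: grant the FHIR Data Exporter role on the FHIR service to a user or principal.
az role assignment create \
  --role "FHIR Data Exporter" \
  --assignee "00000000-0000-0000-0000-000000000000" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.HealthcareApis/workspaces/<workspace>/fhirservices/<fhir-service>"
```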
+There are three steps to set up the `$export` operation for the FHIR service:
- Enable a managed identity for the FHIR service. - Configure a new or existing Azure Data Lake Storage Gen2 (ADLS Gen2) account and give permission for the FHIR service to access the account.
healthcare-apis Use Postman https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/use-postman.md
In this article, you learned how to access the FHIR service in Azure Health Data
>[!div class="nextstepaction"] >[What is FHIR service?](overview.md) +
+For a starter collection of sample Postman queries, see our [samples repo](https://github.com/Azure-Samples/azure-health-data-services-samples/tree/main/samples/sample-postman-queries) on GitHub.
FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Understand Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/understand-service.md
Title: Understand the MedTech service device message data transformation - Azure Health Data Services
-description: This article provides an overview of the MedTech service device messaging data transformation into FHIR Observation resources. The MedTech service ingests, normalizes, groups, transforms, and persists device message data in the FHIR service.
+ Title: Understand the MedTech service device message data processing stages - Azure Health Data Services
+description: This article provides an overview of the MedTech service device message processing stages. The MedTech service ingests, normalizes, groups, transforms, and persists device message data in the FHIR service.
Previously updated : 03/21/2023 Last updated : 03/24/2023
-# Understand the MedTech service device message data transformation
+# Understand the MedTech service device message processing stages
> [!NOTE] > [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
-This article provides an overview of the device message data processing stages within the [MedTech service](overview.md). The MedTech service transforms device message data into FHIR [Observation](https://www.hl7.org/fhir/observation.html) resources for persistence in the [FHIR service](../fhir/overview.md).
+This article provides an overview of the device message processing stages within the [MedTech service](overview.md). The MedTech service transforms device message data into FHIR [Observation](https://www.hl7.org/fhir/observation.html) resources for persistence in the [FHIR service](../fhir/overview.md).
-The MedTech service device message data processing follows these steps and in this order:
+The MedTech service device message data processing follows these stages and in this order:
* Ingest * Normalize - Device mapping applied.
If no Device resource for a given device identifier exists in the FHIR service,
> [!NOTE] > The **Resolution type** can also be adjusted post deployment of the MedTech service if a different **Resolution type** is later required.
-The MedTech service provides near real-time processing and will also attempt to reduce the number of requests made to the FHIR service by grouping requests into batches of 300 [normalized messages](#normalize). If there's a low volume of data, and 300 normalized messages haven't been added to the group, then the corresponding FHIR Observations in that group are persisted to the FHIR service after ~five minutes. This means that when there's fewer than 300 normalized messages to be processed, there may be a delay of ~five minutes before FHIR Observations are created or updated in the FHIR service.
+The MedTech service provides near real-time processing and also attempts to reduce the number of requests made to the FHIR service by grouping requests into batches of 300 [normalized messages](#normalize). If there's a low volume of data, and 300 normalized messages haven't been added to the group, then the corresponding FHIR Observations in that group are persisted to the FHIR service after ~five minutes. This means that when there are fewer than 300 normalized messages to be processed, there may be a delay of ~five minutes before FHIR Observations are created or updated in the FHIR service.
> [!NOTE] > When multiple device messages contain data for the same FHIR Observation, have the same timestamp, and are sent within the same device message batch (for example, within the ~five minute window or in groups of 300 normalized messages), only the data corresponding to the latest device message for that FHIR Observation is persisted.
iot-central Concepts Device Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-device-templates.md
A device template includes the following sections:
- _Root component_. Every device model has a root component. The root component's interface describes capabilities that are specific to the device model. - _Components_. A device model may include components in addition to the root component to describe device capabilities. Each component has an interface that describes the component's capabilities. Component interfaces may be reused in other device models. For example, several phone device models could use the same camera interface. - _Inherited interfaces_. A device model contains one or more interfaces that extend the capabilities of the root component.-- _Cloud properties_. This part of the device template lets the solution developer specify any device metadata to store. Cloud properties are never synchronized with devices and only exist in the application. Cloud properties don't affect the code that a device developer writes to implement the device model.-- _Views_. This part of the device template lets the solution developer define visualizations to view data from the device, and forms to manage and control a device. The views use the device model, cloud properties, and customizations. Views don't affect the code that a device developer writes to implement the device model.
+- _Views_. This part of the device template lets the solution developer define visualizations to view data from the device, and forms to manage and control a device. Views don't affect the code that a device developer writes to implement the device model.
## Assign a device to a device template
Don't use properties to send telemetry from your device. For example, a readonly
For writable properties, the device application returns a desired state status code, version, and description to indicate whether it received and applied the property value.
+### Cloud properties
+
+You can also add cloud properties to the root component of the model. Cloud properties let you specify any device metadata to store in the IoT Central application. Cloud property values are stored in the IoT Central application and are never synchronized with a device. Cloud properties don't affect the code that a device developer writes to implement the device model.
+
+A solution developer can add cloud properties to device views and forms alongside device properties to enable an operator to manage the devices connected to the application. A solution developer can also use cloud properties as part of a rule definition to make a threshold value editable by an operator.
+
+The following DTDL snippet shows an example cloud property definition:
+
+```json
+{
+ "@id": "dtmi:azureiot:Thermostat:CustomerName",
+ "@type": [
+ "Property",
+ "Cloud",
+ "StringValue"
+ ],
+ "displayName": {
+ "en": "Customer Name"
+ },
+ "name": "CustomerName",
+ "schema": "string"
+}
+```
+ ## Telemetry IoT Central lets you view telemetry in device views and charts, and use rules to trigger actions when thresholds are reached. IoT Central uses the information in the device model, such as data types, units and display names, to determine how to display telemetry values. You can also display telemetry values on application and personal dashboards.
Offline commands are one-way notifications to the device from your solution. Off
> [!NOTE] > Offline commands are marked as `durable` if you export the model as DTDL.
-## Cloud properties
-
-Cloud properties are part of the device template, but aren't part of the device model. Cloud properties let the solution developer specify any device metadata to store in the IoT Central application. Cloud properties don't affect the code that a device developer writes to implement the device model.
-
-A solution developer can add cloud properties to device views and forms alongside device properties to enable an operator to manage the devices connected to the application. A solution developer can also use cloud properties as part of a rule definition to make a threshold value editable by an operator.
- ## Views A solution developer creates views that let operators monitor and manage connected devices. Views are part of the device template, so a view is associated with a specific device type. A view can include:
A solution developer creates views that let operators monitor and manage connect
- Charts to plot telemetry. - Tiles to display read-only device properties. - Tiles to let the operator edit writable device properties.-- Tiles to let the operator edit cloud properties.
+- Tiles to let the operator edit cloud properties.
- Tiles to let the operator call commands, including commands that expect a payload. - Tiles to display labels, images, or markdown text.
-The telemetry, properties, and commands that you can add to a view are determined by the device model, cloud properties, and customizations in the device template.
- ## Next steps Now that you've learned about device templates, a suggested next steps is to read [Telemetry, property, and command payloads](./concepts-telemetry-properties-commands.md) to learn more about the data a device exchanges with IoT Central.
iot-central Howto Manage Device Templates With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-device-templates-with-rest-api.md
To learn how to manage device templates by using the IoT Central UI, see [How to
## Device templates
-A device template contains a device model, cloud property definitions, and view definitions. The REST API lets you manage the device model and cloud property definitions. Use the UI to create and manage views.
+A device template contains a device model and view definitions. The REST API lets you manage the device model, including cloud property definitions. Use the UI to create and manage views.
The device model section of a device template specifies the capabilities of a device you want to connect to your application. Capabilities include telemetry, properties, and commands. The model is defined using [DTDL V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md).
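For instance, here's a hedged sketch that retrieves a device template over the REST API with `az rest`; the app subdomain, template ID, API version, and token are placeholders:

```azurecli
# Sketch: fetch a device template; supply an IoT Central API token yourself.
az rest --method get \
  --url "https://myapp.azureiotcentral.com/api/deviceTemplates/dtmi:example:thermostat;1?api-version=2022-07-31" \
  --headers "Authorization=<api-token>" \
  --skip-authorization-header
```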
iot-central Tutorial Connected Waste Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/government/tutorial-connected-waste-management.md
Try to customize the following features:
Here's how:
-1. From the device template menu, select **Cloud property**.
-1. Select **+ Add Cloud Property**. In Azure IoT Central, you can add a property that is relevant to the device but isn't expected to be sent by a device. For example, a cloud property might be an alerting threshold specific to installation area, asset information, or maintenance information.
+1. Navigate to the **Connected Waste Bin** device template, and select **+ Add capability**.
+1. Add a new cloud property by selecting **Cloud Property** as **Capability type**.
+ In Azure IoT Central, you can add a property that is relevant to the device but isn't expected to be sent by a device. For example, a cloud property might be an alerting threshold specific to installation area, asset information, or maintenance information.
1. Select **Save**. ### Views
iot-central Tutorial In Store Analytics Create App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/retail/tutorial-in-store-analytics-create-app.md
In this section, you add a device template for RuuviTag sensors to your applicat
You can customize the device templates in your application in three ways:
-* Customize the native built-in interfaces in your devices by changing the device capabilities.
+* Customize the native built-in interfaces in your devices by changing the device capabilities.
- For example, with a temperature sensor, you can change details such as the display name of the temperature interface, the data type, the units of measurement, and the minimum and maximum operating ranges.
+ For example, with a temperature sensor, you can change details such as the display name of the temperature interface, the data type, the units of measurement, and the minimum and maximum operating ranges.
-* Customize your device templates by adding cloud properties.
+* Customize your device templates by adding cloud properties.
Cloud properties aren't part of the built-in device capabilities. Cloud properties are custom data that your Azure IoT Central application creates, stores, and associates with your devices. Examples of cloud properties could be: * A calculated value * Metadata, such as a location that you want to associate with a set of devices
-* Customize device templates by building custom views.
+* Customize device templates by building custom views.
Views provide a way for operators to visualize telemetry and metadata for your devices, such as device metrics and health.
iot-edge How To Update Iot Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-update-iot-edge.md
The way that you update the IoT Edge agent and IoT Edge hub containers depends o
Check the version of the IoT Edge agent and IoT Edge hub modules currently on your device using the commands `iotedge logs edgeAgent` or `iotedge logs edgeHub`. If you're using IoT Edge for Linux on Windows, you need to SSH into the Linux virtual machine to check the runtime module versions. ### Understand IoT Edge tags
If you use specific tags in your deployment (for example, mcr.microsoft.com/azur
1. On the **Modules** tab, select **Runtime Settings**.
- :::image type="content" source="./media/how-to-update-iot-edge/configure-runtime.png" alt-text="Screenshot that shows location of the Runtime Settings tab.":::
+ :::image type="content" source="media/how-to-update-iot-edge/runtime-settings.png" alt-text="Screenshot that shows location of the Runtime Settings tab.":::
1. In **Runtime Settings**, update the **Image URI** value in the **Edge Agent** section with the desired version. Don't select **Apply** yet.
- :::image type="content" source="./media/how-to-update-iot-edge/runtime-settings-edgeagent.png" alt-text="Screenshot that shows where to update the image U R I with your version in the Edge Agent.":::
+ :::image type="content" source="media/how-to-update-iot-edge/runtime-settings-agent.png" alt-text="Screenshot that shows where to update the image URI with your version in the Edge Agent.":::
1. Select the **Edge Hub** tab and update the **Image URI** value with the same desired version.
- :::image type="content" source="./media/how-to-update-iot-edge/runtime-settings-edgehub.png" alt-text="Screenshot that shows where to update the image U R I with your version in the Edge Hub.":::
+ :::image type="content" source="media/how-to-update-iot-edge/runtime-settings-hub.png" alt-text="Screenshot that shows where to update the image URI with your version in the Edge Hub.":::
1. Select **Apply** to save changes.
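If you manage deployments from the command line instead of the portal, here's a hedged sketch of applying the updated manifest; the hub, device, and file names are placeholders:

```azurecli
# Sketch: push a deployment manifest whose edgeAgent/edgeHub image URIs were updated.
az iot edge set-modules \
  --hub-name "<your-iot-hub>" \
  --device-id "<your-edge-device>" \
  --content ./deployment.json
```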
iot-edge Iot Edge Certs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-certs.md
For example, we can use the following command to get the identity certificate's
sudo openssl x509 -in /var/lib/aziot/certd/certs/deviceid-random.cer -noout -nocert -fingerprint -sha256 ```
-The command outputs the certificate thumbprint:
+The command outputs the certificate SHA256 thumbprint:
```output SHA256 Fingerprint=1E:F3:1F:88:24:74:2C:4A:C1:A7:FA:EC:5D:16:C4:11:CD:85:52:D0:88:3E:39:CB:7F:17:53:40:9C:02:95:C3 ```
-If we view the thumbprint value for the *EdgeGateway* device in the Azure portal, we can see it matches the thumbprint on *EdgeGateway*:
+If we view the SHA256 thumbprint value for the *EdgeGateway* device registered in IoT Hub, we can see it matches the thumbprint on *EdgeGateway*:
:::image type="content" source="./media/iot-edge-certs/edge-id-thumbprint.png" alt-text="Screenshot from Azure portal of EdgeGateway device's thumbprint in ContosoIotHub.":::
For more information about the certificate building process, see [Create and pro
> [!NOTE] > This example doesn't address Azure IoT Hub Device Provisioning Service (DPS), which has support for X.509 CA authentication with IoT Edge when provisioned with an enrollment group. Using DPS, you upload the CA certificate or an intermediate certificate, the certificate chain is verified, then the device is provisioned. To learn more, see [DPS X.509 certificate attestation](../iot-dps/concepts-x509-attestation.md). >
+> In the Azure portal, DPS displays the SHA1 thumbprint for the certificate rather than the SHA256 thumbprint.
+>
> DPS registers or updates the SHA256 thumbprint to IoT Hub. You can verify the thumbprint using the command `openssl x509 -in /var/lib/aziot/certd/certs/deviceid-long-random-string.cer -noout -fingerprint -sha256`. Once registered, IoT Edge uses thumbprint authentication with IoT Hub. If the device is reprovisioned and a new certificate is issued, DPS updates IoT Hub with the new thumbprint.
>
> IoT Hub currently doesn't support X.509 CA authentication directly with IoT Edge.
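For illustration, the thumbprint that IoT Hub compares is simply the SHA-256 digest of the certificate's DER encoding, which is what the `openssl` commands above compute. A minimal Python sketch, assuming the `cryptography` package is installed and a PEM-encoded certificate file (the file path is a placeholder):

```python
# Compute a certificate's SHA-256 thumbprint, equivalent to
# `openssl x509 -noout -fingerprint -sha256`.
# Assumes a PEM-encoded certificate; the path is a placeholder.
from cryptography import x509
from cryptography.hazmat.primitives import hashes

with open("deviceid-random.cer", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

# fingerprint() hashes the certificate's DER encoding
digest = cert.fingerprint(hashes.SHA256())
print(":".join(f"{b:02X}" for b in digest))
```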
iot-edge Tutorial Python Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-python-module.md
The IoT Edge extension tries to pull your container registry credentials from Az
### Select your target architecture
-Currently, Visual Studio Code can develop Python modules for Linux AMD64 and Linux ARM32v7 devices. You need to select which architecture you're targeting with each solution, because the container is built and run differently for each architecture type. The default is Linux AMD64.
+With Visual Studio Code, you can develop Python modules for Linux AMD64, Linux ARM32v7, Linux ARM64v8, and Windows AMD64 devices. You need to select which architecture you're targeting with each solution, because the container is built and run differently for each architecture type. The default is Linux AMD64.
1. Open the command palette and search for **Azure IoT Edge: Set Default Target Platform for Edge Solution**, or select the shortcut icon in the side bar at the bottom of the window.
You can continue on to the next tutorials to learn how Azure IoT Edge can help y
> [Functions](tutorial-deploy-function.md)
> [Stream Analytics](tutorial-deploy-stream-analytics.md)
> [Machine Learning](tutorial-deploy-machine-learning.md)
-> [Custom Vision Service](tutorial-deploy-custom-vision.md)
+> [Custom Vision Service](tutorial-deploy-custom-vision.md)
iot-hub Migrate Tls Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/migrate-tls-certificate.md
For more information about how to test whether your devices are ready for the TL
## Optional manual IoT hub migration
-If you've prepared your devices and are ready for the TLS certificate migration before February 2023, you can manually migrate your IoT hub root certificates yourself.
+If you've prepared your devices and are ready for the TLS certificate migration, you can manually migrate your IoT hub root certificates yourself.
After you migrate to the new root certificate, it will take about 45 minutes for all devices to disconnect and reconnect with the new certificate. This timing is because the Azure IoT SDKs are programmed to reverify their connection every 45 minutes. If you've implemented a different pattern in your solution, then your experience may vary.
key-vault Howto Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/howto-logging.md
But first, download all the blobs. With the Azure CLI, use the [az storage blob
az storage blob download --container-name "insights-logs-auditevent" --file <path-to-file> --name "<blob-name>" --account-name "<your-unique-storage-account-name>"
```
-With Azure PowerShell, use the [Gt-AzStorageBlobs](/powershell/module/az.storage/get-azstorageblob) cmdlet to get a list of the blobs. Then pipe that list to the [Get-AzStorageBlobContent](/powershell/module/az.storage/get-azstorageblobcontent) cmdlet to download the logs to your chosen path.
+With Azure PowerShell, use the [Get-AzStorageBlob](/powershell/module/az.storage/get-azstorageblob) cmdlet to get a list of the blobs. Then pipe that list to the [Get-AzStorageBlobContent](/powershell/module/az.storage/get-azstorageblobcontent) cmdlet to download the logs to your chosen path.
```powershell-interactive
$blobs = Get-AzStorageBlob -Container "insights-logs-auditevent" -Context $sa.Context | Get-AzStorageBlobContent -Destination "<path-to-file>"
```
load-balancer Load Balancer Custom Probe Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-custom-probe-overview.md
The **AzureLoadBalancer** service tag identifies this source IP address in your
In addition to load balancer health probes, the [following operations use this IP address](../virtual-network/what-is-ip-address-168-63-129-16.md):
-* Enables the VM Agent to communicating with the platform to signal it is in a "Ready" state
+* Enables the VM Agent to communicate with the platform to signal it is in a "Ready" state
* Enables communication with the DNS virtual server to provide filtered name resolution to customers that don't define custom DNS servers. This filtering ensures that customers can only resolve the hostnames of their deployment.
* Enables the VM to obtain a dynamic IP address from the DHCP service in Azure.
load-balancer Load Balancer Outbound Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-outbound-connections.md
Title: Source Network Address Translation (SNAT) for outbound connections
-description: Learn how Azure Load Balancer is used for outbound internet connectivity (SNAT).
+description: Learn how Azure Load Balancer is used for outbound internet connectivity using Source Network Address Translation (SNAT).
- Previously updated : 03/01/2022+ Last updated : 03/06/2023
With outbound rules, you have full declarative control over outbound internet co
You can manually allocate SNAT ports either by "ports per instance" or "maximum number of backend instances". If you have virtual machines in the backend, it's recommended that you allocate ports by "ports per instance" to get maximum SNAT port usage.
-Ports per instance should be calculated as below:
+Calculate ports per instance as follows:
**Number of frontend IPs * 64K / Number of backend instances**
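For example, with two frontend IPs and 10 backend virtual machines (hypothetical numbers), each instance could be allocated 12,800 ports. A quick sketch of the arithmetic:

```python
# "Ports per instance" calculation from the formula above,
# using hypothetical numbers: 2 frontend IPs, 10 backend VMs.
frontend_ips = 2
backend_instances = 10

ports_per_instance = frontend_ips * 64_000 // backend_instances
print(ports_per_instance)  # 12800
```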
-If you have Virtual Machine Scale Sets in the backend, it's recommended to allocate ports by "maximum number of backend instances". If more VMs are added to the backend than remaining SNAT ports allowed, it's possible that virtual machine scale set scaling out could be blocked or that the new VMs won't receive sufficient SNAT ports.
+If you have Virtual Machine Scale Sets in the backend, it's recommended to allocate ports by "maximum number of backend instances". If more VMs are added to the backend than remaining SNAT ports allowed, scale out of Virtual Machine Scale Sets could be blocked, or the new VMs won't receive sufficient SNAT ports.
For more information about outbound rules, see [Outbound rules](outbound-rules.md).
For more information about Azure Virtual Network NAT, see [What is Azure Virtual
| - | | |
| Public IP on VM's NIC | SNAT (Source Network Address Translation) <br> isn't used. | TCP (Transmission Control Protocol) <br> UDP (User Datagram Protocol) <br> ICMP (Internet Control Message Protocol) <br> ESP (Encapsulating Security Payload) |
-Traffic will return to the requesting client from the virtual machine's public IP address (Instance Level IP).
+Traffic returns to the requesting client from the virtual machine's public IP address (Instance Level IP).
Azure uses the public IP assigned to the IP configuration of the instance's NIC for all outbound flows. The instance has all ephemeral ports available. It doesn't matter whether the VM is load balanced or not. This scenario takes precedence over the others.
A public IP assigned to a VM is a 1:1 relationship (rather than 1: many) and imp
>[!NOTE]
> This method is **NOT recommended** for production workloads as it adds risk of exhausting ports. Please refrain from using this method for production workloads to avoid potential connection failures.
-Any Azure resource that doesn't have a public IP associated to it, doesn't have a load balancer with outbound Rules in front of it, isn't part of virtual machine scale sets flexible orchestration mode, or doesn't have a NAT gateway resource associated to its subnet is allocated a minimal number of ports for outbound. This access is known as default outbound access and is the worst method to provide outbound connectivity for your applications.
+Default outbound access occurs when an Azure resource is allocated a minimal number of ports for outbound connectivity. A resource uses default outbound access when it meets all of the following conditions:
+
+- doesn't have a public IP associated to it.
+- doesn't have a load balancer with outbound rules in front of it.
+- isn't part of Virtual Machine Scale Sets flexible orchestration mode.
+- doesn't have a NAT gateway resource associated to its subnet.
Some other examples of default outbound access are:
- Use of a basic SKU load balancer
-- A virtual machine in Azure (without the associations mentioned above). In this case outbound connectivity is provided by the default outbound access IP. This IP is a dynamic IP assigned by Azure that you can't control. Default SNAT isn't recommended for production workloads and can cause connectivity failures.
+- A virtual machine in Azure (without the associations mentioned above). In this case, outbound connectivity is provided by the default outbound access IP. This IP is a dynamic IP assigned by Azure that you can't control. Default SNAT isn't recommended for production workloads and can cause connectivity failures.
- A virtual machine in the backend pool of a load balancer without outbound rules. As a result, you use the frontend IP address of a load balancer for outbound and inbound and are more prone to connectivity failures from SNAT port exhaustion.

### What are SNAT ports?
Ports are used to generate unique identifiers used to maintain distinct flows. T
If a port is used for inbound connections, it has a **listener** for inbound connection requests on that port. That port can't be used for outbound connections. To establish an outbound connection, an **ephemeral port** is used to provide the destination with a port on which to communicate and maintain a distinct traffic flow. When these ephemeral ports are used for SNAT, they're called **SNAT ports**.
-By definition, every IP address has 65,535 ports. Each port can either be used for inbound or outbound connections for TCP (Transmission Control Protocol) and UDP (User Datagram Protocol). When a public IP address is added as a frontend IP to a load balancer, 64,000 ports are eligible for SNAT. While all public IPs that are added as frontend IPs can be allocated, frontend IPs are consumed one at a time. For example, if two backend instances are allocated 64,000 ports each, with access to two frontend IPs, both backend instances will consume ports from the first frontend IP until all 64,000 ports have been exhausted.
+By definition, every IP address has 65,535 ports. Each port can either be used for inbound or outbound connections for TCP (Transmission Control Protocol) and UDP (User Datagram Protocol). When a public IP address is added as a frontend IP to a load balancer, 64,000 ports are eligible for SNAT. While all public IPs that are added as frontend IPs can be allocated, frontend IPs are consumed one at a time. For example, if two backend instances are allocated 64,000 ports each, with access to two frontend IPs, both backend instances consume ports from the first frontend IP until all 64,000 ports have been exhausted.
-Each port used in a load balancing or inbound NAT rule consumes a range of eight ports from the 64,000 available SNAT ports. This usage reduces the number of ports eligible for SNAT, if the same frontend IP is used for outbound connectivity. If ports used in load-balancing or inbound NAT rules are in the same block of eight ports as consumed by another rule, it wil not require extra ports.
+Each port used in a load balancing or inbound NAT rule consumes a range of eight ports from the 64,000 available SNAT ports. This usage reduces the number of ports eligible for SNAT, if the same frontend IP is used for outbound connectivity. If the ports used by load-balancing or inbound NAT rules fall in the same block of eight ports consumed by another rule, the rules don't require extra ports.
### How does default SNAT work?

When a VM creates an outbound flow, Azure translates the source IP address to an ephemeral IP address. This translation is done via SNAT.
-If using SNAT without outbound rules via a public load balancer, SNAT ports are pre-allocated as described in the default SNAT ports allocation table below.
+If using SNAT without outbound rules via a public load balancer, SNAT ports are pre-allocated as described in the following default SNAT ports allocation table:
## <a name="preallocatedports"></a> Default port allocation table
The following <a name="snatporttable"></a>table shows the SNAT port preallocatio
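As a rough sketch, the tiered preallocation can be modeled as a lookup from backend pool size to ports per instance. The values below are taken from the allocation table as published at the time of writing; treat the table in this article as the authoritative source:

```python
# Hedged sketch of the tiered default SNAT preallocation:
# (max pool size, ports per instance). Values mirror the documented
# allocation table; the table itself remains authoritative.
DEFAULT_SNAT_PORTS = [
    (50, 1024),    # 1-50 backend instances
    (100, 512),    # 51-100
    (200, 256),    # 101-200
    (400, 128),    # 201-400
    (800, 64),     # 401-800
    (1000, 32),    # 801-1,000
]

def default_ports(pool_size: int) -> int:
    """Return the default SNAT ports preallocated per backend instance."""
    for max_size, ports in DEFAULT_SNAT_PORTS:
        if pool_size <= max_size:
            return ports
    raise ValueError("backend pools are limited to 1,000 instances")

print(default_ports(75))  # 512
```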
## Port exhaustion
-Every connection to the same destination IP and destination port will use a SNAT port. This connection maintains a distinct **traffic flow** from the backend instance or **client** to a **server**. This process gives the server a distinct port on which to address traffic. Without this process, the client machine is unaware of which flow a packet is part of.
+Every connection to the same destination IP and destination port uses a SNAT port. This connection maintains a distinct **traffic flow** from the backend instance or **client** to a **server**. This process gives the server a distinct port on which to address traffic. Without this process, the client machine is unaware of which flow a packet is part of.
Imagine having multiple browsers going to https://www.microsoft.com, which is:
* Protocol = TCP
-Without different destination ports for the return traffic (the SNAT port used to establish the connection), the client will have no way to separate one query result from another.
+Without SNAT ports for the return traffic, the client has no way to separate one query result from another.
Outbound connections can burst. A backend instance can be allocated insufficient ports. Use **connection reuse** functionality within your application. Without **connection reuse**, the risk of SNAT **port exhaustion** is increased. For more information about connection pooling with Azure App Service, see [Troubleshooting intermittent outbound connection errors in Azure App Service](../app-service/troubleshoot-intermittent-outbound-connection-errors.md#avoiding-the-problem)
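As one common example of connection reuse (not specific to Azure), the Python `requests` library pools and reuses TCP connections when you issue calls through a `Session`, so repeated requests to the same destination consume far fewer SNAT ports than opening a new connection per request:

```python
# Connection reuse with requests: a Session keeps TCP connections
# alive and reuses them across calls to the same host, reducing the
# number of SNAT ports the outbound flows consume.
import requests

session = requests.Session()
for _ in range(5):
    session.get("https://www.microsoft.com")  # reuses pooled connections
session.close()
```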
-New outbound connections to a destination IP will fail when port exhaustion occurs. Connections will succeed when a port becomes available. This exhaustion occurs when the 64,000 ports from an IP address are spread thin across many backend instances. For guidance on mitigation of SNAT port exhaustion, see the [troubleshooting guide](./troubleshoot-outbound-connection.md).
+New outbound connections to a destination IP fail when port exhaustion occurs. Connections succeed when a port becomes available. This exhaustion occurs when the 64,000 ports from an IP address are spread thin across many backend instances. For guidance on mitigation of SNAT port exhaustion, see the [troubleshooting guide](./troubleshoot-outbound-connection.md).
-For TCP connections, the load balancer will use a single SNAT port for every destination IP and port. This multiuse enables multiple connections to the same destination IP with the same SNAT port. This multiuse is limited if the connection isn't to different destination ports.
+For TCP connections, the load balancer uses a single SNAT port for every destination IP and port. This multiuse enables multiple connections to the same destination IP with the same SNAT port. This reuse is possible only when the connections go to different destination ports.
For UDP connections, the load balancer uses a **port-restricted cone NAT** algorithm, which consumes one SNAT port per destination IP regardless of the destination port.
A port is reused for an unlimited number of connections. The port is only reused
* A TCP SNAT port can be used for multiple connections to the same destination IP provided the destination ports are different.
-* SNAT exhaustion occurs when a backend instance runs out of given SNAT ports. A load balancer can still have unused SNAT ports. If a backend instance's used SNAT ports exceed its given SNAT ports, it will be unable to establish new outbound connections.
+* SNAT exhaustion occurs when a backend instance runs out of given SNAT ports. A load balancer can still have unused SNAT ports. If a backend instance's used SNAT ports exceed its given SNAT ports, it's unable to establish new outbound connections.
-* Fragmented packets will be dropped unless outbound is through an instance level public IP on the VM's NIC.
+* Fragmented packets are dropped unless outbound is through an instance level public IP on the VM's NIC.
* Secondary IP configurations of a network interface don't provide outbound communication (unless a public IP is associated to it) via a load balancer.
load-testing How To Define Test Criteria https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-define-test-criteria.md
Azure Load Testing supports the following client metrics:
|Metric |Aggregate function |Threshold |Condition | Description |
|---------|---------|---------|---------|---------|
|`response_time_ms` | `avg` (average)<BR> `min` (minimum)<BR> `max` (maximum)<BR> `pxx` (percentile), xx can be 50, 90, 95, 99 | Integer value, representing number of milliseconds (ms). | `>` (greater than)<BR> `<` (less than) | Response time or elapsed time, in milliseconds. Learn more about [elapsed time in the Apache JMeter documentation](https://jmeter.apache.org/usermanual/glossary.html). |
-|`latency_ms` | `avg` (average)<BR> `min` (minimum)<BR> `max` (maximum)<BR> `pxx` (percentile), xx can be 50, 90, 95, 99 | Integer value, representing number of milliseconds (ms). | `>` (greater than)<BR> `<` (less than) | Latency, in milliseconds. Learn more about [latency in the Apache JMeter documentation](https://jmeter.apache.org/usermanual/glossary.html). |
+|`latency` | `avg` (average)<BR> `min` (minimum)<BR> `max` (maximum)<BR> `pxx` (percentile), xx can be 50, 90, 95, 99 | Integer value, representing number of milliseconds (ms). | `>` (greater than)<BR> `<` (less than) | Latency, in milliseconds. Learn more about [latency in the Apache JMeter documentation](https://jmeter.apache.org/usermanual/glossary.html). |
|`error` | `percentage` | Numerical value in the range 0-100, representing a percentage. | `>` (greater than) | Percentage of failed requests. |
|`requests_per_sec` | `avg` (average) | Numerical value with up to two decimal places. | `>` (greater than) <BR> `<` (less than) | Number of requests per second. |
|`requests` | `count` | Integer value. | `>` (greater than) <BR> `<` (less than) | Total number of requests. |
To specify fail criteria in the YAML configuration file:
failureCriteria:
  - avg(response_time_ms) > 300
  - percentage(error) > 50
- - GetCustomerDetails: avg(latency_ms) >200
+ - GetCustomerDetails: avg(latency) >200
```

When you define a test criterion for a specific JMeter request, the request name should match the name of the JMeter sampler in the JMX file.
To specify fail criteria in the YAML configuration file:
failureCriteria:
  - avg(response_time_ms) > 300
  - percentage(error) > 50
- - GetCustomerDetails: avg(latency_ms) >200
+ - GetCustomerDetails: avg(latency) >200
```

When you define a test criterion for a specific JMeter request, the request name should match the name of the JMeter sampler in the JMX file.
load-testing Reference Test Config Yaml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/reference-test-config-yaml.md
configurationFiles:
failureCriteria:
  - avg(response_time_ms) > 300
  - percentage(error) > 50
- - GetCustomerDetails: avg(latency_ms) >200
+ - GetCustomerDetails: avg(latency) >200
splitAllCSVs: True
env:
  - name: my-variable
machine-learning Algorithm Cheat Sheet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/algorithm-cheat-sheet.md
description: A printable Machine Learning Algorithm Cheat Sheet helps you choose the right algorithm for your predictive model in Azure Machine Learning designer. -+
machine-learning Concept Automl Forecasting Deep Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-automl-forecasting-deep-learning.md
AutoML executes several preprocessing steps on your data to prepare for model tr
|Step|Description|
|--|--|
|Fill missing data|[Impute missing values and observation gaps](./concept-automl-forecasting-methods.md#missing-data-handling) and optionally [pad or drop short time series](./how-to-auto-train-forecast.md#short-series-handling)|
-|Create calendar features|Augment the input data with [features derived from the calendar](./concept-automl-forecasting-calendar-features.md) like day of the week and, optionally, holidays for a specific region or country.|
+|Create calendar features|Augment the input data with [features derived from the calendar](./concept-automl-forecasting-calendar-features.md) like day of the week and, optionally, holidays for a specific country/region.|
|Encode categorical data|[Label encode](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.LabelEncoder.html) strings and other categorical types; this includes all [time series ID columns](./how-to-auto-train-forecast.md#configuration-settings).|
|Target transform|Optionally apply the natural logarithm function to the target depending on the results of certain statistical tests.|
|Normalization|[Z-score normalize](https://en.wikipedia.org/wiki/Standard_score) all numeric data; normalization is performed per feature and per time series group, as defined by the [time series ID columns](./how-to-auto-train-forecast.md#configuration-settings).|
machine-learning Concept Deep Learning Vs Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-deep-learning-vs-machine-learning.md
description: Learn how deep learning relates to machine learning and AI. In Azure Machine Learning, use deep learning models for fraud detection, object detection, and more. -+
machine-learning Concept Designer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-designer.md
Last updated 08/03/2022-+

# What is Azure Machine Learning designer?
machine-learning Concept Distributed Training https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-distributed-training.md
-+ Last updated 03/27/2020
machine-learning Concept Mlflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-mlflow.md
[MLflow](https://www.mlflow.org) is an open-source framework that's designed to manage the complete machine learning lifecycle. Its ability to train and serve models on different platforms allows you to use a consistent set of tools regardless of where your experiments are running: locally on your computer, on a remote compute target, on a virtual machine, or on an Azure Machine Learning compute instance.
-> [!TIP]
-> Azure Machine Learning workspaces are MLflow-compatible, which means you can use Azure Machine Learning workspaces in the same way that you use an MLflow tracking server. Such compatibility has the following advantages:
-> * We don't host MLflow server instances under the hood. The workspace can talk the MLflow standard.
-> * You can use Azure Machine Learning workspaces as your tracking server for any MLflow code, whether it runs on Azure Machine Learning or not. You only need to configure MLflow to point to the workspace where the tracking should happen.
-> * You can run any training routine that uses MLflow in Azure Machine Learning without any change.
+Azure Machine Learning **workspaces are MLflow-compatible**, which means you can use Azure Machine Learning workspaces in the same way that you'd use an MLflow server. Such compatibility has the following advantages:
-> [!NOTE]
-> Unlike the Azure Machine Learning SDK v1, there's no logging functionality in the SDK v2 and we recommend using MLflow for logging. Such strategy allows your training routines to become cloud-agnostic and portable, removing any dependency in your code with Azure Machine Learning.
+* We don't host MLflow server instances under the hood. The workspace can talk the MLflow API language.
+* You can use Azure Machine Learning workspaces as your tracking server for any MLflow code, whether it runs on Azure Machine Learning or not. You only need to configure MLflow to point to the workspace where the tracking should happen.
+* You can run any training routine that uses MLflow in Azure Machine Learning without any change.
-> [!IMPORTANT]
-> Items marked (preview) in this article are currently in public preview.
-> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+> [!TIP]
+> Unlike the Azure Machine Learning SDK v1, there's no logging functionality in the SDK v2 and we recommend using MLflow for logging. Such strategy allows your training routines to become cloud-agnostic and portable, removing any dependency in your code with Azure Machine Learning.
## Tracking with MLflow
You can submit training jobs to Azure Machine Learning by using [MLflow projects
Learn more at [Train machine learning models with MLflow projects and Azure Machine Learning](how-to-train-mlflow-projects.md).
+> [!IMPORTANT]
+> Items marked (preview) in this article are currently in public preview.
+> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
### Example notebooks

* [Track an MLflow project in Azure Machine Learning workspaces](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/track-and-monitor-experiments/using-mlflow/train-projects-local/train-projects-local.ipynb)
machine-learning Concept Train Machine Learning Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-train-machine-learning-model.md
-+ Last updated 08/30/2022
machine-learning How To Safely Rollout Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-safely-rollout-online-endpoints.md
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-In this article, you'll learn how to deploy a new version of a machine learning model in production without causing any disruption. You'll use blue-green deployment, also known as a safe rollout strategy, to introduce a new version of a web service to production. This strategy will allow you to roll out your new version of the web service to a small subset of users or requests before rolling it out completely.
+In this article, you'll learn how to deploy a new version of a machine learning model in production without causing any disruption. You'll use a blue-green deployment strategy (also known as a safe rollout strategy) to introduce a new version of a web service to production. This strategy will allow you to roll out your new version of the web service to a small subset of users or requests before rolling it out completely.
This article assumes you're using online endpoints, that is, endpoints that are used for online (real-time) inferencing. There are two types of online endpoints: **managed online endpoints** and **Kubernetes online endpoints**. For more information on endpoints and the differences between managed online endpoints and Kubernetes online endpoints, see [What are Azure Machine Learning endpoints?](concept-endpoints.md#managed-online-endpoints-vs-kubernetes-online-endpoints).
-> [!Note]
-> The main example in this article uses managed online endpoints for deployment. To use Kubernetes endpoints instead, see the notes in this document inline with the managed online endpoints discussion.
+The main example in this article uses managed online endpoints for deployment. To use Kubernetes endpoints instead, see the notes in this document that are inline with the managed online endpoint discussion.
In this article, you'll learn to:

> [!div class="checklist"]
-> * Define an online endpoint and a deployment called "blue" to serve version 1 of a model
+> * Define an online endpoint with a deployment called "blue" to serve version 1 of a model
> * Scale the blue deployment so that it can handle more requests
> * Deploy version 2 of the model (called the "green" deployment) to the endpoint, but send the deployment no live traffic
> * Test the green deployment in isolation
In this article, you'll learn to:
* Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure Machine Learning. To perform the steps in this article, your user account must be assigned the __owner__ or __contributor__ role for the Azure Machine Learning workspace, or a custom role allowing `Microsoft.MachineLearningServices/workspaces/onlineEndpoints/*`. For more information, see [Manage access to an Azure Machine Learning workspace](how-to-assign-roles.md).
-* If you haven't already set the defaults for the Azure CLI, save your default settings. To avoid passing in the values for your subscription, workspace, and resource group multiple times, run this code:
-
- ```azurecli
- az account set --subscription <subscription id>
- az configure --defaults workspace=<azureml workspace name> group=<resource group>
- ```
-
* (Optional) To deploy locally, you must [install Docker Engine](https://docs.docker.com/engine/install/) on your local computer. We *highly recommend* this option, so it's easier to debug issues.

# [Python](#tab/python)
In this article, you'll learn to:
* (Optional) To deploy locally, you must [install Docker Engine](https://docs.docker.com/engine/install/) on your local computer. We *highly recommend* this option, so it's easier to debug issues.
+# [Studio](#tab/azure-studio)
+
+Before following the steps in this article, make sure you have the following prerequisites:
+
+* An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/).
+
+* An Azure Machine Learning workspace and a compute instance. If you don't have these, use the steps in the [Quickstart: Create workspace resources](quickstart-create-resources.md) article to create them.
+
+* Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure Machine Learning. To perform the steps in this article, your user account must be assigned the __owner__ or __contributor__ role for the Azure Machine Learning workspace, or a custom role allowing `Microsoft.MachineLearningServices/workspaces/onlineEndpoints/*`. For more information, see [Manage access to an Azure Machine Learning workspace](how-to-assign-roles.md).
## Prepare your system

# [Azure CLI](#tab/azure-cli)
+### Set environment variables
+
+If you haven't already set the defaults for the Azure CLI, save your default settings. To avoid passing in the values for your subscription, workspace, and resource group multiple times, run this code:
+
+ ```azurecli
+ az account set --subscription <subscription id>
+ az configure --defaults workspace=<Azure Machine Learning workspace name> group=<resource group>
+ ```
### Clone the examples repository

To follow along with this article, first clone the [examples repository (azureml-examples)](https://github.com/azure/azureml-examples). Then, go to the repository's `cli/` directory:
The information in this article is based on the [online-endpoints-safe-rollout.i
### Connect to Azure Machine Learning workspace
-The [workspace](concept-workspace.md) is the top-level resource for Azure Machine Learning, providing a centralized place to work with all the artifacts you create when you use Azure Machine Learning. In this section, we'll connect to the workspace where you'll perform deployment tasks.
+The [workspace](concept-workspace.md) is the top-level resource for Azure Machine Learning, providing a centralized place to work with all the artifacts you create when you use Azure Machine Learning. In this section, we'll connect to the workspace where you'll perform deployment tasks. To follow along, open your `online-endpoints-safe-rollout.ipynb` notebook.
1. Import the required libraries:
The [workspace](concept-workspace.md) is the top-level resource for Azure Machin
[!notebook-python[](~/azureml-examples-main/sdk/python/endpoints/online/managed/online-endpoints-safe-rollout.ipynb?name=workspace_handle)]
+# [Studio](#tab/azure-studio)
+
+If you have Git installed on your local machine, you can follow the instructions to clone the examples repository. Otherwise, follow the instructions to download files from the examples repository.
+
+### Clone the examples repository
+
+To follow along with this article, first clone the [examples repository (azureml-examples)](https://github.com/azure/azureml-examples) and then change into the `azureml-examples/cli/endpoints/online/model-1` directory.
+
+```bash
+git clone --depth 1 https://github.com/Azure/azureml-examples
+cd azureml-examples/cli/endpoints/online/model-1
+```
+
+> [!TIP]
+> Use `--depth 1` to clone only the latest commit to the repository, which reduces time to complete the operation.
+
+<!-- Open a terminal in the Azure Machine Learning studio:
+
+1. Sign into [Azure Machine Learning studio](https://ml.azure.com).
+1. Select your workspace, if it isn't already open.
+1. On the left, select **Notebooks**.
+1. Select **Open terminal**.
+ -->
+
+### Download files from the examples repository
+
+If you cloned the examples repo, your local machine already has copies of the files for this example, and you can skip to the next section. If you didn't clone the repo, you can download it to your local machine.
+
+1. Go to [https://github.com/Azure/azureml-examples/](https://github.com/Azure/azureml-examples/).
+1. Go to the **<> Code** button on the page, and then select **Download ZIP** from the **Local** tab.
+1. Locate the model folder `/cli/endpoints/online/model-1/model` and scoring script `/cli/endpoints/online/model-1/onlinescoring/score.py` for a first model `model-1`.
+1. Locate the model folder `/cli/endpoints/online/model-2/model` and scoring script `/cli/endpoints/online/model-2/onlinescoring/score.py` for a second model `model-2`.
+ ## Define the endpoint and deployment
-Online endpoints are used for online (real-time) inferencing. Online endpoints contain deployments that are ready to receive data from clients and can send responses back in real time.
+Online endpoints are used for online (real-time) inferencing. Online endpoints contain deployments that are ready to receive data from clients and send responses back in real time.
+
+### Define an endpoint
+
+The following table lists key attributes to specify when you define an endpoint.
+
+| Attribute | Description |
+|-|--|
+| Name | **Required.** Name of the endpoint. It must be unique in the Azure region. For more information on the naming rules, see [managed online endpoint limits](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints). |
+| Authentication mode | The authentication method for the endpoint. Choose between key-based authentication `key` and Azure Machine Learning token-based authentication `aml_token`. A key doesn't expire, but a token does expire. For more information on authenticating, see [Authenticate to an online endpoint](how-to-authenticate-online-endpoint.md). |
+| Description | Description of the endpoint. |
+| Tags | Dictionary of tags for the endpoint. |
+| Traffic | Rules on how to route traffic across deployments. Represent the traffic as a dictionary of key-value pairs, where key represents the deployment name and value represents the percentage of traffic to that deployment. You can set the traffic only when the deployments under an endpoint have been created. You can also update the traffic for an online endpoint after the deployments have been created. For more information on how to use mirrored traffic, see [Allocate a small percentage of live traffic to the new deployment](#allocate-a-small-percentage-of-live-traffic-to-the-new-deployment). |
+| Mirror traffic (preview) | Percentage of live traffic to mirror to a deployment. For more information on how to use mirrored traffic, see [Test the deployment with mirrored traffic (preview)](#test-the-deployment-with-mirrored-traffic-preview). |
+
+To see a full list of attributes that you can specify when you create an endpoint, see [CLI (v2) online endpoint YAML schema](/azure/machine-learning/reference-yaml-endpoint-online) or [SDK (v2) ManagedOnlineEndpoint Class](/python/api/azure-ai-ml/azure.ai.ml.entities.managedonlineendpoint).
+
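For illustration, here's a minimal sketch of how these endpoint attributes map onto the SDK v2 `ManagedOnlineEndpoint` class; the name, description, and tags below are placeholders, not values from this article's example:

```python
# A minimal sketch of defining an endpoint with the SDK v2.
# The name and tags are placeholders.
from azure.ai.ml.entities import ManagedOnlineEndpoint

endpoint = ManagedOnlineEndpoint(
    name="my-endpoint",      # must be unique within the Azure region
    auth_mode="key",         # or "aml_token" for token-based auth
    description="Safe rollout example endpoint",
    tags={"stage": "example"},
)
```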
+### Define a deployment
+
+A *deployment* is a set of resources required for hosting the model that does the actual inferencing. The following table describes key attributes to specify when you define a deployment.
++
+| Attribute | Description |
+|--|-|
+| Name | **Required.** Name of the deployment. |
+| Endpoint name | **Required.** Name of the endpoint to create the deployment under. |
+| Model | The model to use for the deployment. This value can be either a reference to an existing versioned model in the workspace or an inline model specification. In the example, we have a scikit-learn model that does regression. |
+| Code path | The path to the directory on the local development environment that contains all the Python source code for scoring the model. You can use nested directories and packages. |
+| Scoring script | Python code that executes the model on a given input request. This value can be the relative path to the scoring file in the source code directory.<br>The scoring script receives data submitted to a deployed web service and passes it to the model. The script then executes the model and returns its response to the client. The scoring script is specific to your model and must understand the data that the model expects as input and returns as output.<br>In this example, we have a *score.py* file. This Python code must have an `init()` function and a `run()` function. The `init()` function will be called after the model is created or updated (you can use it to cache the model in memory, for example). The `run()` function is called at every invocation of the endpoint to do the actual scoring and prediction. |
+| Environment | **Required.** The environment to host the model and code. This value can be either a reference to an existing versioned environment in the workspace or an inline environment specification. The environment can be a Docker image with Conda dependencies, a Dockerfile, or a registered environment. |
+| Instance type | **Required.** The VM size to use for the deployment. For the list of supported sizes, see [Managed online endpoints SKU list](reference-managed-online-endpoints-vm-sku-list.md). |
+| Instance count | **Required.** The number of instances to use for the deployment. Base the value on the workload you expect. For high availability, we recommend that you set the value to at least `3`. We reserve an extra 20% for performing upgrades. For more information, see [managed online endpoint quotas](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints). |
+
+To see a full list of attributes that you can specify when you create a deployment, see [CLI (v2) managed online deployment YAML schema](/azure/machine-learning/reference-yaml-deployment-managed-online) or
+[SDK (v2) ManagedOnlineDeployment Class](/python/api/azure-ai-ml/azure.ai.ml.entities.managedonlinedeployment).
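For illustration, a minimal sketch of how these deployment attributes map onto the SDK v2 `ManagedOnlineDeployment` class; the paths, names, and container image below are placeholders under the assumption of a layout like the examples repo:

```python
# A minimal sketch of defining a deployment with the SDK v2.
# Paths, names, and the environment image are placeholders.
from azure.ai.ml.entities import (
    CodeConfiguration,
    Environment,
    ManagedOnlineDeployment,
    Model,
)

blue_deployment = ManagedOnlineDeployment(
    name="blue",
    endpoint_name="my-endpoint",
    model=Model(path="../model-1/model"),          # inline model spec
    code_configuration=CodeConfiguration(
        code="../model-1/onlinescoring",           # scoring source directory
        scoring_script="score.py",                 # must define init() and run()
    ),
    environment=Environment(
        conda_file="../model-1/environment/conda.yml",
        image="mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04:latest",
    ),
    instance_type="Standard_DS3_v2",
    instance_count=1,
)
```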
# [Azure CLI](#tab/azure-cli)

### Create online endpoint
+First set the endpoint's name and then configure it. In this article, you'll use the *endpoints/online/managed/sample/endpoint.yml* file to configure the endpoint. The following snippet shows the contents of the file:
++
+The reference for the endpoint YAML format is described in the following table. To learn how to specify these attributes, see the [online endpoint YAML reference](reference-yaml-endpoint-online.md). For information about limits related to managed endpoints, see [Manage and increase quotas for resources with Azure Machine Learning](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints).
+
+| Key | Description |
+| -- | -- |
+| `$schema` | (Optional) The YAML schema. To see all available options in the YAML file, you can view the schema in the preceding code snippet in a browser. |
+| `name` | The name of the endpoint. |
+| `auth_mode` | Use `key` for key-based authentication. Use `aml_token` for Azure Machine Learning token-based authentication. To get the most recent token, use the `az ml online-endpoint get-credentials` command. |
+
To create an online endpoint:

1. Set your endpoint name:
To create an online endpoint:
> [!IMPORTANT]
> Endpoint names must be unique within an Azure region. For example, in the Azure `westus2` region, there can be only one endpoint with the name `my-endpoint`.
-1. Create the endpoint in the cloud, run the following code:
+1. Create the endpoint in the cloud:
+
+ Run the following code to use the `endpoint.yml` file to configure the endpoint:
:::code language="azurecli" source="~/azureml-examples-main/cli/deploy-safe-rollout-online-endpoints.sh" ID="create_endpoint":::

### Create the 'blue' deployment
-A deployment is a set of resources required for hosting the model that does the actual inferencing. To create a deployment named `blue` for your endpoint, run the following command:
+In this article, you'll use the *endpoints/online/managed/sample/blue-deployment.yml* file to configure the key aspects of the deployment. The following snippet shows the contents of the file:
++
+To create a deployment named `blue` for your endpoint, run the following command to use the `blue-deployment.yml` file to configure the deployment:
:::code language="azurecli" source="~/azureml-examples-main/cli/deploy-safe-rollout-online-endpoints.sh" ID="create_blue":::
+> [!IMPORTANT]
+> The `--all-traffic` flag in the `az ml online-deployment create` command allocates 100% of the endpoint traffic to the newly created blue deployment.
+
+In the `blue-deployment.yml` file, we specify the `path` (where to upload files from) inline. The CLI automatically uploads the files and registers the model and environment. As a best practice for production, you should register the model and environment and specify the registered name and version separately in the YAML. Use the form `model: azureml:my-model:1` or `environment: azureml:my-env:1`.
+
+For registration, you can extract the YAML definitions of `model` and `environment` into separate YAML files and use the commands `az ml model create` and `az ml environment create`. To learn more about these commands, run `az ml model create -h` and `az ml environment create -h`.
+
+For more information on registering your model as an asset, see [Register your model as an asset in Machine Learning by using the CLI](how-to-manage-models.md#register-your-model-as-an-asset-in-machine-learning-by-using-the-cli). For more information on creating an environment, see [Manage Azure Machine Learning environments with the CLI & SDK (v2)](how-to-manage-environments-v2.md#create-an-environment).
# [Python](#tab/python)

### Create online endpoint
-To create a managed online endpoint, use the `ManagedOnlineEndpoint` class. This class allows users to configure the following key aspects of the endpoint:
-
-* `name` - Name of the endpoint. Needs to be unique at the Azure region level
-* `auth_mode` - The authentication method for the endpoint. Key-based authentication and Azure Machine Learning token-based authentication are supported. Key-based authentication doesn't expire but Azure Machine Learning token-based authentication does. Possible values are `key` or `aml_token`.
-* `identity`- The managed identity configuration for accessing Azure resources for endpoint provisioning and inference.
- * `type`- The type of managed identity. Azure Machine Learning supports `system_assigned` or `user_assigned` identity.
- * `user_assigned_identities` - List (array) of fully qualified resource IDs of the user-assigned identities. This property is required if `identity.type` is user_assigned.
-* `description`- Description of the endpoint.
+To create a managed online endpoint, use the `ManagedOnlineEndpoint` class. This class allows users to configure the key aspects of the endpoint.
1. Configure the endpoint:
To create a managed online endpoint, use the `ManagedOnlineEndpoint` class. This
### Create the 'blue' deployment
-A deployment is a set of resources required for hosting the model that does the actual inferencing. To create a deployment for your managed online endpoint, use the `ManagedOnlineDeployment` class. This class allows users to configure the following key aspects of the deployment:
-
-**Key aspects of deployment**
-* `name` - Name of the deployment.
-* `endpoint_name` - Name of the endpoint to create the deployment under.
-* `model` - The model to use for the deployment. This value can be either a reference to an existing versioned model in the workspace or an inline model specification.
-* `environment` - The environment to use for the deployment. This value can be either a reference to an existing versioned environment in the workspace or an inline environment specification.
-* `code_configuration` - the configuration for the source code and scoring script
- * `path`- Path to the source code directory for scoring the model
- * `scoring_script` - Relative path to the scoring file in the source code directory
-* `instance_type` - The VM size to use for the deployment. For the list of supported sizes, see [Managed online endpoints SKU list](reference-managed-online-endpoints-vm-sku-list.md).
-* `instance_count` - The number of instances to use for the deployment
+To create a deployment for your managed online endpoint, use the `ManagedOnlineDeployment` class. This class allows users to configure the key aspects of the deployment.
+The following table describes the attributes of a `deployment`:
1. Configure blue deployment:

   [!notebook-python[](~/azureml-examples-main/sdk/python/endpoints/online/managed/online-endpoints-safe-rollout.ipynb?name=configure_deployment)]
+ In this example, we specify the `path` (where to upload files from) inline. The SDK automatically uploads the files and registers the model and environment. As a best practice for production, you should register the model and environment and specify the registered name and version separately in your code.
+
+ For more information on registering your model as an asset, see [Register your model as an asset in Machine Learning by using the SDK](how-to-manage-models.md#register-your-model-as-an-asset-in-machine-learning-by-using-the-sdk).
+
+ For more information on creating an environment, see [Manage Azure Machine Learning environments with the CLI & SDK (v2)](how-to-manage-environments-v2.md#create-an-environment).
> [!NOTE]
> To create a deployment for a Kubernetes online endpoint, use the `KubernetesOnlineDeployment` class.
A deployment is a set of resources required for hosting the model that does the
[!notebook-python[](~/azureml-examples-main/sdk/python/endpoints/online/managed/online-endpoints-safe-rollout.ipynb?name=deployment_traffic)]
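For reference, later steps in the rollout update the traffic split with the same pattern. A minimal sketch, assuming `ml_client` is an authenticated `MLClient` and both deployments already exist:

```python
# A minimal sketch of shifting traffic between deployments with the SDK v2.
# Assumes `ml_client` is an authenticated MLClient and the endpoint exists.
endpoint = ml_client.online_endpoints.get(name="my-endpoint")
endpoint.traffic = {"blue": 90, "green": 10}   # send 10% of live traffic to green
ml_client.online_endpoints.begin_create_or_update(endpoint).result()
```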
+# [Studio](#tab/azure-studio)
+
+When you create a managed online endpoint in the Azure Machine Learning studio, you must define an initial deployment for the endpoint. Before you can define a deployment, you must have a registered model in your workspace. Let's begin by registering the model to use for the deployment.
+
+### Register your model
+
+A model registration is a logical entity in the workspace. This entity may contain a single model file or a directory of multiple files. As a best practice for production, you should register the model and environment. When creating the endpoint and deployment in this article, we'll assume that you've registered the [model folder](https://github.com/Azure/azureml-examples/tree/main/cli/endpoints/online/model-1/model) that contains the model.
+
+To register the example model, follow these steps:
+
+1. Go to the [Azure Machine Learning studio](https://ml.azure.com).
+1. In the left navigation bar, select the **Models** page.
+1. Select **Register**, and then choose **From local files**.
+1. Select __Unspecified type__ for the __Model type__.
+1. Select __Browse__, and choose __Browse folder__.
+
+ :::image type="content" source="media/how-to-safely-rollout-managed-endpoints/register-model-folder.png" alt-text="A screenshot of the browse folder option." lightbox="media/how-to-safely-rollout-managed-endpoints/register-model-folder.png":::
+
+1. Select the `\azureml-examples\cli\endpoints\online\model-1\model` folder from the local copy of the repo you cloned or downloaded earlier. When prompted, select __Upload__ and wait for the upload to complete.
+1. Select __Next__ after the folder upload is completed.
+1. Enter a friendly __Name__ for the model. The steps in this article assume the model is named `model-1`.
+1. Select __Next__, and then __Register__ to complete registration.
+1. Repeat the previous steps to register a `model-2` from the `\azureml-examples\cli\endpoints\online\model-2\model` folder in the local copy of the repo you cloned or downloaded earlier.
+
+For more information on working with registered models, see [Register and work with models](how-to-manage-models.md).
+
+For information on creating an environment in the studio, see [Create an environment](how-to-manage-environments-in-studio.md#create-an-environment).
+
+### Create a managed online endpoint and the 'blue' deployment
+
+Use the Azure Machine Learning studio to create a managed online endpoint directly in your browser. When you create a managed online endpoint in the studio, you must define an initial deployment. You can't create an empty managed online endpoint.
+
+One way to create a managed online endpoint in the studio is from the **Models** page. This method also provides an easy way to add a model to an existing managed online deployment. To deploy the model named `model-1` that you registered previously in the [Register your model](#register-your-model) section:
+
+1. Go to the [Azure Machine Learning studio](https://ml.azure.com).
+1. In the left navigation bar, select the **Models** page.
+1. Select the model named `model-1` by checking the circle next to its name.
+1. Select **Deploy** > **Real-time endpoint**.
+
+ :::image type="content" source="media/how-to-safely-rollout-managed-endpoints/deploy-from-models-page.png" lightbox="media/how-to-safely-rollout-managed-endpoints/deploy-from-models-page.png" alt-text="A screenshot of creating a managed online endpoint from the Models UI.":::
+
+ This action opens up a window where you can specify details about your endpoint.
+
+ :::image type="content" source="media/how-to-safely-rollout-managed-endpoints/online-endpoint-wizard.png" lightbox="media/how-to-safely-rollout-managed-endpoints/online-endpoint-wizard.png" alt-text="A screenshot of a managed online endpoint create wizard.":::
+
+1. Enter an __Endpoint name__.
+1. Keep the default selections: __Managed__ for the compute type and __key-based authentication__ for the authentication type.
+1. Select __Next__ until you get to the "Deployment" page. Here, perform the following tasks:
+
+ * Name the deployment "blue".
+ * Check the box for __Enable Application Insights diagnostics and data collection__ to allow you to view graphs of your endpoint's activities in the studio later.
+
+1. Select __Next__ to go to the "Environment" page. Here, select the following options:
+
+ * __Select scoring file and dependencies__: Browse and select the `\azureml-examples\cli\endpoints\online\model-1\onlinescoring\score.py` file from the repo you cloned or downloaded earlier.
+ * __Choose an environment__ section: Select the **Scikit-learn 0.24.1** curated environment.
+
+1. Select __Next__ to go to the "Compute" page. Here, keep the default selection for the virtual machine "Standard_DS3_v2" and change the __Instance count__ to 1.
+1. Select __Next__ to accept the default traffic allocation (100%) to the blue deployment.
+1. Review your deployment settings and select the __Create__ button.
+
+ :::image type="content" source="media/how-to-safely-rollout-managed-endpoints/review-deployment-creation-page.png" lightbox="media/how-to-safely-rollout-managed-endpoints/review-deployment-creation-page.png" alt-text="A screenshot showing the review page for creating a managed online endpoint with a deployment.":::
+
+Alternatively, you can create a managed online endpoint from the **Endpoints** page in the studio.
+
+1. Go to the [Azure Machine Learning studio](https://ml.azure.com).
+1. In the left navigation bar, select the **Endpoints** page.
+1. Select **+ Create**.
+
+ :::image type="content" source="media/how-to-safely-rollout-managed-endpoints/endpoint-create-managed-online-endpoint.png" lightbox="media/how-to-safely-rollout-managed-endpoints/endpoint-create-managed-online-endpoint.png" alt-text="A screenshot for creating managed online endpoint from the Endpoints tab.":::
+
+This action opens up a window for you to specify details about your endpoint and deployment. Enter settings for your endpoint and deployment as described in the previous steps 5-11, accepting defaults until you're prompted to __Create__ the deployment.
+ ## Confirm your existing deployment
We'll send a sample request using a [json](https://github.com/Azure/azureml-exam
[!notebook-python[](~/azureml-examples-main/sdk/python/endpoints/online/managed/online-endpoints-safe-rollout.ipynb?name=test_deployment)]
+# [Studio](#tab/azure-studio)
+
+### View managed online endpoints
+
+You can view all your managed online endpoints in the **Endpoints** page. Go to the endpoint's **Details** page to find critical information including the endpoint URI, status, testing tools, activity monitors, deployment logs, and sample consumption code:
+
+1. In the left navigation bar, select **Endpoints**. Here, you can see a list of all the endpoints in the workspace.
+1. (Optional) Create a **Filter** on **Compute type** to show only **Managed** compute types.
+1. Select an endpoint name to view the endpoint's __Details__ page.
++
+### Test the endpoint with sample data
+
+Use the **Test** tab in the endpoint's details page to test your managed online deployment. Enter sample input and view the results.
+
+1. Select the **Test** tab in the endpoint's detail page. The blue deployment is already selected in the dropdown menu.
+1. Copy the sample input from the [json](https://github.com/Azure/azureml-examples/tree/main/sdk/python/endpoints/online/model-1/sample-request.json) file.
+1. Paste the sample input in the test box.
+1. Select **Test**.
++ ## Scale your existing deployment to handle more traffic
Using the `MLClient` created earlier, we'll get a handle to the deployment. The
[!notebook-python[](~/azureml-examples-main/sdk/python/endpoints/online/managed/online-endpoints-safe-rollout.ipynb?name=get_endpoint_details)]
+# [Studio](#tab/azure-studio)
+
+Use the following instructions to scale the deployment up or down by adjusting the number of instances:
+
+1. On the endpoint's Details page, find the card for the blue deployment.
+1. Select the **edit icon** in the header of the blue deployment's card.
+1. Change the instance count to 2.
+1. Select **Update**.
++ ## Deploy a new model, but send it no traffic yet
Create a new deployment named `green`:
:::code language="azurecli" source="~/azureml-examples-main/cli/deploy-safe-rollout-online-endpoints.sh" ID="create_green" :::
-Since we haven't explicitly allocated any traffic to `green`, it will have zero traffic allocated to it. You can verify that using the command:
+Since we haven't explicitly allocated any traffic to `green`, it has zero traffic allocated to it. You can verify that using the command:
:::code language="azurecli" source="~/azureml-examples-main/cli/deploy-safe-rollout-online-endpoints.sh" ID="get_traffic" :::
Though `green` has 0% of traffic allocated, you can still invoke the endpoint an
[!notebook-python[](~/azureml-examples-main/sdk/python/endpoints/online/managed/online-endpoints-safe-rollout.ipynb?name=test_new_deployment)]
+# [Studio](#tab/azure-studio)
+
+Create a new deployment to add to your managed online endpoint and name the deployment `green`.
+
+From the endpoint's **Details** page:
+
+1. Select the **+ Add Deployment** button.
+1. Select **Deploy a model**.
+1. Select **Next** to go to the "Model" page and select the model _model-2_.
+1. Select **Next** to go to the "Deployment" page and perform the following tasks:
+ 1. Name the deployment "green".
+ 1. Enable application insights diagnostics and data collection.
+1. Select __Next__ to go to the "Environment" page. Here, select the following options:
+ * __Select scoring file and dependencies__: Browse and select the `\azureml-examples\cli\endpoints\online\model-2\onlinescoring\score.py` file from the repo you cloned or downloaded earlier.
+ * __Choose an environment__ section: Select the **Scikit-learn 0.24.1** curated environment.
+1. Select __Next__ to go to the "Compute" page. Here, keep the default selection for the virtual machine "Standard_DS3_v2" and change the __Instance count__ to 1.
+1. Select __Next__ to go to the "Traffic" page. Here, keep the default traffic allocation to the deployments (100% traffic to "blue" and 0% traffic to "green").
+1. Select __Next__ to review your deployment settings.
+
+ :::image type="content" source="media/how-to-safely-rollout-managed-endpoints/add-green-deployment-from-endpoint-page.png" lightbox="media/how-to-safely-rollout-managed-endpoints/add-green-deployment-from-endpoint-page.png" alt-text="A screenshot of Add deployment option from Endpoint details page.":::
+1. Select __Create__ to create the deployment.
+
+Alternatively, you can use the **Models** page to add a deployment:
+
+1. In the left navigation bar, select the **Models** page.
+1. Select a model by checking the circle next to the model name.
+1. Select **Deploy** > **Real-time endpoint**.
+1. Choose to deploy to an existing managed online endpoint.
+ :::image type="content" source="media/how-to-safely-rollout-managed-endpoints/add-green-deployment-from-models-page.png" lightbox="media/how-to-safely-rollout-managed-endpoints/add-green-deployment-from-models-page.png" alt-text="A screenshot of Add deployment option from Models page.":::
+1. Follow the previous steps 3 to 9 to finish creating the green deployment.
+
+### Test the new deployment
+
+Though `green` has 0% of traffic allocated, you can still invoke the endpoint and deployment. Use the **Test** tab in the endpoint's details page to test your managed online deployment. Enter sample input and view the results.
+
+1. Select the **Test** tab in the endpoint's details page.
+1. Select the green deployment from the dropdown menu.
+1. Copy the sample input from the [json](https://github.com/Azure/azureml-examples/tree/main/sdk/python/endpoints/online/model-2/sample-request.json) file.
+1. Paste the sample input in the test box.
+1. Select **Test**.
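Outside the studio, you can exercise a zero-traffic deployment by naming it explicitly when you invoke the endpoint; a sketch reusing the earlier `ml_client` and assuming a local copy of the sample request file:

```python
# deployment_name routes the request to green even though it has 0% of traffic.
response = ml_client.online_endpoints.invoke(
    endpoint_name="my-endpoint",
    deployment_name="green",
    request_file="./model-2/sample-request.json",
)
print(response)
```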
+ ## Test the deployment with mirrored traffic (preview) [!INCLUDE [preview disclaimer](../../includes/machine-learning-preview-generic-disclaimer.md)]
-Once you've tested your `green` deployment, you can 'mirror' (or copy) a percentage of the live traffic to it. Mirroring traffic (also called shadowing) doesn't change the results returned to clients. Requests still flow 100% to the `blue` deployment. The mirrored percentage of the traffic is copied and submitted to the `green` deployment so you can gather metrics and logging without impacting your clients. Mirroring is useful when you want to validate a new deployment without impacting clients; for example, to check if latency is within acceptable bounds and that there are no HTTP errors. Testing the new deployment with traffic mirroring/shadowing is also known as [shadow testing](https://microsoft.github.io/code-with-engineering-playbook/automated-testing/shadow-testing/). The deployment receiving the mirrored traffic (in this case, the `green` deployment) can also be called the shadow deployment.
+Once you've tested your `green` deployment, you can *mirror* (or copy) a percentage of the live traffic to it. Traffic mirroring (also called shadowing) doesn't change the results returned to clients; requests still flow 100% to the `blue` deployment. The mirrored percentage of the traffic is copied and submitted to the `green` deployment so that you can gather metrics and logging without impacting your clients. Mirroring is useful when you want to validate a new deployment before exposing clients to it; for example, you can use mirroring to check that latency is within acceptable bounds and that there are no HTTP errors. Testing the new deployment with traffic mirroring/shadowing is also known as [shadow testing](https://microsoft.github.io/code-with-engineering-playbook/automated-testing/shadow-testing/). The deployment receiving the mirrored traffic (in this case, the `green` deployment) can also be called the *shadow deployment*.
+
+Mirroring has the following limitations:
+* Mirroring is supported for the CLI (v2) (version 2.4.0 or above) and Python SDK (v2) (version 1.0.0 or above). If you use an older version of CLI/SDK to update an endpoint, you'll lose the mirror traffic setting.
+* Mirroring isn't currently supported for Kubernetes online endpoints.
+* You can mirror traffic to only one deployment in an endpoint.
+* The maximum percentage of traffic you can mirror is 50%. This limit reduces the effect on your [endpoint bandwidth quota](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints) (default 5 MBPS); your endpoint bandwidth is throttled if you exceed the allocated quota. For information on monitoring bandwidth throttling, see [Monitor managed online endpoints](how-to-monitor-online-endpoints.md#metrics-at-endpoint-scope).
-> [!WARNING]
-> Mirroring traffic uses your [endpoint bandwidth quota](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints) (default 5 MBPS). Your endpoint bandwidth will be throttled if you exceed the allocated quota. For information on monitoring bandwidth throttling, see [Monitor managed online endpoints](how-to-monitor-online-endpoints.md#metrics-at-endpoint-scope).
+Also note the following behaviors:
-> [!IMPORTANT]
-> Mirrored traffic is supported for the CLI (v2) (version 2.4.0 or above) and Python SDK (v2) (version 1.0.0 or above). If you update the endpoint using an older version of CLI/SDK or Studio UI, the setting for mirrored traffic will be removed.
+* A deployment can be configured to receive only live traffic or mirrored traffic, not both.
+* When you invoke an endpoint, you can send traffic directly to a deployment by specifying the deployment's name, so that the endpoint returns the output of that deployment, whether it's configured to receive mirrored traffic or live traffic. You can use the `--deployment-name` option [for CLI v2](/cli/azure/ml/online-endpoint#az-ml-online-endpoint-invoke-optional-parameters), or `deployment_name` option [for SDK v2](/python/api/azure-ai-ml/azure.ai.ml.operations.onlineendpointoperations#azure-ai-ml-operations-onlineendpointoperations-invoke) to specify the deployment.
+ > [!NOTE]
+ > When you invoke the endpoint and specify a deployment name, Azure Machine Learning doesn't mirror that request to the shadow deployment. Azure Machine Learning mirrors only the traffic that's sent to the endpoint without a deployment specified.
+
+Now, let's set the green deployment to receive 10% of mirrored traffic. Clients will still receive predictions from the blue deployment only.
+ # [Azure CLI](#tab/azure-cli)
for i in {1..20} ; do
done ```
-# [Python](#tab/python)
-
-The following command mirrors 10% of the traffic to the `green` deployment:
-
-[!notebook-python[](~/azureml-examples-main/sdk/python/endpoints/online/managed/online-endpoints-safe-rollout.ipynb?name=new_deployment_traffic)]
-
-You can test mirror traffic by invoking the endpoint several times:
-[!notebook-python[](~/azureml-examples-main/sdk/python/endpoints/online/managed/online-endpoints-safe-rollout.ipynb?name=several_tests_to_mirror_traffic)]
---
-Mirroring has the following limitations:
-* You can only mirror traffic to one deployment.
-* Mirror traffic isn't currently supported for Kubernetes online endpoints.
-* The maximum mirrored traffic you can configure is 50%. This limit is to reduce the impact on your endpoint bandwidth quota.
-
-Also note the following behavior:
-* A deployment can only be set to live or mirror traffic, not both.
-* You can send traffic directly to the mirror deployment by specifying the deployment set for mirror traffic.
-* You can send traffic directly to a live deployment by specifying the deployment set for live traffic, but in this case the traffic won't be mirrored to the mirror deployment. Mirror traffic is routed from traffic sent to endpoint without specifying the deployment.
-
-> [!TIP]
-> You can use `--deployment-name` option [for CLI v2](/cli/azure/ml/online-endpoint#az-ml-online-endpoint-invoke-optional-parameters), or `deployment_name` option [for SDK v2](/python/api/azure-ai-ml/azure.ai.ml.operations.onlineendpointoperations#azure-ai-ml-operations-onlineendpointoperations-invoke) to specify the deployment to be routed to.
--
-# [Azure CLI](#tab/azure-cli)
You can confirm that the specific percentage of the traffic was sent to the `green` deployment by viewing the logs from the deployment: ```azurecli
After testing, you can set the mirror traffic to zero to disable mirroring:
:::code language="azurecli" source="~/azureml-examples-main/cli/deploy-safe-rollout-online-endpoints.sh" ID="reset_mirror_traffic" ::: # [Python](#tab/python)+
+The following command mirrors 10% of the traffic to the `green` deployment:
+
+[!notebook-python[](~/azureml-examples-main/sdk/python/endpoints/online/managed/online-endpoints-safe-rollout.ipynb?name=new_deployment_traffic)]
+
+You can test mirror traffic by invoking the endpoint several times:
+[!notebook-python[](~/azureml-examples-main/sdk/python/endpoints/online/managed/online-endpoints-safe-rollout.ipynb?name=several_tests_to_mirror_traffic)]
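For reference, the notebook cells referenced above reduce to the following pattern (a sketch reusing the earlier `ml_client` and placeholder names):

```python
# Mirror 10% of live traffic to green; blue still serves every response.
endpoint = ml_client.online_endpoints.get(name="my-endpoint")
endpoint.mirror_traffic = {"green": 10}
ml_client.online_endpoints.begin_create_or_update(endpoint).result()

# Invoke the endpoint several times; about 10% of these requests are copied
# to green, whose logs you can then inspect.
for _ in range(20):
    ml_client.online_endpoints.invoke(
        endpoint_name="my-endpoint",
        request_file="./model-1/sample-request.json",
    )
```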
+ You can confirm that the specific percentage of the traffic was sent to the `green` deployment by viewing the logs from the deployment: ```python
After testing, you can set the mirror traffic to zero to disable mirroring:
[!notebook-python[](~/azureml-examples-main/sdk/python/endpoints/online/managed/online-endpoints-safe-rollout.ipynb?name=disable_traffic_mirroring)]
+# [Studio](#tab/azure-studio)
+
+To mirror 10% of the traffic to the `green` deployment:
+
+1. From the endpoint's Details page, select **Update traffic**.
+1. Slide the button to **Enable mirrored traffic (Preview)**.
+1. Select the **green** deployment in the "Deployment name" dropdown menu.
+1. Keep the default traffic allocation of 10%.
+1. Select **Update**.
++
+The endpoint details page now shows mirrored traffic allocation of 10% to the `green` deployment.
++
+Now, when you send requests to the endpoint's URI, 10% of those requests will be routed to the `green` deployment. After testing, you can disable mirroring:
+
+1. From the endpoint's Details page, select **Update traffic**.
+1. Slide the button next to **Enable mirrored traffic (Preview)** again to disable mirrored traffic.
+1. Select **Update**.
++
-## Test the new deployment with a small percentage of live traffic
+## Allocate a small percentage of live traffic to the new deployment
# [Azure CLI](#tab/azure-cli)
Once you've tested your `green` deployment, allocate a small percentage of traff
[!notebook-python[](~/azureml-examples-main/sdk/python/endpoints/online/managed/online-endpoints-safe-rollout.ipynb?name=allocate_some_traffic)]
+# [Studio](#tab/azure-studio)
+
+Once you've tested your `green` deployment, allocate a small percentage of traffic to it:
+
+1. In the endpoint's Details page, select **Update traffic**.
+1. Adjust the deployment traffic by allocating 10% to the green deployment and 90% to the blue deployment.
+1. Select **Update**.
+
-Now, your `green` deployment will receive 10% of requests.
+> [!TIP]
+> The total traffic percentage must sum to either 0% (to disable traffic) or 100% (to enable traffic).
+
+Now, your `green` deployment receives 10% of all live traffic. Clients will receive predictions from both the `blue` and `green` deployments.
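In code, the live split is the same `traffic` dictionary set when the endpoint was created; a sketch reusing the earlier `ml_client`:

```python
# Send 10% of live traffic to green; the percentages must sum to 100 (or 0).
endpoint = ml_client.online_endpoints.get(name="my-endpoint")
endpoint.traffic = {"blue": 90, "green": 10}
ml_client.online_endpoints.begin_create_or_update(endpoint).result()
```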
:::image type="content" source="./media/how-to-safely-rollout-managed-endpoints/endpoint-concept.png" alt-text="Diagram showing traffic split between deployments.":::
Once you're fully satisfied with your `green` deployment, switch all traffic to
[!notebook-python[](~/azureml-examples-main/sdk/python/endpoints/online/managed/online-endpoints-safe-rollout.ipynb?name=allocate_all_traffic)]
+# [Studio](#tab/azure-studio)
+
+Once you're fully satisfied with your `green` deployment, switch all traffic to it.
+
+1. In the endpoint's Details page, select **Update traffic**.
+1. Adjust the deployment traffic by allocating 100% to the green deployment and 0% to the blue deployment.
+1. Select **Update**.
+ ## Remove the old deployment
+Use the following steps to delete an individual deployment from a managed online endpoint. Deleting an individual deployment doesn't affect the other deployments in the managed online endpoint:
+ # [Azure CLI](#tab/azure-cli) :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-safe-rollout-online-endpoints.sh" ID="delete_blue" :::
Once you're fully satisfied with your `green` deployment, switch all traffic to
[!notebook-python[](~/azureml-examples-main/sdk/python/endpoints/online/managed/online-endpoints-safe-rollout.ipynb?name=remove_old_deployment)]
+# [Studio](#tab/azure-studio)
+
+> [!NOTE]
+> You cannot delete a deployment that has live traffic allocated to it. You must first [set traffic allocation](#send-all-traffic-to-your-new-deployment) for the deployment to 0% before deleting it.
+
+1. In the endpoint's Details page, find the blue deployment.
+1. Select the **delete icon** next to the deployment name.
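Programmatically, the same two-step removal looks like the following sketch (placeholder names as before): traffic on `blue` must reach zero before the delete.

```python
# Shift all live traffic to green, then delete the now-idle blue deployment.
endpoint = ml_client.online_endpoints.get(name="my-endpoint")
endpoint.traffic = {"blue": 0, "green": 100}
ml_client.online_endpoints.begin_create_or_update(endpoint).result()

ml_client.online_deployments.begin_delete(
    name="blue", endpoint_name="my-endpoint"
).result()
```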
+ ## Delete the endpoint and deployment # [Azure CLI](#tab/azure-cli)
-If you aren't going use the deployment, you should delete it with:
+If you aren't going to use the endpoint and deployment, you should delete them. By deleting the endpoint, you'll also delete all its underlying deployments.
:::code language="azurecli" source="~/azureml-examples-main/cli/deploy-safe-rollout-online-endpoints.sh" ID="delete_endpoint" ::: # [Python](#tab/python)
-If you aren't going use the deployment, you should delete it with:
+If you aren't going to use the endpoint and deployment, you should delete them. By deleting the endpoint, you'll also delete all its underlying deployments.
[!notebook-python[](~/azureml-examples-main/sdk/python/endpoints/online/managed/online-endpoints-safe-rollout.ipynb?name=delete_endpoint)]
+# [Studio](#tab/azure-studio)
+
+If you aren't going to use the endpoint and deployment, you should delete them. By deleting the endpoint, you'll also delete all its underlying deployments.
+
+1. Go to the [Azure Machine Learning studio](https://ml.azure.com).
+1. In the left navigation bar, select the **Endpoints** page.
+1. Select an endpoint by checking the circle next to the endpoint name.
+1. Select **Delete**.
+
+Alternatively, you can delete a managed online endpoint directly by selecting the **Delete** icon in the endpoint's details page.
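The equivalent SDK call, reusing the placeholder endpoint name from the earlier sketches:

```python
# Deleting the endpoint also deletes all of its underlying deployments.
ml_client.online_endpoints.begin_delete(name="my-endpoint").result()
```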
+ ## Next steps - [Explore online endpoint samples](https://github.com/Azure/azureml-examples/tree/v2samplesreorg/sdk/python/endpoints) - [Deploy models with REST](how-to-deploy-with-rest.md)-- [Create and use online endpoints in the studio](how-to-use-managed-online-endpoint-studio.md)
+- [Use network isolation with managed online endpoints](how-to-secure-online-endpoint.md)
- [Access Azure resources with a online endpoint and managed identity](how-to-access-resources-from-endpoints-managed-identities.md) - [Monitor managed online endpoints](how-to-monitor-online-endpoints.md) - [Manage and increase quotas for resources with Azure Machine Learning](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints)
machine-learning How To Select Algorithms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-select-algorithms.md
description: How to select Azure Machine Learning algorithms for supervised and unsupervised learning in clustering, classification, or regression experiments. -+
machine-learning How To Submit Spark Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-submit-spark-jobs.md
To create a job, a standalone Spark job can be defined as a YAML specification f
- `runtime_version` - defines the Spark runtime version. The following Spark runtime versions are currently supported:
  - `3.1`
  - `3.2`
+ > [!IMPORTANT]
+ >
+ > End of life announcement (EOLA) for Azure Synapse Runtime for Apache Spark 3.1 was made on January 26, 2023. Accordingly, Apache Spark 3.1 will not be supported after July 31, 2023. We recommend that you use Apache Spark 3.2.
An example is shown here: ```yaml
To create a standalone Spark job, use the `azure.ai.ml.spark` function, with the
  - `Standard_E64S_V3`
- `runtime_version` - a key that defines the Spark runtime version. The following Spark runtime versions are currently supported:
  - `3.1.0`
- - `3.2.0`
+ - `3.2.0`
+ > [!IMPORTANT]
+ >
+ > End of life announcement (EOLA) for Azure Synapse Runtime for Apache Spark 3.1 was made on January 26, 2023. Accordingly, Apache Spark 3.1 will not be supported after July 31, 2023. We recommend that you use Apache Spark 3.2.
- `compute` - the name of an attached Synapse Spark pool.
- `inputs` - the inputs for the Spark job. This parameter should pass a dictionary with mappings of the input data bindings used in the job. This dictionary has these values:
  - a dictionary key defines the input name
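Putting these parameters together, here's a hedged sketch of a standalone Spark job submission with the `azure.ai.ml.spark` function; the entry script, pool name, and storage path are placeholders, not values from this article.

```python
from azure.ai.ml import Input, MLClient, spark
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<SUBSCRIPTION_ID>",
    resource_group_name="<RESOURCE_GROUP>",
    workspace_name="<WORKSPACE>",
)

spark_job = spark(
    display_name="standalone-spark-job",
    code="./src",                       # folder containing the entry script
    entry={"file": "wrangle.py"},       # placeholder script name
    driver_cores=1,
    driver_memory="2g",
    executor_cores=2,
    executor_memory="2g",
    executor_instances=2,
    compute="<ATTACHED_SYNAPSE_POOL>",  # name of an attached Synapse Spark pool
    inputs={
        "input_data": Input(
            type="uri_file",
            path="abfss://<CONTAINER>@<STORAGE_ACCOUNT>.dfs.core.windows.net/data.csv",
            mode="direct",
        )
    },
    args="--input_data ${{inputs.input_data}}",
)

returned_job = ml_client.jobs.create_or_update(spark_job)
print(returned_job.studio_url)
```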
To submit a standalone Spark job using the Azure Machine Learning studio UI:
1. If you selected **Spark automatic compute (Preview)**:
   1. Select **Virtual machine size**.
   1. Select **Spark runtime version**.
+ > [!IMPORTANT]
+ >
+ > End of life announcement (EOLA) for Azure Synapse Runtime for Apache Spark 3.1 was made on January 26, 2023. Accordingly, Apache Spark 3.1 will not be supported after July 31, 2023. We recommend that you use Apache Spark 3.2.
1. If you selected **Attached compute**:
   1. Select an attached Synapse Spark pool from the **Select Azure Machine Learning attached compute** menu.
1. Select **Next**.
This functionality isn't available in the Studio UI. The Studio UI doesn't suppo
## Next steps - [Code samples for Spark jobs using Azure Machine Learning CLI](https://github.com/Azure/azureml-examples/tree/main/cli/jobs/spark)-- [Code samples for Spark jobs using Azure Machine Learning Python SDK](https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs/spark)
+- [Code samples for Spark jobs using Azure Machine Learning Python SDK](https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs/spark)
machine-learning How To Train Distributed Gpu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-distributed-gpu.md
-+ Last updated 11/30/2022
machine-learning How To Train Keras https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-keras.md
description: Learn how to train and register a Keras deep neural network classification model running on TensorFlow using Azure Machine Learning SDK (v2). -+
machine-learning How To Train Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-model.md
-+ Last updated 08/25/2022
machine-learning How To Train Pytorch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-pytorch.md
description: Learn how to run your PyTorch training scripts at enterprise scale using Azure Machine Learning SDK (v2). -+
machine-learning How To Train Scikit Learn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-scikit-learn.md
description: Learn how Azure Machine Learning enables you to scale out a scikit-learn training job using elastic cloud compute resources (v2). -+
machine-learning How To Train Tensorflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-tensorflow.md
description: Learn how Azure Machine Learning SDK (v2) enables you to scale out a TensorFlow training job using elastic cloud compute resources. -+
machine-learning How To Train With Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-with-ui.md
description: Learn how to use the job creation UI in Azure Machine Learning studio to create a training job. -+
machine-learning How To Tune Hyperparameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-tune-hyperparameters.md
-+ Last updated 05/02/2022
machine-learning Interactive Data Wrangling With Apache Spark Azure Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/interactive-data-wrangling-with-apache-spark-azure-ml.md
The Notebooks UI also provides options for Spark session configuration, for the
1. Select **Configure session** at the bottom of the screen.
1. Select a version of **Apache Spark** from the dropdown menu.
+ > [!IMPORTANT]
+ >
+ > End of life announcement (EOLA) for Azure Synapse Runtime for Apache Spark 3.1 was made on January 26, 2023. Accordingly, Apache Spark 3.1 will not be supported after July 31, 2023. We recommend that you use Apache Spark 3.2.
1. Select **Instance type** from the dropdown menu. The following instance types are currently supported:
   - `Standard_E4s_v3`
   - `Standard_E8s_v3`
machine-learning Reference Yaml Schedule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-schedule.md
Current schedule supports the following timezones. The key can be used directly
| UTC +02:00 | ISRAEL_STANDARD_TIME | "Israel Standard Time" | | UTC +02:00 | KALININGRAD_STANDARD_TIME | "Kaliningrad Standard Time" | | UTC +02:00 | LIBYA_STANDARD_TIME | "Libya Standard Time" |
-| UTC +03:00 | TURKEY_STANDARD_TIME | "Turkey Standard Time" |
+| UTC +03:00 | TÜRKIYE_STANDARD_TIME | "Türkiye Standard Time" |
| UTC +03:00 | ARABIC_STANDARD_TIME | "Arabic Standard Time" | | UTC +03:00 | ARAB_STANDARD_TIME | "Arab Standard Time" | | UTC +03:00 | BELARUS_STANDARD_TIME | "Belarus Standard Time" |
managed-grafana Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/encryption.md
Previously updated : 07/22/2022- Last updated : 03/23/2023+ # Encryption in Azure Managed Grafana
This article provides a short description of encryption within Azure Managed Gra
## Encryption in Azure Cosmos DB and Azure Database for PostgreSQL
-Managed Grafana leverages encryption offered by Azure Cosmos DB and Azure Database for PostgreSQL.
+Azure Managed Grafana leverages encryption offered by Azure Cosmos DB and Azure Database for PostgreSQL.
Data stored in Azure Cosmos DB and Azure Database for PostgreSQL is encrypted at rest on storage devices and in transport over the network.
For more information, go to [Encryption at rest in Azure Cosmos DB](../cosmos-db
## Server-side encryption
-The encryption model used by Managed Grafana is the server-side encryption model with Service-Managed keys.
+The encryption model used by Azure Managed Grafana is the server-side encryption model with Service-Managed keys.
-In this model, all key management aspects such as key issuance, rotation, and backup are managed by Microsoft. The Azure resource providers create the keys, place them in secure storage, and retrieve them when needed. For more information, go to [Server-side encryption using Service-Managed key](../security/fundamentals/encryption-models.md).
+In this model, all key management aspects such as key issuance, rotation, and backup are managed by Microsoft. The Azure resource providers create the keys, place them in secure storage, and retrieve them when needed. For more information, go to [Server-side encryption using service-managed keys](../security/fundamentals/encryption-models.md#server-side-encryption-using-service-managed-keys).
## Next steps
managed-grafana Grafana App Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/grafana-app-ui.md
Previously updated : 3/31/2022 Last updated : 3/23/2022+
-# Grafana UI
+# Grafana user interface
-This reference covers the Grafana web application's main UI components, including panels, visualizations, and dashboards. For consistency, it links to the corresponding topics in the Grafana documentation.
+This reference covers the Grafana web application's main UI components, including panels, visualizations, and dashboards. For consistency, this document links to the corresponding topics in the Grafana documentation.
## Panels
-A Grafana panel is a basic building block in Grafana. Each panel displays a dataset from a data source query using a [visualization](#visualizations). For more information about panels, refer to the following items:
+A Grafana panel is a basic building block in Grafana. Each panel displays a dataset from a data source query using a visualization. For more information about panels, refer to the following items:
-* [Working with Grafana panels](https://grafana.com/docs/grafana/latest/panels/working-with-panels/)
+* [Working with Grafana panels](https://grafana.com/docs/grafana/latest/panels-visualizations/#panels-and-visualizations/)
* [Query a data source](https://grafana.com/docs/grafana/latest/panels/query-a-data-source/)
-* [Modify visualization text and background colors](https://grafana.com/docs/grafana/latest/panels/specify-thresholds/)
* [Override field values](https://grafana.com/docs/grafana/latest/panels/override-field-values/) * [Transform data](https://grafana.com/docs/grafana/latest/panels/transform-data/) * [Format data using value mapping](https://grafana.com/docs/grafana/latest/panels/format-data/)
A Grafana panel is a basic building block in Grafana. Each panel displays a data
## Visualizations
-Grafana [panels](#panels) support various visualizations, which are visual representations of underlying data. These representations are often graphical and include:
+Grafana panels support various visualizations, which are visual representations of underlying data. These representations are often graphical and include:
* Graphs and charts * [Time series](https://grafana.com/docs/grafana/latest/visualizations/time-series/)
A Grafana dashboard is a collection of [panels](#panels) arranged in rows and co
## Next steps > [!div class="nextstepaction"]
-> [How to share an Azure Managed Grafana instance](./how-to-share-grafana-workspace.md)
+> [Create a Grafana dashboard](./how-to-create-dashboard.md)
managed-grafana High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/high-availability.md
Previously updated : 7/27/2022 - Last updated : 3/23/2023 + # Azure Managed Grafana service reliability
When the zone redundancy option is enabled, VMs are spread across [availability
In a zone-wide outage, no user action is required. An impacted Managed Grafana instance will rebalance itself to take advantage of the healthy zone automatically. The Managed Grafana service will attempt to heal the affected instances during zone recovery. > [!NOTE]
-> Zone redundancy can only be enabled when creating the Managed Grafana instance, and can't be modified subsequently. The zone redundancy option comes with an additional cost. Go to [Azure Managed Grafana pricing](https://azure.microsoft.com/pricing/details/managed-grafana/) for details.
+> Zone redundancy can only be enabled when creating the Azure Managed Grafana instance, and can't be modified subsequently. The zone redundancy option comes with an additional cost. Go to [Azure Managed Grafana pricing](https://azure.microsoft.com/pricing/details/managed-grafana/) for details.
### With zone redundancy disabled
-Zone redundancy is disabled in the Managed Grafana Standard tier by default. In this scenario, virtual machines are created as single-region resources and should not be expected to survive zone-downs scenarios as they can go down at same time.
+Zone redundancy is disabled in the Azure Managed Grafana Standard tier by default. In this scenario, virtual machines are created as single-region resources and should not be expected to survive zone-down scenarios, as they can go down at the same time.
## Supported regions Zone redundancy support is enabled in the following regions: - | Americas | Europe | Africa | Asia Pacific | ||-|-|-| | East US | West Europe | | Australia East |
For a complete list of regions where Managed Grafana is available, see [Products
## Next steps > [!div class="nextstepaction"]
-> [Create an Azure Managed Grafana instance](./quickstart-managed-grafana-portal.md)
+> [Enable zone redundancy](./how-to-enable-zone-redundancy.md)
managed-grafana How To Api Calls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-api-calls.md
Previously updated : 08/11/2022 Last updated : 03/23/2023+ # Tutorial: Call Grafana APIs programmatically
In this tutorial, you learn how to:
## Prerequisites
-* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/).
-* An Azure Managed Grafana workspace. [Create an Azure Managed Grafana instance](./quickstart-managed-grafana-portal.md).
-* An Azure Active Directory (Azure AD) application with a service principal. [Create an Azure AD application and service principal](../active-directory/develop/howto-create-service-principal-portal.md). For simplicity, use an application located in the same Azure AD tenant as your Managed Grafana instance.
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/).
+- An Azure Managed Grafana workspace. [Create an Azure Managed Grafana instance](./quickstart-managed-grafana-portal.md).
+- An Azure Active Directory (Azure AD) application with a service principal. [Create an Azure AD application and service principal](../active-directory/develop/howto-create-service-principal-portal.md). For simplicity, use an application located in the same Azure AD tenant as your Azure Managed Grafana instance.
## Sign in to Azure
You now need to gather some information, which you'll use to get a Grafana API a
> [!NOTE]
> You can only access a secret's value immediately after creating it. Copy the value before leaving the page to use it in the next step of this tutorial.
-1. Find your Grafana endpoint URL:
+1. Find the Grafana endpoint URL:
   1. In the Azure portal, enter *Azure Managed Grafana* in the **Search resources, services, and docs (G+ /)** bar.
   1. Select **Azure Managed Grafana** and open your Managed Grafana workspace.
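With those values in hand, the token exchange and a Grafana API call look roughly like the sketch below. It uses the `requests` library; the tenant, client, secret, endpoint URL, and Grafana application scope are all placeholders (the scope value in particular is an assumption; use the value given in this tutorial).

```python
import requests

TENANT_ID = "<TENANT_ID>"
CLIENT_ID = "<CLIENT_ID>"
CLIENT_SECRET = "<CLIENT_SECRET>"
GRAFANA_ENDPOINT = "https://<your-instance>.grafana.azure.com"

# Request an access token for the Grafana instance by using the
# client-credentials flow with the service principal.
token_response = requests.post(
    f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token",
    data={
        "grant_type": "client_credentials",
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        "scope": "<GRAFANA_APP_ID>/.default",  # placeholder scope
    },
)
access_token = token_response.json()["access_token"]

# Call a Grafana API with the token, for example to list data sources.
response = requests.get(
    f"{GRAFANA_ENDPOINT}/api/datasources",
    headers={"Authorization": f"Bearer {access_token}"},
)
print(response.json())
```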
managed-grafana How To Deterministic Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-deterministic-ip.md
Previously updated : 08/24/2022 Last updated : 03/23/2022 # Use deterministic outbound IPs
-In this guide, learn how to activate deterministic outbound IP support used by Azure Managed Grafana to communicate with its data sources, disable public access and set up a firewall rule to allow inbound requests from your Grafana instance.
+In this guide, learn how to activate deterministic outbound IP support used by Azure Managed Grafana to communicate with data sources, disable public access, and set up a firewall rule to allow inbound requests from your Grafana instance.
## Prerequisites
In this guide, learn how to activate deterministic outbound IP support used by A
## Enable deterministic outbound IPs
-Deterministic outbound IP support is disabled by default in Azure Managed Grafana. You can enable this feature during the creation of the instance, or you can activate it on an instance that's already been created.
+Deterministic outbound IP support is disabled by default in Azure Managed Grafana. You can enable this feature during the creation of the instance, or you can activate it on an existing instance.
### Create an Azure Managed Grafana workspace with deterministic outbound IPs enabled
For more information about creating a new instance, go to [Quickstart: Create an
Run the [az grafana create](/cli/azure/grafana#az-grafana-create) command to create an Azure Managed Grafana instance with deterministic outbound IPs enabled. Replace `<azure-managed-grafana-name>` and `<resource-group>` with the name of the new Azure Managed Grafana instance and a resource group.
-```azurecli-interactive
+```azurecli
az grafana create --name <azure-managed-grafana-name> --resource-group <resource-group> --deterministic-outbound-ip Enabled ```
az grafana create --name <azure-managed-grafana-name> --resource-group <resource
#### [Portal](#tab/portal)
- 1. In the Azure portal, under **Settings** select **Configuration**, and then under **Deterministic outbound IP**, select **Enable**.
+ 1. In the Azure portal, under **Settings** select **Configuration**, and then under **General settings** > **Deterministic outbound IP**, select **Enable**.
:::image type="content" source="media/deterministic-ips/enable-deterministic-ip-addresses.png" alt-text="Screenshot of the Azure platform. Enable deterministic IPs."::: 1. Select **Save** to confirm the activation of deterministic outbound IP addresses.
az grafana create --name <azure-managed-grafana-name> --resource-group <resource
Run the [az grafana update](/cli/azure/grafana#az-grafana-update) command to update your Azure Managed Grafana instance and enable deterministic outbound IPs. Replace `<azure-managed-grafana-name>` with the name of your Azure Managed Grafana instance.
-```azurecli-interactive
+```azurecli
az grafana update --name <azure-managed-grafana-name> --deterministic-outbound-ip Enabled ```
Check if the Azure Managed Grafana endpoint can still access your data source.
Run the [az grafana data-source query](/cli/azure/grafana/data-source#az-grafana-data-source-query) command to query the data source. Replace `<azure-managed-grafana-name>` and `<data-source-name>` with the name of your Azure Managed Grafana instance and the name of your data source.
-```azurecli-interactive
+```azurecli
az grafana data-source query --name <azure-managed-grafana-name> --data-source <data-source-name> --output table ```
If the following error message is displayed, Azure Managed Grafana can't access
## Next steps > [!div class="nextstepaction"]
-> [Call Grafana APIs](how-to-api-calls.md)
+> [Set up private access](how-to-set-up-private-access.md)
managed-grafana How To Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-permissions.md
Previously updated : 3/08/2022 Last updated : 02/23/2023 # How to modify access permissions to Azure Monitor
managed-grafana Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/overview.md
Previously updated : 3/31/2022 Last updated : 3/23/2023 # What is Azure Managed Grafana? Azure Managed Grafana is a data visualization platform built on top of the Grafana software by Grafana Labs. It's built as a fully managed Azure service operated and supported by Microsoft. Grafana helps you bring together metrics, logs and traces into a single user interface. With its extensive support for data sources and graphing capabilities, you can view and analyze your application and infrastructure telemetry data in real-time.
-Azure Managed Grafana is optimized for the Azure environment. It works seamlessly with many Azure services. It provides the following integration features:
+Azure Managed Grafana is optimized for the Azure environment. It works seamlessly with many Azure services and provides the following integration features:
* Built-in support for [Azure Monitor](../azure-monitor/index.yml) and [Azure Data Explorer](/azure/data-explorer/) * User authentication and access control using Azure Active Directory identities
To learn more about how Grafana works, visit the [Getting Started documentation]
## Why use Azure Managed Grafana?
-Managed Grafana lets you bring together all your telemetry data into one place. It can access a wide variety of data sources supported, including your data stores in Azure and elsewhere. By combining charts, logs and alerts into one view, you can get a holistic view of your application and infrastructure, and correlate information across multiple datasets.
+Azure Managed Grafana lets you bring together all your telemetry data into one place. It can access a wide variety of supported data sources, including your data stores in Azure and elsewhere. By combining charts, logs and alerts into one view, you can get a holistic view of your application and infrastructure, and correlate information across multiple datasets.
As a fully managed service, Azure Managed Grafana lets you deploy Grafana without having to deal with setup. The service provides high availability, SLA guarantees and automatic software updates.
managed-grafana Quickstart Managed Grafana Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/quickstart-managed-grafana-portal.md
Previously updated : 08/12/2022 Last updated : 03/23/2022
Get started by creating an Azure Managed Grafana workspace using the Azure porta
>[!NOTE] > If you don't meet this requirement, once you've created a new Azure Managed Grafana instance, ask a User Access Administrator, subscription Owner or resource group Owner to grant you a Grafana Admin, Grafana Editor or Grafana Viewer role on the instance.
-## Create a Managed Grafana workspace
+## Create an Azure Managed Grafana workspace
1. Sign in to the [Azure portal](https://portal.azure.com) with your Azure account.
mysql How To Data In Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-data-in-replication.md
The results should appear similar to the following. Make sure to note the binary
All Data-in replication functions are done by stored procedures. You can find all procedures at [Data-in replication Stored Procedures](../reference-stored-procedures.md). The stored procedures can be run in the MySQL shell or MySQL Workbench.
- To link two servers and start replication, login to the target replica server in the Azure DB for MySQL service and set the external instance as the source server. This is done by using the `mysql.az_replication_change_master` stored procedure on the Azure DB for MySQL server.
+ To link two servers and start replication, log in to the target replica server in the Azure Database for MySQL service and set the external instance as the source server. This is done by using the `mysql.az_replication_change_master` stored procedure on the Azure Database for MySQL server.
```sql CALL mysql.az_replication_change_master('<master_host>', '<master_user>', '<master_password>', <master_port>, '<master_log_file>', <master_log_pos>, '<master_ssl_ca>');
network-watcher Network Watcher Security Group View Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-security-group-view-overview.md
Title: Introduction to effective security rules view in Azure Network Watcher
+ Title: Effective security rules
+ description: Learn about Azure Network Watcher effective security rules view capability. Previously updated : 03/18/2022 Last updated : 03/27/2023 -+
-# Introduction to Effective security rules view in Azure Network Watcher
+# Effective security rules view in Azure Network Watcher
-Network Security groups are associated at a subnet level or at a NIC level. When associated at a subnet level, it applies to all the VM instances in the subnet. Effective security rules view returns all the configured NSGs and rules that are associated at a NIC and subnet level for a virtual machine providing insight into the configuration. In addition, the effective security rules are returned for each of the NICs in a VM. Using Effective security rules view, you can assess a VM for network vulnerabilities such as open ports. You can also validate if your Network Security Group is working as expected based on a [comparison between the configured and the approved security rules](network-watcher-nsg-auditing-powershell.md).
+[Network security groups](../virtual-network/network-security-groups-overview.md) can be associated at a subnet level or at a network interface level. When a network security group is associated at the subnet level, it applies to all virtual machines (VMs) in the virtual network subnet. With effective security rules view in Network Watcher, you can see all inbound and outbound security rules that apply to a virtual machine's network interface(s). These rules are set by the network security groups that are associated at the virtual machine's subnet level and network interface level. Using effective security rules view, you can assess a virtual machine for network vulnerabilities such as open ports.
-In addition to network security rules placed via NSGs, Network WatcherΓÇÖs Effective security rules blade also shows the security admin rules associated with
-[Azure Virtual Network Manager (AVNM).](../virtual-network-manager/overview.md) Azure Virtual Network Manager is a management service that enables users to group, configure, deploy and manage Virtual Networks globally across subscriptions. AVNM security configuration allows users to define a collection of rules that can be applied to one or more network security groups at the global level. These security rules have a higher priority than network security group (NSG) rules.
+In addition to security rules set by network security groups, effective security rules view also shows the security admin rules associated with
+[Azure Virtual Network Manager](../virtual-network-manager/overview.md). Azure Virtual Network Manager is a management service that enables users to group, configure, deploy and manage virtual networks globally across subscriptions. Azure Virtual Network Manager security configuration allows users to define a collection of rules that can be applied to one or more network security groups at the global level. These security rules have a higher priority than network security group rules.
-A more extended use case is in security compliance and auditing. You can define a prescriptive set of security rules as a model for security governance in your organization. A periodic compliance audit can be implemented in a programmatic way by comparing the prescriptive rules with the effective rules for each of the VMs in your network.
+A more extended use case is in security compliance and auditing. You can define a prescriptive set of security rules as a model for security governance in your organization. You can implement a periodic compliance audit in a programmatic way by comparing the prescriptive rules with the effective rules for each of the virtual machines in your network.
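A periodic audit like this can be scripted with the `azure-mgmt-network` SDK, which returns the same effective rules shown in the portal; a sketch with placeholder resource names:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<SUBSCRIPTION_ID>")

# List the effective security rules applied to one network interface, then
# compare them against your prescriptive rule set.
poller = client.network_interfaces.begin_list_effective_network_security_groups(
    resource_group_name="<RESOURCE_GROUP>",
    network_interface_name="<NIC_NAME>",
)
for effective_nsg in poller.result().value:
    for rule in effective_nsg.effective_security_rules:
        print(rule.name, rule.direction, rule.access, rule.destination_port_range)
```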
-In the portal rules are displayed for each Network Interface and grouped by inbound vs outbound. This provides a simple view into the rules applied to a virtual machine. A download button is provided to easily download all the security rules no matter the tab into a CSV file.
+In the Azure portal, rules are displayed for each network interface and grouped by inbound versus outbound. This view provides a simple way to see the rules applied to a virtual machine. A download button lets you easily download all the security rules into a CSV file.
-![security group view][1]
-Rules can be selected and a new blade opens up to show the Network Security Group and source and destination prefixes. From this blade you can navigate directly to the Network Security Group resource.
+You can select a rule to see associated source and destination prefixes.
-![drilldown][2]
### Next steps
-You can also use the *Effective Security Groups* feature through other methods listed below:
-* [REST API](/rest/api/virtualnetwork/NetworkInterfaces/ListEffectiveNetworkSecurityGroups)
-* [PowerShell](/powershell/module/az.network/get-azeffectivenetworksecuritygroup)
-* [Azure CLI](/cli/azure/network/nic#az-network-nic-list-effective-nsg)
-
-Learn how to audit your Network Security Group settings by visiting [Audit Network Security Group settings with PowerShell](network-watcher-nsg-auditing-powershell.md)
-
-[1]: ./media/network-watcher-security-group-view-overview/updated-security-group-view.png
-[2]: ./media/network-watcher-security-group-view-overview/figure1.png
+- To learn about Network Watcher, see [What is Azure Network Watcher?](network-watcher-monitoring-overview.md)
+- To learn how traffic is evaluated with network security groups, see [How network security groups work](../virtual-network/network-security-group-how-it-works.md).
networking Connectivty Interoperability Control Plane https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/connectivty-interoperability-control-plane.md
Title: 'Interoperability in Azure : Control plane analysis'
+ Title: Interoperability in Azure - Control plane analysis
description: This article provides the control plane analysis of the test setup you can use to analyze interoperability between ExpressRoute, a site-to-site VPN, and virtual network peering in Azure.---+ - Previously updated : 10/18/2018- Last updated : 03/24/2023+
-# Interoperability in Azure : Control plane analysis
+# Interoperability in Azure - Control plane analysis
-This article describes the control plane analysis of the [test setup][Setup]. You can also review the [test setup configuration][Configuration] and the [data plane analysis][Data-Analysis] of the test setup.
+This article describes the control plane analysis of the [test setup](./connectivty-interoperability-preface.md). You can also review the [test setup configuration](./connectivty-interoperability-configuration.md) and the [data plane analysis](./connectivty-interoperability-data-plane.md) of the test setup.
Control plane analysis essentially examines routes that are exchanged between networks within a topology. Control plane analysis can help you understand how different networks view the topology.
-## Hub and spoke VNet perspective
+## Hub and spoke virtual network perspective
-The following figure illustrates the network from the perspective of a hub virtual network (VNet) and a spoke VNet (highlighted in blue). The figure also shows the autonomous system number (ASN) of different networks and routes that are exchanged between different networks:
+The following figure illustrates the network from the perspective of a hub virtual network and a spoke virtual network (highlighted in blue). The figure also shows the autonomous system number (ASN) of different networks and routes that are exchanged between different networks:
-![1][1]
-The ASN of the VNet's Azure ExpressRoute gateway is different from the ASN of Microsoft Enterprise Edge Routers (MSEEs). An ExpressRoute gateway uses a private ASN (a value of **65515**) and MSEEs use public ASN (a value of **12076**) globally. When you configure ExpressRoute peering, because MSEE is the peer, you use **12076** as the peer ASN. On the Azure side, MSEE establishes eBGP peering with the ExpressRoute gateway. The dual eBGP peering that the MSEE establishes for each ExpressRoute peering is transparent at the control plane level. Therefore, when you view an ExpressRoute route table, you see the VNet's ExpressRoute gateway ASN for the VNet's prefixes.
+The ASN of the virtual network's Azure ExpressRoute gateway is different from the ASN of Microsoft Enterprise edge routers (MSEEs). An ExpressRoute gateway uses a private ASN (a value of **65515**) and MSEEs use a public ASN (a value of **12076**) globally. When you configure ExpressRoute peering, because MSEE is the peer, you use **12076** as the peer ASN. On the Azure side, MSEE establishes eBGP peering with the ExpressRoute gateway. The dual eBGP peering that the MSEE establishes for each ExpressRoute peering is transparent at the control plane level. Therefore, when you view an ExpressRoute route table, you see the virtual network's ExpressRoute gateway ASN for the virtual network's prefixes.
The following figure shows a sample ExpressRoute route table:
-![5][5]
Within Azure, the ASN is significant only from a peering perspective. By default, the ASN of both the ExpressRoute gateway and the VPN gateway in Azure VPN Gateway is **65515**.
-## On-premises Location 1 and the remote VNet perspective via ExpressRoute 1
+## On-premises Location 1 and the remote virtual network perspective via ExpressRoute 1
-Both on-premises Location 1 and the remote VNet are connected to the hub VNet via ExpressRoute 1. They share the same perspective of the topology, as shown in the following diagram:
+Both on-premises Location 1 and the remote virtual network are connected to the hub virtual network via ExpressRoute 1. They share the same perspective of the topology, as shown in the following diagram:
-![2][2]
-## On-premises Location 1 and the branch VNet perspective via a site-to-site VPN
+## On-premises Location 1 and the branch virtual network perspective via a site-to-site VPN
-Both on-premises Location 1 and the branch VNet are connected to a hub VNet's VPN gateway via a site-to-site VPN connection. They share the same perspective of the topology, as shown in the following diagram:
+Both on-premises Location 1 and the branch virtual network are connected to a hub virtual network's VPN gateway via a site-to-site VPN connection. They share the same perspective of the topology, as shown in the following diagram:
-![3][3]
## On-premises Location 2 perspective
-On-premises Location 2 is connected to a hub VNet via private peering of ExpressRoute 2:
+On-premises Location 2 is connected to a hub virtual network via private peering of ExpressRoute 2:
-![4][4]
## ExpressRoute and site-to-site VPN connectivity in tandem ### Site-to-site VPN over ExpressRoute
-You can configure a site-to-site VPN by using ExpressRoute Microsoft peering to privately exchange data between your on-premises network and your Azure VNets. With this configuration, you can exchange data with confidentiality, authenticity, and integrity. The data exchange also is anti-replay. For more information about how to configure a site-to-site IPsec VPN in tunnel mode by using ExpressRoute Microsoft peering, see [Site-to-site VPN over ExpressRoute Microsoft peering][S2S-Over-ExR].
+You can configure a site-to-site VPN by using ExpressRoute Microsoft peering to privately exchange data between your on-premises network and your Azure Virtual Networks. With this configuration, you can exchange data with confidentiality, authenticity, and integrity. The data exchange also is anti-replay. For more information about how to configure a site-to-site IPsec VPN in tunnel mode by using ExpressRoute Microsoft peering, see [Site-to-site VPN over ExpressRoute Microsoft peering](../expressroute/site-to-site-vpn-over-microsoft-peering.md).
The primary limitation of configuring a site-to-site VPN that uses Microsoft peering is throughput. Throughput over the IPsec tunnel is limited by the VPN gateway capacity. The VPN gateway throughput is lower than ExpressRoute throughput. In this scenario, using the IPsec tunnel for highly secure traffic and using private peering for all other traffic helps optimize the ExpressRoute bandwidth utilization.
The primary limitation of configuring a site-to-site VPN that uses Microsoft pee
ExpressRoute serves as a redundant circuit pair to ensure high availability. You can configure geo-redundant ExpressRoute connectivity in different Azure regions. Also, as demonstrated in our test setup, within an Azure region, you can use a site-to-site VPN to create a failover path for your ExpressRoute connectivity. When the same prefixes are advertised over both ExpressRoute and a site-to-site VPN, Azure prioritizes ExpressRoute. To avoid asymmetrical routing between ExpressRoute and the site-to-site VPN, on-premises network configuration should also reciprocate by using ExpressRoute connectivity before it uses site-to-site VPN connectivity.
-For more information about how to configure coexisting connections for ExpressRoute and a site-to-site VPN, see [ExpressRoute and site-to-site coexistence][ExR-S2S-CoEx].
+For more information about how to configure coexisting connections for ExpressRoute and a site-to-site VPN, see [ExpressRoute and site-to-site coexistence](../expressroute/expressroute-howto-coexist-resource-manager.md).
-## Extend back-end connectivity to spoke VNets and branch locations
+## Extend back-end connectivity to spoke virtual networks and branch locations
-### Spoke VNet connectivity by using VNet peering
+### Spoke virtual network connectivity by using virtual network peering
-Hub and spoke VNet architecture is widely used. The hub is a VNet in Azure that acts as a central point of connectivity between your spoke VNets and to your on-premises network. The spokes are VNets that peer with the hub, and which you can use to isolate workloads. Traffic flows between the on-premises datacenter and the hub through an ExpressRoute or VPN connection. For more information about the architecture, see [Implement a hub-spoke network topology in Azure][Hub-n-Spoke].
+Hub and spoke virtual network architecture is widely used. The hub is a virtual network in Azure that acts as a central point of connectivity between your spoke virtual networks and to your on-premises network. The spokes are virtual networks that peer with the hub, and which you can use to isolate workloads. Traffic flows between the on-premises datacenter and the hub through an ExpressRoute or VPN connection. For more information about the architecture, see [Implement a hub-spoke network topology in Azure](/azure/architecture/reference-architectures/hybrid-networking/hub-spoke).
-In VNet peering within a region, spoke VNets can use hub VNet gateways (both VPN and ExpressRoute gateways) to communicate with remote networks.
+In virtual network peering within a region, spoke virtual networks can use hub virtual network gateways (both VPN and ExpressRoute gateways) to communicate with remote networks.
-### Branch VNet connectivity by using site-to-site VPN
+### Branch virtual network connectivity by using site-to-site VPN
-You might want branch VNets, which are in different regions, and on-premises networks to communicate with each other via a hub VNet. The native Azure solution for this configuration is site-to-site VPN connectivity by using a VPN. An alternative is to use a network virtual appliance (NVA) for routing in the hub.
+You might want branch virtual networks, which are in different regions, and on-premises networks to communicate with each other via a hub virtual network. The native Azure solution for this configuration is site-to-site VPN connectivity by using a VPN. An alternative is to use a network virtual appliance (NVA) for routing in the hub.
-For more information, see [What is VPN Gateway?][VPN] and [Deploy a highly available NVA][Deploy-NVA].
+For more information, see [What is VPN Gateway?](../vpn-gateway/vpn-gateway-about-vpngateways.md) and [Deploy a highly available NVA](/azure/architecture/reference-architectures/dmz/nva-ha).
## Next steps
-Learn about [data plane analysis][Data-Analysis] of the test setup and Azure network monitoring feature views.
+Learn about [data plane analysis](./connectivty-interoperability-data-plane.md) of the test setup and Azure network monitoring feature views.
+
+See the [ExpressRoute FAQ](../expressroute/expressroute-faqs.md) to:
-See the [ExpressRoute FAQ][ExR-FAQ] to:
- Learn how many ExpressRoute circuits you can connect to an ExpressRoute gateway.+ - Learn how many ExpressRoute gateways you can connect to an ExpressRoute circuit.-- Learn about other scale limits of ExpressRoute.--
-<!--Image References-->
-[1]: ./media/backend-interoperability/hubview.png "Hub and spoke VNet perspective of the topology"
-[2]: ./media/backend-interoperability/loc1exrview.png "Location 1 and remote VNet perspective of the topology via ExpressRoute 1"
-[3]: ./media/backend-interoperability/loc1vpnview.png "Location 1 and branch VNet perspective of the topology via a site-to-site VPN"
-[4]: ./media/backend-interoperability/loc2view.png "Location 2 perspective of the topology"
-[5]: ./media/backend-interoperability/exr1-routetable.png "ExpressRoute 1 route table"
-
-<!--Link References-->
-[Setup]: ./connectivty-interoperability-preface.md
-[Configuration]: ./connectivty-interoperability-configuration.md
-[ExpressRoute]: ../expressroute/expressroute-introduction.md
-[VPN]: ../vpn-gateway/vpn-gateway-about-vpngateways.md
-[VNet]: ../virtual-network/tutorial-connect-virtual-networks-portal.md
-[Configuration]: ./connectivty-interoperability-configuration.md
-[Control-Analysis]:
-[Data-Analysis]: ./connectivty-interoperability-data-plane.md
-[ExR-FAQ]: ../expressroute/expressroute-faqs.md
-[S2S-Over-ExR]: ../expressroute/site-to-site-vpn-over-microsoft-peering.md
-[ExR-S2S-CoEx]: ../expressroute/expressroute-howto-coexist-resource-manager.md
-[Hub-n-Spoke]: /azure/architecture/reference-architectures/hybrid-networking/hub-spoke
-[Deploy-NVA]: /azure/architecture/reference-architectures/dmz/nva-ha
-[VNet-Config]: ../virtual-network/virtual-network-manage-peering.md
+
networking Connectivty Interoperability Data Plane https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/connectivty-interoperability-data-plane.md
Title: 'Interoperability in Azure : Data plane analysis'
+ Title: Interoperability in Azure - Data plane analysis
description: This article provides the data plane analysis of the test setup you can use to analyze interoperability between ExpressRoute, a site-to-site VPN, and virtual network peering in Azure.----+ - Previously updated : 10/18/2018-- Last updated : 03/24/2023+
-# Interoperability in Azure : Data plane analysis
+# Interoperability in Azure - Data plane analysis
-This article describes the data plane analysis of the [test setup][Setup]. You can also review the [test setup configuration][Configuration] and the [control plane analysis][Control-Analysis] of the test setup.
+This article describes the data plane analysis of the [test setup](./connectivty-interoperability-preface.md). You can also review the [test setup configuration](./connectivty-interoperability-configuration.md) and the [control plane analysis](./connectivty-interoperability-control-plane.md) of the test setup.
Data plane analysis examines the path taken by packets that traverse from one local network (LAN or virtual network) to another within a topology. The data path between two local networks isn't necessarily symmetrical. Therefore, in this article, we analyze a forwarding path from a local network to another network that's separate from the reverse path.
-## Data path from the hub VNet
+## Data path from the hub virtual network
-### Path to the spoke VNet
+### Path to the spoke virtual network
-Virtual network (VNet) peering emulates network bridge functionality between the two VNets that are peered. Traceroute output from a hub VNet to a VM in the spoke VNet is shown here:
+Virtual network peering emulates network bridge functionality between the two virtual networks that are peered. Traceroute output from a hub virtual network to a VM in the spoke virtual network is shown here:
```console
C:\Users\rb>tracert 10.11.30.4
Tracing route to 10.11.30.4 over a maximum of 30 hops
Trace complete.
```
-The following figure shows the graphical connection view of the hub VNet and the spoke VNet from the perspective of Azure Network Watcher:
+The following figure shows the graphical connection view of the hub virtual network and the spoke virtual network from the perspective of Azure Network Watcher:
-![1][1]
+### Path to the branch virtual network
-### Path to the branch VNet
+Traceroute output from a hub virtual network to a VM in the branch virtual network is shown here:
-Traceroute output from a hub VNet to a VM in the branch VNet is shown here:
```console
C:\Users\rb>tracert 10.11.30.68
Tracing route to 10.11.30.68 over a maximum of 30 hops
Trace complete.
```
-In this traceroute, the first hop is the VPN gateway in Azure VPN Gateway of the hub VNet. The second hop is the VPN gateway of the branch VNet. The IP address of the VPN gateway of the branch VNet isn't advertised in the hub VNet. The third hop is the VM on the branch VNet.
+In this traceroute, the first hop is the VPN gateway in Azure VPN Gateway of the hub virtual network. The second hop is the VPN gateway of the branch virtual network. The IP address of the VPN gateway of the branch virtual network isn't advertised in the hub virtual network. The third hop is the VM on the branch virtual network.
-The following figure shows the graphical connection view of the hub VNet and the branch VNet from the perspective of Network Watcher:
+The following figure shows the graphical connection view of the hub virtual network and the branch virtual network from the perspective of Network Watcher:
-![2][2]
For the same connection, the following figure shows the grid view in Network Watcher:
-![3][3]
### Path to on-premises Location 1
-Traceroute output from a hub VNet to a VM in on-premises Location 1 is shown here:
+Traceroute output from a hub virtual network to a VM in on-premises Location 1 is shown here:
```console
C:\Users\rb>tracert 10.2.30.10
Tracing route to 10.2.30.10 over a maximum of 30 hops
Trace complete.
```
-In this traceroute, the first hop is the Azure ExpressRoute gateway tunnel endpoint to a Microsoft Enterprise Edge Router (MSEE). The second and third hops are the customer edge (CE) router and the on-premises Location 1 LAN IPs. These IP addresses aren't advertised in the hub VNet. The fourth hop is the VM in the on-premises Location 1.
-
+In this traceroute, the first hop is the Azure ExpressRoute gateway tunnel endpoint to a Microsoft Enterprise edge router (MSEE). The second and third hops are the customer edge (CE) router and the on-premises Location 1 LAN IPs. These IP addresses aren't advertised in the hub virtual network. The fourth hop is the VM in the on-premises Location 1.
### Path to on-premises Location 2
-Traceroute output from a hub VNet to a VM in on-premises Location 2 is shown here:
+Traceroute output from a hub virtual network to a VM in on-premises Location 2 is shown here:
```console
C:\Users\rb>tracert 10.1.31.10
Tracing route to 10.1.31.10 over a maximum of 30 hops
Trace complete.
```
-In this traceroute, the first hop is the ExpressRoute gateway tunnel endpoint to an MSEE. The second and third hops are the CE router and the on-premises Location 2 LAN IPs. These IP addresses aren't advertised in the hub VNet. The fourth hop is the VM on the on-premises Location 2.
+In this traceroute, the first hop is the ExpressRoute gateway tunnel endpoint to an MSEE. The second and third hops are the CE router and the on-premises Location 2 LAN IPs. These IP addresses aren't advertised in the hub virtual network. The fourth hop is the VM on the on-premises Location 2.
-### Path to the remote VNet
+### Path to the remote virtual network
-Traceroute output from a hub VNet to a VM in the remote VNet is shown here:
+Traceroute output from a hub virtual network to a VM in the remote virtual network is shown here:
```console
C:\Users\rb>tracert 10.17.30.4
Tracing route to 10.17.30.4 over a maximum of 30 hops
Trace complete.
```
-In this traceroute, the first hop is the ExpressRoute gateway tunnel endpoint to an MSEE. The second hop is the remote VNet's gateway IP. The second hop IP range isn't advertised in the hub VNet. The third hop is the VM on the remote VNet.
+In this traceroute, the first hop is the ExpressRoute gateway tunnel endpoint to an MSEE. The second hop is the remote virtual network's gateway IP. The second hop IP range isn't advertised in the hub virtual network. The third hop is the VM on the remote virtual network.
-## Data path from the spoke VNet
+## Data path from the spoke virtual network
-The spoke VNet shares the network view of the hub VNet. Through VNet peering, the spoke VNet uses the remote gateway connectivity of the hub VNet as if it's directly connected to the spoke VNet.
+The spoke virtual network shares the network view of the hub virtual network. Through virtual network peering, the spoke virtual network uses the remote gateway connectivity of the hub virtual network as if it's directly connected to the spoke virtual network.
-### Path to the hub VNet
+### Path to the hub virtual network
-Traceroute output from the spoke VNet to a VM in the hub VNet is shown here:
+Traceroute output from the spoke virtual network to a VM in the hub virtual network is shown here:
```console
C:\Users\rb>tracert 10.10.30.4
Tracing route to 10.10.30.4 over a maximum of 30 hops
Trace complete.
```
-### Path to the branch VNet
+### Path to the branch virtual network
-Traceroute output from the spoke VNet to a VM in the branch VNet is shown here:
+Traceroute output from the spoke virtual network to a VM in the branch virtual network is shown here:
```console
C:\Users\rb>tracert 10.11.30.68
Tracing route to 10.11.30.68 over a maximum of 30 hops
Trace complete.
```
-In this traceroute, the first hop is the VPN gateway of the hub VNet. The second hop is the VPN gateway of the branch VNet. The IP address of the VPN gateway of the branch VNet isn't advertised within the hub/spoke VNet. The third hop is the VM on the branch VNet.
+In this traceroute, the first hop is the VPN gateway of the hub virtual network. The second hop is the VPN gateway of the branch virtual network. The IP address of the VPN gateway of the branch virtual network isn't advertised within the hub/spoke virtual network. The third hop is the VM on the branch virtual network.
### Path to on-premises Location 1
-Traceroute output from the spoke VNet to a VM in on-premises Location 1 is shown here:
+Traceroute output from the spoke virtual network to a VM in on-premises Location 1 is shown here:
```console
C:\Users\rb>tracert 10.2.30.10
Tracing route to 10.2.30.10 over a maximum of 30 hops
Trace complete.
```
-In this traceroute, the first hop is the hub VNet's ExpressRoute gateway tunnel endpoint to an MSEE. The second and third hops are the CE router and the on-premises Location 1 LAN IPs. These IP addresses aren't advertised in the hub/spoke VNet. The fourth hop is the VM in the on-premises Location 1.
+In this traceroute, the first hop is the hub virtual network's ExpressRoute gateway tunnel endpoint to an MSEE. The second and third hops are the CE router and the on-premises Location 1 LAN IPs. These IP addresses aren't advertised in the hub/spoke virtual network. The fourth hop is the VM in the on-premises Location 1.
### Path to on-premises Location 2
-Traceroute output from the spoke VNet to a VM in on-premises Location 2 is shown here:
+Traceroute output from the spoke virtual network to a VM in on-premises Location 2 is shown here:
```console
C:\Users\rb>tracert 10.1.31.10
Tracing route to 10.1.31.10 over a maximum of 30 hops
Trace complete.
```
-In this traceroute, the first hop is the hub VNet's ExpressRoute gateway tunnel endpoint to an MSEE. The second and third hops are the CE router and the on-premises Location 2 LAN IPs. These IP addresses aren't advertised in the hub/spoke VNets. The fourth hop is the VM in the on-premises Location 2.
+In this traceroute, the first hop is the hub virtual network's ExpressRoute gateway tunnel endpoint to an MSEE. The second and third hops are the CE router and the on-premises Location 2 LAN IPs. These IP addresses aren't advertised in the hub/spoke virtual networks. The fourth hop is the VM in the on-premises Location 2.
-### Path to the remote VNet
+### Path to the remote virtual network
-Traceroute output from the spoke VNet to a VM in the remote VNet is shown here:
+Traceroute output from the spoke virtual network to a VM in the remote virtual network is shown here:
```console
C:\Users\rb>tracert 10.17.30.4
Tracing route to 10.17.30.4 over a maximum of 30 hops
Trace complete.
```
-In this traceroute, the first hop is the hub VNet's ExpressRoute gateway tunnel endpoint to an MSEE. The second hop is the remote VNet's gateway IP. The second hop IP range isn't advertised in the hub/spoke VNet. The third hop is the VM on the remote VNet.
+In this traceroute, the first hop is the hub virtual network's ExpressRoute gateway tunnel endpoint to an MSEE. The second hop is the remote virtual network's gateway IP. The second hop IP range isn't advertised in the hub/spoke virtual network. The third hop is the VM on the remote virtual network.
-## Data path from the branch VNet
+## Data path from the branch virtual network
-### Path to the hub VNet
+### Path to the hub virtual network
-Traceroute output from the branch VNet to a VM in the hub VNet is shown here:
+Traceroute output from the branch virtual network to a VM in the hub virtual network is shown here:
```console
C:\Windows\system32>tracert 10.10.30.4
Tracing route to 10.10.30.4 over a maximum of 30 hops
Trace complete.
```
-In this traceroute, the first hop is the VPN gateway of the branch VNet. The second hop is the VPN gateway of the hub VNet. The IP address of the VPN gateway of the hub VNet isn't advertised in the remote VNet. The third hop is the VM on the hub VNet.
+In this traceroute, the first hop is the VPN gateway of the branch virtual network. The second hop is the VPN gateway of the hub virtual network. The IP address of the VPN gateway of the hub virtual network isn't advertised in the branch virtual network. The third hop is the VM on the hub virtual network.
-### Path to the spoke VNet
+### Path to the spoke virtual network
-Traceroute output from the branch VNet to a VM in the spoke VNet is shown here:
+Traceroute output from the branch virtual network to a VM in the spoke virtual network is shown here:
```console
C:\Users\rb>tracert 10.11.30.4
Tracing route to 10.11.30.4 over a maximum of 30 hops
Trace complete.
```
-In this traceroute, the first hop is the VPN gateway of the branch VNet. The second hop is the VPN gateway of the hub VNet. The IP address of the VPN gateway of the hub VNet isn't advertised in the remote VNet. The third hop is the VM on the spoke VNet.
+In this traceroute, the first hop is the VPN gateway of the branch virtual network. The second hop is the VPN gateway of the hub virtual network. The IP address of the VPN gateway of the hub virtual network isn't advertised in the branch virtual network. The third hop is the VM on the spoke virtual network.
### Path to on-premises Location 1
-Traceroute output from the branch VNet to a VM in on-premises Location 1 is shown here:
+Traceroute output from the branch virtual network to a VM in on-premises Location 1 is shown here:
```console
C:\Users\rb>tracert 10.2.30.10
Tracing route to 10.2.30.10 over a maximum of 30 hops
Trace complete.
```
-In this traceroute, the first hop is the VPN gateway of the branch VNet. The second hop is the VPN gateway of the hub VNet. The IP address of the VPN gateway of the hub VNet isn't advertised in the remote VNet. The third hop is the VPN tunnel termination point on the primary CE router. The fourth hop is an internal IP address of on-premises Location 1. This LAN IP address isn't advertised outside the CE router. The fifth hop is the destination VM in the on-premises Location 1.
+In this traceroute, the first hop is the VPN gateway of the branch virtual network. The second hop is the VPN gateway of the hub virtual network. The IP address of the VPN gateway of the hub virtual network isn't advertised in the branch virtual network. The third hop is the VPN tunnel termination point on the primary CE router. The fourth hop is an internal IP address of on-premises Location 1. This LAN IP address isn't advertised outside the CE router. The fifth hop is the destination VM in the on-premises Location 1.
-### Path to on-premises Location 2 and the remote VNet
+### Path to on-premises Location 2 and the remote virtual network
-As we discussed in the control plane analysis, the branch VNet has no visibility either to on-premises Location 2 or to the remote VNet per the network configuration. The following ping results confirm:
+As we discussed in the control plane analysis, the branch virtual network has no visibility either to on-premises Location 2 or to the remote virtual network per the network configuration. The following ping results confirm this:
```console
C:\Users\rb>ping 10.1.31.10
Ping statistics for 10.17.30.4:
```
## Data path from on-premises Location 1
-### Path to the hub VNet
+### Path to the hub virtual network
-Traceroute output from on-premises Location 1 to a VM in the hub VNet is shown here:
+Traceroute output from on-premises Location 1 to a VM in the hub virtual network is shown here:
```console
C:\Users\rb>tracert 10.10.30.4
Tracing route to 10.10.30.4 over a maximum of 30 hops
Trace complete.
```
-In this traceroute, the first two hops are part of the on-premises network. The third hop is the primary MSEE interface that faces the CE router. The fourth hop is the ExpressRoute gateway of the hub VNet. The IP range of the ExpressRoute gateway of the hub VNet isn't advertised to the on-premises network. The fifth hop is the destination VM.
+In this traceroute, the first two hops are part of the on-premises network. The third hop is the primary MSEE interface that faces the CE router. The fourth hop is the ExpressRoute gateway of the hub virtual network. The IP range of the ExpressRoute gateway of the hub virtual network isn't advertised to the on-premises network. The fifth hop is the destination VM.
Network Watcher provides only an Azure-centric view. For an on-premises perspective, we use Azure Network Performance Monitor. Network Performance Monitor provides agents that you can install on servers in networks outside Azure for data path analysis.
-The following figure shows the topology view of the on-premises Location 1 VM connectivity to the VM on the hub VNet via ExpressRoute:
+The following figure shows the topology view of the on-premises Location 1 VM connectivity to the VM on the hub virtual network via ExpressRoute:
-![4][4]
-As discussed earlier, the test setup uses a site-to-site VPN as backup connectivity for ExpressRoute between the on-premises Location 1 and the hub VNet. To test the backup data path, let's induce an ExpressRoute link failure between the on-premises Location 1 primary CE router and the corresponding MSEE. To induce an ExpressRoute link failure, shut down the CE interface that faces the MSEE:
+As discussed earlier, the test setup uses a site-to-site VPN as backup connectivity for ExpressRoute between the on-premises Location 1 and the hub virtual network. To test the backup data path, let's induce an ExpressRoute link failure between the on-premises Location 1 primary CE router and the corresponding MSEE. To induce an ExpressRoute link failure, shut down the CE interface that faces the MSEE:
```console
C:\Users\rb>tracert 10.10.30.4
Tracing route to 10.10.30.4 over a maximum of 30 hops
Trace complete.
```
-The following figure shows the topology view of the on-premises Location 1 VM connectivity to the VM on the hub VNet via site-to-site VPN connectivity when ExpressRoute connectivity is down:
+The following figure shows the topology view of the on-premises Location 1 VM's connectivity to the VM on the hub virtual network. This connection uses site-to-site VPN connectivity while ExpressRoute connectivity is down:
-![5][5]
-### Path to the spoke VNet
+### Path to the spoke virtual network
-Traceroute output from on-premises Location 1 to a VM in the spoke VNet is shown here:
+Traceroute output from on-premises Location 1 to a VM in the spoke virtual network is shown here:
-Let's bring back the ExpressRoute primary connectivity to do the data path analysis toward the spoke VNet:
+Let's bring back the ExpressRoute primary connectivity to do the data path analysis toward the spoke virtual network:
```console
C:\Users\rb>tracert 10.11.30.4
Trace complete.
```
Bring up the primary ExpressRoute 1 connectivity for the remainder of the data path analysis.
-### Path to the branch VNet
+### Path to the branch virtual network
-Traceroute output from on-premises Location 1 to a VM in the branch VNet is shown here:
+Traceroute output from on-premises Location 1 to a VM in the branch virtual network is shown here:
```console
C:\Users\rb>tracert 10.11.30.68
Trace complete.
```
### Path to on-premises Location 2
-As we discuss in the [control plane analysis][Control-Analysis], the on-premises Location 1 has no visibility to on-premises Location 2 per the network configuration. The following ping results confirm:
+As we discuss in the [control plane analysis](./connectivty-interoperability-control-plane.md), the on-premises Location 1 has no visibility to on-premises Location 2 per the network configuration. The following ping results confirm this:
```console
C:\Users\rb>ping 10.1.31.10
Ping statistics for 10.1.31.10:
    Packets: Sent = 4, Received = 0, Lost = 4 (100% loss),
```
-### Path to the remote VNet
+### Path to the remote virtual network
-Traceroute output from on-premises Location 1 to a VM in the remote VNet is shown here:
+Traceroute output from on-premises Location 1 to a VM in the remote virtual network is shown here:
```console
C:\Users\rb>tracert 10.17.30.4
Trace complete.
```
## Data path from on-premises Location 2
-### Path to the hub VNet
+### Path to the hub virtual network
-Traceroute output from on-premises Location 2 to a VM in the hub VNet is shown here:
+Traceroute output from on-premises Location 2 to a VM in the hub virtual network is shown here:
```console
C:\Windows\system32>tracert 10.10.30.4
Tracing route to 10.10.30.4 over a maximum of 30 hops
Trace complete.
```
-### Path to the spoke VNet
+### Path to the spoke virtual network
-Traceroute output from on-premises Location 2 to a VM in the spoke VNet is shown here:
+Traceroute output from on-premises Location 2 to a VM in the spoke virtual network is shown here:
```console
C:\Windows\system32>tracert 10.11.30.4
Tracing route to 10.11.30.4 over a maximum of 30 hops
Trace complete.
```
-### Path to the branch VNet, on-premises Location 1, and the remote VNet
+### Path to the branch virtual network, on-premises Location 1, and the remote virtual network
-As we discuss in the [control plane analysis][Control-Analysis], the on-premises Location 1 has no visibility to the branch VNet, to on-premises Location 1, or to the remote VNet per the network configuration.
+As we discuss in the [control plane analysis](./connectivty-interoperability-control-plane.md), the on-premises Location 2 has no visibility to the branch virtual network, to on-premises Location 1, or to the remote virtual network per the network configuration.
-## Data path from the remote VNet
+## Data path from the remote virtual network
-### Path to the hub VNet
+### Path to the hub virtual network
-Traceroute output from the remote VNet to a VM in the hub VNet is shown here:
+Traceroute output from the remote virtual network to a VM in the hub virtual network is shown here:
```console
C:\Users\rb>tracert 10.10.30.4
Tracing route to 10.10.30.4 over a maximum of 30 hops
Trace complete.
```
-### Path to the spoke VNet
+### Path to the spoke virtual network
-Traceroute output from the remote VNet to a VM in the spoke VNet is shown here:
+Traceroute output from the remote virtual network to a VM in the spoke virtual network is shown here:
```console
C:\Users\rb>tracert 10.11.30.4
Tracing route to 10.11.30.4 over a maximum of 30 hops
Trace complete.
```
-### Path to the branch VNet and on-premises Location 2
+### Path to the branch virtual network and on-premises Location 2
-As we discuss in the [control plane analysis][Control-Analysis], the remote VNet has no visibility to the branch VNet or to on-premises Location 2 per the network configuration.
+As we discuss in the [control plane analysis](./connectivty-interoperability-control-plane.md), the remote virtual network has no visibility to the branch virtual network or to on-premises Location 2 per the network configuration.
### Path to on-premises Location 1
-Traceroute output from the remote VNet to a VM in on-premises Location 1 is shown here:
+Traceroute output from the remote virtual network to a VM in on-premises Location 1 is shown here:
```console
C:\Users\rb>tracert 10.2.30.10
Trace complete.
```
### Site-to-site VPN over ExpressRoute
-You can configure a site-to-site VPN by using ExpressRoute Microsoft peering to privately exchange data between your on-premises network and your Azure VNets. With this configuration, you can exchange data with confidentiality, authenticity, and integrity. The data exchange also is anti-replay. For more information about how to configure a site-to-site IPsec VPN in tunnel mode by using ExpressRoute Microsoft peering, see [Site-to-site VPN over ExpressRoute Microsoft peering][S2S-Over-ExR].
+You can configure a site-to-site VPN by using ExpressRoute Microsoft peering to privately exchange data between your on-premises network and your Azure virtual networks. With this configuration, you can exchange data with confidentiality, authenticity, and integrity. The data exchange is also protected against replay. For more information about how to configure a site-to-site IPsec VPN in tunnel mode by using ExpressRoute Microsoft peering, see [Site-to-site VPN over ExpressRoute Microsoft peering](../expressroute/site-to-site-vpn-over-microsoft-peering.md).
The primary limitation of configuring a site-to-site VPN that uses Microsoft peering is throughput. Throughput over the IPsec tunnel is limited by the VPN gateway capacity. The VPN gateway throughput is lower than ExpressRoute throughput. In this scenario, using the IPsec tunnel for highly secure traffic and using private peering for all other traffic helps optimize the ExpressRoute bandwidth utilization.
ExpressRoute serves as a redundant circuit pair to ensure high availability. You can configure geo-redundant ExpressRoute connectivity in different Azure regions. Also, as demonstrated in our test setup, within an Azure region, you can use a site-to-site VPN to create a failover path for your ExpressRoute connectivity. When the same prefixes are advertised over both ExpressRoute and a site-to-site VPN, Azure prioritizes ExpressRoute. To avoid asymmetrical routing between ExpressRoute and the site-to-site VPN, on-premises network configuration should also reciprocate by using ExpressRoute connectivity before it uses site-to-site VPN connectivity.
-For more information about how to configure coexisting connections for ExpressRoute and a site-to-site VPN, see [ExpressRoute and site-to-site coexistence][ExR-S2S-CoEx].
-
-## Extend back-end connectivity to spoke VNets and branch locations
+For more information about how to configure coexisting connections for ExpressRoute and a site-to-site VPN, see [ExpressRoute and site-to-site coexistence](../expressroute/expressroute-howto-coexist-resource-manager.md).
-### Spoke VNet connectivity by using VNet peering
+## Extend back-end connectivity to spoke virtual networks and branch locations
-Hub and spoke VNet architecture is widely used. The hub is a VNet in Azure that acts as a central point of connectivity between your spoke VNets and to your on-premises network. The spokes are VNets that peer with the hub, and which you can use to isolate workloads. Traffic flows between the on-premises datacenter and the hub through an ExpressRoute or VPN connection. For more information about the architecture, see [Implement a hub-spoke network topology in Azure][Hub-n-Spoke].
+### Spoke virtual network connectivity by using virtual network peering
-In VNet peering within a region, spoke VNets can use hub VNet gateways (both VPN and ExpressRoute gateways) to communicate with remote networks.
+Hub and spoke virtual network architecture is widely used. The hub is a virtual network in Azure that acts as a central point of connectivity between your spoke virtual networks and to your on-premises network. The spokes are virtual networks that peer with the hub, and which you can use to isolate workloads. Traffic flows between the on-premises datacenter and the hub through an ExpressRoute or VPN connection. For more information about the architecture, see [Implement a hub-spoke network topology in Azure](/azure/architecture/reference-architectures/hybrid-networking/hub-spoke).
-### Branch VNet connectivity by using site-to-site VPN
+In virtual network peering within a region, spoke virtual networks can use hub virtual network gateways (both VPN and ExpressRoute gateways) to communicate with remote networks.
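As a hedged illustration of that gateway-transit arrangement (a sketch only, not part of the documented test setup; every resource name here is a placeholder), the two sides of the peering could be created like this:

```azurecli
# Hub side: allow the peered spoke to use this virtual network's gateways.
az network vnet peering create --name "HubToSpoke" \
  --resource-group "rgName" --vnet-name "HubVNet" --remote-vnet "SpokeVNet" \
  --allow-vnet-access --allow-gateway-transit

# Spoke side: use the hub's remote gateways instead of deploying one locally.
az network vnet peering create --name "SpokeToHub" \
  --resource-group "rgName" --vnet-name "SpokeVNet" --remote-vnet "HubVNet" \
  --allow-vnet-access --use-remote-gateways
```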
-You might want branch VNets, which are in different regions, and on-premises networks to communicate with each other via a hub VNet. The native Azure solution for this configuration is site-to-site VPN connectivity by using a VPN. An alternative is to use a network virtual appliance (NVA) for routing in the hub.
+### Branch virtual network connectivity by using site-to-site VPN
-For more information, see [What is VPN Gateway?][VPN] and [Deploy a highly available NVA][Deploy-NVA].
+You might want branch virtual networks in different regions and on-premises networks to communicate with each other via a hub virtual network. The native Azure solution for this configuration is site-to-site VPN connectivity by using a VPN gateway in the hub. An alternative is to use a network virtual appliance (NVA) for routing in the hub.
+For more information, see [What is VPN Gateway?](../vpn-gateway/vpn-gateway-about-vpngateways.md) and [Deploy a highly available NVA](/azure/architecture/reference-architectures/dmz/nva-ha).
## Next steps
-See the [ExpressRoute FAQ][ExR-FAQ] to:
+See the [ExpressRoute FAQ](../expressroute/expressroute-faqs.md) to:
- Learn how many ExpressRoute circuits you can connect to an ExpressRoute gateway.
- Learn how many ExpressRoute gateways you can connect to an ExpressRoute circuit.
- Learn about other scale limits of ExpressRoute.
-<!--Image References-->
-[1]: ./media/backend-interoperability/HubVM-SpkVM.jpg "Network Watcher view of connectivity from a hub VNet to a spoke VNet"
-[2]: ./media/backend-interoperability/HubVM-BranchVM.jpg "Network Watcher view of connectivity from a hub VNet to a branch VNet"
-[3]: ./media/backend-interoperability/HubVM-BranchVM-Grid.jpg "Network Watcher grid view of connectivity from a hub VNet to a branch VNet"
-[4]: ./media/backend-interoperability/Loc1-HubVM.jpg "Network Performance Monitor view of connectivity from the Location 1 VM to the hub VNet via ExpressRoute 1"
-[5]: ./media/backend-interoperability/Loc1-HubVM-S2S.jpg "Network Performance Monitor view of connectivity from the Location 1 VM to the hub VNet via a site-to-site VPN"
-
-<!--Link References-->
-[Setup]: ./connectivty-interoperability-preface.md
-[Configuration]: ./connectivty-interoperability-configuration.md
-[ExpressRoute]: ../expressroute/expressroute-introduction.md
-[VPN]: ../vpn-gateway/vpn-gateway-about-vpngateways.md
-[VNet]: ../virtual-network/tutorial-connect-virtual-networks-portal.md
-[Configuration]: ./connectivty-interoperability-configuration.md
-[Control-Analysis]: ./connectivty-interoperability-control-plane.md
-[ExR-FAQ]: ../expressroute/expressroute-faqs.md
-[S2S-Over-ExR]: ../expressroute/site-to-site-vpn-over-microsoft-peering.md
-[ExR-S2S-CoEx]: ../expressroute/expressroute-howto-coexist-resource-manager.md
-[Hub-n-Spoke]: /azure/architecture/reference-architectures/hybrid-networking/hub-spoke
-[Deploy-NVA]: /azure/architecture/reference-architectures/dmz/nva-ha
-[VNet-Config]: ../virtual-network/virtual-network-manage-peering.md
-[ExR-FAQ]: ../expressroute/expressroute-faqs.md
+
networking Connectivty Interoperability Preface https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/connectivty-interoperability-preface.md
Title: 'Interoperability in Azure : Test setup | Microsoft Docs'
+ Title: Interoperability in Azure - Test setup
description: This article describes a test setup you can use to analyze interoperability between ExpressRoute, a site-to-site VPN, and virtual network peering in Azure.----+ - Previously updated : 10/18/2018-- Last updated : 03/26/2023+
-# Interoperability in Azure : Test setup
+# Interoperability in Azure - Test setup
This article describes a test setup you can use to analyze how Azure networking services interoperate at the control plane level and data plane level. Let's look briefly at the Azure networking components:

-- **Azure ExpressRoute**: Use private peering in Azure ExpressRoute to directly connect private IP spaces in your on-premises network to your Azure Virtual Network deployments. ExpressRoute can help you achieve higher bandwidth and a private connection. Many ExpressRoute eco partners offer ExpressRoute connectivity with SLAs. To learn more about ExpressRoute and to learn how to configure ExpressRoute, see [Introduction to ExpressRoute][ExpressRoute].
-- **Site-to-site VPN**: You can use Azure VPN Gateway as a site-to-site VPN to securely connect an on-premises network to Azure over the internet or by using ExpressRoute. To learn how to configure a site-to-site VPN to connect to Azure, see [Configure VPN Gateway][VPN].
-- **VNet peering**: Use virtual network (VNet) peering to establish connectivity between VNets in Azure Virtual Network. To learn more about VNet peering, see the [tutorial on VNet peering][VNet].
+- **Azure ExpressRoute**: Use private peering in Azure ExpressRoute to directly connect private IP spaces in your on-premises network to your Azure Virtual Network deployments. ExpressRoute can help you achieve higher bandwidth and a private connection. Many ExpressRoute eco partners offer ExpressRoute connectivity with SLAs. To learn more about ExpressRoute and to learn how to configure ExpressRoute, see [Introduction to ExpressRoute](../expressroute/expressroute-introduction.md).
+
+- **Site-to-site VPN**: You can use Azure VPN Gateway as a site-to-site VPN to securely connect an on-premises network to Azure over the internet or by using ExpressRoute. To learn how to configure a site-to-site VPN to connect to Azure, see [Configure VPN Gateway](../vpn-gateway/vpn-gateway-about-vpngateways.md).
+
+- **Virtual network peering**: Use virtual network peering to establish connectivity between virtual networks in Azure. For more information about virtual network peering, see [Tutorial: Connect virtual networks with VNet peering - Azure portal](../virtual-network/tutorial-connect-virtual-networks-portal.md).
## Test setup

The following figure illustrates the test setup:
-![1][1]
+
+The centerpiece of the test setup is the hub virtual network in Azure Region 1. The hub virtual network is connected to different networks in the following ways:
+
+- The hub virtual network is connected to the spoke virtual network by using virtual network peering. The spoke virtual network has remote access to both gateways in the hub virtual network.
+
+- The hub virtual network is connected to the branch virtual network by using site-to-site VPN. The connectivity uses eBGP to exchange routes.
+
+- The hub virtual network is connected to the on-premises Location 1 network by using ExpressRoute private peering as the primary path. It uses site-to-site VPN connectivity as the backup path. In the rest of this article, we refer to this ExpressRoute circuit as ExpressRoute 1. By default, ExpressRoute circuits provide redundant connectivity for high availability. On ExpressRoute 1, the secondary customer edge (CE) router's subinterface that faces the secondary Microsoft Enterprise edge router (MSEE) is disabled. A red line over the double-line arrow in the preceding figure represents the disabled CE router subinterface.
-The centerpiece of the test setup is the hub VNet in Azure Region 1. The hub VNet is connected to different networks in the following ways:
+- The hub virtual network is connected to the on-premises Location 2 network by using another ExpressRoute private peering. In the rest of this article, we refer to this second ExpressRoute circuit as ExpressRoute 2.
-- The hub VNet is connected to the spoke VNet by using VNet peering. The spoke VNet has remote access to both gateways in the hub VNet.
-- The hub VNet is connected to the branch VNet by using site-to-site VPN. The connectivity uses eBGP to exchange routes.
-- The hub VNet is connected to the on-premises Location 1 network by using ExpressRoute private peering as the primary path. It uses site-to-site VPN connectivity as the backup path. In the rest of this article, we refer to this ExpressRoute circuit as ExpressRoute 1. By default, ExpressRoute circuits provide redundant connectivity for high availability. On ExpressRoute 1, the secondary customer edge (CE) router's subinterface that faces the secondary Microsoft Enterprise Edge Router (MSEE) is disabled. A red line over the double-line arrow in the preceding figure represents the disabled CE router subinterface.
-- The hub VNet is connected to the on-premises Location 2 network by using another ExpressRoute private peering. In the rest of this article, we refer to this second ExpressRoute circuit as ExpressRoute 2.
-- ExpressRoute 1 also connects both the hub VNet and the on-premises Location 1 network to a remote VNet in Azure Region 2.
+- ExpressRoute 1 also connects both the hub virtual network and the on-premises Location 1 network to a remote virtual network in Azure Region 2.
## ExpressRoute and site-to-site VPN connectivity in tandem

### Site-to-site VPN over ExpressRoute
-You can configure a site-to-site VPN by using ExpressRoute Microsoft peering to privately exchange data between your on-premises network and your Azure VNets. With this configuration, you can exchange data with confidentiality, authenticity, and integrity. The data exchange also is anti-replay. For more information about how to configure a site-to-site IPsec VPN in tunnel mode by using ExpressRoute Microsoft peering, see [Site-to-site VPN over ExpressRoute Microsoft peering][S2S-Over-ExR].
+You can configure a site-to-site VPN by using ExpressRoute Microsoft peering to privately exchange data between your on-premises network and your Azure virtual networks. With this configuration, you can exchange data with confidentiality, authenticity, and integrity. The data exchange is also protected against replay. For more information about how to configure a site-to-site IPsec VPN in tunnel mode by using ExpressRoute Microsoft peering, see [Site-to-site VPN over ExpressRoute Microsoft peering](../expressroute/site-to-site-vpn-over-microsoft-peering.md).
The primary limitation of configuring a site-to-site VPN that uses Microsoft peering is throughput. Throughput over the IPsec tunnel is limited by the VPN gateway capacity. The VPN gateway throughput is lower than ExpressRoute throughput. In this scenario, using the IPsec tunnel for highly secure traffic and using private peering for all other traffic helps optimize the ExpressRoute bandwidth utilization.
ExpressRoute serves as a redundant circuit pair to ensure high availability. You can configure geo-redundant ExpressRoute connectivity in different Azure regions. Also, as demonstrated in our test setup, within an Azure region, you can use a site-to-site VPN to create a failover path for your ExpressRoute connectivity. When the same prefixes are advertised over both ExpressRoute and a site-to-site VPN, Azure prioritizes ExpressRoute. To avoid asymmetrical routing between ExpressRoute and the site-to-site VPN, on-premises network configuration should also reciprocate by using ExpressRoute connectivity before it uses site-to-site VPN connectivity.
-For more information about how to configure coexisting connections for ExpressRoute and a site-to-site VPN, see [ExpressRoute and site-to-site coexistence][ExR-S2S-CoEx].
+For more information about how to configure coexisting connections for ExpressRoute and a site-to-site VPN, see [ExpressRoute and site-to-site coexistence](../expressroute/expressroute-howto-coexist-resource-manager.md).
-## Extend back-end connectivity to spoke VNets and branch locations
+## Extend back-end connectivity to spoke virtual networks and branch locations
-### Spoke VNet connectivity by using VNet peering
+### Spoke virtual network connectivity by using virtual network peering
-Hub and spoke VNet architecture is widely used. The hub is a VNet in Azure that acts as a central point of connectivity between your spoke VNets and to your on-premises network. The spokes are VNets that peer with the hub, and which you can use to isolate workloads. Traffic flows between the on-premises datacenter and the hub through an ExpressRoute or VPN connection. For more information about the architecture, see [Implement a hub-spoke network topology in Azure][Hub-n-Spoke].
+Hub and spoke virtual network architecture is widely used. The hub is a virtual network in Azure that acts as a central point of connectivity between your spoke virtual networks and to your on-premises network. The spokes are virtual networks that peer with the hub, and which you can use to isolate workloads. Traffic flows between the on-premises datacenter and the hub through an ExpressRoute or VPN connection. For more information about the architecture, see [Implement a hub-spoke network topology in Azure](/azure/architecture/reference-architectures/hybrid-networking/hub-spoke).
-In VNet peering within a region, spoke VNets can use hub VNet gateways (both VPN and ExpressRoute gateways) to communicate with remote networks.
+In virtual network peering within a region, spoke virtual networks can use hub virtual network gateways (both VPN and ExpressRoute gateways) to communicate with remote networks.
-### Branch VNet connectivity by using site-to-site VPN
+### Branch virtual network connectivity by using site-to-site VPN
-You might want branch VNets, which are in different regions, and on-premises networks to communicate with each other via a hub VNet. The native Azure solution for this configuration is site-to-site VPN connectivity by using a VPN. An alternative is to use a network virtual appliance (NVA) for routing in the hub.
+You might want branch virtual networks in different regions and on-premises networks to communicate with each other via a hub virtual network. The native Azure solution for this configuration is site-to-site VPN connectivity by using a VPN gateway in the hub. An alternative is to use a network virtual appliance (NVA) for routing in the hub.
-For more information, see [What is VPN Gateway?][VPN] and [Deploy a highly available NVA][Deploy-NVA].
+For more information, see [What is VPN Gateway?](../vpn-gateway/vpn-gateway-about-vpngateways.md) and [Deploy a highly available NVA](/azure/architecture/reference-architectures/dmz/nva-ha).
## Next steps
-Learn about [configuration details][Configuration] for the test topology.
+Learn about [configuration details](connectivty-interoperability-configuration.md) for the test topology.
-Learn about [control plane analysis][Control-Analysis] of the test setup and the views of different VNets or VLANs in the topology.
+Learn about [control plane analysis](connectivty-interoperability-control-plane.md) of the test setup and the views of different virtual networks or VLANs in the topology.
-Learn about the [data plane analysis][Data-Analysis] of the test setup and Azure network monitoring feature views.
+Learn about the [data plane analysis](connectivty-interoperability-data-plane.md) of the test setup and Azure network monitoring feature views.
-See the [ExpressRoute FAQ][ExR-FAQ] to:
-- Learn how many ExpressRoute circuits you can connect to an ExpressRoute gateway.
-- Learn how many ExpressRoute gateways you can connect to an ExpressRoute circuit.
-- Learn about other scale limits of ExpressRoute.
+See the [ExpressRoute FAQ](../expressroute/expressroute-faqs.md) to:
+- Learn how many ExpressRoute circuits you can connect to an ExpressRoute gateway.
-<!--Image References-->
-[1]: ./media/backend-interoperability/TestSetup.png "Diagram of the test topology"
+- Learn how many ExpressRoute gateways you can connect to an ExpressRoute circuit.
-<!--Link References-->
-[ExpressRoute]: ../expressroute/expressroute-introduction.md
-[VPN]: ../vpn-gateway/vpn-gateway-about-vpngateways.md
-[VNet]: ../virtual-network/tutorial-connect-virtual-networks-portal.md
-[Configuration]: connectivty-interoperability-configuration.md
-[Control-Analysis]: connectivty-interoperability-control-plane.md
-[Data-Analysis]: connectivty-interoperability-data-plane.md
-[ExR-FAQ]: ../expressroute/expressroute-faqs.md
-[S2S-Over-ExR]: ../expressroute/site-to-site-vpn-over-microsoft-peering.md
-[ExR-S2S-CoEx]: ../expressroute/expressroute-howto-coexist-resource-manager.md
-[Hub-n-Spoke]: /azure/architecture/reference-architectures/hybrid-networking/hub-spoke
-[Deploy-NVA]: /azure/architecture/reference-architectures/dmz/nva-ha
+- Learn about other scale limits of ExpressRoute.
openshift Howto Custom Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-custom-dns.md
Title: Configure custom DNS resources in an Azure Red Hat OpenShift (ARO) cluster
-description: Discover how to add a custom DNS server on all of your nodes in Azure Red Hat OpenShift (ARO).
+ Title: Configure a custom DNS resolver in an Azure Red Hat OpenShift (ARO) cluster
+description: Discover how to add a custom DNS resolver on all of your nodes in Azure Red Hat OpenShift (ARO).
Last updated 06/02/2021 #Customer intent: As an operator or developer, I need a custom DNS configured for an Azure Red Hat OpenShift cluster
-# Configure custom DNS for your Azure Red Hat OpenShift (ARO) cluster
+# Configure a custom DNS resolver for your Azure Red Hat OpenShift (ARO) cluster
This article provides the necessary details that allow you to configure your Azure Red Hat OpenShift cluster (ARO) to use a custom DNS server. It contains the cluster requirements for a basic ARO deployment.
Create the worker restart file; this example calls the file `worker-restarts.yml`:
```console
machineconfig.machineconfiguration.openshift.io/25-machineconfig-worker-reboot created
```
-The MCO will move workloads and then reboot each node one at a time. Once the workers have come back online, we will follow the same procedure to reboot the master nodes. You can verify the status of the workers by querying the nodes and validate they are all in the `Ready` state.
+The MCO moves workloads and then reboots each node one at a time. Once the workers have come back online, we'll follow the same procedure to reboot the master nodes. You can verify the status of the workers by querying the nodes and validating that they're all in the `Ready` state, as shown in the sketch after the following note.
> [!NOTE]
> Depending on the size of the workload the cluster has, it can take several minutes for each node to reboot.
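For example, a quick check with the `oc` CLI (a sketch, assuming your kubeconfig targets the cluster) could look like this:

```console
# Watch the MachineConfigPool roll the reboot out across the workers.
oc get machineconfigpool worker

# Confirm every worker reports a Ready status before proceeding to the masters.
oc get nodes -l node-role.kubernetes.io/worker
```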
operator-nexus Howto Baremetal Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-baremetal-functions.md
This article describes how to perform lifecycle management operations on Bare Metal Machines (BMM):
- Start the BMM
- Make the BMM unschedulable or schedulable
- Reinstall the BMM image
+- Replace BMM
## Prerequisites
state on the BMM are `restarted` when the BMM is `uncordoned`.
## Reimage a BMM (reinstall a BMM image)
-The existing BMM image can be **reinstalled** using the `reimage` command but will not install a new image.
+An existing BMM image is **reinstalled** using the `reimage` command. This command doesn't install a new image.
Make sure the BMM's workloads are drained using the [`cordon`](#make-a-bmm-unschedulable-cordon) command, with `evacuate "True"`, prior to executing the `reimage` command.
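For reference, that drain step typically looks like the following sketch (resource names are placeholders):

```azurecli
az networkcloud baremetalmachine cordon --evacuate "True" \
  --name "bareMetalMachineName" \
  --resource-group "resourceGroupName"
```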
```azurecli
az networkcloud baremetalmachine reimage --name "bareMetalMachineName" \
```

The reimage command restarts the BMM and uncordons it. The reimaged BMM will have an IP address. You can start deploying workloads on the reimaged BMM.
+## Replace BMM
+
+Use the `replace` command whenever a bare metal machine encounters hardware issues that require a complete or partial hardware replacement. After the replacement, the MAC address of the bare metal host changes; however, the iDRAC IP address and hostname remain the same.
+
+```azurecli
+az networkcloud baremetalmachine replace --name "bareMetalMachineName" \
+ --bmc-credentials password="{password}" username="bmcuser" --bmc-mac-address "00:00:4f:00:57:ad" \
+ --boot-mac-address "00:00:4e:00:58:af" --machine-name "name" --serial-number "BM1219XXX" \
+ --resource-group "resourceGroupName"
+```
operator-nexus Howto Baremetal Review Read Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-baremetal-review-read-output.md
+
+ Title: How to view the output of an `az networkcloud run-read-command` in the Operator Nexus Cluster Manager Storage account
+description: Step by step guide on locating the output of a `az networkcloud run-read-command` in the Cluster Manager Storage account.
++++ Last updated : 03/23/2023+++
+# How to view the output of an `az networkcloud run-read-command` in the Cluster Manager Storage account
+
+This guide walks you through accessing the output file that is created in the Cluster Manager Storage account when an `az networkcloud baremetalmachine run-read-command` is executed on a server. The name of the file is identified in the `az rest` status output.
+
+1. Open the Cluster Manager Managed Resource Group for the Cluster where the server is housed and then select the **Storage account**.
+
+1. In the Storage account details, select **Storage browser** from the navigation menu on the left side.
+
+1. In the Storage browser details, select **Blob containers**.
+
+1. Select the baremetal-run-command-output blob container.
+
+1. Select the output file from the run-read command. The file name can be identified from the `az rest --method get` command. Additionally, the **Last modified** timestamp aligns with when the command was executed.
+
+1. You can manage and download the output file from the **Overview** pop-out.
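If you prefer the Azure CLI to the Storage browser, a sketch like the following can list and download the blob. The storage account and file names are placeholders, and the container name follows the step above; confirm both in your environment:

```azurecli
# List the blobs in the run-command output container to locate the file.
az storage blob list --auth-mode login \
  --account-name "<clusterManagerStorageAccount>" \
  --container-name "baremetal-run-command-output" --output table

# Download the identified output file for local inspection.
az storage blob download --auth-mode login \
  --account-name "<clusterManagerStorageAccount>" \
  --container-name "baremetal-run-command-output" \
  --name "<outputFileName>.tar.gz" --file "./run-read-output.tar.gz"
```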
+
+For information on running the `run-read-command`, see:
+
+- [Troubleshoot BMM issues using the run-read command](howto-baremetal-run-read.md)
operator-nexus Howto Baremetal Run Read https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-baremetal-run-read.md
+
+ Title: Troubleshoot BMM issues using the `az networkcloud baremetalmachine run-read-command` for Operator Nexus
+description: Step by step guide on using the `az networkcloud baremetalmachine run-read-command` to run diagnostic commands on a BMM.
++++ Last updated : 03/23/2023+++
+# Troubleshoot BMM issues using the `az networkcloud baremetalmachine run-read-command`
+
+There may be situations where a user needs to investigate and resolve issues with an on-premises BMM. Operator Nexus provides the `az networkcloud baremetalmachine run-read-command` so users can run a curated list of read-only commands to get information from a BMM.
+
+The command execution produces an output file containing the results, which you can find in the Cluster Manager's Storage account.
+
+## Prerequisites
+
+1. Install the latest version of the
+ [appropriate CLI extensions](./howto-install-cli-extensions.md)
1. Ensure that the target BMM has its `poweredState` set to `On` and its `readyState` set to `True` (see the query sketch after this list)
1. Get the name of the resource group that you created for the `Cluster` resource
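To confirm those values before running a command, a query along these lines may help (a sketch; verify the property names against your CLI version):

```azurecli
az networkcloud baremetalmachine show \
  --name "bareMetalMachineName" --resource-group "resourceGroupName" \
  --query "{poweredState:poweredState, readyState:readyState}" --output table
```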
+
+## Executing a run-read command
+
+The run-read command executes a read-only command on the specified BMM.
+
+The current list of supported commands is:
+
+- `traceroute`
+- `ping`
+- `arp`
+- `tcpdump`
+- `brctl show`
+- `dmidecode`
+- `host`
+- `ip link show`
+- `ip address show`
+- `ip maddress show`
+- `ip route show`
+- `journalctl`
+- `kubectl logs`
+- `kubectl describe`
+- `kubectl get`
+- `kubectl api-resources`
+- `kubectl api-versions`
+- `uname`
+- `uptime`
+- `fdisk -l`
+- `hostname`
+- `ifconfig -a`
+- `ifconfig -s`
+- `mount`
+- `ss`
+- `ulimit -a`
+
+The command syntax is:
+
+```azurecli
+az networkcloud baremetalmachine run-read-command --name "<machine-name>" \
+ --limit-time-seconds <timeout> \
+ --commands arguments="<arg1>" arguments="<arg2>" command="<command>" --resource-group "<resourceGroupName>" \
+ --subscription "<subscription>" \
+ --debug
+```
+
+These commands do not require `arguments`:
+
+- `fdisk -l`
+- `hostname`
+- `ifconfig -a`
+- `ifconfig -s`
+- `mount`
+- `ss`
+- `ulimit -a`
+
+All other inputs are required. Multiple commands are each specified with their own `--commands` option.
+
+Each `--commands` option specifies `command` and `arguments`. For a command with multiple arguments, `arguments` is repeated for each one.
+
+`--debug` is required to capture the operation-status URL, which you can query to get the URL of the output file.
+
+### This example executes the `hostname` command and a `ping` command.
+
+```azurecli
+az networkcloud baremetalmachine run-read-command --name "bareMetalMachineName" \
+ --limit-time-seconds 60 \
+ --commands command="hostname" \
+ --commands arguments="192.168.0.99" arguments="-c" arguments="3" command="ping" \
+ --resource-group "resourceGroupName" \
+ --subscription "<subscription>" \
+ --debug
+```
+
+In the response, an HTTP status code of 202 is returned as the operation is performed asynchronously.
+
+## Checking command status and viewing output
+
+The debug output of the command execution contains the 'Azure-AsyncOperation' response header. Note the URL provided.
+
+```azurecli
+cli.azure.cli.core.sdk.policies: 'Azure-AsyncOperation': 'https://management.azure.com/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/providers/Microsoft.NetworkCloud/locations/EASTUS/operationStatuses/0797fdd7-28eb-48ec-8c70-39a3f893421d*A0123456789F331FE47B40E2BFBCE2E133FD3ED2562348BFFD8388A4AAA1271?api-version=2022-09-30-preview'
+```
+
+Check the status of the operation with the `az rest` command:
+
+```azurecli
+az rest --method get --url <Azure-AsyncOperation-URL>
+```
+
+Repeat until the response to the URL displays the result of the run-read-command.
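To automate that polling, a small shell loop like the following works (a sketch; `ASYNC_URL` stands in for the `Azure-AsyncOperation` URL captured earlier):

```azurecli
ASYNC_URL="<Azure-AsyncOperation-URL>"
status="Running"
# Poll the async operation until it reaches a terminal state.
while [ "$status" != "Succeeded" ] && [ "$status" != "Failed" ]; do
  sleep 10
  status=$(az rest --method get --url "$ASYNC_URL" --query status --output tsv)
done
az rest --method get --url "$ASYNC_URL"
```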
+
+Sample output looks something like this. A `status` of `Succeeded` indicates that the command was executed on the BMM. The `resultUrl` provides a link to the zipped output file that contains the output from the command execution. The tar.gz file name can be used to identify the file in the Storage account of the Cluster Manager resource group.
+
+See [How to view the output of an `az networkcloud run-read-command` in the Cluster Manager Storage account](howto-baremetal-review-read-output.md) for instructions on locating the output file in the Storage account. You can also use the link to directly access the output zip file.
+
+```azurecli
+az rest --method get --url https://management.azure.com/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/providers/Microsoft.NetworkCloud/locations/EASTUS/operationStatuses/932a8fe6-12ef-419c-bdc2-5bb11a2a071d*C0123456789E735D5D572DECFF4EECE2DFDC121CC3FC56CD50069249183110F?api-version=2022-09-30-preview
+{
+ "endTime": "2023-03-01T12:38:10.8582635Z",
+ "error": {},
+ "id": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/providers/Microsoft.NetworkCloud/locations/EASTUS/operationStatuses/932a8fe6-12ef-419c-bdc2-5bb11a2a071d*C0123456789E735D5D572DECFF4EECE2DFDC121CC3FC56CD50069249183110F",
+ "name": "932a8fe6-12ef-419c-bdc2-5bb11a2a071d*C0123456789E735D5D572DECFF4EECE2DFDC121CC3FC56CD50069249183110F",
+ "properties": {
+ "exitCode": "15",
+ "outputHead": "====Action Command Output====",
+ "resultUrl": "https://cmnvc94zkjhvst.blob.core.windows.net/bmm-run-command-output/af4fea82-294a-429e-9d1e-e93d54f4ea24-action-bmmruncmd.tar.gz?se=2023-03-01T16%3A38%3A07Z&sig=Lj9MS01234567898fn4qb2E1HORGh260EHdRrCJTJg%3D&sp=r&spr=https&sr=b&st=2023-03-01T12%3A38%3A07Z&sv=2019-12-12"
+ },
+ "resourceId": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/m01-xx-HostedResources-xx/providers/Microsoft.NetworkCloud/bareMetalMachines/m01r750wkr3",
+ "startTime": "2023-03-01T12:37:48.2823434Z",
+ "status": "Succeeded"
+}
+```
operator-nexus Howto Configure Network Fabric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-configure-network-fabric.md
Previously updated : 02/06/2023 #Required; mm/dd/yyyy format. Last updated : 03/26/2023 #Required; mm/dd/yyyy format.
This article describes how to create a Network Fabric by using the Azure Command Line Interface (CLI).
## Prerequisites
-* A Network Fabric Controller exists -- add link in your Azure account.
+* A Network Fabric Controller is successfully provisioned.
* A Network Fabric Controller instance in Azure manages multiple Network Fabric Resources.
* You can reuse a pre-existing Network Fabric Controller.
-* Physical infrastructure installed and cabled as per BoM.
+* Physical infrastructure installed and cabled as per BOM.
* ExpressRoute connectivity established between the Azure region and your WAN (your networking).
* The needed VLANs, Route-Targets and IP addresses configured in your network.
* Terminal Server [installed and configured](./howto-platform-prerequisites.md#set-up-terminal-server)
This article describes how to create a Network Fabric by using the Azure Command
|--|--|--|--|--|
| resource-group | Name of the resource group | "NFResourceGroup" | True | String |
| location | Location of Azure region | "eastus" | True | String |
-| resource-name | Name of the FabricResource | Austin-Fabric |True | String |
+| resource-name | Name of the FabricResource | NF-Lab1 |True | String |
| nf-sku | Fabric SKU ID, based on the ordered SKU of the BoM. Contact AFO team for specific SKU value for the BoM | M8-A400-A100-C16-aa | True | String |
| nfc-id | Network Fabric Controller ARM resource ID | | True | String |
+| rack-count |Total number of compute racks | 8 |True | Integer |
+| server-count-per-rack |Total number of worker nodes per rack| 16 |True | Integer |
| **managed-network-config** | Details of management network | | True | |
| ipv4Prefix | IPv4 Prefix of the management network. This prefix should be unique across all Network Fabrics in a Network Fabric Controller. The prefix length should be /19 or shorter (/18 and lower are allowed; /20 isn't) | 10.246.0.0/19 | True | String |
az nf fabric create \
--resource-name "NFName" \ --nf-sku "NFSKU" \ --nfc-id ""/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroupName/providers/Microsoft.ManagedNetworkfabric/networkfabricControllers/NFCName" \" \
- --nni-config '{"layer3Configuration":{"primaryIpv4Prefix":"10.246.0.124/30", "secondaryIpv4Prefix": "10.246.0.128/30", "fabricAsn":65048, "peerAsn":65048, "vlanId": 20}}' \
- --ts-config '{"primaryIpv4Prefix":"20.0.10.0/30", "secondaryIpv4Prefix": "20.0.10.4/30","username":"****", "password": "*****"}' \
- --managed-network-config '{"ipv4Prefix":"10.246.0.0/19", \
- "managementVpnConfiguration":{"optionBProperties":{"importRouteTargets":["65048:10039"], "exportRouteTargets":["65048:10039"]}}, \
- "workloadVpnConfiguration":{"optionBProperties":{"importRouteTargets":["65048:10050"], "exportRouteTargets":["65048:10050"]}}}'
+--fabric-asn 65014 \
+--ipv4-prefix 10.x.0.0/19 \
+--ipv6-prefix fda0:d59c:da05::/59 \
+--rack-count 8 \
+--server-count-per-rack 16 \
+--ts-config '{"primaryIpv4Prefix":"20.x.0.5/30","secondaryIpv4Prefix": "20.x.1.6/30","username":"*****", "password": "************", "serialNumber":"************"}' \
+--managed-network-config '{"infrastructureVpnConfiguration":{"peeringOption":"OptionB","optionBProperties":{"importRouteTargets":["65014:10039"],"exportRouteTargets":["65014:10039"]}}, "workloadVpnConfiguration":{"peeringOption": "OptionB", "optionBProperties": {"importRouteTargets": ["65014:10050"], "exportRouteTargets": ["65014:10050"]}}}'
```

Expected output:

```json
{
  "annotation": null,
+ "fabricAsn": 65014,
"id": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroup/providers/Microsoft.ManagedNetworkfabric/networkfabrics/NFName",
+ "ipv4Prefix": "10.x.0.0/19",
+ "ipv6Prefix": "fda0:d59c:da05::/59",
"l2IsolationDomains": null, "l3IsolationDomains": null, "location": "eastus", "managementNetworkConfiguration": {
- "ipv4Prefix": "10.246.0.0/19",
- "ipv6Prefix": null,
- "managementVpnConfiguration": {
+ "infrastructureVpnConfiguration": {
+ "administrativeState": "Enabled",
+ "networkToNetworkInterconnectId": null,
"optionAProperties": null, "optionBProperties": { "exportRouteTargets": [
- "65048:10039"
+ "65014:10039"
], "importRouteTargets": [
- "65048:10039"
+ "65014:10039"
      ]
    },
- "peeringOption": "OptionA",
- "state": "Enabled"
+ "peeringOption": "OptionB"
}, "workloadVpnConfiguration": {
+ "administrativeState": "Enabled",
+ "networkToNetworkInterconnectId": null,
"optionAProperties": null, "optionBProperties": { "exportRouteTargets": [
- "65048:10050"
+ "65014:10050"
], "importRouteTargets": [
- "65048:10050"
+ "65014:10050"
      ]
    },
- "peeringOption": "OptionA",
- "state": "Enabled"
+ "peeringOption": "OptionB"
    }
  },
  "name": "NFName",
- "networkfabricControllerId": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFCResourceGroupName/providers/Microsoft.ManagedNetworkfabric/networkfabricControllers/NFCName",
- "networkfabricSku": "NFSKU",
- "networkToNetworkInterconnect": {
- "layer2Configuration": null,
- "layer3Configuration": {
- "fabricAsn": 65048,
- "peerAsn": 65048,
- "primaryIpv4Prefix": "10.246.0.124/30",
- "primaryIpv6Prefix": null,
- "routerId": null,
- "secondaryIpv4Prefix": "10.246.0.128/30",
- "secondaryIpv6Prefix": null,
- "vlanId": 20
- }
- },
+ "networkFabricControllerId": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFCResourceGroupName/providers/Microsoft.ManagedNetworkfabric/networkfabricControllers/NFCName",
+ "networkFabricSku": "NFSKU",
"operationalState": null, "provisioningState": "Accepted",
+ "rackCount": 8,
"racks": null, "resourceGroup": "NFResourceGroup",
+ "routerId": null,
+ "serverCountPerRack": 16,
"systemData": {
- "createdAt": "2022-11-02T06:56:05.019873+00:00",
+ "createdAt": "2023-03-10T11:06:33.818069+00:00",
"createdBy": "email@address.com", "createdByType": "User",
- "lastModifiedAt": "2022-11-02T06:56:05.019873+00:00",
+ "lastModifiedAt": "2023-03-10T11:06:33.818069+00:00",
"lastModifiedBy": "email@address.com", "lastModifiedByType": "User" },
Expected output:
"terminalServerConfiguration": { "networkDeviceId": null, "password": null,
- "primaryIpv4Prefix": "20.0.10.0/30",
+ "primaryIpv4Prefix": "20.x.0.5/30",
"primaryIpv6Prefix": null,
- "secondaryIpv4Prefix": "20.0.10.4/30",
+ "secondaryIpv4Prefix": "20.x.1.6/30",
"secondaryIpv6Prefix": null,
- "****": "root"
+ "serialNumber": "xxxxxxxx",
+ "username": "xxxxxxxx"
}, "type": "microsoft.managednetworkfabric/networkfabrics" } ```
-## List or get Network Fabric
+## List Network Fabric
```azurecli
az nf fabric list --resource-group "NFResourceGroup"
```
Expected output:
{ "annotation": null, "id": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroup/providers/Microsoft.ManagedNetworkfabric/networkfabrics/NFName",
- "l2IsolationDomains": null,
- "l3IsolationDomains": null,
- "location": "eastus",
- "managementNetworkConfiguration": {
- "ipv4Prefix": "10.246.0.0/19",
- "ipv6Prefix": null,
- "managementVpnConfiguration": {
- "optionAProperties": null,
- "optionBProperties": {
- "exportRouteTargets": [
- "65048:10039"
- ],
- "importRouteTargets": [
- "65048:10039"
- ]
- },
- "peeringOption": "OptionA",
- "state": "Enabled"
+ "ipv4Prefix": "10.x.0.0/19",
+ "ipv6Prefix": "fda0:d59c:da05::/59",
+ "l2IsolationDomains": null,
+ "l3IsolationDomains": null,
+ "location": "eastus",
+ "managementNetworkConfiguration": {
+ "infrastructureVpnConfiguration": {
+ "administrativeState": "Enabled",
+ "networkToNetworkInterconnectId": null,
+ "optionAProperties": null,
+ "optionBProperties": {
+ "exportRouteTargets": [
+ "65014:10039"
+ ],
+ "importRouteTargets": [
+ "65014:10039"
+ ]
},
- "workloadVpnConfiguration": {
- "optionAProperties": null,
- "optionBProperties": {
- "exportRouteTargets": [
- "65048:10050"
- ],
- "importRouteTargets": [
- "65048:10050"
- ]
- },
- "peeringOption": "OptionA",
- "state": "Enabled"
- }
+ "peeringOption": "OptionB"
},
+ "workloadVpnConfiguration": {
+ "administrativeState": "Enabled",
+ "networkToNetworkInterconnectId": null,
+ "optionAProperties": null,
+ "optionBProperties": {
+ "exportRouteTargets": [
+ "65014:10050"
+ ],
+ "importRouteTargets": [
+ "65014:10050"
+ ]
+ },
+ "peeringOption": "OptionB"
+ }
+ },
"name": "NFName", "networkfabricControllerId": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFCResourceGroupName/providers/Microsoft.ManagedNetworkfabric/networkfabricControllers/NFCName",
- "networkfabricSku": "NFSKU",
- "networkToNetworkInterconnect": {
- "layer2Configuration": null,
- "layer3Configuration": {
- "fabricAsn": 65048,
- "peerAsn": 65048,
- "primaryIpv4Prefix": "10.246.0.124/30",
- "primaryIpv6Prefix": null,
- "routerId": null,
- "secondaryIpv4Prefix": "10.246.0.128/30",
- "secondaryIpv6Prefix": null,
- "vlanId": 20
- }
- },
- "operationalState": null,
- "provisioningState": "Failed",
- "racks": null,
- "resourceGroup": "NFResourceGroup",
- "systemData": {
- "createdAt": "2022-11-02T06:56:05.019873+00:00",
- "createdBy": "email@address.com",
- "createdByType": "User",
- "lastModifiedAt": "2022-11-02T06:56:05.019873+00:00",
- "lastModifiedBy": "email@address.com",
- "lastModifiedByType": "User"
- },
- "tags": null,
- "terminalServerConfiguration": {
- "networkDeviceId": null,
- "password": null,
- "primaryIpv4Prefix": "20.0.10.0/30",
- "primaryIpv6Prefix": null,
- "secondaryIpv4Prefix": "20.0.10.4/30",
- "secondaryIpv6Prefix": null,
- "****": "****"
- },
- "type": "microsoft.managednetworkfabric/networkfabrics"
- }
-]
-```
-
-## Add Racks
-
-On creating Network Fabric, add one aggregate rack and two or more compute racks to the Network Fabric. The number of racks should match the physical racks in the Operator Nexus instance
-
-### Add Aggregate Rack
-
-```azurecli
-az nf rack create \
resource-group "NFResourceGroup" \location "eastus" \network-rack-sku "M8-A400-A100-C16-aa" \nf-id "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroup/providers/Microsoft.ManagedNetworkfabric/networkfabrics/NFName" \resource-name "AR1"
-```
-
-Expected output:
-
-```json
-{
- "annotation": null,
- "id": "/subscriptions/8a0c9a74-a831-4363-8590-49bbdd2ea39e/resourceGroups/OP1lab2-fabric/providers/Microsoft.ManagedNetworkfabric/networkRacks/AR1",
- "location": "eastus",
- "name": "AR1",
- "networkDevices": [
- "/subscriptions/8a0c9a74-a831-4363-8590-49bbdd2ea39e/resourceGroups/OP1lab2-fabric/providers/Microsoft.ManagedNetworkfabric/networkDevices/NFName-AR1-CE1",
- "/subscriptions/8a0c9a74-a831-4363-8590-49bbdd2ea39e/resourceGroups/OP1lab2-fabric/providers/Microsoft.ManagedNetworkfabric/networkDevices/NFName-AR1-CE2",
- "/subscriptions/8a0c9a74-a831-4363-8590-49bbdd2ea39e/resourceGroups/OP1lab2-fabric/providers/Microsoft.ManagedNetworkfabric/networkDevices/NFName-AR1-TOR17",
- "/subscriptions/8a0c9a74-a831-4363-8590-49bbdd2ea39e/resourceGroups/OP1lab2-fabric/providers/Microsoft.ManagedNetworkfabric/networkDevices/NFName-AR1-TOR18",
- "/subscriptions/8a0c9a74-a831-4363-8590-49bbdd2ea39e/resourceGroups/OP1lab2-fabric/providers/Microsoft.ManagedNetworkfabric/networkDevices/NFName-AR1-MgmtSwitch1",
- "/subscriptions/8a0c9a74-a831-4363-8590-49bbdd2ea39e/resourceGroups/OP1lab2-fabric/providers/Microsoft.ManagedNetworkfabric/networkDevices/NFName-AR1-MgmtSwitch2",
- "/subscriptions/8a0c9a74-a831-4363-8590-49bbdd2ea39e/resourceGroups/OP1lab2-fabric/providers/Microsoft.ManagedNetworkfabric/networkDevices/NFName-AR1-NPB1",
- "/subscriptions/8a0c9a74-a831-4363-8590-49bbdd2ea39e/resourceGroups/OP1lab2-fabric/providers/Microsoft.ManagedNetworkfabric/networkDevices/NFName-AR1-NPB2"
- ],
- "networkfabricId": "/subscriptions/8a0c9a74-a831-4363-8590-49bbdd2ea39e/resourceGroups/OP1lab2-fabric/providers/Microsoft.ManagedNetworkfabric/networkfabrics/NFName",
- "networkRackSku": "M8-A400-A100-C16-aa",
+ "networkFabricSku": "NFSKU",
+ "operationalState": null,
"provisioningState": "Succeeded",
- "resourceGroup": "NFResourceGroupName",
+ "rackCount": 8,
+ "racks": null,
+ "resourceGroup": "NFResourceGroup",
+ "routerId": null,
+ "serverCountPerRack": 16,
"systemData": {
- "createdAt": "2022-11-01T17:04:18.908946+00:00",
- "createdBy": "email@adress.com",
+ "createdAt": "2023-03-10T11:06:33.818069+00:00",
+ "createdBy": "email@address.com",
"createdByType": "User",
- "lastModifiedAt": "2022-11-01T17:04:18.908946+00:00",
+ "lastModifiedAt": "2023-03-10T11:06:33.818069+00:00",
"lastModifiedBy": "email@address.com", "lastModifiedByType": "User" }, "tags": null,
- "type": "microsoft.managednetworkfabric/networkracks"
+ "terminalServerConfiguration": {
+ "networkDeviceId": null,
+ "password": null,
+ "primaryIpv4Prefix": "20.x.0.5/30",
+ "primaryIpv6Prefix": null,
+ "secondaryIpv4Prefix": "20.x.1.6/30",
+ "secondaryIpv6Prefix": null,
+ "serialNumber": "xxxxxxxx",
+ "username": "xxxxxxxx"
+ },
+ "type": "microsoft.managednetworkfabric/networkfabrics"
}
+]
```
-### Add Compute Rack 1
+## Create NNI
-```azurecli
-az nf rack create \
resource-group "NFResourceGroup" \location "eastus" \network-rack-sku "M8-A400-A100-C16-aa" \nf-id "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroup/providers/Microsoft.ManagedNetworkfabric/networkfabrics/NFName" \resource-name "CR1"
+After the Network Fabric is created, the next step is to create the NNI (network-to-network interconnect).
+Run the following command to create the NNI:
+
+```azurecli
+az nf nni create --resource-group "NFResourceGroup" \
+--location "eastus" \
+--resource-name "NNIResourceName" \
+--fabric "NFName" \
+--is-management-type "True" \
+--use-option-b "True" \
+--layer2-configuration '{"portCount": 1, "mtu": 1500}' \
+--layer3-configuration '{"peerASN": 65014, "vlanId": 683, "primaryIpv4Prefix": "10.x.0.124/30", "secondaryIpv4Prefix": "10.x.0.128/30", "primaryIpv6Prefix": "fda0:d59c:da0a:500::7c/127", "secondaryIpv6Prefix": "fda0:d59c:da0a:500::80/127"}'
```

Expected output:

```json
{
- "annotation": null,
- "id": "/subscriptions/8a0c9a74-a831-4363-8590-49bbdd2ea39e/resourceGroups/OP1lab2-fabric/providers/Microsoft.ManagedNetworkfabric/networkRacks/CR1",
- "location": "eastus",
- "name": "CR1",
- "networkDevices": [
- "/subscriptions/8a0c9a74-a831-4363-8590-49bbdd2ea39e/resourceGroups/OP1lab2-fabric/providers/Microsoft.ManagedNetworkfabric/networkDevices/NFName-CR1-TOR1",
- "/subscriptions/8a0c9a74-a831-4363-8590-49bbdd2ea39e/resourceGroups/OP1lab2-fabric/providers/Microsoft.ManagedNetworkfabric/networkDevices/NFName-CR1-TOR2",
- "/subscriptions/8a0c9a74-a831-4363-8590-49bbdd2ea39e/resourceGroups/OP1lab2-fabric/providers/Microsoft.ManagedNetworkfabric/networkDevices/NFName-CR1-MgmtSwitch"
- ],
- "networkfabricId": "/subscriptions/8a0c9a74-a831-4363-8590-49bbdd2ea39e/resourceGroups/OP1lab2-fabric/providers/Microsoft.ManagedNetworkfabric/networkfabrics/NFName",
- "networkRackSku": "M8-A400-A100-C16-aa",
+ "administrativeState": null,
+ "id": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroup/providers/Microsoft.ManagedNetworkFabric/networkFabrics/NFName/networkToNetworkInterconnects/NNIResourceName",
+ "isManagementType": "True",
+ "layer2Configuration": {
+ "interfaces": null,
+ "mtu": 1500,
+ "portCount": 1
+ },
+ "layer3Configuration": {
+ "exportRoutePolicyId": null,
+ "fabricAsn": null,
+ "importRoutePolicyId": null,
+ "peerAsn": 65014,
+ "primaryIpv4Prefix": "10.x.0.124/30",
+ "primaryIpv6Prefix": "fda0:d59c:da0a:500::7c/127",
+ "secondaryIpv4Prefix": "10.x.0.128/30",
+ "secondaryIpv6Prefix": "fda0:d59c:da0a:500::80/127",
+ "vlanId": 683
+ },
+ "name": "NNIResourceName",
"provisioningState": "Succeeded",
- "resourceGroup": "NFResourceGroupName",
+ "resourceGroup": "NFResourceGroup",
"systemData": {
- "createdAt": "2022-11-01T17:05:21.219619+00:00",
+ "createdAt": "2023-03-10T13:35:45.952324+00:00",
"createdBy": "email@address.com", "createdByType": "User",
- "lastModifiedAt": "2022-11-01T17:05:21.219619+00:00",
+ "lastModifiedAt": "2023-03-10T13:35:45.952324+00:00",
"lastModifiedBy": "email@address.com", "lastModifiedByType": "User" },
- "tags": null,
- "type": "microsoft.managednetworkfabric/networkracks"
+ "type": "microsoft.managednetworkfabric/networkfabrics/networktonetworkinterconnects",
+ "useOptionB": "True"
}
```
-### Add Compute Rack 2
-
-```azurecli
-az nf rack create \
resource-group "NFResourceGroup" \location "eastus" \network-rack-sku "M8-A400-A100-C16-aa" \nf-id "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroup/providers/Microsoft.ManagedNetworkfabric/networkfabrics/NFName" \resource-name "CR2"
-```
-
-Once all the racks are added, NFA creates the corresponding networkDevice resources.
+Once the NNI is created, NFA creates the corresponding Device resources.
## Next steps
-* Update the serial number in the networkDevice resource with the actual serial number on the device. The device sends the serial number as part of DHCP request.
-* Configure the terminal server with the serial numbers of all the devices (which also hosts DHCP server)
-* Provision the network devices via zero-touch provisioning mode. Based on the serial number in the DHCP request, the DHCP server responds with the boot configuration file for the corresponding device
+* Update the serial number in the Device resource with the actual serial number on the device. The device sends its serial number as part of the DHCP request.
+* Configure the terminal server (which also hosts the DHCP server) with the serial numbers of all the Devices.
+* Provision the Devices via zero-touch provisioning mode. Based on the serial number in the DHCP request, the DHCP server responds with the boot configuration file for the corresponding Device.
-## Update Network Fabric devices
+## Update Network Fabric Device
-Run the following command to update Network Fabric Devices:
+Run the following command to update the Device with the required details:
```azurecli
az nf device update \
Expected output:
}
```
-## List or get Network Fabric devices
+## List Network Fabric Device
-Run the following command to List Network Fabric devices:
+Run the following command to list the Devices:
```azurecli
az nf device list --resource-group "NFResourceGroup"
```
Expected output:
}
```
-Run the following command to Get or Show details of a Network Fabric device:
+Run the following command to show details of a Device:
```azurecli
az nf device show --resource-group "example-rg" --resource-name "example-device"
```
Expected output:
## Provision Fabric
-Once the device serial number is updated, the Network Fabric needs to be provisioned by executing the following command
+Once the Device serial number is updated, the Network Fabric needs to be provisioned by executing the following command:
```azurecli
-az nf fabric provision --resource-group "NFResourceGroup" --resource-name "NFName"
+az nf fabric provision --resource-group "NFResourceGroup" --resource-name "NFName"
```

```azurecli
-az nf fabric show --resource-group "NFResourceGroup" --resource-name "NFName"
+az nf fabric show --resource-group "NFResourceGroup" --resource-name "NFName"
```

Expected output:
Expected output:
"l3IsolationDomains": null, "location": "eastus", "managementNetworkConfiguration": {
- "ipv4Prefix": "10.246.0.0/19",
+ "ipv4Prefix": "10.x.0.0/19",
"ipv6Prefix": null, "managementVpnConfiguration": { "optionAProperties": null,
Expected output:
"layer3Configuration": { "fabricAsn": 65048, "peerAsn": 65048,
- "primaryIpv4Prefix": "10.246.0.124/30",
+ "primaryIpv4Prefix": "10.x.0.124/30",
"primaryIpv6Prefix": null, "routerId": null,
- "secondaryIpv4Prefix": "10.246.0.128/30",
+ "secondaryIpv4Prefix": "10.x.0.128/30",
"secondaryIpv6Prefix": null, "vlanId": 20 }
Expected output:
"terminalServerConfiguration": { "networkDeviceId": null, "password": null,
- "primaryIpv4Prefix": "20.0.10.0/30",
+ "primaryIpv4Prefix": "20.x.10.0/30",
"primaryIpv6Prefix": null,
- "secondaryIpv4Prefix": "20.0.10.4/30",
+ "secondaryIpv4Prefix": "20.x.10.4/30",
"secondaryIpv6Prefix": null, "****": "****" },
Expected output:
## Deleting Network Fabric
-To delete the Fabric, the operational state of Fabric shouldn't be "Provisioned". To change the operational state from Provisioned, run the same command to create the Fabric. Ensure there are no racks associated before deleting Fabric.
+To delete the Network Fabric, its operational state shouldn't be `Provisioned`. To change the operational state from `Provisioned`, run the `deprovision` command.
```azurecli
-az nf fabric create \
resource-group "NFResourceGroup" \location "eastus" \resource-name "NFName" \nf-sku "NFSKU" \nfc-id ""/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/ResourceGroupName/providers/Microsoft.ManagedNetworkfabric/networkfabricControllers/NFCName" \" \
- --nni-config '{"layer3Configuration":{"primaryIpv4Prefix":"10.246.0.124/30", "secondaryIpv4Prefix": "10.246.0.128/30", "fabricAsn":65048, "peerAsn":65048, "vlanId": 20}}' \
- --ts-config '{"primaryIpv4Prefix":"20.0.10.0/30", "secondaryIpv4Prefix": "20.0.10.4/30","****":"****", "password": "*****"}' \
- --managed-network-config '{"ipv4Prefix":"10.246.0.0/19", \
- "managementVpnConfiguration":{"optionBProperties":{"importRouteTargets":["65048:10039"], "exportRouteTargets":["65048:10039"]}}, \
- "workloadVpnConfiguration":{"optionBProperties":{"importRouteTargets":["65048:10050"], "exportRouteTargets":["65048:10050"]}}}'
-
+az nf fabric deprovision --resource-group "NFResourceGroup" --resource-name "NFName"
```

Expected output:
Expected output:
"l3IsolationDomains": null, "location": "eastus", "managementNetworkConfiguration": {
- "ipv4Prefix": "10.246.0.0/19",
+ "ipv4Prefix": "10.x.0.0/19",
"ipv6Prefix": null, "managementVpnConfiguration": { "optionAProperties": null,
Expected output:
"layer3Configuration": { "fabricAsn": 65048, "peerAsn": 65048,
- "primaryIpv4Prefix": "10.246.0.124/30",
+ "primaryIpv4Prefix": "10.x.0.124/30",
"primaryIpv6Prefix": null, "routerId": null,
- "secondaryIpv4Prefix": "10.246.0.128/30",
+ "secondaryIpv4Prefix": "10.x.0.128/30",
"secondaryIpv6Prefix": null, "vlanId": 20 } }, "operationalState": null,
- "provisioningState": "Accepted",
+ "provisioningState": "deprovisioned",
"racks":["/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroup/providers/Microsoft.ManagedNetworkfabric/networkRacks/AggRack". "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroup/providers/Microsoft.ManagedNetworkfabric/networkRacks/CompRack1, "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroup/providers/Microsoft.ManagedNetworkfabric/networkRacks/CompRack2]
Expected output:
"terminalServerConfiguration": { "networkDeviceId": null, "password": null,
- "primaryIpv4Prefix": "20.0.10.0/30",
+ "primaryIpv4Prefix": "20.x.10.0/30",
"primaryIpv6Prefix": null,
- "secondaryIpv4Prefix": "20.0.10.4/30",
+ "secondaryIpv4Prefix": "20.x.10.4/30",
"secondaryIpv6Prefix": null, "****": "root" },
Expected output:
}
```
-After the operationalState is no longer "Provisioned", delete all the Racks one by one
-
-```azurecli
-az nf rack delete --resource-group "NFResourceGroup" --resource-name "RackName"
-```
+After the operationalState is no longer `Provisioned`, delete the Network Fabric:
```azurecli
-az nf fabric show --resource-group "NFResourceGroup" --resource-name "NFName"
+az nf fabric delete --resource-group "NFResourceGroup" --resource-name "NFName"
```
operator-nexus Howto Hybrid Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-hybrid-aks.md
Title: "Azure Operator Nexus: Interact with AKS-Hybrid Cluster" description: Learn how to manage (view, list, update, delete) AKS-Hybrid clusters.--++ Last updated 02/02/2023
-# How to manage and lifecyle the AKS-Hybrid cluster
+# How to manage and lifecycle the AKS-Hybrid cluster
This document shows how to manage an AKS-Hybrid cluster that you use for CNF workloads.

## Before you begin
-You'll need:
+You need:
1. You should have created an [AKS-Hybrid Cluster](./quickstarts-tenant-workload-deployment.md#section-k-how-to-create-aks-hybrid-cluster-for-deploying-cnf-workloads)
2. <`YourAKS-HybridClusterName`>: the name of your previously created AKS-Hybrid cluster
To get a list of AKS-Hybrid clusters in your resource group:
```azurecli
az hybridaks list -o table \
- --resource-group "<YourResourceGroupName>" \
- --subscription "<YourSubscription>"
+ --resource-group "<YourResourceGroupName>" \
+ --subscription "<YourSubscription>"
```

## Show command
To see the properties of AKS-Hybrid cluster named `YourAKS-HybridClustername`:
```azurecli
az hybridaks show --name "<YourAKS-HybridClusterName>" \
- --resource-group "< YourResourceGroupName >" \
- --subscription "< YourSubscription >"
+ --resource-group "<YourResourceGroupName>" \
+ --subscription "<YourSubscription>"
```

## Update command
To update the properties of your AKS-Hybrid cluster:
```azurecli
az hybridaks update --name "<YourAKS-HybridClustername>" \
- --resource-group "<YourResourceGroupName>" \
- --subscription "< YourSubscription>" \
- --tags "<YourAKS-HybridClusterTags>"
+ --resource-group "<YourResourceGroupName>" \
+ --subscription "<YourSubscription>" \
+ --tags "<YourAKS-HybridClusterTags>"
```

## Delete command
To delete the AKS-Hybrid cluster named `YourAKS-HybridClustername`:
```azurecli
az hybridaks delete --name "<YourAKS-HybridClustername>" \
- --resource-group "<YourResourceGroupName >" \
- --subscription "<YourSubscription>"
+ --resource-group "<YourResourceGroupName>" \
+ --subscription "<YourSubscription>"
+```
+
+## Add node pool command
+
+To add a node pool to the AKS-Hybrid cluster named `YourAKS-HybridClustername`:
+```azurecli
+ az hybridaks nodepool add \
+ --name <name of the nodepool> \
+ --cluster-name "<YourAKS-HybridClustername>" \
+ --resource-group "<YourResourceGroupName>" \
+ --location <dc-location> \
+ --node-count <worker node count> \
+ --node-vm-size <Operator Nexus SKU> \
+ --zones <comma separated list of availability zones>
+```
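
For instance, a hypothetical invocation might look like the following; every concrete value here (node pool name, cluster name, location, counts, zones) is an example rather than a prescribed setting, and the VM size remains a placeholder:

```azurecli
# Hypothetical example: add a 3-node pool spread across zones 1, 2, and 3.
# All names and values are examples; the VM size is a placeholder.
az hybridaks nodepool add \
  --name "np1" \
  --cluster-name "myNexusCluster" \
  --resource-group "myResourceGroup" \
  --location "eastus" \
  --node-count 3 \
  --node-vm-size "<Operator Nexus SKU>" \
  --zones 1,2,3
```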
+
+## Delete node pool command
+
+To delete a node pool from the AKS-Hybrid cluster named `YourAKS-HybridClustername`:
+
+```azurecli
+ az hybridaks nodepool delete \
+ --name <name of the nodepool> \
+ --cluster-name "<YourAKS-HybridClustername>" \
+ --resource-group "<YourResourceGroupName>"
```
operator-nexus Quickstarts Tenant Workload Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/quickstarts-tenant-workload-deployment.md
Title: How to deploy tenant workloads description: Learn the steps for creating VMs for VNF workloads and for creating AKS-Hybrid clusters for CNF workloads-+
You should have the following information already:
- VLAN/subnet info for each of the layer 3 network(s)
- Which network(s) would need to talk to each other (remember to put VLANs/subnets that need to talk to each other into the same L3 Isolation Domain)
-- VLAN/subnet info for your `defaultcninetwork` for AKS-Hybrid cluster
- BGP peering and network policies information for your L3 Isolation Domain(s)
- VLANs for all your layer 2 network(s)
- VLANs for all your trunked network(s)
Your VM requires at least one Cloud Services Network. You need the egress endpoi
### Step V3: create a VM
-Operator Nexus Virtual Machines (VMs) are used for hosting VNF(s) within a Telco network.
+Operator Nexus Virtual Machines (VMs) are used for hosting VNF(s) within a Telco network.
The Nexus platform provides `az networkcloud virtualmachine create` to create a customized VM. For hosting a VNF on your VM, have it [Microsoft Azure Arc-enrolled](/azure/azure-arc/servers/overview), and provide a way to ssh to it via Azure CLI.
Gather this information:
- The `resourceId` of the `cloudservicesnetwork`
- The `resourceId(s)` for each of the L2/L3/Trunked Networks
-- Determine which network will serve as your default gateway (can only choose 1)
+- Determine which network serves as your default gateway (can only choose 1)
- If you want to specify `networkAttachmentName` (interface name) for any of your networks
- Determine the `ipAllocationMethod` for each of your L3 Networks (static/dynamic)
- The dimension of your VM
You need the egress endpoints you want to add to the proxy for your VM to access
For each previously created tenant network, a corresponding AKS-Hybrid vNET network needs to be created
-You'll need the Azure Resource Manager resource ID for each of the networks you created earlier. You can retrieve the Azure Resource Manager resource IDs as follows:
+You need the Azure Resource Manager resource ID for each of the networks you created earlier. You can retrieve the Azure Resource Manager resource IDs as follows:
```azurecli
az networkcloud cloudservicesnetwork show -g "<YourResourceGroupName>" -n "<YourCloudServicesNetworkName>" --subscription "<YourSubscription>" -o tsv --query id
```
This section describes how to create an AKS-Hybrid cluster
--control-plane-count <count> \
--location <dc-location> \
--node-count <worker node count> \
- --node-vm-size <Operator Nexus SKU>
+ --node-vm-size <Operator Nexus SKU> \
+ --zones <comma separated list of availability zones>
```

After a few minutes, the command completes and returns JSON-formatted information about the cluster.
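
As a quick sanity check, one option is to query the new cluster with the `az hybridaks show` command described elsewhere in this documentation set; note that the `provisioningState` property name is an assumption on my part:

```azurecli
# A sketch: confirm the newly created cluster finished provisioning.
# The provisioningState property name is an assumption.
az hybridaks show --name "<YourAKS-HybridClusterName>" \
  --resource-group "<YourResourceGroupName>" \
  --subscription "<YourSubscription>" \
  --query provisioningState --output tsv
```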
operator-nexus Quickstarts Tenant Workload Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/quickstarts-tenant-workload-prerequisites.md
Title: How to deploy tenant workloads prerequisites description: Learn the prerequisites for creating VMs for VNF workloads and for creating AKS-Hybrid clusters for CNF workloads--++ Last updated 01/25/2023 #Required; mm/dd/yyyy format.
You need:
- your Azure account and the subscription ID of Operator Nexus cluster deployment
- the `custom location` resource ID of your Operator Nexus cluster
+## AKS-Hybrid availability zone
+The `--zones` option in `az hybridaks create` or `az hybridaks nodepool add` distributes AKS-Hybrid clusters across different zones for better fault tolerance and performance. When creating an AKS-Hybrid cluster, you can use `--zones` to schedule the cluster onto specific racks or distribute it evenly across multiple racks, improving resource utilization and fault tolerance.
+
+If you don't specify a zone through the `--zones` option when creating an AKS-Hybrid cluster, the Operator Nexus platform automatically applies a default anti-affinity rule. This rule aims to avoid scheduling a cluster VM on a node that already hosts a VM from the same cluster, but it's best-effort and placement isn't guaranteed.
+
+To obtain the list of available zones in the given Operator Nexus instance, use the following command:
+
+```azurecli
+ az networkcloud cluster show \
+ --resource-group <Operator Nexus on-prem cluster Resource Group> \
+ --name <Operator Nexus on-prem cluster name> \
+    --query "computeRackDefinitions[*].availabilityZone"
+```
+
### Review Azure container registry

[Azure Container Registry](../container-registry/container-registry-intro.md) is a managed registry service to store and manage your container images and related artifacts.
This VM image build procedure is derived from [kubevirt](https://kubevirt.io/use
To deploy your workloads, you need:

- to create a resource group or find a resource group to use for your workloads
-- the network fabric resource ID to create isolation-domains.
+- the network fabric resource ID to create isolation-domains.
partner-solutions New Relic How To Configure Prereqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/new-relic/new-relic-how-to-configure-prereqs.md
To set up New Relic on Azure, you need to register the `NewRelic.Observability`
- To register the resource provider in the Azure CLI, use this command:

```azurecli
- az provider register \--namespace NewRelic.Observability \--subscription \<subscription-id\>
+ az provider register --namespace NewRelic.Observability --subscription <subscription-id>
```

## Next steps

- [Quickstart: Get started with New Relic](new-relic-create.md)
-- [Troubleshoot Azure Native New Relic Service](new-relic-troubleshoot.md)
+- [Troubleshoot Azure Native New Relic Service](new-relic-troubleshoot.md)
postgresql Application Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/application-best-practices.md
- Title: App development best practices - Azure Database for PostgreSQL flexible server
-description: Learn about best practices for building an app by using Azure Database for PostgreSQL flexible server.
------ Previously updated : 03/22/2023--
-# Best practices for building an application with Azure Database for PostgreSQL flexible server
--
-Here are some best practices to help you build a cloud-ready application by using Azure Database for PostgreSQL. These best practices can reduce development time for your app.
-
-## Configuration of application and database resources
-
-### Keep the application and database in the same region
-
-Make sure all your dependencies are in the same region when deploying your application in Azure. Spreading instances across regions or availability zones creates network latency, which might affect the overall performance of your application.
-
-### Keep your PostgreSQL server secure
-
-Configure your PostgreSQL server to be [secure](./concepts-security.md) and not accessible publicly. Use one of these options to secure your server:
--- [Firewall rules](./concepts-firewall-rules.md)-- [Virtual networks](./concepts-networking.md)-- [Private Networking](https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/private-networking-patterns-in-azure-database-for-postgres/ba-p/3007149)-
-For security, you must always connect to your PostgreSQL server over SSL and configure your PostgreSQL server and your application to use TLS 1.2. See [How to connect SSL/TLS](./how-to-connect-tls-ssl.md).
--
-### Use environment variables for connection information
-
-Do not save your database credentials in your application code. Depending on the front end application, follow the guidance to set up environment variables. For App service use, see [how to configure app settings](../../app-service/configure-common.md#configure-app-settings) and for Azure Kubernetes service, see [how to use Kubernetes secrets](https://kubernetes.io/docs/concepts/configuration/secret/).
-
-## Performance and resiliency
-
-Here are a few tools and practices that you can use to help debug performance issues with your application.
-
-### Use Connection Pooling
-
-With connection pooling, a fixed set of connections is established at the startup time and maintained. This also helps reduce the memory fragmentation on the server that is caused by the dynamic new connections established on the database server. The connection pooling can be configured on the application side if the app framework or database driver supports it. If that is not supported, the other recommended option is to leverage a proxy connection pooler service like [PgBouncer](https://pgbouncer.github.io/) or [Pgpool](https://pgpool.net/mediawiki/index.php/Main_Page) running outside the application and connecting to the database server. Both PgBouncer and Pgpool are community based tools that work with Azure Database for PostgreSQL.
-
-### Retry logic to handle transient errors
-
-Your application might experience transient errors where connections to the database are dropped or lost intermittently. In such situations, the server is up and running after one to two retries in 5 to 10 seconds. A good practice is to wait for 5 seconds before your first retry. Then follow each retry by increasing the wait gradually, up to 60 seconds. Limit the maximum number of retries at which point your application considers the operation failed, so you can then further investigate. See [How to troubleshoot connection errors](./concepts-connectivity.md) to learn more.
-
-### Enable read replication to mitigate failovers
-
-You can use [Data-in Replication](./concepts-read-replicas.md) for failover scenarios. When you're using read replicas, no automated failover between source and replica servers occurs. You'll notice a lag between the source and the replica because the replication is asynchronous. Network lag can be influenced by many factors, like the size of the workload running on the source server and the latency between datacenters. In most cases, replica lag ranges from a few seconds to a couple of minutes.
-
-## Database deployment
-
-### Configure CI/CD deployment pipeline
-
-Occasionally, you need to deploy changes to your database. In such cases, you can use continuous integration (CI) through [GitHub Actions](https://github.com/Azure/postgresql/blob/master/README.md) for your PostgreSQL server to update the database by running a custom script against it.
-
-### Define manual database deployment process
-
-During manual database deployment, follow these steps to minimize downtime or reduce the risk of failed deployment:
--- Create a copy of a production database on a new database by using pg_dump.-- Update the new database with your new schema changes or updates needed for your database.-- Put the production database in a read-only state. You should not have write operations on the production database until deployment is completed.-- Test your application with the newly updated database from step 1.-- Deploy your application changes and make sure the application is now using the new database that has the latest updates.-- Keep the old production database so that you can roll back the changes. You can then evaluate to either delete the old production database or export it on Azure Storage if needed.-
-> [!NOTE]
-> If the application is like an e-commerce app and you can't put it in read-only state, deploy the changes directly on the production database after making a backup. Theses change should occur during off-peak hours with low traffic to the app to minimize the impact, because some users might experience failed requests. Make sure your application code also handles any failed requests.
-
-## Database schema and queries
-
-Here are few tips to keep in mind when you build your database schema and your queries.
-
-### Use BIGINT or UUID for Primary Keys
-
-When building custom application or some frameworks they maybe using `INT` instead of `BIGINT` for primary keys. When you use ```INT```, you run the risk of where the value in your database can exceed storage capacity of ```INT``` data type. Making this change to an existing production application can be time consuming with cost more development time. Another option is to use [UUID](https://www.postgresql.org/docs/current/datatype-uuid.html) for primary keys.This identifier uses an auto-generated 128-bit string, for example ```a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11```. Learn more about [PostgreSQL data types](https://www.postgresql.org/docs/8.1/datatype.html).
-
-### Use indexes
-
-There are many types of [indexes](https://www.postgresql.org/docs/9.1/indexes.html) in Postgres which can be used in different ways. Using an index helps the server find and retrieve specific rows much faster than it could do without an index. But indexes also add overhead to the database server, hence avoid having too many indexes.
-
-### Use autovacuum
-
-You can optimize your server with autovacuum on an Azure Database for PostgreSQL server. PostgreSQL allow greater database concurrency but with every update results in insert and delete. For delete, the records are soft marked which will be purged later. To carry out these tasks, PostgreSQL runs a vacuum job. If you don't vacuum from time to time, the dead tuples that accumulate can result in:
--- Data bloat, such as larger databases and tables.-- Larger suboptimal indexes.-- Increased I/O.-
-Learn more about [how to optimize with autovacuum](how-to-autovacuum-tuning.md).
-
-### Use pg_stats_statements
-
-Pg_stat_statements is a PostgreSQL extension that's enabled by default in Azure Database for PostgreSQL. The extension provides a means to track execution statistics for all SQL statements executed by a server. See [how to use pg_statement](how-to-optimize-query-stats-collection.md).
-
-### Use the Query Store
-
-The [Query Store](./concepts-query-store.md) feature in Azure Database for PostgreSQL provides a more effective method to track query statistics. We recommend this feature as an alternative to using pg_stats_statements.
-
-### Optimize bulk inserts and use transient data
-
-If you have workload operations that involve transient data or that insert large datasets in bulk, consider using unlogged tables. It provides atomicity and durability, by default. Atomicity, consistency, isolation, and durability make up the ACID properties. See [how to optimize bulk inserts](how-to-bulk-load-data.md).
-
-## Next Steps
-
-[Postgres Guide](http://postgresguide.com/)
postgresql Concepts High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-high-availability.md
For flexible servers configured with high availability, these maintenance activi
- Unplanned outages include software bugs or infrastructure component failures that impact the availability of the database. If the primary server becomes unavailable, the monitoring system detects it and initiates a failover process. The process includes a few seconds of wait time to make sure it is not a false positive. The replication to the standby replica is severed and the standby replica is activated to be the primary database server. That includes the standby recovering any residual WAL files. Once it is fully recovered, DNS for the same endpoint is updated with the standby server's IP address. Clients can then retry connecting to the database server using the same connection string and resume their operations.

> [!NOTE]
-Flexible servers configured with zone-redundant high availability provide a recovery point objective (RPO) of **Zero** (no data loss). The recovery time objective (RTO) is expected to be **less than 120s** in typical cases. However, depending on the activity in the primary database server at the time of the failover, the failover may take longer.
+> Flexible servers configured with zone-redundant high availability provide a recovery point objective (RPO) of **Zero** (no data loss). The recovery time objective (RTO) is expected to be **less than 120s** in typical cases. However, depending on the activity in the primary database server at the time of the failover, the failover may take longer.
After the failover, while a new standby server is being provisioned (which usually takes 5-10 minutes), applications can still connect to the primary server and proceed with their read/write operations. Once the standby server is established, it will start recovering the logs that were generated after the failover.
postgresql Concepts Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-limits.md
The maximum number of connections per pricing tier and vCores are shown below. T
When connections exceed the limit, you may receive the following error:

> FATAL: sorry, too many clients already.
-> [!IMPORTANT]
-> For best experience, it is recommended to you use a connection pool manager like PgBouncer to efficiently manage connections. Azure Database for PostgreSQL - Flexible Server offers pgBouncer as [built-in connection pool management solution](concepts-pgbouncer.md).
+> [!CAUTION]
+> While the maximum number of connections for certain SKUs is high, it's not recommended to set the max_connections parameter value to it's maximum. This is because although it may be a safe value when most connections are in the idle state, it can cause serious performance issues once they become active. Instead, if you require more connections, we recommend using pgBouncer, Azure's built-in connection pool management solution, in transaction mode. To start, use safe values by multiplying vCores in the range of 2 to 5, and then check the resource utilization and application performance to ensure everything is running smoothly. For more information on pgBouncer, refer to the [PgBouncer in Azure Database for PostgreSQL - Flexible Server](concepts-pgbouncer.md) documentation.
-A PostgreSQL connection, even idle, can occupy about 10 MB of memory. Also, creating new connections takes time. Most applications request many short-lived connections, which compounds this situation. The result is fewer resources available for your actual workload leading to decreased performance. Connection pooling can be used to decrease idle connections and reuse existing connections. To learn more, visit our [blog post](https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/not-all-postgres-connection-pooling-is-equal/ba-p/825717).
+When using PostgreSQL for a busy database with a large number of concurrent connections, there may be a significant strain on resources. This strain can result in high CPU utilization, particularly when many connections are established simultaneously and when connections have short durations (less than 60 seconds). These factors can negatively impact overall database performance by increasing the time spent on processing connections and disconnections. It's important to note that each connection in Postgres, regardless of whether it is idle or active, consumes a significant amount of resources from your database. This can lead to performance issues beyond high CPU utilization, such as disk and lock contention, which are discussed in more detail in the PostgreSQL Wiki article on the [Number of Database Connections](https://wiki.postgresql.org/wiki/Number_Of_Database_Connections). To learn more about identifying and solving connection performance issues in Azure Postgres, visit our [blog post](https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/identify-and-solve-connection-performance-in-azure-postgres/ba-p/3698375).
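
To gauge whether connection churn or idle buildup is an issue on your own server, a standard catalog query against `pg_stat_activity` helps. This sketch assumes placeholder host, database, and user values:

```azurecli
# A sketch: count current connections by state to spot churn and idle buildup.
# Host, database, and user values are placeholders.
psql "host=<your-server>.postgres.database.azure.com port=5432 dbname=postgres user=<admin-user> sslmode=require" \
  -c "SELECT state, count(*) FROM pg_stat_activity GROUP BY state ORDER BY count(*) DESC;"
```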
## Functional limitations
postgresql Concepts Pgbouncer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-pgbouncer.md
Title: PgBouncer - Azure Database for PostgreSQL - Flexible Server description: This article provides an overview with the built-in PgBouncer extension.--++
For more details on the PgBouncer configurations, please see [pgbouncer.ini](htt
> [!Note] > Upgrading of PgBouncer is managed by Azure.
-## Monitoring PgBouncer statistics
+## Monitoring PgBouncer
+
+### PgBouncer Metrics
+
+Azure Database for PostgreSQL - Flexible Server now provides six new metrics for monitoring PgBouncer connection pooling.
+
+|Display Name |Metrics ID |Unit |Description |Dimension |Default enabled|
+|-|--|--|-|-|-|
+|**Active client connections** (Preview) |client_connections_active |Count|Connections from clients which are associated with a PostgreSQL connection |DatabaseName|No |
+|**Waiting client connections** (Preview)|client_connections_waiting|Count|Connections from clients that are waiting for a PostgreSQL connection to service them|DatabaseName|No |
+|**Active server connections** (Preview) |server_connections_active |Count|Connections to PostgreSQL that are in use by a client connection |DatabaseName|No |
+|**Idle server connections** (Preview) |server_connections_idle |Count|Connections to PostgreSQL that are idle, ready to service a new client connection |DatabaseName|No |
+|**Total pooled connections** (Preview) |total_pooled_connections |Count|Current number of pooled connections |DatabaseName|No |
+|**Number of connection pools** (Preview)|num_pools |Count|Total number of connection pools |DatabaseName|No |
+
+To learn more, see [PgBouncer metrics](./concepts-monitoring.md#pgbouncer-metrics).
+
+### Admin Console
PgBouncer also provides an **internal** database that you can connect to called `pgbouncer`. Once connected to the database you can execute `SHOW` commands that provide information on the current state of pgbouncer.
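
For example, you can reach this admin console with an ordinary `psql` connection. This is a sketch that assumes the built-in PgBouncer's default port of 6432 and placeholder server and user names:

```azurecli
# A sketch: connect to the special "pgbouncer" admin database. Port 6432 is
# the built-in PgBouncer port; server and user names are placeholders.
psql "host=<your-server>.postgres.database.azure.com port=6432 dbname=pgbouncer user=<admin-user> sslmode=require"
# At the psql prompt, try, for example:
#   SHOW STATS;
#   SHOW POOLS;
```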
postgresql Concepts Read Replicas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-read-replicas.md
You can have a primary server in any [Azure Database for PostgreSQL region](http
When you start the create replica workflow, a blank Azure Database for PostgreSQL server is created. The new server is filled with the data that was on the primary server. For creation of replicas in the same region snapshot approach is used, therefore the time of creation doesn't depend on the size of data. Geo-replicas are created using base backup of the primary instance, which is then transmitted over the network therefore time of creation might range from minutes to several hours depending on the primary size.
+In Azure Database for PostgreSQL - Flexible Server, the create operation of replicas is considered successful only when the entire backup of the primary instance has been copied to the replica destination and the transaction logs have been synchronized to within a maximum lag of 1 GB.
+
+To ensure the success of the create operation, avoid creating replicas during periods of high transactional load. For example, avoid creating replicas during migrations from other sources to Azure Database for PostgreSQL - Flexible Server, or during excessive bulk load operations. If you're currently performing a migration or bulk load operation, wait until the operation has completed before proceeding with the creation of replicas. Once the migration or bulk load operation has finished, check whether the transaction log size has returned to its normal size. Typically, the transaction log size should be close to the value defined in the max_wal_size server parameter for your instance. You can track the transaction log storage footprint using the [Transaction Log Storage Used](concepts-monitoring.md#list-of-metrics) metric, which provides insights into the amount of storage used by the transaction log. By monitoring this metric, you can ensure that the transaction log size is within the expected range and that the replica creation process can be started.
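
If you prefer the CLI to the portal for watching that metric, something along these lines works; note that the metric ID `txlogs_storage_used` and the resource ID format are assumptions on my part, so verify them against the metric list for your server:

```azurecli
# A sketch: query transaction-log storage usage for a flexible server.
# The metric name and resource ID below are assumptions, not from this article.
az monitor metrics list \
  --resource "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.DBforPostgreSQL/flexibleServers/<server>" \
  --metric "txlogs_storage_used" \
  --aggregation Maximum \
  --interval PT1H
```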
+
+> [!IMPORTANT]
+> Read Replicas are currently supported for the General Purpose and Memory Optimized server compute tiers, Burstable server compute tier is not supported.
+
+> [!IMPORTANT]
+> When performing replica creation, deletion, and promotion operations, the primary server will enter an updating state. During this time, server management operations such as modifying server parameters, changing high availability options, or adding or removing firewalls will be unavailable. It's important to note that the updating state only affects server management operations and does not impact [data plane](../../azure-resource-manager/management/control-plane-and-data-plane.md#data-plane) operations. This means that your database server will remain fully functional and able to accept connections, as well as serve read and write traffic.
+ Learn how to [create a read replica in the Azure portal](how-to-read-replicas-portal.md). If your source PostgreSQL server is encrypted with customer-managed keys, please see the [documentation](concepts-data-encryption.md) for additional considerations.
You can stop the replication between a primary and a replica by promoting one or
- The promoted replica server cannot be made into a replica again.
- If you promote a replica to be a standalone server, you cannot establish replication back to the old primary server. If you want to go back to the old primary region, you can either establish a new replica server with a new name (or) delete the old primary and create a replica using the old primary name.
- If you have multiple read replicas, and if you promote one of them to be your primary server, other replica servers are still connected to the old primary. You may have to recreate replicas from the new, promoted server.
-- During the create, delete and promote operations of replica, primary server will be in upgrading state.
-- **Power operations**: Power operations (start/stop) are currently not supported for any node, either replica or primary, in the replication cluster.
-- If server has read replicas then read replicas should be deleted first before deleting the primary server.

When you promote a replica, the replica loses all links to its previous primary and other replicas.
When there is a major disaster event such as availability zone-level or regional
## Considerations
-This section summarizes considerations about the read replica feature.
+This section summarizes considerations about the read replica feature. The following considerations apply.
+
+- **Power operations**: Power operations (start/stop) are currently not supported for any node, either replica or primary, in the replication cluster.
+- If a server has read replicas, delete the read replicas before deleting the primary server.
+- [In-place major version upgrade](concepts-major-version-upgrade.md) in Azure Database for PostgreSQL requires removing any read replicas that are currently enabled on the server. Once the replicas have been deleted, the primary server can be upgraded to the desired major version. After the upgrade is complete, you can recreate the replicas to resume the replication process.
### New replicas
You are free to change server parameters on your read replica server and set dif
### Scaling
-Scaling vCores or between General Purpose and Memory Optimized:
+You are free to scale compute (vCores) up and down, change the service tier between General Purpose and Memory Optimized, and scale up storage, but the following caveats apply.
+
+For compute scaling:
+ * PostgreSQL requires several parameters on replicas to be [greater than or equal to the setting on the primary](https://www.postgresql.org/docs/current/hot-standby.html#HOT-STANDBY-ADMIN) to ensure that the standby does not run out of shared memory during recovery. The parameters affected are: max_connections, max_prepared_transactions, max_locks_per_transaction, max_wal_senders, max_worker_processes.
-* **Scaling up**: First scale up a replica's compute, then scale up the primary.
+
+* **Scaling up**: First scale up a replica's compute, then scale up the primary.
+ * **Scaling down**: First scale down the primary's compute, then scale down the replica.
+* Compute on the primary must always be equal to or smaller than the compute on the smallest replica.
+
+
+For storage scaling:
+
+* **Scaling up**: First scale up a replica's storage, then scale up the primary.
+
+* Storage size on the primary must always be equal to or smaller than the storage size on the smallest replica.
+
## Next steps

* Learn how to [create and manage read replicas in the Azure portal](how-to-read-replicas-portal.md).
postgresql How To Optimize Query Stats Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-optimize-query-stats-collection.md
This article describes how to optimize query statistics collection on an Azure D
If you have unique queries with long query text or you don't actively monitor **pg_stat_statements**, disable **pg_stat_statements** for best performance. To do so, change the setting to `pg_stat_statements.track = NONE`.
-Some customer workloads have seen up to a 50 percent performance improvement when **pg_stat_statements** is disabled. The tradeoff you make when you disable pg_stat_statements is the inability to troubleshoot performance issues.
To set `pg_stat_statements.track = NONE`:

- In the Azure portal, go to the [PostgreSQL resource management page and select the server parameters blade](concepts-server-parameters.md).
To set `pg_stat_statements.track = NONE`:
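
If you'd rather do this from the CLI than the portal, `az postgres flexible-server parameter set` covers it. A sketch with placeholder resource names:

```azurecli
# A sketch: set pg_stat_statements.track to NONE from the Azure CLI.
# Resource group and server names are placeholders.
az postgres flexible-server parameter set \
  --resource-group "<resource-group>" \
  --server-name "<server-name>" \
  --name pg_stat_statements.track \
  --value NONE
```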
## Use the Query Store
-The [Query Store](concepts-query-store.md) feature in Azure Database for PostgreSQL provides a more effective method to track query statistics. We recommend this feature as an alternative to using *pg_stat_statements*.
+Using the [Query Store](concepts-query-store.md) feature in Azure Database for PostgreSQL - Flexible Server offers a different way to monitor query execution statistics. To prevent performance overhead, it is recommended to utilize only one mechanism, either the pg_stat_statements extension or the Query Store.
## Next steps
-Consider setting `pg_stat_statements.track = NONE` in the [Azure portal](concepts-server-parameters.md) or by using the [Azure CLI](connect-azure-cli.md).
-
-For more information, see:
- [Query Store usage scenarios](concepts-query-store-scenarios.md)
- [Query Store best practices](concepts-query-store-best-practices.md)
postgresql How To Troubleshoot Common Connection Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-troubleshoot-common-connection-issues.md
If the application persistently fails to connect to Azure Database for PostgreSQ
* Server firewall configuration: Make sure that the Azure Database for PostgreSQL server firewall is configured to allow connections from your client, including proxy servers and gateways.
* Client firewall configuration: The firewall on your client must allow connections to your database server. IP addresses and ports of the server that you can't connect to must be allowed, as well as application names such as PostgreSQL in some firewalls.
-* User error: You might have mistyped connection parameters, such as the server name in the connection string or a missing *\@servername* suffix in the user name.
* If you see the error _Server isn't configured to allow ipv6 connections_, note that the Basic tier doesn't support VNet service endpoints. You have to remove the Microsoft.Sql endpoint from the subnet that is attempting to connect to the Basic server.
* If you see the connection error _sslmode value "***" invalid when SSL support is not compiled in_, it means your PostgreSQL client doesn't support SSL. Most probably, the client-side libpq hasn't been compiled with the "--with-openssl" flag. Try connecting with a PostgreSQL client that has SSL support. A sample connection string follows this list.
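
As a quick client-side sanity check, a minimal `psql` connection string that makes the SSL requirement explicit (the server name, database, and user are placeholders):

```bash
# A client built without SSL support fails here with the
# "sslmode value ... invalid" error described above.
psql "host=mydemoserver.postgres.database.azure.com port=5432 dbname=postgres user=myadmin sslmode=require"
```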
postgresql Quickstart Create Postgresql Server Database Using Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/quickstart-create-postgresql-server-database-using-bicep.md
Title: 'Quickstart: Create an Azure DB for PostgreSQL - Bicep'
+ Title: 'Quickstart: Create an Azure Database for PostgreSQL - Bicep'
description: In this quickstart, learn how to create an Azure Database for PostgreSQL single server using Bicep.
private-5g-core Private Mobile Network Design Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/private-mobile-network-design-requirements.md
The RAN that you use to broadcast the signal across the enterprise site must com
- You have received permission for the RAN to broadcast using spectrum in a certain location, for example, by grant from a telecom operator, regulatory authority or via a technological solution such as a Spectrum Access System (SAS).
- The RAN units in a site have access to high-precision timing sources, such as Precision Time Protocol (PTP) and GPS location services.
-You should ask your RAN partner for the countries and frequency bands for which the RAN is approved. You may find that you need to use multiple RAN partners to cover the countries in which you provide your solution. Although the RAN, UE and packet core all communicate using standard protocols, Microsoft recommends that you perform interoperability testing for the specific 4G Long-Term Evolution (LTE) or 5G standalone (SA) protocol between Azure Private 5G Core, UEs and the RAN prior to any deployment at an enterprise customer.
+You should ask your RAN partner for the countries/regions and frequency bands for which the RAN is approved. You may find that you need to use multiple RAN partners to cover the countries/regions in which you provide your solution. Although the RAN, UE and packet core all communicate using standard protocols, Microsoft recommends that you perform interoperability testing for the specific 4G Long-Term Evolution (LTE) or 5G standalone (SA) protocol between Azure Private 5G Core, UEs and the RAN prior to any deployment at an enterprise customer.
Your RAN will transmit a Public Land Mobile Network Identity (PLMN ID) to all UEs on the frequency band it is configured to use. You should define the PLMN ID and confirm your access to spectrum. In some countries, spectrum must be obtained from the national regulator or incumbent telecommunications operator. For example, if you're using the band 48 Citizens Broadband Radio Service (CBRS) spectrum, you may need to work with your RAN partner to deploy a Spectrum Access System (SAS) domain proxy on the enterprise site so that the RAN can continuously check that it is authorized to broadcast.
private-link Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/availability.md
Azure Private Link enables you to access Azure PaaS Services (for example, Azure Storage and SQL Database) and Azure hosted customer-owned/partner services over a [private endpoint](private-endpoint-overview.md) in your virtual network.

> [!IMPORTANT]
-> Azure Private Link is now generally available. Both Private Endpoint and Private Link service (service behind standard load balancer) are generally available. Different Azure PaaS will onboard to Azure Private Link at different schedules. For known limitations, see [Private Endpoint](private-endpoint-overview.md#limitations) and [Private Link Service](private-link-service-overview.md#limitations).
+> Azure Private Link is now generally available. Both Private Endpoint and Private Link service (service behind standard load balancer) are generally available. For known limitations, see [Private Endpoint](private-endpoint-overview.md#limitations) and [Private Link Service](private-link-service-overview.md#limitations).
## Service availability
private-link Private Endpoint Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-overview.md
Title: What is a private endpoint?
-description: In this article, you'll learn how to use the Private Endpoint feature of Azure Private Link.
+description: In this article, you learn how to use the Private Endpoint feature of Azure Private Link.
Previously updated : 08/10/2022 Last updated : 03/24/2023 #Customer intent: As someone who has a basic network background but is new to Azure, I want to understand the capabilities of private endpoints so that I can securely connect to my Azure PaaS services within the virtual network.
A private endpoint specifies the following properties:
|Subnet | The subnet to deploy, where the private IP address is assigned. For subnet requirements, see the [Limitations](#limitations) section later in this article. |
|Private-link resource | The private-link resource to connect by using a resource ID or alias, from the list of available types. A unique network identifier is generated for all traffic that's sent to this resource. |
|Target subresource | The subresource to connect. Each private-link resource type has various options to select based on preference. |
-|Connection approval method | Automatic or manual. Depending on the Azure role-based access control (RBAC) permissions, your private endpoint can be approved automatically. If you're connecting to a private-link resource without Azure RBAC permissions, use the manual method to allow the owner of the resource to approve the connection. |
+|Connection approval method | Automatic or manual. Depending on your Azure role-based access control permissions, your private endpoint can be approved automatically. If you're connecting to a private-link resource without Azure role-based permissions, use the manual method to allow the owner of the resource to approve the connection. |
|Request message | You can specify a message for requested connections to be approved manually. This message can be used to identify a specific request. |
-|Connection status | A read-only property that specifies whether the private endpoint is active. Only private endpoints in an approved state can be used to send traffic. Additional available states: <li>*Approved*: The connection was automatically or manually approved and is ready to be used.<li>*Pending*: The connection was created manually and is pending approval by the private-link resource owner.<li>*Rejected*: The connection was rejected by the private-link resource owner.<li>*Disconnected*: The connection was removed by the private-link resource owner. The private endpoint becomes informative and should be deleted for cleanup. </br>|
+|Connection status | A read-only property that specifies whether the private endpoint is active. Only private endpoints in an approved state can be used to send traffic. More available states: <li>*Approved*: The connection was automatically or manually approved and is ready to be used.<li>*Pending*: The connection was created manually and is pending approval by the private-link resource owner.<li>*Rejected*: The connection was rejected by the private-link resource owner.<li>*Disconnected*: The connection was removed by the private-link resource owner. The private endpoint becomes informative and should be deleted for cleanup. </br>|
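
As an illustration of how these properties come together, a hedged Azure CLI sketch that creates a private endpoint for a storage account's _blob_ subresource (all names and the subscription ID are placeholders; flag names can vary slightly across CLI versions):

```bash
az network private-endpoint create \
  --resource-group myResourceGroup \
  --name myPrivateEndpoint \
  --vnet-name myVNet \
  --subnet mySubnet \
  --private-connection-resource-id "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount" \
  --group-id blob \
  --connection-name myConnection
```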
As you're creating private endpoints, consider the following:
A private-link resource is the destination target of a specified private endpoin
## Network security of private endpoints
-When you use private endpoints, traffic is secured to a private-link resource. The platform validates network connections, allowing only those that reach the specified private-link resource. To access additional sub-resources within the same Azure service, additional private endpoints with corresponding targets are required. In the case of Azure Storage, for instance, you would need separate private endpoints to access the _file_ and _blob_ sub-resources.
+When you use private endpoints, traffic is secured to a private-link resource. The platform validates network connections, allowing only those that reach the specified private-link resource. To access more subresources within the same Azure service, more private endpoints with corresponding targets are required. In the case of Azure Storage, for instance, you would need separate private endpoints to access the _file_ and _blob_ subresources.
Private endpoints provide a privately accessible IP address for the Azure service, but don't necessarily restrict public network access to it. All other Azure services require additional [access controls](../event-hubs/event-hubs-ip-filtering.md), however. These controls provide an extra network security layer to your resources and help prevent access to the Azure service associated with the private-link resource.
You can connect to a private-link resource by using the following connection app
`Microsoft.<Provider>/<resource_type>/privateEndpointConnectionsApproval/action`

-- **Manually request**: Use this method when you don't have the required permissions and want to request access. An approval workflow will be initiated. The private endpoint and later private-endpoint connections will be created in a *Pending* state. The private-link resource owner is responsible to approve the connection. After it's approved, the private endpoint is enabled to send traffic normally, as shown in the following approval workflow diagram:
+- **Manually request**: Use this method when you don't have the required permissions and want to request access. An approval workflow is initiated. The private endpoint and later private-endpoint connections are created in a *Pending* state. The private-link resource owner is responsible for approving the connection. After it's approved, the private endpoint is enabled to send traffic normally, as shown in the following approval workflow diagram:
![Diagram of the workflow approval process.](media/private-endpoint-overview/private-link-paas-workflow.png)

Over a private-endpoint connection, a private-link resource owner can:

- Review all private-endpoint connection details.
-- Approve a private-endpoint connection. The corresponding private endpoint will be enabled to send traffic to the private-link resource.
-- Reject a private-endpoint connection. The corresponding private endpoint will be updated to reflect the status.
-- Delete a private-endpoint connection in any state. The corresponding private endpoint will be updated with a disconnected state to reflect the action. The private-endpoint owner can delete only the resource at this point.
+- Approve a private-endpoint connection. The corresponding private endpoint is enabled to send traffic to the private-link resource.
+- Reject a private-endpoint connection. The corresponding private endpoint is updated to reflect the status.
+- Delete a private-endpoint connection in any state. The corresponding private endpoint is updated with a disconnected state to reflect the action. The private-endpoint owner can delete only the resource at this point.
> [!NOTE]
> Only private endpoints in an *Approved* state can send traffic to a specified private-link resource.
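
For example, a resource owner might approve a pending connection on a storage account with the Azure CLI (a sketch only; the names are placeholders):

```bash
# Approve a pending private-endpoint connection so the endpoint can send traffic.
az network private-endpoint-connection approve \
  --resource-group myResourceGroup \
  --resource-name mystorageaccount \
  --type Microsoft.Storage/storageAccounts \
  --name myConnection \
  --description "Approved by the resource owner"
```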
The following information lists the known limitations to the use of private endp
| Effective routes and security rules unavailable for private endpoint network interface. | Effective routes and security rules won't be displayed for the private endpoint NIC in the Azure portal. |
| NSG flow logs unsupported. | NSG flow logs unavailable for inbound traffic destined for a private endpoint. |
| No more than 50 members in an Application Security Group. | Fifty is the number of IP Configurations that can be tied to each respective ASG that's coupled to the NSG on the private endpoint subnet. Connection failures may occur with more than 50 members. |
-| Destination port ranges supported up to a factor of 250K. | Destination port ranges are supported as a multiplication SourceAddressPrefixes, DestinationAddressPrefixes, and DestinationPortRanges. </br></br> Example inbound rule: </br> 1 source * 1 destination * 4K portRanges = 4K Valid </br> 10 sources * 10 destinations * 10 portRanges = 1K Valid </br> 50 sources * 50 destinations * 50 portRanges = 125K Valid </br> 50 sources * 50 destinations * 100 portRanges = 250K Valid </br> 100 sources * 100 destinations * 100 portRanges = 1M Invalid, NSG has too many sources/destinations/ports. |
+| Destination port ranges supported up to a factor of 250K. | Destination port ranges are supported as a multiplication of SourceAddressPrefixes, DestinationAddressPrefixes, and DestinationPortRanges. </br></br> Example inbound rule: </br> 1 source * 1 destination * 4K portRanges = 4K Valid </br> 10 sources * 10 destinations * 10 portRanges = 1K Valid </br> 50 sources * 50 destinations * 50 portRanges = 125K Valid </br> 50 sources * 50 destinations * 100 portRanges = 250K Valid </br> 100 sources * 100 destinations * 100 portRanges = 1M Invalid, NSG has too many sources/destinations/ports. |
| Source port filtering is interpreted as * | Source port filtering isn't actively used as a valid scenario of traffic filtering for traffic destined to a private endpoint. |
| Feature unavailable in select regions. | Currently unavailable in the following regions: </br> West India </br> Australia Central 2 </br> South Africa West </br> Brazil Southeast |
-### NSG additional considerations
+### More NSG considerations
- Outbound traffic denied from a private endpoint isn't a valid scenario, as the service provider can't originate traffic.
-- The following services may require all destination ports to be open when leveraging a private endpoint and adding NSG security filters:
+- The following services may require all destination ports to be open when using a private endpoint and adding NSG security filters:
- Azure Cosmos DB - For more information, see [Service port ranges](../cosmos-db/sql/sql-sdk-connection-modes.md#service-port-ranges).
The following information lists the known limitations to the use of private endp
| Limitation | Description |
| --- | --- |
-| SNAT is recommended at all times. | Due to the variable nature of the private endpoint data-plane, it's recommended to SNAT traffic destined to a private endpoint to ensure return traffic is honored. |
-| Feature unavailable in select regions. | Currently unavailable in the following regions: </br> West India </br> UK North </br> UK South 2 </br> Australia Central 2 </br> South Africa West </br> Brazil Southeast |
+| SNAT is always recommended. | Due to the variable nature of the private-endpoint data plane, it's recommended to SNAT traffic destined to a private endpoint to ensure that return traffic is honored. |
+| Feature unavailable in select regions. | Currently unavailable in the following regions: </br> West India </br> Australia Central 2 </br> South Africa West </br> Brazil Southeast |
### Application security group

| Limitation | Description |
| --- | --- |
-| Feature unavailable in select regions. | Currently unavailable in the following regions: </br> West India </br> UK North </br> UK South 2 </br> Australia Central 2 </br> South Africa West </br> Brazil Southeast |
+| Feature unavailable in select regions. | Currently unavailable in the following regions: </br> West India </br> Australia Central 2 </br> South Africa West </br> Brazil Southeast |
## Next steps

- For more information about private endpoints and Private Link, see [What is Azure Private Link?](private-link-overview.md).
+
- To get started with creating a private endpoint for a web app, see [Quickstart: Create a private endpoint by using the Azure portal](create-private-endpoint-portal.md).
purview Catalog Conditional Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/catalog-conditional-access.md
Previously updated : 01/14/2022 Last updated : 03/23/2023 # Customer intent: As an identity and security admin, I want to set up Azure Active Directory Conditional Access for Microsoft Purview, for secure access.
The following steps show how to configure Microsoft Purview to enforce a Conditi
1. Sign in to the Azure portal, select **Azure Active Directory**, and then select **Conditional Access**. For more information, see [Azure Active Directory Conditional Access technical reference](../active-directory/conditional-access/concept-conditional-access-conditions.md).
- :::image type="content" source="media/catalog-conditional-access/conditional-access-blade.png" alt-text="Screenshot that shows Conditional Access blade"lightbox="media/catalog-conditional-access/conditional-access-blade.png":::
+ :::image type="content" source="media/catalog-conditional-access/conditional-access-blade.png" alt-text="Screenshot that shows Conditional Access blade." lightbox="media/catalog-conditional-access/conditional-access-blade.png":::
1. In the **Conditional Access-Policies** menu, select **New policy**, provide a name, and then select **Configure rules**.
1. Under **Assignments**, select **Users and groups**, check **Select users and groups**, and then select the user or group for Conditional Access. Select **Select**, and then select **Done** to accept your selection.
- :::image type="content" source="media/catalog-conditional-access/select-users-and-groups.png" alt-text="Screenshot that shows User and Group selection"lightbox="media/catalog-conditional-access/select-users-and-groups.png":::
+ :::image type="content" source="media/catalog-conditional-access/select-users-and-groups.png" alt-text="Screenshot that shows User and Group selection." lightbox="media/catalog-conditional-access/select-users-and-groups.png":::
1. Select **Cloud apps**, and then select **Select apps**. You see all apps available for Conditional Access. Select **Microsoft Purview**, select **Select** at the bottom, and then select **Done**.
- :::image type="content" source="media/catalog-conditional-access/select-azure-purview.png" alt-text="Screenshot that shows Applications selection"lightbox="media/catalog-conditional-access/select-azure-purview.png":::
+ :::image type="content" source="media/catalog-conditional-access/select-azure-purview.png" alt-text="Screenshot that shows Applications selection." lightbox="media/catalog-conditional-access/select-azure-purview.png":::
1. Select **Access controls**, select **Grant**, and then check the policy you want to apply. For this example, we select **Require multi-factor authentication**.
- :::image type="content" source="media/catalog-conditional-access/grant-access.png" alt-text="Screenshot that shows Grant access tab"lightbox="media/catalog-conditional-access/grant-access.png":::
+ :::image type="content" source="media/catalog-conditional-access/grant-access.png" alt-text="Screenshot that shows Grant access tab." lightbox="media/catalog-conditional-access/grant-access.png":::
1. Set **Enable policy** to **On** and select **Create**.
purview Concept Best Practices Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-best-practices-network.md
Previously updated : 01/28/2023 Last updated : 03/24/2023
Microsoft Purview data governance solutions are a platform as a service (PaaS) s
For an added layer of security, you can create private endpoints for your Microsoft Purview account. You'll get a private IP address from your virtual network in Azure to the Microsoft Purview account and its managed resources. This address will restrict all traffic between your virtual network and the Microsoft Purview account to a private link for user interaction with the APIs and Microsoft Purview governance portal, or for scanning and ingestion.
-Currently, the Microsoft Purview firewall provides access control for the public endpoint of your purview account. You can use the firewall to allow all access or to block all access through the public endpoint when using private endpoints.
+Currently, the Microsoft Purview firewall provides access control for the public endpoint of your Microsoft Purview account. You can use the firewall to allow all access or to block all access through the public endpoint when using private endpoints. For more information, see [Microsoft Purview firewall options](catalog-firewall.md).
Based on your network, connectivity, and security requirements, you can set up and maintain Microsoft Purview accounts to access underlying services or ingestion. Use this best practices guide to define and prepare your network environment so you can access Microsoft Purview and scan data sources from your network or cloud.
You must use private endpoints for your Microsoft Purview account if you have an
- If you need to connect to the Microsoft Purview governance portal by using private endpoints, you have to deploy both account and portal private endpoints.

-- To scan data sources through private connectivity, you need to configure at least one account and one ingestion private endpoint for Microsoft Purview. You must configure scans by using a self-hosted integration runtime through an authentication method other than a Microsoft Purview managed identity.
+- To scan data sources through private connectivity, you need to configure at least one account and one ingestion private endpoint for Microsoft Purview. You must configure scans by using a self-hosted integration runtime through an authentication method other than a Microsoft Purview managed identity.
- Review [Support matrix for scanning data sources through an ingestion private endpoint](catalog-private-link.md#support-matrix-for-scanning-data-sources-through-ingestion-private-endpoint) before you set up any scans.
- Review [DNS requirements](catalog-private-link-name-resolution.md#deployment-options). If you're using a custom DNS server on your network, clients must be able to resolve the fully qualified domain name (FQDN) for the Microsoft Purview account endpoints to the private endpoint's IP address.
+- To scan Azure data sources through private connectivity, use [Managed VNet Runtime](catalog-managed-vnet.md). View [supported regions](catalog-managed-vnet.md#supported-regions). This option can reduce the administrative overhead of deploying and managing self-hosted integration runtime machines.
+
### Integration runtime options

-- If your data sources are in Azure, you need to set up and use a self-hosted integration runtime on a Windows virtual machine that's deployed inside the same or a peered virtual network where Microsoft Purview ingestion private endpoints are deployed. The Azure integration runtime won't work with ingestion private endpoints.
+- If your data sources are in Azure, you can choose any of the following runtime options:
+
+ - Managed VNet runtime. Use this option if your Microsoft Purview account is deployed in any of the [supported regions](catalog-managed-vnet.md#supported-regions) and you are planning to scan any of the [supported data sources](catalog-managed-vnet.md#supported-data-sources).
+
+ - Self-hosted integration runtime.
+
+ - If using self-hosted integration runtime, you need to set up and use a self-hosted integration runtime on a Windows virtual machine that's deployed inside the same or a peered virtual network where Microsoft Purview ingestion private endpoints are deployed. The Azure integration runtime won't work with ingestion private endpoints.
-- To scan on-premises data sources, you can also install a self-hosted integration runtime either on an on-premises Windows machine or on a VM inside an Azure virtual network.
+ - To scan on-premises data sources, you can also install a self-hosted integration runtime either on an on-premises Windows machine or on a VM inside an Azure virtual network.
-- When you're using private endpoints with Microsoft Purview, you need to allow network connectivity from data sources to the self-hosted integration VM on the Azure virtual network where Microsoft Purview private endpoints are deployed.
+ - When you're using private endpoints with Microsoft Purview, you need to allow network connectivity from data sources to the self-hosted integration VM on the Azure virtual network where Microsoft Purview private endpoints are deployed.
-- We recommend allowing automatic upgrade of the self-hosted integration runtime. Make sure you open required outbound rules in your Azure virtual network or on your corporate firewall to allow automatic upgrade. For more information, see [Self-hosted integration runtime networking requirements](manage-integration-runtimes.md#networking-requirements).
+ - We recommend allowing automatic upgrade of the self-hosted integration runtime. Make sure you open required outbound rules in your Azure virtual network or on your corporate firewall to allow automatic upgrade. For more information, see [Self-hosted integration runtime networking requirements](manage-integration-runtimes.md#networking-requirements).
### Authentication options
For performance and cost optimization, we highly recommended deploying one or mo
:::image type="content" source="media/concept-best-practices/network-pe-multi-region.png" alt-text="Screenshot that shows Microsoft Purview with private endpoints in a scenario of multiple virtual networks and multiple regions."lightbox="media/concept-best-practices/network-pe-multi-region.png":::
+#### Scan using Managed VNet Runtime
+
+You can use the Managed VNet Runtime to scan data sources in a private network, if your Microsoft Purview account is deployed in any of the [supported regions](catalog-managed-vnet.md#supported-regions) and you're planning to scan any of the supported [Azure data sources](catalog-managed-vnet.md#supported-data-sources).
+
+Using Managed VNet Runtime helps to minimize the administrative overhead of managing the runtime and reduce overall scan duration.
+
+To scan any Azure data sources using Managed VNet Runtime, a managed private endpoint must be deployed within Microsoft Purview Managed Virtual Network, even if the data source already has a private network in your Azure subscription.
++
+If you need to scan on-premises data sources, or additional data sources in Azure that aren't supported by the Managed VNet Runtime, you can deploy both the Managed VNet Runtime and a self-hosted integration runtime.
++
### If Microsoft Purview isn't available in your primary region

> [!NOTE]
For this scenario:
- This option is recommended if you have data sources in both primary and secondary regions and users are connected through the primary region.
- Deploy a Microsoft Purview account in your secondary region (for example, Australia East).
-- Deploy Microsoft Purview portal private endpoint in the primary region (for example, Australia Southeast) for user access to Microsoft Purview governance portal.
+- Deploy Microsoft Purview governance portal private endpoint in the primary region (for example, Australia Southeast) for user access to Microsoft Purview governance portal.
- Deploy Microsoft Purview account and ingestion private endpoints in your primary region (for example, Australia Southeast) to scan data sources locally in the primary region.
- Deploy Microsoft Purview account and ingestion private endpoints in your secondary region (for example, Australia East) to scan data sources locally in the secondary region.
- Deploy [Microsoft Purview self-hosted integration runtime](manage-integration-runtimes.md) VMs in both primary and secondary regions. This helps keep Data Map scan traffic in the local region and sends only metadata to the Microsoft Purview Data Map, which is configured in your secondary region (for example, Australia East).
purview Concept Classification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-classification.md
Previously updated : 01/04/2022 Last updated : 03/23/2023 # Data classification in the Microsoft Purview governance portal
purview Concept Elastic Data Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-elastic-data-map.md
Previously updated : 03/21/2022 Last updated : 03/23/2023 # Elastic data map in Microsoft Purview
-The Microsoft Purview data map provides the foundation for data discovery and data governance. It captures metadata about data present in analytics, software-as-a-service (SaaS), and operation systems in hybrid, on-premises, and multi-cloud environments. The Microsoft Purview data map stays up to date with its built-in scanning and classification system.
+The Microsoft Purview Data Map provides the foundation for data discovery and data governance. It captures metadata about data present in analytics, software-as-a-service (SaaS), and operation systems in hybrid, on-premises, and multicloud environments. The data map stays up to date with its built-in scanning and classification system.
+
+All Microsoft Purview accounts have a data map that starts at one capacity unit and can grow elastically. It scales up and down based on request load and the metadata stored within the data map.
-All Microsoft Purview accounts have a data map that elastically grow starting at one capacity unit. They scale up and down based on request load and metadata stored within the data map.
## Data map capacity unit

The elastic data map has two components, metadata storage and operation throughput, represented as a capacity unit (CU). All Microsoft Purview accounts, by default, start with one capacity unit and elastically grow based on usage. Each Data Map capacity unit includes a throughput of 25 operations/sec and a 10-GB metadata storage limit.
Operations are the throughput measure of the Microsoft Purview Data Map. They in
### Storage
-Storage is the second component of Data Map and includes technical, business, operational, and semantic metadata.
+Storage is the second component of Data Map and includes the storage of technical, business, operational, and semantic metadata.
The technical metadata includes schema, data type, columns, and so on, that are discovered from Microsoft Purview [scanning](concept-scans-and-ingestion.md). The business metadata includes automated (for example, promoted from Power BI datasets, or descriptions from SQL tables) and manual tagging of descriptions, glossary terms, and so on. Examples of semantic metadata include the collection mapping to data sources, or classifications. The operational metadata includes Data Factory copy and data flow activity run status, and run times.
The technical metadata includes schema, data type, columns, and so on, that are
Claudia is an Azure admin at Contoso who wants to provision a new Microsoft Purview account from the Azure portal. While provisioning, she doesn't know the required size of the Microsoft Purview Data Map to support the future state of the platform. However, she knows that the Microsoft Purview Data Map is billed by capacity units, which are affected by storage and operations throughput. She wants to provision the smallest Data Map to keep the cost low and grow the Data Map size elastically based on consumption.
-Claudia can create a Microsoft Purview account with the default Data Map size of 1 capacity unit that can automatically scale up and down. The autoscaling feature also allows for capacity to be tuned based on intermittent or planned data bursts during specific periods. Claudia follows the next steps in provisioning experience to set up network configuration and completes the provisioning.
+Claudia can create a Microsoft Purview account with the default Data Map size of one capacity unit that can automatically scale up and down. The autoscaling feature also allows for capacity to be tuned based on intermittent or planned data bursts during specific periods. Claudia follows the next steps in provisioning experience to set up network configuration and completes the provisioning.
On the Azure Monitor metrics page, Claudia can see the consumption of the Data Map storage and operations throughput. She can further set up an alert for when the storage or operations throughput reaches a certain limit, to monitor the consumption and billing of the new Microsoft Purview account.

## Data map billing
-Customers are billed for one capacity unit (25 ops/sec and 10 GB) and extra billing is based on the consumption of each extra capacity unit rolled up to the hour. The Data Map operations scale in the increments of 25 operations/sec and metadata storage scales in the increments of 10 GB size. Microsoft Purview Data Map can automatically scale up and down within the elasticity window ([check current limits](how-to-manage-quotas.md)). However, to get the next level of elasticity window, a support ticket needs to be created.
+Customers are billed for one capacity unit (25 ops/sec and 10 GB), and extra billing is based on the consumption of each extra capacity unit, rolled up to the hour. The Data Map operations scale in increments of 25 operations/sec, and metadata storage scales in increments of 10 GB. The Microsoft Purview Data Map can automatically scale up and down within the elasticity window ([check current limits](how-to-manage-quotas.md)). However, to get the next level of elasticity window, a support ticket needs to be created.
Data Map capacity units come with a cap on operations throughput and storage. If storage exceeds the current capacity unit, customers are charged for the next capacity unit even if the operations throughput isn't used. The following table shows the Data Map capacity unit ranges. Contact support if the Data Map capacity unit goes beyond 100 capacity units.
Data Map capacity units come with a cap on operations throughput and storage. If
- Microsoft Purview Data Map's operation throughput for the given hour is 50 Ops/Sec and storage size is 25 GB. Customers are billed for three capacity units (50 ops/sec needs only two capacity units, but 25 GB of storage needs three, and the larger of the two determines the bill).
-- Microsoft Purview Data Map's operation throughput for the given hour is 250 Ops/Sec and storage size is 15 GB. Customers are billed for ten capacity units.
+- Microsoft Purview Data Map's operation throughput for the given hour is 250 Ops/Sec and storage size is 15 GB. Customers are billed for 10 capacity units.
### Detailed billing example
The Data Map billing example below shows a Data Map with growing metadata storag
:::image type="content" source="./media/concept-elastic-data-map/operations-and-metadata.png" alt-text="Chart depicting number of operations and growth of metadata over time.":::
-Each Data Map capacity unit supports 25 operations/second and 10 GB of metadata storage. The Data Map is billed hourly. It is billed for the maximum Data Map capacity units needed within the hour, with a minimum of one capacity unit. At times, you may need more operations/second within the hour, and this will increase the number of capacity units needed within that hour. At other times, your operations/second usage may be low, but you may still need a large volume of metadata storage. The metadata storage is what determines how many capacity units you need within the hour.
+Each Data Map capacity unit supports 25 operations/second and 10 GB of metadata storage. The Data Map is billed hourly. It's billed for the maximum Data Map capacity units needed within the hour, with a minimum of one capacity unit. At times, you may need more operations/second within the hour, and this will increase the number of capacity units needed within that hour. At other times, your operations/second usage may be low, but you may still need a large volume of metadata storage. The metadata storage is what determines how many capacity units you need within the hour.
The table below shows the maximum number of operations/second and metadata storage used per hour for this billing example:
Based on the Data Map operations/second and metadata storage consumption in this
## Increase operations throughput limit
-The default limit for maximum operations per second is 10 capacity units. If you are working with a very large Microsoft Purview environment and require a higher throughput, you can request a larger capacity of elasticity window by [creating a quota request](how-to-manage-quotas.md#request-quota-increase). Select "Data map capacity unit" as the quota type and provide as much relevant information as you can about your environment and the additional capacity you would like to request.
+The default limit for maximum operations per second is 10 capacity units. If you're working with a very large Microsoft Purview environment and require a higher throughput, you can request a larger capacity of elasticity window by [creating a quota request](how-to-manage-quotas.md#request-quota-increase). Select "Data map capacity unit" as the quota type and provide as much relevant information as you can about your environment and the extra capacity you would like to request.
> [!IMPORTANT]
> There's no default limit for metadata storage. As you add more metadata to your data map, it will elastically increase.
-Increasing the operations throughput limit will also increase the minimum number of capacity units. If you increase the throughput limit to 20, the minimum capacity units you will be charged is 2 CUs. The below table illustrates the possible throughput options. The number you enter in the quota request is the minimum number of capacity units on the account.
+Increasing the operations throughput limit will also increase the minimum number of capacity units. If you increase the throughput limit to 20, the minimum capacity units you'll be charged is 2 CUs. The below table illustrates the possible throughput options. The number you enter in the quota request is the minimum number of capacity units on the account.
| Minimum capacity units | Operations throughput limit |
The metrics _data map capacity units_ and the _data map storage size_ can be mon
1. Go to the [Azure portal](https://portal.azure.com), navigate to the **Microsoft Purview accounts** page, and select your _Purview account_.
-2. Click on **Overview** and scroll down to observe the **Monitoring** section for _Data Map Capacity Units_ and _Data Map Storage Size_ metrics over different time periods
+1. Select **Overview** and scroll down to observe the **Monitoring** section for _Data Map Capacity Units_ and _Data Map Storage Size_ metrics over different time periods
:::image type="content" source="./media/concept-elastic-data-map/data-map-metrics.png" alt-text="Screenshot of the menu showing the elastic data map metrics overview page.":::
-3. For additional settings, navigate to the **Monitoring --> Metrics** to observe the **Data Map Capacity Units** and **Data Map Storage Size**.
+1. For other settings, navigate to **Monitoring > Metrics** to observe the **Data Map Capacity Units** and **Data Map Storage Size**.
:::image type="content" source="./media/concept-elastic-data-map/elastic-data-map-metrics.png" alt-text="Screenshot of the menu showing the metrics.":::
-4. Click on the **Data Map Capacity Units** to view the data map capacity unit usage over the last 24 hours. Observe that hovering the mouse over the line graph will indicate the data map capacity units consumed at that particular time on the particular day.
+1. Select **Data Map Capacity Units** to view the data map capacity unit usage over the last 24 hours. Hovering the mouse over the line graph shows the data map capacity units consumed at that particular time on the particular day.
:::image type="content" source="./media/concept-elastic-data-map/data-map-capacity-default.png" alt-text="Screenshot of the menu showing the data map capacity units consumed over 24 hours.":::
-5. Click on the **Local Time: Last 24 hours (Automatic - 1 hour)** at the top right of the screen to modify time range displayed for the graph.
+1. Select **Local Time: Last 24 hours (Automatic - 1 hour)** at the top right of the screen to modify the time range displayed for the graph.
:::image type="content" source="./media/concept-elastic-data-map/data-map-capacity-custom.png" alt-text="Screenshot of the menu showing the data map capacity units consumed over a custom time range."::: :::image type="content" source="./media/concept-elastic-data-map/data-map-capacity-time-range.png" alt-text="Screenshot of the menu showing the data map capacity units consumed over a three day time range.":::
-6. Customize the graph type by clicking on the option as indicated below.
+1. Customize the graph type by selecting the option as indicated below.
:::image type="content" source="./media/concept-elastic-data-map/data-map-capacity-graph-type.png" alt-text="Screenshot of the menu showing the options to modify the graph type.":::
-7. Click on the **New chart** to add the graph for the Data Map Storage Size chart.
+1. Select **New chart** to add the graph for the Data Map Storage Size chart.
:::image type="content" source="./media/concept-elastic-data-map/data-map-storage-size.png" alt-text="Screenshot of the menu showing the data map storage size used."::: ## Summary With elastic Data Map, Microsoft Purview provides low-cost barrier for customers to start their data governance journey.
-Microsoft Purview DataMap can grow elastically with pay as you go model starting from as small as 1 Capacity unit.
+The Microsoft Purview Data Map can grow elastically with a pay-as-you-go model, starting from as small as one capacity unit.
Customers don't need to worry about choosing the correct Data Map size for their data estate at provision time, or deal with platform migrations in the future due to size limits.

## Next Steps
purview Create Service Principal Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/create-service-principal-azure.md
Title: Create a service principal in Azure
-description: This article describes how you can create a service principal in Azure
+ Title: Create a service principal in Azure
+description: This article describes how you can create a service principal in Azure for use with Microsoft Purview.
Previously updated : 12/02/2022 Last updated : 03/24/2023 # Customer intent: As an Azure AD Global Administrator or other roles such as Application Administrator, I need to create a new service principal, in order to register an application in the Azure AD tenant.
-# Creating a service principal
+# Creating a service principal for use with Microsoft Purview
-You can create a new or use an existing service principal in your Azure Active Directory tenant.
+You can create a new service principal, or use an existing one, in your Azure Active Directory tenant to authenticate with other services.
## App registration
You can create a new or use an existing service principal in your Azure Active D
:::image type="content" source="media/create-service-principal-azure/create-service-principal-azure-aad.png" alt-text="Screenshot that shows the link to the Azure Active Directory.":::
-1. Select **App registrations** and **+ New registration**
+1. Select **App registrations** and **+ New registration**.
:::image type="content" source="media/create-service-principal-azure/create-service-principal-azure-new-reg.png" alt-text="Screenshot that shows the link to New registration.":::
You can create a new or use an existing service principal in your Azure Active D
1. Select **Accounts in this organizational directory only**.
-1. For **Redirect URI** select **Web** and enter any URL you want; it doesn't have to be real or work.
+1. For **Redirect URI** select **Web** and enter any URL you want. If you have an authentication endpoint for your organization you want to use, this is the place. Otherwise `https://example.com/auth` will do.
1. Then select **Register**.

   :::image type="content" source="media/create-service-principal-azure/create-service-principal-azure-register.png" alt-text="Screenshot that shows the details for the new app registration.":::
+1. Copy the **Application (client) ID** value. We'll use this later to create a credential in Microsoft Purview.
+
:::image type="content" source="media/create-service-principal-azure/create-service-principal-azure-new-app.png" alt-text="Screenshot that shows the newly created application.":::

## Adding a secret to the client credentials
-1. Select the app from the **App registrations**
+1. Select the app from the **App registrations**.
:::image type="content" source="media/create-service-principal-azure/create-service-principal-azure-app-select.png" alt-text="Screenshot that shows the app for registration.":::
-1. Select **Add a certificate or secret**
+1. Select **Add a certificate or secret**.
:::image type="content" source="media/create-service-principal-azure/create-service-principal-azure-add-secret.png" alt-text="Screenshot that shows the app.":::
-1. Select **+ New client secret** under **Client secrets**
+1. Select **+ New client secret** under **Client secrets**.
:::image type="content" source="media/create-service-principal-azure/create-service-principal-azure-new-client-secret.png" alt-text="Screenshot that shows the client secret menu.":::
-1. Provide a **Description** and set the **Expires** for the secret
+1. Provide a **Description** and set the **Expires** for the secret.
:::image type="content" source="media/create-service-principal-azure/create-service-principal-azure-secret-desc.png" alt-text="Screenshot that shows the client secret details.":::
+1. Copy the value of the **Secret value**. We'll use this later to create a secret in Azure Key Vault.
+ :::image type="content" source="media/create-service-principal-azure/create-service-principal-azure-client-secret.png" alt-text="Screenshot that shows the client secret.":::
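
If you prefer to script the app registration and secret instead of using the portal, a rough Azure CLI equivalent (the display name is a placeholder, and the generated password is shown only once in the output):

```bash
# Register the application and capture its application (client) ID.
appId=$(az ad app create --display-name "my-purview-app" --query appId --output tsv)

# Create the service principal for the new app registration.
az ad sp create --id "$appId"

# Add a client secret; note the "password" field in the output.
az ad app credential reset --id "$appId"
```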
-1. Copy the value of **Client credentials** from **Overview**
+## Adding the secret to your Azure Key Vault
- :::image type="content" source="media/create-service-principal-azure/create-service-principal-azure-client-cred.png" alt-text="Screenshot that shows the app Overview.":::
+To allow Microsoft Purview to use this service principal to authenticate with other services, you'll need to store this credential in Azure Key Vault.
-## Adding the secret to the key vault
+* If you need an Azure Key Vault, you can [follow these steps to create one](../key-vault/general/quick-create-portal.md).
+* To grant your Microsoft Purview account access to the Azure Key Vault, you can [follow these steps](manage-credentials.md#microsoft-purview-permissions-on-the-azure-key-vault).
-1. Navigate to your **Key vault**
+1. Navigate to your **Key vault**.
:::image type="content" source="media/create-service-principal-azure/create-service-principal-azure-key-vault.png" alt-text="Screenshot that shows the Key vault.":::
You can create a new or use an existing service principal in your Azure Active D
:::image type="content" source="media/create-service-principal-azure/create-service-principal-azure-generate-secret.png" alt-text="Screenshot that options in the Key vault.":::
-1. Enter the **Name** of your choice and **Value** as the **Client secret** from your Service Principal
-
+1. Enter a **Name** of your choice, and make a note of it; you'll use it later to create a credential in Microsoft Purview.
+
+1. Enter the **Value** as the **Secret value** from your Service Principal.
+ :::image type="content" source="media/create-service-principal-azure/create-service-principal-azure-sp-secret.png" alt-text="Screenshot that shows the Key vault to create a secret.":::
-1. Select **Create** to complete
+1. Select **Create** to complete.
+
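
Equivalently, the secret can be stored from the Azure CLI (the vault name, secret name, and value are placeholders):

```bash
# Store the service principal's client secret in Key Vault.
az keyvault secret set \
  --vault-name myKeyVault \
  --name purview-sp-secret \
  --value "<client-secret-value>"
```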
+## Create a credential for your secret in Microsoft Purview
+
+To enable Microsoft Purview to use this service principal to authenticate with other services, you'll need to follow these three steps:
+
+1. [Connect your Azure Key Vault to Microsoft Purview](manage-credentials.md#create-azure-key-vaults-connections-in-your-microsoft-purview-account)
+1. [Grant your service principal authentication on your source](microsoft-purview-connector-overview.md) - Follow instructions on each source page to grant appropriate authentication.
+1. [Create a new credential in Microsoft Purview](manage-credentials.md#create-a-new-credential) - You'll use the service principal's application (client) ID and the name of the secret you created in your Azure Key Vault.
purview How To Bulk Edit Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-bulk-edit-assets.md
Previously updated : 01/25/2022 Last updated : 03/23/2023 # How to bulk edit assets
This article describes how you can update assets in bulk to add glossary terms,
1. Use Microsoft Purview search or browse to discover assets you wish to edit.
-1. In the search results, if you focus on an asset a checkbox appears.
+1. In the search results, each data asset has a checkbox you can select to add the asset to the selection list.
:::image type="content" source="media/how-to-bulk-edit-assets/asset-checkbox.png" alt-text="Screenshot of the bulk edit checkbox.":::
-1. You can add an asset to the bulk edit list from the asset detail page. Select **Select for bulk edit** to add the asset to the bulk edit list.
+1. You can also add an asset to the bulk edit list from the asset detail page. Select **Select for bulk edit** to add the asset to the bulk edit list.
- :::image type="content" source="media/how-to-bulk-edit-assets/asset-list.png" alt-text="Screenshot of the asset.":::
+ :::image type="content" source="media/how-to-bulk-edit-assets/asset-list.png" alt-text="Screenshot of the asset page with the bulk edit box highlighted.":::
-1. Select the checkbox to add it to the bulk edit list. You can see the selected assets by clicking **View selected**.
+1. Select the checkbox to add it to the bulk edit list. You can see the selected assets by selecting the **View selected** button.
- :::image type="content" source="media/how-to-bulk-edit-assets/selected-list.png" alt-text="Screenshot of the list.":::\
+ :::image type="content" source="media/how-to-bulk-edit-assets/selected-list.png" alt-text="Screenshot of the asset list with the View Selected button highlighted.":::
-## How to bulk edit assets
+## Bulk edit assets
1. When all assets have been chosen, select **View selected** to pull up the selected assets.
This article describes how you can update assets in bulk to add glossary terms,
1. **Add** will append a new annotation to the selected data assets.
1. **Replace with** will replace all of the annotations for the selected data assets with the annotation selected.
1. **Remove** will remove all annotations for the selected data assets.
-
+
+ You can edit multiple assets at once by selecting **Select a new attribute**.
+ :::image type="content" source="media/how-to-bulk-edit-assets/add-list.png" alt-text="Screenshot of the add.":::
+1. When you have made all your updates, select **Apply**.
1. Once complete, close the bulk edit blade by selecting **Close** or **Remove all and close**. **Close** won't remove the selected assets, whereas **Remove all and close** will remove all the selected assets.

   :::image type="content" source="media/how-to-bulk-edit-assets/close-list.png" alt-text="Screenshot of the close.":::
-> [!Important]
+> [!IMPORTANT]
> The recommended number of assets for bulk edit is 25. Selecting more than 25 might cause performance issues.
> The **View Selected** box is visible only if there is at least one asset selected.
purview How To Integrate With Azure Security Products https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-integrate-with-azure-security-products.md
Previously updated : 01/23/2022 Last updated : 03/23/2023 # Integrate Microsoft Purview with Azure security products
Classifications and labels applied to data resources in Microsoft Purview are in
To take advantage of this [enrichment in Microsoft Defender for Cloud](../security-center/information-protection.md), no more steps are needed in Microsoft Purview. Start exploring the security enrichments with Microsoft Defender for Cloud's [Inventory page](https://portal.azure.com/#blade/Microsoft_Azure_Security/SecurityMenuBlade/25), where you can see the list of data sources with classifications and sensitivity labels.

### Supported data sources
+
The integration supports data sources in Azure and AWS; sensitive data discovered in these resources is shared with Microsoft Defender for Cloud:
+
- [Azure Blob Storage](./register-scan-azure-blob-storage-source.md)
- [Azure Cosmos DB](./register-scan-azure-cosmos-database.md)
- [Azure Data Explorer](./register-scan-azure-data-explorer.md)
The integration supports data sources in Azure and AWS; sensitive data discovere
- [Amazon S3](./register-scan-amazon-s3.md)

### Known issues
-1. Data sensitivity information is currently not shared for sources hosted inside virtual machines - like SAP, Erwin, and Teradata.
-2. Data sensitivity information is currently not shared for Amazon RDS.
-3. Data sensitivity information is currently not shared for Azure PaaS data sources registered using a connection string.
-5. Unregistering the data source in Microsoft Purview doesn't remove the data sensitivity enrichment in Microsoft Defender for Cloud.
-6. Deleting the Microsoft Purview account will persist the data sensitivity enrichment for 30 days in Microsoft Defender for Cloud.
-7. Custom classifications defined in the Microsoft Purview compliance portal or Microsoft Purview governance portal aren't shared with Microsoft Defender for Cloud.
+
+- Data sensitivity information is currently not shared for sources hosted inside virtual machines - like SAP, Erwin, and Teradata.
+- Data sensitivity information is currently not shared for Amazon RDS.
+- Data sensitivity information is currently not shared for Azure PaaS data sources registered using a connection string.
+- Unregistering the data source in Microsoft Purview doesn't remove the data sensitivity enrichment in Microsoft Defender for Cloud.
+- Deleting the Microsoft Purview account will persist the data sensitivity enrichment for 30 days in Microsoft Defender for Cloud.
+- Custom classifications defined in the Microsoft Purview compliance portal or Microsoft Purview governance portal aren't shared with Microsoft Defender for Cloud.
### FAQ
+
#### **Why don't I see the AWS data source I have scanned with Microsoft Purview in Microsoft Defender for Cloud?**

Data sources must be onboarded to Microsoft Defender for Cloud as well. Learn more about how to [connect your AWS accounts](../security-center/quickstart-onboard-aws.md) and see your AWS data sources in Microsoft Defender for Cloud.
Customize the Microsoft Purview workbook and analytics rules to best suit the ne
For more information, see [Tutorial: Integrate Microsoft Sentinel and Microsoft Purview](../sentinel/purview-solution.md).

## Next steps
+
- [Experiences in Microsoft Defender for Cloud enriched using sensitivity from Microsoft Purview](../security-center/information-protection.md)
purview How To Lineage Powerbi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-lineage-powerbi.md
Previously updated : 01/30/2022 Last updated : 03/23/2023 # How to get lineage from Power BI into Microsoft Purview
-This article elaborates on the data lineage aspects of Power BI source in Microsoft Purview. The prerequisite to see data lineage in Microsoft Purview for Power BI is to [scan your Power BI.](../purview/register-scan-power-bi-tenant.md)
+This article elaborates on the data lineage for Power BI sources in Microsoft Purview.
+
+## Prerequisites
+
+To see data lineage in Microsoft Purview for Power BI, you must first [register and scan your Power BI source.](../purview/register-scan-power-bi-tenant.md)
## Common scenarios
-1. After the Power BI source is scanned, data consumers can perform root cause analysis of a report or dashboard from Microsoft Purview. For any data discrepancy in a report, users can easily identify the upstream datasets and contact their owners if necessary.
+After a Power BI source has been scanned, lineage information for your current data assets, and data assets referenced by Power BI, will automatically be added to the Microsoft Purview Data Catalog.
+
+1. Data consumers can perform root cause analysis of a report or dashboard from Microsoft Purview. For any data discrepancy in a report, users can easily identify the upstream datasets and contact their owners if necessary.
-2. Data producers can see the downstream reports or dashboards consuming their dataset. Before making any changes to their datasets, the data owners can make informed decisions.
+1. Data producers can see the downstream reports or dashboards consuming their dataset. Before making any changes to their datasets, the data owners can make informed decisions.
-2. Users can search by name, endorsement status, sensitivity label, owner, description, and other business facets to return the relevant Power BI artifacts.
+1. Users can search by name, endorsement status, sensitivity label, owner, description, and other business facets to return the relevant Power BI artifacts.
## Power BI artifacts in Microsoft Purview
Once the [scan of your Power BI](../purview/register-scan-power-bi-tenant.md) is
## Lineage of Power BI artifacts in Microsoft Purview
-Users can search for the Power BI artifact by name, description, or other details to see relevant results. Under the asset overview & properties tab the basic details such as description, classification and other information are shown. Under the lineage tab, asset relationships are shown with the upstream and downstream dependencies.
+Users can search for a Power BI artifact by name, description, or other details to see relevant results. Under the asset overview and properties tabs, basic details such as description and classification are shown. Under the lineage tab, asset relationships are shown with the upstream and downstream dependencies.
-Microsoft Purview captures lineage among Power BI artifacts (e.g. Dataflow -> Dataset -> Report -> Dashboard) as well as external data assets.
+Microsoft Purview captures lineage among Power BI artifacts (for example: Dataflow -> Dataset -> Report -> Dashboard) and external data assets.
>[!NOTE]
-> For lineage between Power BI artifacts and external data assets, currently the supported source types are: Azure SQL Database, Azure Blob Storage, Azure Data Lake Store Gen1, and Azure Data Lake Store Gen2.
+> For lineage between Power BI artifacts and external data assets, currently the supported source types are:
+>* Azure SQL Database
+>* Azure Blob Storage
+>* Azure Data Lake Store Gen1
+>* Azure Data Lake Store Gen2
:::image type="content" source="./media/how-to-lineage-powerbi/powerbi-lineage.png" alt-text="Screenshot showing how lineage is rendered for Power BI." lightbox="./media/how-to-lineage-powerbi/powerbi-lineage.png":::
-In addition, column level lineage (Power BI sub-artifact lineage) and transformation inside of Power BI datasets are captured when using Azure SQL Database as source. For measures, you can further click into the column -> Properties -> expression to see the transformation details.
+In addition, column level lineage (Power BI subartifact lineage) and transformations inside of Power BI datasets are captured when using Azure SQL Database as the source. For measures, you can select the column -> Properties -> expression to see the transformation details.
>[!NOTE]
> Column level lineage and transformations are supported only when using Azure SQL Database as the source. Other sources are currently not supported.
In addition, column level lineage (Power BI sub-artifact lineage) and transforma
## Known limitations
-* Limited information is currently shown for the Data sources from which the Power BI Dataflow or Power BI Dataset is created. For example, for SQL server source of Power BI dataset, only server/database name is captured.
-* Few measures aren't shown in the sub-artifact lineage, for example, `COUNTROWS`.
-* In the lineage graph, when selecting measure that is derived by columns using COUNT function, underlying column isn't selected automatically. Check the measure expression in the column properties tab to identify the underlying column.
-* If you used to scan Power BI before sub-artifact lineage is supported, you may see a database asset along with the new table assets in the lineage graph, which isn't removed.
-* In case you have the dataset table connected to another dataset table, when the middle dataset disables "Enable load" option inside the Power BI desktop, the lineage cannot be extracted.
+* Limited information is currently shown for the data sources from which a Power BI Dataflow or Power BI Dataset is created. For example, for a SQL Server source of a Power BI dataset, only the server/database name is captured.
+* Some measures aren't shown in the subartifact lineage, for example, `COUNTROWS`.
+* In the lineage graph, when selecting a measure that is derived by columns using the COUNT function, the underlying column isn't selected automatically. Check the measure expression in the column properties tab to identify the underlying column.
+* If you scanned your Power BI source before subartifact lineage was supported, you may see a database asset along with the new table assets in the lineage graph, which isn't removed.
+* If a dataset table is connected to another dataset table, and the middle dataset has the "Enable load" option disabled in Power BI Desktop, the lineage can't be extracted.
## Next steps
purview How To Lineage Spark Atlas Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-lineage-spark-atlas-connector.md
- Title: Metadata and Lineage from Apache Atlas Spark connector
-description: This article describes the data lineage extraction from Spark using Atlas Spark connector.
----- Previously updated : 12/05/2022-
-# How to use Apache Atlas connector to collect Spark lineage
-
-Apache Atlas Spark Connector is a hook that tracks Spark SQL/DataFrame data movements and pushes metadata changes to the Microsoft Purview Atlas endpoint.
-
-## Supported scenarios
-
-This connector supports the following tracking:
-1. SQL DDLs like "CREATE/ALTER DATABASE", "CREATE/ALTER TABLE".
-2. SQL DMLs like "CREATE TABLE HelloWorld AS SELECT", "INSERT INTO...", "LOAD DATA [LOCAL] INPATH", "INSERT OVERWRITE [LOCAL] DIRECTORY" and so on.
-3. DataFrame movements that have inputs and outputs.
-
-This connector relies on a query listener to retrieve queries and examine their impact. It correlates with other systems like Hive and HDFS to track the life cycle of data in Atlas.
-Since Microsoft Purview supports the Atlas API and Atlas native hook, the connector can report lineage to Microsoft Purview after it's configured with Spark. The connector can be configured per job, or as the cluster default setting.
-
-## Configuration requirement
-
-The connector requires Spark version 2.4.0 or later. Spark version 3 isn't supported. Spark supports three types of listeners that need to be set:
-
-| Listener | Since Spark Version|
-| - | - |
-| spark.extraListeners | 1.3.0 |
-| spark.sql.queryExecutionListeners | 2.3.0 |
-| spark.sql.streaming.streamingQueryListeners | 2.4.0 |
-
->[!IMPORTANT]
-> * If the Spark cluster version is below 2.4.0, Stream query lineage and most of the query lineage will not be captured.
->
-> * Spark version 3 is not supported.
-
-### Step 1. Prepare Spark Atlas connector package
-The following steps are documented using Databricks as an example:
-
-1. Generate package
- 1. Pull code from GitHub: https://github.com/hortonworks-spark/spark-atlas-connector
- 2. [For Windows], Comment out the **maven-enforcer-plugin** in spark-atlas-connector\pom.xml to remove the dependency on Unix.
-
- ```web
- <requireOS>
- <family>unix</family>
- </requireOS>
- ```
-
- c. Run command **mvn package -DskipTests** in the project root to build.
-
- d. Get jar from *~\spark-atlas-connector-assembly\target\spark-atlas-connector-assembly-0.1.0-SNAPSHOT.jar*
-
    e. Put the package where the Spark cluster can access it. For a Databricks cluster, the package can be uploaded to a DBFS folder, such as /FileStore/jars.
-
-2. Prepare Connector config
- 1. Get Kafka Endpoint and credential in Azure portal of the Microsoft Purview Account
    1. Provide your account with *"Microsoft Purview Data Curator"* permission
-
- :::image type="content" source="./media/how-to-lineage-spark-atlas-connector/assign-purview-data-curator-role.png" alt-text="Screenshot showing data curator role assignment" lightbox="./media/how-to-lineage-spark-atlas-connector/assign-purview-data-curator-role.png":::
-
    1. Endpoint: the endpoint part of the *Atlas Kafka endpoint primary connection string*
- 1. Credential: Entire *Atlas Kafka endpoint primary connection string*
-
- :::image type="content" source="./media/how-to-lineage-spark-atlas-connector/atlas-kafka-endpoint.png" alt-text="Screenshot showing atlas kafka endpoint" lightbox="./media/how-to-lineage-spark-atlas-connector/atlas-kafka-endpoint.png":::
-
- 1. Prepare *atlas-application.properties* file, replace the *atlas.kafka.bootstrap.servers* and the password value in *atlas.kafka.sasl.jaas.config*
-
- ```script
- atlas.client.type=kafka
- atlas.kafka.sasl.mechanism=PLAIN
- atlas.kafka.security.protocol=SASL_SSL
- atlas.kafka.bootstrap.servers= atlas-46c097e6-899a-44aa-9a30-6ccd0b2a2a91.servicebus.windows.net:9093
- atlas.kafka.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="<connection string got from your Microsoft Purview account>";
- ```
-
    c. Make sure the atlas configuration file is in the Driver's classpath generated in [step 1 Generate package section above](../purview/how-to-lineage-spark-atlas-connector.md#step-1-prepare-spark-atlas-connector-package). In cluster mode, ship this config file to the remote Driver with *--files atlas-application.properties*
--
-### Step 2. Prepare your Microsoft Purview account
-To create the Atlas Spark model definition in your Microsoft Purview account, follow the steps below:
-1. Get spark type definition from GitHub https://github.com/apache/atlas/blob/release-2.1.0-rc3/addons/models/1000-Hadoop/1100-spark_model.json
-
-2. Assign role:
- 1. Navigate to your Microsoft Purview account and select Access control (IAM)
- 1. Add Users and grant your service principal *Microsoft Purview Data source administrator* role
-3. Get auth token:
- 1. Open "postman" or similar tools
- 1. Use the service principal used in previous step to get the bearer token:
- * Endpoint: https://login.windows.net/microsoft.com/oauth2/token
- * grant_type: client_credentials
- * client_id: {service principal ID}
- * client_secret: {service principal key}
- * resource: `https://purview.azure.net`
-
- :::image type="content" source="./media/how-to-lineage-spark-atlas-connector/postman-examples.png" alt-text="Screenshot showing postman example" lightbox="./media/how-to-lineage-spark-atlas-connector/postman-examples.png":::
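If you prefer the command line over Postman, the same token request can be sketched with curl. The `<tenant>`, `<service principal ID>`, and `<service principal key>` placeholders are illustrative and must be replaced with your own values:

```script
# Request a bearer token using the client credentials flow (placeholder values are hypothetical).
curl -X POST "https://login.windows.net/<tenant>/oauth2/token" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "grant_type=client_credentials" \
  -d "client_id=<service principal ID>" \
  -d "client_secret=<service principal key>" \
  -d "resource=https://purview.azure.net"
# The bearer token is returned in the "access_token" field of the JSON response.
```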
-
-4. Post Spark Atlas model definition to Microsoft Purview Account:
- 1. Get Atlas Endpoint of the Microsoft Purview account from properties section of Azure portal.
- 1. Post Spark type definition into the Microsoft Purview account:
- * Post: {{endpoint}}/api/atlas/v2/types/typedefs
- * Use the generated access token
- * Body: choose raw and copy all content from GitHub https://github.com/apache/atlas/blob/release-2.1.0-rc3/addons/models/1000-Hadoop/1100-spark_model.json
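As an alternative to Postman, the same call can be sketched with curl; the local file name and the placeholders below are illustrative, assuming you saved the GitHub model JSON locally:

```script
# Post the Spark type definition to the Atlas endpoint of the Microsoft Purview account.
curl -X POST "<endpoint>/api/atlas/v2/types/typedefs" \
  -H "Authorization: Bearer <access token>" \
  -H "Content-Type: application/json" \
  -d @1100-spark_model.json
```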
--
-### Step 3. Prepare Spark job
-1. Write your Spark job as normal
-2. Add connector settings in your Spark job's source code.
-Set the *'atlas.conf'* system property value in code as shown below, to make sure the *atlas-application.properties* file can be found.
-
- **System.setProperty("atlas.conf", "/dbfs/FileStore/jars/")**
-
-3. Build your spark job source code to generate jar file.
-4. Put the Spark application jar file in a location where your cluster can access it. For example, on Databricks, put the jar file in *"/dbfs/FileStore/jars/"*.
-
-### Step 4. Prepare to run job
-
-1. Below instructions are for per-job settings:
-To capture a specific job's lineage, use spark-submit to kick off the job with the relevant parameters.
-
- In the job parameter set:
-* Path of the connector Jar file.
-* Three listeners: extraListeners, queryExecutionListeners, streamingQueryListeners as the connector.
-
-| Listener | Details |
-| - | - |
-| spark.extraListeners | com.hortonworks.spark.atlas.SparkAtlasEventTracker|
-| spark.sql.queryExecutionListeners | com.hortonworks.spark.atlas.SparkAtlasEventTracker
-| spark.sql.streaming.streamingQueryListeners | com.hortonworks.spark.atlas.SparkAtlasStreamingQueryEventTracker |
-
-* The path of your Spark job application Jar file.
-
-Set up the Databricks job: the key part is to use spark-submit to run the job with the listeners set up properly. Set the listener info in the task parameter.
-
-Below is an example parameter for the spark job.
-
-```script
-["--jars","dbfs:/FileStore/jars/spark-atlas-connector-assembly-0.1.0-SNAPSHOT.jar ","--conf","spark.extraListeners=com.hortonworks.spark.atlas.SparkAtlasEventTracker","--conf","spark.sql.queryExecutionListeners=com.hortonworks.spark.atlas.SparkAtlasEventTracker","--conf","spark.sql.streaming.streamingQueryListeners=com.hortonworks.spark.atlas.SparkAtlasStreamingQueryEventTracker","--class","com.microsoft.SparkAtlasTest","dbfs:/FileStore/jars/08cde51d_34d8_4913_a930_46f376606d7f-sparkatlas_1_6_SNAPSHOT-17452.jar"]
-```
-
-Below is an example of spark submit from command line:
-
-```script
-spark-submit --class com.microsoft.SparkAtlasTest --master yarn --deploy-mode cluster \
- --files /data/atlas-application.properties \
- --jars /data/spark-atlas-connector-assembly-0.1.0-SNAPSHOT.jar \
- --conf spark.extraListeners=com.hortonworks.spark.atlas.SparkAtlasEventTracker \
- --conf spark.sql.queryExecutionListeners=com.hortonworks.spark.atlas.SparkAtlasEventTracker \
- --conf spark.sql.streaming.streamingQueryListeners=com.hortonworks.spark.atlas.SparkAtlasStreamingQueryEventTracker \
- /data/worked/sparkApplication.jar
-```
-
-2. Below instructions are for the cluster setting:
-The connector jar and the listeners' settings should be put in the Spark cluster's *conf/spark-defaults.conf*. Spark-submit reads the options in *conf/spark-defaults.conf* and passes them to your application.
-
-### Step 5. Run and Check lineage in Microsoft Purview account
-Kick off the Spark job and check the lineage info in your Microsoft Purview account.
--
-## Known limitations with the connector for Spark lineage
-1. Supports the SQL/DataFrame API only (in other words, it doesn't support RDDs). This connector relies on a query listener to retrieve queries and examine their impact.
-
-2. All "inputs" and "outputs" from multiple queries are combined into single "spark_process" entity.
-
- "spark_process" maps to an "applicationId" in Spark. It allows admin to track all changes that occurred as part of an application. But also causes lineage/relationship graph in "spark_process" to be complicated and less meaningful.
-3. Only some of the inputs are tracked in streaming queries.
-
-* Kafka source supports subscribing with "pattern" and this connector doesn't enumerate all existing matching topics, or even all possible topics
-
-* The "executed plan" provides actual topics with (micro) batch reads and processes. As a result, only inputs that participate in (micro) batch are included as "inputs" of "spark_process" entity.
-
-4. This connector doesn't support columns level lineage.
-
-5. It doesn't track tables that are dropped (Spark models).
-
- The "drop table" event from Spark only provides db and table name, which is NOT sufficient to create the unique key to recognize the table.
-
    The connector depends on reading the Spark Catalog to get table information. Spark has already dropped the table by the time this connector notices the drop, so drop table tracking won't work.
--
-## Next steps
-- [Learn about Data lineage in Microsoft Purview](catalog-lineage-user-guide.md)
-- [Link Azure Data Factory to push automated lineage](how-to-link-azure-data-factory.md)
purview How To Request Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-request-access.md
Previously updated : 02/17/2023 Last updated : 03/23/2023
purview How To View Self Service Data Access Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-view-self-service-data-access-policy.md
Title: View self-service policies
-description: This article describes how to view auto-generated self-service access policies
+description: This article describes how to view autogenerated self-service access policies
Previously updated : 03/22/2022 Last updated : 03/23/2023

# How to view self-service data access policies

In a Microsoft Purview catalog, you can now [request access](how-to-request-access.md) to data assets. If policies are currently available for the data source type and the data source has [Data Use Management enabled](how-to-enable-data-use-management.md), a self-service policy is generated when a data access request is approved.
-This article describes how to view self-service data access policies that have been auto-generated by approved access requests.
+This article describes how to view self-service data access policies that have been autogenerated by approved access requests.
## Prerequisites
purview How To Workflow Manage Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-workflow-manage-runs.md
Previously updated : 03/01/2022 Last updated : 03/23/2023
This article outlines how to manage workflows that are already running.
1. This will present a window that shows all the actions that are completed, actions that are in-progress, and the next action for that workflow run.
- :::image type="content" source="./media/how-to-workflow-manage-runs/workflow-details.png" alt-text="Screenshot of the workflow runs page, with an example workflow name selected, and the workflow details page overlaid, showing workflow run, submission time, run I D, status, and a list of all steps in the request timeline.":::
+ :::image type="content" source="./media/how-to-workflow-manage-runs/workflow-details.png" alt-text="Screenshot of the workflow runs page, with an example workflow name selected, and the workflow details page overlaid, showing workflow run, submission time, run ID, status, and a list of all steps in the request timeline.":::
1. You can select any of the actions in the request timeline to see the specific status and substep details.
purview Supported Classifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/supported-classifications.md
Yes
-
-### Taiwan national identification number
+### Taiwanese identification number
#### Format
reliability Asm Retirement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/asm-retirement.md
+
+ Title: Azure Service Manager Retirement
+description: Azure Service Manager Retirement documentation for all classic compute, networking and storage resources
++ Last updated : 03/24/2023+++++
+# Azure Service Manager Retirement
+
+Azure Service Manager (ASM) is the old control plane of Azure, responsible for creating, managing, and deleting VMs and performing other control plane operations, and has been in use since 2011. However, ASM is retiring in August 2024, and customers can now migrate to [Azure Resource Manager (ARM)](/azure/azure-resource-manager/management/overview). ARM provides a management layer that enables you to create, update, and delete resources in your Azure account. You can use management features like access control, locks, and tags to secure and organize your resources after deployment.
+
+## Benefits of migrating to ARM
+Migrating from the classic resource model to ARM offers several benefits, including:
+
+- Manage your infrastructure through declarative templates rather than scripts.
+
+- Deploy, manage, and monitor all the resources for your solution as a group, rather than handling these resources individually.
+
+- Redeploy your solution throughout the development lifecycle and have confidence your resources are deployed in a consistent state.
+
+- Define the dependencies between resources so they're deployed in the correct order.
+
+- Apply access control to all services because Azure role-based access control (Azure RBAC) is natively integrated into the management platform.
+
+- Apply tags to resources to logically organize all the resources in your subscription.
+
+- Clarify your organization's billing by viewing costs for a group of resources sharing the same tag.
+
+There are many more service-specific benefits, which can be found in the migration guides.
+
+## Services being retired
+To help with this transition, we are providing a range of resources and tools, including documentation and migration guides. We encourage you to begin planning your migration to ARM as soon as possible to ensure that you can continue to take advantage of the latest Azure features and capabilities.
+Here is a list of classic resources being retired and their retirement dates:
+
+| Classic Resource | Retirement Date |
+|||
+|[VM (classic)](https://azure.microsoft.com/updates/classicvmretirment) | Sep 23 |
+|[Azure Batch Cloud Service Pools](https://azure.microsoft.com/updates/azure-batch-cloudserviceconfiguration-pools-will-be-retired-on-29-february-2024) | Feb 24 |
+|[Cloud Services (classic)](https://azure.microsoft.com/updates/cloud-services-retirement-announcement) | Aug 24 |
+|[App Services](https://azure.microsoft.com/updates/app-service-environment-v1-and-v2-retirement-announcement) | Aug 24 |
+|[Classic Resource Providers](https://azure.microsoft.com/updates/azure-classic-resource-providers-will-be-retired-on-31-august-2024/) | Aug 24 |
+|[Integration Services Environment](https://azure.microsoft.com/updates/integration-services-environment-will-be-retired-on-31-august-2024-transition-to-logic-apps-standard/) | Aug 24 |
+|[Classic Storage](https://azure.microsoft.com/updates/classic-azure-storage-accounts-will-be-retired-on-31-august-2024/) | Aug 24 |
+|[Classic Virtual Network](https://azure.microsoft.com/updates/five-azure-classic-networking-services-will-be-retired-on-31-august-2024/) | Aug 24 |
+|[Classic Application Gateway](https://azure.microsoft.com/updates/five-azure-classic-networking-services-will-be-retired-on-31-august-2024/) | Aug 24 |
+|[Classic Reserved IP addresses](https://azure.microsoft.com/updates/five-azure-classic-networking-services-will-be-retired-on-31-august-2024/) |Aug 24|
+|[Classic ExpressRoute Gateway](https://azure.microsoft.com/updates/five-azure-classic-networking-services-will-be-retired-on-31-august-2024/) |Aug 24 |
+|[Classic VPN gateway](https://azure.microsoft.com/updates/five-azure-classic-networking-services-will-be-retired-on-31-august-2024/) | Aug 24 |
+|[API Management](/azure/api-management/breaking-changes/stv1-platform-retirement-august-2024) | Aug 24 |
+|[Azure Redis cache](/azure/azure-cache-for-redis/cache-faq#caches-with-a-dependency-on-cloud-services-(classic)) | Aug 24 |
+|[Virtual WAN](/azure/virtual-wan/virtual-wan-faq#update-router) | Aug 24 |
+|[Microsoft HPC Pack](/powershell/high-performance-computing/burst-to-cloud-services-retirement-guide) |Aug 24|
+|[Azure Active Directory Domain Services](/azure/active-directory-domain-services/migrate-from-classic-vnet) | Mar 23 |
+
+## Support
+We understand that you may have questions or concerns about this change, and we are here to help. If you have any questions or require further information, please don't hesitate to reach out to our [customer support team](https://azure.microsoft.com/support).
++
reliability Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/overview.md
- # Azure reliability documentation
-Reliability consists of two principles: resiliency and availability. The goal of reliability is to return your application to a fully functioning state after a failure occurs. The goal of availability is to provide consistent access to your application or workload be users as they need to.
+Reliability consists of two principles: resiliency and availability. The goal of resiliency is to return your application to a fully functioning state after a failure occurs. The goal of availability is to provide consistent access to your application or workload by users as they need it.
Azure includes built-in reliability services that you can use and manage based on your business needs. Whether it's a single hardware node failure, a rack level failure, a datacenter outage, or a large-scale regional outage, Azure provides solutions that improve reliability. For example, availability sets ensure that the virtual machines deployed on Azure are distributed across multiple isolated hardware nodes in a cluster. Availability zones protect customers' applications and data from datacenter failures across multiple physical locations within a region. **Regions** and **availability zones** are central to your application design and resiliency strategy and are discussed in greater detail later in this article.
resource-mover About Move Process https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/about-move-process.md
These components are used during region move.
![Diagram showing the move steps](./media/about-move-process/move-steps.png)

Each move resource goes through the summarized steps.

| **Step** | **Details** | **State/Issues** |
| --- | --- | --- |
| **Step 1: Select resources** | Select a resource. The resource is added to the move collection. | Resource state moves to *Prepare pending*. |
-| **Step 2: Validate dependencies** | Validation of the dependencies is carried out along with addition of resources in the background. <br/><br/> You must add dependent resources if validation shows that dependent resources are pending. <br><br> Add them to the move collection. <br/><br/> Add all dependent resources, even if you don't want to move them. You can later specify that the resources you're moving should use different resources in the target region instead of using the **Configuration** option.<br/><br/> You may need to manually validate if there are outstanding dependencies in the **Validate dependencies** tab. |
+| **Step 2: Validate dependencies** | Validation of the dependencies is carried out along with addition of resources in the background. <br/><br/> You must add dependent resources if validation shows that dependent resources are pending. <br><br> Add them to the move collection. <br/><br/> Add all dependent resources, even if you don't want to move them. You can later specify that the resources you're moving should use different resources in the target region instead of using the **Configuration** option.<br/><br/> You may need to manually validate if there are outstanding dependencies in the **Validate dependencies** tab. ||
| **Step 3: Prepare** | Kick off the prepare process. Preparation steps depend on the resources you're moving:<br/><br/> - **Stateless resources**: Stateless resources have configuration information only. These resources don't need continuous replication of data in order to move them. Examples include Azure virtual networks (VNets), network adapters, load balancers, and network security groups. For this type of resource, the Prepare process generates an Azure Resource Manager template.<br/><br/> - **Stateful resources**: Stateful resources have both configuration information, and data that needs to be moved. Examples include Azure VMs, and Azure SQL databases. The Prepare process differs for each resource. It might include replicating the source resource to the target region. | Kicking off moves resource state to *Prepare in progress*.<br/><br/> After prepare finishes, resource state moves to *Initiate move pending*, with no issues.<br/><br/> An unsuccessful process moves state to *Prepare failed*. | | **Step 4: Initiate move** | Kick off the move process. The move method depends on the resource type:<br/><br/> - **Stateless**: Typically, for stateless resources, the move process deploys an imported template in the target region. The template is based on the source resource settings, and any manual edits you make to target settings.<br/><br/> - **Stateful**: For stateful resources, the move process might involve creating the resource, or enabling a copy, in the target region.<br/><br/> For stateful resources only, initiating a move might result in downtime of source resources. For example, VMs and SQL. | Kicking off move shifts the state to *Initiate move in progress*.<br/><br/> A successful initiate move moves resource state to *Commit move pending*, with no issues. <br/><br/> An unsuccessful move process moves state to *Initiate move failed*. | | **Step 5 Option 1: Discard move** | After the initial move, you can decide whether you want to go ahead with a full move. If you don't, you can discard the move, and Resource Mover deletes the resources created in the target. The replication process for stateful resources continues after the Discard process. This option is useful for testing. | Discarding resources moves state to *Discard in progress*.<br/><br/> Successful discard moves state to *Initiate move pending*, with no issues.<br/><br/> A failed discard moves state to *Discard move failed*. | | **Step 5 Option 2: Commit move** | After the initial move, if you want to go ahead with a full move, you verify resources in the target region, and when you're ready, you commit the move.<br/><br/> For stateful resources only, commit can result in source resources like VMs or SQL becoming inaccessible. | If you commit the move, resource state moves to *Commit move in progress**.<br/><br/> After a successful commit, the resource state shows *Commit move completed*, with no issues.<br/><br/> A failed commit moves state to *Commit move failed*. |
-| **Step 6: Delete source** | After committing the move, and verifying resources in the target region, you can delete the source resource. | After committing, a resource state moves to *Delete source pending*. You can then select the source resource and delete it.<br/><br/> - Only resources in the *Delete source pending* state can be deleted. <br/><br/>Deleting a resource group or SQL Server in the Resource Mover portal isn't supported. These resources can only be deleted from the resource properties page. |
-
+| **Step 6: Delete source** | After committing the move, and verifying resources in the target region, you can delete the source resource. | After committing, a resource state moves to *Delete source pending*. You can then select the source resource and delete it.<br/><br/> Only resources in the *Delete source pending* state can be deleted. <br/><br/>Deleting a resource group or SQL Server in the Resource Mover portal isn't supported. These resources can only be deleted from the resource properties page. |
## Move region states
The table summarizes what's impacted when you're moving across regions.
[Move](tutorial-move-region-virtual-machines.md) Azure VMs to another region.

[Move](tutorial-move-region-sql.md) Azure SQL resources to another region.
resource-mover Tutorial Move Region Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/tutorial-move-region-virtual-machines.md
Last updated 02/10/2023
#Customer intent: As an Azure admin, I want to move Azure VMs to a different Azure region.

# Move Azure VMs across regions

This tutorial shows you how to move Azure VMs and related network/storage resources to a different Azure region using [Azure Resource Mover](overview.md).
Before you begin, verify the following:
| Requirement | Description |
| --- | --- |
| **Resource Mover support** | [Review](common-questions.md) the supported regions and other common questions. |
-| **Subscription permissions** | Check that you have *Owner* access on the subscription containing the resources that you want to move<br/><br/> **Why do I need Owner access?** The first time you add a resource for a specific source and destination pair in an Azure subscription, Resource Mover creates a [system-assigned managed identity](../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types) (formerly known as Managed Service Identify (MSI)) that's trusted by the subscription. To create the identity, and to assign it the required role (Contributor or User Access administrator in the source subscription), the account you use to add resources needs *Owner* permissions on the subscription. [Learn more](../role-based-access-control/rbac-and-directory-admin-roles.md#azure-roles) about Azure roles.|
+| **Subscription permissions** | Check that you have *Owner* access on the subscription containing the resources that you want to move.<br/><br/> **Why do I need Owner access?** The first time you add a resource for a specific source and destination pair in an Azure subscription, Resource Mover creates a [system-assigned managed identity](../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types), formerly known as Managed Service Identity (MSI), that's trusted by the subscription. To create the identity, and to assign it the required role (Contributor or User Access administrator in the source subscription), the account you use to add resources needs *Owner* permissions on the subscription. [Learn more](../role-based-access-control/rbac-and-directory-admin-roles.md#azure-roles) about Azure roles.|
| **VM support** | - Check that the VMs you want to move are supported.<br/> - [Verify](support-matrix-move-region-azure-vm.md#windows-vm-support) supported Windows VMs.<br/> - [Verify](support-matrix-move-region-azure-vm.md#linux-vm-support) supported Linux VMs and kernel versions.<br/> - Check supported [compute](support-matrix-move-region-azure-vm.md#supported-vm-compute-settings), [storage](support-matrix-move-region-azure-vm.md#supported-vm-storage-settings), and [networking](support-matrix-move-region-azure-vm.md#supported-vm-networking-settings) settings.|
-| **Destination subscription** | The subscription in the destination region needs enough quota to create the resources you're moving in the target region. If it doesn't have a quota, [request additional limits](../azure-resource-manager/management/azure-subscription-service-limits.md).
-**Destination region charges** | Verify pricing and charges associated with the target region to which you're moving VMs. Use the [pricing calculator](https://azure.microsoft.com/pricing/calculator/) to help you.|
-
+| **Destination subscription** | The subscription in the destination region needs enough quota to create the resources you're moving in the target region. If it doesn't have a quota, [request additional limits](../azure-resource-manager/management/azure-subscription-service-limits.md).|
+|**Destination region charges** | Verify pricing and charges associated with the target region to which you're moving VMs. Use the [pricing calculator](https://azure.microsoft.com/pricing/calculator/) to help you.|
## Prepare VMs
During the Prepare process, Resource Mover generates Azure Resource Manager (ARM
> [!NOTE]
> After preparing the resource group, it's in the *Initiate move pending* state.
-
### Move the source resource group

**To start the move, follow these steps:**
To delete the additional resources created for the move, follow these steps:
## Next steps

[Learn more](./tutorial-move-region-sql.md) about moving Azure SQL databases and elastic pools to another region.
sap Manage Virtual Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/manage-virtual-instance.md
Last updated 02/03/2023
-#Customer intent: As a developer, I want to configure my Virtual Instance for SAP solutions resource so that I can find system properties and connect to databases.
+#Customer intent: As an SAP Basis Admin, I want to view and manage my SAP systems using the Virtual Instance for SAP solutions resource, where I can find SAP system properties.
# Manage a Virtual Instance for SAP solutions (preview)
In this article, you'll learn how to view the *Virtual Instance for SAP solution
## Prerequisites

-- An Azure subscription.
-- **Contributor** role access to the subscription or resource groups where you plan to deploy the SAP system.
-- A **User-assigned managed identity** with **Contributor** role access to the Subscription or resource groups of the SAP system.
+- An Azure subscription in which you have a successfully created Virtual Instance for SAP solutions (VIS) resource.
+- An Azure account with **Azure Center for SAP solutions administrator** role access to the subscription or resource groups where you have the VIS resources.
## Open VIS in portal
To view properties for the instances within your VIS, first [open the VIS in the
In the sidebar menu, look under the section **SAP resources**:

-- To see properties of ASCS instances, select **Central server instances**.
+- To see properties of ASCS instances, select **Central service instances**.
- To see properties of application server instances, select **App server instances**.
- To see properties of database instances, select **Databases**.
If you get the warning **The operation 'List' is not enabled in this key vault's
## Delete VIS
-When you delete a VIS, you also delete the managed resource group and all instances that are attached to the VIS. For example, the VIS, ASCS, Application Server, and Database instances are deleted.
+When you delete a VIS, you also delete the managed resource group and all instances that are attached to the VIS. That is, the VIS, ASCS, Application Server, and Database instances are deleted.
Azure physical resources aren't deleted when you delete a VIS. For example, the VMs, disks, NICs, and other resources aren't deleted.

> [!WARNING]
sap Register Existing System https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/register-existing-system.md
In this how-to guide, you'll learn how to register an existing SAP system with *
- Use a [**Storage** service tag with regional scope](../../virtual-network/service-tags-overview.md) to allow storage account connectivity to the Azure storage accounts in the same region as the VMs.
- Allowlist the region-specific IP addresses for Azure Storage.
- Register the **Microsoft.Workloads** Resource Provider in the subscription where you have the SAP system.
-- Check that your Azure account has **Azure Center for SAP solutions administrator** or equivalent role access on the subscription or resource groups where you have the SAP system resources.
+- Check that your Azure account has **Azure Center for SAP solutions administrator** and **Managed Identity Operator** or equivalent role access on the subscription or resource groups where you have the SAP system resources.
- A **User-assigned managed identity** which has **Azure Center for SAP solutions service role** and **Tag Contributor** role access on the Compute resource group and **Reader** and **Tag Contributor** role access on the Network resource group of the SAP system. Azure Center for SAP solutions service uses this identity to discover your SAP system resources and register the system as a VIS resource. - Make sure ASCS, Application Server and Database virtual machines of the SAP system are in **Running** state.-- sapcontrol and saphostctrl exe files must exist in the path /usr/sap/hostctrl/exe on ASCS, App server and Database.
+- sapcontrol and saphostctrl exe files must exist on ASCS, App server and Database.
+ - File path on Linux VMs: /usr/sap/hostctrl/exe
+ - File path on Windows VMs: C:\Program Files\SAP\hostctrl\exe\
- Make sure the **sapstartsrv** process is running on all **SAP instances** and for **SAP hostctrl agent** on all the VMs in the SAP system.
- - To start hostctrl sapstartsrv use the command: 'hostexecstart -start'
+ - To start hostctrl sapstartsrv use this command for Linux VMs: 'hostexecstart -start'
- To start instance sapstartsrv use the command: 'sapcontrol -nr 'instanceNr' -function StartService S0S'
+ - To check status of hostctrl sapstartsrv use this command for Windows VMs: C:\Program Files\SAP\hostctrl\exe\saphostexec -status
- For successful discovery and registration of the SAP system, ensure there's network connectivity between the ASCS, App and DB VMs. The 'ping' command for the App instance hostname must be successful from the ASCS VM. The 'ping' for the Database hostname must be successful from the App server VM.
- On the App server profile, the SAPDBHOST, DBTYPE, DBID parameters must have the right values configured for the discovery and registration of Database instance details.
To provide permissions to the SAP system resources to a user-assigned managed id
To register an existing SAP system in Azure Center for SAP solutions:
-1. Sign in to the [Azure portal](https://portal.azure.com). Make sure to sign in with an Azure account that has **Azure Center for SAP solutions administrator** role access to the subscription or resource groups where the SAP system exists. For more information, see the [resource permissions explanation](#enable-resource-permissions).
+1. Sign in to the [Azure portal](https://portal.azure.com). Make sure to sign in with an Azure account that has **Azure Center for SAP solutions administrator** and **Managed Identity Operator** role access to the subscription or resource groups where the SAP system exists. For more information, see the [resource permissions explanation](#enable-resource-permissions).
1. Search for and select **Azure Center for SAP solutions** in the Azure portal's search bar.
1. On the **Azure Center for SAP solutions** page, select **Register an existing SAP system**.
This error happens when the Database identifier is incorrectly configured on the
1. Stop the Application Server instance:
- `sapcontrol -nr -function Stop`
+ `sapcontrol -nr <instance number> -function Stop`
1. Stop the ASCS instance:
- `sapcontrol -nr -function Stop`
+ `sapcontrol -nr <instance number> -function Stop`
1. Open the Application Server profile.
1. Add the profile parameter for the HANA Database:
- `rsdb/dbid = HanaDbSid`
+ `rsdb/dbid = <SID of HANA Database>`
1. Restart the Application Server instance:
- `sapcontrol -nr -function Start`
+ `sapcontrol -nr <instance number> -function Start`
1. Restart the ASCS instance:
- `sapcontrol -nr -function Start`
+ `sapcontrol -nr <instance number> -function Start`
1. Delete the VIS resource whose registration failed.
1. [Register the SAP system](#register-sap-system) again.

### Error - Azure VM Agent not in desired provisioning state
-This issue occurs when Azure VM agent's provisioning state is not as expected on the specified Virtual Machine. Expected state is **Ready**. Verify the agent status by checking the properties section in the VM overview page. To fix the VM Agent,
+**Cause:** This issue occurs when Azure VM agent's provisioning state is not as expected on the specified Virtual Machine. Expected state is **Ready**. Verify the agent status by checking the properties section in the VM overview page.
+
+**Solution:** To fix the Linux VM Agent:
1. Login to the VM using bastion or serial console.
-1. If the VM agent exists and is not running, then restart the waagent.
+2. If the VM agent exists and is not running, then restart the waagent.
- sudo systemctl status waagent.
- - If the service is not running then restart this service. To restart use the following steps:
+3. If the service is not running then restart this service. To restart use the following steps:
   - sudo systemctl stop waagent
   - sudo systemctl start waagent
- - If this does not solve the issue, try updating the VM Agent using [this document](../../virtual-machines/extensions/update-linux-agent.md)
-3. If the VM agent does not exist or needs to be re-installed, then follow [this documentation](../../virtual-machines/extensions/update-linux-agent.md).
+4. If this does not solve the issue, try updating the VM Agent using [this document](../../virtual-machines/extensions/update-linux-agent.md)
+5. If the VM agent does not exist or needs to be re-installed, then follow [this documentation](../../virtual-machines/extensions/update-linux-agent.md).
+To fix the Windows VM Agent, follow [Troubleshooting Azure Windows VM Agent](/troubleshoot/azure/virtual-machines/windows-azure-guest-agent).
## Next steps
sap Provider Netweaver https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/monitor/provider-netweaver.md
This step is **mandatory** when configuring SAP NetWeaver Provider. To fetch spe
1. Open an SAP GUI connection to the SAP server.
1. Sign in with an administrative account.
1. Execute transaction **RZ10**.
-1. Select the appropriate profile (recommended Instance Profile - no restart needed, *DEFAULT.PFL* requires restart of SAP system).
+1. Select the appropriate profile (recommended Instance Profile).
1. Select **Extended Maintenance** &gt; **Change**. 1. Select the profile parameter `service/protectedwebmethods`. 1. Change the value to: ```Value field
- SDEFAULT -GetQueueStatistic -ABAPGetWPTable -EnqGetStatistic -GetProcessList
+ SDEFAULT -GetQueueStatistic -ABAPGetWPTable -EnqGetStatistic -GetProcessList -GetEnvironment
   ```

1. Select **Copy**.
1. Select **Profile** &gt; **Save** to save the changes.
1. Restart the **SAPStartSRV** service on each instance in the SAP system. Restarting the services doesn't restart the entire system. This process only restarts **SAPStartSRV** (on Windows) or the daemon process (in Unix or Linux).
   1. On Windows systems, use the SAP Microsoft Management Console (MMC) or SAP Management Console (MC) to restart the service. Right-click each instance. Then, choose **All Tasks** &gt; **Restart Service**.
- 1. On Linux systems, use the following commands to restart the host. Replace `<instance number` with your SAP system's instance number.
+ 2. On Linux systems, use the following commands to restart the host. Replace `<instance number>` with your SAP system's instance number.
   ```Command to restart the service
   sapcontrol -nr <instance number> -function RestartService
- ```
-
-
+ ```
+ 3. Repeat the previous steps for each instance profile.
### Prerequisite to enable RFC metrics
After you restart the SAP service, check that your updated rules are applied to
 sapcontrol -nr <instance number> -function ParameterValue service/protectedwebmethods -user "<admin user>" "<admin password>"
 ```
-1. Review the output. Ensure in the output you see the name of methods **GetQueueStatistic ABAPGetWPTable EnqGetStatistic GetProcessList**
+1. Review the output. Ensure that the output includes the method names **GetQueueStatistic ABAPGetWPTable EnqGetStatistic GetProcessList GetEnvironment**.
1. Repeat the previous steps for each instance profile.
sap Dbms Guide Sapase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/dbms-guide-sapase.md
There are two supported High Availability configurations for SAP ASE on Azure:
> [!NOTE]
-> The failover times and other characteristics of either HA Aware or Floating IP solutions are similar. When deciding between these two solutions customers should perform their own testing and evaluation including factors such as planned and unplanned failover times and other operational procedures.
+> The failover times and other characteristics of either HA Aware or Floating IP solutions are similar. When deciding between these two solutions customers should perform their own testing and evaluation including factors such as planned and unplanned failover times and other operational procedures.
### Third node for disaster recovery

Beyond using SAP ASE Always-On for local high availability, you might want to extend the configuration to an asynchronously replicated node in another Azure region. For more information, see [Installation Procedure for Sybase 16.3 Patch Level 3 Always-on + DR on Suse 12.3](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/installation-procedure-for-sybase-16-3-patch-level-3-always-on/ba-p/368199).
sap Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/get-started.md
ms.assetid: ad8e5c75-0cf6-4564-ae62-ea1246b4e5f2
vm-linux Previously updated : 03/01/2023 Last updated : 03/27/2023
In the SAP workload documentation space, you can find the following areas:
## Change Log
+- March 26, 2023: Adding recommended sector size in [SAP HANA Azure virtual machine Premium SSD v2 storage configurations](./hana-vm-premium-ssd-v2.md)
- March 1, 2023: Change in [HA for SAP HANA on Azure VMs on RHEL](./sap-hana-high-availability-rhel.md) to add configuration for cluster default properties - February 21, 2023: Correct link to HANA hardware directory in [SAP HANA infrastructure configurations and operations on Azure](./hana-vm-operations.md) and fixed a bug in [SAP HANA Azure virtual machine Premium SSD v2 storage configurations](./hana-vm-premium-ssd-v2.md) - February 17, 2023: Add support and Sentinel sections, few other minor updates in [RISE with SAP integration](rise-integration.md)
sap Hana Vm Premium Ssd V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/hana-vm-premium-ssd-v2.md
Previously updated : 12/14/2022 Last updated : 03/27/2023
When you look up the price list for Azure managed disks, then it becomes apparen
Since we don't want to define which direction you should go, we're leaving the decision to you on whether to take the single disk approach or the multiple disk approach. Though keep in mind that the single disk approach can hit its limitations with the 1,200MB/sec throughput. There might be a point where you need to stretch /hana/data across multiple volumes. Also keep in mind that the capabilities of Azure VMs in providing storage throughput are going to grow over time, and that HANA savepoints are extremely critical and demand high throughput for the **/hana/data** volume.
+> [!IMPORTANT]
+> You have the possibility to define the sector size of Azure Premium SSD v2 as 512 Bytes or 4096 Bytes. All the tests conducted and the certification as storage for SAP HANA were done with a sector size of 4096 Bytes. Therefore, you should make sure that this is the sector size of choice when you deploy disks. This sector size is different than stripe sizes that you need to define when using a logical volume manager.
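As a sketch, creating a Premium SSD v2 disk with a 4096-Byte sector size through the Azure CLI might look like the following; the resource names, sizes, and performance values are hypothetical examples only:

```script
# Create a Premium SSD v2 disk with an explicit 4096-Byte logical sector size (values are examples).
az disk create \
  --resource-group hana-rg \
  --name hana-data-disk-1 \
  --location eastus \
  --zone 1 \
  --sku PremiumV2_LRS \
  --size-gb 512 \
  --disk-iops-read-write 12000 \
  --disk-mbps-read-write 600 \
  --logical-sector-size 4096
```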
+
+**Recommendation: The recommended configurations with Azure premium storage for production scenarios look like:**
+
+Configuration for SAP **/hana/data** volume:
sap Sap Hana High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-high-availability.md
With susChkSrv implemented, an immediate and configurable action is executed, wh
Configuration pointing to the standard location /usr/share/SAPHanaSR brings the benefit that the python hook code is automatically updated through OS or package updates, and it gets used by HANA at the next restart. With an optional own path, such as /hana/shared/myHooks, you can decouple OS updates from the hook version in use.
-2. **[A]** The cluster requires sudoers configuration on each cluster node for <sid\>adm. In this example that is achieved by creating a new file. Execute the command as `root` and adapt the bold values of hn1/HN1 with correct SID.
- <pre><code>
+2. **[A]** The cluster requires sudoers configuration on each cluster node for <sid\>adm. In this example that is achieved by creating a new file. Execute the command as `root` and adapt the values of hn1/HN1 with correct SID.
+
+ ```bash
   cat << EOF > /etc/sudoers.d/20-saphana
   # Needed for SAPHanaSR and susChkSrv Python hooks
- <b>hn1</b>adm ALL=(ALL) NOPASSWD: /usr/sbin/crm_attribute -n hana_<b>hn1</b>_site_srHook_*
- <b>hni</b>adm ALL=(ALL) NOPASSWD: /usr/sbin/SAPHanaSR-hookHelper --sid=<b>HN1</b> --case=fenceMe
+ hn1adm ALL=(ALL) NOPASSWD: /usr/sbin/crm_attribute -n hana_hn1_site_srHook_*
+ hn1adm ALL=(ALL) NOPASSWD: /usr/sbin/SAPHanaSR-hookHelper --sid=HN1 --case=fenceMe
EOF
- </code></pre>
+ ```
   For more details on the implementation of the SAP HANA system replication hook see [Set up HANA HA/DR providers](https://documentation.suse.com/sbp/all/html/SLES4SAP-hana-sr-guide-PerfOpt-15/index.html#_set_up_sap_hana_hadr_providers).

3. **[A]** Start SAP HANA on both nodes. Execute as <sid\>adm.
NOTE: The following tests are designed to be run in sequence and depend on the e
* [Azure Virtual Machines planning and implementation for SAP][planning-guide] * [Azure Virtual Machines deployment for SAP][deployment-guide]
-* [Azure Virtual Machines DBMS deployment for SAP][dbms-guide]
+* [Azure Virtual Machines DBMS deployment for SAP][dbms-guide]
search Search Security Trimming For Azure Search With Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-security-trimming-for-azure-search-with-aad.md
- Previously updated : 01/30/2023+ Last updated : 03/24/2023 # Security filters for trimming Azure Cognitive Search results using Active Directory identities
This article demonstrates how to use Azure Active Directory (AD) security identities together with filters in Azure Cognitive Search to trim search results based on user group membership. This article covers the following tasks:

> [!div class="checklist"]
> - Create Azure AD groups and users
> - Associate the user with the group you have created
This article covers the following tasks:
Your index in Azure Cognitive Search must have a [security field](search-security-trimming-for-azure-search.md) to store the list of group identities having read access to the document. This use case assumes a one-to-one correspondence between a securable item (such as an individual's college application) and a security field specifying who has access to that item (admissions personnel).
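For illustration, such a security field might be defined in the index schema as follows; the `group_ids` field name is hypothetical, and the field must be filterable:

```script
{ "name": "group_ids", "type": "Collection(Edm.String)", "filterable": true, "searchable": false, "retrievable": false }
```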
-You must have Azure AD administrator permissions (Owner or administrator), required in this walkthrough for creating users, groups, and associations.
+You must have Azure AD administrator permissions (Owner or administrator) to create users, groups, and associations.
Your application must also be registered with Azure AD as a multi-tenant app, as described in the following procedure.
Your application must also be registered with Azure AD as a multi-tenant app, as
This step integrates your application with Azure AD for the purpose of accepting sign-ins of user and group accounts. If you aren't a tenant admin in your organization, you might need to [create a new tenant](../active-directory/develop/quickstart-create-new-tenant.md) to perform the following steps.
-1. In [Azure portal](https://portal.azure.com), find the Azure Active Directory resource for your subscription.
+1. In [Azure portal](https://portal.azure.com), find the Azure Active Directory tenant.
1. On the left, under **Manage**, select **App registrations**, and then select **New registration**.
-1. Give the registration a name, perhaps a name that is similar to the search application name. Select **Register**.
+1. Give the registration a name, perhaps a name that's similar to the search application name. Select **Register**.
-1. Once the app registration is created, copy the Application ID. You'll need to provide this string to your application.
+1. Once the app registration is created, copy the Application (client) ID. You'll need to provide this string to your application.
- If you're stepping through the [DotNetHowToSecurityTrimming](https://github.com/Azure-Samples/search-dotnet-getting-started/tree/master/DotNetHowToEncryptionUsingCMK), paste this value into the **app.config** file.
+ If you're stepping through the [DotNetHowToSecurityTrimming](https://github.com/Azure-Samples/search-dotnet-getting-started/tree/master/DotNetHowToSecurityTrimming), paste this value into the **app.config** file.
Repeat for the Tenant ID.
This step integrates your application with Azure AD for the purpose of accepting
- **Group.ReadWrite.All** - **User.ReadWrite.All**
-Microsoft Graph provides an API that allows programmatic access to Azure AD through a REST API. The code sample for this walkthrough uses the permissions to call the Microsoft Graph API for creating groups, users, and associations. The APIs are also used to cache group identifiers for faster filtering.
+ Microsoft Graph provides an API that allows programmatic access to Azure AD through a REST API. The code sample for this walkthrough uses the permissions to call the Microsoft Graph API for creating groups, users, and associations. The APIs are also used to cache group identifiers for faster filtering.
+
+1. Select **Grant admin consent for tenant** to complete the consent process.
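For illustration, once admin consent is granted, the code sample's Graph calls are equivalent to REST requests like the following sketch; the group values are hypothetical, and the token must carry the permissions listed above:

```script
# Create a security group through Microsoft Graph (example values only).
curl -X POST "https://graph.microsoft.com/v1.0/groups" \
  -H "Authorization: Bearer <access token>" \
  -H "Content-Type: application/json" \
  -d '{ "displayName": "Admissions", "mailEnabled": false, "mailNickname": "admissions", "securityEnabled": true }'
```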
## Create users and groups
IndexDocumentsResult result = searchClient.IndexDocuments(batch);
## Issue a search request
-For security trimming purposes, the values in your security field in the index are static values used for including or excluding documents in search results. For example, if the group identifier for Admissions is "A11B22C33D44-E55F66G77-H88I99JKK", any documents in an Azure Cognitive Search index having that identifier in the security filed are included (or excluded) in the search results sent back to the requestor.
+For security trimming purposes, the values in your security field in the index are static values used for including or excluding documents in search results. For example, if the group identifier for Admissions is "A11B22C33D44-E55F66G77-H88I99JKK", any documents in an Azure Cognitive Search index having that identifier in the security field are included (or excluded) in the search results sent back to the caller.
To filter documents returned in search results based on groups of the user issuing the request, review the following steps.
search Search Security Trimming For Azure Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-security-trimming-for-azure-search.md
- Previously updated : 01/30/2023+ Last updated : 03/24/2023 # Security filters for trimming results in Azure Cognitive Search
-You can apply security filters to trim search results in Azure Cognitive Search based on user identity. This search experience generally requires comparing the identity of whoever requests the search against a field containing the principals who have permissions to the document. When a match is found, the user or principal (such as a group or role) has access to that document.
+Cognitive Search doesn't provide document-level permissions and can't vary search results from within the same index by user permissions. As a workaround, you can create a filter that trims search results based on a string containing a group or user identity.
-One way to achieve security filtering is through a complicated disjunction of equality expressions: for example, `Id eq 'id1' or Id eq 'id2'`, and so forth. This approach is error-prone, difficult to maintain, and in cases where the list contains hundreds or thousands of values, slows down query response time by many seconds.
+This article describes a pattern for security filtering that includes following steps:
-A simpler and faster approach is through the `search.in` function. If you use `search.in(Id, 'id1, id2, ...')` instead of an equality expression, you can expect sub-second response times.
-
-This article shows you how to accomplish security filtering using the following steps:
> [!div class="checklist"]
-> * Create a field that contains the principal identifiers
-> * Push or update existing documents with the relevant principal identifiers
-> * Issue a search request with `search.in` `filter`
+> * Assemble source documents with the required content
+> * Create a field for the principal identifiers
+> * Push the documents to the search index for indexing
+> * Query the index with the `search.in` filter function
+
+## About the security filter pattern
+
+Although Cognitive Search doesn't integrate with security subsystems for access to content within an index, many customers who have document-level security requirements have found that filters can meet their needs.
+
+In Cognitive Search, a security filter is a regular OData filter that includes or excludes a search result based on a matching value, except that in a security filter, the criteria is a string consisting of a security principal. There's no authentication or authorization through the security principal. The principal is just a string, used in a filter expression, to include or exclude a document from the search results.
->[!NOTE]
-> The process of retrieving the principal identifiers is not covered in this document. You should get it from your identity service provider.
+There are several ways to achieve security filtering. One way is through a complicated disjunction of equality expressions: for example, `Id eq 'id1' or Id eq 'id2'`, and so forth. This approach is error-prone, difficult to maintain, and in cases where the list contains hundreds or thousands of values, slows down query response time by many seconds.
+
+A better solution is using the `search.in` function for security filters, as described in this article. If you use `search.in(Id, 'id1, id2, ...')` instead of an equality expression, you can expect subsecond response times.
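As an illustration, the two styles might look like the following as GET queries (a sketch; the `Id` field and the index name `securedcontent` are hypothetical, and the filter expressions are shown unencoded for readability):

```http
# Disjunction of equality expressions: slow when the list is long
GET https://[search service].search.windows.net/indexes/securedcontent/docs?api-version=2020-06-30&$filter=Id eq 'id1' or Id eq 'id2' or Id eq 'id3'

# search.in over the same identifiers: typically subsecond
GET https://[search service].search.windows.net/indexes/securedcontent/docs?api-version=2020-06-30&$filter=search.in(Id, 'id1, id2, id3')
```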
## Prerequisites
-This article assumes you have an [Azure subscription](https://azure.microsoft.com/pricing/free-trial/?WT.mc_id=A261C142F), an[Azure Cognitive Search service](search-create-service-portal.md), and an [index](search-what-is-an-index.md).
+* The field containing group or user identity must be a string with the "filterable" attribute. It should be a collection. It shouldn't allow nulls.
+
+* Other fields in the same document should provide the content that's accessible to that group or user. In the following JSON documents, the "security_id" fields contain identities used in a security filter, and the name, salary, and marital status will be included if the identity of the caller matches the "security_id" of the document.
+
+ ```json
+ {
+ "Employee-1": {
+ "id": "100-1000-10-1-10000-1",
+ "name": "Abram",
+ "salary": 75000,
+ "married": true,
+ "security_id": "10011"
+ },
+ "Employee-2": {
+ "id": "200-2000-20-2-20000-2",
+ "name": "Adams",
+ "salary": 75000,
+ "married": true,
+ "security_id": "20022"
+ }
+ }
+ ```
+
+ >[!NOTE]
+ > The process of retrieving the principal identifiers and injecting those strings into source documents that can be indexed by Cognitive Search isn't covered in this article. Refer to the documentation of your identity service provider for help with obtaining identifiers.
## Create security field
-Your documents must include a field specifying which groups have access. This information becomes the filter criteria against which documents are selected or rejected from the result set returned to the issuer.
-Let's assume that we have an index of secured files, and each file is accessible by a different set of users.
+In the search index, within the field collection, you need one field that contains the group or user identity, similar to the fictitious "security_id" field in the previous example.
+
+1. Add a security field as a `Collection(Edm.String)`. Make sure it has a `filterable` attribute set to `true` so that search results are filtered based on the access the user has. For example, if you set the `group_ids` field to `["group_id1", "group_id2"]` for the document with `file_name` "secured_file_b", only users that belong to group IDs "group_id1" or "group_id2" have read access to the file.
-1. Add field `group_ids` (you can choose any name here) as a `Collection(Edm.String)`. Make sure the field has a `filterable` attribute set to `true` so that search results are filtered based on the access the user has. For example, if you set the `group_ids` field to `["group_id1, group_id2"]` for the document with `file_name` "secured_file_b", only users that belong to group IDs "group_id1" or "group_id2" have read access to the file.
-
- Make sure the field's `retrievable` attribute is set to `false` so that it isn't returned as part of the search request.
+ Set the field's `retrievable` attribute to `false` so that it isn't returned as part of the search request.
-2. Also add `file_id` and `file_name` fields for the sake of this example.
+1. Indexes require a document key. The "file_id" field satisfies that requirement. Indexes should also contain searchable content. The "file_name" and "file_description" fields represent that in this example.
- ```JSON
- {
+ ```http
+ POST https://[search service].search.windows.net/indexes?api-version=2020-06-30
+ {
"name": "securedfiles", "fields": [
- {"name": "file_id", "type": "Edm.String", "key": true, "searchable": false, "sortable": false, "facetable": false},
- {"name": "file_name", "type": "Edm.String"},
- {"name": "group_ids", "type": "Collection(Edm.String)", "filterable": true, "retrievable": false}
+ {"name": "file_id", "type": "Edm.String", "key": true, "searchable": false },
+ {"name": "file_name", "type": "Edm.String", "searchable": true },
+ {"name": "file_description", "type": "Edm.String", "searchable": true },
+ {"name": "group_ids", "type": "Collection(Edm.String)", "filterable": true, "retrievable": false }
] }
- ```
+ ```
-## Pushing data into your index using the REST API
+## Push data into your index using the REST API
-Issue an HTTP POST request to your index's URL endpoint. The body of the HTTP request is a JSON object containing the documents to be added:
+Issue an HTTP POST request to your index's URL endpoint. The body of the HTTP request is a JSON object containing the documents to be indexed:
```http POST https://[search service].search.windows.net/indexes/securedfiles/docs/index?api-version=2020-06-30
-Content-Type: application/json
-api-key: [admin key]
```
In the request body, specify the content of your documents:
"@search.action": "upload", "file_id": "1", "file_name": "secured_file_a",
+ "file_description": "File access is restricted to the Human Resources.",
"group_ids": ["group_id1"] }, { "@search.action": "upload", "file_id": "2", "file_name": "secured_file_b",
+ "file_description": "File access is restricted to Human Resources and Recruiting.",
"group_ids": ["group_id1", "group_id2"] }, { "@search.action": "upload", "file_id": "3", "file_name": "secured_file_c",
+ "file_description": "File access is restricted to Operations and Logistics.",
"group_ids": ["group_id5", "group_id6"] } ]
If you need to update an existing document with the list of groups, you can use the `merge` or `mergeOrUpload` action:
} ```
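For reference, a complete merge request might look like the following (a minimal sketch, assuming the `securedfiles` index defined earlier; the group list is illustrative):

```http
POST https://[search service].search.windows.net/indexes/securedfiles/docs/index?api-version=2020-06-30
{
    "value": [
        {
            "@search.action": "merge",
            "file_id": "2",
            "group_ids": ["group_id1", "group_id2", "group_id3"]
        }
    ]
}
```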
-For full details on adding or updating documents, you can read [Edit documents](/rest/api/searchservice/addupdate-or-delete-documents).
+For more information on uploading documents, see [Add, Update, or Delete Documents (REST)](/rest/api/searchservice/addupdate-or-delete-documents).
-## Apply the security filter
+## Apply the security filter in the query
In order to trim documents based on `group_ids` access, you should issue a search query with a `group_ids/any(g:search.in(g, 'group_id1, group_id2,...'))` filter, where 'group_id1, group_id2,...' are the groups to which the search request issuer belongs. This filter matches all documents for which the `group_ids` field contains one of the given identifiers. For full details on searching documents using Azure Cognitive Search, you can read [Search Documents](/rest/api/searchservice/search-documents).
-Note that this sample shows how to search documents using a POST request.
+
+This sample shows how to set up a query using a POST request.
Issue the HTTP POST request:
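The request might look like the following (a sketch; substitute the group identifiers of the requesting user):

```http
POST https://[search service].search.windows.net/indexes/securedfiles/docs/search?api-version=2020-06-30
{
    "search": "*",
    "filter": "group_ids/any(g: search.in(g, 'group_id1, group_id2'))"
}
```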
You should get the documents back where `group_ids` contains either "group_id1" or "group_id2".
This article described a pattern for filtering results based on user identity and the `search.in()` function. You can use this function to pass in principal identifiers for the requesting user to match against principal identifiers associated with each target document. When a search request is handled, the `search.in` function filters out search results for which none of the user's principals have read access. The principal identifiers can represent things like security groups, roles, or even the user's own identity.
-For an alternative pattern based on Active Directory, or to revisit other security features, see the following links.
+For an alternative pattern based on Azure Active Directory, or to revisit other security features, see the following links.
* [Security filters for trimming results using Active Directory identities](search-security-trimming-for-azure-search-with-aad.md) * [Security in Azure Cognitive Search](search-security-overview.md)
sentinel Configure Connector Login Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/configure-connector-login-detection.md
Microsoft Sentinel can apply machine learning (ML) to Security events data to id
- **Unusual IP** - the IP address has rarely or never been observed in the last 30 days -- **Unusual geo-location** - the IP address, city, country, and ASN have rarely or never been observed in the last 30 days
+- **Unusual geo-location** - the IP address, city, country/region, and ASN have rarely or never been observed in the last 30 days
- **New user** - a new user logs in from an IP address and geo-location, either or both of which weren't expected based on data from the preceding 30 days.
sentinel Connect Logstash Data Connection Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-logstash-data-connection-rules.md
In this section, you create resources to use for your DCR, in one of these scena
To ingest the data to a custom table, follow these steps (based on the [Send data to Azure Monitor Logs using REST API (Azure portal) tutorial](../azure-monitor/logs/tutorial-logs-ingestion-portal.md)): 1. Review the [prerequisites](../azure-monitor/logs/tutorial-logs-ingestion-portal.md#prerequisites).
-1. [Configure the application](../azure-monitor/logs/tutorial-logs-ingestion-portal.md#configure-the-application).
-1. [Create a data collection endpoint](../azure-monitor/logs/tutorial-logs-ingestion-portal.md#create-a-data-collection-endpoint).
-1. [Add a custom log table](../azure-monitor/logs/tutorial-logs-ingestion-portal.md#add-a-custom-log-table).
+1. [Configure the application](../azure-monitor/logs/tutorial-logs-ingestion-portal.md#create-azure-ad-application).
+1. [Create a data collection endpoint](../azure-monitor/logs/tutorial-logs-ingestion-portal.md#create-data-collection-endpoint).
+1. [Add a custom log table](../azure-monitor/logs/tutorial-logs-ingestion-portal.md#create-new-table-in-log-analytics-workspace).
1. [Parse and filter sample data](../azure-monitor/logs/tutorial-logs-ingestion-portal.md#parse-and-filter-sample-data) using [the sample file you created in the previous section](#create-a-sample-file). 1. [Collect information from the DCR](../azure-monitor/logs/tutorial-logs-ingestion-portal.md#collect-information-from-the-dcr). 1. [Assign permissions to the DCR](../azure-monitor/logs/tutorial-logs-ingestion-portal.md#assign-permissions-to-the-dcr). Skip the Send sample data step.
-If you come across any issues, see the [troubleshooting steps](../azure-monitor/logs/tutorial-logs-ingestion-portal.md#troubleshooting).
+If you come across any issues, see the [troubleshooting steps](../azure-monitor/logs/tutorial-logs-ingestion-code.md#troubleshooting).
#### Create DCR resources for ingestion into a standard table
To ingest the data to a standard table like Syslog or CommonSecurityLog, you use
1. Review the [prerequisites](../azure-monitor/logs/tutorial-logs-ingestion-api.md#prerequisites). 1. [Collect workspace details](../azure-monitor/logs/tutorial-logs-ingestion-api.md#collect-workspace-details).
-1. [Configure an application](../azure-monitor/logs/tutorial-logs-ingestion-api.md#configure-an-application).
+1. [Configure an application](../azure-monitor/logs/tutorial-logs-ingestion-api.md#create-azure-ad-application).
Skip the Create new table in Log Analytics workspace step. This step isn't relevant when ingesting data into a standard table, because the table is already defined in Log Analytics.
-1. [Create data collection endpoint](../azure-monitor/logs/tutorial-logs-ingestion-api.md#create-a-data-collection-endpoint).
-1. [Create the DCR](../azure-monitor/logs/tutorial-logs-ingestion-api.md#create-a-data-collection-rule). In this step:
+1. [Create data collection endpoint](../azure-monitor/logs/tutorial-logs-ingestion-api.md#create-data-collection-endpoint).
+1. [Create the DCR](../azure-monitor/logs/tutorial-logs-ingestion-api.md#create-data-collection-rule). In this step:
- Provide [the sample file you created in the previous section](#create-a-sample-file). - Use the sample file you created to define the `streamDeclarations` property. Each of the fields in the sample file should have a corresponding column with the same name and the appropriate type (see the [example](#example-dcr-that-ingests-data-into-the-syslog-table) below). - Configure the value of the `outputStream` property with the name of the standard table instead of the custom table. Unlike custom tables, standard table names don't have the `_CL` suffix.
To ingest the data to a standard table like Syslog or CommonSecurityLog, you use
Skip the Send sample data step.
-If you come across any issues, see the [troubleshooting steps](../azure-monitor/logs/tutorial-logs-ingestion-api.md#troubleshooting).
+If you come across any issues, see the [troubleshooting steps](../azure-monitor/logs/tutorial-logs-ingestion-code.md#troubleshooting).
##### Example: DCR that ingests data into the Syslog table
If you are not seeing any data in this log file, generate and send some events l
- Ingestion into standard tables is limited only to [standard tables supported for custom logs ingestion](data-transformation.md#data-transformation-support-for-custom-data-connectors). - The columns of the input stream in the `streamDeclarations` property must start with a letter. If you start a column with other characters (for example `@` or `_`), the operation fails. - The `TimeGenerated` datetime field is required. You must include this field in the KQL transform.-- For additional possible issues, review the [troubleshooting section](../azure-monitor/logs/tutorial-logs-ingestion-api.md#troubleshooting) in the tutorial.
+- For additional possible issues, review the [troubleshooting section](../azure-monitor/logs/tutorial-logs-ingestion-code.md#troubleshooting) in the tutorial.
## Next steps
sentinel Connect Logstash https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-logstash.md
Here are some sample configurations that use a few different options.
} ```
+- A more advanced configuration to parse a custom timestamp and a JSON string from unstructured text data and log a selected set of fields into Log Analytics with the extracted timestamp:
+
+ ```ruby
+ # Example log line below:
+ # Mon Nov 07 20:45:08 2022: { "name":"_custom_time_generated", "origin":"test_microsoft", "sender":"test@microsoft.com", "messages":1337}
+ # take an input
+ input {
+ file {
+ path => "/var/log/test.log"
+ }
+ }
+ filter {
+ # extract the header timestamp and the Json section
+ grok {
+ match => {
+ "message" => ["^(?<timestamp>.{24}):\s(?<json_data>.*)$"]
+ }
+ }
+ # parse the extracted header as a timestamp
+ date {
+ id => 'parse_metric_timestamp'
+ match => [ 'timestamp', 'EEE MMM dd HH:mm:ss yyyy' ]
+ timezone => 'Europe/Rome'
+ target => 'custom_time_generated'
+ }
+ json {
+ source => "json_data"
+ }
+ }
+ # output to a file for debugging (optional)
+ output {
+ file {
+ path => "/tmp/test.txt"
+ codec => line { format => "custom format: %{message} %{custom_time_generated} %{json_data}"}
+ }
+ }
+ # output to the console output for debugging (optional)
+ output {
+ stdout { codec => rubydebug }
+ }
+ # log into Log Analytics
+ output {
+ microsoft-logstash-output-azure-loganalytics {
+ workspace_id => '[REDACTED]'
+ workspace_key => '[REDACTED]'
+ custom_log_table_name => 'RSyslogMetrics'
+ time_generated_field => 'custom_time_generated'
+ key_names => ['custom_time_generated','name','origin','sender','messages']
+ }
+ }
+ ```
+ > [!NOTE] > Visit the output plugin [GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/DataConnectors/microsoft-logstash-output-azure-loganalytics) to learn more about its inner workings, configuration, and performance settings.
sentinel Data Connectors Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors-reference.md
Title: Find your Microsoft Sentinel data connector | Microsoft Docs
description: Learn about specific configuration steps for Microsoft Sentinel data connectors. Previously updated : 03/06/2023 Last updated : 03/25/2023
Data connectors are available as part of the following offerings:
## Citrix - [Citrix ADC (former NetScaler)](data-connectors/citrix-adc-former-netscaler.md)-- [CITRIX SECURITY ANALYTICS](data-connectors/citrix-security-analytics.md) ## Claroty
Data connectors are available as part of the following offerings:
## Cloud Software Group
+- [CITRIX SECURITY ANALYTICS](data-connectors/citrix-security-analytics.md)
- [Citrix WAF (Web App Firewall)](data-connectors/citrix-waf-web-app-firewall.md) ## Cloudflare
Data connectors are available as part of the following offerings:
- [Cyberpion Security Logs](data-connectors/cyberpion-security-logs.md)
+## Cybersixgill
+
+- [Cybersixgill Actionable Alerts (using Azure Function)](data-connectors/cybersixgill-actionable-alerts-using-azure-function.md)
+ ## Darktrace - [AI Analyst Darktrace](data-connectors/ai-analyst-darktrace.md)
Data connectors are available as part of the following offerings:
- [Symantec ProxySG](data-connectors/symantec-proxysg.md) - [Symantec VIP](data-connectors/symantec-vip.md)
+## TALON CYBER SECURITY LTD
+
+- [Talon Insights](data-connectors/talon-insights.md)
+ ## Tenable - [Tenable.io Vulnerability Management (using Azure Function)](data-connectors/tenable-io-vulnerability-management-using-azure-function.md)
sentinel Akamai Security Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/akamai-security-events.md
Title: "Akamai Security Events connector for Microsoft Sentinel"
description: "Learn how to install the connector Akamai Security Events to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 03/25/2023
Akamai Solution for Sentinel provides the capability to ingest [Akamai Security
| Connector attribute | Description | | | |
-| **Kusto function alias** | AkamaiSIEMEvent |
-| **Kusto function url** | https://aka.ms/sentinel-akamaisecurityevents-parser |
| **Log Analytics table(s)** | CommonSecurityLog (AkamaiSecurityEvents)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](../../azure-monitor/logs/tutorial-workspace-transformations-portal.md) |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | ## Query samples
AkamaiSIEMEvent
> [!NOTE]
- > This data connector depends on a parser based on a Kusto Function to work as expected. [Follow these steps](https://aka.ms/sentinel-akamaisecurityevents-parser) to create the Kusto functions alias, **AkamaiSIEMEvent**
+ > This data connector depends on a parser based on a Kusto function, which is deployed as part of the solution. To view the function code in Log Analytics, open the Log Analytics/Microsoft Sentinel Logs blade, select **Functions**, search for the alias AkamaiSIEMEvent, and load the function code, or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/Akamai%20Security%20Events/Parsers/AkamaiSIEMEvent.txt). On the second line of the query, enter the hostname(s) of your Akamai Security Events device(s) and any other unique identifiers for the logstream. The function usually takes 10-15 minutes to activate after a solution installation or update.
1. Linux Syslog agent configuration
Make sure to configure the machine's security according to your organization's s
## Next steps
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-akamai?tab=Overview) in the Azure Marketplace.
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-akamai?tab=Overview) in the Azure Marketplace.
sentinel Azure Ddos Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/azure-ddos-protection.md
Title: "Azure DDoS Protection connector for Microsoft Sentinel"
description: "Learn how to install the connector Azure DDoS Protection to connect your data source to Microsoft Sentinel." Previously updated : 03/14/2023 Last updated : 03/25/2023 # Azure DDoS Protection connector for Microsoft Sentinel
-Connect to Azure DDoS Protection logs via Public IP Address Diagnostic Logs. In addition to the core DDoS protection in the platform, Azure DDoS Protection provides advanced DDoS mitigation capabilities against network attacks. It's automatically tuned to protect your specific Azure resources. Protection is simple to enable during the creation of new virtual networks. It can also be done after creation and requires no application or resource changes. For more information, see the [Microsoft Sentinel documentation](https://go.microsoft.com/fwlink/p/?linkid=2219760&wt.mc_id=sentinel_dataconnectordocs_content_cnl_csasci).
+Connect to Azure DDoS Protection Standard logs via Public IP Address Diagnostic Logs. In addition to the core DDoS protection in the platform, Azure DDoS Protection Standard provides advanced DDoS mitigation capabilities against network attacks. It's automatically tuned to protect your specific Azure resources. Protection is simple to enable during the creation of new virtual networks. It can also be done after creation and requires no application or resource changes. For more information, see the [Microsoft Sentinel documentation](https://go.microsoft.com/fwlink/p/?linkid=2219760&wt.mc_id=sentinel_dataconnectordocs_content_cnl_csasci).
## Connector attributes
sentinel Blackberry Cylanceprotect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/blackberry-cylanceprotect.md
Title: "Blackberry CylancePROTECT connector for Microsoft Sentinel"
description: "Learn how to install the connector Blackberry CylancePROTECT to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 03/25/2023
The [Blackberry CylancePROTECT](https://www.blackberry.com/us/en/products/blackb
| Connector attribute | Description | | | |
-| **Kusto function alias** | CylancePROTECT |
-| **Kusto function url** | https://aka.ms/sentinel-cylanceprotect-parser |
| **Log Analytics table(s)** | Syslog (CylancePROTECT)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](../../azure-monitor/logs/tutorial-workspace-transformations-portal.md) |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | ## Query samples
To integrate with Blackberry CylancePROTECT make sure you have:
## Vendor installation instructions
->This data connector depends on a parser based on a Kusto Function to work as expected. [Follow the steps](https://aka.ms/sentinel-cylanceprotect-parser) to use the Kusto function alias, **CylancePROTECT**
+> [!NOTE]
 > This data connector depends on a parser based on a Kusto function, which is deployed as part of the solution. To view the function code in Log Analytics, open the Log Analytics/Microsoft Sentinel Logs blade, select **Functions**, search for the alias CylancePROTECT, and load the function code, or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/Blackberry%20CylancePROTECT/Parsers/CylancePROTECT.txt). On the second line of the query, enter the hostname(s) of your CylancePROTECT device(s) and any other unique identifiers for the logstream. The function usually takes 10-15 minutes to activate after a solution installation or update.
1. Install and onboard the agent for Linux
Configure the facilities you want to collect and their severities.
## Next steps
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-blackberrycylanceprotect?tab=Overview) in the Azure Marketplace.
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-blackberrycylanceprotect?tab=Overview) in the Azure Marketplace.
sentinel Braodcom Symantec Dlp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/braodcom-symantec-dlp.md
Title: "Braodcom Symantec DLP connector for Microsoft Sentinel"
-description: "Learn how to install the connector Braodcom Symantec DLP to connect your data source to Microsoft Sentinel."
+ Title: "Broadcom Symantec DLP connector for Microsoft Sentinel"
+description: "Learn how to install the connector Broadcom Symantec DLP to connect your data source to Microsoft Sentinel."
Previously updated : 02/23/2023 Last updated : 03/25/2023
-# Braodcom Symantec DLP connector for Microsoft Sentinel
+# Broadcom Symantec DLP connector for Microsoft Sentinel
-The [Broadcom Symantec Data Loss Prevention (DLP)](https://www.broadcom.com/products/cyber-security/information-protection/data-loss-prevention) connector allows you to easily connect your Symantec DLP with Azure Sentinel, to create custom dashboards, alerts, and improve investigation. This gives you more insight into your organization's information, where it travels, and improves your security operation capabilities.
+The [Broadcom Symantec Data Loss Prevention (DLP)](https://www.broadcom.com/products/cyber-security/information-protection/data-loss-prevention) connector allows you to easily connect your Symantec DLP with Microsoft Sentinel, to create custom dashboards, alerts, and improve investigation. This gives you more insight into your organization's information, where it travels, and improves your security operation capabilities.
## Connector attributes | Connector attribute | Description | | | |
-| **Kusto function alias** | SymantecDLP |
-| **Kusto function url** | https://aka.ms/sentinel-symantecdlp-parser |
| **Log Analytics table(s)** | CommonSecurityLog (SymantecDLP)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](../../azure-monitor/logs/tutorial-workspace-transformations-portal.md) |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | ## Query samples
SymantecDLP
``` - ## Vendor installation instructions
->This data connector depends on a parser based on a Kusto Function to work as expected. [Follow the steps](https://aka.ms/sentinel-symantecdlp-parser) to use the Kusto function alias, **SymantecDLP**
+**NOTE:** This data connector depends on a parser based on a Kusto function, which is deployed as part of the solution. To view the function code in Log Analytics, open the Log Analytics/Microsoft Sentinel Logs blade, select **Functions**, search for the alias SymantecDLP, and load the function code, or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/Broadcom%20SymantecDLP/Parsers/SymantecDLP.txt). The function usually takes 10-15 minutes to activate after a solution installation or update.
1. Linux Syslog agent configuration
-Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Azure Sentinel.
+Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel.
> Notice that the data from all regions will be stored in the selected workspace 1.1 Select or create a Linux machine
-Select or create a Linux machine that Azure Sentinel will use as the proxy between your security solution and Azure Sentinel this machine can be on your on-prem environment, Azure or other clouds.
+Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel. This machine can be in your on-premises environment, Azure, or other clouds.
1.2 Install the CEF collector on the Linux machine
-Install the Microsoft Monitoring Agent on your Linux machine and configure the machine to listen on the necessary port and forward messages to your Azure Sentinel workspace. The CEF collector collects CEF messages on port 514 TCP.
+Install the Microsoft Monitoring Agent on your Linux machine and configure the machine to listen on the necessary port and forward messages to your Microsoft Sentinel workspace. The CEF collector collects CEF messages on port 514 TCP.
> 1. Make sure that you have Python on your machine using the following command: python --version.
Install the Microsoft Monitoring Agent on your Linux machine and configure the m
2. Forward Symantec DLP logs to a Syslog agent
-Configure Symantec DLP to forward Syslog messages in CEF format to your Azure Sentinel workspace via the Syslog agent.
+Configure Symantec DLP to forward Syslog messages in CEF format to your Microsoft Sentinel workspace via the Syslog agent.
1. [Follow these instructions](https://help.symantec.com/cs/DLP15.7/DLP/v27591174_v133697641/Configuring-the-Log-to-a-Syslog-Server-action?locale=EN_US) to configure the Symantec DLP to forward syslog 2. Use the IP address or hostname for the Linux device with the Linux agent installed as the Destination IP address.
Make sure to configure the machine's security according to your organization's s
## Next steps
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-broadcomsymantecdlp?tab=Overview) in the Azure Marketplace.
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-broadcomsymantecdlp?tab=Overview) in the Azure Marketplace.
sentinel Cisco Application Centric Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/cisco-application-centric-infrastructure.md
Title: "Cisco Application Centric Infrastructure connector for Microsoft Sentine
description: "Learn how to install the connector Cisco Application Centric Infrastructure to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 03/25/2023
| Connector attribute | Description | | | | | **Log Analytics table(s)** | Syslog (CiscoACIEvent)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](../../azure-monitor/logs/tutorial-workspace-transformations-portal.md) |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | ## Query samples
Open Log Analytics to check if the logs are received using the Syslog schema.
## Next steps
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-ciscoaci?tab=Overview) in the Azure Marketplace.
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-ciscoaci?tab=Overview) in the Azure Marketplace.
sentinel Cisco Meraki https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/cisco-meraki.md
Title: "Cisco Meraki connector for Microsoft Sentinel"
description: "Learn how to install the connector Cisco Meraki to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 03/25/2023
The [Cisco Meraki](https://meraki.cisco.com/) connector allows you to easily con
| Connector attribute | Description | | | |
-| **Kusto function alias** | CiscoMeraki |
-| **Kusto function url** | https://aka.ms/sentinel-ciscomeraki-parser |
| **Log Analytics table(s)** | meraki_CL<br/> | | **Data collection rules support** | Not currently supported | | **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |
To integrate with Cisco Meraki make sure you have:
## Vendor installation instructions
->This data connector depends on a parser (based on a Kusto Function) to work as expected. You have 2 options to get this parser into workspace
-
-> 1. If you have installed this connector via Meraki solution in ContentHub then navigate to parser definition from your workspace (Logs --> Functions --> CiscoMeraki --> Load the function code) to add your Meraki device list in the query and save the function.
-
-> 2. If you have not installed the Meraki solution from ContentHub then [Follow the steps](https://aka.ms/sentinel-ciscomeraki-parser) to use the Kusto function alias, **CiscoMeraki**
+**NOTE:** This data connector depends on a parser based on a Kusto function, which is deployed as part of the solution. To view the function code in Log Analytics, open the Log Analytics/Microsoft Sentinel Logs blade, select **Functions**, search for the alias CiscoMeraki, and load the function code, or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/CiscoMeraki/Parsers/CiscoMeraki.txt). The function usually takes 10-15 minutes to activate after a solution installation or update.
1. Install and onboard the agent for Linux
sentinel Cisco Ucs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/cisco-ucs.md
Title: "Cisco UCS connector for Microsoft Sentinel"
description: "Learn how to install the connector Cisco UCS to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 03/25/2023 # Cisco UCS connector for Microsoft Sentinel
-The [Cisco Unified Computing System (UCS)](https://www.cisco.com/c/en/us/products/servers-unified-computing/index.html) connector allows you to easily connect your Cisco UCS logs with Azure Sentinel. This gives you more insight into your organization's network and improves your security operation capabilities.
+The [Cisco Unified Computing System (UCS)](https://www.cisco.com/c/en/us/products/servers-unified-computing/index.html) connector allows you to easily connect your Cisco UCS logs with Microsoft Sentinel. This gives you more insight into your organization's network and improves your security operation capabilities.
## Connector attributes | Connector attribute | Description | | | |
-| **Kusto function alias** | CiscoUCS |
-| **Kusto function url** | https://aka.ms/sentinel-ciscoucs-function |
| **Log Analytics table(s)** | Syslog (CiscoUCS)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](../../azure-monitor/logs/tutorial-workspace-transformations-portal.md) |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | ## Query samples
To integrate with Cisco UCS make sure you have:
## Vendor installation instructions
->This data connector depends on a parser based on a Kusto Function to work as expected. [Follow the steps](https://aka.ms/sentinel-ciscoucs-function) to use the Kusto function alias, **CiscoUCS**
+**NOTE:** This data connector depends on a parser based on a Kusto function, which is deployed as part of the solution. To view the function code in Log Analytics, open the Log Analytics/Microsoft Sentinel Logs blade, select **Functions**, search for the alias CiscoUCS, and load the function code, or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/Cisco%20UCS/Parsers/CiscoUCS.txt). The function usually takes 10-15 minutes to activate after a solution installation or update.
1. Install and onboard the agent for Linux
Configure the facilities you want to collect and their severities.
## Next steps
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-ciscoucs?tab=Overview) in the Azure Marketplace.
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-ciscoucs?tab=Overview) in the Azure Marketplace.
sentinel Common Event Format Cef Via Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/common-event-format-cef-via-ama.md
Title: "Common Event Format (CEF) via AMA connector for Microsoft Sentinel"
description: "Learn how to install the connector Common Event Format (CEF) via AMA to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 03/25/2023
Common Event Format (CEF) is an industry standard format on top of Syslog messag
| Connector attribute | Description | | | | | **Log Analytics table(s)** | CommonSecurityLog<br/> |
-| **Data collection rules support** | [Workspace transform DCR](../../azure-monitor/logs/tutorial-workspace-transformations-portal.md) |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | ## Next steps
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-commoneventformat?tab=Overview) in the Azure Marketplace.
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-commoneventformat?tab=Overview) in the Azure Marketplace.
sentinel Contrast Protect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/contrast-protect.md
Title: "Contrast Protect connector for Microsoft Sentinel"
description: "Learn how to install the connector Contrast Protect to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 03/25/2023
Contrast Protect mitigates security threats in production applications with runt
| Connector attribute | Description | | | | | **Log Analytics table(s)** | CommonSecurityLog (ContrastProtect)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](../../azure-monitor/logs/tutorial-workspace-transformations-portal.md) |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
| **Supported by** | [Contrast Protect](https://docs.contrastsecurity.com/) | ## Query samples
Make sure to configure the machine's security according to your organization's s
## Next steps
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/contrast_security.contrast_protect_azure_sentinel_solution?tab=Overview) in the Azure Marketplace.
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/contrast_security.contrast_protect_azure_sentinel_solution?tab=Overview) in the Azure Marketplace.
sentinel Crowdstrike Falcon Endpoint Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/crowdstrike-falcon-endpoint-protection.md
Title: "CrowdStrike Falcon Endpoint Protection connector for Microsoft Sentinel"
description: "Learn how to install the connector CrowdStrike Falcon Endpoint Protection to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 03/25/2023
The [CrowdStrike Falcon Endpoint Protection](https://www.crowdstrike.com/endpoin
| Connector attribute | Description | | | |
-| **Kusto function alias** | CrowdStrikeFalconEventStream |
-| **Kusto function url** | https://aka.ms/sentinel-crowdstrikefalconendpointprotection-parser |
| **Log Analytics table(s)** | CommonSecurityLog (CrowdStrikeFalconEventStream)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](../../azure-monitor/logs/tutorial-workspace-transformations-portal.md) |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | ## Query samples
CrowdStrikeFalconEventStream
## Vendor installation instructions
->This data connector depends on a parser based on a Kusto Function to work as expected. [Follow the steps](https://aka.ms/sentinel-crowdstrikefalconendpointprotection-parser) to use the Kusto function alias, **CrowdStrikeFalconEventStream**
+**NOTE:** This data connector depends on a parser based on a Kusto function, which is deployed as part of the solution. To view the function code in Log Analytics, open the Log Analytics/Microsoft Sentinel Logs blade, select **Functions**, search for the alias CrowdStrikeFalconEventStream, and load the function code, or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/CrowdStrike%20Falcon%20Endpoint%20Protection/Parsers/CrowdstrikeFalconEventStream.txt). On the second line of the query, enter the hostname(s) of your CrowdStrike Falcon device(s) and any other unique identifiers for the logstream. The function usually takes 10-15 minutes to activate after a solution installation or update.
1. Linux Syslog agent configuration
Make sure to configure the machine's security according to your organization's s
## Next steps
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-crowdstrikefalconep?tab=Overview) in the Azure Marketplace.
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-crowdstrikefalconep?tab=Overview) in the Azure Marketplace.
sentinel Cybersixgill Actionable Alerts Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/cybersixgill-actionable-alerts-using-azure-function.md
+
+ Title: "Cybersixgill Actionable Alerts (using Azure Function) connector for Microsoft Sentinel"
+description: "Learn how to install the connector Cybersixgill Actionable Alerts (using Azure Function) to connect your data source to Microsoft Sentinel."
++ Last updated : 03/25/2023++++
+# Cybersixgill Actionable Alerts (using Azure Function) connector for Microsoft Sentinel
+
+Actionable alerts provide customized alerts based on configured assets.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Azure function app code** | https://github.com/syed-loginsoft/Azure-Sentinel/blob/cybersixgill/Solutions/Cybersixgill-Actionable-Alerts/Data%20Connectors/CybersixgillAlerts.zip?raw=true |
+| **Log Analytics table(s)** | CyberSixgill_Alerts_CL<br/> |
+| **Data collection rules support** | Not currently supported |
+| **Supported by** | [Cybersixgill](https://www.cybersixgill.com/) |
+
+## Query samples
+
+**All Alerts**
+ ```kusto
+CyberSixgill_Alerts_CL
+ ```
+++
+## Prerequisites
+
+To integrate with Cybersixgill Actionable Alerts (using Azure Function) make sure you have:
+
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions are required to create a Function App. [See the documentation to learn more about Azure Functions](https://learn.microsoft.com/azure/azure-functions/).
+- **REST API Credentials/permissions**: **Client_ID** and **Client_Secret** are required for making API calls.
++
+## Vendor installation instructions
++
+> [!NOTE]
 > This connector uses Azure Functions to connect to the Cybersixgill API to pull Alerts into Microsoft Sentinel. This might result in additional data ingestion costs and Azure Blob Storage costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) and [Azure Blob Storage pricing page](https://azure.microsoft.com/pricing/details/storage/blobs/) for details.
++
+>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](https://learn.microsoft.com/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
++++
+Option 1 - Azure Resource Manager (ARM) Template
+
+Use this method for automated deployment of the Cybersixgill Actionable Alerts data connector using an ARM Template.
+
+1. Click the **Deploy to Azure** button below.
+
+ [![Deploy To Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fgithub.com%2Fsyed-loginsoft%2FAzure-Sentinel%2Fraw%2Fcybersixgill%2FSolutions%2FCybersixgill-Actionable-Alerts%2FData%20Connectors%2Fazuredeploy_Connector_Cybersixgill_AzureFunction.json)
+2. Select the preferred **Subscription**, **Resource Group** and **Location**.
+3. Enter the **Workspace ID**, **Workspace Key**, **Client ID**, **Client Secret**, **TimeInterval** and deploy.
+4. Mark the checkbox labeled **I agree to the terms and conditions stated above**.
+5. Click **Purchase** to deploy.
+
+Option 2 - Manual Deployment of Azure Functions
+
+Use the following step-by-step instructions to deploy the Cybersixgill Actionable Alerts data connector manually with Azure Functions (Deployment via Visual Studio Code).
++
+**1. Deploy a Function App**
+
+> NOTE: You will need to [prepare VS Code](https://learn.microsoft.com/azure/azure-functions/functions-create-first-function-python#prerequisites) for Azure function development.
+
+1. Download the [Azure Function App](https://github.com/syed-loginsoft/Azure-Sentinel/blob/cybersixgill/Solutions/Cybersixgill-Actionable-Alerts/Data%20Connectors/CybersixgillAlerts.zip?raw=true) file. Extract the archive to your local development computer.
+2. Start VS Code. Choose File in the main menu and select Open Folder.
+3. Select the top level folder from extracted files.
+4. Choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose the **Deploy to function app** button.
+If you aren't already signed in, choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose **Sign in to Azure**
+If you're already signed in, go to the next step.
+5. Provide the following information at the prompts:
+
+ a. **Select folder:** Choose a folder from your workspace or browse to one that contains your function app.
+
+ b. **Select Subscription:** Choose the subscription to use.
+
+ c. Select **Create new Function App in Azure** (Don't choose the Advanced option)
+
+ d. **Enter a globally unique name for the function app:** Type a name that is valid in a URL path. The name you type is validated to make sure that it's unique in Azure Functions. (e.g. CybersixgillAlertsXXX).
+
+ e. **Select a runtime:** Choose Python 3.8.
+
 f. Select a location for new resources. For better performance and lower costs, choose the same [region](https://azure.microsoft.com/regions/) where Microsoft Sentinel is located.
+
+6. Deployment will begin. A notification is displayed after your function app is created and the deployment package is applied.
+7. Go to Azure Portal for the Function App configuration.
++
+**2. Configure the Function App**
+
+1. In the Function App, select the Function App Name and select **Configuration**.
+2. In the **Application settings** tab, select **+ New application setting**.
+3. Add each of the following application settings individually, with their respective string values (case-sensitive):
+ ClientID
+ ClientSecret
+ Polling
+ WorkspaceID
+ WorkspaceKey
+ logAnalyticsUri (optional)
+ - Use logAnalyticsUri to override the log analytics API endpoint for dedicated cloud. For example, for public cloud, leave the value empty; for Azure GovUS cloud environment, specify the value in the following format: `https://<CustomerId>.ods.opinsights.azure.us`
+4. Once all application settings have been entered, click **Save**.
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/cybersixgill1657701397011.azure-sentinel-cybersixgill-actionable-alerts?tab=Overview) in the Azure Marketplace.
sentinel Delinea Secret Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/delinea-secret-server.md
Title: "Delinea Secret Server connector for Microsoft Sentinel"
description: "Learn how to install the connector Delinea Secret Server to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 03/25/2023
Common Event Format (CEF) from Delinea Secret Server
| Connector attribute | Description | | | | | **Log Analytics table(s)** | CommonSecurityLog(DelineaSecretServer)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](../../azure-monitor/logs/tutorial-workspace-transformations-portal.md) |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
| **Supported by** | [Delinea](https://delinea.com/support/) | ## Query samples
Make sure to configure the machine's security according to your organization's s
## Next steps
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/delineainc1653506022260.delinea_secret_server_mss?tab=Overview) in the Azure Marketplace.
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/delineainc1653506022260.delinea_secret_server_mss?tab=Overview) in the Azure Marketplace.
sentinel Digital Guardian Data Loss Prevention https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/digital-guardian-data-loss-prevention.md
Title: "Digital Guardian Data Loss Prevention connector for Microsoft Sentinel"
description: "Learn how to install the connector Digital Guardian Data Loss Prevention to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 03/25/2023
| Connector attribute | Description | | | | | **Log Analytics table(s)** | Syslog (DigitalGuardianDLPEvent)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](../../azure-monitor/logs/tutorial-workspace-transformations-portal.md) |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | ## Query samples
Open Log Analytics to check if the logs are received using the Syslog schema.
## Next steps
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-digitalguardiandlp?tab=Overview) in the Azure Marketplace.
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-digitalguardiandlp?tab=Overview) in the Azure Marketplace.
sentinel Eset Protect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/eset-protect.md
Title: "ESET PROTECT connector for Microsoft Sentinel"
description: "Learn how to install the connector ESET PROTECT to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 03/25/2023
This connector gathers all events generated by ESET software through the central
| **Kusto function alias** | ESETPROTECT | | **Kusto function url** | https://aka.ms/sentinel-esetprotect-parser | | **Log Analytics table(s)** | Syslog (ESETPROTECT)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](../../azure-monitor/logs/tutorial-workspace-transformations-portal.md) |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
| **Supported by** | [ESET Netherlands](https://techcenter.eset.nl/en/) | ## Query samples
Configure ESET PROTECT to send all events through Syslog.
## Next steps
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/cyberdefensegroupbv1625581149103.eset_protect?tab=Overview) in the Azure Marketplace.
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/cyberdefensegroupbv1625581149103.eset_protect?tab=Overview) in the Azure Marketplace.
sentinel Extrahop Reveal X https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/extrahop-reveal-x.md
Title: "ExtraHop Reveal(x) connector for Microsoft Sentinel"
description: "Learn how to install the connector ExtraHop Reveal(x) to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 03/25/2023
The ExtraHop Reveal(x) data connector enables you to easily connect your Reveal(
| Connector attribute | Description |
| --- | --- |
| **Log Analytics table(s)** | CommonSecurityLog ('ExtraHop')<br/> |
-| **Data collection rules support** | [Workspace transform DCR](../../azure-monitor/logs/tutorial-workspace-transformations-portal.md) |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
| **Supported by** | [ExtraHop](https://www.extrahop.com/support/) |

## Query samples
Make sure to configure the machine's security according to your organization's s
## Next steps
For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/extrahop.extrahop_revealx_mss?tab=Overview) in the Azure Marketplace.
sentinel F5 Networks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/f5-networks.md
Title: "F5 Networks connector for Microsoft Sentinel"
description: "Learn how to install the connector F5 Networks to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 03/25/2023
The F5 firewall connector allows you to easily connect your F5 logs with Microso
| Connector attribute | Description |
| --- | --- |
| **Log Analytics table(s)** | CommonSecurityLog (F5)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](../../azure-monitor/logs/tutorial-workspace-transformations-portal.md) |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
| **Supported by** | [F5](https://www.f5.com/services/support) |

## Query samples
Make sure to configure the machine's security according to your organization's s
## Next steps
For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/f5-networks.f5_networks_data_mss?tab=Overview) in the Azure Marketplace.
sentinel Fireeye Network Security Nx https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/fireeye-network-security-nx.md
Title: "FireEye Network Security (NX) connector for Microsoft Sentinel"
description: "Learn how to install the connector FireEye Network Security (NX) to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 03/25/2023
The [FireEye Network Security (NX)](https://www.fireeye.com/products/network-sec
| Connector attribute | Description |
| --- | --- |
| **Log Analytics table(s)** | CommonSecurityLog (FireEyeNX)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](../../azure-monitor/logs/tutorial-workspace-transformations-portal.md) |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |

## Query samples
Make sure to configure the machine's security according to your organization's s
## Next steps
For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-fireeyenx?tab=Overview) in the Azure Marketplace.
sentinel Forcepoint Casb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/forcepoint-casb.md
Title: "Forcepoint CASB connector for Microsoft Sentinel"
description: "Learn how to install the connector Forcepoint CASB to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 03/25/2023
The Forcepoint CASB (Cloud Access Security Broker) Connector allows you to autom
| Connector attribute | Description |
| --- | --- |
| **Log Analytics table(s)** | CommonSecurityLog (ForcepointCASB)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](../../azure-monitor/logs/tutorial-workspace-transformations-portal.md) |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
| **Supported by** | [Community](https://github.com/Azure/Azure-Sentinel/issues) |

## Query samples
To complete the installation of this Forcepoint product integration, follow the
## Next steps
For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/microsoftsentinelcommunity.azure-sentinel-solution-forcepoint-casb?tab=Overview) in the Azure Marketplace.
sentinel Forcepoint Ngfw https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/forcepoint-ngfw.md
Title: "Forcepoint NGFW connector for Microsoft Sentinel"
description: "Learn how to install the connector Forcepoint NGFW to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 03/25/2023
The Forcepoint NGFW (Next Generation Firewall) connector allows you to automatic
| Connector attribute | Description |
| --- | --- |
| **Log Analytics table(s)** | CommonSecurityLog (ForcePointNGFW)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](../../azure-monitor/logs/tutorial-workspace-transformations-portal.md) |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
| **Supported by** | [Community](https://github.com/Azure/Azure-Sentinel/issues) |

## Query samples
To complete the installation of this Forcepoint product integration, follow the
## Next steps
For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/microsoftsentinelcommunity.azure-sentinel-solution-forcepoint-ngfw?tab=Overview) in the Azure Marketplace.
sentinel Forescout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/forescout.md
Title: "Forescout connector for Microsoft Sentinel"
description: "Learn how to install the connector Forescout to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 03/25/2023
The [Forescout](https://www.forescout.com/) data connector provides the capabili
| Connector attribute | Description |
| --- | --- |
| **Log Analytics table(s)** | Syslog (ForescoutEvent)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](../../azure-monitor/logs/tutorial-workspace-transformations-portal.md) |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |

## Query samples
Follow the configuration steps below to get Forescout logs into Microsoft Sentin
## Next steps
For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/forescout.azure-sentinel-solution-forescout?tab=Overview) in the Azure Marketplace.
sentinel Fortinet Fortiweb Web Application Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/fortinet-fortiweb-web-application-firewall.md
Title: "Fortinet FortiWeb Web Application Firewall connector for Microsoft Senti
description: "Learn how to install the connector Fortinet FortiWeb Web Application Firewall to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 03/25/2023
The [fortiweb](https://www.fortinet.com/products/web-application-firewall/fortiw
| Connector attribute | Description |
| --- | --- |
| **Log Analytics table(s)** | CommonSecurityLog (Fortiweb)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](../../azure-monitor/logs/tutorial-workspace-transformations-portal.md) |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
| **Supported by** | [Microsoft Corporation](https://support.microsoft.com/) |

## Query samples
Make sure to configure the machine's security according to your organization's s
## Next steps
For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-fortiwebcloud?tab=Overview) in the Azure Marketplace.
sentinel Gitlab https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/gitlab.md
Title: "GitLab connector for Microsoft Sentinel"
description: "Learn how to install the connector GitLab to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 03/25/2023
The [GitLab](https://about.gitlab.com/solutions/devops-platform/) connector allo
| Connector attribute | Description |
| --- | --- |
| **Log Analytics table(s)** | Syslog (GitlabAccess)<br/> Syslog (GitlabAudit)<br/> Syslog (GitlabApp)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](../../azure-monitor/logs/tutorial-workspace-transformations-portal.md) |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |

## Query samples
Configure the facilities you want to collect and their severities.
## Next steps
For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-gitlab?tab=Overview) in the Azure Marketplace.
sentinel Google Workspace G Suite Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/google-workspace-g-suite-using-azure-function.md
Title: "Google Workspace (G Suite) (using Azure Function) connector for Microsof
description: "Learn how to install the connector Google Workspace (G Suite) (using Azure Function) to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 03/25/2023
The [Google Workspace](https://workspace.google.com/) data connector provides th
| Connector attribute | Description |
| --- | --- |
| **Azure function app code** | https://aka.ms/sentinel-GWorkspaceReportsAPI-functionapp |
-| **Kusto function alias** | GWorkspaceActivityReports |
-| **Kusto function url** | https://aka.ms/sentinel-GWorkspaceReportsAPI-parser |
| **Log Analytics table(s)** | GWorkspace_ReportsAPI_admin_CL<br/> GWorkspace_ReportsAPI_calendar_CL<br/> GWorkspace_ReportsAPI_drive_CL<br/> GWorkspace_ReportsAPI_login_CL<br/> GWorkspace_ReportsAPI_mobile_CL<br/> GWorkspace_ReportsAPI_token_CL<br/> GWorkspace_ReportsAPI_user_accounts_CL<br/> |
| **Data collection rules support** | Not currently supported |
| **Supported by** | [Microsoft Corporation](https://support.microsoft.com/) |
To integrate with Google Workspace (G Suite) (using Azure Function) make sure yo
> [!NOTE]
 > This connector uses Azure Functions to connect to the Google Reports API to pull its logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details.
>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](https://learn.microsoft.com/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
-> [!NOTE]
- > This data connector depends on a parser based on a Kusto Function to work as expected. [Follow these steps](https://aka.ms/sentinel-GWorkspaceReportsAPI-parser) to create the Kusto functions alias, **GWorkspaceActivityReports**
+**NOTE:** This data connector depends on a parser based on a Kusto Function to work as expected, which is deployed as part of the solution. To view the function code in Log Analytics, open the Log Analytics/Microsoft Sentinel Logs blade, click **Functions**, search for the alias GWorkspaceReports, and load the function code, or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/GoogleWorkspaceReports/Parsers/GWorkspaceActivityReports). On the second line of the query, enter the hostname(s) of your GWorkspaceReports device(s) and any other unique identifiers for the logstream. The function usually takes 10-15 minutes to activate after solution installation or update.
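As a quick post-deployment sanity check, a minimal sketch against one of the custom tables listed in the connector attributes above:

```kusto
// Sketch: confirm Google Workspace login report events are landing in the custom table.
GWorkspace_ReportsAPI_login_CL
| where TimeGenerated > ago(1d)
| take 10
```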
**STEP 1 - Ensure the prerequisites to obtain the Google Pickle String**
To integrate with Google Workspace (G Suite) (using Azure Function) make sure yo
**STEP 2 - Configuration steps for the Google Reports API**
- 1. In Google Workspace, create a project if one does not already exists.
- 2. From ***APIs & Services*** -> ***Enabled APIs & Services***, enable **Admin SDK API** for this project.
- 3. Go to ***APIs & Services*** -> ***OAuth Consent Screen***. If not already configured, create a OAuth Consent Screen with the following steps:
+1. Log in to the Google Cloud console (https://console.cloud.google.com) with your Workspace Admin credentials.
+2. Using the search option (available at the top middle), search for ***APIs & Services***.
+3. From ***APIs & Services*** -> ***Enabled APIs & Services***, enable **Admin SDK API** for this project.
+ 4. Go to ***APIs & Services*** -> ***OAuth Consent Screen***. If not already configured, create an OAuth Consent Screen with the following steps:
1. Provide App Name and other mandatory information.
2. Add authorized domains with API Access Enabled.
3. In Scopes section, add **Admin SDK API** scope.
4. In Test Users section, make sure the domain admin account is added.
- 4. Go to ***APIs & Services*** -> ***Credentials*** and create OAuth 2.0 Client ID
+ 5. Go to ***APIs & Services*** -> ***Credentials*** and create OAuth 2.0 Client ID
1. Click **Create Credentials** at the top and select **OAuth client ID**.
2. Select **Web Application** from the **Application Type** drop-down.
3. Provide a suitable name for the Web App and add http://localhost:8081/ as one of the Authorized redirect URIs.
4. Once you click **Create**, download the JSON from the pop-up that appears. Rename this file to "**credentials.json**".
- 5. To fetch Google Pickel String, run the [python script](https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/GoogleWorkspaceReports/Data%20Connectors/get_google_pickle_string.py) from the same folder where credentials.json is saved.
+ 6. To fetch the Google Pickle String, run the [python script](https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/GoogleWorkspaceReports/Data%20Connectors/get_google_pickle_string.py) from the same folder where credentials.json is saved.
 1. When prompted to sign in, use the domain admin account credentials to log in.
>**Note:** This script is supported only on the Windows operating system.
- 6. From the output of the previous step, copy Google Pickle String (contained within single quotation marks) and keep it handy. It will be needed on Function App deployment step.
+ 7. From the output of the previous step, copy the Google Pickle String (contained within single quotation marks) and keep it handy. It will be needed in the Function App deployment step.
**STEP 3 - Choose ONE from the following two deployment options to deploy the connector and the associated Azure Function**
sentinel Illumio Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/illumio-core.md
Title: "Illumio Core connector for Microsoft Sentinel"
description: "Learn how to install the connector Illumio Core to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 03/25/2023
The [Illumio Core](https://www.illumio.com/products/core) data connector provide
| Connector attribute | Description |
| --- | --- |
| **Log Analytics table(s)** | CommonSecurityLog (IllumioCore)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](../../azure-monitor/logs/tutorial-workspace-transformations-portal.md) |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
| **Supported by** | [Microsoft](https://support.microsoft.com) |

## Query samples
Make sure to configure the machine's security according to your organization's s
## Next steps
For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-illumiocore?tab=Overview) in the Azure Marketplace.
sentinel Illusive Platform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/illusive-platform.md
Title: "Illusive Platform connector for Microsoft Sentinel"
description: "Learn how to install the connector Illusive Platform to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 03/25/2023
The Illusive Platform Connector allows you to share Illusive's attack surface an
| Connector attribute | Description |
| --- | --- |
| **Log Analytics table(s)** | CommonSecurityLog (illusive)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](../../azure-monitor/logs/tutorial-workspace-transformations-portal.md) |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
| **Supported by** | [Illusive Networks](https://illusive.com/support) |

## Query samples
Make sure to configure the machine's security according to your organization's s
## Next steps
For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/illusivenetworks.illusive_platform_mss?tab=Overview) in the Azure Marketplace.
sentinel Infoblox Cloud Data Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/infoblox-cloud-data-connector.md
Title: "Infoblox Cloud Data connector for Microsoft Sentinel"
description: "Learn how to install the connector Infoblox Cloud Data to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 03/25/2023
The Infoblox Cloud Data Connector allows you to easily connect your Infoblox Blo
| Connector attribute | Description |
| --- | --- |
| **Log Analytics table(s)** | CommonSecurityLog (InfobloxCDC)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](../../azure-monitor/logs/tutorial-workspace-transformations-portal.md) |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
| **Supported by** | [InfoBlox](https://support.infoblox.com/) |

## Query samples
InfobloxCDC
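A minimal sketch of such a query, assuming the InfobloxCDC parser described below is deployed:

```kusto
// Sketch: pull recent BloxOne events through the InfobloxCDC parser alias.
InfobloxCDC
| where TimeGenerated > ago(1h)
| take 10
```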
## Vendor installation instructions
->**IMPORTANT:** This data connector depends on a parser based on a Kusto Function to work as expected called [**InfobloxCDC**](https://aka.ms/sentinel-InfobloxCloudDataConnector-parser) which is deployed with the solution.
+>**IMPORTANT:** This data connector depends on a parser based on a Kusto Function to work as expected called [**InfobloxCDC**](https://aka.ms/sentinel-InfobloxCloudDataConnector-parser) which is deployed with the Microsoft Sentinel Solution.
->**IMPORTANT:** This Microsoft Sentinel data connector assumes an Infoblox Cloud Data Connector host has already been created and configured in the Infoblox Cloud Services Portal (CSP). As the [**Infoblox Cloud Data Connector**](https://docs.infoblox.com/display/BloxOneThreatDefense/Deploying+the+Data+Connector+Solution) is a feature of BloxOne Threat Defense, access to an appropriate BloxOne Threat Defense subscription is required. See this [**quick-start guide**](https://www.infoblox.com/wp-content/uploads/infoblox-deployment-guide-data-connector.pdf) for more information and licensing requirements.
+>**IMPORTANT:** This Sentinel data connector assumes an Infoblox Data Connector host has already been created and configured in the Infoblox Cloud Services Portal (CSP). As the [**Infoblox Data Connector**](https://docs.infoblox.com/display/BloxOneThreatDefense/Deploying+the+Data+Connector+Solution) is a feature of BloxOne Threat Defense, access to an appropriate BloxOne Threat Defense subscription is required. See this [**quick-start guide**](https://www.infoblox.com/wp-content/uploads/infoblox-deployment-guide-data-connector.pdf) for more information and licensing requirements.
1. Linux Syslog agent configuration
Install and configure the Linux agent to collect your Common Event Format (CEF)
1.1 Select or create a Linux machine
-Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel this machine can be on your on-prem environment, Microsoft Sentinel or other clouds.
+Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel. This machine can be in your on-premises environment, Azure, or other clouds.
1.2 Install the CEF collector on the Linux machine
Follow the steps below to configure the Infoblox CDC to send BloxOne data to Mic
2. Navigate to **Manage > Data Connector**.
3. Click the **Destination Configuration** tab at the top.
4. Click **Create > Syslog**.
+ - **Name**: Give the new Destination a meaningful **name**, such as **Azure-Sentinel-Destination**.
 - **Description**: Optionally give it a meaningful **description**.
 - **State**: Set the state to **Enabled**.
 - **Format**: Set the format to **CEF**.
Follow the steps below to configure the Infoblox CDC to send BloxOne data to Mic
 - Click **Save & Close**.
5. Click the **Traffic Flow Configuration** tab at the top.
6. Click **Create**.
+ - **Name**: Give the new Traffic Flow a meaningful **name**, such as **Azure-Sentinel-Flow**.
 - **Description**: Optionally give it a meaningful **description**.
 - **State**: Set the state to **Enabled**.
 - Expand the **CDC Enabled Host** section.
Make sure to configure the machine's security according to your organization's s
## Next steps
For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/infoblox.infoblox-cdc-solution?tab=Overview) in the Azure Marketplace.
sentinel Isc Bind https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/isc-bind.md
Title: "ISC Bind connector for Microsoft Sentinel"
description: "Learn how to install the connector ISC Bind to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 03/25/2023
The [ISC Bind](https://www.isc.org/bind/) connector allows you to easily connect
| Connector attribute | Description |
| --- | --- |
-| **Kusto function alias** | ISCBind |
-| **Kusto function url** | https://aka.ms/sentinel-iscbind-parser |
| **Log Analytics table(s)** | Syslog (ISCBind)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](../../azure-monitor/logs/tutorial-workspace-transformations-portal.md) |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
| **Supported by** | [Microsoft Corporation](https://support.microsoft.com/) |

## Query samples
To integrate with ISC Bind make sure you have:
## Vendor installation instructions
->This data connector depends on a parser based on a Kusto Function to work as expected. [Follow the steps](https://aka.ms/sentinel-iscbind-parser) to use the Kusto function alias, **ISCBind**
+**NOTE:** This data connector depends on a parser based on a Kusto Function to work as expected, which is deployed as part of the solution. To view the function code in Log Analytics, open the Log Analytics/Microsoft Sentinel Logs blade, click **Functions**, search for the alias ISCBind, and load the function code, or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/ISC%20Bind/Parsers/ISCBind.txt). The function usually takes 10-15 minutes to activate after solution installation or update.
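Once the parser is active, a minimal sketch to verify it returns data:

```kusto
// Sketch: query the ISCBind parser alias for recent DNS events.
ISCBind
| where TimeGenerated > ago(24h)
| take 10
```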
1. Install and onboard the agent for Linux
Configure the facilities you want to collect and their severities.
## Next steps
For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-iscbind?tab=Overview) in the Azure Marketplace.
sentinel Ivanti Unified Endpoint Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/ivanti-unified-endpoint-management.md
Title: "Ivanti Unified Endpoint Management connector for Microsoft Sentinel"
description: "Learn how to install the connector Ivanti Unified Endpoint Management to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 03/25/2023
The [Ivanti Unified Endpoint Management](https://www.ivanti.com/products/unified
| Connector attribute | Description |
| --- | --- |
| **Log Analytics table(s)** | Syslog (IvantiUEMEvent)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](../../azure-monitor/logs/tutorial-workspace-transformations-portal.md) |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |

## Query samples
Install the agent on the Server where the Ivanti Unified Endpoint Management Ale
## Next steps
For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-ivantiuem?tab=Overview) in the Azure Marketplace.
sentinel Juniper Srx https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/juniper-srx.md
Title: "Juniper SRX connector for Microsoft Sentinel"
description: "Learn how to install the connector Juniper SRX to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 03/25/2023
The [Juniper SRX](https://www.juniper.net/us/en/products-services/security/srx-s
| Connector attribute | Description |
| --- | --- |
-| **Kusto function alias** | JuniperSRX |
-| **Kusto function url** | https://aka.ms/sentinel-junipersrx-parser |
| **Log Analytics table(s)** | Syslog (JuniperSRX)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](../../azure-monitor/logs/tutorial-workspace-transformations-portal.md) |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |

## Query samples
To integrate with Juniper SRX make sure you have:
## Vendor installation instructions
->This data connector depends on a parser based on a Kusto Function to work as expected. [Follow the steps](https://aka.ms/sentinel-junipersrx-parser) to use the Kusto function alias, **JuniperSRX**
+> [!NOTE]
 > This data connector depends on a parser based on a Kusto Function to work as expected, which is deployed as part of the solution. To view the function code in Log Analytics, open the Log Analytics/Microsoft Sentinel Logs blade, click **Functions**, search for the alias JuniperSRX, and load the function code, or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/Juniper%20SRX/Parsers/JuniperSRX.txt). On the second line of the query, enter the hostname(s) of your JuniperSRX device(s) and any other unique identifiers for the logstream. The function usually takes 10-15 minutes to activate after solution installation or update.
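After the function activates, a minimal sketch to verify the parser returns data:

```kusto
// Sketch: query the JuniperSRX parser alias for recent firewall events.
JuniperSRX
| where TimeGenerated > ago(24h)
| take 10
```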
1. Install and onboard the agent for Linux
Configure the facilities you want to collect and their severities.
## Next steps
For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-junipersrx?tab=Overview) in the Azure Marketplace.
sentinel Kaspersky Security Center https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/kaspersky-security-center.md
Title: "Kaspersky Security Center connector for Microsoft Sentinel"
description: "Learn how to install the connector Kaspersky Security Center to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 03/25/2023
The [Kaspersky Security Center](https://support.kaspersky.com/KSC/13/en-US/3396.
| Connector attribute | Description |
| --- | --- |
| **Log Analytics table(s)** | CommonSecurityLog (KasperskySC)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](../../azure-monitor/logs/tutorial-workspace-transformations-portal.md) |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |

## Query samples
Make sure to configure the machine's security according to your organization's s
## Next steps
For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-kasperskysc?tab=Overview) in the Azure Marketplace.
sentinel Marklogic Audit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/marklogic-audit.md
Title: "MarkLogic Audit connector for Microsoft Sentinel"
description: "Learn how to install the connector MarkLogic Audit to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 03/25/2023
MarkLogic data connector provides the capability to ingest [MarkLogicAudit](http
| Connector attribute | Description |
| --- | --- |
-| **Kusto function alias** | MarkLogicAudit |
-| **Kusto function url** | https://aka.ms/sentinel-marklogicaudit-parser |
| **Log Analytics table(s)** | MarkLogicAudit_CL<br/> |
| **Data collection rules support** | Not currently supported |
| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |
MarkLogicAudit_CL
> [!NOTE]
- > This data connector depends on a parser based on a Kusto Function to work as expected. [Follow these steps](https://aka.ms/sentinel-marklogicaudit-parser) to create the Kusto Functions alias, **MarkLogicAudit**
 > This data connector depends on a parser based on a Kusto Function to work as expected, which is deployed as part of the solution. To view the function code, open the Log Analytics/Microsoft Sentinel Logs blade, click **Functions**, search for the alias MarkLogicAudit, and load the function code. The function usually takes 10-15 minutes to activate after solution installation or update.
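Once the function is active, a minimal sketch to verify the parser returns data:

```kusto
// Sketch: query the MarkLogicAudit parser alias for recent audit events.
MarkLogicAudit
| where TimeGenerated > ago(24h)
| take 10
```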
1. Install and onboard the agent for Linux or Windows
sentinel Mcafee Epolicy Orchestrator Epo https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/mcafee-epolicy-orchestrator-epo.md
Title: "McAfee ePolicy Orchestrator (ePO) connector for Microsoft Sentinel"
description: "Learn how to install the connector McAfee ePolicy Orchestrator (ePO) to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 03/25/2023
The McAfee ePolicy Orchestrator data connector provides the capability to ingest
| **Kusto function alias** | McAfeeEPOEvent |
| **Kusto function url** | https://aka.ms/sentinel-McAfeeePO-parser |
| **Log Analytics table(s)** | Syslog (McAfeeePO)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](../../azure-monitor/logs/tutorial-workspace-transformations-portal.md) |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
| **Supported by** | [Microsoft Corporation](https://support.microsoft.com/) |

## Query samples
Configure the facilities you want to collect and their severities.
## Next steps
For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-mcafeeepo?tab=Overview) in the Azure Marketplace.
sentinel Mcafee Network Security Platform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/mcafee-network-security-platform.md
Title: "McAfee Network Security Platform connector for Microsoft Sentinel"
description: "Learn how to install the connector McAfee Network Security Platform to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 03/25/2023
The [McAfee® Network Security Platform](https://www.mcafee.com/enterprise/en-us
| Connector attribute | Description |
| --- | --- |
| **Log Analytics table(s)** | Syslog (McAfeeNSPEvent)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](../../azure-monitor/logs/tutorial-workspace-transformations-portal.md) |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |

## Query samples
Follow the configuration steps below to get McAfee® Network Security Platform l
## Next steps
For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-mcafeensp?tab=Overview) in the Azure Marketplace.
sentinel Mongodb Audit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/mongodb-audit.md
Title: "MongoDB Audit connector for Microsoft Sentinel"
description: "Learn how to install the connector MongoDB Audit to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 03/25/2023
MongoDB data connector provides the capability to ingest [MongoDBAudit](https://
| Connector attribute | Description |
| --- | --- |
-| **Kusto function alias** | MongoDBAudit |
-| **Kusto function url** | https://aka.ms/sentinel-mongodbaudit-parser |
| **Log Analytics table(s)** | MongoDBAudit_CL<br/> |
| **Data collection rules support** | Not currently supported |
| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |
MongoDBAudit
> [!NOTE]
- > This data connector depends on a parser based on a Kusto Function to work as expected. [Follow these steps](https://aka.ms/sentinel-mongodbaudit-parser) to create the Kusto Functions alias, **MongoDBAudit**
 > This data connector depends on a parser based on a Kusto Function to work as expected, which is deployed as part of the solution. To view the function code, open the Log Analytics/Microsoft Sentinel Logs blade, click **Functions**, search for the alias MongoDBAudit, and load the function code. The function usually takes 10-15 minutes to activate after solution installation or update.
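Once the function is active, a minimal sketch to verify the parser returns data:

```kusto
// Sketch: query the MongoDBAudit parser alias for recent audit events.
MongoDBAudit
| where TimeGenerated > ago(24h)
| take 10
```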
1. Install and onboard the agent for Linux or Windows
sentinel Morphisec Utpp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/morphisec-utpp.md
Title: "Morphisec UTPP connector for Microsoft Sentinel"
description: "Learn how to install the connector Morphisec UTPP to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 03/25/2023
Integrate vital insights from your security products with the Morphisec Data Con
| **Kusto function url** | https://aka.ms/sentinel-morphisecutpp-parser |
| **Log Analytics table(s)** | CommonSecurityLog (Morphisec)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](../../azure-monitor/logs/tutorial-workspace-transformations-portal.md) |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
| **Supported by** | [Morphisec](https://support.morphisec.com/support/home) |

## Query samples
Make sure to configure the machine's security according to your organization's s
## Next steps
For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/morphisec.morphisec_utpp_mss?tab=Overview) in the Azure Marketplace.
sentinel Netskope Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/netskope-using-azure-function.md
Title: "Netskope (using Azure Function) connector for Microsoft Sentinel"
description: "Learn how to install the connector Netskope (using Azure Function) to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 03/25/2023
The [Netskope Cloud Security Platform](https://www.netskope.com/platform) connec
| Connector attribute | Description |
| --- | --- |
| **Application settings** | apikey<br/>workspaceID<br/>workspaceKey<br/>uri<br/>timeInterval<br/>logTypes<br/>logAnalyticsUri (optional) |
-| **Azure function app code** | https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/Netskope/AzureFunctionNetskope/run.ps1 |
-| **Kusto function alias** | Netskope |
-| **Kusto function url** | https://aka.ms/sentinel-netskope-parser |
+| **Azure function app code** | https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/Netskope/Data%20Connectors/Netskope/AzureFunctionNetskope/run.ps1 |
| **Log Analytics table(s)** | Netskope_CL<br/> |
| **Data collection rules support** | Not currently supported |
| **Supported by** | [Netskope](https://www.netskope.com/services#support) |
To integrate with Netskope (using Azure Function) make sure you have:
> This connector uses Azure Functions to connect to Netskope to pull logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details.
->This data connector depends on a parser based on a Kusto Function to work as expected. [Follow the steps](https://aka.ms/sentinel-netskope-parser) to use the Kusto function alias, **Netskope**
+> [!NOTE]
 > This data connector depends on a parser based on a Kusto Function to work as expected, which is deployed as part of the solution. To view the function code in Log Analytics, open the Log Analytics/Microsoft Sentinel Logs blade, click **Functions**, search for the alias Netskope, and load the function code, or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/Netskope/Parsers/Netskope.txt). On the second line of the query, enter the hostname(s) of your Netskope device(s) and any other unique identifiers for the logstream. The function usually takes 10-15 minutes to activate after solution installation or update.
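After the Function App has run at least once, a minimal sketch to confirm data is arriving in the custom table:

```kusto
// Sketch: confirm Netskope events are landing in the custom log table.
Netskope_CL
| where TimeGenerated > ago(1d)
| take 10
```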
>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](https://learn.microsoft.com/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
This method provides the step-by-step instructions to deploy the Netskope connec
2. Select **Timer Trigger**.
3. Enter a unique Function **Name** and modify the cron schedule, if needed. The default value is set to run the Function App every 5 minutes. (Note: the Timer trigger should match the `timeInterval` value below to prevent overlapping data.) Click **Create**.
4. Click on **Code + Test** on the left pane.
-5. Copy the [Function App Code](https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/Netskope/AzureFunctionNetskope/run.ps1) and paste into the Function App `run.ps1` editor.
+5. Copy the [Function App Code](https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/Netskope/Data%20Connectors/Netskope/AzureFunctionNetskope/run.ps1) and paste into the Function App `run.ps1` editor.
6. Click **Save**.
sentinel Netwrix Auditor Formerly Stealthbits Privileged Activity Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/netwrix-auditor-formerly-stealthbits-privileged-activity-manager.md
Title: "Netwrix Auditor (formerly Stealthbits Privileged Activity Manager) conne
description: "Learn how to install the connector Netwrix Auditor (formerly Stealthbits Privileged Activity Manager) to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 03/25/2023
Netwrix Auditor data connector provides the capability to ingest [Netwrix Audito
| **Kusto function alias** | NetwrixAuditor |
| **Kusto function url** | https://aka.ms/sentinel-netwrixauditor-parser |
| **Log Analytics table(s)** | CommonSecurityLog<br/> |
-| **Data collection rules support** | [Workspace transform DCR](../../azure-monitor/logs/tutorial-workspace-transformations-portal.md) |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |

## Query samples
Make sure to configure the machine's security according to your organization's s
## Next steps
For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-netwrixauditor?tab=Overview) in the Azure Marketplace.
sentinel Nozomi Networks N2os https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/nozomi-networks-n2os.md
Title: "Nozomi Networks N2OS connector for Microsoft Sentinel"
description: "Learn how to install the connector Nozomi Networks N2OS to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 03/25/2023
The [Nozomi Networks](https://www.nozominetworks.com/) data connector provides t
| Connector attribute | Description |
| --- | --- |
| **Log Analytics table(s)** | CommonSecurityLog (NozomiNetworks)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](../../azure-monitor/logs/tutorial-workspace-transformations-portal.md) |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |

## Query samples
Make sure to configure the machine's security according to your organization's s
## Next steps
For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-nozominetworks?tab=Overview) in the Azure Marketplace.
sentinel Openvpn Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/openvpn-server.md
Title: "OpenVPN Server connector for Microsoft Sentinel"
description: "Learn how to install the connector OpenVPN Server to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 03/25/2023
The [OpenVPN](https://github.com/OpenVPN) data connector provides the capability
| Connector attribute | Description |
| --- | --- |
| **Log Analytics table(s)** | Syslog (OpenVPN)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](../../azure-monitor/logs/tutorial-workspace-transformations-portal.md) |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |

## Query samples
OpenVPN server logs are written into common syslog file (depending on the Linux
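A minimal sketch of such a query (assumption: the ProcessName value depends on how your distribution tags OpenVPN messages):

```kusto
// Sketch: OpenVPN writes to the common syslog file, so filter the Syslog table.
// The ProcessName value is an assumption - match it to your distribution's tag.
Syslog
| where ProcessName has "openvpn"
| take 10
```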
## Next steps
For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-openvpn?tab=Overview) in the Azure Marketplace.
sentinel Oracle Database Audit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/oracle-database-audit.md
Title: "Oracle Database Audit connector for Microsoft Sentinel"
description: "Learn how to install the connector Oracle Database Audit to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 03/25/2023
The Oracle DB Audit data connector provides the capability to ingest [Oracle Dat
| Connector attribute | Description |
| --- | --- |
-| **Kusto function alias** | OracleDatabaseAuditEvent |
-| **Kusto function url** | https://aka.ms/sentinel-OracleDatabaseAudit-parser |
| **Log Analytics table(s)** | Syslog (OracleDatabaseAudit)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](../../azure-monitor/logs/tutorial-workspace-transformations-portal.md) |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |

## Query samples
OracleDatabaseAuditEvent
## Vendor installation instructions
->This data connector depends on a parser based on a Kusto Function to work as expected. [Follow these steps](https://aka.ms/sentinel-OracleDatabaseAudit-parser) to create the Kusto Functions alias, **OracleDatabaseAuditEvent**
+**NOTE:** This data connector depends on a parser based on a Kusto Function to work as expected, which is deployed as part of the solution. To view the function code in Log Analytics, open the Log Analytics/Microsoft Sentinel Logs blade, click **Functions**, search for the alias OracleDatabaseAuditEvent, and load the function code, or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/OracleDatabaseAudit/Parsers/OracleDatabaseAuditEvent.txt). The function usually takes 10-15 minutes to activate after solution installation or update.
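Once the function is active, a minimal sketch to verify the parser returns data:

```kusto
// Sketch: query the OracleDatabaseAuditEvent parser alias for recent audit events.
OracleDatabaseAuditEvent
| where TimeGenerated > ago(24h)
| take 10
```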
1. Install and onboard the agent for Linux
For more information please refer to [documentation](https://docs.oracle.com/en/
## Next steps
For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-oracledbaudit?tab=Overview) in the Azure Marketplace.
sentinel Ossec https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/ossec.md
Title: "OSSEC connector for Microsoft Sentinel"
description: "Learn how to install the connector OSSEC to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 03/25/2023 # OSSEC connector for Microsoft Sentinel
-OSSEC data connector provides the capability to ingest [OSSEC](https://www.ossec.net/) events into Azure Sentinel. Refer to [OSSEC documentation](https://www.ossec.net/docs) for more information.
+OSSEC data connector provides the capability to ingest [OSSEC](https://www.ossec.net/) events into Microsoft Sentinel. Refer to [OSSEC documentation](https://www.ossec.net/docs) for more information.
## Connector attributes

| Connector attribute | Description |
| --- | --- |
-| **Kusto function alias** | OSSECEvent |
-| **Kusto function url** | https://aka.ms/sentinel-OSSEC-parser |
| **Log Analytics table(s)** | CommonSecurityLog (OSSEC)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](../../azure-monitor/logs/tutorial-workspace-transformations-portal.md) |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
| **Supported by** | [Microsoft Corporation](https://support.microsoft.com/) |

## Query samples
OSSECEvent
## Vendor installation instructions
->This data connector depends on a parser based on a Kusto Function to work as expected. [Follow these steps](https://aka.ms/sentinel-OSSEC-parser) to create the Kusto Functions alias, **OSSECEvent**
+> [!NOTE]
 > This data connector depends on a parser based on a Kusto Function to work as expected, which is deployed as part of the solution. To view the function code in Log Analytics, open the Log Analytics/Microsoft Sentinel Logs blade, click **Functions**, search for the alias OSSECEvent, and load the function code, or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/OSSEC/Parsers/OSSECEvent.txt). On the second line of the query, enter the hostname(s) of your OSSEC device(s) and any other unique identifiers for the logstream. The function usually takes 10-15 minutes to activate after solution installation or update.
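Once the function is active, a minimal sketch to verify the parser returns data:

```kusto
// Sketch: query the OSSECEvent parser alias for recent events.
OSSECEvent
| where TimeGenerated > ago(24h)
| take 10
```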
1. Linux Syslog agent configuration
-Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Azure Sentinel.
+Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel.
> Notice that the data from all regions will be stored in the selected workspace

1.1 Select or create a Linux machine
-Select or create a Linux machine that Azure Sentinel will use as the proxy between your security solution and Azure Sentinel this machine can be on your on-prem environment, Azure or other clouds.
+Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel. This machine can be in your on-premises environment, Azure, or other clouds.
1.2 Install the CEF collector on the Linux machine
-Install the Microsoft Monitoring Agent on your Linux machine and configure the machine to listen on the necessary port and forward messages to your Azure Sentinel workspace. The CEF collector collects CEF messages on port 514 TCP.
+Install the Microsoft Monitoring Agent on your Linux machine and configure the machine to listen on the necessary port and forward messages to your Microsoft Sentinel workspace. The CEF collector collects CEF messages on port 514 TCP.
> 1. Make sure that you have Python on your machine using the following command: python --version.
Make sure to configure the machine's security according to your organization's s
## Next steps
For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-ossec?tab=Overview) in the Azure Marketplace.
sentinel Palo Alto Networks Cortex Data Lake Cdl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/palo-alto-networks-cortex-data-lake-cdl.md
Title: "Palo Alto Networks Cortex Data Lake (CDL) connector for Microsoft Sentin
description: "Learn how to install the connector Palo Alto Networks Cortex Data Lake (CDL) to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 03/25/2023
The [Palo Alto Networks CDL](https://www.paloaltonetworks.com/cortex/cortex-data
| Connector attribute | Description |
| --- | --- |
| **Log Analytics table(s)** | CommonSecurityLog (PaloAltoNetworksCDL)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](../../azure-monitor/logs/tutorial-workspace-transformations-portal.md) |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |

## Query samples
Make sure to configure the machine's security according to your organization's s
## Next steps
For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-paloaltocdl?tab=Overview) in the Azure Marketplace.
sentinel Palo Alto Networks Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/palo-alto-networks-firewall.md
Title: "Palo Alto Networks (Firewall) connector for Microsoft Sentinel"
description: "Learn how to install the connector Palo Alto Networks (Firewall) to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 03/25/2023
The Palo Alto Networks firewall connector allows you to easily connect your Palo
| Connector attribute | Description |
| --- | --- |
| **Log Analytics table(s)** | CommonSecurityLog (PaloAlto)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](../../azure-monitor/logs/tutorial-workspace-transformations-portal.md) |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | ## Query samples
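A hedged example, assuming the firewall reports DeviceVendor as "Palo Alto Networks", summarizes recent activity:
```kusto
// Illustrative sample; the vendor string is an assumption
CommonSecurityLog
| where DeviceVendor == "Palo Alto Networks"
| summarize count() by Activity
| top 10 by count_
```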
Make sure to configure the machine's security according to your organization's s
## Next steps
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-paloaltopanos?tab=Overview) in the Azure Marketplace.
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-paloaltopanos?tab=Overview) in the Azure Marketplace.
sentinel Pingfederate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/pingfederate.md
Title: "PingFederate connector for Microsoft Sentinel"
description: "Learn how to install the connector PingFederate to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 03/25/2023
The [PingFederate](https://www.pingidentity.com/en/software/pingfederate.html) d
| Connector attribute | Description | | | | | **Log Analytics table(s)** | CommonSecurityLog (PingFederate)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](../../azure-monitor/logs/tutorial-workspace-transformations-portal.md) |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | ## Query samples
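A minimal sketch, assuming PingFederate CEF events carry a DeviceProduct value containing "PingFederate":
```kusto
// Illustrative sample; the product filter is an assumption
CommonSecurityLog
| where DeviceProduct has "PingFederate"
| sort by TimeGenerated desc
| take 10
```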
Make sure to configure the machine's security according to your organization's s
## Next steps
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-pingfederate?tab=Overview) in the Azure Marketplace.
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-pingfederate?tab=Overview) in the Azure Marketplace.
sentinel Pulse Connect Secure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/pulse-connect-secure.md
Title: "Pulse Connect Secure connector for Microsoft Sentinel"
description: "Learn how to install the connector Pulse Connect Secure to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 03/25/2023
The [Pulse Connect Secure](https://www.pulsesecure.net/products/pulse-connect-se
| Connector attribute | Description | | | |
-| **Kusto function alias** | PulseConnectSecure |
-| **Kusto function url** | https://aka.ms/sentinelgithubparserspulsesecurevpn |
| **Log Analytics table(s)** | Syslog (PulseConnectSecure)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](../../azure-monitor/logs/tutorial-workspace-transformations-portal.md) |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | ## Query samples
To integrate with Pulse Connect Secure make sure you have:
## Vendor installation instructions
->This data connector depends on a parser based on a Kusto Function to work as expected. [Follow the steps](https://aka.ms/sentinelgithubparserspulsesecurevpn) to use this Kusto functions alias, **PulseConnectSecure**
+> [!NOTE]
+ > This data connector depends on a parser based on a Kusto Function, which is deployed as part of the solution and is required for the connector to work as expected. To view the function code in Log Analytics, open the Log Analytics/Microsoft Sentinel Logs blade, click **Functions**, search for the alias **PulseConnectSecure**, and load the function code, or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/Pulse%20Connect%20Secure/Parsers/PulseConnectSecure.txt). On the second line of the query, enter the hostname(s) of your Pulse Connect Secure device(s) and any other unique identifiers for the logstream. The function usually takes 10-15 minutes to activate after a solution installation or update.
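Once the function is active, a minimal sketch query against the parser alias might look like:
```kusto
// Illustrative sample using the PulseConnectSecure parser alias named above
PulseConnectSecure
| sort by TimeGenerated desc
| take 10
```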
1. Install and onboard the agent for Linux
Configure the facilities you want to collect and their severities.
## Next steps
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-pulseconnectsecure?tab=Overview) in the Azure Marketplace.
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-pulseconnectsecure?tab=Overview) in the Azure Marketplace.
sentinel Qualys Vm Knowledgebase Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/qualys-vm-knowledgebase-using-azure-function.md
Title: "Qualys VM KnowledgeBase (using Azure Function) connector for Microsoft S
description: "Learn how to install the connector Qualys VM KnowledgeBase (using Azure Function) to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 03/25/2023
The [Qualys Vulnerability Management (VM)](https://www.qualys.com/apps/vulnerability-management/) KnowledgeBase (KB) connector provides the capability to ingest the latest vulnerability data from the Qualys KB into Azure Sentinel.
- This data can used to correlate and enrich vulnerability detections found by the [Qualys Vulnerability Management (VM)](/azure/sentinel/connect-qualys-vm) data connector.
+ This data can be used to correlate and enrich vulnerability detections found by the [Qualys Vulnerability Management (VM)](https://docs.microsoft.com/azure/sentinel/connect-qualys-vm) data connector.
## Connector attributes
The [Qualys Vulnerability Management (VM)](https://www.qualys.com/apps/vulnerabi
| | | | **Application settings** | apiUsername<br/>apiPassword<br/>workspaceID<br/>workspaceKey<br/>uri<br/>filterParameters<br/>logAnalyticsUri (optional) | | **Azure function app code** | https://aka.ms/sentinel-qualyskb-functioncode |
-| **Kusto function alias** | QualysKB |
-| **Kusto function url** | https://aka.ms/sentinel-qualyskb-parser |
| **Log Analytics table(s)** | QualysKB_CL<br/> | | **Data collection rules support** | Not currently supported | | **Supported by** | [Microsoft Corporation](https://support.microsoft.com/) |
To integrate with Qualys VM KnowledgeBase (using Azure Function) make sure you h
## Vendor installation instructions
-> [!NOTE]
- > This connector uses Azure Functions to connect to Qualys KB connector to pull logs into Azure Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details.
+**NOTE:** This data connector depends on a parser based on a Kusto Function, which is deployed as part of the solution and is required for the connector to work as expected. To view the function code in Log Analytics, open the Log Analytics/Microsoft Sentinel Logs blade, click **Functions**, search for the alias **QualysKB**, and load the function code, or follow the steps at https://aka.ms/sentinel-qualyskb-parser. The function usually takes 10-15 minutes to activate after a solution installation or update.
>This data connector depends on a parser based on a Kusto Function to work as expected. [Follow the steps](https://aka.ms/sentinel-qualyskb-parser) to use the Kusto function alias, **QualysKB**
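After the parser is active, a minimal sketch query against the alias might look like:
```kusto
// Illustrative sample using the QualysKB parser alias named above
QualysKB
| take 10
```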
This method provides the step-by-step instructions to deploy the Qualys KB conne
## Next steps
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-qualysvmknowledgebase?tab=Overview) in the Azure Marketplace.
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-qualysvmknowledgebase?tab=Overview) in the Azure Marketplace.
sentinel Rapid7 Insight Platform Vulnerability Management Reports Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/rapid7-insight-platform-vulnerability-management-reports-using-azure-function.md
Title: "Rapid7 Insight Platform Vulnerability Management Reports (using Azure Fu
description: "Learn how to install the connector Rapid7 Insight Platform Vulnerability Management Reports (using Azure Function) to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 03/25/2023
Use the following step-by-step instructions to deploy the Rapid7 Insight Vulnera
**1. Deploy a Function App** > **NOTE:** You will need to [prepare VS Code](https://aka.ms/sentinel-InsightVMCloudAPI-functionapp) and download the function app code file. Extract the archive to your local development computer. 1. Start VS Code. Choose **File** in the main menu and select **Open Folder**. 1. Select the top-level folder from the extracted files. 1. Choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose the **Deploy to function app** button.
If you aren't already signed in, choose the Azure icon in the Activity bar, then
If you're already signed in, go to the next step. 1. Provide the following information at the prompts:
- 1. **Select folder:** Choose a folder from your workspace or browse to one that contains your function app.
- 1. **Select Subscription:** Choose the subscription to use.
- 1. Select **Create new Function App in Azure** (Don't choose the Advanced option)
- 1. **Enter a globally unique name for the function app:** Type a name that is valid in a URL path. The name you type is validated to make sure that it's unique in Azure Functions. (e.g. InsightVMXXXXX).
- 1. **Select a runtime:** Choose Python 3.8.
- 1. Select a location for new resources. For better performance and lower costs choose the same [region](https://azure.microsoft.com/regions/) where Microsoft Sentinel is located.
+ a. **Select folder:** Choose a folder from your workspace or browse to one that contains your function app.
+
+ b. **Select Subscription:** Choose the subscription to use.
+
+ c. Select **Create new Function App in Azure** (Don't choose the Advanced option)
+
+ d. **Enter a globally unique name for the function app:** Type a name that is valid in a URL path. The name you type is validated to make sure that it's unique in Azure Functions. (e.g. InsightVMXXXXX).
+
+ e. **Select a runtime:** Choose Python 3.8.
+
+ f. Select a location for new resources. For better performance and lower costs choose the same [region](https://azure.microsoft.com/regions/) where Microsoft Sentinel is located.
1. Deployment will begin. A notification is displayed after your function app is created and the deployment package is applied. 1. Go to Azure Portal for the Function App configuration.
sentinel Rsa Securid Authentication Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/rsa-securid-authentication-manager.md
Title: "RSA® SecurID (Authentication Manager) connector for Microsoft Sentinel"
description: "Learn how to install the connector RSA® SecurID (Authentication Manager) to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 03/25/2023
The [RSA® SecurID Authentication Manager](https://www.securid.com/) data connec
| Connector attribute | Description | | | | | **Log Analytics table(s)** | Syslog (RSASecurIDAMEvent)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](../../azure-monitor/logs/tutorial-workspace-transformations-portal.md) |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | ## Query samples
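A minimal sketch, assuming the RSASecurIDAMEvent parser alias shown in the attributes table is active:
```kusto
// Illustrative sample; alias per the attributes table above
RSASecurIDAMEvent
| where TimeGenerated > ago(1d)
| take 10
```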
Follow the configuration steps below to get RSA® SecurID Authentication Manager
## Next steps
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-securid?tab=Overview) in the Azure Marketplace.
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-securid?tab=Overview) in the Azure Marketplace.
sentinel Security Events Via Legacy Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/security-events-via-legacy-agent.md
Title: "Security Events via Legacy Agent connector for Microsoft Sentinel"
description: "Learn how to install the connector Security Events via Legacy Agent to connect your data source to Microsoft Sentinel." Previously updated : 02/28/2023 Last updated : 03/25/2023
You can stream all security events from the Windows machines connected to your M
| Connector attribute | Description | | | | | **Log Analytics table(s)** | SecurityEvents<br/> |
-| **Data collection rules support** | [Workspace transform DCR](../../azure-monitor/logs/tutorial-workspace-transformations-portal.md) |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | ## Next steps
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-securityevents?tab=Overview) in the Azure Marketplace.
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-securityevents?tab=Overview) in the Azure Marketplace.
sentinel Sentinelone Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/sentinelone-using-azure-function.md
Title: "SentinelOne (using Azure Function) connector for Microsoft Sentinel"
description: "Learn how to install the connector SentinelOne (using Azure Function) to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 03/25/2023
The [SentinelOne](https://www.sentinelone.com/) data connector provides the capa
| | | | **Application settings** | SentinelOneAPIToken<br/>SentinelOneUrl<br/>WorkspaceID<br/>WorkspaceKey<br/>logAnalyticsUri (optional) | | **Azure function app code** | https://aka.ms/sentinel-SentinelOneAPI-functionapp |
-| **Kusto function alias** | SentinelOne |
-| **Kusto function url** | https://aka.ms/sentinel-SentinelOneAPI-parser |
| **Log Analytics table(s)** | SentinelOne_CL<br/> | | **Data collection rules support** | Not currently supported | | **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |
To integrate with SentinelOne (using Azure Function) make sure you have:
> [!NOTE]
- > This data connector depends on a parser based on a Kusto Function to work as expected. [Follow these steps](https://aka.ms/sentinel-SentinelOneAPI-parser) to create the Kusto functions alias, **SentinelOne**
+ > This data connector depends on a parser based on a Kusto Function, which is deployed as part of the solution and is required for the connector to work as expected. To view the function code in Log Analytics, open the Log Analytics/Microsoft Sentinel Logs blade, click **Functions**, search for the alias **SentinelOne**, and load the function code, or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/SentinelOne/Parsers/SentinelOne.txt). The function usually takes 10-15 minutes to activate after a solution installation or update.
**STEP 1 - Configuration steps for the SentinelOne API**
To integrate with SentinelOne (using Azure Function) make sure you have:
7. Save credentials of the new user for using in the data connector.
-**NOTE :- **Admin access can be delegated using custom roles. Please review SentinelOne [documentation](https://www.sentinelone.com/blog/feature-spotlight-fully-custom-role-based-access-control/) to learn more about custom RBAC.
+**NOTE:** Admin access can be delegated using custom roles. Please review the SentinelOne [documentation](https://www.sentinelone.com/blog/feature-spotlight-fully-custom-role-based-access-control/) to learn more about custom RBAC.
**STEP 2 - Choose ONE from the following two deployment options to deploy the connector and the associated Azure Function**
If you're already signed in, go to the next step.
**2. Configure the Function App**
-1. In the Function App, select the Function App Name and select **Configuration**.
-2. In the **Application settings** tab, select ** New application setting**.
-3. Add each of the following application settings individually, with their respective string values (case-sensitive):
- SentinelOneAPIToken
- SentinelOneUrl
- WorkspaceID
- WorkspaceKey
- logAnalyticsUri (optional)
+ 1. In the Function App, select the Function App Name and select **Configuration**.
+
+ 2. In the **Application settings** tab, select **New application setting**.
+
+ 3. Add each of the following application settings individually, with their respective string values (case-sensitive):
+ SentinelOneAPIToken
+ SentinelOneUrl
+ WorkspaceID
+ WorkspaceKey
+ logAnalyticsUri (optional)
+ > - Use logAnalyticsUri to override the log analytics API endpoint for dedicated cloud. For example, for public cloud, leave the value empty; for Azure GovUS cloud environment, specify the value in the following format: `https://<CustomerId>.ods.opinsights.azure.us`.
-4. Once all application settings have been entered, click **Save**.
+
+ 4. Once all application settings have been entered, click **Save**.
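Once the function app runs, a minimal sketch query against the connector's custom table can verify ingestion:
```kusto
// Illustrative check of the SentinelOne_CL table named in the attributes table
SentinelOne_CL
| where TimeGenerated > ago(1d)
| take 10
```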
sentinel Sonicwall Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/sonicwall-firewall.md
Title: "SonicWall Firewall connector for Microsoft Sentinel"
description: "Learn how to install the connector SonicWall Firewall to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 03/25/2023
Common Event Format (CEF) is an industry standard format on top of Syslog messag
| Connector attribute | Description | | | | | **Log Analytics table(s)** | CommonSecurityLog (SonicWall)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](../../azure-monitor/logs/tutorial-workspace-transformations-portal.md) |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
| **Supported by** | [SonicWall](https://www.sonicwall.com/support/) | ## Query samples
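A hedged example, assuming SonicWall events report DeviceVendor as "SonicWall":
```kusto
// Illustrative sample; the vendor string is an assumption
CommonSecurityLog
| where DeviceVendor == "SonicWall"
| summarize count() by DeviceEventClassID
| top 10 by count_
```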
Make sure to configure the machine's security according to your organization's s
## Next steps
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/sonicwall-inc.sonicwall-networksecurity-azure-sentinal?tab=Overview) in the Azure Marketplace.
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/sonicwall-inc.sonicwall-networksecurity-azure-sentinal?tab=Overview) in the Azure Marketplace.
sentinel Sophos Xg Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/sophos-xg-firewall.md
Title: "Sophos XG Firewall connector for Microsoft Sentinel"
description: "Learn how to install the connector Sophos XG Firewall to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 03/25/2023
The [Sophos XG Firewall](https://www.sophos.com/products/next-gen-firewall.aspx)
| Connector attribute | Description | | | |
-| **Kusto function alias** | SophosXGFirewall |
-| **Kusto function url** | https://aka.ms/sentinelgithubparserssophosfirewallxg |
| **Log Analytics table(s)** | Syslog (SophosXGFirewall)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](../../azure-monitor/logs/tutorial-workspace-transformations-portal.md) |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
| **Supported by** | [Microsoft Corporation](https://support.microsoft.com/) | ## Query samples
To integrate with Sophos XG Firewall make sure you have:
## Vendor installation instructions
->This data connector depends on a parser based on a Kusto Function to work as expected. [Follow these steps](https://aka.ms/sentinelgithubparserssophosfirewallxg) to create the Kusto functions alias, **SophosXGFirewall**
+> [!NOTE]
+ > This data connector depends on a parser based on a Kusto Function, which is deployed as part of the solution and is required for the connector to work as expected. To view the function code in Log Analytics, open the Log Analytics/Microsoft Sentinel Logs blade, click **Functions**, search for the alias **SophosXGFirewall**, and load the function code, or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/Sophos%20XG%20Firewall/Parsers/SophosXGFirewall.txt). On the second line of the query, enter the hostname(s) of your Sophos XG Firewall device(s) and any other unique identifiers for the logstream. The function usually takes 10-15 minutes to activate after a solution installation or update.
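Once active, the parser alias can be queried directly; a minimal sketch:
```kusto
// Illustrative sample using the SophosXGFirewall parser alias named above
SophosXGFirewall
| where TimeGenerated > ago(1h)
| take 10
```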
1. Install and onboard the agent for Linux
Configure the facilities you want to collect and their severities.
## Next steps
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-sophosxgfirewall?tab=Overview) in the Azure Marketplace.
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-sophosxgfirewall?tab=Overview) in the Azure Marketplace.
sentinel Squid Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/squid-proxy.md
Title: "Squid Proxy connector for Microsoft Sentinel"
description: "Learn how to install the connector Squid Proxy to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 03/25/2023 # Squid Proxy connector for Microsoft Sentinel
-The [Squid Proxy](http://www.squid-cache.org/) connector allows you to easily connect your Squid Proxy logs with Azure Sentinel. This gives you more insight into your organization's network proxy traffic and improves your security operation capabilities.
+The [Squid Proxy](http://www.squid-cache.org/) connector allows you to easily connect your Squid Proxy logs with Microsoft Sentinel. This gives you more insight into your organization's network proxy traffic and improves your security operation capabilities.
## Connector attributes | Connector attribute | Description | | | |
-| **Kusto function alias** | SquidProxy |
-| **Kusto function url** | https://aka.ms/sentinelgithubparsersquidproxy |
| **Log Analytics table(s)** | SquidProxy_CL<br/> | | **Data collection rules support** | Not currently supported | | **Supported by** | [Microsoft Corporation](https://support.microsoft.com/) |
SquidProxy
## Vendor installation instructions
->This data connector depends on a parser based on a Kusto Function to work as expected. [Follow the steps](https://aka.ms/sentinelgithubparsersquidproxy) to use the Kusto function alias, **SquidProxy**.
+**NOTE:** This data connector depends on a parser based on a Kusto Function, which is deployed as part of the solution and is required for the connector to work as expected. To view the function code in Log Analytics, open the Log Analytics/Microsoft Sentinel Logs blade, click **Functions**, search for the alias **SquidProxy**, and load the function code, or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/SquidProxy/Parsers/SquidProxy.txt). On the second line of the query, enter the hostname(s) of your Squid Proxy device(s) and any other unique identifiers for the logstream. The function usually takes 10-15 minutes to activate after a solution installation or update.
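A minimal sketch query against the parser alias, once it is active:
```kusto
// Illustrative sample using the SquidProxy parser alias named above
SquidProxy
| sort by TimeGenerated desc
| take 10
```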
1. Install and onboard the agent for Linux or Windows
sentinel Symantec Endpoint Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/symantec-endpoint-protection.md
Title: "Symantec Endpoint Protection connector for Microsoft Sentinel"
description: "Learn how to install the connector Symantec Endpoint Protection to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 03/25/2023
The [Broadcom Symantec Endpoint Protection (SEP)](https://www.broadcom.com/produ
| Connector attribute | Description | | | |
-| **Kusto function alias** | SEP |
-| **Kusto function url** | https://aka.ms/sentinel-SymantecEndpointProtection-parser |
| **Log Analytics table(s)** | Syslog (SymantecEndpointProtection)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](../../azure-monitor/logs/tutorial-workspace-transformations-portal.md) |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | ## Query samples
To integrate with Symantec Endpoint Protection make sure you have:
## Vendor installation instructions
->This data connector depends on a parser based on a Kusto Function to work as expected. [Follow the steps](https://aka.ms/sentinel-SymantecEndpointProtection-parser) to use the Kusto function alias, **SymantecEndpointProtection**
+**NOTE:** This data connector depends on a parser based on a Kusto Function, which is deployed as part of the solution and is required for the connector to work as expected. To view the function code in Log Analytics, open the Log Analytics/Microsoft Sentinel Logs blade, click **Functions**, search for the alias **SymantecEndpointProtection**, and load the function code, or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/Symantec%20Endpoint%20Protection/Parsers/SymantecEndpointProtection.txt). On the second line of the query, enter the hostname(s) of your Symantec Endpoint Protection device(s) and any other unique identifiers for the logstream. The function usually takes 10-15 minutes to activate after a solution installation or update.
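Once active, a minimal sketch query against the parser alias:
```kusto
// Illustrative sample using the SymantecEndpointProtection parser alias named above
SymantecEndpointProtection
| where TimeGenerated > ago(1d)
| take 10
```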
1. Install and onboard the agent for Linux
Configure the facilities you want to collect and their severities.
## Next steps
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-symantecendpointprotection?tab=Overview) in the Azure Marketplace.
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-symantecendpointprotection?tab=Overview) in the Azure Marketplace.
sentinel Symantec Proxysg https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/symantec-proxysg.md
Title: "Symantec ProxySG connector for Microsoft Sentinel"
description: "Learn how to install the connector Symantec ProxySG to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 03/25/2023
The [Symantec ProxySG](https://www.broadcom.com/products/cyber-security/network/
| Connector attribute | Description | | | |
-| **Kusto function alias** | SymantecProxySG |
-| **Kusto function url** | https://aka.ms/sentinelgithubparserssymantecproxysg |
| **Log Analytics table(s)** | Syslog (SymantecProxySG)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](../../azure-monitor/logs/tutorial-workspace-transformations-portal.md) |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
| **Supported by** | [Microsoft Corporation](https://support.microsoft.com/) | ## Query samples
To integrate with Symantec ProxySG make sure you have:
## Vendor installation instructions
->This data connector depends on a parser based on a Kusto Function to work as expected. [Follow these steps](https://aka.ms/sentinelgithubparserssymantecproxysg) to create the Kusto functions alias, **SymantecProxySG**
+> [!NOTE]
+ > This data connector depends on a parser based on a Kusto Function, which is deployed as part of the solution and is required for the connector to work as expected. To view the function code in Log Analytics, open the Log Analytics/Microsoft Sentinel Logs blade, click **Functions**, search for the alias **SymantecProxySG**, and load the function code, or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/SymantecProxySG/Parsers/SymantecProxySG/SymantecProxySG.txt). On the second line of the query, enter the hostname(s) of your Symantec ProxySG device(s) and any other unique identifiers for the logstream. The function usually takes 10-15 minutes to activate after a solution installation or update.
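A minimal sketch query once the parser alias is active:
```kusto
// Illustrative sample using the SymantecProxySG parser alias named above
SymantecProxySG
| sort by TimeGenerated desc
| take 10
```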
1. Install and onboard the agent for Linux
Configure the facilities you want to collect and their severities.
## Next steps
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-symantec-proxysg?tab=Overview) in the Azure Marketplace.
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-symantec-proxysg?tab=Overview) in the Azure Marketplace.
sentinel Symantec Vip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/symantec-vip.md
Title: "Symantec VIP connector for Microsoft Sentinel"
description: "Learn how to install the connector Symantec VIP to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 03/25/2023
The [Symantec VIP](https://vip.symantec.com/) connector allows you to easily con
| Connector attribute | Description | | | |
-| **Kusto function alias** | SymantecVIP |
-| **Kusto function url** | https://aka.ms/sentinelgithubparserssymantecvip |
| **Log Analytics table(s)** | Syslog (SymantecVIP)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](../../azure-monitor/logs/tutorial-workspace-transformations-portal.md) |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | ## Query samples
To integrate with Symantec VIP make sure you have:
## Vendor installation instructions
->This data connector depends on a parser based on a Kusto Function to work as expected. [Follow the steps](https://aka.ms/sentinelgithubparserssymantecvip) to use the Kusto function alias, **SymantecVIP**
+> [!NOTE]
+ > This data connector depends on a parser based on a Kusto Function, which is deployed as part of the solution and is required for the connector to work as expected. To view the function code in Log Analytics, open the Log Analytics/Microsoft Sentinel Logs blade, click **Functions**, search for the alias **SymantecVIP**, and load the function code, or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/Symantec%20VIP/Parsers/SymantecVIP.txt). On the second line of the query, enter the hostname(s) of your Symantec VIP device(s) and any other unique identifiers for the logstream. The function usually takes 10-15 minutes to activate after a solution installation or update.
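Once the parser is active, a minimal sketch query:
```kusto
// Illustrative sample using the SymantecVIP parser alias named above
SymantecVIP
| where TimeGenerated > ago(1d)
| take 10
```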
1. Install and onboard the agent for Linux
Configure the facilities you want to collect and their severities.
## Next steps
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-symantecvip?tab=Overview) in the Azure Marketplace.
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-symantecvip?tab=Overview) in the Azure Marketplace.
sentinel Syslog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/syslog.md
Title: "Syslog connector for Microsoft Sentinel"
description: "Learn how to install the connector Syslog to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 03/25/2023
Syslog is an event logging protocol that is common to Linux. Applications will s
| Connector attribute | Description | | | | | **Log Analytics table(s)** | Syslog<br/> |
-| **Data collection rules support** | [Workspace transform DCR](../../azure-monitor/logs/tutorial-workspace-transformations-portal.md) |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | ## Next steps
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-syslog?tab=Overview) in the Azure Marketplace.
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-syslog?tab=Overview) in the Azure Marketplace.
sentinel Talon Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/talon-insights.md
+
+ Title: "Talon Insights connector for Microsoft Sentinel"
+description: "Learn how to install the connector Talon Insights to connect your data source to Microsoft Sentinel."
+ Last updated : 03/25/2023
+# Talon Insights connector for Microsoft Sentinel
+
+The Talon Security Logs connector allows you to easily connect your Talon events and audit logs with Microsoft Sentinel, to view dashboards, create custom alerts, and improve investigation.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Log Analytics table(s)** | Talon_CL<br/> |
+| **Data collection rules support** | Not currently supported |
+| **Supported by** | [Talon Security](https://docs.console.talon-sec.com/) |
+
+## Query samples
+
+**Blocked user activities**
+ ```kusto
+Talon_CL
+ | where action_s == "blocked"
+ ```
+
+**Failed login users**
+ ```kusto
+Talon_CL
+ | where eventType_s == "loginFailed"
+ ```
+
+**Audit log changes**
+ ```kusto
+ Talon_CL
+ | where type_s == "audit"
+ ```
+## Vendor installation instructions
+Please note the values below and follow the instructions <a href='https://docs.console.talon-sec.com/en/articles/254-microsoft-sentinel-integration'>here</a> to connect your Talon Security events and audit logs with Microsoft Sentinel.
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/taloncybersecurityltd1654088115170.talonconnector?tab=Overview) in the Azure Marketplace.
sentinel Threat Intelligence Platforms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/threat-intelligence-platforms.md
Title: "Threat Intelligence Platforms connector for Microsoft Sentinel"
description: "Learn how to install the connector Threat Intelligence Platforms to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 03/25/2023 # Threat Intelligence Platforms connector for Microsoft Sentinel
-Microsoft Sentinel integrates with Microsoft Graph Security API data sources to enable monitoring, alerting, and hunting using your threat intelligence. Use this connector to send threat indicators to Microsoft Sentinel from your Threat Intelligence Platform (TIP), such as Threat Connect, Palo Alto Networks MindMeld, MISP, or other integrated applications. Threat indicators can include IP addresses, domains, URLs, and file hashes.
+Microsoft Sentinel integrates with Microsoft Graph Security API data sources to enable monitoring, alerting, and hunting using your threat intelligence. Use this connector to send threat indicators to Microsoft Sentinel from your Threat Intelligence Platform (TIP), such as Threat Connect, Palo Alto Networks MindMeld, MISP, or other integrated applications. Threat indicators can include IP addresses, domains, URLs, and file hashes. For more information, see the [Microsoft Sentinel documentation >](https://go.microsoft.com/fwlink/p/?linkid=2223729&wt.mc_id=sentinel_dataconnectordocs_content_cnl_csasci).
## Connector attributes
sentinel Threat Intelligence Taxii https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/threat-intelligence-taxii.md
Title: "Threat intelligence - TAXII connector for Microsoft Sentinel"
description: "Learn how to install the connector Threat intelligence - TAXII to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 03/25/2023 # Threat intelligence - TAXII connector for Microsoft Sentinel
-Microsoft Sentinel integrates with TAXII 2.0 and 2.1 data sources to enable monitoring, alerting, and hunting using your threat intelligence. Use this connector to send threat indicators from TAXII servers to Microsoft Sentinel. Threat indicators can include IP addresses, domains, URLs, and file hashes.
+Microsoft Sentinel integrates with TAXII 2.0 and 2.1 data sources to enable monitoring, alerting, and hunting using your threat intelligence. Use this connector to send threat indicators from TAXII servers to Microsoft Sentinel. Threat indicators can include IP addresses, domains, URLs, and file hashes. For more information, see the [Microsoft Sentinel documentation >](https://go.microsoft.com/fwlink/p/?linkid=2224105&wt.mc_id=sentinel_dataconnectordocs_content_cnl_csasci).
## Connector attributes
sentinel Trend Micro Apex One https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/trend-micro-apex-one.md
Title: "Trend Micro Apex One connector for Microsoft Sentinel"
description: "Learn how to install the connector Trend Micro Apex One to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 03/25/2023
The [Trend Micro Apex One](https://www.trendmicro.com/en_us/business/products/us
| Connector attribute | Description | | | | | **Log Analytics table(s)** | CommonSecurityLog (TrendMicroApexOne)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](../../azure-monitor/logs/tutorial-workspace-transformations-portal.md) |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | ## Query samples
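A hedged example, assuming Apex One CEF events carry "Trend Micro"/"Apex One"-style vendor and product values:
```kusto
// Illustrative sample; the vendor and product filters are assumptions
CommonSecurityLog
| where DeviceVendor has "Trend Micro" and DeviceProduct has "Apex One"
| take 10
```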
Make sure to configure the machine's security according to your organization's s
## Next steps
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-trendmicroapexone?tab=Overview) in the Azure Marketplace.
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-trendmicroapexone?tab=Overview) in the Azure Marketplace.
sentinel Trend Micro Deep Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/trend-micro-deep-security.md
Title: "Trend Micro Deep Security connector for Microsoft Sentinel"
description: "Learn how to install the connector Trend Micro Deep Security to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 03/25/2023
The Trend Micro Deep Security connector allows you to easily connect your Deep S
| | | | **Kusto function url** | https://aka.ms/TrendMicroDeepSecurityFunction | | **Log Analytics table(s)** | CommonSecurityLog (TrendMicroDeepSecurity)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](../../azure-monitor/logs/tutorial-workspace-transformations-portal.md) |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
| **Supported by** | [Trend Micro](https://success.trendmicro.com/dcx/s/?language=en_US) | ## Query samples
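A minimal sketch, assuming Deep Security events report a DeviceProduct value containing "Deep Security":
```kusto
// Illustrative sample; the product filter is an assumption
CommonSecurityLog
| where DeviceProduct has "Deep Security"
| summarize count() by Activity
```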
Make sure to configure the machine's security according to your organization's s
## Next steps
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/trendmicro.trend_micro_deep_security_mss?tab=Overview) in the Azure Marketplace.
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/trendmicro.trend_micro_deep_security_mss?tab=Overview) in the Azure Marketplace.
sentinel Trend Micro Tippingpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/trend-micro-tippingpoint.md
Title: "Trend Micro TippingPoint connector for Microsoft Sentinel"
description: "Learn how to install the connector Trend Micro TippingPoint to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 03/25/2023
The Trend Micro TippingPoint connector allows you to easily connect your Tipping
| | | | **Kusto function url** | https://aka.ms/sentinel-trendmicrotippingpoint-function | | **Log Analytics table(s)** | CommonSecurityLog (TrendMicroTippingPoint)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](../../azure-monitor/logs/tutorial-workspace-transformations-portal.md) |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
| **Supported by** | [Trend Micro](https://success.trendmicro.com/dcx/s/contactus?language=en_US) | ## Query samples
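A minimal sketch, assuming TippingPoint events carry a DeviceProduct value containing "TippingPoint":
```kusto
// Illustrative sample; the product filter is an assumption
CommonSecurityLog
| where DeviceProduct has "TippingPoint"
| sort by TimeGenerated desc
| take 10
```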
Make sure to configure the machine's security according to your organization's s
## Next steps
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/trendmicro.trend_micro_tippingpoint_mss?tab=Overview) in the Azure Marketplace.
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/trendmicro.trend_micro_tippingpoint_mss?tab=Overview) in the Azure Marketplace.
sentinel Varmour Application Controller https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/varmour-application-controller.md
Title: "vArmour Application Controller connector for Microsoft Sentinel"
description: "Learn how to install the connector vArmour Application Controller to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 03/25/2023
vArmour reduces operational risk and increases cyber resiliency by visualizing a
| Connector attribute | Description | | | | | **Log Analytics table(s)** | CommonSecurityLog (vArmour)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](../../azure-monitor/logs/tutorial-workspace-transformations-portal.md) |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
| **Supported by** | [vArmour Networks](https://www.varmour.com/contact-us/) | ## Query samples
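A hedged example, assuming vArmour events report a "vArmour"-style DeviceVendor value:
```kusto
// Illustrative sample; the vendor filter is an assumption
CommonSecurityLog
| where DeviceVendor has "vArmour"
| take 10
```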
Make sure to configure the machine's security according to your organization's s
## Next steps
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/varmournetworks.varmour_sentinel?tab=Overview) in the Azure Marketplace.
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/varmournetworks.varmour_sentinel?tab=Overview) in the Azure Marketplace.
sentinel Vectra Ai Detect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/vectra-ai-detect.md
Title: "Vectra AI Detect connector for Microsoft Sentinel"
description: "Learn how to install the connector Vectra AI Detect to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 03/25/2023
The AI Vectra Detect connector allows users to connect Vectra Detect logs with M
| Connector attribute | Description | | | | | **Log Analytics table(s)** | CommonSecurityLog (AIVectraDetect)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](../../azure-monitor/logs/tutorial-workspace-transformations-portal.md) |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
| **Supported by** | [Vectra AI](https://www.vectra.ai/support) | ## Query samples
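A minimal sketch, assuming Detect events carry a "Vectra"-style DeviceVendor value:
```kusto
// Illustrative sample; the vendor filter is an assumption - adjust to your Detect CEF fields
CommonSecurityLog
| where DeviceVendor has "Vectra"
| summarize count() by Activity
```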
Make sure to configure the machine's security according to your organization's s
## Next steps
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/vectraaiinc.ai_vectra_detect_mss?tab=Overview) in the Azure Marketplace.
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/vectraaiinc.ai_vectra_detect_mss?tab=Overview) in the Azure Marketplace.
sentinel Vmware Esxi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/vmware-esxi.md
Title: "VMware ESXi connector for Microsoft Sentinel"
description: "Learn how to install the connector VMware ESXi to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 03/25/2023
The [VMware ESXi](https://www.vmware.com/products/esxi-and-esx.html) connector a
| Connector attribute | Description | | | |
-| **Kusto function alias** | VMwareESXi |
-| **Kusto function url** | https://aka.ms/sentinel-vmwareesxi-parser |
| **Log Analytics table(s)** | Syslog (VMwareESXi)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](../../azure-monitor/logs/tutorial-workspace-transformations-portal.md) |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | ## Query samples
To integrate with VMware ESXi make sure you have:
## Vendor installation instructions
->This data connector depends on a parser based on a Kusto Function to work as expected. [Follow the steps](https://aka.ms/sentinel-vmwareesxi-parser) to use the Kusto function alias, **VMwareESXi**
+> [!NOTE]
+ > This data connector depends on a parser based on a Kusto Function, which is deployed as part of the solution and is required for the connector to work as expected. To view the function code in Log Analytics, open the Log Analytics/Microsoft Sentinel Logs blade, click **Functions**, search for the alias **VMwareESXi**, and load the function code, or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/VMWareESXi/Parsers/VMwareESXi.txt). On the second line of the query, enter the hostname(s) of your VMware ESXi device(s) and any other unique identifiers for the logstream. The function usually takes 10-15 minutes to activate after a solution installation or update.
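Once active, a minimal sketch query against the parser alias:
```kusto
// Illustrative sample using the VMwareESXi parser alias named above
VMwareESXi
| where TimeGenerated > ago(1h)
| take 10
```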
1. Install and onboard the agent for Linux
Configure the facilities you want to collect and their severities.
## Next steps
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-vmwareesxi?tab=Overview) in the Azure Marketplace.
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-vmwareesxi?tab=Overview) in the Azure Marketplace.
sentinel Vmware Vcenter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/vmware-vcenter.md
Title: "VMware vCenter connector for Microsoft Sentinel"
description: "Learn how to install the connector VMware vCenter to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 03/25/2023
The [vCenter](https://www.vmware.com/in/products/vcenter-server.html) connector
| Connector attribute | Description | | | |
-| **Kusto function alias** | vCenter |
-| **Kusto function url** | https://aka.ms/sentinel-vcenter-parser |
| **Log Analytics table(s)** | vCenter_CL<br/> | | **Data collection rules support** | Not currently supported | | **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |
vCenter
## Vendor installation instructions
-This data connector depends on a parser (based on a Kusto Function) to work as expected.
+**NOTE:** This data connector depends on a parser based on a Kusto Function, which is deployed as part of the solution and is required for the connector to work as expected. To view the function code in Log Analytics, open the Log Analytics/Microsoft Sentinel Logs blade, click **Functions**, search for the alias **vCenter**, and load the function code, or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/VMware%20vCenter/Parsers/vCenter.txt). On the second line of the query, enter the hostname(s) of your VMware vCenter device(s) and any other unique identifiers for the logstream. The function usually takes 10-15 minutes to activate after a solution installation or update.
> 1. If you have not installed the vCenter solution from ContentHub then [Follow the steps](https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/VCenter/Parsers/vCenter.txt) to use the Kusto function alias, **vCenter** 1. Install and onboard the agent for Linux
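Once the parser is active, a minimal sketch query:
```kusto
// Illustrative sample using the vCenter parser alias named above
vCenter
| sort by TimeGenerated desc
| take 10
```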
sentinel Watchguard Firebox https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/watchguard-firebox.md
Title: "WatchGuard Firebox connector for Microsoft Sentinel"
description: "Learn how to install the connector WatchGuard Firebox to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 03/25/2023
WatchGuard Firebox (https://www.watchguard.com/wgrd-products/firewall-appliances
| **Kusto function alias** | WatchGuardFirebox | | **Kusto function url** | https://aka.ms/sentinel-watchguardfirebox-parser | | **Log Analytics table(s)** | Syslog (WatchGuardFirebox)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](../../azure-monitor/logs/tutorial-workspace-transformations-portal.md) |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
| **Supported by** | [WatchGuard](https://www.watchguard.com/wgrd-support/contact-support) | ## Query samples
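A minimal sketch using the WatchGuardFirebox parser alias from the attributes table:
```kusto
// Illustrative sample; alias per the attributes table above
WatchGuardFirebox
| where TimeGenerated > ago(1d)
| take 10
```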
Configure the facilities you want to collect and their severities.
## Next steps
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/watchguard-technologies.watchguard_firebox_mss?tab=Overview) in the Azure Marketplace.
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/watchguard-technologies.watchguard_firebox_mss?tab=Overview) in the Azure Marketplace.
sentinel Windows Forwarded Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/windows-forwarded-events.md
Title: "Windows Forwarded Events connector for Microsoft Sentinel"
description: "Learn how to install the connector Windows Forwarded Events to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 03/25/2023
You can stream all Windows Event Forwarding (WEF) logs from the Windows Servers
| Connector attribute | Description | | | | | **Log Analytics table(s)** | WindowsEvents<br/> |
-| **Data collection rules support** | [Workspace transform DCR](../../azure-monitor/logs/tutorial-workspace-transformations-portal.md) |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | ## Next steps
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-windowsforwardedevents?tab=Overview) in the Azure Marketplace.
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-windowsforwardedevents?tab=Overview) in the Azure Marketplace.
sentinel Windows Security Events Via Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/windows-security-events-via-ama.md
Title: "Windows Security Events via AMA connector for Microsoft Sentinel"
description: "Learn how to install the connector Windows Security Events via AMA to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 03/25/2023 # Windows Security Events via AMA connector for Microsoft Sentinel
-You can stream all security events from the Windows machines connected to your Microsoft Sentinel workspace using the Windows agent. This connection enables you to view dashboards, create custom alerts, and improve investigation. This gives you more insight into your organization's network and improves your security operation capabilities.
+You can stream all security events from the Windows machines connected to your Microsoft Sentinel workspace using the Windows agent. This connection enables you to view dashboards, create custom alerts, and improve investigation. This gives you more insight into your organization's network and improves your security operation capabilities. For more information, see the [Microsoft Sentinel documentation](https://go.microsoft.com/fwlink/p/?linkid=2220225&wt.mc_id=sentinel_dataconnectordocs_content_cnl_csasci).
## Connector attributes | Connector attribute | Description | | | | | **Log Analytics table(s)** | SecurityEvents<br/> |
-| **Data collection rules support** | [Workspace transform DCR](../../azure-monitor/logs/tutorial-workspace-transformations-portal.md) |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | ## Next steps
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-securityevents?tab=Overview) in the Azure Marketplace.
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-securityevents?tab=Overview) in the Azure Marketplace.
sentinel Wirex Network Forensics Platform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/wirex-network-forensics-platform.md
Title: "WireX Network Forensics Platform connector for Microsoft Sentinel"
description: "Learn how to install the connector WireX Network Forensics Platform to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 03/25/2023
The WireX Systems data connector allows security professionals to integrate with
| Connector attribute | Description | | | | | **Log Analytics table(s)** | CommonSecurityLog (WireXNFPevents)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](../../azure-monitor/logs/tutorial-workspace-transformations-portal.md) |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
| **Supported by** | [WireX Systems](https://wirexsystems.com/contact-us/) | ## Query samples
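A hedged example, assuming WireX NFP events report a "WireX"-style DeviceVendor value:
```kusto
// Illustrative sample; the vendor filter is an assumption
CommonSecurityLog
| where DeviceVendor has "WireX"
| take 10
```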
Make sure to configure the machine's security according to your organization's s
## Next steps
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/wirexsystems1584682625009.wirex_network_forensics_platform_mss?tab=Overview) in the Azure Marketplace.
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/wirexsystems1584682625009.wirex_network_forensics_platform_mss?tab=Overview) in the Azure Marketplace.
sentinel Withsecure Elements Via Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/withsecure-elements-via-connector.md
Title: "WithSecure Elements via connector for Microsoft Sentinel"
description: "Learn how to install the connector WithSecure Elements via to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 03/25/2023
The Common Event Format (CEF) provides natively search & correlation, alerting a
| Connector attribute | Description |
| --- | --- |
| **Log Analytics table(s)** | CommonSecurityLog (WithSecure Events)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](../../azure-monitor/logs/tutorial-workspace-transformations-portal.md) |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
| **Supported by** | [WithSecure](https://www.withsecure.com/en/support) |

## Query samples
The Common Event Format (CEF) provides natively search & correlation, alerting a
```kusto
CommonSecurityLog
- | where DeviceVendor == "F-Secure"
+ | where DeviceVendor == "WithSecure™"
 | sort by TimeGenerated
```
Make sure to configure the machine's security according to your organization's s
## Next steps
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/withsecurecorporation.sentinel-solution-withsecure-via-connector?tab=Overview) in the Azure Marketplace.
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/withsecurecorporation.sentinel-solution-withsecure-via-connector?tab=Overview) in the Azure Marketplace.
sentinel Zscaler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/zscaler.md
Title: "Zscaler connector for Microsoft Sentinel"
description: "Learn how to install the connector Zscaler to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 03/25/2023
The Zscaler data connector allows you to easily connect your Zscaler Internet Ac
| Connector attribute | Description |
| --- | --- |
| **Log Analytics table(s)** | CommonSecurityLog (Zscaler)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](../../azure-monitor/logs/tutorial-workspace-transformations-portal.md) |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
| **Supported by** | [Zscaler](https://help.zscaler.com/submit-ticket-links) |

## Query samples
Make sure to configure the machine's security according to your organization's s
## Next steps
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/zscaler1579058425289.zscaler_internet_access_mss?tab=Overview) in the Azure Marketplace.
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/zscaler1579058425289.zscaler_internet_access_mss?tab=Overview) in the Azure Marketplace.
sentinel Data Transformation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-transformation.md
Only the following tables are currently supported for custom log ingestion:
- [**SecurityEvent**](/azure/azure-monitor/reference/tables/securityevent)
- [**CommonSecurityLog**](/azure/azure-monitor/reference/tables/commonsecuritylog)
- [**Syslog**](/azure/azure-monitor/reference/tables/syslog)
-- [**ASimDnsActivityLog**](/azure/azure-monitor/reference/tables/asimdnsactivitylogs)
+- [**ASimDnsActivityLogs**](/azure/azure-monitor/reference/tables/asimdnsactivitylogs)
- [**ASimNetworkSessionLogs**](/azure/azure-monitor/reference/tables/asimnetworksessionlogs)

## Known issues
sentinel Migration Export Ingest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/migration-export-ingest.md
To ingest your historical data into Azure Data Explorer (ADX) (option 1 in the [
To ingest your historical data into Microsoft Sentinel Basic Logs (option 2 in the [diagram above](#export-data-from-the-legacy-siem)):

1. If you don't have an existing Log Analytics workspace, create a new workspace and [install Microsoft Sentinel](quickstart-onboard.md#enable-microsoft-sentinel-).
-1. [Create an App registration to authenticate against the API](../azure-monitor/logs/tutorial-logs-ingestion-portal.md#configure-the-application).
-1. [Create a data collection endpoint](../azure-monitor/logs/tutorial-logs-ingestion-portal.md#create-a-data-collection-endpoint). This endpoint acts as the API endpoint that accepts the data.
-1. [Create a custom log table](../azure-monitor/logs/tutorial-logs-ingestion-portal.md#add-a-custom-log-table) to store the data, and provide a data sample. In this step, you can also define a transformation before the data is ingested.
+1. [Create an App registration to authenticate against the API](../azure-monitor/logs/tutorial-logs-ingestion-portal.md#create-azure-ad-application).
+1. [Create a data collection endpoint](../azure-monitor/logs/tutorial-logs-ingestion-portal.md#create-data-collection-endpoint). This endpoint acts as the API endpoint that accepts the data.
+1. [Create a custom log table](../azure-monitor/logs/tutorial-logs-ingestion-portal.md#create-new-table-in-log-analytics-workspace) to store the data, and provide a data sample. In this step, you can also define a transformation before the data is ingested.
1. [Collect information from the data collection rule](../azure-monitor/logs/tutorial-logs-ingestion-portal.md#collect-information-from-the-dcr) and assign permissions to the rule.
1. [Change the table from Analytics to Basic Logs](../azure-monitor/logs/basic-logs-configure.md).
1. Run the [Custom Log Ingestion script](https://github.com/Azure/Azure-Sentinel/tree/master/Tools/CustomLogsIngestion-DCE-DCR). The script asks for the following details:
sentinel Normalization Schema Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-schema-authentication.md
imAuthentication (targetusername_has = 'johndoe', starttime = ago(1d), endtime=n
## Normalized content
-Normalized authentication analytic rules are unique as they detect attacks across sources. So, for example, if a user logged in to different, unrelated systems, from different countries, Microsoft Sentinel will now detect this threat.
+Normalized authentication analytic rules are unique as they detect attacks across sources. So, for example, if a user logged in to different, unrelated systems, from different countries/regions, Microsoft Sentinel will now detect this threat.
For a full list of analytics rules that use normalized Authentication events, see [Authentication schema security content](normalization-content.md#authentication-security-content).
sentinel Cross Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/cross-workspace.md
+
+ Title: Working with the Microsoft Sentinel solution for SAP® applications across multiple workspaces
+description: This article discusses working with Microsoft Sentinel solution for SAP® applications across multiple workspaces in different scenarios.
+++ Last updated : 03/22/2023++
+# Working with the Microsoft Sentinel solution for SAP® applications across multiple workspaces
+
+When you set up your Microsoft Sentinel workspace, there are [multiple architecture options](../design-your-workspace-architecture.md#decision-tree) and considerations. Considering geography, regulation, access control, and other factors, you may choose to have multiple Sentinel workspaces in your organization.
+
+This article discusses working with the Microsoft Sentinel solution for SAP® applications across multiple workspaces in different scenarios.
+
+The Microsoft Sentinel solution for SAP® applications natively supports a cross-workspace architecture to allow improved flexibility for:
+
+- Managed security service providers (MSSPs) or a global or federated SOC
+- Data residency requirements
+- Organizational hierarchy/IT design
+- Insufficient role-based access control (RBAC) in a single workspace
+
+> [!IMPORTANT]
+> Working with multiple workspaces is currently in PREVIEW. This feature is provided without a service level agreement. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+You can define multiple workspaces when you [deploy the SAP security content](deploy-sap-security-content.md#deploy-sap-security-content).
+
+## Collaboration between the SOC and SAP teams in your organization
+
+In this article, we focus on a specific and common use case, where collaboration between the security operations center (SOC) and SAP teams in your organization requires a multi-workspace setup.
+
+Your organization's SAP team has technical knowledge that's critical to successfully and effectively implement the Microsoft Sentinel solution for SAP® applications. Therefore, it's important for the SAP team to see the relevant data and collaborate with the SOC on the required configuration and incident response procedures.
+
+As part of this collaboration, there are two possible scenarios, depending on your organization's needs:
+
+1. **The SAP data and the SOC data reside in separate workspaces**. Both teams can see the SAP data, using [cross-workspace queries](#scenario-1-sap-and-soc-data-reside-in-separate-workspaces).
+1. **The SAP data is kept in the SOC workspace**, and the SAP team can query the data using [resource context queries](#scenario-2-sap-data-is-kept-in-the-soc-workspace).
+
+## Scenario 1: SAP and SOC data reside in separate workspaces
+
+In this scenario, the SAP and SOC teams have separate Microsoft Sentinel workspaces.
++
+When your organization [deploys the Microsoft Sentinel solution for SAP® applications](deploy-sap-security-content.md#deploy-sap-security-content), each team specifies its SAP workspace.
+
+A common practice is to provide some or all of the SOC team members with the **Sentinel Reader** role on the SAP workspace.
+
+Creating separate workspaces for the SAP and SOC data has these benefits:
+
+- Microsoft Sentinel can trigger alerts that include both SOC and SAP data, and run those alerts on the SOC workspace.
+
+ > [!NOTE]
+ > For larger SAP landscapes, running queries made by the SOC on data from the SAP workspace can impact performance, because the SAP data must travel to the SOC workspace when being queried. For improved performance and cost optimizations, consider having both the SOC and SAP workspaces on the same [dedicated cluster](../../azure-monitor/logs/logs-dedicated-clusters.md?tabs=cli#cluster-pricing-model).
+
+- The SAP team has its own Microsoft Sentinel workspace, including all features, except for detections that include both SOC and SAP data.
+- Flexibility: The SAP team can focus on the control and internal threats in its landscape, while the SOC can focus on external threats.
+- There are no additional ingestion charges, because the data is ingested only once into Microsoft Sentinel. However, each workspace has its own [pricing tier](../design-your-workspace-architecture.md#step-5-collecting-any-non-soc-data).
+- The SOC can see and investigate SAP incidents: If the SAP team faces an event they can't explain with the existing data, they can assign the incident to the SOC.
+
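+For example, an analyst signed in to the SOC workspace can reach the SAP data with a cross-workspace KQL query. The following is a minimal sketch; the workspace name is illustrative, and the SAP table name (here `ABAPAuditLog_CL`) depends on the logs your agent ingests:
+
+```kusto
+// Run from the SOC workspace: read SAP audit events stored in the SAP workspace
+workspace("contoso-sap-workspace").ABAPAuditLog_CL
+| where TimeGenerated > ago(1d)
+| take 100
+```
+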
+This table maps out the access of data and features for the SAP and SOC teams in this scenario.
+
+|Function |SOC team |SAP team |
+| --- | --- | --- |
+|SOC workspace access | &#x2705; | &#10060; |
+|SAP workspace data, analytics rules, functions, watchlists, and workbooks access | &#x2705; | &#x2705;<sup>1</sup> |
+|SAP incident access and collaboration | &#x2705; | &#x2705;<sup>1</sup> |
+
+<sup>1</sup>The SOC team can see these functions on both workspaces, while the SAP team can see these functions only on the SAP workspace.
+
+## Scenario 2: SAP data is kept in the SOC workspace
+
+In this scenario, you want to keep all of the data in one workspace and to apply access controls. You can do this using Log Analytics to [manage access to data by resource](../resource-context-rbac.md). You can also associate SAP resources with an Azure resource ID by specifying the required `azure_resource_id` field in the [connector configuration section](reference-systemconfig.md#connector-configuration-section) on the data collector used to ingest data from the SAP system into Microsoft Sentinel.
++
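+The following snippet is a hypothetical sketch of the relevant part of the agent's **systemconfig.ini** file; only `azure_resource_id` is the point here, and the section layout and resource ID value are illustrative:
+
+```ini
+[Connector Configuration]
+; Stamps ingested SAP records with this Azure resource ID,
+; so resource-context RBAC applies to them.
+azure_resource_id = /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Compute/virtualMachines/<sap-vm-name>
+```
+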
+Once the data collector agent is configured with the correct resource ID, the SAP team can access the specific SAP data in the SOC workspace using a resource-scoped query. The SAP team cannot read any of the other, non-SAP data types.
+
+There are no costs associated with this approach, as the data is ingested only once into Microsoft Sentinel. Using this mode of access, the SAP team only sees raw and unformatted data and can't use any Microsoft Sentinel features. In addition to accessing the raw data via Log Analytics, the SAP team can also access the same data [via Power BI](../resource-context-rbac.md).
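+As a sketch, a resource-scoped query that the SAP team might run from the SAP resource's **Logs** pane could look like the following; the table name is illustrative:
+
+```kusto
+// Only records stamped with the SAP system's resource ID are visible in resource context
+ABAPAuditLog_CL
+| where TimeGenerated > ago(7d)
+| summarize Events = count() by bin(TimeGenerated, 1d)
+```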
+
+## Next steps
+
+In this article, you learned about working with the Microsoft Sentinel solution for SAP® applications across multiple workspaces in different scenarios.
+
+> [!div class="nextstepaction"]
+> [Deploy the Sentinel solution for SAP® applications](deployment-overview.md)
sentinel Deploy Data Connector Agent Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/deploy-data-connector-agent-container.md
Deployment of the Microsoft Sentinel solution for SAP® applications is divided
1. [Deployment prerequisites](prerequisites-for-deploying-sap-continuous-threat-monitoring.md)
+1. [Work with the solution across multiple workspaces](cross-workspace.md) (PREVIEW)
+
1. [Prepare SAP environment](preparing-sap.md)
1. **Deploy data connector agent (*You are here*)**
If you're not using SNC, then your SAP configuration and authentication secrets
```bash
wget -O sapcon-sentinel-kickstart.sh https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/sapcon-sentinel-kickstart.sh && bash ./sapcon-sentinel-kickstart.sh --cloud fairfax
```
- The script updates the OS components, installs the Azure CLI and Docker software and other required utilities (jq, netcat, curl), and prompts you for configuration parameter values. You can supply additional parameters to the script to minimize the amount of prompts or to customize the container deployment. For more information on available command line options, see [Kickstart script reference](reference-kickstart.md).
+ The script updates the OS components, installs the Azure CLI and Docker software and other required utilities (jq, netcat, curl), and prompts you for configuration parameter values. You can supply additional parameters to the script to minimize the number of prompts or to customize the container deployment. For more information on available command line options, see [Kickstart script reference](reference-kickstart.md).
2. **Follow the on-screen instructions** to enter your SAP and key vault details and complete the deployment. When the deployment is complete, a confirmation message is displayed:
If you're not using SNC, then your SAP configuration and authentication secrets
./sapcon-sentinel-kickstart.sh --keymode kvsi --appid aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa --appsecret ssssssssssssssssssssssssssssssssss -tenantid bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb -kvaultname <key vault name>
```
- The script updates the OS components, installs the Azure CLI and Docker software and other required utilities (jq, netcat, curl), and prompts you for configuration parameter values. You can supply additional parameters to the script to minimize the amount of prompts or to customize the container deployment. For more information on available command line options, see [Kickstart script reference](reference-kickstart.md).
+ The script updates the OS components, installs the Azure CLI and Docker software and other required utilities (jq, netcat, curl), and prompts you for configuration parameter values. You can supply additional parameters to the script to minimize the number of prompts or to customize the container deployment. For more information on available command line options, see [Kickstart script reference](reference-kickstart.md).
1. **Follow the on-screen instructions** to enter the requested details and complete the deployment. When the deployment is complete, a confirmation message is displayed:
If you're not using SNC, then your SAP configuration and authentication secrets
./sapcon-sentinel-kickstart.sh --keymode cfgf
```
- The script updates the OS components, installs the Azure CLI and Docker software and other required utilities (jq, netcat, curl), and prompts you for configuration parameter values. You can supply additional parameters to the script to minimize the amount of prompts or to customize the container deployment. For more information on available command line options, see [Kickstart script reference](reference-kickstart.md).
+ The script updates the OS components, installs the Azure CLI and Docker software and other required utilities (jq, netcat, curl), and prompts you for configuration parameter values. You can supply additional parameters to the script to minimize the number of prompts or to customize the container deployment. For more information on available command line options, see [Kickstart script reference](reference-kickstart.md).
1. **Follow the on-screen instructions** to enter the requested details and complete the deployment. When the deployment is complete, a confirmation message is displayed:
sentinel Deploy Sap Security Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/deploy-sap-security-content.md
Title: Deploy SAP security content in Microsoft Sentinel description: This article shows you how to deploy Microsoft Sentinel security content into your Microsoft Sentinel workspace. This content makes up the remaining parts of the Microsoft Sentinel solution for SAP® applications.--++ Previously updated : 04/27/2022 Last updated : 03/23/2023 # Deploy SAP security content in Microsoft Sentinel This article shows you how to deploy Microsoft Sentinel security content into your Microsoft Sentinel workspace. This content makes up the remaining parts of the Microsoft Sentinel solution for SAP® applications.
+Learn about [working with the solution across multiple workspaces](cross-workspace.md) (PREVIEW), or learn how to [define multiple workspaces](#deploy-sap-security-content).
+
## Deployment milestones

Track your SAP solution deployment journey through this series of articles:
Track your SAP solution deployment journey through this series of articles:
1. [Deployment prerequisites](prerequisites-for-deploying-sap-continuous-threat-monitoring.md)
+1. [Work with the solution across multiple workspaces](cross-workspace.md) (PREVIEW)
+
1. [Prepare SAP environment](preparing-sap.md)
1. [Deploy data connector agent](deploy-data-connector-agent-container.md)
To deploy SAP solution security content, do the following:
1. To open the SAP solution page, select **Microsoft Sentinel solution for SAP® applications**.
- :::image type="content" source="./media/deploy-sap-security-content/sap-solution.png" alt-text="Screenshot of the 'Microsoft Sentinel solution for SAP® applications' solution pane." lightbox="media/deploy-sap-security-content/sap-solution.png":::
+ :::image type="content" source="./media/deploy-sap-security-content/sap-solution.png" alt-text="Screenshot of the 'Microsoft Sentinel solution for SAP® applications' solution pane." lightbox="./media/deploy-sap-security-content/sap-solution.png":::
+
+1. To launch the solution deployment wizard, select **Create**, and then enter the details of the Azure subscription and resource group.
+
+1. For the **Deployment target workspace**, select the Log Analytics workspace (the one used by Microsoft Sentinel) where you want to deploy the solution. <a id="multi-workspace"></a>
+
+1. If you want to [work with the Microsoft Sentinel solution for SAP® applications across multiple workspaces](cross-workspace.md) (PREVIEW), select **Some of the data is on a different workspace**, and then do the following:
+ 1. Under **Configure the workspace where the SOC data resides in**, select the SOC subscription and workspace.
+ 1. Under **Configure the workspace where the SAP data resides in**, select the SAP subscription and workspace.
+
+ For example:
+
+ :::image type="content" source="./media/deploy-sap-security-content/sap-multi-workspace.png" alt-text="Screenshot of how to configure the Microsoft Sentinel solution for SAP® applications to work across multiple workspaces.":::
-1. To launch the solution deployment wizard, select **Create**, and then enter the details of the Azure subscription, resource group, and Log Analytics workspace (the one used by Microsoft Sentinel) where you want to deploy the solution.
+ > [!Note]
+ > If you want the SAP and SOC data to be kept on the same workspace with no additional access controls, do not select **Some of the data is on a different workspace**. If you want the SOC and SAP data to be kept on the same workspace, but to apply additional access controls, review [this scenario](cross-workspace.md#scenario-2-sap-data-is-kept-in-the-soc-workspace).
1. Select **Next** to cycle through the **Data Connectors**, **Analytics**, and **Workbooks** tabs, where you can learn about the components that will be deployed with this solution.
To deploy SAP solution security content, do the following:
1. In Microsoft Sentinel, go to the **Microsoft Sentinel for SAP** data connector to confirm the connection:
- [![Screenshot of the Microsoft Sentinel for SAP data connector page.](./media/deploy-sap-security-content/sap-data-connector.png)](./media/deploy-sap-security-content/sap-data-connector.png#lightbox)
+ :::image type="content" source="./media/deploy-sap-security-content/sap-data-connector.png" alt-text="Screenshot of the Microsoft Sentinel for SAP data connector page." lightbox="media/deploy-sap-security-content/sap-data-connector.png":::
SAP ABAP logs are displayed on the Microsoft Sentinel **Logs** page, under **Custom logs**:
- [![Screenshot of the SAP ABAP logs in the 'Custom Logs' area in Microsoft Sentinel.](./media/deploy-sap-security-content/sap-logs-in-sentinel.png)](./media/deploy-sap-security-content/sap-logs-in-sentinel.png#lightbox)
+ :::image type="content" source="./media/deploy-sap-security-content/sap-logs-in-sentinel.png" alt-text="Screenshot of the SAP ABAP logs in the 'Custom Logs' area in Microsoft Sentinel." lightbox="media/deploy-sap-security-content/sap-logs-in-sentinel.png":::
For more information, see [Microsoft Sentinel solution for SAP® applications solution logs reference](sap-solution-log-reference.md).
sentinel Deployment Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/deployment-overview.md
Follow your deployment journey through this series of articles, in which you'll
| Milestone | Article |
| --- | --- |
| **1. Deployment overview** | **YOU ARE HERE** |
-| **2. Deployment prerequisites** | [Prerequisites for deploying the Microsoft Sentinel solution for SAP® applications](prerequisites-for-deploying-sap-continuous-threat-monitoring.md) |
-| **3. Prepare SAP environment** | [Deploying SAP CRs and configuring authorization](preparing-sap.md) |
-| **4. Deploy data connector agent** | [Deploy and configure the container hosting the data connector agent](deploy-data-connector-agent-container.md) |
-| **5. Deploy SAP security content** | [Deploy SAP security content](deploy-sap-security-content.md)
-| **6. Microsoft Sentinel solution for SAP® applications** | [Configure Microsoft Sentinel solution for SAP® applications](deployment-solution-configuration.md) |
-| **7. Optional steps** | - [Configure auditing](configure-audit.md)<br>- [Configure Microsoft Sentinel for SAP data connector to use SNC](configure-snc.md)<br>- [Configure audit log monitoring rules](configure-audit-log-rules.md)<br>- [Select SAP ingestion profiles](select-ingestion-profiles.md) |
+| **2. Plan architecture** | Learn about [working with the solution across multiple workspaces](cross-workspace.md) (PREVIEW) |
+| **3. Deployment prerequisites** | [Prerequisites for deploying the Microsoft Sentinel solution for SAP® applications](prerequisites-for-deploying-sap-continuous-threat-monitoring.md) |
+| **4. Prepare SAP environment** | [Deploying SAP CRs and configuring authorization](preparing-sap.md) |
+| **5. Deploy data connector agent** | [Deploy and configure the container hosting the data connector agent](deploy-data-connector-agent-container.md) |
+| **6. Deploy SAP security content** | [Deploy SAP security content](deploy-sap-security-content.md) |
+| **7. Microsoft Sentinel solution for SAP® applications** | [Configure Microsoft Sentinel solution for SAP® applications](deployment-solution-configuration.md) |
+| **8. Optional steps** | - [Configure auditing](configure-audit.md)<br>- [Configure Microsoft Sentinel for SAP data connector to use SNC](configure-snc.md)<br>- [Configure audit log monitoring rules](configure-audit-log-rules.md)<br>- [Select SAP ingestion profiles](select-ingestion-profiles.md) |
## Next steps
sentinel Deployment Solution Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/deployment-solution-configuration.md
Track your SAP solution deployment journey through this series of articles:
1. [Deployment prerequisites](prerequisites-for-deploying-sap-continuous-threat-monitoring.md)
+1. [Work with the solution across multiple workspaces](cross-workspace.md) (PREVIEW)
+
1. [Prepare SAP environment](preparing-sap.md)
1. [Deploy data connector agent](deploy-data-connector-agent-container.md)
sentinel Preparing Sap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/preparing-sap.md
Track your SAP solution deployment journey through this series of articles:
1. [Deployment prerequisites](prerequisites-for-deploying-sap-continuous-threat-monitoring.md)
+1. [Work with the solution across multiple workspaces](cross-workspace.md) (PREVIEW)
+
1. **Prepare SAP environment (*You are here*)**
1. [Deploy data connector agent](deploy-data-connector-agent-container.md)
sentinel Prerequisites For Deploying Sap Continuous Threat Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/prerequisites-for-deploying-sap-continuous-threat-monitoring.md
Track your SAP solution deployment journey through this series of articles:
1. **Deployment prerequisites (*You are here*)**
+1. [Work with the solution across multiple workspaces](cross-workspace.md) (PREVIEW)
+
1. [Prepare SAP environment](preparing-sap.md)
1. [Deploy data connector agent](deploy-data-connector-agent-container.md)
sentinel Solution Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/solution-overview.md
Title: Microsoft Sentinel solution for SAP® applications overview description: This article introduces Microsoft Sentinel solution for SAP® applications--- Previously updated : 06/21/2022+++ Last updated : 03/22/2023 # Microsoft Sentinel solution for SAP® applications overview
site-recovery Azure To Azure Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-architecture.md
description: Overview of the architecture used when you set up disaster recovery
Previously updated : 4/28/2022 Last updated : 03/27/2023
site-recovery Azure To Azure Autoupdate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-autoupdate.md
Previously updated : 07/23/2020 Last updated : 03/24/2023
When you enable replication for a VM either starting [from the VM view](azure-to
To manage the extension manually, select **Off**.
+ > [!IMPORTANT]
+ > When you choose **Allow Site Recovery to manage**, the setting is applied to all VMs in the vault.
+
1. Select **Save**.

   :::image type="content" source="./media/azure-to-azure-autoupdate/vault-toggle.png" alt-text="Extension update settings":::
-> [!IMPORTANT]
-> When you choose **Allow Site Recovery to manage**, the setting is applied to all VMs in the vault.
> [!NOTE]
> Either option notifies you of the automation account used for managing updates. If you're using this feature in a vault for the first time, a new automation account is created by default. Alternatively, you can customize the setting and choose an existing automation account. Once defined, all subsequent actions to enable replication in the same vault will use that selected automation account. Currently, the drop-down menu only lists automation accounts that are in the same resource group as the vault.
+**For a custom automation account, use the following script:**
+ > [!IMPORTANT]
-> The following script needs to be run in the context of an automation account.
-For a custom automation account, use the following script:
+> Run the following script in the context of an automation account. This script uses a system-assigned managed identity as its authentication type.
```azurepowershell
param(
param(
$SiteRecoveryRunbookName = "Modify-AutoUpdateForVaultForPatner"
$TaskId = [guid]::NewGuid().ToString()
$SubscriptionId = "00000000-0000-0000-0000-000000000000"
-$AsrApiVersion = "2018-01-10"
-$RunAsConnectionName = "AzureRunAsConnection"
+$AsrApiVersion = "2021-12-01"
$ArmEndPoint = "https://management.azure.com"
$AadAuthority = "https://login.windows.net/"
$AadAudience = "https://management.core.windows.net/"
$AzureEnvironment = "AzureCloud"
$Timeout = "160"
+$AuthenticationType = "SystemAssignedIdentity"
function Throw-TerminatingErrorMessage { Param
function Invoke-InternalWebRequest($Uri, $Headers, $Method, $Body, $ContentType,
} }while($true) }
-function Get-Header([ref]$Header, $AadAudience, $AadAuthority, $RunAsConnectionName){
+function Get-Header([ref]$Header, $AadAudience){
try {
- $RunAsConnection = Get-AutomationConnection -Name $RunAsConnectionName
- $TenantId = $RunAsConnection.TenantId
- $ApplicationId = $RunAsConnection.ApplicationId
- $CertificateThumbprint = $RunAsConnection.CertificateThumbprint
- $Path = "cert:\CurrentUser\My\{0}" -f $CertificateThumbprint
- $Secret = Get-ChildItem -Path $Path
- $ClientCredential = New-Object Microsoft.IdentityModel.Clients.ActiveDirectory.ClientAssertionCertificate(
- $ApplicationId,
- $Secret)
- # Trim the forward slash from the AadAuthority if it exist.
- $AadAuthority = $AadAuthority.TrimEnd("/")
- $AuthContext = New-Object Microsoft.IdentityModel.Clients.ActiveDirectory.AuthenticationContext(
- "{0}/{1}" -f $AadAuthority, $TenantId )
- $AuthenticationResult = $authContext.AcquireToken($AadAudience, $Clientcredential)
$Header.Value['Content-Type'] = 'application/json'
- $Header.Value['Authorization'] = $AuthenticationResult.CreateAuthorizationHeader()
+ Write-InformationTracing ("The authentication type is System Assigned Identity based.")
+ # Acquire a token from the Automation account's managed identity endpoint.
+ $endpoint = $env:IDENTITY_ENDPOINT
+ $Headers = New-Object "System.Collections.Generic.Dictionary[[String],[String]]"
+ $Headers.Add("X-IDENTITY-HEADER", $env:IDENTITY_HEADER)
+ $Headers.Add("Metadata", "True")
+ $authenticationResult = Invoke-RestMethod -Method Get -Headers $Headers -Uri ($endpoint +'?resource=' +$AadAudience)
+ $accessToken = $authenticationResult.access_token
+ $Header.Value['Authorization'] = "Bearer " + $accessToken
$Header.Value["x-ms-client-request-id"] = $TaskId + "/" + (New-Guid).ToString() + "-" + (Get-Date).ToString("u") } catch
function Get-ProtectionContainerToBeModified([ref] $ContainerMappingList)
Write-InformationTracing ("Get protection container mappings : {0}." -f $VaultResourceId) $ContainerMappingListUrl = $ArmEndPoint + $VaultResourceId + "/replicationProtectionContainerMappings" + "?api-version=" + $AsrApiVersion Write-InformationTracing ("Getting the bearer token and the header.")
- Get-Header ([ref]$Header) $AadAudience $AadAuthority $RunAsConnectionName
+ Get-Header ([ref]$Header) $AadAudience
$Result = @() Invoke-InternalRestMethod -Uri $ContainerMappingListUrl -Headers $header -Result ([ref]$Result) $ContainerMappings = $Result[0]
$Inputs = ("Tracing inputs VaultResourceId: {0}, Timeout: {1}, AutoUpdateAction:
Write-Tracing -Message $Inputs -Level Informational -DisplayMessageToUser
$CloudConfig = ("Tracing cloud configuration ArmEndPoint: {0}, AadAuthority: {1}, AadAudience: {2}." -f $ArmEndPoint, $AadAuthority, $AadAudience)
Write-Tracing -Message $CloudConfig -Level Informational -DisplayMessageToUser
-$AutomationConfig = ("Tracing automation configuration RunAsConnectionName: {0}." -f $RunAsConnectionName)
-Write-Tracing -Message $AutomationConfig -Level Informational -DisplayMessageToUser
ValidateInput
$SubscriptionId = Initialize-SubscriptionId
Get-ProtectionContainerToBeModified ([ref]$ContainerMappingList)
$Input = @{
"instanceType" = "A2A" "agentAutoUpdateStatus" = $AutoUpdateAction "automationAccountArmId" = $AutomationAccountArmId
+ "automationAccountAuthenticationType" = $AuthenticationType
} } }
try
{ try { $UpdateUrl = $ArmEndPoint + $Mapping + "?api-version=" + $AsrApiVersion
- Get-Header ([ref]$Header) $AadAudience $AadAuthority $RunAsConnectionName
+ Get-Header ([ref]$Header) $AadAudience
$Result = @() Invoke-InternalWebRequest -Uri $UpdateUrl -Headers $Header -Method 'PATCH' ` -Body $InputJson -ContentType "application/json" -Result ([ref]$Result)
try
{ try {
- Get-Header ([ref]$Header) $AadAudience $AadAuthority $RunAsConnectionName
+ Get-Header ([ref]$Header) $AadAudience
$Result = Invoke-RestMethod -Uri $JobAsyncUrl -Headers $header $JobState = $Result.Status if($JobState -ieq "InProgress")
elseif($JobsCompletedSuccessList.Count -ne $ContainerMappingList.Count)
Throw-TerminatingErrorMessage -Message $ErrorMessage } Write-Tracing -Level Succeeded -Message ("Modify cloud pairing completed.") -DisplayMessageToUser
```

### Manage updates manually
If you can't enable automatic updates, see the following common errors and recom
> [!NOTE] > After you renew the certificate, refresh the page to display the current status.+
+## Next steps
+
+[Learn more](./how-to-migrate-run-as-accounts-managed-identity.md) about how to migrate the authentication type of your Automation accounts to managed identities.
+
site-recovery Azure To Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-support-matrix.md
Title: Support matrix for Azure VM disaster recovery with Azure Site Recovery description: Summarizes support for Azure VMs disaster recovery to a secondary region with Azure Site Recovery. Previously updated : 02/27/2023 Last updated : 03/27/2023
site-recovery Deploy Vmware Azure Replication Appliance Modernized https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/deploy-vmware-azure-replication-appliance-modernized.md
Title: Deploy Azure Site Recovery replication appliance - Modernized
description: This article describes support and requirements when deploying the replication appliance for VMware disaster recovery to Azure with Azure Site Recovery - Modernized Previously updated : 09/21/2022 Last updated : 03/27/2023
site-recovery How To Migrate Run As Accounts Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/how-to-migrate-run-as-accounts-managed-identity.md
+
+ Title: Migrate from a Run As account to a managed identity
+description: This article describes how to migrate from a Run As account to a managed identity in Azure Site Recovery.
++++ Last updated : 02/23/2023++
+# Migrate from a Run As account to Managed Identities
+
+> [!IMPORTANT]
+> - Azure Automation Run As accounts will retire on September 30, 2023, and will be replaced with managed identities. Before that date, you'll need to start migrating your runbooks to use managed identities. For more information, see [migrating from an existing Run As account to a managed identity](../automation/automation-managed-identity-faq.md).
+> - Delaying the migration directly affects supportability, because it can cause upgrades of the mobility agent to fail.
+
+This article shows you how to migrate your runbooks to use a managed identity for Azure Site Recovery. Azure Site Recovery customers use Azure Automation accounts to automatically update the agents of their protected virtual machines. Site Recovery creates Azure Automation Run As accounts when you enable replication via the IaaS VM blade or the Recovery Services vault.
+
+On Azure, managed identities eliminate the need for developers to manage credentials by providing an identity for the Azure resource in Azure Active Directory (Azure AD) and using it to obtain Azure AD tokens.
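+For example, a runbook that previously authenticated with the Run As connection can instead sign in with the system-assigned managed identity. The following is a minimal sketch; the vault and resource group names are illustrative:
+
+```azurepowershell
+# Sign in as the Automation account's system-assigned managed identity
+Connect-AzAccount -Identity
+
+# Use the identity's role assignments to work with Site Recovery resources
+$vault = Get-AzRecoveryServicesVault -Name "contoso-vault" -ResourceGroupName "contoso-rg"
+Set-AzRecoveryServicesAsrVaultContext -Vault $vault
+```
+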
+
+## Prerequisites
+
+Before you migrate from a Run As account to a managed identity, ensure that you have the appropriate roles to create a system-assigned identity for your automation account and to assign it the Contributor role in the corresponding recovery services vault.
+
+## Benefits of managed identities
+
+Here are some of the benefits of using managed identities:
+
+- **Credential management** - You don't need to manage credentials.
+- **Simplified authentication** - You can use managed identities to authenticate to any resource that supports Azure AD authentication including your own applications.
+- **Cost effective** - Managed identities can be used at no extra cost.
+- **Double encryption** - Managed identity is also used to encrypt/decrypt data and metadata using the customer-managed key stored in Azure Key Vault, providing double encryption.
+
+> [!NOTE]
+> Managed identities for Azure resources is the new name for the service formerly known as Managed Service Identity (MSI).
+
+## Migrate from an existing Run As account to a managed identity
+
+### Configure managed identities
+
+You can configure your managed identities through:
+
+- Azure portal
+- Azure CLI
+- Azure Resource Manager (ARM) templates
+
+> [!NOTE]
+> For more information about migration cadence and the support timeline for Run As account creation and certificate renewal, see the [frequently asked questions](../automation/automation-managed-identity-faq.md).
++
+### From Azure portal
+
+**To migrate the authentication type of your Azure Automation account from a Run As account to a managed identity, follow these steps:**
+
+1. In the [Azure portal](https://portal.azure.com), select the recovery services vault for which you want to migrate the runbooks.
+
+1. On the homepage of your recovery services vault page, do the following:
+ 1. On the left pane, under **Manage**, select **Site Recovery infrastructure**.
+ :::image type="content" source="./media/how-to-migrate-from-run-as-to-managed-identities/manage-section.png" alt-text="Screenshot of the **Site Recovery infrastructure** page.":::
+ 1. Under **For Azure virtual machines**, select **Extension update settings**.
+ This page details the authentication type for the automation account that is being used to manage the Site Recovery extensions.
+
+ 1. On this page, select **Migrate** to migrate the authentication type for your automation accounts to use Managed Identities.
+
+ :::image type="content" source="./media/how-to-migrate-from-run-as-to-managed-identities/extension-update-settings.png" alt-text="Screenshot of the Create Recovery Services vault page.":::
+
+1. After the successful migration of your automation account, the authentication type for the linked account details on the **Extension update settings** page is updated.
+
+When you successfully migrate from a Run As account to a managed identity, the following changes are reflected for the Automation account:
+
+- System Assigned Managed Identity is enabled for the account (if not already enabled).
+- The **Contributor** role permission is assigned to the Recovery Services vault's subscription.
+- The runbook script is updated so that mobility agent updates use managed identity-based authentication.
++
+### Link an existing managed identity account to vault
+
+To link an existing managed identity Automation account to your Recovery Services vault, follow these steps:
+
+#### Enable the managed identity for the vault
+
+1. Go to the automation account that you have selected. Under **Account settings**, select **Identity**.
+
+ :::image type="content" source="./media/how-to-migrate-from-run-as-to-managed-identities/mi-automation-account.png" alt-text="Screenshot that shows the identity settings page.":::
+
+1. Under **System assigned**, change the **Status** to **On** and select **Save**.
+
+ An Object ID is generated. The Automation account is now registered with Azure Active Directory.
+ :::image type="content" source="./media/hybrid-how-to-enable-replication-private-endpoints/enable-managed-identity-in-vault.png" alt-text="Screenshot that shows the system identity settings page.":::
+
+1. Go back to your recovery services vault. On the left pane, select the **Access control (IAM)** option.
+ :::image type="content" source="./media/how-to-migrate-from-run-as-to-managed-identities/add-mi-iam.png" alt-text="Screenshot that shows IAM settings page.":::
+1. Select **Add** > **Add role assignment** > **Contributor** to open the **Add role assignment** page.
+1. On the **Add role assignment** page, ensure to select **Managed identity**.
+1. Select **Select members**. In the **Select managed identities** pane, do the following:
+ 1. In the **Select** field, enter the name of the managed identity automation account.
+ 1. In the **Managed identity** field, select **All system-assigned managed identities**.
+ 1. Select the **Select** option.
+ :::image type="content" source="./media/how-to-migrate-from-run-as-to-managed-identities/select-mi.png" alt-text="Screenshot that shows select managed identity settings page.":::
+1. Select **Review + assign**.
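+
+If you prefer to script these steps, the following Azure PowerShell sketch enables the system-assigned identity on the Automation account and grants it the **Contributor** role on the vault; all names are illustrative:
+
+```azurepowershell
+# Enable the system-assigned managed identity on the Automation account
+$account = Set-AzAutomationAccount -ResourceGroupName "contoso-rg" -Name "contoso-automation" -AssignSystemIdentity
+
+# Grant the identity the Contributor role, scoped to the Recovery Services vault
+$vault = Get-AzRecoveryServicesVault -ResourceGroupName "contoso-rg" -Name "contoso-vault"
+New-AzRoleAssignment -ObjectId $account.Identity.PrincipalId -RoleDefinitionName "Contributor" -Scope $vault.ID
+```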
+++
+## Next steps
+
+Learn more about:
+- [Managed identities](../active-directory/managed-identities-azure-resources/overview.md).
+- [Implementing managed identities for Microsoft Azure Resources](https://www.pluralsight.com/courses/microsoft-azure-resources-managed-identities-implementing).
+
site-recovery Site Recovery Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-whats-new.md
Previously updated : 01/23/2023 Last updated : 03/27/2023
site-recovery Vmware Physical Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-azure-support-matrix.md
Title: Support matrix for VMware/physical disaster recovery in Azure Site Recove
description: Summarizes support for disaster recovery of VMware VMs and physical server to Azure using Azure Site Recovery. Previously updated : 02/27/2023 Last updated : 03/27/2023
spring-apps How To Circuit Breaker Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-circuit-breaker-metrics.md
**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
-This article shows you how to collect Spring Cloud Resilience4j Circuit Breaker Metrics with Application Insights Java in-process agent. With this feature you can monitor metrics of resilience4j circuit breaker from Application Insights with Micrometer.
+This article shows you how to collect Spring Cloud Resilience4j Circuit Breaker Metrics with the Application Insights Java in-process agent. With this feature, you can monitor the metrics of Resilience4j circuit breaker from Application Insights with Micrometer.
-We use the [spring-cloud-circuit-breaker-demo](https://github.com/spring-cloud-samples/spring-cloud-circuitbreaker-demo) to show how it works.
+The demo [spring-cloud-circuit-breaker-demo](https://github.com/spring-cloud-samples/spring-cloud-circuitbreaker-demo) shows how the monitoring works.
## Prerequisites

* Enable the Java In-Process agent from the [Java In-Process Agent for Application Insights guide](./how-to-application-insights.md#manage-application-insights-using-the-azure-portal).
* Enable dimension collection for Resilience4j metrics from the [Application Insights guide](../azure-monitor/app/pre-aggregated-metrics-log-metrics.md#custom-metrics-dimensions-and-pre-aggregation).
-* Install git, Maven, and Java, if not already in use by the development computer.
+* Install Git, Maven, and Java, if not already installed on the development computer.
## Build and deploy apps
-The following procedure builds and deploys apps.
+Use the following steps to build and deploy the sample applications.
1. Clone and build the demo repository.
-```bash
-git clone https://github.com/spring-cloud-samples/spring-cloud-circuitbreaker-demo.git
-cd spring-cloud-circuitbreaker-demo && mvn clean package -DskipTests
-```
-
-2. Create applications with endpoints
-
-```azurecli
-az spring app create
- --resource-group ${resource-group-name} \
- --name resilience4j \
- --service ${Azure-Spring-Apps-instance-name} \
- --assign-endpoint
-az spring app create \
- --resource-group ${resource-group-name} \
- --service ${Azure-Spring-Apps-instance-name} \
- --name reactive-resilience4j \
- --assign-endpoint
-```
-
-3. Deploy applications.
-
-```azurecli
-az spring app deploy -n resilience4j \
- --jar-path ./spring-cloud-circuitbreaker-demo-resilience4j/target/spring-cloud-circuitbreaker-demo-resilience4j-0.0.1.BUILD-SNAPSHOT.jar \
- -s ${service_name} -g ${resource_group}
-az spring app deploy -n reactive-resilience4j \
- --jar-path ./spring-cloud-circuitbreaker-demo-reactive-resilience4j/target/spring-cloud-circuitbreaker-demo-reactive-resilience4j-0.0.1.BUILD-SNAPSHOT.jar \
- -s ${service_name} -g ${resource_group}
-```
-
-> [!Note]
+ ```bash
+ git clone https://github.com/spring-cloud-samples/spring-cloud-circuitbreaker-demo.git
+ cd spring-cloud-circuitbreaker-demo && mvn clean package -DskipTests
+ ```
+
+1. Create applications with endpoints.
+
+ ```azurecli
+ az spring app create \
+ --resource-group ${resource-group-name} \
+ --service ${Azure-Spring-Apps-instance-name} \
+ --name resilience4j \
+ --assign-endpoint
+ az spring app create \
+ --resource-group ${resource-group-name} \
+ --service ${Azure-Spring-Apps-instance-name} \
+ --name reactive-resilience4j \
+ --assign-endpoint
+ ```
+
+1. Deploy applications.
+
+ ```azurecli
+ az spring app deploy \
+ --resource-group ${resource-group-name} \
+ --service ${Azure-Spring-Apps-instance-name} \
+ --name resilience4j \
+ --jar-path ./spring-cloud-circuitbreaker-demo-resilience4j/target/spring-cloud-circuitbreaker-demo-resilience4j-0.0.1.BUILD-SNAPSHOT.jar
+ az spring app deploy \
+ --resource-group ${resource-group-name} \
+ --service ${Azure-Spring-Apps-instance-name} \
+ --name reactive-resilience4j \
+ --jar-path ./spring-cloud-circuitbreaker-demo-reactive-resilience4j/target/spring-cloud-circuitbreaker-demo-reactive-resilience4j-0.0.1.BUILD-SNAPSHOT.jar
+ ```
+
+> [!NOTE]
> > * Include the required dependency for Resilience4j: >
az spring app deploy -n reactive-resilience4j \
> </dependency> > ``` >
-> * The customer code must use the API of `CircuitBreakerFactory`, which is implemented as a `bean` automatically created when you include a Spring Cloud Circuit Breaker starter. For details see [Spring Cloud Circuit Breaker](https://spring.io/projects/spring-cloud-circuitbreaker#overview).
+> * Your code must use the `CircuitBreakerFactory` API, which is implemented as a `bean` automatically created when you include a Spring Cloud Circuit Breaker starter. For more information, see [Spring Cloud Circuit Breaker](https://spring.io/projects/spring-cloud-circuitbreaker#overview).
>
-> * The following 2 dependencies have conflicts with resilient4j packages above. Be sure the customer does not include them.
+> * The following two dependencies have conflicts with Resilient4j packages. Be sure you don't include them.
> > ```xml > <dependency>
az spring app deploy -n reactive-resilience4j \
> /get/fluxdelay/{seconds} > ```
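To exercise the circuit breaker so that metrics are emitted, send a few requests to the demo endpoints. The following is a sketch; replace the host with your app's assigned endpoint:

```bash
# Call the reactive demo's delayed endpoint so the circuit breaker records calls
curl "https://<your-app-endpoint>/get/fluxdelay/5"
```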
-## Locate Resilence4j Metrics from Portal
+## Locate Resilience4j metrics on the Azure portal
+
+1. In your Azure Spring Apps instance, select **Application Insights** in the navigation pane and then select **Application Insights** on the page.
+
+ :::image type="content" source="media/how-to-circuit-breaker-metrics/application-insights.png" alt-text="Screenshot of the Azure portal showing the Azure Spring Apps Application Insights page with the Application Insights on the button bar highlighted." lightbox="media/how-to-circuit-breaker-metrics/application-insights.png":::
-1. Select the **Application Insights** Blade from Azure Spring Apps portal, and select **Application Insights**.
+1. Select **Metrics** in the navigation pane. The **Metrics** page provides dropdown menus and options to define the charts in this procedure. For all charts, set **Metric Namespace** to **azure.applicationinsights**.
- [ ![resilience4J 0](media/spring-cloud-resilience4j/resilience4J-0.png)](media/spring-cloud-resilience4j/resilience4J-0.png#lightbox)
+ :::image type="content" source="media/how-to-circuit-breaker-metrics/chart-menus.png" alt-text="Screenshot of the Azure portal Application Insights Metrics page, with Metrics highlighted in the navigation pane, and with azure-applicationinsights highlighted in the Metric Namespace dropdown menu." lightbox="media/how-to-circuit-breaker-metrics/chart-menus.png":::
-2. Select **Metrics** from the **Application Insights** page. Select **azure.applicationinsights** from **Metrics Namespace**. Also select **resilience4j_circuitbreaker_buffered_calls** metrics with **Average**.
+1. Set **Metric** to **resilience4j_circuitbreaker_buffered_calls**, and then set **Aggregation** to **Avg**.
- [ ![resilience4J 1](media/spring-cloud-resilience4j/resilience4J-1.png)](media/spring-cloud-resilience4j/resilience4J-1.png#lightbox)
+ :::image type="content" source="media/how-to-circuit-breaker-metrics/buffered-calls.png" alt-text="Screenshot of the Azure portal Application Insights Metrics page showing a chart with Metric set to circuit breaker buffered calls and Aggregation set to Average." lightbox="media/how-to-circuit-breaker-metrics/buffered-calls.png":::
-3. Select **resilience4j_circuitbreaker_calls** metrics and **Average**.
+1. Set **Metric** to **resilience4j_circuitbreaker_calls**, and then set **Aggregation** to **Avg**.
- [ ![resilience4J 2](media/spring-cloud-resilience4j/resilience4J-2.png)](media/spring-cloud-resilience4j/resilience4J-2.png#lightbox)
+ :::image type="content" source="media/how-to-circuit-breaker-metrics/calls.png" alt-text="Screenshot of the Azure portal Application Insights Metrics page showing a chart with Metric set to circuit breaker calls and Aggregation set to Average." lightbox="media/how-to-circuit-breaker-metrics/calls.png":::
-4. Select **resilience4j_circuitbreaker_calls** metrics and **Average**. Select **Add filter**, and then select name as **createNewAccount**.
+1. Set **Metric** to **resilience4j_circuitbreaker_calls**, and then set **Aggregation** to **Avg**. Select **Add filter** and set **Name** to **Delay**.
- [ ![resilience4J 3](media/spring-cloud-resilience4j/resilience4J-3.png)](media/spring-cloud-resilience4j/resilience4J-3.png#lightbox)
+ :::image type="content" source="media/how-to-circuit-breaker-metrics/calls-filter.png" alt-text="Screenshot of the Azure portal Application Insights Metrics page showing a chart with Metric set to circuit breaker calls and Aggregation set to Average, and with Filter set to the name Delay." lightbox="media/how-to-circuit-breaker-metrics/calls-filter.png":::
-5. Select **resilience4j_circuitbreaker_calls** metrics and **Average**. Then select **Apply splitting**, and select **kind**.
+1. Set **Metric** to **resilience4j_circuitbreaker_calls**, and then set **Aggregation** to **Avg**. Select **Apply splitting** and set **Split by** to **kind**.
- [ ![resilience4J 4](media/spring-cloud-resilience4j/resilience4J-4.png)](media/spring-cloud-resilience4j/resilience4J-4.png#lightbox)
+ :::image type="content" source="media/how-to-circuit-breaker-metrics/calls-splitting.png" alt-text="Screenshot of the Azure portal Application Insights Metrics page showing a chart with Metric set to circuit breaker calls and Aggregation set to Average, and with Apply splitting selected with Split by set to kind." lightbox="media/how-to-circuit-breaker-metrics/calls-splitting.png":::
-6. Select **resilience4j_circuitbreaker_calls**, `**resilience4j_circuitbreaker_buffered_calls**, and **resilience4j_circuitbreaker_slow_calls** metrics with **Average**.
+1. Set **Metric** to **resilience4j_circuitbreaker_calls**, and then set **Aggregation** to **Avg**. Select **Add metric** and set **Metric** to **resilience4j_circuitbreaker_buffered_calls**, and then set **Aggregation** to **Avg**. Select **Add metric** again and set **Metric** to **resilience4j_circuitbreaker_slow_calls**, and then set **Aggregation** to **Avg**.
- [ ![resilience4J 5](media/spring-cloud-resilience4j/resilience4j-5.png)](media/spring-cloud-resilience4j/resilience4j-5.png#lightbox)
+ :::image type="content" source="media/how-to-circuit-breaker-metrics/slow-calls.png" alt-text="Screenshot of the Azure portal Application Insights Metrics page showing three charts: A chart with Metric set to circuit breaker calls and Aggregation set to Average. A chart with Metric set to circuit breaker calls buffered and Aggregation set to Average. A chart with Metric set to circuit breaker slow calls and Aggregation set to Average." lightbox="media/how-to-circuit-breaker-metrics/slow-calls.png":::
## Next steps
spring-apps How To Enterprise Marketplace Offer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enterprise-marketplace-offer.md
The following table lists each supported geographic location and its [ISO 3166
| Switzerland | CH |
| Taiwan | TW |
| Thailand | TH |
-| Turkey | TR |
+| Türkiye | TR |
| United Arab Emirates | AE |
| United Kingdom | GB |
| United States | US |
spring-apps How To Use Dev Tool Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-use-dev-tool-portal.md
Use the following command to update the SSO configuration using the Azure CLI:
```azurecli
az spring dev-tool update \
    --resource-group <resource-group-name> \
- --name <Azure-Spring-Apps-service-instance-name> \
+ --service <Azure-Spring-Apps-service-instance-name> \
    --client-id "<client-id>" \
    --scopes "scope1,scope2" \
    --client-secret "<client-secret>" \
Use the following command to assign a public endpoint using the Azure CLI:
```azurecli
az spring dev-tool update \
    --resource-group <resource-group-name> \
- --name <Azure-Spring-Apps-service-instance-name> \
+ --service <Azure-Spring-Apps-service-instance-name> \
    --assign-endpoint
```
storage Storage Blob Client Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-client-management.md
To learn more about authorization, see [Authorize access to data in Azure Storag
Working with any Azure resource using the SDK begins with creating a client object. In this section, you learn how to create client objects to interact with the three types of resources in the storage service: storage accounts, containers, and blobs.
+When your application creates a client object, you pass a URI referencing the endpoint to the client constructor. You can construct the endpoint string manually, as shown in the examples in this article, or you can query for the endpoint at runtime using the Azure Storage management library. To learn how to query for an endpoint, see [Query for a Blob Storage endpoint](storage-blob-query-endpoint-srp.md).
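For example, with the .NET client library you can construct the endpoint from the account name and pass it to the client constructor. This is a minimal sketch; the account name placeholder is illustrative, and `DefaultAzureCredential` comes from the Azure.Identity package:

```csharp
using Azure.Identity;
using Azure.Storage.Blobs;

// Build the standard Blob Storage endpoint for the account, then create the client
string accountName = "<storage-account-name>";
var serviceClient = new BlobServiceClient(
    new Uri($"https://{accountName}.blob.core.windows.net"),
    new DefaultAzureCredential());
```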
### Create a BlobServiceClient object

An authorized `BlobServiceClient` object allows your app to interact with resources at the storage account level. `BlobServiceClient` provides methods to retrieve and configure account properties, as well as list, create, and delete containers within the storage account. This client object is the starting point for interacting with resources in the storage account.
storage Storage Blob Copy Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-python.md
To learn more about copying blobs using the Azure Blob Storage client library fo
### REST API operations
-The Azure SDK for Python contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar Python paradigms. The client library methods for copying blobs use the following REST API operations:
+The Azure SDK for Python contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar Python paradigms. The client library methods covered in this article use the following REST API operations:
-- [Copy Blob From URL](/rest/api/storageservices/copy-blob-from-url) (REST API)
+- [Copy Blob](/rest/api/storageservices/copy-blob) (REST API)
- [Abort Copy Blob](/rest/api/storageservices/abort-copy-blob) (REST API)

### Code samples

- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/python/blob-devguide-py/blob-devguide-blobs.py)
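As a hedged sketch, not the article's own sample, a server-side copy and a conditional abort in Python might look like the following; the account, container, and blob names are hypothetical placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

# Hypothetical account and container/blob names; replace with your own.
service = BlobServiceClient(account_url="https://<account>.blob.core.windows.net",
                            credential=DefaultAzureCredential())
dest_blob = service.get_blob_client(container="dest-container", blob="copied-blob")

# Start an asynchronous server-side copy from a source blob URL.
props = dest_blob.start_copy_from_url(
    "https://<account>.blob.core.windows.net/source-container/source-blob")

# The returned dictionary includes a copy ID, which can abort a pending copy.
if props["copy_status"] == "pending":
    dest_blob.abort_copy(props["copy_id"])
```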
storage Storage Blob Query Endpoint Srp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-query-endpoint-srp.md
+
+ Title: Query for a Blob Storage endpoint using the Azure Storage management library
+
+description: Learn how to query for a Blob Storage endpoint using the Azure Storage management library. Then use the endpoint to create a BlobServiceClient object to connect to Blob Storage data resources.
+ Last updated : 03/24/2023
+# Query for a Blob Storage endpoint using the Azure Storage management library
+
+A Blob Storage endpoint forms the base address for all objects within a storage account. When you create a storage account, you specify which type of endpoint you want to use. Blob Storage supports two types of endpoints:
+
+- A [standard endpoint](../common/storage-account-overview.md#standard-endpoints) includes the unique storage account name along with a fixed domain name. The format of a standard endpoint is `https://<storage-account>.blob.core.windows.net`.
+- An [Azure DNS zone endpoint (preview)](../common/storage-account-overview.md#azure-dns-zone-endpoints-preview) dynamically selects an Azure DNS zone and assigns it to the storage account when it's created. The format of an Azure DNS Zone endpoint is `https://<storage-account>.z[00-99].blob.storage.azure.net`.
+
+When your application creates a service client object that connects to Blob Storage data resources, you pass a URI referencing the endpoint to the service client constructor. You can construct the URI string manually, or you can query for the service endpoint at runtime using the Azure Storage management library.
+
+The Azure Storage management library provides programmatic access to the [Azure Storage resource provider](/rest/api/storagerp). The resource provider is the Azure Storage implementation of the Azure Resource Manager. The management library enables developers to manage storage accounts and account configuration, as well as configure lifecycle management policies, object replication policies, and immutability policies.
+
+In this article, you learn how to query a Blob Storage endpoint using the Azure Storage management library. Then you use that endpoint to create a `BlobServiceClient` object to connect with Blob Storage data resources.
+
+## Set up your project
+
+To work with the code examples in this article, follow these steps to set up your project.
+
+### Install packages
+
+Install packages to work with the libraries used in this example.
+
+## [.NET](#tab/dotnet)
+
+Install the following packages using `dotnet add package`:
+
+```dotnetcli
+dotnet add package Azure.Identity
+dotnet add package Azure.ResourceManager.Storage
+dotnet add package Azure.Storage.Blobs
+```
+
+## [Java](#tab/java)
+
+Open the `pom.xml` file in your text editor.
+
+Add **azure-sdk-bom** to take a dependency on the latest version of the library. In the following snippet, replace the `{bom_version_to_target}` placeholder with the version number. Using **azure-sdk-bom** keeps you from having to specify the version of each individual dependency. To learn more about the BOM, see the [Azure SDK BOM README](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/boms/azure-sdk-bom/README.md).
+
+```xml
+<dependencyManagement>
+ <dependencies>
+ <dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-sdk-bom</artifactId>
+ <version>{bom_version_to_target}</version>
+ <type>pom</type>
+ <scope>import</scope>
+ </dependency>
+ </dependencies>
+</dependencyManagement>
+```
+
+Then add the following dependency elements to the group of dependencies. The **azure-identity** dependency is needed for passwordless connections to Azure services.
+
+```xml
+<dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-storage-blob</artifactId>
+</dependency>
+<dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-identity</artifactId>
+</dependency>
+<dependency>
+ <groupId>com.azure.resourcemanager</groupId>
+ <artifactId>azure-resourcemanager</artifactId>
+ <version>2.24.0</version>
+</dependency>
+<dependency>
+ <groupId>com.azure.resourcemanager</groupId>
+ <artifactId>azure-resourcemanager-storage</artifactId>
+ <version>2.24.0</version>
+</dependency>
+<dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-core-management</artifactId>
+ <version>1.10.2</version>
+</dependency>
+```
+
+## [JavaScript](#tab/javascript)
+
+Install the following packages using `npm install`:
+
+```console
+npm install @azure/identity
+npm install @azure/storage-blob
+npm install @azure/arm-resources
+npm install @azure/arm-storage
+```
+
+## [Python](#tab/python)
+
+Install the following packages using `pip install`:
+
+```console
+pip install azure-identity
+pip install azure-storage-blob
+pip install azure-mgmt-resource
+pip install azure-mgmt-storage
+```
+++
+### Set up the app code
+
+Add the necessary `using` or `import` directives to the code. Note that the code examples may split out functionality between files, but in this section all directives are listed together.
+
+## [.NET](#tab/dotnet)
+
+Add the following `using` directives:
+
+```csharp
+using Azure.Core;
+using Azure.Identity;
+using Azure.Storage.Blobs;
+using Azure.ResourceManager;
+using Azure.ResourceManager.Resources;
+using Azure.ResourceManager.Storage;
+```
+
+Client library information:
+
+- [Azure.Identity](/dotnet/api/overview/azure/identity-readme): Provides Azure Active Directory (Azure AD) token authentication support across the Azure SDK, and is needed for passwordless connections to Azure services.
+- [Azure.ResourceManager.Storage](/dotnet/api/overview/azure/resourcemanager.storage-readme): Supports management of Azure Storage resources, including resource groups and storage accounts.
+- [Azure.Storage.Blobs](/dotnet/api/overview/azure/storage.blobs-readme): Contains the primary classes that you can use to work with Blob Storage data resources.
+
+## [Java](#tab/java)
+
+Add the following `import` directives:
+
+```java
+import com.azure.identity.*;
+import com.azure.storage.blob.*;
+import com.azure.resourcemanager.*;
+import com.azure.resourcemanager.storage.models.*;
+import com.azure.core.management.*;
+import com.azure.core.management.profile.*;
+```
+
+Client library information:
+
+- [com.azure.identity](/java/api/overview/azure/identity-readme): Provides Azure Active Directory (Azure AD) token authentication support across the Azure SDK, and is needed for passwordless connections to Azure services.
+- [com.azure.storage.blob](/java/api/com.azure.storage.blob): Contains the primary classes that you can use to work with Blob Storage data resources.
+- [com.azure.resourcemanager](/java/api/overview/azure/resourcemanager-readme): Supports management of Azure resources and resource groups.
+- [com.azure.resourcemanager.storage](/java/api/overview/azure/resourcemanager-storage-readme): Supports management of Azure Storage resources, including resource groups and storage accounts.
+
+## [JavaScript](#tab/javascript)
+
+Add the following `require` statements to load the modules:
+
+```javascript
+const { DefaultAzureCredential } = require("@azure/identity");
+const { BlobServiceClient } = require("@azure/storage-blob");
+const { ResourceManagementClient } = require("@azure/arm-resources");
+const { StorageManagementClient } = require("@azure/arm-storage");
+```
+
+Client library information:
+
+- [@azure/identity](/javascript/api/overview/azure/identity-readme): Provides Azure Active Directory (Azure AD) token authentication support across the Azure SDK, and is needed for passwordless connections to Azure services.
+- [@azure/storage-blob](/javascript/api/overview/azure/storage-blob-readme): Contains the primary classes that you can use to work with Blob Storage data resources.
+- [@azure/arm-resources](/javascript/api/overview/azure/arm-resources-readme): Supports management of Azure resources and resource groups.
+- [@azure/arm-storage](/javascript/api/overview/azure/arm-storage-readme): Supports management of Azure Storage resources, including resource groups and storage accounts.
+
+## [Python](#tab/python)
+
+Add the following `import` statements:
+
+```python
+from azure.identity import DefaultAzureCredential
+from azure.storage.blob import BlobServiceClient
+from azure.mgmt.resource import ResourceManagementClient
+from azure.mgmt.storage import StorageManagementClient
+```
+
+Client library information:
+
+- [azure-identity](/python/api/overview/azure/identity-readme): Provides Azure Active Directory (Azure AD) token authentication support across the Azure SDK, and is needed for passwordless connections to Azure services.
+- [azure-storage-blob](/python/api/overview/azure/storage-blob-readme): Contains the primary classes that you can use to work with Blob Storage data resources.
+- [azure-mgmt-resource](/python/api/azure-mgmt-resource/azure.mgmt.resource.resourcemanagementclient): Supports management of Azure resources and resource groups.
+- [azure-mgmt-storage](/python/api/azure-mgmt-storage/azure.mgmt.storage.storagemanagementclient): Supports management of Azure Storage resources, including resource groups and storage accounts.
+++
+### Register the Storage resource provider with a subscription
+
+A resource provider must be registered with your Azure subscription before you can work with it. This step only needs to be done once per subscription, and only applies if the resource provider **Microsoft.Storage** is not currently registered with your subscription.
+
+You can register the Storage resource provider, or check the registration status, using [Azure portal](/azure/azure-resource-manager/management/resource-providers-and-types#azure-portal), [Azure CLI](/azure/azure-resource-manager/management/resource-providers-and-types#azure-cli), or [Azure PowerShell](/azure/azure-resource-manager/management/resource-providers-and-types#azure-powershell).
+
+You can also use the Azure management libraries to check the registration status and register the Storage resource provider, as shown in the following examples:
+
+## [.NET](#tab/dotnet)
++
+## [Java](#tab/java)
++
+## [JavaScript](#tab/javascript)
++
+## [Python](#tab/python)
++++
+> [!NOTE]
+> To perform the register operation, you'll need permissions for the following Azure RBAC action: **Microsoft.Storage/register/action**. This permission is included in the **Contributor** and **Owner** roles.
+
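As an illustration of the check-and-register flow, here's a minimal Python sketch assuming the packages installed above; the subscription ID is a hypothetical placeholder:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

# Hypothetical subscription ID; replace with your own.
subscription_id = "<subscription-id>"

resource_client = ResourceManagementClient(DefaultAzureCredential(), subscription_id)

# Check the current registration state of the Storage resource provider.
provider = resource_client.providers.get("Microsoft.Storage")
if provider.registration_state != "Registered":
    # Register the provider with the subscription (requires the
    # Microsoft.Storage/register/action RBAC permission).
    resource_client.providers.register("Microsoft.Storage")
```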
+## Query for the Blob Storage endpoint
+
+To retrieve the Blob Storage endpoint for a given storage account, we need to get the storage account properties by calling the [Get Properties](/rest/api/storagerp/storage-accounts/get-properties) operation. The following code samples use both the data access and management libraries to get a Blob Storage endpoint for a specified storage account:
+
+## [.NET](#tab/dotnet)
+
+To get the properties for a specified storage account, use the following method from a [StorageAccountCollection](/dotnet/api/azure.resourcemanager.storage.storageaccountcollection) object:
+
+- [GetAsync](/dotnet/api/azure.resourcemanager.storage.storageaccountcollection.getasync)
+
+This method returns a [StorageAccountResource](/dotnet/api/azure.resourcemanager.storage.storageaccountresource) object, which represents the storage account.
++
+## [Java](#tab/java)
+
+To get the properties for a specified storage account, use the following method from an [AzureResourceManager](/java/api/com.azure.resourcemanager.azureresourcemanager) object:
+
+- [storageAccounts().getByResourceGroup](/java/api/com.azure.resourcemanager.resources.fluentcore.arm.collection.supportsgettingbyresourcegroup#com-azure-resourcemanager-resources-fluentcore-arm-collection-supportsgettingbyresourcegroup-getbyresourcegroup(java-lang-string-java-lang-string))
+
+This method returns a [StorageAccount](/java/api/com.azure.resourcemanager.storage.models.storageaccount) interface, which is an immutable client-side representation of the storage account.
++
+## [JavaScript](#tab/javascript)
+
+To get the properties for a specified storage account, use the following method from a [StorageManagementClient](/javascript/api/@azure/arm-storage/storagemanagementclient) object:
+
+- [storageAccounts.getProperties](/javascript/api/@azure/arm-storage/storageaccounts#@azure-arm-storage-storageaccounts-getproperties)
+
+This method returns a [`Promise<StorageAccountsGetPropertiesResponse>`](/javascript/api/@azure/arm-storage/storageaccountsgetpropertiesresponse), which represents the storage account.
++
+## [Python](#tab/python)
+
+To get the properties for a specified storage account, use the following method from a [StorageManagementClient](/python/api/azure-mgmt-storage/azure.mgmt.storage.storagemanagementclient) object:
+
+- [storage_accounts.get_properties](/python/api/azure-mgmt-storage/azure.mgmt.storage.storagemanagementclient#azure-mgmt-storage-storagemanagementclient-storage-accounts)
+
+This method returns a `StorageAccount` object, which represents the storage account.
++++
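For instance, a minimal Python sketch of retrieving the blob endpoint from the account properties might look like this; the subscription ID, resource group, and account names are hypothetical placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

# Hypothetical values; replace with your own.
subscription_id = "<subscription-id>"
resource_group_name = "<resource-group-name>"
account_name = "<storage-account-name>"

storage_client = StorageManagementClient(DefaultAzureCredential(), subscription_id)

# Get the storage account properties, which include the service endpoints.
account = storage_client.storage_accounts.get_properties(resource_group_name, account_name)

# The Blob Storage endpoint is exposed on the primary_endpoints property.
blob_endpoint = account.primary_endpoints.blob
print(blob_endpoint)
```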
+## Create a client object using the endpoint
+
+Once you have the Blob Storage endpoint for a storage account, you can instantiate a client object to work with data resources. The following code sample creates a `BlobServiceClient` object using the endpoint we retrieved in the earlier example:
+
+## [.NET](#tab/dotnet)
++
+## [Java](#tab/java)
++
+## [JavaScript](#tab/javascript)
++
+## [Python](#tab/python)
++++
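For example, a minimal Python sketch, reusing the `blob_endpoint` value queried in the previous example, might be:

```python
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

# Create a client object for Blob Storage using the queried endpoint.
blob_service_client = BlobServiceClient(
    account_url=blob_endpoint, credential=DefaultAzureCredential())
```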
+## Next steps
+
+View the full code samples (GitHub):
+- [.NET](https://github.com/Azure-Samples/AzureStorageSnippets/tree/master/blobs/howto/dotnet/BlobQueryEndpoint)
+- [Java](https://github.com/Azure-Samples/AzureStorageSnippets/tree/master/blobs/howto/Java/blob-query-endpoint/src/main/java/com/blobs/queryendpoint)
+- [JavaScript](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/blob-query-endpoint/index.js)
+- [Python](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/python/blob-query-endpoint/blob-query-endpoint.py)
+
+To learn more about creating client objects, see [Create and manage client objects that interact with data resources](storage-blob-client-management.md).
+++
storage Storage Feature Support In Storage Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-feature-support-in-storage-accounts.md
Previously updated : 03/17/2023 Last updated : 03/23/2023
The following table describes whether a feature is supported in a standard gener
| [Change feed](storage-blob-change-feed.md) | &#x2705; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
| [Custom domains](storage-custom-domain-name.md) | &#x2705; | &#x1F7E6; | &#x1F7E6; | &#x1F7E6; |
| [Customer-managed account failover](../common/storage-disaster-recovery-guidance.md?toc=/azure/storage/blobs/toc.json) | &#x2705; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
-| [Customer-managed keys in a single-tenant scenario (encryption)](../common/customer-managed-keys-overview.md?toc=/azure/storage/blobs/toc.json) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [Customer-managed keys in a multi-tenant scenario (encryption)](../common/customer-managed-keys-overview.md?toc=/azure/storage/blobs/toc.json) | &#x1F7E6; | &#x1F7E6; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
-| [Customer-provided keys (encryption)](encryption-customer-provided-keys.md) | &#x2705; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
+| [Customer-managed keys with key vault in the same tenant](../common/customer-managed-keys-overview.md?toc=/azure/storage/blobs/toc.json) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Customer-managed keys with key vault in a different tenant (cross-tenant)](../common/customer-managed-keys-overview.md?toc=/azure/storage/blobs/toc.json) | &#x2705; | &#x2705; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
+| [Customer-provided keys](encryption-customer-provided-keys.md) | &#x2705; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
| [Data redundancy options](../common/storage-redundancy.md?toc=/azure/storage/blobs/toc.json) | &#x2705; | &#x2705; | &#x2705;<sup>2</sup> | &#x2705; |
| [Encryption scopes](encryption-scope-overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Immutable storage](immutable-storage-overview.md) | &#x2705; | &#x2705; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
The following table describes whether a feature is supported in a standard gener
<sup>2</sup> Only locally redundant storage (LRS) and zone-redundant storage (ZRS) are supported.
-<sup>3</sup> Setting the tier of a blob by using the [Blob Batch](/rest/api/storageservices/blob-batch) operation is not yet supported in accounts that have a hierarchial namespace.
+<sup>3</sup> Setting the tier of a blob by using the [Blob Batch](/rest/api/storageservices/blob-batch) operation is not yet supported in accounts that have a hierarchical namespace.
## Premium block blob accounts
The following table describes whether a feature is supported in a premium block
| [Change feed](storage-blob-change-feed.md) | &#x2705; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
| [Custom domains](storage-custom-domain-name.md) | &#x2705; | &#x1F7E6; | &#x1F7E6; | &#x1F7E6; |
| [Customer-managed account failover](../common/storage-disaster-recovery-guidance.md?toc=/azure/storage/blobs/toc.json) | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
-| [Customer-managed keys in a single-tenant scenario (encryption)](../common/customer-managed-keys-overview.md?toc=/azure/storage/blobs/toc.json) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [Customer-managed keys in a multi-tenant scenario (encryption)](../common/customer-managed-keys-overview.md?toc=/azure/storage/blobs/toc.json) | &#x1F7E6; | &#x1F7E6; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
-| [Customer-provided keys (encryption)](encryption-customer-provided-keys.md) | &#x2705; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
+| [Customer-managed keys with key vault in the same tenant](../common/customer-managed-keys-overview.md?toc=/azure/storage/blobs/toc.json) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Customer-managed keys with key vault in a different tenant (cross-tenant)](../common/customer-managed-keys-overview.md?toc=/azure/storage/blobs/toc.json) | &#x2705; | &#x2705; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
+| [Customer-provided keys](encryption-customer-provided-keys.md) | &#x2705; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
| [Data redundancy options](../common/storage-redundancy.md?toc=/azure/storage/blobs/toc.json) | &#x2705; | &#x2705; | &#x2705;<sup>2</sup> | &#x2705; |
| [Encryption scopes](encryption-scope-overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Immutable storage](immutable-storage-overview.md) | &#x2705; | &#x2705; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
storage Upgrade To Data Lake Storage Gen2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/upgrade-to-data-lake-storage-gen2.md
Renaming a blob is far more efficient because client applications can rename a b
## Impact on costs
-Storage costs aren't impacted, but transactions costs are impacted. Use these pages to assess compare costs.
+There is no cost to perform the upgrade. After you upgrade, the cost to store your data doesn't change, but the cost of a transaction does change. Use these pages to assess and compare costs.
- [Block blob pricing](https://azure.microsoft.com/pricing/details/storage/blobs/).
As you move between content sets, you'll notice some slight terminology differen
When you are ready to upgrade your storage account to include Data Lake Storage Gen2 capabilities, see this step-by-step guide.

> [!div class="nextstepaction"]
-> [Upgrade Azure Blob Storage with Azure Data Lake Storage Gen2 capabilities](upgrade-to-data-lake-storage-gen2-how-to.md)
+> [Upgrade Azure Blob Storage with Azure Data Lake Storage Gen2 capabilities](upgrade-to-data-lake-storage-gen2-how-to.md)
storage Customer Managed Keys Configure Cross Tenant Existing Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/customer-managed-keys-configure-cross-tenant-existing-account.md
This article shows how to configure encryption with customer-managed keys for an
To learn how to configure customer-managed keys for a new storage account, see [Configure cross-tenant customer-managed keys for a new storage account](customer-managed-keys-configure-cross-tenant-new-account.md).
+> [!NOTE]
+> Azure Key Vault and Azure Key Vault Managed HSM support the same APIs and management interfaces for configuration of customer-managed keys. Any action that is supported for Azure Key Vault is also supported for Azure Key Vault Managed HSM.
+ [!INCLUDE [active-directory-msi-cross-tenant-cmk-overview](../../../includes/active-directory-msi-cross-tenant-cmk-overview.md)] [!INCLUDE [active-directory-msi-cross-tenant-cmk-create-identities-authorize-key-vault](../../../includes/active-directory-msi-cross-tenant-cmk-create-identities-authorize-key-vault.md)]
storage Customer Managed Keys Configure Cross Tenant New Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/customer-managed-keys-configure-cross-tenant-new-account.md
This article shows how to configure encryption with customer-managed keys at the
To learn how to configure customer-managed keys for an existing storage account, see [Configure cross-tenant customer-managed keys for an existing storage account](customer-managed-keys-configure-cross-tenant-existing-account.md).
+> [!NOTE]
+> Azure Key Vault and Azure Key Vault Managed HSM support the same APIs and management interfaces for configuration of customer-managed keys. Any action that is supported for Azure Key Vault is also supported for Azure Key Vault Managed HSM.
+ [!INCLUDE [active-directory-msi-cross-tenant-cmk-overview](../../../includes/active-directory-msi-cross-tenant-cmk-overview.md)] [!INCLUDE [active-directory-msi-cross-tenant-cmk-create-identities-authorize-key-vault](../../../includes/active-directory-msi-cross-tenant-cmk-create-identities-authorize-key-vault.md)]
storage Customer Managed Keys Configure Existing Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/customer-managed-keys-configure-existing-account.md
Title: Configure customer-managed keys for an existing storage account
+ Title: Configure customer-managed keys in the same tenant for an existing storage account
description: Learn how to configure Azure Storage encryption with customer-managed keys for an existing storage account by using the Azure portal, PowerShell, or Azure CLI. Customer-managed keys are stored in an Azure key vault.
Previously updated : 03/09/2023 Last updated : 03/23/2023
-# Configure customer-managed keys in an Azure key vault for an existing storage account
+# Configure customer-managed keys in the same tenant for an existing storage account
Azure Storage encrypts all data in a storage account at rest. By default, data is encrypted with Microsoft-managed keys. For additional control over encryption keys, you can manage your own keys. Customer-managed keys must be stored in Azure Key Vault or Key Vault Managed Hardware Security Model (HSM).
-This article shows how to configure encryption with customer-managed keys for an existing storage account. The customer-managed keys are stored in a key vault.
+This article shows how to configure encryption with customer-managed keys for an existing storage account when the storage account and key vault are in the same tenant. The customer-managed keys are stored in a key vault.
To learn how to configure customer-managed keys for a new storage account, see [Configure customer-managed keys in an Azure key vault for a new storage account](customer-managed-keys-configure-new-account.md). To learn how to configure encryption with customer-managed keys stored in a managed HSM, see [Configure encryption with customer-managed keys stored in Azure Key Vault Managed HSM](customer-managed-keys-configure-key-vault-hsm.md).

> [!NOTE]
-> Azure Key Vault and Azure Key Vault Managed HSM support the same APIs and management interfaces for configuration.
+> Azure Key Vault and Azure Key Vault Managed HSM support the same APIs and management interfaces for configuration of customer-managed keys. Any action that is supported for Azure Key Vault is also supported for Azure Key Vault Managed HSM.
[!INCLUDE [storage-customer-managed-keys-key-vault-configure-include](../../../includes/storage-customer-managed-keys-key-vault-configure-include.md)]
When you manually update the key version, you'll need to update the storage acco
-## The impact of changing customer-managed keys
-
-When customer-managed keys are enabled or disabled, or the key or key version is changed, the protection of the root encryption key changes, but the data in your Azure Storage account remains encrypted at all times. There is no additional action required on your part to ensure that your data is protected. Rotating the key version doesn't impact performance. There is no downtime associated with rotating the key version.
- [!INCLUDE [storage-customer-managed-keys-change-include](../../../includes/storage-customer-managed-keys-change-include.md)]
-If the new key is in a different key vault, you must [grant the managed identity access to the key in the new vault](#choose-a-managed-identity-to-authorize-access-to-the-key-vault). If you choose manual updating of the key version, you will also need to [update the key vault URI](#configure-encryption-for-manual-updating-of-key-versions).
+If the new key is in a different key vault, you must [grant the managed identity access to the key in the new vault](#choose-a-managed-identity-to-authorize-access-to-the-key-vault). If you opt for manual updating of the key version, you will also need to [update the key vault URI](#configure-encryption-for-manual-updating-of-key-versions).
[!INCLUDE [storage-customer-managed-keys-revoke-include](../../../includes/storage-customer-managed-keys-revoke-include.md)]
-Disabling the key will cause attempts to access data in the storage account to fail with error code 403 (Forbidden). For a list of storage account operations that will be affected by disabling the key, see [Revoke access to a storage account that uses customer-managed keys](customer-managed-keys-overview.md#revoke-access-to-a-storage-account-that-uses-customer-managed-keys).
- [!INCLUDE [storage-customer-managed-keys-disable-include](../../../includes/storage-customer-managed-keys-disable-include.md)] ## Next steps
storage Customer Managed Keys Configure New Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/customer-managed-keys-configure-new-account.md
Title: Configure customer-managed keys for a new storage account
+ Title: Configure customer-managed keys in the same tenant for a new storage account
description: Learn how to configure Azure Storage encryption with customer-managed keys for a new storage account by using the Azure portal, PowerShell, or Azure CLI. Customer-managed keys are stored in an Azure key vault.
Previously updated : 03/09/2023 Last updated : 03/23/2023
-# Configure customer-managed keys in an Azure key vault for a new storage account
+# Configure customer-managed keys in the same tenant for a new storage account
Azure Storage encrypts all data in a storage account at rest. By default, data is encrypted with Microsoft-managed keys. For additional control over encryption keys, you can manage your own keys. Customer-managed keys must be stored in an Azure Key Vault or in an Azure Key Vault Managed Hardware Security Model (HSM).
This article shows how to configure encryption with customer-managed keys at the
To learn how to configure customer-managed keys for an existing storage account, see [Configure customer-managed keys in an Azure key vault for an existing storage account](customer-managed-keys-configure-existing-account.md).
+> [!NOTE]
+> Azure Key Vault and Azure Key Vault Managed HSM support the same APIs and management interfaces for configuration of customer-managed keys. Any action that is supported for Azure Key Vault is also supported for Azure Key Vault Managed HSM.
+ [!INCLUDE [storage-customer-managed-keys-key-vault-configure-include](../../../includes/storage-customer-managed-keys-key-vault-configure-include.md)] [!INCLUDE [storage-customer-managed-keys-key-vault-add-key-include](../../../includes/storage-customer-managed-keys-key-vault-add-key-include.md)]
storage Customer Managed Keys Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/customer-managed-keys-overview.md
Previously updated : 03/15/2023 Last updated : 03/23/2023
You must use one of the following Azure key stores to store your customer-manage
You can either create your own keys and store them in the key vault or managed HSM, or you can use the Azure Key Vault APIs to generate keys. The storage account and the key vault or managed HSM can be in different Azure Active Directory (Azure AD) tenants, regions, and subscriptions.

> [!NOTE]
-> Azure Key Vault and Azure Key Vault Managed HSM support the same APIs and management interfaces for configuration.
+> Azure Key Vault and Azure Key Vault Managed HSM support the same APIs and management interfaces for configuration of customer-managed keys. Any action that is supported for Azure Key Vault is also supported for Azure Key Vault Managed HSM.
## About customer-managed keys
Data in Blob storage and Azure Files is always protected by customer-managed key
## Enable customer-managed keys for a storage account
-When you configure a customer-managed key, Azure Storage wraps the root data encryption key for the account with the customer-managed key in the associated key vault or managed HSM. Enabling customer-managed keys takes effect immediately and doesn't impact performance.
+When you configure customer-managed keys for a storage account, Azure Storage wraps the root data encryption key for the account with the customer-managed key in the associated key vault or managed HSM. The protection of the root encryption key changes, but the data in your Azure Storage account remains encrypted at all times. There is no additional action required on your part to ensure that your data remains encrypted. Protection by customer-managed keys takes effect immediately.
-You can configure customer-managed keys with the key vault and storage account in the same tenant or in different Azure AD tenants. To learn how to configure Azure Storage encryption with customer-managed keys when the key vault and storage account are in the same tenants, see one of the following articles:
+You can switch between customer-managed keys and Microsoft-managed keys at any time. For more information about Microsoft-managed keys, see [About encryption key management](storage-service-encryption.md#about-encryption-key-management).
-- [Configure encryption with customer-managed keys stored in Azure Key Vault](customer-managed-keys-configure-key-vault.md).-- [Configure encryption with customer-managed keys stored in Azure Key Vault Managed HSM](customer-managed-keys-configure-key-vault-hsm.md).
+### Key vault requirements
-To learn how to configure Azure Storage encryption with customer-managed keys when the key vault and storage account are in different Azure AD tenants, see one of the following articles:
+The key vault or managed HSM that stores the key must have both soft delete and purge protection enabled. Azure storage encryption supports RSA and RSA-HSM keys of sizes 2048, 3072 and 4096. For more information about keys, see [About keys](../../key-vault/keys/about-keys.md).
-- [Configure cross-tenant customer-managed keys for a new storage account](customer-managed-keys-configure-cross-tenant-new-account.md)-- [Configure cross-tenant customer-managed keys for an existing storage account](customer-managed-keys-configure-cross-tenant-existing-account.md)
+Using a key vault or managed HSM has associated costs. For more information, see [Key Vault pricing](https://azure.microsoft.com/pricing/details/key-vault/).
-When you enable or disable customer-managed keys, or when you modify the key or the key version, the protection of the root encryption key changes, but the data in your Azure Storage account remains encrypted at all times. There is no additional action required on your part to ensure that your data is protected. Rotating the key version doesn't impact performance. There is no downtime associated with rotating the key version.
+### Customer-managed keys with a key vault in the same tenant
-You can enable customer-managed keys on both new and existing storage accounts. When you enable customer-managed keys, you must specify a managed identity to be used to authorize access to the key vault that contains the key. The managed identity may be either a user-assigned or system-assigned managed identity:
+You can configure customer-managed keys with the key vault and storage account in the same tenant or in different Azure AD tenants. To learn how to configure Azure Storage encryption with customer-managed keys when the key vault and storage account are in the same tenant, see one of the following articles:
+
+- [Configure customer-managed keys in an Azure key vault for a new storage account](customer-managed-keys-configure-new-account.md)
+- [Configure customer-managed keys in an Azure key vault for an existing storage account](customer-managed-keys-configure-existing-account.md)
+
+When you enable customer-managed keys with a key vault in the same tenant, you must specify a managed identity that is to be used to authorize access to the key vault that contains the key. The managed identity may be either a user-assigned or system-assigned managed identity:
- When you configure customer-managed keys at the time that you create a storage account, you must use a user-assigned managed identity. - When you configure customer-managed keys on an existing storage account, you can use either a user-assigned managed identity or a system-assigned managed identity.
-To learn more about system-assigned versus user-assigned managed identities, see [Managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/overview.md).
+To learn more about system-assigned versus user-assigned managed identities, see [Managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/overview.md). To learn how to create and manage a user-assigned managed identity, see [Manage user-assigned managed identities](../../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md).
-You can switch between customer-managed keys and Microsoft-managed keys at any time. For more information about Microsoft-managed keys, see [About encryption key management](storage-service-encryption.md#about-encryption-key-management).
+### Customer-managed keys with a key vault in a different tenant
-> [!IMPORTANT]
-> Customer-managed keys rely on managed identities for Azure resources, a feature of Azure AD. Managed identities do not currently support cross-tenant scenarios. When you configure customer-managed keys in the Azure portal, a managed identity is automatically assigned to your storage account under the covers. If you subsequently move the subscription, resource group, or storage account from one Azure AD tenant to another, the managed identity associated with the storage account is not transferred to the new tenant, so customer-managed keys may no longer work. For more information, see **Transferring a subscription between Azure AD directories** in [FAQs and known issues with managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/known-issues.md#transferring-a-subscription-between-azure-ad-directories).
+To learn how to configure Azure Storage encryption with customer-managed keys when the key vault and storage account are in different Azure AD tenants, see one of the following articles:
- The key vault that stores the key must have both soft delete and purge protection enabled. Azure storage encryption supports RSA and RSA-HSM keys of sizes 2048, 3072 and 4096. For more information about keys, see [About keys](../../key-vault/keys/about-keys.md).
+- [Configure cross-tenant customer-managed keys for a new storage account](customer-managed-keys-configure-cross-tenant-new-account.md)
+- [Configure cross-tenant customer-managed keys for an existing storage account](customer-managed-keys-configure-cross-tenant-existing-account.md)
-Using a key vault or managed HSM has associated costs. For more information, see [Key Vault pricing](https://azure.microsoft.com/pricing/details/key-vault/).
+### Customer-managed keys with a managed HSM
+
+You can configure customer-managed keys with an Azure Key Vault Managed HSM for a new or existing account. And you can configure customer-managed keys with a managed HSM that's in the same tenant as the storage account, or in a different tenant. The process for configuring customer-managed keys in a managed HSM is the same as for configuring customer-managed keys in a key vault, but the permissions are slightly different. For more information, see [Configure encryption with customer-managed keys stored in Azure Key Vault Managed HSM](customer-managed-keys-configure-key-vault-hsm.md).
## Update the key version
-When you configure encryption with customer-managed keys, you have two options for updating the key version:
+Following cryptographic best practices means rotating the key that is protecting your storage account on a regular schedule, typically at least every two years. Azure Storage never modifies the key in the key vault, but you can configure a key rotation policy to rotate the key according to your compliance requirements. For more information, see [Configure cryptographic key auto-rotation in Azure Key Vault](../../key-vault/keys/how-to-configure-key-rotation.md).
+
+After the key is rotated in the key vault, the customer-managed keys configuration for your storage account must be updated to use the new key version. Customer-managed keys support both automatic and manual updating of the key version for the key that is protecting the account. You can decide which approach you want to use when you configure customer-managed keys, or when you update your configuration.
+
+When you modify the key or the key version, the protection of the root encryption key changes, but the data in your Azure Storage account remains encrypted at all times. There is no additional action required on your part to ensure that your data is protected. Rotating the key version doesn't impact performance. There is no downtime associated with rotating the key version.
+
+> [!IMPORTANT]
+> To rotate a key, create a new version of the key in the key vault or managed HSM, according to your compliance requirements. Azure Storage does not handle key rotation, so you will need to manage rotation of the key in the key vault.
+>
+> When you rotate the key used for customer-managed keys, that action is not currently logged to the Azure Monitor logs for Azure Storage.
-- **Automatically update the key version:** To automatically update a customer-managed key when a new version is available, omit the key version when you enable encryption with customer-managed keys for the storage account. If the key version is omitted, then Azure Storage checks the key vault or managed HSM daily for a new version of a customer-managed key. If a new key version is available, then Azure Storage automatically uses the latest version of the key.
+### Automatically update the key version
- Azure Storage checks the key vault for a new key version only once daily. When you rotate a key, be sure to wait 24 hours before disabling the older version.
+To automatically update a customer-managed key when a new version is available, omit the key version when you enable encryption with customer-managed keys for the storage account. If the key version is omitted, then Azure Storage checks the key vault or managed HSM daily for a new version of a customer-managed key. If a new key version is available, then Azure Storage automatically uses the latest version of the key.
- If the storage account was previously configured for manual updating of the key version and you want to change it to update automatically, you might need to explicitly change the key version to an empty string. For details on how to do this, see [Configure encryption for automatic updating of key versions](customer-managed-keys-configure-existing-account.md#configure-encryption-for-automatic-updating-of-key-versions).
+Azure Storage checks the key vault for a new key version only once daily. When you rotate a key, be sure to wait 24 hours before disabling the older version.
-- **Manually update the key version:** To use a specific version of a key for Azure Storage encryption, specify that key version when you enable encryption with customer-managed keys for the storage account. If you specify the key version, then Azure Storage uses that version for encryption until you manually update the key version.
+If the storage account was previously configured for manual updating of the key version and you want to change it to update automatically, you might need to explicitly change the key version to an empty string. For details on how to do this, see [Configure encryption for automatic updating of key versions](customer-managed-keys-configure-existing-account.md#configure-encryption-for-automatic-updating-of-key-versions).
- When the key version is explicitly specified, then you must manually update the storage account to use the new key version URI when a new version is created. To learn how to update the storage account to use a new version of the key, see [Configure encryption with customer-managed keys stored in Azure Key Vault](customer-managed-keys-configure-key-vault.md) or [Configure encryption with customer-managed keys stored in Azure Key Vault Managed HSM](customer-managed-keys-configure-key-vault-hsm.md).
+### Manually update the key version
-When you enable or disable customer-managed keys, or when you modify the key or the key version, the protection of the root encryption key changes, but the data in your Azure Storage account remains encrypted at all times. There is no additional action required on your part to ensure that your data is protected. Rotating the key version doesn't impact performance. There is no downtime associated with rotating the key version.
+To use a specific version of a key for Azure Storage encryption, specify that key version when you enable encryption with customer-managed keys for the storage account. If you specify the key version, then Azure Storage uses that version for encryption until you manually update the key version.
-> [!NOTE]
-> To rotate a key, create a new version of the key in the key vault or managed HSM, according to your compliance policies. Azure Storage does not handle key rotation, so you will need to manage rotation of the key in the key vault. You can [rotate your keys manually](customer-managed-keys-configure-existing-account.md#configure-encryption-for-manual-updating-of-key-versions) or [configure them to rotate automatically](customer-managed-keys-configure-existing-account.md#configure-encryption-for-automatic-updating-of-key-versions).
->
-> When you rotate the key used for customer-managed keys, that action is not currently logged to the Azure Monitor logs for Azure Storage.
+When the key version is explicitly specified, then you must manually update the storage account to use the new key version URI when a new version is created. To learn how to update the storage account to use a new version of the key, see [Configure encryption with customer-managed keys stored in Azure Key Vault](customer-managed-keys-configure-key-vault.md) or [Configure encryption with customer-managed keys stored in Azure Key Vault Managed HSM](customer-managed-keys-configure-key-vault-hsm.md).
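The following is an illustrative Python sketch only, not the article's own sample: it uses the Azure Storage management library to point an existing account at a specific key version. All resource names and the subscription ID are hypothetical placeholders; setting `key_version` to an empty string opts the account into automatic updating instead.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.storage.models import (
    Encryption, KeyVaultProperties, StorageAccountUpdateParameters
)

# Hypothetical identifiers; replace with your own.
client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

client.storage_accounts.update(
    "<resource-group-name>", "<storage-account-name>",
    StorageAccountUpdateParameters(
        encryption=Encryption(
            key_source="Microsoft.Keyvault",
            key_vault_properties=KeyVaultProperties(
                key_name="<key-name>",
                key_version="<new-key-version>",  # use "" for automatic updating
                key_vault_uri="https://<vault-name>.vault.azure.net/",
            ),
        )
    ),
)
```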
## Revoke access to a storage account that uses customer-managed keys
-To revoke access to a storage account that uses customer-managed keys, disable the key that is currently being used. To learn how to disable a key in the Azure key vault, see [The impact of changing customer-managed keys](customer-managed-keys-configure-existing-account.md#the-impact-of-changing-customer-managed-keys). After the key has been disabled, clients can't call operations that read from or write to a blob or its metadata. Attempts to call any of the following operations will fail with error code 403 (Forbidden) for all users:
+To revoke access to a storage account that uses customer-managed keys, disable the key in the key vault. After the key has been disabled, clients can't call operations that read from or write to a blob or its metadata. Attempts to call any of the following operations will fail with error code 403 (Forbidden) for all users:
- [List Blobs](/rest/api/storageservices/list-blobs), when called with the `include=metadata` parameter on the request URI
- [Get Blob](/rest/api/storageservices/get-blob)
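As an illustrative sketch of the revocation step above, not the article's own sample, disabling a key vault key from Python with the `azure-keyvault-keys` package might look like this; the vault URI and key name are hypothetical placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.keys import KeyClient

# Hypothetical vault URI and key name; replace with your own.
key_client = KeyClient(vault_url="https://<vault-name>.vault.azure.net/",
                       credential=DefaultAzureCredential())

# Disabling the key revokes the storage account's access to the root encryption key.
key_client.update_key_properties("<key-name>", enabled=False)
```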
storsimple Storsimple Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-regions.md
If using a StorSimple 8100 or 8600 physical device, the device is available in t
| 6 | Canada | 21 | Ireland | 36 | Poland | 51 | Switzerland |
| 7 | Chile | 22 | Israel | 37 | Portugal | 52 | Taiwan |
| 8 | Colombia | 23 | Italy | 38 | Puerto Rico | 53 | Thailand |
-| 9 | Czech Republic | 24 | Japan | 39 | Qatar | 54 | Turkey |
+| 9 | Czech Republic | 24 | Japan | 39 | Qatar | 54 | Türkiye |
| 10 | Denmark | 25 | Kenya | 40 | Romania | 55 | Ukraine |
| 11 | Egypt | 26 | Kuwait | 41 | Russia | 56 | United Arab Emirates |
| 12 | Finland | 27 | Macao SAR | 42 | Saudi Arabia | 57 | United Kingdom |
stream-analytics Stream Analytics Javascript User Defined Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-javascript-user-defined-functions.md
Samstag, 28. Dezember 2019
```

## User Logging
-The logging mechanism allows you to capture custom information while a job is running. You can use log data to debug or assess the correctness of the custom code in real time. This mechanism is available through the Console.Log() method.
+The logging mechanism allows users to capture custom information while a job is running. Log data can be used to debug or assess the correctness of the custom code in real time. This mechanism is available through three different methods.
+
+### Console.Info()
+The Console.Info() method is used to log general information during code execution. This method logs data without interrupting computation. The logged message is marked as Event Level Informational.
+
+```javascript
+console.info('my info message');
+```
+
+### Console.Warn()
+The Console.Warn() method is used to log data that might not be correct or expected but is still accepted for computation. This method doesn't interrupt computation; execution resumes after the method returns. The logged message is marked as Event Level Warning.
+
+```javascript
+console.warn('my warning message');
+```
+
+### Console.Error() and Console.Log()
+The Console.Error() method is used only to log error cases where the code can't continue to run. This method throws an exception with the error information provided as the input parameter, and the job stops running. Console.Log() behaves the same way. The logged error message is marked as Event Level Error.
```javascript
-console.log('my error message');
+console.error('my error message');
```

You can access log messages through the [diagnostic logs](data-errors.md).
+
+## atob() and btoa()
+The btoa() method can be used to encode an ASCII string into Base64, which is typically done so that binary data can be transferred as text. The atob() method can be used to decode a Base64-encoded string back into an ASCII string.
+
+```javascript
+var myAsciiString = 'ascii string';
+var encodedString = btoa(myAsciiString);
+var decodedString = atob(encodedString);
+```
+ ## Next steps * [Machine Learning UDF](./machine-learning-udf.md)
synapse-analytics Overview Cognitive Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/overview-cognitive-services.md
display(
## Arbitrary web APIs
-With HTTP on Spark, any web service can be used in your big data pipeline. In this example, we use the [World Bank API](http://api.worldbank.org/v2/country/) to get information about various countries around the world.
+With HTTP on Spark, any web service can be used in your big data pipeline. In this example, we use the [World Bank API](http://api.worldbank.org/v2/country/) to get information about various countries/regions around the world.
```python
def world_bank_request(country):
)
-# Create a dataframe with specifies which countries we want data on
+# Create a dataframe that specifies which countries/regions we want data on
df = spark.createDataFrame([("br",), ("usa",)], ["country"]).withColumn(
    "request", http_udf(world_bank_request)(col("country"))
)
traffic-manager Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/cli-samples.md
documentationcenter: virtual-network -+
traffic-manager Traffic Manager Geographic Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-geographic-regions.md
This article lists the countries and regions used by the **Geographic** traffic
- SA(Saudi Arabia)
- - TR(Turkey)
+ - TR(Türkiye)
- YE(Yemen)
virtual-machines Boot Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/boot-diagnostics.md
Boot diagnostics is a debugging feature for Azure virtual machines (VM) that allows diagnosis of VM boot failures. Boot diagnostics enables a user to observe the state of their VM as it is booting up by collecting serial log information and screenshots.

## Boot diagnostics storage account
+
When you create a VM in Azure portal, boot diagnostics is enabled by default. The recommended boot diagnostics experience is to use a managed storage account, as it yields significant performance improvements in the time to create an Azure VM. An Azure managed storage account is used, removing the time it takes to create a user storage account to store the boot diagnostics data.

> [!IMPORTANT]
When you create a VM in Azure portal, boot diagnostics is enabled by default. Th
An alternative boot diagnostic experience is to use a custom storage account. A user can either create a new storage account or use an existing one. When the storage firewall is enabled on the custom storage account (**Enabled from all networks** option isn't selected), you must: -- Make sure that access through the storage firewall is allowed for the Azure platform to publish the screenshot and serial log. To do this, go to the custom boot diagnostics storage account in the Azure portal and then select **Networking** from the **Security + networking** section. Check if the **Allow Azure services on the trusted services list to access this storage account** checkbox is selected.
+- Make sure that access through the storage firewall is allowed for the Azure platform to publish the screenshot and serial log. To do this, go to the custom boot diagnostics storage account in the Azure portal and then select **Networking** from the **Security + networking** section. Check if the **Allow Azure services on the trusted services list to access this storage account** checkbox is selected.
- Allow storage firewall for users to view the boot screenshots or serial logs. To do this, add your network or the client/browser's Internet IPs as firewall exclusions. For more information, see [Configure Azure Storage firewalls and virtual networks](../storage/common/storage-network-security.md).
To configure the storage firewall for Azure Serial Console, see [Use Serial Cons
> The custom storage account associated with boot diagnostics requires that the storage account and the associated virtual machines reside in the same region and subscription.

## Boot diagnostics view
+
Go to the virtual machine blade in the Azure portal; the boot diagnostics option is under the *Support and Troubleshooting* section. Selecting boot diagnostics displays a screenshot and serial log information. The serial log contains kernel messaging, and the screenshot is a snapshot of your VM's current state. Whether the VM is running Windows or Linux determines what the expected screenshot looks like: for Windows, users see a desktop background, and for Linux, users see a login prompt.

:::image type="content" source="./media/boot-diagnostics/boot-diagnostics-linux.png" alt-text="Screenshot of Linux boot diagnostics":::

:::image type="content" source="./media/boot-diagnostics/boot-diagnostics-windows.png" alt-text="Screenshot of Windows boot diagnostics":::
-## Enable managed boot diagnostics
+## Enable managed boot diagnostics
Managed boot diagnostics can be enabled through the Azure portal, CLI and ARM Templates.

### Enable managed boot diagnostics using the Azure portal
+
When you create a VM in the Azure portal, the default setting is to have boot diagnostics enabled using a managed storage account. Navigate to the *Management* tab during the VM creation to view it.

:::image type="content" source="./media/boot-diagnostics/boot-diagnostics-enable-portal.png" alt-text="Screenshot enabling managed boot diagnostics during VM creation.":::

### Enable managed boot diagnostics using CLI
+
Boot diagnostics with a managed storage account is supported in Azure CLI 2.12.0 and later. If you don't input a name or URI for a storage account, a managed account is used. For more information and code samples, see the [CLI documentation for boot diagnostics](/cli/azure/vm/boot-diagnostics).

### Enable managed boot diagnostics using PowerShell
+
Boot diagnostics with a managed storage account is supported in Azure PowerShell 6.6.0 and later. If you don't input a name or URI for a storage account, a managed account is used. For more information and code samples, see the [PowerShell documentation for boot diagnostics](/powershell/module/az.compute/set-azvmbootdiagnostic).

### Enable managed boot diagnostics using Azure Resource Manager (ARM) templates
+
Everything after API version 2020-06-01 supports managed boot diagnostics. For more information, see [boot diagnostics instance view](/rest/api/compute/virtualmachines/createorupdate#bootdiagnostics).

```ARM Template
Everything after API version 2020-06-01 supports managed boot diagnostics. For m
    }
  },
  "imageReference": {
- "publisher": "Canonical",
- "offer": "UbuntuServer",
- "sku": "18.04-LTS",
- "version": "latest"
+ "publisher": "publisherName",
+ "offer": "imageOffer",
+ "sku": "imageSKU",
+ "version": "imageVersion"
    }
  },
  "networkProfile": {
Everything after API version 2020-06-01 supports managed boot diagnostics. For m
```
+> [!NOTE]
+> Replace publisherName, imageOffer, imageSKU and imageVersion accordingly.
## Limitations
+
- Managed boot diagnostics is only available for Azure Resource Manager VMs.
- Managed boot diagnostics doesn't support VMs using unmanaged OS disks.
- Boot diagnostics doesn't support premium storage accounts or zone-redundant storage accounts. If either of these is used for boot diagnostics, users receive a `StorageAccountTypeNotSupported` error when starting the VM.
virtual-machines Dedicated Hosts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dedicated-hosts.md
Reserving the entire host provides several benefits beyond those of a standard s
![View of the new resources for dedicated hosts.](./media/virtual-machines-common-dedicated-hosts/dedicated-hosts2.png)
+
A **host group** is a resource that represents a collection of dedicated hosts. You create a host group in a region and an availability zone, and add hosts to it. A **host** is a resource, mapped to a physical server in an Azure data center. The physical server is allocated when the host is created. A host is created within a host group. A host has a SKU describing which VM sizes can be created. Each host can host multiple VMs, of different sizes, as long as they are from the same size series.
When creating a new host group, make sure the setting for automatic VM placement
Host groups that are enabled for automatic placement don't require all the VMs to be automatically placed. You'll still be able to explicitly pick a host, even when automatic placement is selected for the host group.
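As an illustration, the following Azure CLI sketch creates a host group with automatic placement enabled and then adds a host to it; the resource names, zone, and the `DSv3-Type1` SKU are assumptions you would replace with your own values:

```azurecli
# Create a host group in availability zone 1 with automatic VM placement enabled
az vm host group create \
    --name myHostGroup \
    --resource-group myResourceGroup \
    --zone 1 \
    --platform-fault-domain-count 1 \
    --automatic-placement true

# Add a dedicated host to the group
az vm host create \
    --host-group myHostGroup \
    --name myHost \
    --sku DSv3-Type1 \
    --platform-fault-domain 0 \
    --resource-group myResourceGroup
```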
-### Limitations
+### Automatic placement limitations
Known issues and limitations when using automatic VM placement: - You won't be able to redeploy your VM. - You won't be able to use DCv2, Lsv2, NVasv4, NVsv3, Msv2, or M-series VMs with dedicated hosts.
-## Host Service Healing
+## Host service healing
Failures relating to the underlying node, network connectivity, or software can push the host and the VMs running on it into a non-healthy state, causing disruption and downtime to your workloads. The default action is for Azure to automatically service heal the impacted host to a healthy node and move all VMs to the healthy host. Once the VMs are service healed and restarted, the impacted host is deallocated. During the service healing process, the host and VMs are unavailable, incurring a slight downtime.
Provisioning a dedicated host will consume both dedicated host vCPU and the VM f
![Screenshot of the usage and quotas page in the portal](./media/virtual-machines-common-dedicated-hosts/quotas.png) + For more information, see [Virtual machine vCPU quotas](./windows/quotas.md). Free trial and MSDN subscriptions don't have quota for Azure Dedicated Hosts.
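One quick way to review current vCPU usage against quota from the command line is the following sketch (substitute your own region):

```azurecli
az vm list-usage --location eastus --output table
```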
The *type* is the hardware generation. Different hardware types for the same VM
The sizes and hardware types vary by region. Refer to the host [pricing page](https://aka.ms/ADHPricing) to learn more. > [!NOTE]
-> Once a Dedicated host is provisoned, you can't change the size or type. If you need a different size of type, you will need to create a new host.
+> Once a Dedicated host is provisioned, you can't change the size or type. If you need a different size or type, you will need to create a new host.
## Host life cycle
Azure monitors and manages the health status of your hosts. The following states
| Health State | Description | |-|-| | Host Available | There are no known issues with your host. |
-| Host Under Investigation | We're having some issues with the host that we're looking into. This transitional state is required for Azure to try to identify the scope and root cause for the issue identified. Virtual machines running on the host may be impacted. |
+| Host Under Investigation| We're having some issues with the host that we're looking into. This transitional state is required for Azure to try to identify the scope and root cause for the issue identified. Virtual machines running on the host may be impacted. |
| Host Pending Deallocate | Azure can't restore the host back to a healthy state and asks you to redeploy your virtual machines out of this host. If `autoReplaceOnFailure` is enabled, your virtual machines are *service healed* to healthy hardware. Otherwise, your virtual machine may be running on a host that is about to fail.|
-| Host deallocated | All virtual machines have been removed from the host. You're no longer being charged for this host since the hardware was taken out of rotation. |
+| Host Deallocated| All virtual machines have been removed from the host. You're no longer being charged for this host since the hardware was taken out of rotation. |
+
+## Frequently Asked Questions
+
+**Q**. What happens to my dedicated host in case of a live migration?
+
+**A**. As of today, Azure dedicated hosts don't support live migration. In case of a hardware failure, the host is service healed to a different node.
++
+**Q**. Can I run VMs from multiple VM families on the same dedicated host?
+**A**. No, you can run only VMs from the same family as the underlying dedicated host. For example, a Dsv3-Type4 host only supports VMs of the Dsv3 VM family.
++
+**Q**. Would I be able to run different VM sizes on a single dedicated host?
+
+**A**. Yes, you can run multiple sizes of VMs on the same dedicated host as long as all the VMs belong to the same family as the underlying dedicated host and there's enough capacity on the host to support the VM sizes. For example, on a Dsv3-Type4 host you could run D2sv3, D8sv3, and D16sv3 VMs at the same time.
## Next steps
Azure monitors and manages the health status of your hosts. The following states
- There's a [sample template](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.compute/vm-dedicated-hosts/README.md) that uses both zones and fault domains for maximum resiliency in a region. - You can also save on costs with a [Reserved Instance of Azure Dedicated Hosts](prepay-dedicated-hosts-reserved-instances.md).++++
virtual-machines Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/delete.md
To specify what happens to the attached resources when you delete a VM, use the
- `--data-disk-delete-option` - data disk. - `--nic-delete-option` - NIC.
-In this example, we create a VM and set the OS disk and NIC to be deleted when we delete the VM.
+In this example, we create a VM named *myVM* in the resource group named *myResourceGroup* using an image named *myImage*, and set the OS disk and NIC to be deleted when we delete the VM.
```azurecli-interactive az vm create \ --resource-group myResourceGroup \ --name myVM \
- --image UbuntuLTS \
+ --image myImage \
--public-ip-sku Standard \ --nic-delete-option delete \ --os-disk-delete-option delete \
You can use the Azure REST API to apply force delete to your scale set. Use the
## FAQ
-### Q: Does this feature work with shared disks?
+### Q: Does this feature work with shared disks?
-A: For shared disks, you can't set the ‘deleteOption’ property to ‘Delete’. You can leave it blank or set it to ‘Detach’
+A: For shared disks, you can't set the 'deleteOption' property to 'Delete'. You can leave it blank or set it to 'Detach'.
### Q: Which Azure resources support this feature?
-A: This feature is supported on all managed disk types used as OS disks and Data disks, NICs, and Public IPs
+A: This feature is supported on all managed disk types used as OS disks and data disks, NICs, and public IPs.
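For resources already attached to a VM, the `deleteOption` can also be changed in place. A minimal Azure CLI sketch (the VM and resource group names are placeholders):

```azurecli
az vm update \
    --resource-group myResourceGroup \
    --name myVM \
    --set storageProfile.osDisk.deleteOption=Delete
```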
### Q: Can I use this feature on disks and NICs that aren't associated with a VM?
virtual-machines Disks Enable Ultra Ssd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-enable-ultra-ssd.md
description: Learn about ultra disks for Azure VMs
Previously updated : 03/07/2023 Last updated : 03/22/2023
Update-AzVM -VM $vm -ResourceGroupName $resourceGroup
# [Portal](#tab/azure-portal)
-Currently, adjusting disk performance is only supported with Azure CLI or the Azure PowerShell module.
+Ultra disks offer a unique capability that allows you to adjust their performance. You can make these adjustments from the Azure portal, on the disks themselves.
+
+1. Navigate to your VM and select **Disks**.
+1. Select the ultra disk you'd like to modify the performance of.
+
+ :::image type="content" source="media/virtual-machines-disks-getting-started-ultra-ssd/select-ultra-disk-to-modify.png" alt-text="Screenshot of disks blade on your vm, ultra disk is highlighted.":::
+
+1. Select **Size + performance** and then make your modifications.
+1. Select **Save**.
+
+ :::image type="content" source="media/virtual-machines-disks-getting-started-ultra-ssd/modify-ultra-disk-performance.png" alt-text="Screenshot of configuration blade on your ultra disk, disk size, iops, and throughput are highlighted, save is highlighted.":::
# [Azure CLI](#tab/azure-cli)
virtual-machines Ephemeral Os Disks Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ephemeral-os-disks-deploy.md
In the Azure portal, you can choose to use ephemeral disks when deploying a virt
If the option to use an ephemeral disk, OS cache placement, or temp disk placement is greyed out, you might have selected a VM size that doesn't have a cache/temp size larger than the OS image, or that doesn't support Premium storage. Go back to the **Basics** page and try choosing another VM size.
-## Scale set template deployment
+## Scale set template deployment
+ The process to create a scale set that uses an ephemeral OS disk is to add the `diffDiskSettings` property to the `Microsoft.Compute/virtualMachineScaleSets/virtualMachineProfile` resource type in the template. Also, the caching policy must be set to `ReadOnly` for the ephemeral OS disk. The placement can be changed to `CacheDisk` for OS cache disk placement.
The process to create a scale set that uses an ephemeral OS disk is to add the `
"createOption": "FromImage" }, "imageReference": {
- "publisher": "Canonical",
- "offer": "UbuntuServer",
- "sku": "16.04-LTS",
- "version": "latest"
+ "publisher": "publisherName",
+ "offer": "offerName",
+ "sku": "skuName",
+ "version": "imageVersion"
} }, "osProfile": {
The process to create a scale set that uses an ephemeral OS disk is to add the `
} ```
+> [!NOTE]
+> Replace all the other values accordingly.
+ ## VM template deployment You can deploy a VM with an ephemeral OS disk using a template. The process to create a VM that uses ephemeral OS disks is to add the `diffDiskSettings` property to Microsoft.Compute/virtualMachines resource type in the template. Also, the caching policy must be set to `ReadOnly` for the ephemeral OS disk. placement option can be changed to `CacheDisk` for OS cache disk placement.
To use an ephemeral disk for a CLI VM deployment, set the `--ephemeral-os-disk`
az vm create \ --resource-group myResourceGroup \ --name myVM \
- --image UbuntuLTS \
+ --image imageName \
--ephemeral-os-disk true \ --ephemeral-os-disk-placement ResourceDisk \ --os-disk-caching ReadOnly \
az vm create \
--generate-ssh-keys ```
+> [!NOTE]
+> Replace `myVM`, `myResourceGroup`, `imageName` and `azureuser` accordingly.
+ For scale sets, you use the same `--ephemeral-os-disk true` parameter for [az-vmss-create](/cli/azure/vmss#az-vmss-create) and set the `--os-disk-caching` parameter to `ReadOnly` and the `--ephemeral-os-disk-placement` parameter to `ResourceDisk` for temp disk placement or `CacheDisk` for cache disk placement. ## Reimage a VM using REST
virtual-machines Ephemeral Os Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ephemeral-os-disks.md
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
-Ephemeral OS disks are created on the local virtual machine (VM) storage and not saved to the remote Azure Storage. Ephemeral OS disks work well for stateless workloads, where applications are tolerant of individual VM failures but are more affected by VM deployment time or reimaging of individual VM instances. With Ephemeral OS disk, you get lower read/write latency to the OS disk and faster VM reimage.
-
-The key features of ephemeral disks are:
+Ephemeral OS disks are created on the local virtual machine (VM) storage and not saved to the remote Azure Storage. Ephemeral OS disks work well for stateless workloads, where applications are tolerant of individual VM failures but are more affected by VM deployment time or reimaging of individual VM instances. With Ephemeral OS disk, you get lower read/write latency to the OS disk and faster VM reimage.
+
+The key features of ephemeral disks are:
+ - Ideal for stateless applications. - Supported by Marketplace, custom images, and by [Azure Compute Gallery](./shared-image-galleries.md) (formerly known as Shared Image Gallery). - Ability to fast reset or reimage VMs and scale set instances to the original boot state.
The key features of ephemeral disks are:
- Ephemeral OS disks are free; you incur no storage cost for OS disks. - Available in all Azure regions.
-
Key differences between persistent and ephemeral OS disks: | | Persistent OS Disk | Ephemeral OS Disk |
Key differences between persistent and ephemeral OS disks:
\* 4 TiB is the maximum supported OS disk size for managed (persistent) disks. However, many OS disks are partitioned with master boot record (MBR) by default and because of this are limited to 2 TiB. For details, see [OS disk](managed-disks-overview.md#os-disk). ## Placement options for Ephemeral OS disks+ Ephemeral OS disk can be stored either on VM's OS cache disk or VM's temp/resource disk.
-[DiffDiskPlacement](/rest/api/compute/virtualmachines/list#diffdiskplacement) is the new property that can be used to specify where you want to place the Ephemeral OS disk. With this feature, when a Windows VM is provisioned, we configure the pagefile to be located on the OS Disk.
+[DiffDiskPlacement](/rest/api/compute/virtualmachines/list#diffdiskplacement) is the new property that can be used to specify where you want to place the Ephemeral OS disk. With this feature, when a Windows VM is provisioned, we configure the pagefile to be located on the OS Disk.
## Size requirements You can choose to deploy Ephemeral OS Disk on VM cache or VM temp disk. The image OS disk's size should be less than or equal to the temp/cache size of the VM size chosen.
-For example, if you want to opt for **OS cache placement**: Standard Windows Server images from the marketplace are about 127 GiB, which means that you need a VM size that has a cache equal to or larger than 127 GiB. The Standard_DS3_v2 has a cache size of 127 GiB, which is large enough. In this case, the Standard_DS3_v2 is the smallest size in the DSv2 series that you can use with this image.
+For example, if you want to opt for **OS cache placement**: Standard Windows Server images from the marketplace are about 127 GiB, which means that you need a VM size that has a cache equal to or larger than 127 GiB. The Standard_DS3_v2 has a cache size of 127 GiB, which is large enough. In this case, the Standard_DS3_v2 is the smallest size in the DSv2 series that you can use with this image.
-If you want to opt for **Temp disk placement**: Standard Ubuntu server image from marketplace is about 30 GiB. To enable Ephemeral OS disk on temp, the temp disk size must be equal to or larger than 30 GiB. Standard_B4ms has a temp size of 32 GiB, which can fit the 30 GiB OS disk. Upon creation of the VM, the temp disk space would be 2 GiB.
-> [!IMPORTANT]
+For example, if you want to opt for **Temp disk placement**: Standard Ubuntu server image from marketplace is about 30 GiB. To enable Ephemeral OS disk on temp, the temp disk size must be equal to or larger than 30 GiB. Standard_B4ms has a temp size of 32 GiB, which can fit the 30 GiB OS disk. Upon creation of the VM, the temp disk space would be 2 GiB.
+
+> [!IMPORTANT]
> If you opt for temp disk placement, the final temp disk size = (initial temp disk size - OS image size). In the case of **Temp disk placement**, because the ephemeral OS disk is placed on the temp disk, it shares IOPS with the temp disk, as determined by the VM size you chose.
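To check a candidate size before committing to a placement option, one approach (a sketch; the size and region are examples) is to query the SKU capabilities, where `CachedDiskBytes` is the cache size and `MaxResourceVolumeMB` is the temp disk size:

```azurecli
az vm list-skus \
    --location westus \
    --size Standard_DS3_v2 \
    --query '[].{name:name, cacheDiskBytes:capabilities[?name==`CachedDiskBytes`].value | [0], tempDiskMB:capabilities[?name==`MaxResourceVolumeMB`].value | [0]}' \
    --output table
```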
Basic Linux and Windows Server images in the Marketplace that are denoted by `[s
Ephemeral disks also require that the VM size supports **Premium storage**. The sizes usually (but not always) have an `s` in the name, like DSv2 and EsV3. For more information, see [Azure VM sizes](sizes.md) for details around which sizes support Premium storage. > [!NOTE]
->
+>
> Ephemeral disk will not be accessible through the portal. You will receive a "Resource not Found" or "404" error when accessing the ephemeral disk, which is expected.
->
+>
+
+## Unsupported features
-## Unsupported features
- Capturing VM images-- Disk snapshots -- Azure Disk Encryption
+- Disk snapshots
+- Azure Disk Encryption
- Azure Backup - Azure Site Recovery -- OS Disk Swap
+- OS Disk Swap
+
+## Trusted Launch for Ephemeral OS disks
- ## Trusted Launch for Ephemeral OS disks
Ephemeral OS disks can be created with Trusted launch. Not all VM sizes and regions are supported for trusted launch. Check [limitations of trusted launch](trusted-launch.md#limitations) for supported sizes and regions. VM guest state (VMGS) is specific to trusted launch VMs. It is a blob that is managed by Azure and contains the unified extensible firmware interface (UEFI) secure boot signature databases and other security information. When using trusted launch, by default **1 GiB** from the **OS cache** or **temp storage**, based on the chosen placement option, is reserved for VMGS. The lifecycle of the VMGS blob is tied to that of the OS disk.
-For example, If you try to create a Trusted launch Ephemeral OS disk VM using OS image of size 56 GiB with VM size [Standard_DS4_v2](dv2-dsv2-series.md) using temp disk placement you would get an error as
+For example, if you try to create a Trusted launch Ephemeral OS disk VM using an OS image of size 56 GiB with VM size [Standard_DS4_v2](dv2-dsv2-series.md) and temp disk placement, you would get the following error:
**"OS disk of Ephemeral VM with size greater than 55 GB is not allowed for VM size Standard_DS4_v2 when the DiffDiskPlacement is ResourceDisk."** This is because the temp storage for [Standard_DS4_v2](dv2-dsv2-series.md) is 56 GiB, and 1 GiB is reserved for VMGS when using trusted launch. For the same example above, if you create a standard Ephemeral OS disk VM you would not get any errors and it would be a successful operation. > [!IMPORTANT]
->
+>
> While using ephemeral disks for Trusted Launch VMs, keys and secrets generated or sealed by the vTPM after VM creation may not be persisted for operations like reimaging and platform events like service healing.
->
+>
For more information on [how to deploy a trusted launch VM](trusted-launch-portal.md) ## Confidential VMs using Ephemeral OS disks+ AMD-based Confidential VMs cater to high security and confidentiality requirements of customers. These VMs provide a strong, hardware-enforced boundary to help meet your security needs. There are limitations to using Confidential VMs. Check the [region](../confidential-computing/confidential-vm-overview.md#regions), [size](../confidential-computing/confidential-vm-overview.md#size-support) and [OS supported](../confidential-computing/confidential-vm-overview.md#os-support) limitations for confidential VMs. The virtual machine guest state (VMGS) blob contains the security information of the confidential VM. For confidential VMs using Ephemeral OS disks, by default **1 GiB** from the **OS cache** or **temp storage**, based on the chosen placement option, is reserved for VMGS. The lifecycle of the VMGS blob is tied to that of the OS disk. > [!IMPORTANT]
->
+>
> When choosing a confidential VM with full OS disk encryption before VM deployment that uses a customer-managed key (CMK). [Updating a CMK key version](../storage/common/customer-managed-keys-overview.md#update-the-key-version) or [key rotation](../key-vault/keys/how-to-configure-key-rotation.md) is not supported with Ephemeral OS disk. Confidential VMs using Ephemeral OS disks need to be deleted before updating or rotating the keys and can be re-created subsequently.
->
+>
For more information on [confidential VM](../confidential-computing/confidential-vm-overview.md) ## Customer Managed key+ You can choose to use customer managed keys or platform managed keys when you enable end-to-end encryption for VMs using Ephemeral OS disk. Currently this option is available only via [PowerShell](./windows/disks-enable-customer-managed-keys-powershell.md), [CLI](./linux/disks-enable-customer-managed-keys-cli.md) and SDK in all regions. > [!IMPORTANT]
->
+>
> [Updating a CMK key version](../storage/common/customer-managed-keys-overview.md#update-the-key-version) or [key rotation](../key-vault/keys/how-to-configure-key-rotation.md) of customer managed key is not supported with Ephemeral OS disk. VMs using Ephemeral OS disks need to be deleted before updating or rotating the keys and can be re-created subsequently.
->
+>
For more information on [Encryption at host](./disk-encryption.md)
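As an illustration, a minimal Azure CLI sketch that combines an ephemeral OS disk with encryption at host (the resource names and image are placeholders, and the subscription is assumed to have the `EncryptionAtHost` feature enabled):

```azurecli
az vm create \
    --resource-group myResourceGroup \
    --name myVM \
    --image imageName \
    --ephemeral-os-disk true \
    --os-disk-caching ReadOnly \
    --encryption-at-host true
```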
-
+ ## Next steps+ Create a VM with ephemeral OS disk using [Azure Portal/CLI/PowerShell/ARM template](ephemeral-os-disks-deploy.md). Check out the [frequently asked questions on ephemeral OS disks](ephemeral-os-disks-faq.md).
virtual-machines Agent Dependency Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/agent-dependency-windows.md
+ Previously updated : 06/01/2021 Last updated : 03/27/2023 # Azure Monitor Dependency virtual machine extension for Windows
virtual-machines Network Watcher Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/network-watcher-linux.md
Previously updated : 02/15/2023 Last updated : 03/27/2023
The Network Watcher Agent extension can be configured for the following Linux di
||| | Ubuntu | 12+ | | Debian | 7 and 8 |
-| Red Hat | 6, 7, 8.6 |
+| Red Hat | 6, 7 and 8+ |
| Oracle Linux | 6.8+, 7 and 8+ | | SUSE Linux Enterprise Server | 11, 12 and 15 | | OpenSUSE Leap | 42.3+ |
virtual-machines Network Watcher Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/network-watcher-update.md
tags: azure-resource-manager Previously updated : 02/15/2023 Last updated : 03/27/2023
This article assumes you have the Network Watcher extension installed in your VM
## Latest version
-The latest version of the Network Watcher extension is `1.4.2423.1`.
+The latest version of the Network Watcher extension is `1.4.2573.1`.
### Identify latest version # [Linux](#tab/linux)
-```powershell
+```azurecli
az vm extension image list-versions --publisher Microsoft.Azure.NetworkWatcher --location westeurope --name NetworkWatcherAgentLinux -o table ``` # [Windows](#tab/windows)
-```powershell
+```azurecli
az vm extension image list-versions --publisher Microsoft.Azure.NetworkWatcher --location westeurope --name NetworkWatcherAgentWindows -o table ``` - ## Update your extension using a PowerShell script Customers with large deployments who need to update multiple VMs at once. For updating select VMs manually, see the next section.
Information about the extension appears multiple times in the JSON output. The f
You should see something like the following: ![Azure CLI Screenshot](./media/network-watcher/azure-cli-screenshot.png) + #### Use PowerShell Run the following commands from a PowerShell prompt:
Locate the Azure Network Watcher extension in the output and identify the ve
You should see something like the following: ![PowerShell Screenshot](./media/network-watcher/powershell-screenshot.png) + ### Update your extension If your version is below the latest version mentioned above, update your extension by using any of the following options.
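One such option, sketched here with the Azure CLI (the VM and resource group names reuse the sample values from this article; use `NetworkWatcherAgentWindows` for a Windows VM), is to pin the extension to a specific version:

```azurecli
az vm extension set \
    --resource-group SampleRG \
    --vm-name Sample-VM \
    --name NetworkWatcherAgentLinux \
    --publisher Microsoft.Azure.NetworkWatcher \
    --version 1.4.2573.1
```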
Removing the extension
```powershell #Same command for Linux and Windows Remove-AzVMExtension -ResourceGroupName "SampleRG" -VMName "Sample-VM" -Name "AzureNetworkWatcherExtension"
-```
+```
Installing the extension again
If you have auto-upgrade set to true for the Network Watcher extension, reboot y
## Support If you need more help at any point in this article, see the Network Watcher extension documentation for [Linux](./network-watcher-linux.md) or [Windows](./network-watcher-windows.md). You can also contact the Azure experts on the [MSDN Azure and Stack Overflow forums](https://azure.microsoft.com/support/forums/). Alternatively, file an Azure support incident. Go to the [Azure support site](https://azure.microsoft.com/support/options/), and select **Get support**. For information about using Azure Support, read the [Microsoft Azure support FAQ](https://azure.microsoft.com/support/faq/).+
virtual-machines Expand Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/expand-disks.md
In the case of expanding a data disk when there are several data disks present o
Start by identifying the relationship between disk utilization, mount point, and device, with the ```df``` command. ```bash
-linux:~ # df -Th
+df -Th
+```
+
+```output
Filesystem Type Size Used Avail Use% Mounted on /dev/sda1 xfs 97G 1.8G 95G 2% / <truncated> /dev/sdd1 ext4 32G 30G 727M 98% /opt/db/data /dev/sde1 ext4 32G 49M 30G 1% /opt/db/log-
- > [!NOTE]
- > If you are using an ext3 file system, you can use the resize2fs command instead`.
``` Here we can see, for example, the `/opt/db/data` filesystem is nearly full, and is located on the `/dev/sdd1` partition. The output of `df` will show the device path regardless of whether the disk is mounted by device path or the (preferred) UUID in the fstab. Also take note of the Type column, indicating the format of the filesystem. This will be important later. Now locate the LUN which correlates to `/dev/sdd` by examining the contents of `/dev/disk/azure/scsi1`. The output of the following `ls` command will show that the device known as `/dev/sdd` within the Linux OS is located at LUN1 when looking in the Azure portal.
+```bash
+sudo ls -alF /dev/disk/azure/scsi1/
+```
+ ```output
-linux:~ # ls -alF /dev/disk/azure/scsi1/
total 0 drwxr-xr-x. 2 root root 140 Sep 9 21:54 ./ drwxr-xr-x. 4 root root 80 Sep 9 21:48 ../
This article requires an existing VM in Azure with at least one data disk attach
In the following samples, replace example parameter names such as *myResourceGroup* and *myVM* with your own values. > [!IMPORTANT]
-> If your disk meets the requirements in [Expand without downtime](#expand-without-downtime), you can skip step 1 and 3.
+> If your disk meets the requirements in [Expand without downtime](#expand-without-downtime), you can skip step 1 and 3.
1. Operations on virtual hard disks can't be performed with the VM running. Deallocate your VM with [az vm deallocate](/cli/azure/vm#az-vm-deallocate). The following example deallocates the VM named *myVM* in the resource group named *myResourceGroup*:
If a data disk was expanded without downtime using the procedure mentioned previ
1. Identify the currently recognized size on the first line of output from `fdisk -l /dev/sda`
+ ```bash
+ sudo fdisk -l /dev/sda
```
- root@linux:~# fdisk -l /dev/sda
+
+ ```output
Disk /dev/sda: 256 GiB, 274877906944 bytes, 536870912 sectors Disk model: Virtual Disk Units: sectors of 1 * 512 = 512 bytes
If a data disk was expanded without downtime using the procedure mentioned previ
Device Boot Start End Sectors Size Id Type /dev/sda1 2048 536870878 536868831 256G 83 Linux ```
-
+ 1. Insert a `1` character into the rescan file for this device. Note the reference to sda; this would change if a different disk device was resized.
- ```
- root@linux:~# echo 1 > /sys/class/block/sda/device/rescan
+ ```bash
+ echo 1 | sudo tee /sys/class/block/sda/device/rescan
``` 1. Verify that the new disk size has been recognized
+ ```bash
+ sudo fdisk -l /dev/sda
```
- root@linux:~# fdisk -l /dev/sda
+
+ ```output
Disk /dev/sda: 512 GiB, 549755813888 bytes, 1073741824 sectors Disk model: Virtual Disk Units: sectors of 1 * 512 = 512 bytes
On Ubuntu 16.x and newer, the root partition of the OS disk and filesystems will
As shown in the following example, the OS disk has been resized from the portal to 100 GB. The **/dev/sda1** file system mounted on **/** now displays 97 GB.
+```bash
+df -Th
```
-user@ubuntu:~# df -Th
+
+```output
Filesystem Type Size Used Avail Use% Mounted on udev devtmpfs 314M 0 314M 0% /dev tmpfs tmpfs 65M 2.3M 63M 4% /run
To increase the OS disk size in SUSE 12 SP4, SUSE SLES 12 for SAP, SUSE SLES 15,
1. Access your VM as the **root** user by using the ```sudo``` command after logging in as another user:
- ```
+ ```bash
sudo -i ``` 1. Use the following command to install the **growpart** package, which will be used to resize the partition, if it isn't already present:
- ```
+ ```bash
zypper install growpart ``` 1. Use the `lsblk` command to find the partition mounted on the root of the file system (**/**). In this case, we see that partition 4 of device **sda** is mounted on **/**:
+ ```bash
+ lsblk
```
- linux:~ # lsblk
+
+ ```output
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 48G 0 disk Γö£ΓöÇsda1 8:1 0 2M 0 part
To increase the OS disk size in SUSE 12 SP4, SUSE SLES 12 for SAP, SUSE SLES 15,
1. Resize the required partition by using the `growpart` command and the partition number determined in the preceding step:
- ```
+ ```bash
growpart /dev/sda 4
+ ```
+
+ ```output
CHANGED: partition=4 start=3151872 old: size=59762655 end=62914527 new: size=97511391 end=100663263 ``` 1. Run the `lsblk` command again to check whether the partition has been increased. The following output shows that the **/dev/sda4** partition has been resized to 46.5 GB:
-
- ```
+
+ ```bash
lsblk
+ ```
+
+ ```output
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 48G 0 disk ├─sda1 8:1 0 2M 0 part
To increase the OS disk size in SUSE 12 SP4, SUSE SLES 12 for SAP, SUSE SLES 15,
1. Identify the type of file system on the OS disk by using the `lsblk` command with the `-f` flag:
- ```
+ ```bash
lsblk -f
+ ```
+
+ ```output
NAME FSTYPE LABEL UUID MOUNTPOINT sda ├─sda1
To increase the OS disk size in SUSE 12 SP4, SUSE SLES 12 for SAP, SUSE SLES 15,
``` 1. Based on the file system type, use the appropriate commands to resize the file system.
-
+ For **xfs**, use this command:
-
- ```
- linux:~ #xfs_growfs /
+
+ ```bash
+ xfs_growfs /
```
-
+ Example output:
-
- ```
- xfs_growfs /
+
+ ```output
meta-data=/dev/sda4 isize=512 agcount=4, agsize=1867583 blks = sectsz=512 attr=2, projid32bit=1 = crc=1 finobt=0 spinodes=0 rmapbt=0
To increase the OS disk size in SUSE 12 SP4, SUSE SLES 12 for SAP, SUSE SLES 15,
realtime =none extsz=4096 blocks=0, rtextents=0 data blocks changed from 7470331 to 12188923 ```
-
+ For **ext4**, use this command:
-
- ```
+
+ ```bash
resize2fs /dev/sda4 ```
-
+ 1. Verify the increased file system size for **df -Th** by using this command:
-
- ```
+
+ ```bash
df -Thl ```
-
+ Example output:
-
- ```
- df -Thl
+
+ ```output
Filesystem Type Size Used Avail Use% Mounted on devtmpfs devtmpfs 445M 4.0K 445M 1% /dev tmpfs tmpfs 458M 0 458M 0% /dev/shm
To increase the OS disk size in SUSE 12 SP4, SUSE SLES 12 for SAP, SUSE SLES 15,
tmpfs tmpfs 92M 0 92M 0% /run/user/1000 tmpfs tmpfs 92M 0 92M 0% /run/user/490 ```
-
+ In the preceding example, we can see that the file system size for the OS disk has been increased.
-# [Red Hat with LVM](#tab/rhellvm)
+# [Red Hat/CentOS with LVM](#tab/rhellvm)
1. Follow the procedure above to expand the disk in the Azure infrastructure.
To increase the OS disk size in SUSE 12 SP4, SUSE SLES 12 for SAP, SUSE SLES 15,
1. Use the `lsblk` command to determine which logical volume (LV) is mounted on the root of the file system (**/**). In this case, we see that **rootvg-rootlv** is mounted on **/**. If a different filesystem is in need of resizing, substitute the LV and mount point throughout this section.
- ```shell
+ ```bash
lsblk -f
+ ```
+
+ ```output
NAME FSTYPE LABEL UUID MOUNTPOINT fd0 sda
To increase the OS disk size in SUSE 12 SP4, SUSE SLES 12 for SAP, SUSE SLES 15,
```bash vgdisplay rootvg
+ ```
+
+ ```output
Volume group VG Name rootvg System ID
To increase the OS disk size in SUSE 12 SP4, SUSE SLES 12 for SAP, SUSE SLES 15,
``` In this example, the line **Free PE / Size** shows that there's 38.02 GB free in the volume group, as the disk has already been resized.
-
+ 1. Install the **cloud-utils-growpart** package to provide the **growpart** command, which is required to increase the size of the OS disk, and the **gdisk** handler for GPT disk layouts. This package is preinstalled on most marketplace images. ```bash yum install cloud-utils-growpart gdisk ```
+ In RHEL/CentOS 8.x VMs, you can use the `dnf` command instead of `yum`.
+ 1. Determine which disk and partition holds the LVM physical volume (PV) or volumes in the volume group named **rootvg** by using the **pvscan** command. Note the size and free space listed between the brackets (**[** and **]**). ```bash pvscan
+ ```
+
+ ```output
PV /dev/sda4 VG rootvg lvm2 [<63.02 GiB / <38.02 GiB free] ```
To increase the OS disk size in SUSE 12 SP4, SUSE SLES 12 for SAP, SUSE SLES 15,
```bash lsblk /dev/sda4
+ ```
+
+ ```output
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda4 8:4 0 63G 0 part ├─rootvg-tmplv 253:1 0 2G 0 lvm /tmp
To increase the OS disk size in SUSE 12 SP4, SUSE SLES 12 for SAP, SUSE SLES 15,
```bash growpart /dev/sda 4
+ ```
+
+ ```output
CHANGED: partition=4 start=2054144 old: size=132161536 end=134215680 new: size=199272414 end=201326558 ```
To increase the OS disk size in SUSE 12 SP4, SUSE SLES 12 for SAP, SUSE SLES 15,
```bash lsblk /dev/sda4
+ ```
+
+ ```output
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda4 8:4 0 95G 0 part ├─rootvg-tmplv 253:1 0 2G 0 lvm /tmp
To increase the OS disk size in SUSE 12 SP4, SUSE SLES 12 for SAP, SUSE SLES 15,
```bash pvresize /dev/sda4
+ ```
+
+ ```output
Physical volume "/dev/sda4" changed 1 physical volume(s) resized or updated / 0 physical volume(s) not resized ```
To increase the OS disk size in SUSE 12 SP4, SUSE SLES 12 for SAP, SUSE SLES 15,
```bash pvscan
+ ```
+
+ ```output
PV /dev/sda4 VG rootvg lvm2 [<95.02 GiB / <70.02 GiB free] ```
To increase the OS disk size in SUSE 12 SP4, SUSE SLES 12 for SAP, SUSE SLES 15,
Example output:
- ```bash
- lvresize -r -L +10G /dev/mapper/rootvg-rootlv
+ ```output
Size of logical volume rootvg/rootlv changed from 2.00 GiB (512 extents) to 12.00 GiB (3072 extents). Logical volume rootvg/rootlv successfully resized. meta-data=/dev/mapper/rootvg-rootlv isize=512 agcount=4, agsize=131072 blks
To increase the OS disk size in SUSE 12 SP4, SUSE SLES 12 for SAP, SUSE SLES 15,
Example output:
- ```shell
+ ```bash
df -Th /
+ ```
+
+ ```output
Filesystem Type Size Used Avail Use% Mounted on /dev/mapper/rootvg-rootlv xfs 12G 71M 12G 1% / ```
To increase the OS disk size in SUSE 12 SP4, SUSE SLES 12 for SAP, SUSE SLES 15,
> [!NOTE] > To use the same procedure to resize any other logical volume, change the **lv** name in step **12**.
-# [Red Hat with raw disks](#tab/rhelraw)
+# [Red Hat/CentOS without LVM](#tab/rhelraw)
1. Follow the procedure above to expand the disk in the Azure infrastructure. 1. Access your VM as the **root** user by using the ```sudo``` command after logging in as another user: ```bash
- sudo -i
+ sudo -i
``` 1. When the VM has restarted, perform the following steps:
To increase the OS disk size in SUSE 12 SP4, SUSE SLES 12 for SAP, SUSE SLES 15,
yum install cloud-utils-growpart gdisk ```
+ In RHEL/CentOS 8.x VMs, you can use the `dnf` command instead of `yum`.
+ 1. Use the **lsblk -f** command to verify the partition and filesystem type holding the root (**/**) partition ```bash lsblk -f
+ ```
+
+ ```output
NAME FSTYPE LABEL UUID MOUNTPOINT sda ├─sda1 xfs 2a7bb59d-6a71-4841-a3c6-cba23413a5d2 /boot
To increase the OS disk size in SUSE 12 SP4, SUSE SLES 12 for SAP, SUSE SLES 15,
```bash gdisk -l /dev/sda
+ ```
+
+ ```output
GPT fdisk (gdisk) version 0.8.10 Partition table scan:
To increase the OS disk size in SUSE 12 SP4, SUSE SLES 12 for SAP, SUSE SLES 15,
```bash growpart /dev/sda 2
+ ```
+
+ ```output
CHANGED: partition=2 start=2050048 old: size=60862464 end=62912512 new: size=98613214 end=100663262 ```
To increase the OS disk size in SUSE 12 SP4, SUSE SLES 12 for SAP, SUSE SLES 15,
```bash gdisk -l /dev/sda
+ ```
+
+ ```output
GPT fdisk (gdisk) version 0.8.10 Partition table scan:
To increase the OS disk size in SUSE 12 SP4, SUSE SLES 12 for SAP, SUSE SLES 15,
```bash xfs_growfs /
+ ```
+
+ ```output
meta-data=/dev/sda2 isize=512 agcount=4, agsize=1901952 blks = sectsz=4096 attr=2, projid32bit=1 = crc=1 finobt=0 spinodes=0
To increase the OS disk size in SUSE 12 SP4, SUSE SLES 12 for SAP, SUSE SLES 15,
```bash df -hl
+ ```
+
+ ```output
Filesystem Size Used Avail Use% Mounted on devtmpfs 452M 0 452M 0% /dev tmpfs 464M 0 464M 0% /dev/shm
virtual-machines Np Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/np-series.md
Previously updated : 12/20/2022 Last updated : 03/07/2023
virtual-machines Image Builder Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/image-builder-vnet.md
Title: Create a Windows VM with Azure VM Image Builder by using an existing virt
description: Use Azure VM Image Builder to create a basic, customized Windows image that has access to existing resources on a virtual network. - Previously updated : 03/02/2021+ Last updated : 03/27/2023
Submit the image configuration to Azure VM Image Builder.
```powershell-interactive New-AzResourceGroupDeployment -ResourceGroupName $imageResourceGroup -TemplateFile $templateFilePath -api-version "2020-02-14" -imageTemplateName $imageTemplateName -svclocation $location-
-# note this will take minute, as validation is run (security / dependencies etc.)
```
+> [!NOTE]
+> This will take a minute, as validation is run for security, dependencies, and so on.
+ Start the image build. ```powershell-interactive
virtual-network Accelerated Networking Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/accelerated-networking-overview.md
Title: Accelerated Networking overview
-description: Accelerated Networking to improves networking performance of Azure VMs.
+description: Learn how Accelerated Networking can improve the networking performance of Azure VMs.
ms.devlang: na
vm-windows Previously updated : 02/15/2022 Last updated : 03/20/2023
-# What is Accelerated Networking?
+# Accelerated Networking (AccelNet) overview
-Accelerated networking enables single root I/O virtualization (SR-IOV) to a VM, greatly improving its networking performance. This high-performance path bypasses the host from the data path, which reduces latency, jitter, and CPU utilization for the most demanding network workloads on supported VM types. The following diagram illustrates how two VMs communicate with and without accelerated networking:
+This article explains Accelerated Networking and describes its benefits, constraints, and supported configurations. Accelerated Networking enables [single root I/O virtualization (SR-IOV)](/windows-hardware/drivers/network/overview-of-single-root-i-o-virtualization--sr-iov-) on supported virtual machine (VM) types, greatly improving networking performance. This high-performance data path bypasses the host, which reduces latency, jitter, and CPU utilization for the most demanding network workloads.
-![Communication between Azure virtual machines with and without accelerated networking](./media/create-vm-accelerated-networking/accelerated-networking.png)
+The following diagram illustrates how two VMs communicate with and without Accelerated Networking:
-Without accelerated networking, all networking traffic in and out of the VM must traverse the host and the virtual switch. The virtual switch provides all policy enforcement, such as network security groups, access control lists, isolation, and other network virtualized services to network traffic.
+![Screenshot that shows communication between Azure VMs with and without Accelerated Networking.](./media/create-vm-accelerated-networking/accelerated-networking.png)
-> [!NOTE]
-> To learn more about virtual switches, see [Hyper-V Virtual Switch](/windows-server/virtualization/hyper-v-virtual-switch/hyper-v-virtual-switch).
+**Without Accelerated Networking**, all networking traffic in and out of the VM traverses the host and the virtual switch. The virtual switch provides all policy enforcement to network traffic. Policies include network security groups, access control lists, isolation, and other network virtualized services. To learn more about virtual switches, see [Hyper-V Virtual Switch](/windows-server/virtualization/hyper-v-virtual-switch/hyper-v-virtual-switch).
-With accelerated networking, network traffic arrives at the VM's network interface (NIC) and is then forwarded to the VM. All network policies that the virtual switch applies are now offloaded and applied in hardware. Because policy is applied in hardware, the NIC can forward network traffic directly to the VM. The NIC bypasses the host and the virtual switch, while it maintains all the policy it applied in the host.
-
-The benefits of accelerated networking only apply to the VM that it's enabled on. For the best results, enable this feature on at least two VMs connected to the same Azure virtual network. When communicating across virtual networks or connecting on-premises, this feature has minimal impact to overall latency.
+**With Accelerated Networking**, network traffic that arrives at the VM's network interface (NIC) forwards directly to the VM. Accelerated Networking offloads all network policies that the virtual switch applied, and applies them in hardware. Because hardware applies policy, the NIC can forward network traffic directly to the VM. The NIC bypasses the host and the virtual switch, while it maintains all the policy it applied in the host.
## Benefits -- **Lower Latency / Higher packets per second (pps)**: Eliminating the virtual switch from the data path removes the time packets spend in the host for policy processing. It also increases the number of packets that can be processed inside the VM.
+Accelerated Networking has the following benefits:
-- **Reduced jitter**: Virtual switch processing depends on the amount of policy that needs to be applied. It also depends on the workload of the CPU that's doing the processing. Offloading the policy enforcement to the hardware removes that variability by delivering packets directly to the VM. Offloading also removes the host-to-VM communication, all software interrupts, and all context switches.
+- **Lower latency and higher packets per second (pps).** Removing the virtual switch from the data path eliminates the time packets spend in the host for policy processing, and increases the number of packets that the VM can process.
-- **Decreased CPU utilization**: Bypassing the virtual switch in the host leads to less CPU utilization for processing network traffic.
+- **Reduced jitter.** Virtual switch processing time depends on the amount of policy to apply and the workload of the CPU that does the processing. Offloading policy enforcement to the hardware removes that variability by delivering packets directly to the VM. Offloading also removes the host-to-VM communication, all software interrupts, and all context switches.
-## Supported operating systems
+- **Decreased CPU utilization.** Bypassing the virtual switch in the host leads to less CPU utilization for processing network traffic.
-The following versions of Windows are supported:
+## Limitations and constraints
-- **Windows Server 2022**-- **Windows Server 2019 Standard/Datacenter**-- **Windows Server 2016 Standard/Datacenter** -- **Windows Server 2012 R2 Standard/Datacenter**-- **Windows 10, version 21H2 or later** _(includes Windows 10 Enterprise multi-session)_-- **Windows 11** _(includes Windows 11 Enterprise multi-session)_
+- The benefits of Accelerated Networking apply only to the VM that enables it.
-The following distributions are supported out of the box from the Azure Gallery:
-- **Ubuntu 14.04 with the linux-azure kernel**-- **Ubuntu 16.04 or later** -- **SLES12 SP3 or later** -- **RHEL 7.4 or later**-- **CentOS 7.4 or later**-- **CoreOS Linux**-- **Debian "Stretch" with backports kernel, Debian "Buster" or later**-- **Oracle Linux 7.4 and later with Red Hat Compatible Kernel (RHCK)**-- **Oracle Linux 7.5 and later with UEK version 5**-- **FreeBSD 10.4, 11.1 & 12.0 or later**
+- For best results, you should enable Accelerated Networking on at least two VMs in the same Azure virtual network. This feature has minimal impact on latency when you communicate across virtual networks or connect on-premises.
-## Limitations and constraints
+- You can't enable Accelerated Networking on a running VM. You can enable Accelerated Networking on a supported VM only when the VM is stopped and deallocated.
+
+- You can't deploy virtual machines (classic) with Accelerated Networking through Azure Resource Manager.
+
+### Supported regions
+
+Accelerated Networking is available in all global Azure regions and the Azure Government Cloud.
+
+### Supported operating systems
+
+The following versions of Windows support Accelerated Networking:
+
+- Windows Server 2022
+- Windows Server 2019 Standard/Datacenter
+- Windows Server 2016 Standard/Datacenter
+- Windows Server 2012 R2 Standard/Datacenter
+- Windows 10, version 21H2 or later, including Windows 10 Enterprise multisession
+- Windows 11, including Windows 11 Enterprise multisession
+
+The following Linux and FreeBSD distributions from the Azure Gallery support Accelerated Networking out of the box:
+
+- Ubuntu 14.04 with the linux-azure kernel
+- Ubuntu 16.04 or later
+- SLES12 SP3 or later
+- RHEL 7.4 or later
+- CentOS 7.4 or later
+- CoreOS Linux
+- Debian "Stretch" with backports kernel
+- Debian "Buster" or later
+- Oracle Linux 7.4 and later with Red Hat Compatible Kernel (RHCK)
+- Oracle Linux 7.5 and later with UEK version 5
+- FreeBSD 10.4, 11.1 & 12.0 or later
### Supported VM instances
-Accelerated Networking is supported on most general purpose and compute-optimized instance sizes with 2 or more vCPUs. On instances that support hyperthreading, Accelerated Networking is supported on VM instances with 4 or more vCPUs.
+- Most general-purpose and compute-optimized VM instance sizes with two or more vCPUs support Accelerated Networking. On instances that support hyperthreading, VM instances with four or more vCPUs support Accelerated Networking.
-Support for Accelerated Networking can be found in the individual [virtual machine sizes](../virtual-machines/sizes.md) documentation.
+- To check whether a VM size supports Accelerated Networking, see [Sizes for virtual machines in Azure](../virtual-machines/sizes.md).
-The list of Virtual Machine SKUs that support Accelerated Networking can be queried directly via the following Azure CLI [`az vm list-skus`](/cli/azure/vm#az-vm-list-skus) command.
+- You can directly query the list of VM SKUs that support Accelerated Networking by using the Azure CLI [az vm list-skus](/cli/azure/vm#az-vm-list-skus) command.
-> [!NOTE]
-> Although NC and NV sizes will show in the command below, they do not support Accelerated Networking. Enabling Accelerated Networking on NC or NV VMs will have no effect.
+ ```azurecli-interactive
+ az vm list-skus \
+ --location westus \
+ --all true \
+ --resource-type virtualMachines \
+ --query '[].{size:size, name:name, acceleratedNetworkingEnabled: capabilities[?name==`AcceleratedNetworkingEnabled`].value | [0]}' \
+ --output table
+ ```
-```azurecli-interactive
-az vm list-skus \
- --location westus \
- --all true \
- --resource-type virtualMachines \
- --query '[].{size:size, name:name, acceleratedNetworkingEnabled: capabilities[?name==`AcceleratedNetworkingEnabled`].value | [0]}' \
- --output table
-```
+ >[!NOTE]
+ >Although NC and NV sizes appear in the command output, those sizes don't support Accelerated Networking. Enabling Accelerated Networking on NC or NV VMs has no effect.
-### Custom images (or) Azure compute gallery images
+### Custom VM images
-If you're using a custom image and your image supports Accelerated Networking, make sure that you have the required drivers to work with Mellanox ConnectX-3, ConnectX-4 Lx, and ConnectX-5 NICs on Azure. Also, Accelerated Networking requires network configurations that exempt the configuration of the virtual functions (mlx4_en and mlx5_core drivers). In images that have cloud-init >=19.4, networking is correctly configured to support Accelerated Networking during provisioning.
+If you use a custom image that supports Accelerated Networking, make sure you have the required drivers to work with Mellanox ConnectX-3, ConnectX-4 Lx, and ConnectX-5 NICs on Azure. Accelerated Networking also requires network configurations that exempt configuration of the virtual functions on the mlx4_en and mlx5_core drivers. Images with cloud-init version 19.4 or greater have networking correctly configured to support Accelerated Networking during provisioning.
+The following example shows a sample configuration drop-in for `NetworkManager` on RHEL or CentOS:
-Sample configuration drop-in for NetworkManager (RHEL, CentOS):
-```
+```bash
sudo mkdir -p /etc/NetworkManager/conf.d sudo tee /etc/NetworkManager/conf.d/99-azure-unmanaged-devices.conf <<EOF
-# Ignore SR-IOV interface on Azure, since it'll be transparently bonded
+# Ignore SR-IOV interface on Azure, since it's transparently bonded
# to the synthetic interface [keyfile] unmanaged-devices=driver:mlx4_core;driver:mlx5_core EOF ```
-Sample configuration drop-in for networkd (Ubuntu, Debian, Flatcar):
-```
+The following example shows a sample configuration drop-in for `networkd` on Ubuntu, Debian, or Flatcar:
+
+```bash
sudo mkdir -p /etc/systemd/network sudo tee /etc/systemd/network/99-azure-unmanaged-devices.network <<EOF
-# Ignore SR-IOV interface on Azure, since it'll be transparently bonded
+# Ignore SR-IOV interface on Azure, since it's transparently bonded
# to the synthetic interface [Match] Driver=mlx4_en mlx5_en mlx4_core mlx5_core
Unmanaged=yes
EOF ```
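After a VM is provisioned from such an image, one quick in-guest check (a sketch; the exact output varies by VM size and NIC generation) is to confirm that the SR-IOV virtual function is visible as a Mellanox device:

```bash
# A Mellanox ConnectX entry in the output indicates the VF was exposed to the guest
lspci | grep -i mellanox
```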
-### Regions
-
-Accelerated networking is available in all global Azure regions and Azure Government Cloud.
-
-### Enabling accelerated networking on a running VM
-
-A supported VM size without accelerated networking enabled can only have the feature enabled when it's stopped and deallocated.
-
-### Deployment through Azure Resource Manager
-
-Virtual machines (classic) can't be deployed with accelerated networking.
- ## Next steps
-* Learn [how Accelerated Networking works](./accelerated-networking-how-it-works.md)
-* Learn how to [create a VM with Accelerated Networking in PowerShell](./create-vm-accelerated-networking-powershell.md)
-* Learn how to [create a VM with Accerelated Networking using Azure CLI](./create-vm-accelerated-networking-cli.md)
-* Improve latency with an [Azure proximity placement group](../virtual-machines/co-location.md)
+- [How Accelerated Networking works in Linux and FreeBSD VMs](./accelerated-networking-how-it-works.md)
+- [Create a VM with Accelerated Networking by using PowerShell](./create-vm-accelerated-networking-powershell.md)
+- [Create a VM with Accelerated Networking by using Azure CLI](./create-vm-accelerated-networking-cli.md)
+- [Proximity placement groups](../virtual-machines/co-location.md)
virtual-network Container Networking Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/container-networking-overview.md
Title: Container networking with Azure Virtual Network description: Learn about the Azure Virtual Network container network interface (CNI) plug-in and how to enable containers to use an Azure Virtual Network.- -
-tags: azure-resource-manager
- Previously updated : 9/18/2018 Last updated : 03/25/2023
Bring the rich set of Azure network capabilities to containers by utilizing the same software-defined networking stack that powers virtual machines. The Azure Virtual Network container network interface (CNI) plug-in installs in an Azure Virtual Machine. The plug-in assigns IP addresses from a virtual network to containers brought up in the virtual machine, attaching them to the virtual network, and connecting them directly to other containers and virtual network resources. The plug-in doesn't rely on overlay networks, or routes, for connectivity, and provides the same performance as virtual machines. At a high level, the plug-in provides the following capabilities: - A virtual network IP address is assigned to every Pod, which could consist of one or more containers.+ - Pods can connect to peered virtual networks and to on-premises over ExpressRoute or a site-to-site VPN. Pods are also reachable from peered and on-premises networks.-
+- Pods can access services such as Azure Storage and Azure SQL Database that are protected by virtual network service endpoints.
- Network security groups and routes can be applied directly to Pods.+ - Pods can be placed directly behind an Azure internal or public Load Balancer, just like virtual machines.+ - Pods can be assigned a public IP address, which makes them directly accessible from the internet. Pods can also access the internet themselves.+ - Works seamlessly with Kubernetes resources such as Services, Ingress controllers, and Kube DNS. A Kubernetes Service can also be exposed internally or externally through the Azure Load Balancer. The following picture shows how the plug-in provides Azure Virtual Network capabilities to Pods:
-![Container networking overview](./media/container-networking/container-networking-overview.png)
The plug-in supports both Linux and Windows platforms.
The plug-in supports both Linux and Windows platforms.
Pods are brought up in a virtual machine that is part of a virtual network. A pool of IP addresses for the Pods is configured as secondary addresses on a virtual machine's network interface. Azure CNI sets up the basic Network connectivity for Pods and manages the utilization of the IP addresses in the pool. When a Pod comes up in the virtual machine, Azure CNI assigns an available IP address from the pool and connects the Pod to a software bridge in the virtual machine. When the Pod terminates, the IP address is added back to the pool. The following picture shows how Pods connect to a virtual network:
-![Container networking detail](./media/container-networking/container-networking-detail.png)
## Internet access
The plug-in supports up to 250 Pods per virtual machine and up to 16,000 Pods in
The plug-in can be used in the following ways, to provide basic virtual network attach for Pods or Docker containers: - **Azure Kubernetes Service**: The plug-in is integrated into the Azure Kubernetes Service (AKS), and can be used by choosing the *Advanced Networking* option. Advanced Networking lets you deploy a Kubernetes cluster in an existing, or a new, virtual network. To learn more about Advanced Networking and the steps to set it up, see [Network configuration in AKS](../aks/configure-azure-cni.md?toc=%2fazure%2fvirtual-network%2ftoc.json).+ - **AKS-Engine**: AKS-Engine is a tool that generates an Azure Resource Manager template for the deployment of a Kubernetes cluster in Azure. For detailed instructions, see [Deploy the plug-in for AKS-Engine Kubernetes clusters](deploy-container-networking.md#deploy-the-azure-virtual-network-container-network-interface-plug-in).+ - **Creating your own Kubernetes cluster in Azure**: The plug-in can be used to provide basic networking for Pods in Kubernetes clusters that you deploy yourself, without relying on AKS, or tools like the AKS-Engine. In this case, the plug-in is installed and enabled on every virtual machine in a cluster. For detailed instructions, see [Deploy the plug-in for a Kubernetes cluster that you deploy yourself](deploy-container-networking.md#deploy-plug-in-for-a-kubernetes-cluster).+ - **Virtual network attach for Docker containers in Azure**: The plug-in can be used in cases where you don't want to create a Kubernetes cluster, and would like to create Docker containers with virtual network attach, in virtual machines. For detailed instructions, see [Deploy the plug-in for Docker](deploy-container-networking.md#deploy-plug-in-for-docker-containers). ## Next steps
-[Deploy the plug-in](deploy-container-networking.md) for Kubernetes clusters or Docker containers
+* [Deploy container networking for a stand-alone Linux Docker host](/azure/virtual-network/deploy-container-networking-docker-linux)
+
+* [Deploy container networking for a stand-alone Windows Docker host](/azure/virtual-network/deploy-container-networking-docker-windows)
+
+* [Deploy the plug-in](deploy-container-networking.md) for Kubernetes clusters or Docker containers
virtual-network Create Vm Accelerated Networking Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/create-vm-accelerated-networking-cli.md
Title: Create an Azure VM with Accelerated Networking using Azure CLI
-description: Learn how to create a Linux virtual machine with Accelerated Networking enabled.
+ Title: Use Azure CLI to create a Windows or Linux VM with Accelerated Networking
+description: Use Azure CLI to create and manage virtual machines that have Accelerated Networking enabled for improved network performance.
tags: azure-resource-manager
Previously updated : 03/24/2022 Last updated : 03/20/2023
-# Create a Linux virtual machine with Accelerated Networking using Azure CLI
+# Use Azure CLI to create a Windows or Linux VM with Accelerated Networking
-## Portal creation
+This article describes how to create a Linux or Windows virtual machine (VM) with Accelerated Networking (AccelNet) enabled by using the Azure CLI command-line interface. The article also discusses how to enable and manage Accelerated Networking on existing VMs.
-Though this article provides steps to create a virtual machine with accelerated networking using the Azure CLI, you can also [create a virtual machine with accelerated networking using the Azure portal](../virtual-machines/linux/quick-create-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json). When creating a virtual machine in the portal, in the **Create a virtual machine** blade, choose the **Networking** tab. In this tab, there is an option for **Accelerated networking**. If you have chosen a [supported operating system](./accelerated-networking-overview.md#supported-operating-systems) and [VM size](./accelerated-networking-overview.md#supported-vm-instances), this option will automatically populate to "On." If not, it will populate the "Off" option for Accelerated Networking and give the user a reason why it isn't enabled.
-You can also enable or disable accelerated networking through the portal after VM creation by navigating to the network interface and clicking the button at the top of the **Overview** blade.
+You can also create a VM with Accelerated Networking enabled by using the [Azure portal](quick-create-portal.md). For more information about using the Azure portal to manage Accelerated Networking on VMs, see [Manage Accelerated Networking through the portal](#manage-accelerated-networking-through-the-portal).
->[!NOTE]
-> The Accelerated Networking setting in the portal reflects the user-selected state. AccelNet allows choosing "Disabled" even if the VM size requires AccelNet. For those AccelNet-required VM sizes, AccelNet will be enabled at runtime regardless of the user setting seen in the portal.
->
-> Only supported operating systems can be enabled through the portal. If you're using a custom image, and your image supports Accelerated Networking, create your VM using CLI or PowerShell.
+To use Azure PowerShell to create a Windows VM with Accelerated Networking enabled, see [Use Azure PowerShell to create a VM with Accelerated Networking](create-vm-accelerated-networking-powershell.md).
-After the VM is created, you can confirm that Accelerated Networking is enabled by following the [confirmation instructions](#confirm-that-accelerated-networking-is-enabled).
+## Prerequisites
-## CLI creation
-### Create a virtual network
+- An Azure account with an active subscription. You can [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- The latest version of [Azure CLI installed](/cli/azure/install-azure-cli). Sign in to Azure by using the [az login](/cli/azure/reference-index#az-login) command.
-Install the latest [Azure CLI](/cli/azure/install-azure-cli) and log in to an Azure account using [az login](/cli/azure/reference-index). In the following examples, replace example parameter names with your own values. Example parameter names included *myResourceGroup*, *myNic*, and *myVm*.
+## Create a VM with Accelerated Networking
-Create a resource group with [az group create](/cli/azure/group). The following example creates a resource group named *myResourceGroup* in the *centralus* location:
+In the following examples, you can replace the example parameters such as `<myResourceGroup>`, `<myNic>`, and `<myVm>` with your own values.
-```azurecli
-az group create --name myResourceGroup --location centralus
-```
+### Create a virtual network
-Select a supported Linux region listed in [Linux Accelerated Networking](https://azure.microsoft.com/updates/accelerated-networking-in-expanded-preview).
+1. Use [az group create](/cli/azure/group#az-group-create) to create a resource group to contain the resources. Be sure to select a supported Windows or Linux region as listed in [Windows and Linux Accelerated Networking](https://azure.microsoft.com/updates/accelerated-networking-in-expanded-preview).
-Create a virtual network with [az network vnet create](/cli/azure/network/vnet). The following example creates a virtual network named *myVnet* with one subnet:
+ ```azurecli
+ az group create --name <myResourceGroup> --location <myAzureRegion>
+ ```
-```azurecli
-az network vnet create \
- --resource-group myResourceGroup \
- --name myVnet \
- --address-prefix 192.168.0.0/16 \
- --subnet-name mySubnet \
- --subnet-prefix 192.168.1.0/24
-```
+1. Use [az network vnet create](/cli/azure/network/vnet#az-network-vnet-create) to create a virtual network with one subnet in the resource group:
+
+ ```azurecli
+ az network vnet create \
+ --resource-group <myResourceGroup> \
+ --name <myVnet> \
+ --address-prefix 192.168.0.0/16 \
+ --subnet-name <mySubnet> \
+ --subnet-prefix 192.168.1.0/24
+ ```
### Create a network security group
-Create a network security group with [az network nsg create](/cli/azure/network/nsg). The following example creates a network security group named *myNetworkSecurityGroup*:
-```azurecli
-az network nsg create \
- --resource-group myResourceGroup \
- --name myNetworkSecurityGroup
-```
+1. Use [az network nsg create](/cli/azure/network/nsg#az-network-nsg-create) to create a network security group (NSG).
+
+ ```azurecli
+ az network nsg create \
+ --resource-group <myResourceGroup> \
+ --name <myNsg>
+ ```
+
+1. The NSG contains several default rules, one of which disables all inbound access from the internet. Use [az network nsg rule create](/cli/azure/network/nsg/rule#az-network-nsg-rule-create) to open a port that allows Remote Desktop Protocol (RDP) or Secure Shell (SSH) access to the VM.
+
+ # [Windows](#tab/windows)
+
+ ```azurecli
+ az network nsg rule create \
+ --resource-group <myResourceGroup> \
+ --nsg-name <myNsg> \
+ --name Allow-RDP-Internet \
+ --access Allow \
+ --protocol Tcp \
+ --direction Inbound \
+ --priority 100 \
+ --source-address-prefix Internet \
+ --source-port-range "*" \
+ --destination-address-prefix "*" \
+ --destination-port-range 3389
+ ```
+
+ # [Linux](#tab/linux)
+
+ ```azurecli
+ az network nsg rule create \
+ --resource-group <myResourceGroup> \
+ --nsg-name <myNsg> \
+ --name Allow-SSH-Internet \
+ --access Allow \
+ --protocol Tcp \
+ --direction Inbound \
+ --priority 100 \
+ --source-address-prefix Internet \
+ --source-port-range "*" \
+ --destination-address-prefix "*" \
+ --destination-port-range 22
+ ```
-The network security group contains several default rules, one of which disables all inbound access from the Internet. Open a port to allow SSH access to the virtual machine with [az network nsg rule create](/cli/azure/network/nsg/rule):
+
+### Create a network interface with Accelerated Networking
-```azurecli
-az network nsg rule create \
- --resource-group myResourceGroup \
- --nsg-name myNetworkSecurityGroup \
- --name Allow-SSH-Internet \
- --access Allow \
- --protocol Tcp \
- --direction Inbound \
- --priority 100 \
- --source-address-prefix Internet \
- --source-port-range "*" \
- --destination-address-prefix "*" \
- --destination-port-range 22
-```
+1. Use [az network public-ip create](/cli/azure/network/public-ip#az-network-public-ip-create) to create a public IP address. The VM doesn't need a public IP address if you don't access it from the internet, but you need the public IP to complete the steps for this article.
-### Create a network interface with Accelerated Networking
+ ```azurecli
+ az network public-ip create \
+ --name <myPublicIp> \
+ --resource-group <myResourceGroup>
+ ```
-Create a public IP address with [az network public-ip create](/cli/azure/network/public-ip). A public IP address isn't required if you don't plan to access the VM from the Internet. However, it's required to complete the steps in this article.
+1. Use [az network nic create](/cli/azure/network/nic#az-network-nic-create) to create a network interface (NIC) with Accelerated Networking enabled. The following example creates a NIC in the subnet of the virtual network, and associates the NSG to the NIC.
-```azurecli
-az network public-ip create \
- --name myPublicIp \
- --resource-group myResourceGroup
-```
+ ```azurecli
+ az network nic create \
+ --resource-group <myResourceGroup> \
+ --name <myNic> \
+ --vnet-name <myVnet> \
+ --subnet <mySubnet> \
+ --accelerated-networking true \
+ --public-ip-address <myPublicIp> \
+ --network-security-group <myNsg>
+ ```
+
+### Create a VM and attach the NIC
-Create a network interface with [az network nic create](/cli/azure/network/nic) with Accelerated Networking enabled. The following example creates a network interface named *myNic* in the *mySubnet* subnet of the *myVnet* virtual network and associates the *myNetworkSecurityGroup* network security group to the network interface:
+Use [az vm create](/cli/azure/vm#az-vm-create) to create the VM, and use the `--nics` option to attach the NIC you created. Make sure to select a VM size and distribution that are listed in [Windows and Linux Accelerated Networking](https://azure.microsoft.com/updates/accelerated-networking-in-expanded-preview). For a list of all VM sizes and characteristics, see [Sizes for virtual machines in Azure](../virtual-machines/sizes.md).
+
+# [Windows](#tab/windows)
+
+The following example creates a Windows Server 2019 Datacenter VM with a size that supports Accelerated Networking, Standard_DS4_v2.
```azurecli
-az network nic create \
- --resource-group myResourceGroup \
- --name myNic \
- --vnet-name myVnet \
- --subnet mySubnet \
- --accelerated-networking true \
- --public-ip-address myPublicIp \
- --network-security-group myNetworkSecurityGroup
+az vm create \
+ --resource-group <myResourceGroup> \
+ --name <myVm> \
+ --image Win2019Datacenter \
+ --size Standard_DS4_v2 \
+ --admin-username <myAdminUser> \
+ --admin-password <myAdminPassword> \
+ --nics <myNic>
```
-### Create a VM and attach the NIC
-When you create the VM, specify the NIC you created with `--nics`. Select a size and distribution listed in [Linux accelerated networking](https://azure.microsoft.com/updates/accelerated-networking-in-expanded-preview).
+# [Linux](#tab/linux)
-Create a VM with [az vm create](/cli/azure/vm). The following example creates a VM named *myVM* with the UbuntuLTS image and a size that supports Accelerated Networking (*Standard_DS4_v2*):
+The following example creates a VM with the UbuntuLTS OS image and a size that supports Accelerated Networking, Standard_DS4_v2.
```azurecli
az vm create \
- --resource-group myResourceGroup \
- --name myVM \
- --image UbuntuLTS \
- --size Standard_DS4_v2 \
- --admin-username azureuser \
- --generate-ssh-keys \
- --nics myNic
+ --resource-group <myResourceGroup> \
+ --name <myVm> \
+ --image UbuntuLTS \
+ --size Standard_DS4_v2 \
+ --admin-username <myAdminUser> \
+ --generate-ssh-keys \
+ --nics <myNic>
```
-For a list of all VM sizes and characteristics, see [Linux VM sizes](../virtual-machines/sizes.md?toc=%2fazure%2fvirtual-network%2ftoc.json).
+
-Once the VM is created, output similar to the following example output is returned. Take note of the **publicIpAddress**. This address is used to access the VM in subsequent steps.
+After the VM is created, you get output similar to the following example. For a Linux machine, take note of the `publicIpAddress`, which you enter to access the VM in the next step.
```output
{
  "fqdns": "",
-   "id": "/subscriptions/<ID>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM",
+   "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVm",
  "location": "centralus",
  "macAddress": "00-0D-3A-23-9A-49",
  "powerState": "VM running",
}
```
-### Confirm that accelerated networking is enabled
+## Confirm that accelerated networking is enabled
-Use the following command to create an SSH session with the VM. Replace `<your-public-ip-address>` with the public IP address assigned to the virtual machine that you created, and replace *azureuser* if you used a different value for `--admin-username` when you created the VM.
+# [Windows](#tab/windows)
-```bash
-ssh azureuser@<your-public-ip-address>
-```
+Once you create the VM in Azure, connect to the VM and confirm that the Ethernet controller is installed in Windows.
+
+1. In the [Azure portal](https://portal.azure.com), search for and select *virtual machines*.
+
+1. On the **Virtual machines** page, select your new VM.
+
+1. On the VM's **Overview** page, select **Connect**.
+
+1. On the **Connect** screen, select **Native RDP**.
-From the Bash shell, enter `uname -r` and confirm that the kernel version is one of the following versions, or greater:
+1. On the **Native RDP** screen, select **Download RDP file**.
-* **Ubuntu 16.04**: 4.11.0-1013
-* **SLES SP3**: 4.4.92-6.18
-* **RHEL**: 3.10.0-693, 2.6.32-573*
-* **CentOS**: 3.10.0-693
+1. Open the downloaded RDP file, and then sign in with the credentials you entered when you created the VM.
+
+1. On the remote VM, right-click **Start** and select **Device Manager**.
+
+1. In the **Device Manager** window, expand the **Network adapters** node.
+
+1. Confirm that the **Mellanox ConnectX-4 Lx Virtual Ethernet Adapter** appears, as shown in the following image:
+
    ![Screenshot of Device Manager showing the Mellanox ConnectX-4 Lx Virtual Ethernet Adapter under Network adapters.](./media/create-vm-accelerated-networking/device-manager.png)
+
+ The presence of the adapter confirms that Accelerated Networking is enabled for your VM.
> [!NOTE]
-> Other kernel versions may be supported. For the most up to date list, reference the compatibility tables for each distrubution at [Supported Linux and FreeBSD virtual machines for Hyper-V](/windows-server/virtualization/hyper-v/supported-linux-and-freebsd-virtual-machines-for-hyper-v-on-windows) and confirm that SR-IOV is supported. Additional details can be found in the release notes for the [Linux Integration Services for Hyper-V and Azure](https://www.microsoft.com/download/details.aspx?id=55106). * RHEL 6.7-6.10 are supported if the Mellanox VF version 4.5+ is installed before Linux Integration Services 4.3+.
+> If the Mellanox adapter fails to start, open an administrator command prompt on the remote VM and enter the following command:
+>
+> `netsh int tcp set global rss = enabled`
-Confirm that the Mellanox VF device is exposed to the VM with the `lspci` command. The returned output is similar to the following output:
+# [Linux](#tab/linux)
-```output
-0000:00:00.0 Host bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX Host bridge (AGP disabled) (rev 03)
-0000:00:07.0 ISA bridge: Intel Corporation 82371AB/EB/MB PIIX4 ISA (rev 01)
-0000:00:07.1 IDE interface: Intel Corporation 82371AB/EB/MB PIIX4 IDE (rev 01)
-0000:00:07.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 02)
-0000:00:08.0 VGA compatible controller: Microsoft Corporation Hyper-V virtual VGA
-0001:00:02.0 Ethernet controller: Mellanox Technologies MT27500/MT27520 Family [ConnectX-3/ConnectX-3 Pro Virtual Function]
-```
+1. Use the following command to create an SSH session with the VM. Replace `<myPublicIp>` with the public IP address assigned to the VM you created, and replace `<myAdminUser>` with the `--admin-username` you specified when you created the VM.
-Check for activity on the VF (virtual function) with the `ethtool -S eth0 | grep vf_` command. If you receive output similar to the following sample output, accelerated networking is enabled and active.
+ ```bash
+ ssh <myAdminUser>@<myPublicIp>
+ ```
-```output
-vf_rx_packets: 992956
-vf_rx_bytes: 2749784180
-vf_tx_packets: 2656684
-vf_tx_bytes: 1099443970
-vf_tx_dropped: 0
-```
-Accelerated Networking is now enabled for your VM.
+1. From a Bash shell on the remote VM, enter `uname -r` and confirm that the kernel version is one of the following versions, or greater:
-## Handle dynamic binding and revocation of virtual function
-Applications must run over the synthetic NIC that is exposed in VM. If the application runs directly over the VF NIC, it doesn't receive **all** packets that are destined to the VM, since some packets show up over the synthetic interface. If you run an application over the synthetic NIC, it guarantees that the application receives **all** packets that are destined to it. It also makes sure that the application keeps running, even if the VF is revoked during host servicing. Applications binding to the synthetic NIC is a **mandatory** requirement for all applications taking advantage of **Accelerated Networking**.
+ - **Ubuntu 16.04**: 4.11.0-1013.
+ - **SLES SP3**: 4.4.92-6.18.
+ - **RHEL**: 3.10.0-693, 2.6.32-573. RHEL 6.7-6.10 are supported if the Mellanox VF version 4.5+ is installed before Linux Integration Services 4.3+.
+ - **CentOS**: 3.10.0-693.
-For more details on application binding requirements, see [How Accelerated Networking works in Linux and FreeBSD VMs](./accelerated-networking-how-it-works.md#application-usage).
+ > [!NOTE]
+   > Other kernel versions may be supported. For an updated list, see the compatibility tables for each distribution at [Supported Linux and FreeBSD virtual machines for Hyper-V](/windows-server/virtualization/hyper-v/supported-linux-and-freebsd-virtual-machines-for-hyper-v-on-windows), and confirm that SR-IOV is supported. You can find more details in the release notes for [Linux Integration Services for Hyper-V and Azure](https://www.microsoft.com/download/details.aspx?id=55106).
-## Enable Accelerated Networking on existing VMs
-If you've created a VM without Accelerated Networking, it's possible to enable this feature on an existing VM. The VM must support Accelerated Networking by meeting the following prerequisites that are also outlined:
+1. Use the `lspci` command to confirm that the Mellanox VF device is exposed to the VM. The returned output should be similar to the following example:
-* The VM must be a supported size for Accelerated Networking.
-* The VM must be a supported Azure Gallery image (and kernel version for Linux).
-* All VMs in an availability set or VMSS must be stopped/deallocated before enabling Accelerated Networking on any NIC.
-* All individual VMs that are not in an availability set or VMSS must also be stopped/deallocated before enabling Accelerated Networking on any NIC.
+ ```output
+ 0000:00:00.0 Host bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX Host bridge (AGP disabled) (rev 03)
+ 0000:00:07.0 ISA bridge: Intel Corporation 82371AB/EB/MB PIIX4 ISA (rev 01)
+ 0000:00:07.1 IDE interface: Intel Corporation 82371AB/EB/MB PIIX4 IDE (rev 01)
+ 0000:00:07.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 02)
+ 0000:00:08.0 VGA compatible controller: Microsoft Corporation Hyper-V virtual VGA
+ 0001:00:02.0 Ethernet controller: Mellanox Technologies MT27500/MT27520 Family [ConnectX-3/ConnectX-3 Pro Virtual Function]
+ ```
-### Individual VMs & VMs in an availability set
-First stop/deallocate the VM or, if an Availability Set, all the VMs in the Set:
+1. Use the `ethtool -S eth0 | grep vf_` command to check for activity on the virtual function (VF). If accelerated networking is enabled and active, you receive output similar to the following example:
-```azurecli
-az vm deallocate \
- --resource-group myResourceGroup \
- --name myVM
-```
+ ```output
+ vf_rx_packets: 992956
+ vf_rx_bytes: 2749784180
+ vf_tx_packets: 2656684
+ vf_tx_bytes: 1099443970
+ vf_tx_dropped: 0
+ ```
-If your VM was created individually without an availability set, you only must stop or deallocate the individual VM to enable Accelerated Networking. If your VM was created with an availability set, all VMs contained in the set must be stopped or deallocated before enabling Accelerated Networking on any of the NICs.
+
-Once stopped, enable Accelerated Networking on the NIC of your VM:
+## Handle dynamic binding and revocation of virtual function
-```azurecli
-az network nic update \
- --name myNic \
- --resource-group myResourceGroup \
- --accelerated-networking true
-```
+Binding to the synthetic NIC that's exposed in the VM is a mandatory requirement for all applications that take advantage of Accelerated Networking. If an application runs directly over the VF NIC, it doesn't receive all packets that are destined to the VM, because some packets show up over the synthetic interface.
-Restart your VM or, if in an Availability Set, all the VMs in the Set and confirm that Accelerated Networking is enabled:
+You must run an application over the synthetic NIC to guarantee that the application receives all packets that are destined to it. Binding to the synthetic NIC also ensures that the application keeps running even if the VF is revoked during host servicing.
-```azurecli
-az vm start --resource-group myResourceGroup \
- --name myVM
-```
+For more information about application binding requirements, see [How Accelerated Networking works in Linux and FreeBSD VMs](./accelerated-networking-how-it-works.md#application-usage).
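+
+As an illustrative check on a Linux VM (an assumption for this sketch, not part of the steps above), you can tell the synthetic NIC from the VF by driver name: the synthetic NIC uses the `hv_netvsc` driver, while the Mellanox VF uses an `mlx4` or `mlx5` driver.
+
+```bash
+# Print each network interface with its kernel driver. Bind applications to
+# the synthetic interface (driver hv_netvsc), not to the VF (mlx4/mlx5).
+for nic in /sys/class/net/*; do
+  name=$(basename "$nic")
+  driver=$(basename "$(readlink -f "$nic/device/driver" 2>/dev/null)")
+  echo "$name: ${driver:-none}"
+done
+```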
-### VMSS
-VMSS is slightly different but follows the same workflow. First, stop the VMs:
+<a name="enable-accelerated-networking-on-existing-vms"></a>
+## Manage Accelerated Networking on existing VMs
-```azurecli
-az vmss deallocate \
- --name myvmss \
- --resource-group myrg
-```
+It's possible to enable Accelerated Networking on an existing VM. The VM must meet the following requirements to support Accelerated Networking:
-Once the VMs are stopped, update the Accelerated Networking property under the network interface:
+- Be a supported size for Accelerated Networking.
+- Be a supported Azure Marketplace image and kernel version for Linux.
+- Be stopped or deallocated before you enable Accelerated Networking on any NIC. This requirement applies to an individual VM and to all VMs in an availability set or Azure Virtual Machine Scale Set.
-```azurecli
-az vmss update --name myvmss \
- --resource-group myrg \
- --set virtualMachineProfile.networkProfile.networkInterfaceConfigurations[0].enableAcceleratedNetworking=true
-```
+### Enable Accelerated Networking on individual VMs or VMs in availability sets
->[!NOTE]
-> A VMSS has VM upgrades that apply updates using three different settings, automatic, rolling, and manual. In these instructions, the policy is set to automatic so that the VMSS will pick up the changes immediately after reboot. To set it to automatic so that the changes are immediately picked up:
+1. First, stop and deallocate the VM, or all the VMs in the availability set.
-```azurecli
-az vmss update \
- --name myvmss \
- --resource-group myrg \
- --set upgradePolicy.mode="automatic"
-```
+ ```azurecli
+ az vm deallocate \
+ --resource-group <myResourceGroup> \
+ --name <myVm>
+ ```
-Finally, restart the VMSS:
+ If you created your VM individually without an availability set, you must stop or deallocate only the individual VM to enable Accelerated Networking. If you created your VM with an availability set, you must stop or deallocate all VMs in the set before you can enable Accelerated Networking on any of the NICs.
-```azurecli
-az vmss start \
- --name myvmss \
- --resource-group myrg
-```
+1. Once the VM is stopped, enable Accelerated Networking on the NIC of your VM.
+
+ ```azurecli
+ az network nic update \
+ --name <myNic> \
+ --resource-group <myResourceGroup> \
+ --accelerated-networking true
+ ```
+
+1. Restart your VM, or all the VMs in the availability set, and [confirm that Accelerated Networking is enabled](#confirm-that-accelerated-networking-is-enabled).
+
+ ```azurecli
+ az vm start --resource-group <myResourceGroup> \
+ --name <myVm>
+ ```
-Once you restart, wait for the upgrades to finish but once completed, the VF appears inside the VM. (Make sure you're using a supported OS and VM size.)
+### Enable Accelerated Networking on Virtual Machine Scale Sets
-### Resizing existing VMs with Accelerated Networking
+Azure Virtual Machine Scale Sets is slightly different, but follows the same workflow.
-VMs with Accelerated Networking enabled can only be resized to VMs that support Accelerated Networking.
+1. First, stop the VMs:
-A VM with Accelerated Networking enabled can't be resized to a VM instance that doesn't support Accelerated Networking using the resize operation. Instead, to resize one of these VMs:
+ ```azurecli
+ az vmss deallocate \
+ --name <myVmss> \
+ --resource-group <myResourceGroup>
+ ```
-* Stop/Deallocate the VM or if in an availability set/VMSS, stop/deallocate all the VMs in the set/VMSS.
-* Accelerated Networking must be disabled on the NIC of the VM or if in an availability set/VMSS, all VMs in the set/VMSS.
-* Once Accelerated Networking is disabled, the VM/availability set/VMSS can be moved to a new size that doesn't support Accelerated Networking and restarted.
+1. Once the VMs are stopped, update the Accelerated Networking property under the network interface.
+
+ ```azurecli
+ az vmss update --name <myVmss> \
+ --resource-group <myResourceGroup> \
+ --set virtualMachineProfile.networkProfile.networkInterfaceConfigurations[0].enableAcceleratedNetworking=true
+ ```
+
+1. Virtual Machine Scale Sets has an upgrade policy that applies updates by using automatic, rolling, or manual settings. The following instructions set the policy to automatic so Virtual Machine Scale Sets picks up the changes immediately after restart.
+
+ ```azurecli
+ az vmss update \
+ --name <myVmss> \
+ --resource-group <myResourceGroup> \
+ --set upgradePolicy.mode="automatic"
+ ```
+
+1. Finally, restart Virtual Machine Scale Sets.
+
+ ```azurecli
+ az vmss start \
+ --name <myVmss> \
+ --resource-group <myResourceGroup>
+ ```
+
+Once you restart and the upgrades finish, the VF appears inside VMs that use a supported OS and VM size.
+
+### Resize existing VMs with Accelerated Networking
+
+You can resize VMs with Accelerated Networking enabled only to sizes that also support Accelerated Networking. You can't use the resize operation to move a VM with Accelerated Networking to a VM instance that doesn't support Accelerated Networking. Instead, use the following process, sketched in the CLI example after these steps, to resize these VMs:
+
+1. Stop and deallocate the VM or all the VMs in the availability set or Virtual Machine Scale Sets.
+1. Disable Accelerated Networking on the NIC of the VM or all the VMs in the availability set or Virtual Machine Scale Sets.
+1. Move the VM or VMs to a new size that doesn't support Accelerated Networking, and restart them.
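+
+The following Azure CLI sketch shows this sequence for a single VM; the resource group, VM, NIC, and target size are placeholder values:
+
+```azurecli
+# 1. Stop and deallocate the VM.
+az vm deallocate --resource-group <myResourceGroup> --name <myVm>
+
+# 2. Disable Accelerated Networking on the VM's NIC.
+az network nic update \
+  --resource-group <myResourceGroup> \
+  --name <myNic> \
+  --accelerated-networking false
+
+# 3. Resize to the new size, then restart the VM.
+az vm resize --resource-group <myResourceGroup> --name <myVm> --size <newVmSize>
+az vm start --resource-group <myResourceGroup> --name <myVm>
+```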
+
+## Manage Accelerated Networking through the portal
+
+When you [create a VM in the Azure portal](/azure/virtual-machines/linux/quick-create-portal), you can select the **Enable accelerated networking** checkbox on the **Networking** tab of the **Create a virtual machine** screen.
+
+If the VM uses a [supported operating system](./accelerated-networking-overview.md#supported-operating-systems) and [VM size](./accelerated-networking-overview.md#supported-vm-instances) for Accelerated Networking, the checkbox is automatically selected. If Accelerated Networking isn't supported, the checkbox isn't selected, and a message explains the reason.
+
+>[!NOTE]
+>- You can enable Accelerated Networking during portal VM creation only for Azure Marketplace supported operating systems. To create and enable Accelerated Networking for a VM with a custom OS image, you must use Azure CLI or PowerShell.
+>
+>- The Accelerated Networking setting in the portal shows the user-selected state. Accelerated Networking allows choosing **Disabled** in the portal even if the VM size requires Accelerated Networking. VM sizes that require Accelerated Networking enable Accelerated Networking at runtime regardless of the user setting in the portal.
+
+To enable or disable Accelerated Networking for an existing VM through the Azure portal:
+
+1. From the [Azure portal](https://portal.azure.com) page for the VM, select **Networking** from the left menu.
+1. On the **Networking** page, select the **Network Interface**.
+1. At the top of the NIC **Overview** page, select **Edit accelerated networking**.
+1. Select **Automatic**, **Enabled**, or **Disabled**, and then select **Save**.
+
+To confirm whether Accelerated Networking is enabled for an existing VM (a CLI alternative follows these steps):
+
+1. From the portal page for the VM, select **Networking** from the left menu.
+1. On the **Networking** page, select the **Network Interface**.
+1. On the network interface **Overview** page, under **Essentials**, note whether **Accelerated networking** is set to **Enabled** or **Disabled**.
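+
+Alternatively, you can read the NIC property directly with a CLI query (placeholder names):
+
+```azurecli
+# Returns true when Accelerated Networking is enabled on the NIC.
+az network nic show \
+  --resource-group <myResourceGroup> \
+  --name <myNic> \
+  --query enableAcceleratedNetworking
+```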
## Next steps
-* Learn [how Accelerated Networking works](./accelerated-networking-how-it-works.md)
-* Learn how to [create a VM with Accelerated Networking in PowerShell](../virtual-network/create-vm-accelerated-networking-powershell.md)
-* Improve latency with an [Azure proximity placement group](../virtual-machines/co-location.md)
+
+- [How Accelerated Networking works in Linux and FreeBSD VMs](./accelerated-networking-how-it-works.md)
+- [Create a VM with Accelerated Networking by using PowerShell](../virtual-network/create-vm-accelerated-networking-powershell.md)
+- [Proximity placement groups](../virtual-machines/co-location.md)
virtual-network Create Vm Accelerated Networking Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/create-vm-accelerated-networking-powershell.md
Title: Create Windows VM with accelerated networking - Azure PowerShell
-description: Create a Windows virtual machine (VM) with Accelerated Networking for improved network performance
+ Title: Use PowerShell to create a VM with Accelerated Networking
+description: Use Azure PowerShell to create and manage Windows virtual machines that have Accelerated Networking enabled for improved network performance.
vm-windows Previously updated : 03/22/2022 Last updated : 03/20/2023
-# Create a Windows VM with accelerated networking using Azure PowerShell
+# Use Azure PowerShell to create a VM with Accelerated Networking
-## VM creation using the portal
+This article describes how to use Azure PowerShell to create a Windows virtual machine (VM) with Accelerated Networking (AccelNet) enabled. The article also discusses how to enable and manage Accelerated Networking on existing VMs.
-Though this article provides steps to create a VM with accelerated networking using Azure PowerShell, you can also use the Azure portal to create a virtual machine that enables accelerated networking. When [creating a VM in the Azure Portal](../virtual-machines/windows/quick-create-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json), in the **Create a virtual machine** page, choose the **Networking** tab. This tab has an option for **Accelerated networking**. If you have chosen a [supported operating system](./accelerated-networking-overview.md#supported-operating-systems) and [VM size](./accelerated-networking-overview.md#supported-vm-instances), this option is automatically set to **On**. Otherwise, the option is set to **Off**, and Azure displays the reason why it can't be enabled.
-You can also enable or disable accelerated networking through the portal after VM creation by navigating to the network interface and clicking the button at the top of the **Overview** blade.
+You can also create a VM with Accelerated Networking enabled by using the [Azure portal](quick-create-portal.md). For more information about using the Azure portal to manage Accelerated Networking on VMs, see [Manage Accelerated Networking through the portal](#manage-accelerated-networking-through-the-portal).
-> [!NOTE]
-> Only supported operating systems can be enabled through the portal. If you are using a custom image, and your image supports accelerated networking, please create your VM using CLI or PowerShell.
-
-After you create the VM, you can confirm whether accelerated networking is enabled. Follow these instructions:
-
-1. Go to the [Azure portal](https://portal.azure.com) to manage your VMs. Search for and select **Virtual machines**.
+To use Azure CLI to create a Linux or Windows VM with Accelerated Networking enabled, see [Use Azure CLI to create a VM with Accelerated Networking](create-vm-accelerated-networking-cli.md).
-2. In the virtual machine list, choose your new VM.
+## Prerequisites
-3. In the VM menu bar, choose **Networking**.
+- An Azure account with an active subscription. You can [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-In the network interface information, next to the **Accelerated networking** label, the portal displays either **Disabled** or **Enabled** for the accelerated networking status.
+- [Azure PowerShell](/powershell/azure/install-az-ps) 1.0.0 or later installed. To find your currently installed version, run `Get-Module -ListAvailable Az`. If you need to install or upgrade, install the latest version of the Az module from the [PowerShell Gallery](https://www.powershellgallery.com/packages/Az).
-## VM creation using PowerShell
+- In PowerShell, sign in to your Azure account by using [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount).
-Before you proceed, install [Azure PowerShell](/powershell/azure/install-az-ps) version 1.0.0 or later. To find your currently installed version, run `Get-Module -ListAvailable Az`. If you need to install or upgrade, install the latest version of the Az module from the [PowerShell Gallery](https://www.powershellgallery.com/packages/Az). In a PowerShell session, sign in to an Azure account using [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount).
+## Create a VM with Accelerated Networking
-In the following examples, replace example parameter names with your own values. Example parameter names included *myResourceGroup*, *myNic*, and *myVM*.
+In the following examples, you can replace the example parameters such as `<myResourceGroup>`, `<myNic>`, and `<myVm>` with your own values.
### Create a virtual network
-1. Create a resource group with [New-AzResourceGroup](/powershell/module/az.Resources/New-azResourceGroup). The following command creates a resource group named *myResourceGroup* in the *centralus* location:
+1. Use [New-AzResourceGroup](/powershell/module/az.Resources/New-azResourceGroup) to create a resource group to contain the resources.
- ```azurepowershell
- New-AzResourceGroup -Name "myResourceGroup" -Location "centralus"
- ```
+ ```azurepowershell
+ New-AzResourceGroup -Name "<myResourceGroup>" -Location "<myAzureRegion>"
+ ```
-2. Create a subnet configuration with [New-AzVirtualNetworkSubnetConfig](/powershell/module/az.Network/New-azVirtualNetworkSubnetConfig). The following command creates a subnet named *mySubnet*:
+1. Use [New-AzVirtualNetworkSubnetConfig](/powershell/module/az.Network/New-azVirtualNetworkSubnetConfig) to create a subnet configuration.
- ```azurepowershell
- $subnet = New-AzVirtualNetworkSubnetConfig `
- -Name "mySubnet" `
- -AddressPrefix "192.168.1.0/24"
- ```
+ ```azurepowershell
+ $subnet = New-AzVirtualNetworkSubnetConfig `
+ -Name "<mySubnet>" `
+      -AddressPrefix "192.168.1.0/24"
+ ```
-3. Create a virtual network with [New-AzVirtualNetwork](/powershell/module/az.Network/New-azVirtualNetwork), with the *mySubnet* subnet.
+1. Use [New-AzVirtualNetwork](/powershell/module/az.Network/New-azVirtualNetwork) to create a virtual network with the subnet.
- ```azurepowershell
- $vnet = New-AzVirtualNetwork -ResourceGroupName "myResourceGroup" `
- -Location "centralus" `
- -Name "myVnet" `
- -AddressPrefix "192.168.0.0/16" `
- -Subnet $Subnet
- ```
+ ```azurepowershell
+ $vnet = New-AzVirtualNetwork -ResourceGroupName "<myResourceGroup>" `
+ -Location "<myAzureRegion>" `
+ -Name "<myVnet>" `
+      -AddressPrefix "192.168.0.0/16" `
+ -Subnet $Subnet
+ ```
### Create a network security group
-1. Create a network security group rule with [New-AzNetworkSecurityRuleConfig](/powershell/module/az.Network/New-azNetworkSecurityRuleConfig).
-
- ```azurepowershell
- $rdp = New-AzNetworkSecurityRuleConfig `
- -Name 'Allow-RDP-All' `
- -Description 'Allow RDP' `
- -Access Allow `
- -Protocol Tcp `
- -Direction Inbound `
- -Priority 100 `
- -SourceAddressPrefix * `
- -SourcePortRange * `
- -DestinationAddressPrefix * `
- -DestinationPortRange 3389
- ```
-
-2. Create a network security group with [New-AzNetworkSecurityGroup](/powershell/module/az.Network/New-azNetworkSecurityGroup) and assign the *Allow-RDP-All* security rule to it. Aside from the *Allow-RDP-All* rule, the network security group contains several default rules. One default rule disables all inbound access from the internet. Once it's created, the *Allow-RDP-All* rule is assigned to the network security group so that you can remotely connect to the VM.
-
- ```azurepowershell
- $nsg = New-AzNetworkSecurityGroup `
- -ResourceGroupName myResourceGroup `
- -Location centralus `
- -Name "myNsg" `
- -SecurityRules $rdp
- ```
-
-3. Associate the network security group to the *mySubnet* subnet with [Set-AzVirtualNetworkSubnetConfig](/powershell/module/az.Network/Set-azVirtualNetworkSubnetConfig). The rule in the network security group is effective for all resources deployed in the subnet.
-
- ```azurepowershell
- Set-AzVirtualNetworkSubnetConfig `
- -VirtualNetwork $vnet `
- -Name 'mySubnet' `
- -AddressPrefix "192.168.1.0/24" `
- -NetworkSecurityGroup $nsg
- ```
+1. A network security group (NSG) contains several default rules, one of which disables all inbound access from the internet. Use [New-AzNetworkSecurityRuleConfig](/powershell/module/az.Network/New-azNetworkSecurityRuleConfig) to create a new rule so that you can remotely connect to the VM via Remote Desktop Protocol (RDP).
+
+ ```azurepowershell
+ $rdp = New-AzNetworkSecurityRuleConfig `
+ -Name "Allow-RDP-All" `
+ -Description "Allow RDP" `
+ -Access Allow `
+ -Protocol Tcp `
+ -Direction Inbound `
+ -Priority 100 `
+ -SourceAddressPrefix * `
+ -SourcePortRange * `
+ -DestinationAddressPrefix * `
+ -DestinationPortRange 3389
+ ```
+
+1. Use [New-AzNetworkSecurityGroup](/powershell/module/az.Network/New-azNetworkSecurityGroup) to create the NSG and assign the `Allow-RDP-All` rule to the NSG.
+
+ ```azurepowershell
+ $nsg = New-AzNetworkSecurityGroup `
+ -ResourceGroupName "<myResourceGroup>" `
+ -Location "<myAzureRegion>" `
+ -Name "<myNsg>" `
+ -SecurityRules $rdp
+ ```
+
+1. Use [Set-AzVirtualNetworkSubnetConfig](/powershell/module/az.Network/Set-azVirtualNetworkSubnetConfig) to associate the NSG to the subnet. The NSG rules are effective for all resources deployed in the subnet.
+
+ ```azurepowershell
+ Set-AzVirtualNetworkSubnetConfig `
+ -VirtualNetwork $vnet `
+ -Name "<mySubnet>" `
+        -AddressPrefix "192.168.1.0/24" `
+ -NetworkSecurityGroup $nsg
+ ```
### Create a network interface with accelerated networking
-1. Create a public IP address with [New-AzPublicIpAddress](/powershell/module/az.Network/New-azPublicIpAddress). A public IP address is unnecessary if you don't plan to access the VM from the internet. However, it's required to complete the steps in this article.
+1. Use [New-AzPublicIpAddress](/powershell/module/az.Network/New-azPublicIpAddress) to create a public IP address. The VM doesn't need a public IP address if you don't access it from the internet, but you need the public IP to complete the steps for this article.
- ```azurepowershell
- $publicIp = New-AzPublicIpAddress `
- -ResourceGroupName myResourceGroup `
- -Name 'myPublicIp' `
- -location centralus `
- -AllocationMethod Dynamic
- ```
+ ```azurepowershell
+ $publicIp = New-AzPublicIpAddress `
+ -ResourceGroupName "<myResourceGroup>" `
+ -Name "<myPublicIp>" `
+ -Location "<myAzureRegion>" `
+ -AllocationMethod Dynamic
+ ```
-2. Create a network interface with [New-AzNetworkInterface](/powershell/module/az.Network/New-azNetworkInterface) with accelerated networking enabled, and assign the public IP address to the network interface. The following example creates a network interface named *myNic* in the *mySubnet* subnet of the *myVnet* virtual network, assigning the *myPublicIp* public IP address to it:
+1. Use [New-AzNetworkInterface](/powershell/module/az.Network/New-azNetworkInterface) to create a network interface (NIC) with Accelerated Networking enabled, and assign the public IP address to the NIC.
- ```azurepowershell
- $nic = New-AzNetworkInterface `
- -ResourceGroupName "myResourceGroup" `
- -Name "myNic" `
- -Location "centralus" `
- -SubnetId $vnet.Subnets[0].Id `
- -PublicIpAddressId $publicIp.Id `
- -EnableAcceleratedNetworking
- ```
+ ```azurepowershell
+ $nic = New-AzNetworkInterface `
+ -ResourceGroupName "<myResourceGroup>" `
+ -Name "<myNic>" `
+ -Location "<myAzureRegion>" `
+ -SubnetId $vnet.Subnets[0].Id `
+ -PublicIpAddressId $publicIp.Id `
+ -EnableAcceleratedNetworking
+ ```
### Create a VM and attach the network interface
-1. Set your VM credentials to the `$cred` variable using [Get-Credential](/powershell/module/microsoft.powershell.security/get-credential), which prompts you to sign in:
+1. Use [Get-Credential](/powershell/module/microsoft.powershell.security/get-credential) to set a user name and password for the VM and store them in the `$cred` variable.
- ```azurepowershell
- $cred = Get-Credential
- ```
+ ```azurepowershell
+ $cred = Get-Credential
+ ```
-2. Define your VM with [New-AzVMConfig](/powershell/module/az.compute/new-azvmconfig). The following command defines a VM named *myVM* with a VM size that supports accelerated networking (*Standard_DS4_v2*):
+1. Use [New-AzVMConfig](/powershell/module/az.compute/new-azvmconfig) to define a VM with a VM size that supports accelerated networking, as listed in [Windows Accelerated Networking](https://azure.microsoft.com/updates/accelerated-networking-in-expanded-preview). For a list of all Windows VM sizes and characteristics, see [Windows VM sizes](/azure/virtual-machines/sizes).
- ```azurepowershell
- $vmConfig = New-AzVMConfig -VMName "myVm" -VMSize "Standard_DS4_v2"
- ```
+ ```azurepowershell
+ $vmConfig = New-AzVMConfig -VMName "<myVm>" -VMSize "Standard_DS4_v2"
+ ```
- For a list of all VM sizes and characteristics, see [Windows VM sizes](../virtual-machines/sizes.md?toc=%2fazure%2fvirtual-network%2ftoc.json).
-
-3. Create the rest of your VM configuration with [Set-AzVMOperatingSystem](/powershell/module/az.compute/set-azvmoperatingsystem) and [Set-AzVMSourceImage](/powershell/module/az.compute/set-azvmsourceimage). The following command creates a Windows Server 2016 VM:
-
- ```azurepowershell
- $vmConfig = Set-AzVMOperatingSystem -VM $vmConfig `
- -Windows `
- -ComputerName "myVM" `
- -Credential $cred `
- -ProvisionVMAgent `
- -EnableAutoUpdate
- $vmConfig = Set-AzVMSourceImage -VM $vmConfig `
- -PublisherName "MicrosoftWindowsServer" `
- -Offer "WindowsServer" `
- -Skus "2016-Datacenter" `
- -Version "latest"
- ```
+1. Use [Set-AzVMOperatingSystem](/powershell/module/az.compute/set-azvmoperatingsystem) and [Set-AzVMSourceImage](/powershell/module/az.compute/set-azvmsourceimage) to create the rest of the VM configuration. The following example creates a Windows Server 2019 Datacenter VM:
-4. Attach the network interface that you previously created with [Add-AzVMNetworkInterface](/powershell/module/az.compute/add-azvmnetworkinterface):
+ ```azurepowershell
+ $vmConfig = Set-AzVMOperatingSystem -VM $vmConfig `
+ -Windows `
+ -ComputerName "<myVM>" `
+ -Credential $cred `
+ -ProvisionVMAgent `
+ -EnableAutoUpdate
+ $vmConfig = Set-AzVMSourceImage -VM $vmConfig `
+ -PublisherName "MicrosoftWindowsServer" `
+ -Offer "WindowsServer" `
+ -Skus "2019-Datacenter" `
+ -Version "latest"
+ ```
- ```azurepowershell
- $vmConfig = Add-AzVMNetworkInterface -VM $vmConfig -Id $nic.Id
- ```
+1. Use [Add-AzVMNetworkInterface](/powershell/module/az.compute/add-azvmnetworkinterface) to attach the NIC that you previously created to the VM.
-5. Create your VM with [New-AzVM](/powershell/module/az.compute/new-azvm).
+ ```azurepowershell
+ $vmConfig = Add-AzVMNetworkInterface -VM $vmConfig -Id $nic.Id
+ ```
- ```azurepowershell
- New-AzVM -VM $vmConfig -ResourceGroupName "myResourceGroup" -Location "centralus"
- ```
+1. Use [New-AzVM](/powershell/module/az.compute/new-azvm) to create the VM with Accelerated Networking enabled.
-### Confirm the Ethernet controller is installed in the Windows VM
+ ```azurepowershell
+ New-AzVM -VM $vmConfig -ResourceGroupName "<myResourceGroup>" -Location "<myAzureRegion>"
+ ```
+
+## Confirm the Ethernet controller is installed
Once you create the VM in Azure, connect to the VM and confirm that the Ethernet controller is installed in Windows.
-1. Go to the [Azure portal](https://portal.azure.com) to manage your VMs. Search for and select **Virtual machines**.
+1. In the [Azure portal](https://portal.azure.com), search for and select *virtual machines*.
+
+1. On the **Virtual machines** page, select your new VM.
-2. In the virtual machine list, choose your new VM.
+1. On the VM's **Overview** page, select **Connect**.
-3. In the VM overview page, if the **Status** of the VM is listed as **Creating**, wait until Azure finishes creating the VM. The **Status** will be changed to **Running** after VM creation is complete.
+1. On the **Connect** screen, select **Native RDP**.
-4. From the VM overview toolbar, select **Connect** > **RDP** > **Download RDP File**.
+1. On the **Native RDP** screen, select **Download RDP file**.
-5. Open the .rdp file, and then sign in to the VM with the credentials you entered in the [Create a VM and attach the network interface](#create-a-vm-and-attach-the-network-interface) section. If you've never connected to a Windows VM in Azure, see [Connect to virtual machine](../virtual-machines/windows/quick-create-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json#connect-to-virtual-machine).
+1. Open the downloaded RDP file, and then sign in with the credentials you entered when you created the VM.
-6. After the remote desktop session for your VM appears, right-click the Windows Start button and choose **Device Manager**.
+1. On the remote VM, right-click **Start** and select **Device Manager**.
-7. In the **Device Manager** window, expand the **Network adapters** node.
+1. In the **Device Manager** window, expand the **Network adapters** node.
-8. Confirm that the **Mellanox ConnectX-3 Virtual Function Ethernet Adapter** appears, as shown in the following image:
+1. Confirm that the **Mellanox ConnectX-4 Lx Virtual Ethernet Adapter** appears, as shown in the following image:
- ![Mellanox ConnectX-3 Virtual Function Ethernet Adapter, new network adapter for accelerated networking, Device Manager](./media/create-vm-accelerated-networking/device-manager.png)
+    ![Screenshot of Device Manager showing the Mellanox ConnectX-4 Lx Virtual Ethernet Adapter under Network adapters.](./media/create-vm-accelerated-networking/device-manager.png)
-Accelerated networking is now enabled for your VM.
+ The presence of the adapter confirms that Accelerated Networking is enabled for your VM.
> [!NOTE]
-> If the Mellanox adapter fails to start, open an administrator prompt in the remote desktop session and enter the following command:
+> If the Mellanox adapter fails to start, open an administrator command prompt on the remote VM and enter the following command:
> > `netsh int tcp set global rss = enabled`
-## Enable accelerated networking on existing VMs
+<a name="enable-accelerated-networking-on-existing-vms"></a>
+## Manage Accelerated Networking on existing VMs
-If you've created a VM without accelerated networking, you may enable this feature on an existing VM. The VM must support accelerated networking by meeting the following prerequisites, which are also outlined above:
+You can enable Accelerated Networking on an existing VM. The VM must meet the following requirements to support Accelerated Networking:
-* The VM must be a supported size for accelerated networking.
-* The VM must be a supported Azure Gallery image (and kernel version for Linux).
-* All VMs in an availability set or a virtual machine scale set must be stopped or deallocated before you enable accelerated networking on any NIC.
+- Be a supported size for Accelerated Networking.
+- Be a supported Azure Marketplace image.
+- Be stopped or deallocated before you enable Accelerated Networking on any NIC. This requirement applies to an individual VM and to all VMs in an availability set or Azure Virtual Machine Scale Set.
-### Individual VMs and VMs in an availability set
+### Enable Accelerated Networking on individual VMs or VMs in availability sets
1. Stop or deallocate the VM or, if an availability set, all the VMs in the set:
- ```azurepowershell
- Stop-AzVM -ResourceGroup "myResourceGroup" -Name "myVM"
- ```
+ ```azurepowershell
+ Stop-AzVM -ResourceGroup "<myResourceGroup>" -Name "<myVM>"
+ ```
- > [!NOTE]
- > When you create a VM individually, without an availability set, you only need to stop or deallocate the individual VM to enable accelerated networking. If your VM was created with an availability set, you must stop or deallocate all VMs contained in the availability set before enabling accelerated networking on any of the NICs, so that the VMs end up on a cluster that supports accelerated networking. The stop or deallocate requirement is unnecessary if you disable accelerated networking, because clusters that support accelerated networking also work fine with NICs that don't use accelerated networking.
+ If you created your VM individually without an availability set, you must stop or deallocate only the individual VM to enable Accelerated Networking. If you created your VM with an availability set, you must stop or deallocate all VMs in the set, so the VMs end up on a cluster that supports Accelerated Networking.
-2. Enable accelerated networking on the NIC of your VM:
+   The stop or deallocate requirement doesn't apply when you disable Accelerated Networking. Clusters that support Accelerated Networking also work fine with NICs that don't use Accelerated Networking.
- ```azurepowershell
- $nic = Get-AzNetworkInterface -ResourceGroupName "myResourceGroup" `
- -Name "myNic"
-
- $nic.EnableAcceleratedNetworking = $true
-
- $nic | Set-AzNetworkInterface
- ```
+1. Enable Accelerated Networking on the NIC of your VM:
-3. Restart your VM or, if in an availability set, all the VMs in the set, and confirm that accelerated networking is enabled:
+ ```azurepowershell
+ $nic = Get-AzNetworkInterface -ResourceGroupName "<myResourceGroup>" -Name "<myNic>"
+
+ $nic.EnableAcceleratedNetworking = $true
+
+ $nic | Set-AzNetworkInterface
+ ```
- ```azurepowershell
- Start-AzVM -ResourceGroup "myResourceGroup" `
- -Name "myVM"
- ```
+1. Restart your VM, or all the VMs in the availability set, and [confirm that Accelerated Networking is enabled](#confirm-the-ethernet-controller-is-installed).
+
+ ```azurepowershell
+ Start-AzVM -ResourceGroup "<myResourceGroup>" -Name "<myVM>"
+ ```
-### Virtual machine scale set
+### Enable Accelerated Networking on Virtual Machine Scale Sets
-A virtual machine scale set is slightly different, but it follows the same workflow.
+Azure Virtual Machine Scale Sets is slightly different but follows the same workflow.
1. Stop the VMs:
- ```azurepowershell
- Stop-AzVmss -ResourceGroupName "myResourceGroup" `
- -VMScaleSetName "myScaleSet"
+ ```azurepowershell
+ Stop-AzVmss -ResourceGroupName "<myResourceGroup>" -VMScaleSetName "<myScaleSet>"
```
-2. Update the accelerated networking property under the network interface:
-
- ```azurepowershell
- $vmss = Get-AzVmss -ResourceGroupName "myResourceGroup" `
- -VMScaleSetName "myScaleSet"
-
- $vmss.VirtualMachineProfile.NetworkProfile.NetworkInterfaceConfigurations[0].EnableAcceleratedNetworking = $true
-
- Update-AzVmss -ResourceGroupName "myResourceGroup" `
- -VMScaleSetName "myScaleSet" `
- -VirtualMachineScaleSet $vmss
- ```
+1. Update the Accelerated Networking property under the NIC:
-3. Set the applied updates to automatic so that the changes are immediately picked up:
+ ```azurepowershell
+ $vmss = Get-AzVmss -ResourceGroupName "<myResourceGroup>" -VMScaleSetName "<myScaleSet>"
+
+ $vmss.VirtualMachineProfile.NetworkProfile.NetworkInterfaceConfigurations[0].EnableAcceleratedNetworking = $true
+
+    Update-AzVmss `
+ -ResourceGroupName "<myResourceGroup>" `
+ -VMScaleSetName "<myScaleSet>" `
+ -VirtualMachineScaleSet $vmss
+ ```
- ```azurepowershell
- $vmss.UpgradePolicy.Mode = "Automatic"
-
- Update-AzVmss -ResourceGroupName "myResourceGroup" `
- -VMScaleSetName "myScaleSet" `
- -VirtualMachineScaleSet $vmss
- ```
+1. Virtual Machine Scale Sets has an upgrade policy that applies updates by using automatic, rolling, or manual settings. Set the upgrade policy to automatic so that the changes are immediately picked up.
- > [!NOTE]
- > A scale set has VM upgrades that apply updates using three different settings: automatic, rolling, and manual. In these instructions, the policy is set to automatic, so the scale set picks up the changes immediately after it restarts.
+ ```azurepowershell
+ $vmss.UpgradePolicy.Mode = "Automatic"
+
+    Update-AzVmss `
+ -ResourceGroupName "<myResourceGroup>" `
+ -VMScaleSetName "<myScaleSet>" `
+ -VirtualMachineScaleSet $vmss
+ ```
-4. Restart the scale set:
+1. Restart the scale set:
- ```azurepowershell
- Start-AzVmss -ResourceGroupName "myResourceGroup" `
- -VMScaleSetName "myScaleSet"
- ```
+ ```azurepowershell
+ Start-AzVmss -ResourceGroupName "<myResourceGroup>" -VMScaleSetName "<myScaleSet>"
+ ```
+
+Once you restart and the upgrades finish, the virtual function (VF) appears inside VMs that use a supported OS and VM size.
-Once you restart, wait for the upgrades to finish. After the upgrades are done, the virtual function (VF) appears inside the VM. Make sure you're using a supported OS and VM size.
+### Resize existing VMs with Accelerated Networking
-### Resizing existing VMs with accelerated networking
+VMs with Accelerated Networking enabled can be resized only to sizes that also support Accelerated Networking. You can't use the resize operation to move a VM with Accelerated Networking to a VM instance that doesn't support Accelerated Networking. Instead, use the following process, sketched in the PowerShell example after these steps, to resize these VMs:
-If a VM has accelerated networking enabled, you're only able to resize it to a VM that supports accelerated networking.
+1. Stop and deallocate the VM or all the VMs in the availability set or Virtual Machine Scale Sets.
+1. Disable Accelerated Networking on the NIC of the VM or all the VMs in the availability set or Virtual Machine Scale Sets.
+1. Move the VM or VMs to a new size that doesn't support Accelerated Networking, and restart them.
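+
+The following Azure PowerShell sketch shows this sequence for a single VM; the resource group, VM, NIC, and target size are placeholder values:
+
+```azurepowershell
+# 1. Stop and deallocate the VM.
+Stop-AzVM -ResourceGroupName "<myResourceGroup>" -Name "<myVm>"
+
+# 2. Disable Accelerated Networking on the VM's NIC.
+$nic = Get-AzNetworkInterface -ResourceGroupName "<myResourceGroup>" -Name "<myNic>"
+$nic.EnableAcceleratedNetworking = $false
+$nic | Set-AzNetworkInterface
+
+# 3. Resize to the new size, then restart the VM.
+$vm = Get-AzVM -ResourceGroupName "<myResourceGroup>" -Name "<myVm>"
+$vm.HardwareProfile.VmSize = "<newVmSize>"
+Update-AzVM -VM $vm -ResourceGroupName "<myResourceGroup>"
+Start-AzVM -ResourceGroupName "<myResourceGroup>" -Name "<myVm>"
+```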
-A VM with accelerated networking enabled can't be resized to a VM instance that doesn't support accelerated networking using the resize operation. Instead, to resize one of these VMs:
+### Manage Accelerated Networking through the portal
-1. Stop or deallocate the VM. For an availability set or scale set, stop or deallocate all the VMs in the availability set or scale set.
+When you [create a VM in the Azure portal](quick-create-portal.md), you can select the **Enable accelerated networking** checkbox on the **Networking** tab of the **Create a virtual machine** screen. If the VM uses a [supported operating system](./accelerated-networking-overview.md#supported-operating-systems) and [VM size](./accelerated-networking-overview.md#supported-vm-instances) for Accelerated Networking, the checkbox is automatically selected. If Accelerated Networking isn't supported, the checkbox isn't selected, and a message explains the reason.
-2. Disable accelerated networking on the NIC of the VM. For an availability set or scale set, disable accelerated networking on the NICs of all VMs in the availability set or scale set.
+>[!NOTE]
+>You can enable Accelerated Networking during portal VM creation only for Azure Marketplace supported operating systems. To create and enable Accelerated Networking for a VM with a custom OS image, you must use PowerShell or Azure CLI.
-3. After you disable accelerated networking, move the VM, availability set, or scale set to a new size that doesn't support accelerated networking, and then restart them.
+To enable or disable Accelerated Networking for an existing VM through the Azure portal:
+
+1. From the [Azure portal](https://portal.azure.com) page for the VM, select **Networking** from the left menu.
+1. On the **Networking** page, select the **Network Interface**.
+1. At the top of the NIC **Overview** page, select **Edit accelerated networking**.
+1. Select **Automatic**, **Enabled**, or **Disabled**, and then select **Save**.
+
+To confirm whether Accelerated Networking is enabled for an existing VM (a PowerShell alternative follows these steps):
+
+1. From the [Azure portal](https://portal.azure.com) page for the VM, select **Networking** from the left menu.
+1. On the **Networking** page, select the **Network Interface**.
+1. On the NIC **Overview** page, under **Essentials**, note whether **Accelerated networking** is set to **Enabled** or **Disabled**.
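+
+Alternatively, a minimal PowerShell check (placeholder names):
+
+```azurepowershell
+# Returns True when Accelerated Networking is enabled on the NIC.
+(Get-AzNetworkInterface -ResourceGroupName "<myResourceGroup>" -Name "<myNic>").EnableAcceleratedNetworking
+```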
## Next steps
-* Learn [how Accelerated Networking works](./accelerated-networking-how-it-works.md)
-* Learn how to [create a VM with Accerelated Networking using Azure CLI](./create-vm-accelerated-networking-cli.md)
-* Improve latency with an [Azure proximity placement group](../virtual-machines/co-location.md)
+
+- [How Accelerated Networking works in Linux and FreeBSD VMs](accelerated-networking-how-it-works.md)
+- [Create a VM with Accelerated Networking by using Azure CLI](create-vm-accelerated-networking-cli.md)
+- [Proximity placement groups](../virtual-machines/co-location.md)
virtual-network Deploy Container Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/deploy-container-networking.md
Title: Deploy Azure virtual network container networking description: Learn how to deploy the Azure Virtual Network container network interface (CNI) plug-in for Kubernetes clusters.- -
-tags: azure-resource-manager
- Previously updated : 9/18/2018 Last updated : 03/24/2023
The ACS-Engine deploys a Kubernetes cluster with an Azure Resource Manager templ
| Setting | Description |
|--|--|
- | firstConsecutiveStaticIP | The IP address that is allocated to the Master node. This is a mandatory setting. |
+ | firstConsecutiveStaticIP | The IP address that is allocated to the main node. This setting is mandatory. |
| clusterSubnet under kubernetesConfig | CIDR of the virtual network subnet where the cluster is deployed, and from which IP addresses are allocated to Pods |
| vnetSubnetId under masterProfile | Specifies the Azure Resource Manager resource ID of the subnet where the cluster is to be deployed |
| vnetCidr | CIDR of the virtual network where the cluster is deployed |
The ACS-Engine deploys a Kubernetes cluster with an Azure Resource Manager templ
### Example configuration

The JSON example that follows is for a cluster with the following properties:
-- 1 Master node and 2 Agent nodes
-- Is deployed in a subnet named *KubeClusterSubnet* (10.0.0.0/20), with both Master and Agent nodes residing in it.
+- One main node and two agent nodes
+
+- Deployed in a subnet named *KubeClusterSubnet* (10.0.0.0/20), with both main and agent nodes residing in it.
```json {
The json example that follows is for a cluster with the following properties:
Complete the following steps to install the plug-in on every Azure virtual machine in a Kubernetes cluster: 1. [Download and install the plug-in](#download-and-install-the-plug-in).
-2. Pre-allocate a virtual network IP address pool on every virtual machine from which IP addresses will be assigned to Pods. Every Azure virtual machine comes with a primary virtual network private IP address on each network interface. The pool of IP addresses for Pods is added as secondary addresses (*ipconfigs*) on the virtual machine network interface, using one of the following options:
+
+2. Preallocate a virtual network IP address pool on every virtual machine from which IP addresses are assigned to Pods. Every Azure virtual machine comes with a primary virtual network private IP address on each network interface. The pool of IP addresses for Pods is added as secondary addresses (*ipconfigs*) on the virtual machine network interface, using one of the following options:
- **CLI**: [Assign multiple IP addresses using the Azure CLI](./ip-services/virtual-network-multiple-ip-addresses-cli.md)
- **PowerShell**: [Assign multiple IP addresses using PowerShell](./ip-services/virtual-network-multiple-ip-addresses-powershell.md)
- **Portal**: [Assign multiple IP addresses using the Azure portal](./ip-services/virtual-network-multiple-ip-addresses-portal.md)
- **Azure Resource Manager template**: [Assign multiple IP addresses using templates](./template-samples.md)

   Ensure that you add enough IP addresses for all of the Pods that you expect to bring up on the virtual machine. A CLI sketch of adding an address follows this list.
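For example, a sketch of adding one secondary ipconfig with the Azure CLI (resource names are placeholders; repeat for as many Pod addresses as you need):

```bash
# Sketch: preallocate one extra private IPv4 address (ipconfig) on a NIC.
az network nic ip-config create \
  --resource-group myResourceGroup \
  --nic-name myVMNic \
  --name ipconfig2 \
  --private-ip-address-version IPv4
```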
-3. Select the plug-in for providing networking for your cluster by passing Kubelet the `--network-plugin=cni` command-line option during cluster creation. Kubernetes, by default, looks for the plug-in and the configuration file in the directories where they are already installed.
+3. Select the plug-in for providing networking for your cluster by passing Kubelet the `--network-plugin=cni` command-line option during cluster creation. Kubernetes, by default, looks for the plug-in and the configuration file in the directories where they're already installed.
+ 4. If you want your Pods to access the internet, add the following *iptables* rule on your Linux virtual machines to source-NAT internet traffic. In the following example, the specified IP range is 10.0.0.0/8. ```bash
Complete the following steps to install the plug-in on every Azure virtual machi
iptables -t nat -A POSTROUTING -m addrtype ! --dst-type local ! -d 10.0.0.0/8 -j MASQUERADE
   ```
- The rules NAT traffic that is not destined to the specified IP ranges. The assumption is that all traffic outside the previous ranges is internet traffic. You can choose to specify the IP ranges of the virtual machine's virtual network, that of peered virtual networks, and on-premises networks.
+ The rules NAT traffic that isn't destined to the specified IP ranges. The assumption is that all traffic outside the previous ranges is internet traffic. You can choose to specify the IP ranges of the virtual machine's virtual network, that of peered virtual networks, and on-premises networks.
- Windows virtual machines automatically source NAT traffic that has a destination outside the subnet to which the virtual machine belongs. It is not possible to specify custom IP ranges.
+ Windows virtual machines automatically source NAT traffic that has a destination outside the subnet to which the virtual machine belongs. It isn't possible to specify custom IP ranges.
-After completing the previous steps, Pods brought up on the Kubernetes Agent virtual machines are automatically assigned private IP addresses from the virtual network.
+After you complete the previous steps, Pods brought up on the Kubernetes Agent virtual machines are automatically assigned private IP addresses from the virtual network.
## Deploy plug-in for Docker containers

1. [Download and install the plug-in](#download-and-install-the-plug-in).
2. Create Docker containers with the following command:

   ```
   ./docker-run.sh \<container-name\> \<container-namespace\> \<image\>
   ```
-The containers automatically start receiving IP addresses from the allocated pool. If you want to load balance traffic to the Docker containers, they must be placed behind a software load balancer, and you must configure a load balancer probe, the same way you create a policy and probes for a virtual machine.
+The containers automatically start receiving IP addresses from the allocated pool. If you want to load balance traffic to the Docker containers, they must be placed behind a software load balancer with a load balancer probe.
### CNI network configuration file
The CNI network configuration file is described in JSON format. It is, by defaul
#### Settings explanation

-- **cniVersion**: The Azure Virtual Network CNI plug-ins support versions 0.3.0 and 0.3.1 of the [CNI spec](https://github.com/containernetworking/cni/blob/master/SPEC.md).
-- **name**: Name of the network. This property can be set to any unique value.
-- **type**: Name of the network plug-in. Set to *azure-vnet*.
-- **mode**: Operational mode. This field is optional. The only mode supported is "bridge". For more information, see [operational modes](https://github.com/Azure/azure-container-networking/blob/master/docs/network.md).
-- **bridge**: Name of the bridge that will be used to connect containers to a virtual network. This field is optional. If omitted, the plugin automatically picks a unique name, based on the master interface index.
-- **ipam type**: Name of the IPAM plug-in. Always set to *azure-vnet-ipam*.
+- **"cniVersion"**: The Azure Virtual Network CNI plug-ins support versions 0.3.0 and 0.3.1 of the [CNI spec](https://github.com/containernetworking/cni/blob/master/SPEC.md).
+
+- **"name"**: Name of the network. This property can be set to any unique value.
+
+- **"type"**: Name of the network plug-in. Set to **azure-vnet**.
+
+- **"mode"**: Operational mode. This field is optional. The only mode supported is "bridge". For more information, see [operational modes](https://github.com/Azure/azure-container-networking/blob/master/docs/network.md).
+
+- **"bridge"**: Name of the bridge that is used to connect containers to a virtual network. This field is optional. If omitted, the plugin automatically picks a unique name, based on the main interface index.
+
+- **"ipam"** - **"type"**: Name of the IPAM plug-in. Always set to **azure-vnet-ipam**.
## Download and install the plug-in

Download the latest version of the plug-in from [GitHub](https://github.com/Azure/azure-container-networking/releases) for the platform that you're using:

- **Linux**: [azure-vnet-cni-linux-amd64-\<version no.\>.tgz](https://github.com/Azure/azure-container-networking/releases/download/v1.4.20/azure-vnet-cni-linux-amd64-v1.4.20.tgz)
- **Windows**: [azure-vnet-cni-windows-amd64-\<version no.\>.zip](https://github.com/Azure/azure-container-networking/releases/download/v1.4.20/azure-vnet-cni-windows-amd64-v1.4.20.zip)

Copy the install script for [Linux](https://github.com/Azure/azure-container-networking/blob/master/scripts/install-cni-plugin.sh) or [Windows](https://github.com/Azure/azure-container-networking/blob/master/scripts/Install-CniPlugin.ps1) to your computer. Save the script to a `scripts` directory on your computer and name the file `install-cni-plugin.sh` for Linux, or `install-cni-plugin.ps1` for Windows.
-To install the plug-in, run the appropriate script for your platform, specifying the version of the plug-in you are using. For example, you might specify *v1.4.20*. For the Linux install, you'll also need to provide an appropriate [CNI plugin version](https://github.com/containernetworking/plugins/releases), such as *v1.0.1*:
+To install the plug-in, run the appropriate script for your platform, specifying the version of the plug-in you're using. For example, you might specify *v1.4.20*. For the Linux install, provide an appropriate [CNI plugin version](https://github.com/containernetworking/plugins/releases), such as *v1.0.1*:
```bash
scripts/install-cni-plugin.sh [azure-cni-plugin-version] [cni-plugin-version]
```
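For example, a concrete invocation on Linux using the versions mentioned above would look like this:

```bash
scripts/install-cni-plugin.sh v1.4.20 v1.0.1
```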
virtual-network Create Public Ip Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-public-ip-portal.md
Title: 'Quickstart: Create a public IP address - Azure portal'
-description: In this quickstart, learn how to create a standard or basic SKU public IP address. You'll also learn about routing preference and tier.
+description: In this quickstart, you learn how to create a public IP address for a Standard SKU and a Basic SKU. You also learn about routing preferences and tiers.
Previously updated : 07/13/2022 Last updated : 03/24/2023 # Quickstart: Create a public IP address using the Azure portal
-In this quickstart, you'll learn how to create an Azure public IP address. Public IP addresses in Azure are used for public connections to Azure resources. Public IP addresses are available in two SKUs: basic, and standard. Two tiers of public IP addresses are available: regional, and global. The routing preference of a public IP address is set when created. Internet routing and Microsoft Network routing are the available choices.
+In this quickstart, you learn how to create Azure public IP addresses, which you use for public connections to Azure resources. Public IP addresses are available in two SKUs: Basic and Standard. Two tiers of public IP addresses are available: regional and global. You can also set the routing preference of a public IP address when you create it: Microsoft network or Internet.
## Prerequisites -- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- An Azure account with an active subscription. You can [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
## Sign in to Azure
Sign in to the [Azure portal](https://portal.azure.com).
# [**Standard SKU**](#tab/option-1-create-public-ip-standard)
->[!NOTE]
->Standard SKU public IP is recommended for production workloads. For more information about SKUs, see **[Public IP addresses](public-ip-addresses.md)**.
-
-## Create a standard SKU public IP address
+A public IP address with a Standard SKU is recommended for production workloads. For more information about SKUs, see [Public IP addresses](public-ip-addresses.md#sku).
-Use the following steps to create a standard public IPv4 address named **myStandardPublicIP**.
-
-> [!NOTE]
->To create an IPv6 address, choose **IPv6** for the **IP Version** parameter. If your deployment requires a dual stack configuration (IPv4 and IPv6 address), choose **Both**.
+## Create a Standard SKU public IP address
-1. In the search box at the top of the portal, enter **Public IP**.
+Follow these steps to create a public IPv4 address with a Standard SKU named myStandardPublicIP. To create an IPv6 address instead, choose **IPv6** for the **IP Version**:
-2. In the search results, select **Public IP addresses**.
+1. In the portal, search for and select **Public IP addresses**.
-3. Select **+ Create**.
+1. On the **Public IP addresses** page, select **Create**.
-4. In **Create public IP address**, enter, or select the following information:
+1. On the **Basics** tab of the **Create public IP address** screen, enter or select the following values:
- | Setting | Value |
- | | |
- | IP Version | Select IPv4 |
- | SKU | Select **Standard** |
- | Tier | Select **Regional** |
- | Name | Enter **myStandardPublicIP** |
- | IP address assignment | Locked as **Static** |
- | Routing Preference | Select **Microsoft network**. |
- | Idle Timeout (minutes) | Leave the default of **4**. |
- | DNS name label | Leave the value blank. |
- | Subscription | Select your subscription |
- | Resource group | Select **Create new**, enter **QuickStartCreateIP-rg**. </br> Select **OK**. |
- | Location | Select **(US) East US 2** |
- | Availability Zone | Select **No Zone** |
+ - **Subscription**: Keep the default or select a different subscription.
+ - **Resource group**: Select **Create new**, and then name the group *TestRG*.
+ - **Region**: Select **(US) East US 2**.
+ - **Name**: Enter *myStandardPublicIP*.
+ - **IP Version**: Select **IPv4**.
+ - **SKU**: Select **Standard**.
+ - **Availability zone**: Select **No Zone**.
+ - **Tier**: Select **Regional**.
+   - **IP address assignment**: The only option is **Static**.
+ - **Routing preference**: Select **Microsoft network**.
+ - **Idle timeout (minutes)**: Keep the default of **4**.
+ - **DNS name label**: Leave the value blank.
-5. Select **Create**.
+ :::image type="content" source="./media/create-public-ip-portal/create-standard-ip.png" alt-text="Screenshot that shows the Create public IP address Basics tab settings for a Standard SKU.":::
+1. Select **Review + create**. After validation succeeds, select **Create**.
> [!NOTE]
-> In regions with [Availability Zones](../../availability-zones/az-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json#availability-zones), you have the option to select no-zone (default option), a specific zone, or zone-redundant. The choice will depend on your specific domain failure requirements. In regions without Availability Zones, this field won't appear. </br> For more information on availability zones, see [Availability zones overview](../../availability-zones/az-overview.md).
+> In regions with [availability zones](../../availability-zones/az-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json#availability-zones), you have the option to select **No Zone** (default), a specific zone, or **Zone-redundant**. The choice depends on your specific domain failure requirements. In regions without availability zones, this field doesn't appear.
-You can associate the above created public IP address with a [Windows](../../virtual-machines/windows/overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json) or [Linux](../../virtual-machines/linux/overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json) virtual machine. Use the CLI section on the tutorial page: [Associate a public IP address to a virtual machine](./associate-public-ip-address-vm.md#azure-cli) to associate the public IP to your VM. You can also associate the public IP address created above with an [Azure Load Balancer](../../load-balancer/load-balancer-overview.md), by assigning it to the load balancer **frontend** configuration. The public IP address serves as a load-balanced virtual IP address (VIP).
+You can associate the public IP address you created with a Windows or Linux [virtual machine](../../virtual-machines/overview.md). For more information, see [Associate a public IP address to a virtual machine](./associate-public-ip-address-vm.md#azure-cli). You can also associate a public IP address with an [Azure Load Balancer](../../load-balancer/load-balancer-overview.md) by assigning it to the load balancer front-end configuration. The public IP address serves as a load-balanced virtual IP address (VIP).
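If you prefer the command line, a rough Azure CLI equivalent of the preceding portal steps (a sketch; the resource group and name match the values used above):

```bash
# Sketch: Standard SKU, regional, static public IPv4 address with no zone.
az group create --name TestRG --location eastus2
az network public-ip create \
  --resource-group TestRG \
  --name myStandardPublicIP \
  --sku Standard \
  --tier Regional \
  --version IPv4 \
  --allocation-method Static
```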
# [**Basic SKU**](#tab/option-1-create-public-ip-basic) >[!NOTE]
->Standard SKU public IP is recommended for production workloads. For more information about SKUs, see **[Public IP addresses](public-ip-addresses.md)**.
+>A public IP address with a Standard SKU is recommended for production workloads. For more information about SKUs, see [Public IP addresses](public-ip-addresses.md#sku). Basic SKU public IPs don't support availability zones. If it's acceptable for the IP address to change over time, you can set **IP address assignment** to **Dynamic** instead of **Static**.
-## Create a basic SKU public IP address
-
-In this section, create a basic public IPv4 address named **myBasicPublicIP**.
-
-> [!NOTE]
-> Basic public IPs don't support availability zones.
+## Create a Basic SKU public IP address
-1. In the search box at the top of the portal, enter **Public IP**.
+Follow these steps to create a public IPv4 address with a Basic SKU named myBasicPublicIP (an Azure CLI equivalent follows these steps):
-2. In the search results, select **Public IP addresses**.
+1. In the portal, search for and select **Public IP addresses**.
-3. Select **+ Create**.
+1. On the **Public IP addresses** page, select **Create**.
-4. On the **Create public IP address** page enter, or select the following information:
+1. On the **Basics** tab of the **Create public IP address** screen, enter or select the following values:
- | Setting | Value |
- | | |
- | IP Version | Select **IPv4** |
- | SKU | Select **Basic** |
- | Name | Enter **myBasicPublicIP** |
- | IP address assignment | Select **Static** |
- | Idle Timeout (minutes) | Leave the default of **4**. |
- | DNS name label | Leave the value blank |
- | Subscription | Select your subscription. |
- | Resource group | Select **Create new**, enter **QuickStartCreateIP-rg**. </br> Select **OK**. |
- | Location | Select **(US) East US 2** |
+ - **Subscription**: Keep the default or select a different subscription.
+ - **Resource group**: Select **Create new**, and then name the group *TestRG*.
+ - **Region**: Select **(US) East US 2**.
+ - **Name**: Enter *myBasicPublicIP*.
+ - **IP Version**: Select **IPv4**.
+ - **SKU**: Select **Basic**.
+ - **IP address assignment**: Select **Static**.
+ - **Idle timeout (minutes)**: Keep the default of **4**.
+ - **DNS name label**: Leave the value blank.
-5. Select **Create**.
+ :::image type="content" source="./media/create-public-ip-portal/create-basic-ip.png" alt-text="Screenshot that shows the Create public IP address Basics tab settings for a Basic SKU.":::
+1. Select **Review + create**. After validation succeeds, select **Create**.
-If it's acceptable for the IP address to change over time, **Dynamic** IP assignment can be selected by changing the AllocationMethod to **Dynamic**.
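A rough Azure CLI equivalent of the preceding portal steps (a sketch; values match the steps above):

```bash
# Sketch: Basic SKU, static public IPv4 address.
az network public-ip create \
  --resource-group TestRG \
  --name myBasicPublicIP \
  --sku Basic \
  --version IPv4 \
  --allocation-method Static
```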
+# [**Routing preference**](#tab/option-1-create-public-ip-routing-preference)
-# [**Routing Preference**](#tab/option-1-create-public-ip-routing-preference)
+This section shows you how to configure the [routing preference](routing-preference-overview.md) for an ISP network (**Internet** option) for a public IP address. After you create the public IP address, you can associate it with the following Azure resources:
-This section shows you how to configure [routing preference](routing-preference-overview.md) via ISP network (**Internet** option) for a public IP address. After you create the public IP address, you can associate it with the following Azure resources:
+- Azure Virtual Machines
+- Azure Virtual Machine Scale Sets
+- Azure Kubernetes Service
+- Azure Load Balancer
+- Azure Application Gateway
+- Azure Firewall
-* Virtual machine
-* Virtual machine scale set
-* Azure Kubernetes Service (AKS)
-* Internet-facing load balancer
-* Application Gateway
-* Azure Firewall
-
-By default, the routing preference for public IP address is set to the Microsoft global network for all Azure services and can be associated with any Azure service.
+By default, the routing preference for a public IP address is set to the Microsoft global network for all Azure services and can be associated with any Azure service.
> [!NOTE]
->To create an IPv6 address, choose **IPv6** for the **IP Version** parameter. If your deployment requires a dual stack configuration (IPv4 and IPv6 address), choose **Both**.
-
-## Create a public IP with Internet routing
+> Although you can create a public IP address with either an IPv4 or IPv6 address, the **Internet** option of **Routing preference** supports only IPv4.
-1. In the search box at the top of the portal, enter **Public IP**.
+## Create a public IP with internet routing
-2. In the search results, select **Public IP addresses**.
+Follow these steps to create a public IPv4 address named myStandardPublicIP-RP with a Standard SKU and a routing preference of **Internet**:
-3. Select **+ Create**.
+1. In the portal, search for and select **Public IP addresses**.
-4. In **Create public IP address**, enter, or select the following information:
+1. On the **Public IP addresses** page, select **Create**.
- | Setting | Value |
- | | |
- | IP Version | Select IPv4 |
- | SKU | Select **Standard** |
- | Tier | Select **Regional** |
- | Name | Enter **myStandardPublicIP-RP** |
- | IP address assignment | Locked as **Static** |
- | Routing Preference | Select **Internet**. |
- | Idle Timeout (minutes) | Leave the default of **4**. |
- | DNS name label | Leave the value blank. |
- | Subscription | Select your subscription |
- | Resource group | Select **Create new**, enter **QuickStartCreateIP-rg**. </br> Select **OK**. |
- | Location | Select **(US) East US 2** |
- | Availability Zone | Select **Zone redundant** |
+1. On the **Basics** tab of the **Create public IP address** screen, enter or select the following values:
-5. Select **Create**.
+ - **Subscription**: Keep the default or select a different subscription.
+ - **Resource group**: Select **Create new**, and then name the group *TestRG*.
+ - **Region**: Select **(US) East US 2**.
+ - **Name**: Enter *myStandardPublicIP-RP*.
+ - **IP Version**: Select **IPv4**.
+ - **SKU**: Select **Standard**.
+ - **Availability zone**: Select **Zone-redundant**.
+ - **Tier**: Select **Regional**.
+   - **IP address assignment**: The only option is **Static**.
+ - **Routing preference**: Select **Internet**.
+ - **Idle timeout (minutes)**: Keep the default of **4**.
+ - **DNS name label**: Leave the value blank.
+1. Select **Review + create**. After validation succeeds, select **Create**.
-> [!NOTE]
-> Public IP addresses are created with an IPv4 or IPv6 address. However, routing preference only supports IPV4 currently.
> [!NOTE]
-> In regions with [Availability Zones](../../availability-zones/az-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json#availability-zones), you have the option to select no-zone (default option), a specific zone, or zone-redundant. The choice will depend on your specific domain failure requirements. In regions without Availability Zones, this field won't appear. </br> For more information on availability zones, see [Availability zones overview](../../availability-zones/az-overview.md).
+> In regions with [availability zones](../../availability-zones/az-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json#availability-zones), you have the option to select **No Zone** (default), a specific zone, or **Zone-redundant**. The choice depends on your specific domain failure requirements. In regions without availability zones, this field doesn't appear.
-You can associate the above created public IP address with a [Windows](../../virtual-machines/windows/overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json) or [Linux](../../virtual-machines/linux/overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json) virtual machine. Use the CLI section on the tutorial page: [Associate a public IP address to a virtual machine](./associate-public-ip-address-vm.md#azure-cli) to associate the public IP to your VM. You can also associate the public IP address created above with an [Azure Load Balancer](../../load-balancer/load-balancer-overview.md), by assigning it to the load balancer **frontend** configuration. The public IP address serves as a load-balanced virtual IP address (VIP).
+You can associate the public IP address you created with a Windows or Linux [virtual machine](../../virtual-machines/overview.md). For more information, see [Associate a public IP address to a virtual machine](./associate-public-ip-address-vm.md#azure-cli). You can also associate a public IP address with an [Azure Load Balancer](../../load-balancer/load-balancer-overview.md) by assigning it to the load balancer front-end configuration. The public IP address serves as a load-balanced virtual IP address (VIP).
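A rough Azure CLI equivalent of the preceding portal steps (a sketch; the `RoutingPreference=Internet` IP tag is how the CLI expresses the **Internet** routing preference):

```bash
# Sketch: Standard SKU, zone-redundant public IPv4 address with Internet routing preference.
az network public-ip create \
  --resource-group TestRG \
  --name myStandardPublicIP-RP \
  --sku Standard \
  --version IPv4 \
  --allocation-method Static \
  --ip-tags 'RoutingPreference=Internet' \
  --zone 1 2 3
```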
# [**Tier**](#tab/option-1-create-public-ip-tier)
-Public IP addresses are associated with a single region. The **Global** tier spans an IP address across multiple regions. **Global** tier is required for the frontends of cross-region load balancers.
-
-For more information, see [Cross-region load balancer](../../load-balancer/cross-region-overview.md).
+Public IP addresses are associated with a single region. The **Global** tier spans an IP address across multiple regions and is required for the front ends of cross-region load balancers. For a **Global** tier, **Region** must be a home region. For more information, see [Cross-region load balancer](../../load-balancer/cross-region-overview.md) and [Home regions](/azure/load-balancer/cross-region-overview#home-regions).
## Create a global tier public IP
-1. In the search box at the top of the portal, enter **Public IP**.
+Follow these steps to create a public IPv4 address with a Standard SKU and a global tier named myStandardPublicIP-Global:
-2. In the search results, select **Public IP addresses**.
+1. In the portal, search for and select **Public IP addresses**.
-3. Select **+ Create**.
+1. On the **Public IP addresses** page, select **Create**.
-4. In **Create public IP address**, enter, or select the following information:
+1. On the **Basics** tab of the **Create public IP address** screen, enter or select the following values:
- | Setting | Value |
- | | |
- | IP Version | Select IPv4 |
- | SKU | Select **Standard** |
- | Tier | Select **Global** |
- | Name | Enter **myStandardPublicIP-Global** |
- | IP address assignment | Locked as **Static** |
- | Routing Preference | Select **Microsoft**. |
- | Idle Timeout (minutes) | Leave the default of **4**. |
- | DNS name label | Leave the value blank. |
- | Subscription | Select your subscription |
- | Resource group | Select **Create new**, enter **QuickStartCreateIP-rg**. </br> Select **OK**. |
- | Location | Select **(US) East US 2** |
- | Availability Zone | Select **Zone redundant** |
+ - **Subscription**: Keep the default or select a different subscription.
+ - **Resource group**: Select **Create new**, and then name the group *TestRG*.
+ - **Region**: Select **(US) East US 2**.
+ - **Name**: Enter *myStandardPublicIP-Global*.
+ - **IP Version**: Select **IPv4**.
+ - **SKU**: Select **Standard**.
+ - **Availability zone**: Select **Zone-redundant**.
+ - **Tier**: Select **Global**.
+   - **IP address assignment**: The only option is **Static**.
+ - **Routing preference**: Select **Microsoft network**.
+ - **Idle timeout (minutes)**: Keep the default of **4**.
+ - **DNS name label**: Leave the value blank.
-5. Select **Create**.
+1. Select **Review + create**. After validation succeeds, select **Create**.
-You can associate the above created IP address with a cross-region load balancer. For more information, see [Tutorial: Create a cross-region load balancer using the Azure portal](../../load-balancer/tutorial-cross-region-portal.md).
+You can associate the IP address you created with a cross-region load balancer. For more information, see [Tutorial: Create a cross-region load balancer using the Azure portal](../../load-balancer/tutorial-cross-region-portal.md).
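A rough Azure CLI equivalent of the preceding portal steps (a sketch; remember that the region must be a home region for the **Global** tier):

```bash
# Sketch: Standard SKU, Global tier, static public IPv4 address.
az network public-ip create \
  --resource-group TestRG \
  --name myStandardPublicIP-Global \
  --sku Standard \
  --tier Global \
  --version IPv4 \
  --allocation-method Static
```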
## Clean up resources
-If you're not going to continue to use this application, delete the public IP address with the following steps:
-
-1. In the search box at the top of the portal, enter **Resource group**.
-
-2. In the search results, select **Resource groups**.
+When you're finished, delete the resource group and all of the resources it contains:
-3. Select **QuickStartCreateIP-rg**
+1. In the portal, search for and select **TestRG**.
-4. Select **Delete resource group**.
+1. From the **TestRG** screen, select **Delete resource group**.
-5. Enter **myResourceGroup** for **TYPE THE RESOURCE GROUP NAME** and select **Delete**.
+1. Enter *TestRG* for **Enter resource group name to confirm deletion**, and then select **Delete**.
## Next steps Advance to the next article to learn how to create a public IP prefix: > [!div class="nextstepaction"]
-> [Create public IP prefix using the Azure portal](create-public-ip-prefix-portal.md)
+> [Create a public IP prefix using the Azure portal](create-public-ip-prefix-portal.md)
virtual-network Kubernetes Network Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/kubernetes-network-policies.md
Title: Azure Kubernetes network policies description: Learn about Kubernetes network policies to secure your Kubernetes cluster.- -
-tags: azure-resource-manager
- Previously updated : 9/25/2018 Last updated : 03/25/2023
-# Azure Kubernetes Network Policies
+# Azure Kubernetes network policies
-## Overview
-Network Policies provides micro-segmentation for pods just like Network Security Groups (NSGs) provide micro-segmentation for VMs. The Azure Network Policy Manager (also known as Azure NPM) implementation supports the standard Kubernetes Network Policy specification. You can use labels to select a group of pods and define a list of ingress and egress rules to filter traffic to and from these pods. Learn more about the Kubernetes network policies in the [Kubernetes documentation](https://kubernetes.io/docs/concepts/services-networking/network-policies/).
+Network policies provide micro-segmentation for pods just like Network Security Groups (NSGs) provide micro-segmentation for VMs. The Azure Network Policy Manager implementation supports the standard Kubernetes network policy specification. You can use labels to select a group of pods and define a list of ingress and egress rules to filter traffic to and from these pods. Learn more about the Kubernetes network policies in the [Kubernetes documentation](https://kubernetes.io/docs/concepts/services-networking/network-policies/).
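As a quick illustration, a minimal sketch of a network policy that uses labels to filter pod traffic. The namespace and the `role` labels are assumptions for the example:

```bash
# Sketch: allow ingress to pods labeled role=database only from pods labeled role=api.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-api-to-database
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: database
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: api
EOF
```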
-![Kubernetes network policies overview](./media/kubernetes-network-policies/kubernetes-network-policies-overview.png)
-Azure NPM implementation works with the Azure CNI that provides VNet integration for containers. NPM is supported on Linux and Windows Server 2022. The implementation enforces traffic filtering by configuring allow and deny IP rules based on the defined policies in Linux IPTables or Host Network Service(HNS) ACLPolicies for Windows Server 2022.
+The Azure Network Policy Manager implementation works with the Azure CNI that provides virtual network integration for containers. Network Policy Manager is supported on Linux and Windows Server. The implementation enforces traffic filtering by configuring allow and deny IP rules based on the defined policies in Linux IPTables or Host Network Service (HNS) ACLPolicies for Windows Server.
## Planning security for your Kubernetes cluster
-When implementing security for your cluster, use network security groups (NSGs) to filter traffic entering and leaving your cluster subnet (North-South traffic). Use Azure NPM for traffic between pods in your cluster (East-West traffic).
-## Using Azure NPM
-Azure NPM can be used in the following ways to provide micro-segmentation for pods.
+When implementing security for your cluster, use network security groups (NSGs) to filter traffic entering and leaving your cluster subnet (North-South traffic). Use Azure Network Policy Manager for traffic between pods in your cluster (East-West traffic).
+
+## Using Azure Network Policy Manager
+
+Azure Network Policy Manager can be used in the following ways to provide micro-segmentation for pods.
### Azure Kubernetes Service (AKS)
-NPM is available natively in AKS and can be enabled at the time of cluster creation. Learn more about it in [Secure traffic between pods using network policies in Azure Kubernetes Service (AKS)](../aks/use-network-policies.md).
+
+Network Policy Manager is available natively in AKS and can be enabled at the time of cluster creation.
+
+For more information, see [Secure traffic between pods using network policies in Azure Kubernetes Service (AKS)](../aks/use-network-policies.md).
### Do it yourself (DIY) Kubernetes clusters in Azure
- For DIY clusters, first install the CNI plug-in and enable it on every virtual machine in a cluster. For detailed instructions, see [Deploy the plug-in for a Kubernetes cluster that you deploy yourself](deploy-container-networking.md#deploy-plug-in-for-a-kubernetes-cluster).
-Once the cluster is deployed run the following `kubectl` command to download and apply the Azure NPM *daemon set* to the cluster.
+For DIY clusters, first install the CNI plug-in and enable it on every virtual machine in a cluster. For detailed instructions, see [Deploy the plug-in for a Kubernetes cluster that you deploy yourself](deploy-container-networking.md#deploy-plug-in-for-a-kubernetes-cluster).
+
+Once the cluster is deployed, run the following `kubectl` command to download and apply the Azure Network Policy Manager *daemon set* to the cluster.
For Linux:
For Windows:
The solution is also open source and the code is available on the [Azure Container Networking repository](https://github.com/Azure/azure-container-networking/tree/master/npm).
-## Monitor and Visualize Network Configurations with Azure NPM
-Azure NPM includes informative Prometheus metrics that allow you to monitor and better understand your configurations. It provides built-in visualizations in either the Azure portal or Grafana Labs. You can start collecting these metrics using either Azure Monitor or a Prometheus Server.
+## Monitor and visualize network configurations with Azure Network Policy Manager
+
+Azure Network Policy Manager includes informative Prometheus metrics that allow you to monitor and better understand your configurations. It provides built-in visualizations in either the Azure portal or Grafana Labs. You can start collecting these metrics using either Azure Monitor or a Prometheus server.
+
+### Benefits of Azure Network Policy Manager metrics
-### Benefits of Azure NPM Metrics
-Users previously were only able to learn about their Network Configuration with `iptables` and `ipset` commands run inside a cluster node, which yields a verbose and difficult to understand output.
+Previously, users could learn about their network configuration only by running `iptables` and `ipset` commands inside a cluster node, which yields verbose output that's difficult to understand.
Overall, the metrics provide:
-- counts of policies, ACL rules, ipsets, ipset entries, and entries in any given ipset
-- execution times for individual OS calls and for handling kubernetes resource events (median, 90th percentile, and 99th percentile)
-- failure info for handling kubernetes resource events (these will fail when an OS call fails)
-#### Example Metrics Use Cases
+- Counts of policies, ACL rules, ipsets, ipset entries, and entries in any given ipset
+
+- Execution times for individual OS calls and for handling Kubernetes resource events (median, 90th percentile, and 99th percentile)
+
+- Failure info for handling Kubernetes resource events (these resource events fail when an OS call fails)
+
+#### Example metrics use cases
+ ##### Alerts via a Prometheus AlertManager
-See a [configuration for these alerts](#set-up-alerts-for-alertmanager) below.
-1. Alert when NPM has a failure with an OS call or when translating a Network Policy.
+
+See the [configuration for these alerts](#set-up-alerts-for-alertmanager) later in this article.
+
+1. Alert when Network Policy Manager has a failure with an OS call or when translating a network policy.
+ 2. Alert when the median time to apply changes for a create event was more than 100 milliseconds.
-##### Visualizations and Debugging via our Grafana Dashboard or Azure Monitor Workbook
-1. See how many IPTables rules your policies create (having a massive amount of IPTables rules may increase latency slightly).
+##### Visualizations and debugging via our Grafana dashboard or Azure Monitor workbook
+
+1. See how many IPTables rules your policies create (having a massive number of IPTables rules may increase latency slightly).
+ 2. Correlate cluster counts (for example, ACLs) to execution times.
-3. Get the human-friendly name of an ipset in a given IPTables rule (for example, "azure-npm-487392" represents "podlabel-role:database").
+
+3. Get the human-friendly name of an ipset in a given IPTables rule (for example, `azure-npm-487392` represents `podlabel-role:database`).
### All supported metrics
-The following is the list of supported metrics. Any `quantile` label has possible values `0.5`, `0.9`, and `0.99`. Any `had_error` label has possible values `false` and `true`, representing whether the operation succeeded or failed.
+
+The following list shows the supported metrics. Any `quantile` label has possible values `0.5`, `0.9`, and `0.99`. Any `had_error` label has possible values `false` and `true`, representing whether the operation succeeded or failed.
| Metric Name | Description | Prometheus Metric Type | Labels | | -- | -- | -- | -- |
There are also "exec_time_count" and "exec_time_sum" metrics for each "exec_time
The metrics can be scraped through Azure Monitor for containers or through Prometheus. ### Set up for Azure Monitor
-The first step is to enable Azure Monitor for containers for your Kubernetes cluster. Steps can be found in [Azure Monitor for containers Overview](../azure-monitor/containers/container-insights-overview.md). Once you have Azure Monitor for containers enabled, configure the [Azure Monitor for containers ConfigMap](https://aka.ms/container-azm-ms-agentconfig) to enable NPM integration and collection of Prometheus NPM metrics. Azure Monitor for containers ConfigMap has an ```integrations``` section with settings to collect NPM metrics. These settings are disabled by default in the ConfigMap. Enabling the basic setting ```collect_basic_metrics = true```, will collect basic NPM metrics. Enabling advanced setting ```collect_advanced_metrics = true``` will collect advanced metrics in addition to basic metrics.
+
+The first step is to enable Azure Monitor for containers for your Kubernetes cluster. Steps can be found in [Azure Monitor for containers Overview](../azure-monitor/containers/container-insights-overview.md). Once you have Azure Monitor for containers enabled, configure the [Azure Monitor for containers ConfigMap](https://aka.ms/container-azm-ms-agentconfig) to enable Network Policy Manager integration and collection of Prometheus Network Policy Manager metrics.
+
+Azure Monitor for containers ConfigMap has an ```integrations``` section with settings to collect Network Policy Manager metrics.
+
+These settings are disabled by default in the ConfigMap. Enabling the basic setting ```collect_basic_metrics = true``` collects basic Network Policy Manager metrics. Enabling the advanced setting ```collect_advanced_metrics = true``` collects advanced metrics in addition to basic metrics.
After editing the ConfigMap, save it locally and apply the ConfigMap to your cluster as follows. `kubectl apply -f container-azm-ms-agentconfig.yaml`
-Below is a snippet from the [Azure Monitor for containers ConfigMap](https://aka.ms/container-azm-ms-agentconfig), which shows the NPM integration enabled with advanced metrics collection.
+The following snippet is from the [Azure Monitor for containers ConfigMap](https://aka.ms/container-azm-ms-agentconfig), which shows the Network Policy Manager integration enabled with advanced metrics collection.
+ ```
+ integrations: |-
+     [integrations.azure_network_policy_manager]
+         collect_basic_metrics = false
+         collect_advanced_metrics = true
+ ```
-Advanced metrics are optional, and turning them on will automatically turn on basic metrics collection. Advanced metrics currently include only `npm_ipset_counts`
-Learn more about [Azure Monitor for containers collection settings in config map](../azure-monitor/containers/container-insights-agent-config.md)
+Advanced metrics are optional, and turning them on automatically turns on basic metrics collection. Advanced metrics currently include only `npm_ipset_counts`.
+
+Learn more about [Azure Monitor for containers collection settings in config map](../azure-monitor/containers/container-insights-agent-config.md).
+
+### Visualization options for Azure Monitor
+
+Once Network Policy Manager metrics collection is enabled, you can view the metrics in the Azure portal using container insights or in Grafana.
-### Visualization Options for Azure Monitor
-Once NPM metrics collection is enabled, you can view the metrics in the Azure portal using Container Insights or in Grafana.
+#### Viewing in Azure portal under insights for the cluster
-#### Viewing in Azure portal under Insights for the cluster
-Open Azure portal. Once in your cluster's Insights, navigate to "Workbooks" and open "Network Policy Manager (NPM) Configuration".
+Open the Azure portal. Once in your cluster's insights, navigate to **Workbooks** and open **Network Policy Manager (NPM) Configuration**.
-Besides viewing the workbook (pictures below), you can also directly query the Prometheus metrics in "Logs" under the Insights section. For example, this query will return all the metrics being collected.
+Besides viewing the workbook, you can also directly query the Prometheus metrics in **Logs** under the insights section. For example, this query returns all the metrics being collected.
+
+```query
InsightsMetrics
| where TimeGenerated > ago(5h)
| where Name contains "npm_"
+```
+
+You can also query log analytics directly for the metrics. For more information, see [Getting Started with Log Analytics Queries](../azure-monitor/containers/container-insights-log-query.md).
+
+#### Viewing in Grafana dashboard
-You can also query Log Analytics directly for the metrics. Learn more about it with [Getting Started with Log Analytics Queries](../azure-monitor/containers/container-insights-log-query.md)
+Set up your Grafana server and configure a Log Analytics data source as described [here](https://grafana.com/grafana/plugins/grafana-azure-monitor-datasource). Then, import the [Grafana dashboard with a Log Analytics backend](https://grafana.com/grafana/dashboards/10956) into your Grafana Labs.
-#### Viewing in Grafana Dashboard
-Set up your Grafana Server and configure a Log Analytics Data Source as described [here](https://grafana.com/grafana/plugins/grafana-azure-monitor-datasource). Then, import [Grafana Dashboard with a Log Analytics backend](https://grafana.com/grafana/dashboards/10956) into your Grafana Labs.
+The dashboard has visuals similar to the Azure workbook. You can add panels to chart and visualize Network Policy Manager metrics from the InsightsMetrics table.
-The dashboard has visuals similar to the Azure Workbook. You can add panels to chart & visualize NPM metrics from InsightsMetrics table.
+### Set up for Prometheus server
-### Set up for Prometheus Server
-Some users may choose to collect metrics with a Prometheus Server instead of Azure Monitor for containers. You merely need to add two jobs to your scrape config to collect NPM metrics.
+Some users may choose to collect metrics with a Prometheus server instead of Azure Monitor for containers. You merely need to add two jobs to your scrape config to collect Network Policy Manager metrics.
+
+To install a Prometheus server, add this helm repo on your cluster:
-To install a Prometheus Server, add this helm repo on your cluster
```
helm repo add stable https://kubernetes-charts.storage.googleapis.com
helm repo update
```

then add a server:

```
helm install prometheus stable/prometheus -n monitoring \
    --set pushgateway.enabled=false,alertmanager.enabled=false, \
    --set-file extraScrapeConfigs=prometheus-server-scrape-config.yaml
```
-where `prometheus-server-scrape-config.yaml` consists of
+
+where `prometheus-server-scrape-config.yaml` consists of:
+ ``` - job_name: "azure-npm-node-metrics" metrics_path: /node-metrics
where `prometheus-server-scrape-config.yaml` consists of
action: drop ```
-You can also replace the `azure-npm-node-metrics` job with the content below or incorporate it into a pre-existing job for Kubernetes pods:
+You can also replace the `azure-npm-node-metrics` job with the following content or incorporate it into a pre-existing job for Kubernetes pods:
+ ``` - job_name: "azure-npm-node-metrics-from-pod-config" metrics_path: /node-metrics
You can also replace the `azure-npm-node-metrics` job with the content below or
- source_labels: [__meta_kubernetes_namespace] regex: kube-system action: keep
- - source_labels: [__meta_kubernetes_pod_annotationpresent_azure_npm_scrapeable]
+   - source_labels: [__meta_kubernetes_pod_annotationpresent_azure_npm_scrapeable]
action: keep - source_labels: [__address__] action: replace
You can also replace the `azure-npm-node-metrics` job with the content below or
target_label: __address__ ```
-#### Set up Alerts for AlertManager
-If you use a Prometheus Server, you can set up an AlertManager like so. Here's an example config for [the two alerting rules described above](#alerts-via-a-prometheus-alertmanager):
+#### Set up alerts for AlertManager
+
+If you use a Prometheus server, you can set up an AlertManager like so. Here's an example config for [the two alerting rules described previously](#alerts-via-a-prometheus-alertmanager):
+ ```
+ groups:
+ - name: npm.rules
+   rules:
- # fire when NPM has a new failure with an OS call or when translating a Network Policy (suppose there's a scraping interval of 5m)
- - alert: AzureNPMFailureCreatePolicy
+ # fire when Network Policy Manager has a new failure with an OS call or when translating a Network Policy (suppose there's a scraping interval of 5m)
+  - alert: AzureNPMFailureCreatePolicy
    # this expression says to grab the current count minus the count 5 minutes ago, or grab the current count if there was no data 5 minutes ago
    expr: (npm_add_policy_exec_time_count{had_error='true'} - (npm_add_policy_exec_time_count{had_error='true'} offset 5m)) or npm_add_policy_exec_time_count{had_error='true'}
    labels:
      severity: warning
      addon: azure-npm
    annotations:
- summary: "Azure NPM failed to handle a policy create event"
- description: "Current failure count since NPM started: {{ $value }}"
+ summary: "Azure Network Policy Manager failed to handle a policy create event"
+ description: "Current failure count since Network Policy Manager started: {{ $value }}"
# fire when the median time to apply changes for a pod create event is more than 100 milliseconds.
- - alert: AzureNPMHighControllerPodCreateTimeMedian
+  - alert: AzureNPMHighControllerPodCreateTimeMedian
    expr: topk(1, npm_controller_pod_exec_time{operation="create",quantile="0.5",had_error="false"}) > 100.0
    labels:
      severity: warning
- addon: azure-npm
+      addon: azure-npm
annotations:
- summary: "Azure NPM controller pod create time median > 100.0 ms"
+ summary: "Azure Network Policy Manager controller pod create time median > 100.0 ms"
      # could have a simpler description like the one for the alert above,
      # but this description includes the number of pod creates that were handled in the past 10 minutes,
      # which is the retention period for observations when calculating quantiles for a Prometheus Summary metric
      description: "value: [{{ $value }}] and observation count: [{{ printf `(npm_controller_pod_exec_time_count{operation='create',pod='%s',had_error='false'} - (npm_controller_pod_exec_time_count{operation='create',pod='%s',had_error='false'} offset 10m)) or npm_controller_pod_exec_time_count{operation='create',pod='%s',had_error='false'}` $labels.pod $labels.pod $labels.pod | query | first | value }}] for pod: [{{ $labels.pod }}]"
```
-### Visualization Options for Prometheus
-When using a Prometheus Server only Grafana Dashboard is supported.
+### Visualization options for Prometheus
-If you haven't already, set up your Grafana Server and configure a Prometheus Data Source. Then, import our [Grafana Dashboard with a Prometheus backend](https://grafana.com/grafana/dashboards/13000) into your Grafana Labs.
+When you use a Prometheus server, only the Grafana dashboard is supported.
-The visuals for this dashboard are identical to the dashboard with a Container Insights/Log Analytics backend.
+If you haven't already, set up your Grafana server and configure a Prometheus data source. Then, import our [Grafana Dashboard with a Prometheus backend](https://grafana.com/grafana/dashboards/13000) into your Grafana Labs.
-### Sample Dashboards
-Following are some sample dashboard for NPM metrics in Container Insights (CI) and Grafana
+The visuals for this dashboard are identical to the dashboard with a container insights/log analytics backend.
-#### CI Summary Counts
-![Azure Workbook summary counts](media/kubernetes-network-policies/workbook-summary-counts.png)
+### Sample dashboards
-#### CI Counts over Time
-[![Azure Workbook counts over time](media/kubernetes-network-policies/workbook-counts-over-time.png)](media/kubernetes-network-policies/workbook-counts-over-time.png#lightbox)
+Following are some sample dashboards for Network Policy Manager metrics in container insights (CI) and Grafana.
-#### CI IPSet Entries
-[![Azure Workbook IPSet entries](media/kubernetes-network-policies/workbook-ipset-entries.png)](media/kubernetes-network-policies/workbook-ipset-entries.png#lightbox)
+#### CI summary counts
-#### CI Runtime Quantiles
-![Azure Workbook runtime quantiles](media/kubernetes-network-policies/workbook-runtime-quantiles.png)
-#### Grafana Dashboard Summary Counts
-![Grafana Dashboard summary counts](media/kubernetes-network-policies/grafana-summary-counts.png)
+#### CI counts over time
-#### Grafana Dashboard Counts over Time
-[![Grafana Dashboard counts over time](media/kubernetes-network-policies/grafana-counts-over-time.png)](media/kubernetes-network-policies/grafana-counts-over-time.png#lightbox)
-#### Grafana Dashboard IPSet Entries
-[![Grafana Dashboard IPSet entries](media/kubernetes-network-policies/grafana-ipset-entries.png)](media/kubernetes-network-policies/grafana-ipset-entries.png#lightbox)
+#### CI IPSet entries
-#### Grafana Dashboard Runtime Quantiles
-[![Grafana Dashboard runtime quantiles](media/kubernetes-network-policies/grafana-runtime-quantiles.png)](media/kubernetes-network-policies/grafana-runtime-quantiles.png#lightbox)
+#### CI runtime quantiles
++
+#### Grafana dashboard summary counts
++
+#### Grafana dashboard counts over time
++
+#### Grafana dashboard IPSet entries
++
+#### Grafana dashboard runtime quantiles
## Next steps

- Learn about [Azure Kubernetes Service](../aks/intro-kubernetes.md).
+- Learn about [container networking](container-networking-overview.md).
+ - [Deploy the plug-in](deploy-container-networking.md) for Kubernetes clusters or Docker containers.
virtual-network Virtual Network Bandwidth Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-bandwidth-testing.md
Title: Testing Azure VM network throughput-
-description: Use NTTTCP to target the network for testing and minimize the use of other resources that could impact performance.
+ Title: Test VM network throughput by using NTTTCP
+description: Use the NTTTCP tool to test network bandwidth and throughput performance for Windows and Linux VMs on a virtual network.
Previously updated : 10/06/2020 Last updated : 03/23/2023
-# Bandwidth/Throughput testing (NTTTCP)
+# Test VM network throughput by using NTTTCP
-When testing network throughput performance in Azure, it's best to use a tool that targets the network for testing and minimizes the use of other resources that could impact performance. NTTTCP is recommended.
+This article describes how to use the free NTTTCP tool from Microsoft to test network bandwidth and throughput performance on Azure Windows or Linux virtual machines (VMs). A tool like NTTTCP targets the network for testing and minimizes the use of other resources that could affect performance.
-Copy the tool to two Azure VMs of the same size. One VM functions as SENDER
-and the other as RECEIVER.
+## Prerequisites
-#### Deploying VMs for testing
-For the purposes of this test, the two VMs should be in either the same [Proximity Placement Group](../virtual-machines/co-location.md) or the same Availability Set so that we can use their internal IPs and exclude the Load Balancers from the test. It is possible to test with the VIP but this kind of testing is outside the scope of this document.
+To test throughput, you need two VMs of the same size to function as *sender* and *receiver*. The two VMs should be in the same [proximity placement group](/azure/virtual-machines/co-location) or [availability set](/azure/virtual-machines/availability-set-overview), so you can use their internal IP addresses and exclude load balancers from the test.
-Make a note of the RECEIVER's IP address. Let's call that IP "a.b.c.r"
+Note the number of VM cores and the receiver VM IP address to use in the commands. Both the sender and receiver commands use the receiver's IP address.
-Make a note of the number of cores on the VM. Let's call this "\#num\_cores"
+>[!NOTE]
+>Testing by using a virtual IP (VIP) is possible, but is beyond the scope of this article.
-Run the NTTTCP test for 300 seconds (or 5 minutes) on the sender VM and receiver VM.
+## Test throughput with Windows VMs or Linux VMs
-Tip: When setting up this test for the first time, you might try a shorter test period to get feedback sooner. Once the tool is working as expected, extend the test period to 300 seconds for the most accurate results.
+You can test throughput from Windows VMs by using [NTTTCP](https://github.com/microsoft/ntttcp) or from Linux VMs by using [NTTTCP-for-Linux](https://github.com/Microsoft/ntttcp-for-linux).
-> [!NOTE]
-> The sender **and** receiver must specify **the same** test duration
-parameter (-t).
->
-> The IP address in both Sender and Receiver commands is the Receiver's IP address.
->
-> The -r and -s flags are no longer required for the receiver and sender parameters.
+# [Windows](#tab/windows)
-To test a single TCP stream for 10 seconds:
+### Set up NTTTCP and test configuration
-Receiver parameters: ntttcp -r -t 10 -P 1
+1. On both the sender and receiver VMs, [download the latest version of NTTTCP](https://github.com/microsoft/ntttcp/releases/latest) into a separate folder like *c:\\tools*.
-Sender parameters: ntttcp -s10.27.33.7 -t 10 -n 1 -P 1
+1. On the receiver VM, create a Windows Defender Firewall `allow` rule to allow the NTTTCP traffic to arrive. It's easier to allow *ntttcp.exe* by name than to allow specific inbound TCP ports. Run the following command, replacing `c:\tools` with your download path for *ntttcp.exe* if different.
-> [!NOTE]
-> The preceding sample should only be used to confirm your configuration. Valid examples of testing are covered later in this document.
+ ```cmd
+ netsh advfirewall firewall add rule program=c:\tools\ntttcp.exe name="ntttcp" protocol=any dir=in action=allow enable=yes profile=ANY
+ ```
-## Testing VMs running WINDOWS:
+1. To confirm your configuration, test a single Transmission Control Protocol (TCP) stream for 10 seconds by running the following commands:
-#### Get NTTTCP onto the VMs.
+ - On the receiver VM, run `ntttcp -r -t 10 -P 1`.
+ - On the sender VM, run `ntttcp -s<receiver IP address> -t 10 -n 1 -P 1`.
-Download the latest version:
-https://github.com/microsoft/ntttcp/releases/latest
+ >[!NOTE]
+ >Use the preceding commands only to test configuration.
-Consider putting NTTTCP in separate folder, like c:\\tools
+ >[!TIP]
+ >When you run the test for the first time to verify setup, use a short test duration to get quick feedback. Once you verify the tool is working, extend the test duration to 300 seconds for the most accurate results.
-#### Allow NTTTCP through the Windows firewall
-On the RECEIVER, create an Allow rule on the Windows Firewall to allow the
-NTTTCP traffic to arrive. It's easiest to allow the entire NTTTCP program by
-name rather than to allow specific TCP ports inbound.
+### Run throughput tests
-Allow ntttcp through the Windows Firewall like this:
+Run *ntttcp.exe* from the Windows command line, not from PowerShell. Run the test for 300 seconds, or five minutes, on both the sender and receiver VMs. The sender and receiver must specify the same test duration for the `-t` parameter.
-netsh advfirewall firewall add rule program=\<PATH\>\\ntttcp.exe name="ntttcp" protocol=any dir=in action=allow enable=yes profile=ANY
+1. On the receiver VM, run the following command, replacing the `<number of VM cores>` and `<receiver IP address>` placeholders with your own values.
-For example, if you copied ntttcp.exe to the "c:\\tools" folder, this would be the command: 
+ ```cmd
+ ntttcp -r -m [<number of VM cores> x 2],*,<receiver IP address> -t 300
+ ```
-netsh advfirewall firewall add rule program=c:\\tools\\ntttcp.exe name="ntttcp" protocol=any dir=in action=allow enable=yes profile=ANY
+ The following example shows a command for a VM with four cores and an IP address of `10.0.0.4`.
-#### Running NTTTCP tests
+ `ntttcp -r -m 8,*,10.0.0.4 -t 300`
-Start NTTTCP on the RECEIVER (**run from CMD**, not from PowerShell):
+1. On the sender VM, run the following command. The sender and receiver commands differ only in the `-s` or `-r` parameter that designates the sender or receiver VM.
-ntttcp -r -m [2\*\#num\_cores],\*,a.b.c.r -t 300
+ ```cmd
+ ntttcp -s -m [<number of VM cores> x 2],*,<receiver IP address> -t 300
+ ```
-If the VM has four cores and an IP address of 10.0.0.4, it would look like this:
+ The following example shows the sender command for a receiver IP address of `10.0.0.4`.
+
+ ```cmd
+   ntttcp -s -m 8,*,10.0.0.4 -t 300
+ ```
-ntttcp -r -m 8,\*,10.0.0.4 -t 300
+1. Wait for the results.
+# [Linux](#tab/linux)
-Start NTTTCP on the SENDER (**run from CMD**, not from PowerShell):
+### Prepare VMs and install NTTTCP-for-Linux
-ntttcp -s -m 8,\*,10.0.0.4 -t 300 
+To measure throughput from Linux machines, use [NTTTCP-for-Linux](https://github.com/Microsoft/ntttcp-for-linux).
-Wait for the results.
+1. Prepare both the sender and receiver VMs for NTTTCP-for-Linux by running the following commands for your Linux distribution:
+ - For **CentOS**, install `gcc` and `git`.
-## Testing VMs running LINUX:
+ ``` bash
+ yum install gcc -y
+ yum install git -y
+ ```
-Use nttcp-for-linux. It is available from <https://github.com/Microsoft/ntttcp-for-linux>
+ - For **Ubuntu**, install `build-essential` and `git`.
-On the Linux VMs (both SENDER and RECEIVER), run these commands to prepare ntttcp-for-linux on your VMs:
+ ``` bash
+ apt-get -y install build-essential
+ apt-get -y install git
+ ```
-CentOS - Install gcc and git:
-``` bash
-  yum install gcc -y
-  yum install git -y
-```
-Ubuntu - Install build-essential and git:
-``` bash
- apt-get -y install build-essential
- apt-get -y install git
-```
-SUSE - Install git-core, gcc, and make:
-``` bash
- zypper in -y git-core gcc make
-```
-Make and Install on both:
-``` bash
- git clone https://github.com/Microsoft/ntttcp-for-linux
- cd ntttcp-for-linux/src
- make && make install
-```
+ - For **SUSE**, install `git-core`, `gcc`, and `make`.
-As in the Windows example, we assume the Linux RECEIVER's IP is 10.0.0.4
+ ``` bash
+ zypper in -y git-core gcc make
+ ```
-Start NTTTCP-for-Linux on the RECEIVER:
+1. Make and install NTTTCP-for-Linux.
-``` bash
-ntttcp -r -t 300
-```
+ ``` bash
+ git clone https://github.com/Microsoft/ntttcp-for-linux
+ cd ntttcp-for-linux/src
+ make && make install
+ ```
-And on the SENDER, run:
+### Run throughput tests
-``` bash
-ntttcp -s10.0.0.4 -t 300
-```
- 
-Test length defaults to 60 seconds if no time parameter is given
+Run the NTTTCP test for 300 seconds, or five minutes, on both the sender VM and the receiver VM. The sender and receiver must specify the same test duration for the `-t` parameter. Test duration defaults to 60 seconds if you don't specify a time parameter.
-## Testing between VMs running Windows and LINUX:
+1. On the receiver VM, run the following command:
-On this scenarios we should enable the no-sync mode so the test can run. This is done by using the **-N flag** for Linux, and **-ns flag** for Windows.
+ ``` bash
+ ntttcp -r -t 300
+ ```
-#### From Linux to Windows:
+1. On the sender VM, run the following command. This example shows a sender command for a receiver IP address of `10.0.0.4`.
-Receiver \<Windows>:
+ ``` bash
+ ntttcp -s10.0.0.4 -t 300
+ ```
-``` bash
-ntttcp -r -m <2 x nr cores>,*,<Windows server IP>
-```
+
+## Test throughput between a Windows VM and a Linux VM
+
+To run NTTTCP throughput tests between a Windows VM and a Linux VM, enable no-sync mode by using the `-ns` flag on Windows or the `-N` flag on Linux.
-Sender \<Linux> :
+# [Windows](#tab/windows)
-``` bash
-ntttcp -s -m <2 x nr cores>,*,<Windows server IP> -N -t 300
+To test with the Windows VM as the receiver, run the following command:
+
+```cmd
+ntttcp -r -m [<number of VM cores> x 2],*,<Windows VM IP address> -t 300
+```
+To test with the Windows VM as the sender, run the following command:
+
+```cmd
+ntttcp -s -m [<number of VM cores> x 2],*,<Linux VM IP address> -ns -t 300
```
-#### From Windows to Linux:
+# [Linux](#tab/linux)
-Receiver \<Linux>:
+To test with the Linux VM as the receiver, run the following command:
-``` bash
-ntttcp -r -m <2 x nr cores>,*,<Linux server IP>
+```bash
+ntttcp -r -m [<number of VM cores> x 2],*,<Linux VM IP address> -t 300
```
-Sender \<Windows>:
+To test with the Linux VM as the sender, run the following command:
-``` bash
-ntttcp -s -m <2 x nr cores>,*,<Linux server IP> -ns -t 300
+```bash
+ntttcp -s -m [<number of VM cores> x 2],*,<Windows VM IP address> -N -t 300
```
-## Testing Cloud Service Instances:
-You need to add following section into your ServiceDefinition.csdef
++
+## Test Cloud Service instances
+
+Add the following section to *ServiceDefinition.csdef*:
+ ```xml
+ <Endpoints>
+     <InternalEndpoint name="Endpoint3" protocol="any" />
+ </Endpoints>
+ ```
## Next steps
-* Depending on results, there may be room to [Optimize network throughput machines](virtual-network-optimize-network-bandwidth.md) for your scenario.
-* Read about how [bandwidth is allocated to virtual machines](virtual-machine-network-throughput.md)
-* Learn more with [Azure Virtual Network frequently asked questions (FAQ)](virtual-networks-faq.md)
+
+- [Optimize network throughput for Azure virtual machines](virtual-network-optimize-network-bandwidth.md)
+- [Virtual machine network bandwidth](virtual-machine-network-throughput.md)
+- [Test VM network latency](virtual-network-test-latency.md)
+- [Azure Virtual Network frequently asked questions (FAQ)](virtual-networks-faq.md)
virtual-network Virtual Network Manage Subnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-manage-subnet.md
Title: Add, change, or delete an Azure virtual network subnet
+ Title: Add, change, or delete a subnet
-description: Learn where to find information about virtual networks and how to add, change, or delete a virtual network subnet in Azure.
+description: Learn how to add, change, or delete virtual network subnets by using the Azure portal, Azure CLI, or Azure PowerShell.
Previously updated : 06/27/2022 Last updated : 03/20/2023 # Add, change, or delete a virtual network subnet
-Learn how to add, change, or delete a virtual network subnet. All Azure resources deployed into a virtual network are deployed into a subnet within a virtual network. If you're new to virtual networks, you can learn more about them in the [Virtual network overview](virtual-networks-overview.md) or by completing a [quickstart](quick-create-portal.md). To learn more about managing a virtual network, see [Create, change, or delete a virtual network](manage-virtual-network.md).
+All Azure resources in a virtual network are deployed into subnets within the virtual network. This article explains how to add, change, or delete virtual network subnets by using the Azure portal, Azure CLI, or Azure PowerShell.
-## Before you begin
+## Prerequisites
-If you don't have one, set up an Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). Then complete one of these tasks before starting steps in any section of this article:
+# [Portal](#tab/azure-portal)
-- **Portal users**: Sign in to the [Azure portal](https://portal.azure.com) with your Azure account.
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- An existing Azure virtual network. To create one, see [Quickstart: Create a virtual network by using the Azure portal](quick-create-portal.md).
+- To run the procedures, sign in to the [Azure portal](https://portal.azure.com) with your Azure account.
-- **PowerShell users**: Either run the commands in the [Azure Cloud Shell](https://shell.azure.com/powershell), or run PowerShell from your computer. The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account. In the Azure Cloud Shell browser tab, find the **Select environment** dropdown list, then choose **PowerShell** if it isn't already selected.
+# [Azure CLI](#tab/azure-cli)
- If you're running PowerShell locally, use Azure PowerShell module version 1.0.0 or later. Run `Get-Module -ListAvailable Az.Network` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). Also run `Connect-AzAccount` to create a connection with Azure.
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- An existing Azure virtual network. To create one, see [Quickstart: Create a virtual network by using Azure CLI](quick-create-cli.md).
-- **Azure CLI users**: Run the commands via either the [Azure Cloud Shell](https://shell.azure.com/bash) the Azure CLI running locally. Use Azure CLI version 2.0.31 or later if you're running the Azure CLI locally. Run `az --version` to find the installed version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). Also run `az login` to create a connection with Azure.
+You can run the commands either in the [Azure Cloud Shell](/azure/cloud-shell/overview) or from Azure CLI on your computer.
-The account you sign in to, or connect to Azure with, must be assigned to the [Network contributor role](../role-based-access-control/built-in-roles.md?toc=%2fazure%2fvirtual-network%2ftoc.json#network-contributor) role or to a [Custom role](../role-based-access-control/custom-roles.md?toc=%2fazure%2fvirtual-network%2ftoc.json) that's assigned the appropriate actions listed in [Permissions](#permissions).
+- Azure Cloud Shell is a free interactive shell that has common Azure tools preinstalled and configured to use with your account. To run the commands in the Cloud Shell, select **Open Cloudshell** at the upper-right corner of a code block. Select **Copy** to copy the code, and paste it into Cloud Shell to run it. You can also run the Cloud Shell from within the Azure portal.
+
+- If you [install Azure CLI locally](/cli/azure/install-azure-cli) to run the commands, you need Azure CLI version 2.31.0 or later. Run [az version](/cli/azure/reference-index?#az-version) to find your installed version, and run [az upgrade](/cli/azure/reference-index?#az-upgrade) to upgrade.
+
+ Run `az login` to connect to Azure.
+
+# [PowerShell](#tab/azure-powershell)
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- An existing Azure virtual network. To create one, see [Quickstart: Create a virtual network by using Azure PowerShell](quick-create-powershell.md).
+
+You can run the commands either in the [Azure Cloud Shell](/azure/cloud-shell/overview) or from PowerShell on your computer.
+
+- Azure Cloud Shell is a free interactive shell that has common Azure tools preinstalled and configured to use with your account. To run the commands in the Cloud Shell, select **Open Cloudshell** at the upper-right corner of a code block. Select **Copy** to copy the code, and paste it into Cloud Shell to run it. You can also run the Cloud Shell from within the Azure portal.
+
+- If you [install Azure PowerShell locally](/powershell/azure/install-Az-ps) to run the commands, you need Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find your installed version. If you need to upgrade, see [Update the Azure PowerShell module](/powershell/azure/install-Az-ps#update-the-azure-powershell-module).
+
+ Also make sure your `Az.Network` module is 4.3.0 or later. To verify the installed module, use `Get-InstalledModule -Name Az.Network`. To update, use the command `Update-Module -Name Az.Network`.
+
+ Run `Connect-AzAccount` to connect to Azure.
++++
+### Permissions
+
+To manage subnets, your account must be assigned the [Network contributor role](../role-based-access-control/built-in-roles.md?toc=%2fazure%2fvirtual-network%2ftoc.json#network-contributor) or a [custom role](../role-based-access-control/custom-roles.md?toc=%2fazure%2fvirtual-network%2ftoc.json) that's assigned the appropriate actions in the following list:
+
+|Action | Name |
+|-- | -- |
+|Microsoft.Network/virtualNetworks/subnets/read | Read a virtual network subnet. |
+|Microsoft.Network/virtualNetworks/subnets/write | Create or update a virtual network subnet. |
+|Microsoft.Network/virtualNetworks/subnets/delete | Delete a virtual network subnet. |
+|Microsoft.Network/virtualNetworks/subnets/join/action | Join a virtual network. |
+|Microsoft.Network/virtualNetworks/subnets/joinViaServiceEndpoint/action | Enable a service endpoint for a subnet. |
+|Microsoft.Network/virtualNetworks/subnets/virtualMachines/read | Get the virtual machines in a subnet. |
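+
+As an illustration, the following Azure CLI sketch creates a hypothetical custom role that grants the read, write, and delete actions from this list; the role name and subscription scope are placeholders.
+
+```azurecli-interactive
+az role definition create --role-definition '{
+    "Name": "Subnet Operator (example)",
+    "Description": "Example custom role for managing subnets.",
+    "Actions": [
+        "Microsoft.Network/virtualNetworks/subnets/read",
+        "Microsoft.Network/virtualNetworks/subnets/write",
+        "Microsoft.Network/virtualNetworks/subnets/delete"
+    ],
+    "AssignableScopes": ["/subscriptions/<subscriptionId>"]
+}'
+```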
## Add a subnet
-1. Go to the [Azure portal](https://portal.azure.com) to view your virtual networks. Search for and select **Virtual networks**.
+# [Portal](#tab/azure-portal)
+
+1. In the [Azure portal](https://portal.azure.com), search for and select *virtual networks*.
+1. On the **Virtual networks** page, select the virtual network you want to add a subnet to.
+1. On the virtual network page, select **Subnets** from the left navigation.
+1. On the **Subnets** page, select **+ Subnet**.
+1. On the **Add subnet** screen, enter or select values for the subnet settings.
+1. Select **Save**.
+
+# [Azure CLI](#tab/azure-cli)
+
+Run the [az network vnet subnet create](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-create) command with the options you want to configure.
+
+```azurecli-interactive
+az network vnet subnet create --name <subnetName> --resource-group <resourceGroupName> --vnet-name <virtualNetworkName> --address-prefixes <addressPrefix>
+```
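+
+For example, the following command uses the hypothetical names *myBackendSubnet*, *myResourceGroup*, and *myVNet* with an example address prefix:
+
+```azurecli-interactive
+az network vnet subnet create --name myBackendSubnet --resource-group myResourceGroup --vnet-name myVNet --address-prefixes 10.0.0.0/24
+```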
+
+# [PowerShell](#tab/azure-powershell)
-2. Select the name of the virtual network you want to add a subnet to.
+1. Use the [Add-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/add-azvirtualnetworksubnetconfig) command to configure the subnet.
-3. From **Settings**, select **Subnets** > **Subnet**.
+ ```azurepowershell-interactive
+ $vnet = Get-AzVirtualNetwork -Name <virtualNetworkName> -ResourceGroupName <resourceGroupName>
+    Add-AzVirtualNetworkSubnetConfig -Name <subnetName> -VirtualNetwork $vnet -AddressPrefix <addressPrefix>
+ ```
-4. In the **Add subnet** dialog box, enter values for the following settings:
+1. Then associate the subnet configuration to the virtual network with [Set-AzVirtualNetwork](/powershell/module/az.network/Set-azVirtualNetwork).
- | Setting | Description |
- | | |
- | **Name** | The name must be unique within the virtual network. For maximum compatibility with other Azure services, we recommend using a letter as the first character of the name. For example, Azure Application Gateway won't deploy into a subnet that has a name that starts with a number. |
- | **Subnet address range** | <p>The range must be unique within the address space for the virtual network. The range can't overlap with other subnet address ranges within the virtual network. The address space must be specified by using Classless Inter-Domain Routing (CIDR) notation.</p><p>For example, in a virtual network with address space *10.0.0.0/16*, you might define a subnet address space of *10.0.0.0/22*. The smallest range you can specify is */29*, which provides eight IP addresses for the subnet. Azure reserves the first and last address in each subnet for protocol conformance. Three more addresses are reserved for Azure service usage. As a result, defining a subnet with a */29* address range results in three usable IP addresses in the subnet.</p><p>If you plan to connect a virtual network to a VPN gateway, you must create a gateway subnet. Learn more about [specific address range considerations for gateway subnets](../vpn-gateway/vpn-gateway-about-vpn-gateway-settings.md?toc=%2fazure%2fvirtual-network%2ftoc.json#gwsub). You can change the address range after the subnet is added, under specific conditions. To learn how to change a subnet address range, see [Change subnet settings](#change-subnet-settings).</p> |
- | **Add IPv6 address space** | You can create a virtual network that's dual-stack (supports IPv4 and IPv6) by adding an existing IPv6 address space. You can also add IPv6 support later, after creating the virtual network. Currently, IPv6 isn't fully supported for all services in Azure. To learn more about IPv6 and its limitations, see [Overview of IPv6 for Azure Virtual Network](ip-services/ipv6-overview.md)|
- | **NAT gateway** | To provide Network Address Translation (NAT) to resources on a subnet, you may associate an existing NAT gateway to a subnet. The NAT gateway must exist in the same subscription and location as the virtual network. Learn more about [Virtual Network NAT](./nat-gateway/nat-overview.md) and [how to create a NAT gateway](./nat-gateway/quickstart-create-nat-gateway-portal.md)
- | **Network security group** | To filter inbound and outbound network traffic for the subnet, you may associate an existing network security group to a subnet. The network security group must exist in the same subscription and location as the virtual network. Learn more about [network security groups](./network-security-groups-overview.md) and [how to create a network security group](tutorial-filter-network-traffic.md). |
- | **Route table** | To control network traffic routing to other networks, you may optionally associate an existing route table to a subnet. The route table must exist in the same subscription and location as the virtual network. Learn more about [Azure routing](virtual-networks-udr-overview.md) and [how to create a route table](tutorial-create-route-table-portal.md). |
- | **Service endpoints** | <p>A subnet may optionally have one or more service endpoints enabled for it. To enable a service endpoint for a service, select the service or services that you want to enable service endpoints for from the **Services** list. Azure configures the location automatically for an endpoint. By default, Azure configures the service endpoints for the virtual network's region. To support regional failover scenarios, Azure automatically configures endpoints to [Azure paired regions](../availability-zones/cross-region-replication-azure.md?toc=%2fazure%2fvirtual-network%2ftoc.json) for Azure Storage.</p><p>To remove a service endpoint, unselect the service you want to remove the service endpoint for. To learn more about service endpoints, and the services they can be enabled for, see [Virtual network service endpoints](virtual-network-service-endpoints-overview.md). Once you enable a service endpoint for a service, you must also enable network access for the subnet for a resource created with the service. For example, if you enable the service endpoint for **Microsoft.Storage**, you must also enable network access to all Azure Storage accounts you want to grant network access to. To enable network access to subnets that a service endpoint is enabled for, see the documentation for the individual service you enabled the service endpoint for.</p><p>To validate that a service endpoint is enabled for a subnet, view the [effective routes](diagnose-network-routing-problem.md) for any network interface in the subnet. When you configure an endpoint, you see a *default* route with the address prefixes of the service, and a next hop type of **VirtualNetworkServiceEndpoint**. To learn more about routing, see [Virtual network traffic routing](virtual-networks-udr-overview.md).</p> |
- | **Subnet delegation** | A subnet may optionally have one or more delegations enabled for it. Subnet delegation gives explicit permissions to the service to create service-specific resources in the subnet using a unique identifier during service deployment. To delegate for a service, select the service you want to delegate to from the **Services** list. |
- | **Network policy for private endpoints**| To control traffic going to a private endpoint, you can use network security groups, application security groups, or user defined routes. Set the private endpoint network policy to *Enabled* to use these controls on a subnet. Once enabled, network policy applies to all private endpoints on the subnet. To learn more, see [Manage network policies for private endpoints](../private-link/disable-private-endpoint-network-policy.md). |
+ ```azurepowershell-interactive
+ Set-AzVirtualNetwork -VirtualNetwork $vnet
+ ```
-5. To add the subnet to the virtual network that you selected, select **OK**.
+
-### Commands
+You can configure the following settings for a subnet:
-| Tool | Command |
-| - | - |
-| Azure CLI | [az network vnet subnet create](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-create) |
-| PowerShell | [Add-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/add-azvirtualnetworksubnetconfig) |
+ | Setting | Description |
+ | | |
+ | **Name** | The name must be unique within the virtual network. For maximum compatibility with other Azure services, use a letter as the first character of the name. For example, Azure Application Gateway can't deploy into a subnet whose name starts with a number. |
+ | **Subnet address range** | The range must be unique within the address space and can't overlap with other subnet address ranges in the virtual network. You must specify the address space by using Classless Inter-Domain Routing (CIDR) notation.<br><br>For example, in a virtual network with address space `10.0.0.0/16`, you might define a subnet address space of `10.0.0.0/22`. The smallest range you can specify is `/29`, which provides eight IP addresses for the subnet. Azure reserves the first and last address in each subnet for protocol conformance, and three more addresses for Azure service usage. So defining a subnet with a `/29` address range gives three usable IP addresses in the subnet.<br><br>If you plan to connect a virtual network to a virtual private network (VPN) gateway, you must create a gateway subnet. For more information, see [Gateway subnet](../vpn-gateway/vpn-gateway-about-vpn-gateway-settings.md?toc=%2fazure%2fvirtual-network%2ftoc.json#gwsub).|
+ | **Add IPv6 address space** | You can create a dual-stack virtual network that supports IPv4 and IPv6 by adding an existing IPv6 address space. Currently, IPv6 isn't fully supported for all services in Azure. For more information, see [Overview of IPv6 for Azure Virtual Network](ip-services/ipv6-overview.md)|
+ | **NAT gateway** | To provide network address translation (NAT) to resources on a subnet, you can associate an existing NAT gateway to a subnet. The NAT gateway must exist in the same subscription and location as the virtual network. For more information, see [Virtual network NAT](./nat-gateway/nat-overview.md) and [Quickstart: Create a NAT gateway by using the Azure portal](./nat-gateway/quickstart-create-nat-gateway-portal.md).|
+ | **Network security group** | To filter inbound and outbound network traffic for the subnet, you can associate an existing network security group (NSG) to a subnet. The NSG must exist in the same subscription and location as the virtual network. For more information, see [Network security groups](./network-security-groups-overview.md) and [Tutorial: Filter network traffic with a network security group by using the Azure portal](tutorial-filter-network-traffic.md). |
+ | **Route table** | To control network traffic routing to other networks, you can optionally associate an existing route table to a subnet. The route table must exist in the same subscription and location as the virtual network. For more information, see [Virtual network traffic routing](virtual-networks-udr-overview.md) and [Tutorial: Route network traffic with a route table by using the Azure portal](tutorial-create-route-table-portal.md). |
+ | **Service endpoints** | You can optionally enable one or more service endpoints for a subnet. To enable a service endpoint for a service during portal subnet setup, select the service or services that you want service endpoints for from the popup list under **Services**. Azure configures the location automatically for an endpoint. To remove a service endpoint, deselect the service you want to remove the service endpoint for. For more information, see [Virtual network service endpoints](virtual-network-service-endpoints-overview.md).<br><br>By default, Azure configures the service endpoints for the virtual network's region. To support regional failover scenarios, Azure automatically configures endpoints to [Azure paired regions](../availability-zones/cross-region-replication-azure.md?toc=%2fazure%2fvirtual-network%2ftoc.json) for Azure Storage.<br><br>Once you enable a service endpoint, you must also enable subnet access for resources the service creates. For example, if you enable the service endpoint for **Microsoft.Storage**, you must also enable network access to all Azure Storage accounts you want to grant network access to. To enable network access to subnets that a service endpoint is enabled for, see the documentation for the individual service.<br><br>To validate that a service endpoint is enabled for a subnet, view the [effective routes](diagnose-network-routing-problem.md) for any network interface in the subnet. When you configure an endpoint, you see a default route with the address prefixes of the service, and a next hop type of **VirtualNetworkServiceEndpoint**. For more information, see [Virtual network traffic routing](virtual-networks-udr-overview.md).|
+ | **Subnet delegation** | You can optionally enable one or more delegations for a subnet. Subnet delegation gives explicit permissions to the service to create service-specific resources in the subnet by using a unique identifier during service deployment. To delegate for a service during portal subnet setup, select the service you want to delegate to from the popup list. |
+ | **Network policy for private endpoints**| To control traffic going to a private endpoint, you can use **Network security groups** or **Route tables**. During portal subnet setup, select either or both of these options under **Private endpoint network policy** to use these controls on a subnet. Once enabled, network policy applies to all private endpoints on the subnet. For more information, see [Manage network policies for private endpoints](../private-link/disable-private-endpoint-network-policy.md). |
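+
+You can apply several of these settings when you create the subnet. For example, the following Azure CLI sketch enables a storage service endpoint and a delegation at creation time; the names and address prefix are hypothetical examples.
+
+```azurecli-interactive
+az network vnet subnet create \
+    --name myBackendSubnet \
+    --resource-group myResourceGroup \
+    --vnet-name myVNet \
+    --address-prefixes 10.0.0.0/24 \
+    --service-endpoints Microsoft.Storage \
+    --delegations Microsoft.Web/serverFarms
+```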
## Change subnet settings
-1. Go to the [Azure portal](https://portal.azure.com) to view your virtual networks. Search for and select **Virtual networks**.
+# [Portal](#tab/azure-portal)
+
+1. In the [Azure portal](https://portal.azure.com), search for and select *virtual networks*.
+1. On the **Virtual networks** page, select the virtual network you want to change subnet settings for.
+1. On the virtual network's page, select **Subnets** from the left navigation.
+1. On the **Subnets** page, select the subnet you want to change settings for.
+1. On the subnet screen, change the subnet settings, and then select **Save**.
-2. Select the name of the virtual network containing the subnet you want to change.
+# [Azure CLI](#tab/azure-cli)
-3. From **Settings**, select **Subnets**.
+Run the [az network vnet subnet update](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-update) command with the options you want to change.
-4. In the list of subnets, select the subnet you want to change settings for.
+```azurecli-interactive
+az network vnet subnet update --name <subnetName> --resource-group <resourceGroupName> --vnet-name <virtualNetworkName>
+```
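+
+For example, the following command associates a hypothetical network security group named *myNsg* with an existing subnet:
+
+```azurecli-interactive
+az network vnet subnet update --name myBackendSubnet --resource-group myResourceGroup --vnet-name myVNet --network-security-group myNsg
+```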
-5. In the subnet page, change any of the following settings:
+# [PowerShell](#tab/azure-powershell)
- | Setting | Description |
- | | |
- | **Subnet address range** | If no resources are deployed within the subnet, you can change the address range. If any resources exist in the subnet, you must either move the resources to another subnet, or delete them from the subnet first. The steps you take to move or delete a resource vary depending on the resource. To learn how to move or delete resources that are in subnets, read the documentation for each of those resource types. See the constraints for **Address range** in step 4 of [Add a subnet](#add-a-subnet). |
- | **Add IPv6 address space**, **NAT Gateway**, **Network security group**, and **Route table** | See step 4 of [Add a subnet](#add-a-subnet). |
- | **Service endpoints** | <p>See service endpoints in step 4 of [Add a subnet](#add-a-subnet). When enabling a service endpoint for an existing subnet, ensure that no critical tasks are running on any resource in the subnet. Service endpoints switch routes on every network interface in the subnet. The service endpoints go from using the default route with the *0.0.0.0/0* address prefix and next hop type of *Internet*, to using a new route with the address prefixes of the service and a next hop type of *VirtualNetworkServiceEndpoint*.</p><p>During the switch, any open TCP connections may be terminated. The service endpoint isn't enabled until traffic flows to the service for all network interfaces are updated with the new route. To learn more about routing, see [Virtual network traffic routing](virtual-networks-udr-overview.md).</p> |
- | **Subnet delegation** | Subnet delegation can be modified to zero or multiple delegations enabled for it. If a resource for a service is already deployed in the subnet, subnet delegation can't be added or removed until all the resources for the service are removed. To delegate for a different service, select the service you want to delegate to from the **Services** list. |
- | **Network policy for private endpoints**| See step 4 of [Add a subnet](#add-a-subnet). |
+Run the [Set-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/set-azvirtualnetworksubnetconfig) command with the options you want to change. Then set the configuration with `Set-AzVirtualNetwork`.
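+
+The following is a minimal sketch that changes a subnet's address prefix, assuming no resources are deployed in the subnet; the placeholder values are examples.
+
+```azurepowershell-interactive
+# Get the virtual network, update the subnet configuration, then apply the change.
+$vnet = Get-AzVirtualNetwork -Name <virtualNetworkName> -ResourceGroupName <resourceGroupName>
+Set-AzVirtualNetworkSubnetConfig -Name <subnetName> -VirtualNetwork $vnet -AddressPrefix <newAddressPrefix>
+Set-AzVirtualNetwork -VirtualNetwork $vnet
+```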
-6. Select **Save**.
+
-### Commands
+You can change the following subnet settings after the subnet is created:
-| Tool | Command |
-| - | - |
-| Azure CLI | [az network vnet subnet update](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-update) |
-| PowerShell | [Set-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/set-azvirtualnetworksubnetconfig) |
+| Setting | Description |
+| | |
+| **Subnet address range** | If no resources are deployed within the subnet, you can change the address range. If any resources exist in the subnet, you must first either move the resources to another subnet or delete them from the subnet. The steps you take to move or delete a resource vary depending on the resource. To learn how to move or delete resources that are in subnets, read the documentation for each resource type.|
+| **Add IPv6 address space**, **NAT gateway**, **Network security group**, and **Route table** | You can add IPv6, NAT gateway, NSG, or route table support after you create the subnet.|
+| **Service endpoints** | To enable a service endpoint for an existing subnet, ensure that no critical tasks are running on any resource in the subnet. Service endpoints switch routes on every network interface in the subnet. The service endpoints change from using the default route with the `0.0.0.0/0` address prefix and next hop type of `Internet` to using a new route with the address prefix of the service and a next hop type of `VirtualNetworkServiceEndpoint`.<br><br>During the switch, any open TCP connections may be terminated. The service endpoint isn't enabled until traffic to the service for all network interfaces updates with the new route. For more information, see [Virtual network traffic routing](virtual-networks-udr-overview.md).|
+| **Subnet delegation** | You can modify subnet delegation to enable zero or multiple delegations. If a resource for a service is already deployed in the subnet, you can't add or remove subnet delegations until you remove all the resources for the service. To delegate for a different service in the portal, select the service you want to delegate to from the popup list. |
+| **Network policy for private endpoints**| You can change private endpoint network policy after subnet creation.|
## Delete a subnet
-You can delete a subnet only if there are no resources in the subnet. If resources are in the subnet, you must delete those resources before you can delete the subnet. The steps you take to delete a resource vary depending on the resource. To learn how to delete resources that are in subnets, read the documentation for each of those resource types.
-
-1. Go to the [Azure portal](https://portal.azure.com) to view your virtual networks. Search for and select **Virtual networks**.
+# [Portal](#tab/azure-portal)
-2. Select the name of the virtual network containing the subnet you want to delete.
+You can delete a subnet only if there are no resources in the subnet. If resources are in the subnet, you must delete those resources before you can delete the subnet. The steps you take to delete a resource vary depending on the resource. To learn how to delete the resources, see the documentation for each resource type.
-3. From **Settings**, select **Subnets**.
+1. In the [Azure portal](https://portal.azure.com), search for and select *virtual networks*.
+1. On the **Virtual networks** page, select the virtual network you want to delete a subnet from.
+1. On the virtual network's page, select **Subnets** from the left navigation.
+1. On the **Subnets** page, select the subnet you want to delete.
+1. Select **Delete**, and then select **Yes** in the confirmation dialog box.
-4. In the list of subnets, select the subnet you want to delete.
+# [Azure CLI](#tab/azure-cli)
-5. Select **Delete**, and then select **Yes** in the confirmation dialog box.
+Run the [az network vnet subnet delete](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-delete) command.
-### Commands
+```azurecli-interactive
+az network vnet subnet delete --name <subnetName> --resource-group <resourceGroupName> --vnet-name <virtualNetworkName>
+```
-| Tool | Command |
-| - | - |
-| Azure CLI | [az network vnet subnet delete](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-delete) |
-| PowerShell | [Remove-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/remove-azvirtualnetworksubnetconfig?toc=%2fazure%2fvirtual-network%2ftoc.json) |
+# [PowerShell](#tab/azure-powershell)
-## Permissions
+Run the [Remove-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/remove-azvirtualnetworksubnetconfig?toc=%2fazure%2fvirtual-network%2ftoc.json) command and then set the configuration.
-To do tasks on subnets, your account must be assigned to the [Network contributor role](../role-based-access-control/built-in-roles.md?toc=%2fazure%2fvirtual-network%2ftoc.json#network-contributor) or to a [Custom role](../role-based-access-control/custom-roles.md?toc=%2fazure%2fvirtual-network%2ftoc.json) that's assigned the appropriate actions in the following table:
+```azurepowershell-interactive
+$vnet = Get-AzVirtualNetwork -Name <virtualNetworkName> -ResourceGroupName <resourceGroupName>
+Remove-AzVirtualNetworkSubnetConfig -Name <subnetName> -VirtualNetwork $vnet | Set-AzVirtualNetwork
+```
-|Action | Name |
-|-- | -- |
-|Microsoft.Network/virtualNetworks/subnets/read | Read a virtual network subnet |
-|Microsoft.Network/virtualNetworks/subnets/write | Create or update a virtual network subnet |
-|Microsoft.Network/virtualNetworks/subnets/delete | Delete a virtual network subnet |
-|Microsoft.Network/virtualNetworks/subnets/join/action | Join a virtual network |
-|Microsoft.Network/virtualNetworks/subnets/joinViaServiceEndpoint/action | Enable a service endpoint for a subnet |
-|Microsoft.Network/virtualNetworks/subnets/virtualMachines/read | Get the virtual machines in a subnet |
## Next steps

-- Create a virtual network and subnets using [PowerShell](powershell-samples.md) or [Azure CLI](cli-samples.md) sample scripts, or using Azure [Resource Manager templates](template-samples.md)
-- Create and assign [Azure Policy definitions](./policy-reference.md) for virtual networks
+- [Create, change, or delete a virtual network](manage-virtual-network.md)
+- [PowerShell sample scripts](powershell-samples.md)
+- [Azure CLI sample scripts](cli-samples.md)
+- [Azure Resource Manager template samples](template-samples.md)
+- [Azure Policy built-in definitions for Azure Virtual Network](./policy-reference.md)
virtual-network Virtual Network Network Interface https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-network-interface.md
Title: Create, change, or delete an Azure network interface
-description: Learn what a network interface is and how to create, change settings for, and delete one.
+description: Learn how to create, delete, and view and change settings for network interfaces by using the Azure portal, Azure PowerShell, or Azure CLI.
Previously updated : 09/15/2022 Last updated : 03/20/2023 # Create, change, or delete a network interface
-Learn how to create, change settings for, and delete a network interface. A network interface enables an Azure Virtual Machine to communicate with internet, Azure, and on-premises resources. A virtual machine created with the Azure portal, has one network interface with default settings. You may instead choose to create network interfaces with custom settings and add one or more network interfaces to a virtual machine when you create it. You may also want to change default network interface settings for an existing network interface.
+A network interface (NIC) enables an Azure virtual machine (VM) to communicate with internet, Azure, and on-premises resources. This article explains how to create, view and change settings for, and delete a NIC.
-This article explains how to create a network interface with custom settings and change the following existing settings:
+A VM you create in the Azure portal has one NIC with default settings. You can create NICs with custom settings instead, and add one or more NICs to a VM when or after you create it. You can also change settings for an existing NIC.
-* [DNS server settings](#change-dns-servers)
+## Prerequisites
-* [IP Forwarding](#enable-or-disable-ip-forwarding)
+# [Portal](#tab/azure-portal)
-* [Subnet assignment](#change-subnet-assignment)
+You need the following prerequisites:
-* [Application security group](#add-or-remove-from-application-security-groups)
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- An existing Azure virtual network. To create one, see [Quickstart: Create a virtual network by using the Azure portal](quick-create-portal.md).
-* [Network security group](#associate-or-dissociate-a-network-security-group)
+To run the procedures in this article, sign in to the [Azure portal](https://portal.azure.com) with your Azure account. You can replace the placeholders in the examples with your own values.
-* [Network interface deletion](#delete-a-network-interface)
+# [Azure CLI](#tab/azure-cli)
-If you need to add, change, or remove IP addresses for a network interface, see [Manage IP addresses](./ip-services/virtual-network-network-interface-addresses.md). If you need to add network interfaces to, or remove network interfaces from virtual machines, see [Add or remove network interfaces](virtual-network-network-interface-vm.md).
-
-## Prerequisites
+To run the commands in this article, you need the following prerequisites:
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- An existing Azure virtual network. To create one, see [Quickstart: Create a virtual network by using Azure CLI](quick-create-cli.md).
-- An existing Azure Virtual Network. For information about creating an Azure Virtual Network, see [Quickstart: Create a virtual network using the Azure portal](./quick-create-portal.md).
-
- - The example virtual network used in this article is named **myVNet**. Replace the example value with the name of your virtual network.
-
- - The example subnet used in this article is named **myBackendSubnet**. Replace the example value with the name of your subnet.
-
- - The example network interface name used in this article is **myNIC**. Replace the example value with the name of your network interface.
-
+You can run the commands either in the [Azure Cloud Shell](/azure/cloud-shell/overview) or from Azure CLI on your computer.
-- This how-to article requires version 2.31.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+- Azure Cloud Shell is a free interactive shell that has common Azure tools preinstalled and configured to use with your account. To run the commands in the Cloud Shell, select **Open Cloudshell** at the upper-right corner of a code block. Select **Copy** to copy the code, and paste it into Cloud Shell to run it. You can also run the Cloud Shell from within the Azure portal.
-- Azure PowerShell installed locally or Azure Cloud Shell.
+- If you [install Azure CLI locally](/cli/azure/install-azure-cli) to run the commands, you need Azure CLI version 2.31.0 or later. Run [az version](/cli/azure/reference-index?#az-version) to find your installed version, and run [az upgrade](/cli/azure/reference-index?#az-upgrade) to upgrade.
+
+  If you're prompted, install the Azure CLI extension on first use. For more information, see [Use extensions with the Azure CLI](/cli/azure/azure-cli-extensions-overview).
-- Sign in to Azure PowerShell and ensure you've selected the subscription with which you want to use this feature. For more information, see [Sign in with Azure PowerShell](/powershell/azure/authenticate-azureps).
+ Run [az login](/cli/azure/reference-index#az-login) to connect to Azure. For more information, see [Sign in with Azure CLI](/cli/azure/authenticate-azure-cli).
-- Ensure your `Az.Network` module is 4.3.0 or later. To verify the installed module, use the command Get-InstalledModule -Name "Az.Network". If the module requires an update, use the command `Update-Module -Name Az.Network` if necessary.
+In the following procedures, you can replace the example placeholder names with your own values.
-If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+# [PowerShell](#tab/azure-powershell)
-Your account must be assigned to the [network contributor](../role-based-access-control/built-in-roles.md?toc=%2fazure%2fvirtual-network%2ftoc.json#network-contributor) role or to a [custom role](../role-based-access-control/custom-roles.md?toc=%2fazure%2fvirtual-network%2ftoc.json) that is assigned the appropriate actions listed in [Permissions](#permissions).
+To run the commands in this article, you need the following prerequisites:
-## Create a network interface
-
-A virtual machine created with the Azure portal is created with a network interface with default settings. To create a network interface with custom settings and attach to a virtual machine, use PowerShell or the Azure CLI. You can also create a network interface and add it to an existing virtual machine with PowerShell or the Azure CLI.
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- An existing Azure virtual network. To create one, see [Quickstart: Create a virtual network by using Azure PowerShell](quick-create-powershell.md).
-For more information on how to create a virtual machine with an existing network interface or how to add or remove from an existing virtual machine, see [Add or remove network interfaces](virtual-network-network-interface-vm.md).
+You can run the commands either in the [Azure Cloud Shell](/azure/cloud-shell/overview) or from PowerShell on your computer.
-# [**Portal**](#tab/network-interface-portal)
+- Azure Cloud Shell is a free interactive shell that has common Azure tools preinstalled and configured to use with your account. To run the commands in the Cloud Shell, select **Open Cloudshell** at the upper-right corner of a code block. Select **Copy** to copy the code, and paste it into Cloud Shell to run it. You can also run the Cloud Shell from within the Azure portal.
-1. Sign-in to the [Azure portal](https://portal.azure.com).
+- If you [install Azure PowerShell locally](/powershell/azure/install-Az-ps) to run the commands, you need Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find your installed version. If you need to upgrade, see [Update the Azure PowerShell module](/powershell/azure/install-Az-ps#update-the-azure-powershell-module).
-2. In the search box at the top of the portal, enter **Network interface**. Select **Network interfaces** in the search results.
+ Also make sure your `Az.Network` module is 4.3.0 or later. To verify the installed module, use `Get-InstalledModule -Name "Az.Network"`. To update, use the command `Update-Module -Name Az.Network`.
-3. Select **+ Create**.
+ Then run `Connect-AzAccount` to connect to Azure. For more information, see [Sign in with Azure PowerShell](/powershell/azure/authenticate-azureps).
-4. Enter or select the following information in **Create network interface**.
+In the following procedures, you can replace the example placeholder names with your own values.
-| Setting | Value | Details |
-| - | | - |
-| **Project details** | | |
-| Subscription | Select your subscription. | You can only assign a network interface to a virtual network that exists in the same subscription and location as the network interface. |
-| Resource group | Select your resource group or create a new one. The example used in this article is **myResourceGroup**. | A resource group is a logical container for grouping Azure resources. A network interface can exist in the same, or different resource group, than the virtual machine you attach it to, or the virtual network you connect it to.|
-| **Instance details** | | |
-| Name | Enter **myNIC**. | The name must be unique within the resource group you select. Over time, you'll likely have several network interfaces in your Azure subscription. For suggestions when creating a naming convention to make managing several network interfaces easier, see [Naming conventions](/azure/cloud-adoption-framework/ready/azure-best-practices/naming-and-tagging#resource-naming). The name can't be changed after the network interface is created. |
-| Region | Select your region. The example used in this article is **East US 2**. | The Azure region where the network interface is created. |
-| Virtual network | Select **myVNet** or your virtual network. | You can only assign a network interface to a virtual network that exists in the same subscription and location as the network interface. Once a network interface is created, you can't change the virtual network it's assigned to. The virtual machine you add the network interface to must also exist in the same location and subscription as the network interface. |
-| Subnet | Select **myBackendSubnet**. | A subnet within the virtual network you selected. You can change the subnet the network interface is assigned to after it's created. |
-| IP Version | Select **IPv4** or **IPv4 and IPv6**. | You can choose to create the network interface with an IPv4 address or an IPv4 and IPv6 address. The network and subnet used for the virtual network must also have an IPv6 and IPv6 subnet for the IPv6 address to be assigned. An IPv6 configuration is assigned to a secondary IP configuration for the network interface. To learn more about IP configurations, see [View network interface settings](#view-network-interface-settings).|
-| Private IP address assignment | Select **Dynamic** or **Static**. | **Dynamic:** If dynamic is selected, Azure automatically assigns the next available address from the address space of the subnet you selected. </br> **Static:** When selecting this option, you must manually assign an available IP address from within the address space of the subnet you selected. Static and dynamic addresses don't change until you change them or the network interface is deleted. You can change the assignment method after the network interface is created. The Azure DHCP server assigns this address to the network interface within the operating system of the virtual machine. |
+
+### Permissions
-5. Select **Review + create**.
+To work with NICs, your account must be assigned the [network contributor](../role-based-access-control/built-in-roles.md?toc=%2fazure%2fvirtual-network%2ftoc.json#network-contributor) role or a [custom role](../role-based-access-control/custom-roles.md?toc=%2fazure%2fvirtual-network%2ftoc.json) that's assigned the appropriate actions from the following list:
-6. Select **Create**.
+| Action | Name |
+| | - |
+| Microsoft.Network/networkInterfaces/read | Get network interface |
+| Microsoft.Network/networkInterfaces/write | Create or update network interface |
+| Microsoft.Network/networkInterfaces/join/action | Attach a network interface to a virtual machine |
+| Microsoft.Network/networkInterfaces/delete | Delete network interface |
+| Microsoft.Network/networkInterfaces/joinViaPrivateIp/action | Join a resource to a network interface via private IP |
+| Microsoft.Network/networkInterfaces/effectiveRouteTable/action | Get network interface effective route table |
+| Microsoft.Network/networkInterfaces/effectiveNetworkSecurityGroups/action | Get network interface effective security groups |
+| Microsoft.Network/networkInterfaces/loadBalancers/read | Get network interface load balancers |
+| Microsoft.Network/networkInterfaces/serviceAssociations/read | Get service association |
+| Microsoft.Network/networkInterfaces/serviceAssociations/write | Create or update a service association |
+| Microsoft.Network/networkInterfaces/serviceAssociations/delete | Delete service association |
+| Microsoft.Network/networkInterfaces/serviceAssociations/validate/action | Validate service association |
+| Microsoft.Network/networkInterfaces/ipconfigurations/read | Get network interface IP configuration |
-The portal doesn't provide the option to assign a public IP address to the network interface when you create it. The portal does create a public IP address and assign it to a network interface when you create a virtual machine in the portal. To learn how to add a public IP address to the network interface after creating it, see [Manage IP addresses](./ip-services/virtual-network-network-interface-addresses.md). If you want to create a network interface with a public IP address, you must use the Azure CLI, or PowerShell to create the network interface.
+## Create a network interface
-The portal doesn't provide the option to assign the network interface to application security groups when creating a network interface, but the Azure CLI and PowerShell do. You can assign an existing network interface to an application security group using the portal however, as long as the network interface is attached to a virtual machine. To learn how to assign a network interface to an application security group, see [Add to or remove from application security groups](#add-or-remove-from-application-security-groups).
+You can create a NIC in the Azure portal or by using Azure CLI or Azure PowerShell.
-# [**PowerShell**](#tab/network-interface-powershell)
+- The portal doesn't provide the option to assign a public IP address to a NIC when you create it. If you want to create a NIC with a public IP address, use Azure CLI or PowerShell. To add a public IP address to a NIC after you create it, see [Configure IP addresses for an Azure network interface](./ip-services/virtual-network-network-interface-addresses.md).
-In this example, you'll create an Azure Public IP address and associate it with the network interface.
+- The portal does create a NIC with default settings and a public IP address when you create a VM. To create a NIC with custom settings and attach it to a VM, or to add a NIC to an existing VM, use PowerShell or Azure CLI.
-Use [New-AzPublicIpAddress](/powershell/module/az.network/new-azpublicipaddress) to create a primary public IP address.
+- The portal doesn't provide the option to assign a NIC to application security groups when you create the NIC, but Azure CLI and PowerShell do. However, if an existing NIC is attached to a VM, you can use the portal to assign that NIC to an application security group. For more information, see [Add to or remove from application security groups](#add-or-remove-from-application-security-groups).
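+
+For reference, the following Azure CLI sketch assigns an existing NIC's IP configuration to an application security group; the NIC, IP configuration, and group names are hypothetical examples.
+
+```azurecli-interactive
+az network nic ip-config update \
+    --name ipconfig1 \
+    --nic-name myNIC \
+    --resource-group myResourceGroup \
+    --application-security-groups myAsg
+```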
-```azurepowershell-interactive
-$ip = @{
- Name = 'myPublicIP'
- ResourceGroupName = 'myResourceGroup'
- Location = 'eastus2'
- Sku = 'Standard'
- AllocationMethod = 'Static'
- IpAddressVersion = 'IPv4'
- Zone = 1,2,3
-}
-New-AzPublicIpAddress @ip
-```
+To create a NIC, use the following procedure.
-Use [New-AzNetworkInterface](/powershell/module/az.network/new-aznetworkinterface) and [New-AzNetworkInterfaceIpConfig](/powershell/module/az.network/new-aznetworkinterfaceipconfig) to create the network interface for the virtual machine. To create a network interface without the public IP address, omit the **`-PublicIpAddress`** parameter for **`New-AzNetworkInterfaceIPConfig`**.
+# [Portal](#tab/azure-portal)
-```azurepowershell-interactive
-## Place the virtual network into a variable. ##
-$net = @{
- Name = 'myVNet'
- ResourceGroupName = 'myResourceGroup'
-}
-$vnet = Get-AzVirtualNetwork @net
+1. In the [Azure portal](https://portal.azure.com), search for and select *network interfaces*.
+1. On the **Network interfaces** page, select **Create**.
+1. On the **Create network interface** screen, enter or select values for the NIC settings.
-## Place the primary public IP address into a variable. ##
-$pub = @{
- Name = 'myPublicIP'
- ResourceGroupName = 'myResourceGroup'
-}
-$pubIP = Get-AzPublicIPAddress @pub
-
-## Create primary configuration for NIC. ##
-$IP1 = @{
- Name = 'ipconfig1'
- Subnet = $vnet.Subnets[0]
- PrivateIpAddressVersion = 'IPv4'
- PublicIPAddress = $pubIP
-}
-$IP1Config = New-AzNetworkInterfaceIpConfig @IP1 -Primary
+ :::image type="content" source="./media/virtual-network-network-interface/create-network-interface.png" alt-text="Screenshot of the Create network interface screen in the Azure portal.":::
-## Command to create network interface for VM ##
-$nic = @{
- Name = 'myNIC'
- ResourceGroupName = 'myResourceGroup'
- Location = 'eastus2'
- IpConfiguration = $IP1Config
-}
-New-AzNetworkInterface @nic
-```
+1. Select **Review + create**, and when validation passes, select **Create**.
-# [**Azure CLI**](#tab/network-interface-cli)
+# [Azure CLI](#tab/azure-cli)
-In this example, you'll create an Azure Public IP address and associate it with the network interface.
+The following example creates an Azure public IP address and associates it with the NIC.
-Use [az network public-ip create](/cli/azure/network/public-ip#az-network-public-ip-create) to create a primary public IP address.
+1. Use [az network public-ip create](/cli/azure/network/public-ip#az-network-public-ip-create) to create a primary public IP address.
-```azurecli-interactive
- az network public-ip create \
- --resource-group myResourceGroup \
- --name myPublicIP \
- --sku Standard \
- --version IPv4 \
- --zone 1 2 3
-```
+ ```azurecli-interactive
+ az network public-ip create \
+ --resource-group myResourceGroup \
+ --name myPublicIP \
+ --sku Standard \
+ --version IPv4 \
+ --zone 1 2 3
+ ```
-Use [az network nic create](/cli/azure/network/nic#az-network-nic-create) to create the network interface. To create a network interface without the public IP address, omit the **`--public-ip-address`** parameter for **`az network nic create`**.
+1. Use [az network nic create](/cli/azure/network/nic#az-network-nic-create) to create the NIC. To create a NIC without a public IP address, omit the `--public-ip-address` parameter for `az network nic create`.
```azurecli-interactive
az network nic create \
    --public-ip-address myPublicIP
```
+# [PowerShell](#tab/azure-powershell)
+
+The following example creates an Azure public IP address and associates it with the NIC.
+
+1. Use [New-AzPublicIpAddress](/powershell/module/az.network/new-azpublicipaddress) to create a primary public IP address.
+
+ ```azurepowershell-interactive
+ $ip = @{
+ Name = 'myPublicIP'
+ ResourceGroupName = 'myResourceGroup'
+ Location = 'eastus2'
+ Sku = 'Standard'
+ AllocationMethod = 'Static'
+ IpAddressVersion = 'IPv4'
+ Zone = 1,2,3
+ }
+ New-AzPublicIpAddress @ip
+ ```
+
+1. Use [New-AzNetworkInterfaceIpConfig](/powershell/module/az.network/new-aznetworkinterfaceipconfig) and [New-AzNetworkInterface](/powershell/module/az.network/new-aznetworkinterface) to create the NIC. To create a NIC without a public IP address, omit the `-PublicIpAddress` parameter for `New-AzNetworkInterfaceIPConfig`.
+
+ ```azurepowershell-interactive
+ ## Place the virtual network into a variable. ##
+ $net = @{
+ Name = 'myVNet'
+ ResourceGroupName = 'myResourceGroup'
+ }
+ $vnet = Get-AzVirtualNetwork @net
+
+ ## Place the primary public IP address into a variable. ##
+ $pub = @{
+ Name = 'myPublicIP'
+ ResourceGroupName = 'myResourceGroup'
+ }
+ $pubIP = Get-AzPublicIPAddress @pub
+
+ ## Create primary configuration for NIC. ##
+ $IP1 = @{
+ Name = 'ipconfig1'
+ Subnet = $vnet.Subnets[0]
+ PrivateIpAddressVersion = 'IPv4'
+ PublicIPAddress = $pubIP
+ }
+ $IP1Config = New-AzNetworkInterfaceIpConfig @IP1 -Primary
+
+ ## Command to create network interface for VM ##
+ $nic = @{
+ Name = 'myNIC'
+ ResourceGroupName = 'myResourceGroup'
+ Location = 'eastus2'
+ IpConfiguration = $IP1Config
+ }
+ New-AzNetworkInterface @nic
+ ```
+
+You can configure the following settings for a NIC:
+
+| Setting | Value | Details |
+| - | | - |
+| **Subscription** | Select your subscription. | You can assign a NIC only to a virtual network in the same subscription and location.|
+| **Resource group** | Select your resource group or create a new one. | A resource group is a logical container for grouping Azure resources. A NIC can exist in the same or a different resource group from the VM you attach it to or the virtual network you connect it to.|
+| **Name** | Enter a name for the NIC. | The name must be unique within the resource group. For information about creating a naming convention to make managing several NICs easier, see [Resource naming](/azure/cloud-adoption-framework/ready/azure-best-practices/naming-and-tagging#resource-naming). You can't change the name after you create the NIC. |
+| **Region** | Select your region.| The Azure region where you create the NIC. |
+| **Virtual network** | Select your virtual network. | You can assign a NIC only to a virtual network in the same subscription and location as the NIC. Once you create a NIC, you can't change the virtual network it's assigned to. The VM you add the NIC to must also be in the same location and subscription as the NIC. |
+| **Subnet** | Select a subnet within the virtual network you selected. | You can change the subnet the NIC is assigned to after you create the NIC. |
+| **IP version** | Select **IPv4** or<br>**IPv4 and IPv6**. | You can choose to create the NIC with an IPv4 address or IPv4 and IPv6 addresses. To assign an IPv6 address, the network and subnet you use for the NIC must also have an IPv6 address space. An IPv6 configuration is assigned to a secondary IP configuration for the NIC.|
+| **Private IP address assignment** | Select **Dynamic** or **Static**. | The Azure DHCP server assigns the private IP address to the NIC in the VM's operating system.<br><br>- If you select **Dynamic**, Azure automatically assigns the next available address from the address space of the subnet you selected. <br><br>- If you select **Static**, you must manually assign an available IP address from within the address space of the subnet you selected.<br><br>Static and dynamic addresses don't change until you change them or delete the NIC. You can change the assignment method after the NIC is created. |
+ >[!NOTE]
-> Azure assigns a MAC address to the network interface only after the network interface is attached to a virtual machine and the virtual machine is started the first time. You cannot specify the MAC address that Azure assigns to the network interface. The MAC address remains assigned to the network interface until the network interface is deleted or the private IP address assigned to the primary IP configuration of the primary network interface is changed. To learn more about IP addresses and IP configurations, see [Manage IP addresses](./ip-services/virtual-network-network-interface-addresses.md)
+>Azure assigns a MAC address to the NIC only after the NIC is attached to a VM and the VM starts for the first time. You can't specify the MAC address that Azure assigns to the NIC.
+>
+>The MAC address remains assigned to the NIC until the NIC is deleted or the private IP address assigned to the primary IP configuration of the primary NIC changes. For more information, see [Configure IP addresses for an Azure network interface](./ip-services/virtual-network-network-interface-addresses.md).
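After the VM starts, you can read the MAC address that Azure assigned. A quick sketch using the example names from this article:

```azurecli-interactive
# Show the MAC address Azure assigned to the NIC.
# The value is empty until the NIC is attached to a VM that has started.
az network nic show \
  --name myNIC \
  --resource-group myResourceGroup \
  --query macAddress \
  --output tsv
```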
[!INCLUDE [ephemeral-ip-note.md](../../includes/ephemeral-ip-note.md)]

## View network interface settings
-You can view and change most settings for a network interface after it's created. The portal doesn't display the DNS suffix or application security group membership for the network interface. You can use Azure PowerShell or Azure CLI to view the DNS suffix and application security group membership.
-
-# [**Portal**](#tab/network-interface-portal)
+You can view most settings for a NIC after you create it. The portal doesn't display the DNS suffix or application security group membership for the NIC. You can use Azure PowerShell or Azure CLI to view the DNS suffix and application security group membership.
-1. Sign-in to the [Azure portal](https://portal.azure.com).
+# [Portal](#tab/azure-portal)
-2. In the search box at the top of the portal, enter **Network interface**. Select **Network interfaces** in the search results.
+1. In the [Azure portal](https://portal.azure.com), search for and select **Network interfaces**.
+1. On the **Network interfaces** page, select the NIC you want to view.
+1. On the **Overview** page for the NIC, view essential information such as IPv4 and IPv6 IP addresses and network security group (NSG) membership.
-3. Select the network interface you want to view or change settings for from the list.
+ You can select **Edit accelerated networking** to set accelerated networking for NICs. For more information about accelerated networking, see [What is Accelerated Networking?](accelerated-networking-overview.md)
-4. The following items are listed for the network interface you selected:
+ :::image type="content" source="./media/virtual-network-network-interface/nic-overview.png" alt-text="Screenshot of network interface Overview.":::
- - **Overview:** The overview provides essential information about the network interface. IP addresses for IPv4 and IPv6 and network security group membership are displayed. The accelerated networking feature for network interfaces can be set in the overview. For more information about accelerated networking, see [What is Accelerated Networking?](accelerated-networking-overview.md)
-
- The following screenshot displays the overview settings for a network interface named **myNIC**:
+1. Select **IP configurations** in the left navigation, and on the **IP configurations** page, view the **IP forwarding**, **Subnet**, and public and private IPv4 and IPv6 IP configurations. For more information about IP configurations and how to add and remove IP addresses, see [Configure IP addresses for an Azure network interface](./ip-services/virtual-network-network-interface-addresses.md).
- :::image type="content" source="./media/virtual-network-network-interface/nic-overview.png" alt-text="Screenshot of network interface overview.":::
+ :::image type="content" source="./media/virtual-network-network-interface/ip-configurations.png" alt-text="Screenshot of network interface IP configurations.":::
- - **IP configurations:** Public and private IPv4 and IPv6 address assigned to IP configurations are listed. To learn more about IP configurations and how to add and remove IP addresses, see [Configure IP addresses for an Azure network interface](./ip-services/virtual-network-network-interface-addresses.md). IP forwarding and subnet assignment are also configured in this section. To learn more about these settings, see [Enable or disable IP forwarding](#enable-or-disable-ip-forwarding) and [Change subnet assignment](#change-subnet-assignment).
+1. Select **DNS servers** in the left navigation, and on the **DNS servers** page, view any DNS servers that Azure DHCP assigns to the NIC. Also note whether the NIC inherits the setting from the virtual network or has a custom setting that overrides the virtual network setting.
- :::image type="content" source="./media/virtual-network-network-interface/ip-configurations.png" alt-text="Screenshot of network interface IP configurations.":::
+ :::image type="content" source="./media/virtual-network-network-interface/dns-servers.png" alt-text="Screenshot of DNS server configuration.":::
- - **DNS servers:** You can specify which DNS server a network interface is assigned by the Azure DHCP servers. The network interface can inherit the setting from the virtual network or have a custom setting that overrides the setting for the virtual network it's assigned to. To modify what's displayed, see [Change DNS servers](#change-dns-servers).
-
- :::image type="content" source="./media/virtual-network-network-interface/dns-servers.png" alt-text="Screenshot of DNS server configuration.":::
+1. Select **Network security group** from the left navigation, and on the **Network security group** page, see any NSG that's associated to the NIC. An NSG contains inbound and outbound rules to filter network traffic for the NIC.
- - **Network security group (NSG):** Displays which NSG is associated to the network interface. An NSG contains inbound and outbound rules to filter network traffic for the network interface. If an NSG is associated to the network interface, the name of the associated NSG is displayed. To modify what's displayed, see [Associate or dissociate a network security group](#associate-or-dissociate-a-network-security-group).
-
- :::image type="content" source="./media/virtual-network-network-interface/network-security-group.png" alt-text="Screenshot of network security group configuration.":::
+ :::image type="content" source="./media/virtual-network-network-interface/network-security-group.png" alt-text="Screenshot of network security group configuration.":::
- - **Properties:** Displays settings about the network interface, MAC address, and the subscription it exists in. The MAC address is blank if the network interface isn't attached to a virtual machine.
-
- :::image type="content" source="./media/virtual-network-network-interface/nic-properties.png" alt-text="Screenshot of network interface properties.":::
+1. Select **Properties** in the left navigation. On the **Properties** page, view settings for the NIC, such as the MAC address and subscription information. The MAC address is blank if the NIC isn't attached to a VM.
- - **Effective security rules:** Security rules are listed if the network interface is attached to a running virtual machine and associated with a network security group. The network security group can be assigned to the subnet the network interface is assigned to, or both. To learn more about what's displayed, see [View effective security rules](#view-effective-security-rules). To learn more about NSGs, see [Network security groups](./network-security-groups-overview.md).
-
- :::image type="content" source="./media/virtual-network-network-interface/effective-security-rules.png" alt-text="Screenshot of effective security rules.":::
+ :::image type="content" source="./media/virtual-network-network-interface/nic-properties.png" alt-text="Screenshot of network interface properties.":::
- - **Effective routes:** Routes are listed if the network interface is attached to a running virtual machine. The routes are a combination of the Azure default routes, any user-defined routes, and any BGP routes that may exist for the subnet the network interface is assigned to. To learn more about what's displayed, see [View effective routes](#view-effective-routes). To learn more about Azure default routes and user-defined routes, see [Routing overview](virtual-networks-udr-overview.md).
-
- :::image type="content" source="./media/virtual-network-network-interface/effective-routes.png" alt-text="Screenshot of effective routes.":::
+1. Select **Effective security rules** in the left navigation. The **Effective security rules** page lists security rules if the NIC is attached to a running VM and associated with an NSG. For more information about NSGs, see [Network security groups](./network-security-groups-overview.md).
-# [**PowerShell**](#tab/network-interface-powershell)
+ :::image type="content" source="./media/virtual-network-network-interface/effective-security-rules.png" alt-text="Screenshot of effective security rules.":::
-Use [Get-AzNetworkInterface](/powershell/module/az.network/get-aznetworkinterface) to view network interfaces in the subscription or view settings for a network interface.
+1. Select **Effective routes** in the left navigation. The **Effective routes** page lists routes if the NIC is attached to a running VM.
->[!NOTE]
-> Removal of the parameters **`-Name`** and **`-ResourceGroupName`** will return all of the network interfaces in the subscription.
+ The routes are a combination of the Azure default routes, any user-defined routes, and any Border Gateway Protocol (BGP) routes that exist for the subnet the NIC is assigned to. For more information about Azure default routes and user-defined routes, see [Virtual network traffic routing](virtual-networks-udr-overview.md).
-```azurepowershell
-Get-AzNetworkInterface -Name myNIC -ResourceGroupName myResourceGroup
-```
+ :::image type="content" source="./media/virtual-network-network-interface/effective-routes.png" alt-text="Screenshot of effective routes.":::
-# [**Azure CLI**](#tab/network-interface-cli)
+# [Azure CLI](#tab/azure-cli)
-Use [az network nic list](/cli/azure/network/nic#az-network-nic-list) to view network interfaces in the subscription.
+Use [az network nic list](/cli/azure/network/nic#az-network-nic-list) to view all NICs in the subscription.
-```azurecli
+```azurecli-interactive
az network nic list
```
-Use [az network nic show](/cli/azure/network/nic#az-network-nic-show) to view the settings for a network interface.
+Use [az network nic show](/cli/azure/network/nic#az-network-nic-show) to view the settings for a NIC.
-```azurecli
+```azurecli-interactive
az network nic show --name myNIC --resource-group myResourceGroup
```
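Because the portal doesn't display the internal DNS suffix or application security group membership, you can query both with a JMESPath expression. A sketch (the query shape is an assumption; adjust the property paths to your output):

```azurecli-interactive
# Show the internal DNS suffix and any application security group IDs
# for the NIC. The JMESPath expression is illustrative, not from this article.
az network nic show \
  --name myNIC \
  --resource-group myResourceGroup \
  --query "{dnsSuffix: dnsSettings.internalDomainNameSuffix, appSecurityGroups: ipConfigurations[].applicationSecurityGroups[].id}"
```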
+# [PowerShell](#tab/azure-powershell)
-## Change DNS servers
+Use [Get-AzNetworkInterface](/powershell/module/az.network/get-aznetworkinterface) to view NICs in the subscription or view settings for a NIC.
-The DNS server is assigned by the Azure DHCP server to the network interface within the virtual machine operating system. To learn more about name resolution settings for a network interface, see [Name resolution for virtual machines](virtual-networks-name-resolution-for-vms-and-role-instances.md). The network interface can inherit the settings from the virtual network, or use its own unique settings that override the setting for the virtual network.
+>[!NOTE]
+> Remove the `-Name` and `-ResourceGroupName` parameters to return all the NICs in the subscription.
-# [**Portal**](#tab/network-interface-portal)
+```azurepowershell-interactive
+Get-AzNetworkInterface -Name myNIC -ResourceGroupName myResourceGroup
+```
-1. Sign-in to the [Azure portal](https://portal.azure.com).
+
-2. In the search box at the top of the portal, enter **Network interface**. Select **Network interfaces** in the search results.
+## Change network interface settings
-3. Select the network interface you want to view or change settings for from the list.
+You can change most settings for a NIC after you create it.
-4. In **Settings**, select **DNS servers**.
+<a name="change-dns-servers"></a>
+### Add or change DNS servers
-5. Select either:
+Azure DHCP assigns the DNS server to the NIC within the VM operating system. The NIC can inherit the settings from the virtual network, or use its own unique settings that override the setting for the virtual network. For more information about name resolution settings for a NIC, see [Name resolution for virtual machines](virtual-networks-name-resolution-for-vms-and-role-instances.md).
+
+# [Portal](#tab/azure-portal)
+
+1. In the [Azure portal](https://portal.azure.com), search for and select **Network interfaces**.
+1. On the **Network interfaces** page, select the NIC you want to change from the list.
+1. On the NIC's page, select **DNS servers** from the left navigation.
+1. On the **DNS servers** page, select one of the following settings:
- - **Inherit from virtual network**: Choose this option to inherit the DNS server setting defined for the virtual network the network interface is assigned to. At the virtual network level, either a custom DNS server or the Azure-provided DNS server is defined. The Azure-provided DNS server can resolve hostnames for resources assigned to the same virtual network. FQDN must be used to resolve for resources assigned to different virtual networks.
+ - **Inherit from virtual network**: Choose this option to inherit the DNS server setting from the virtual network the NIC is assigned to. Either a custom DNS server or the Azure-provided DNS server is defined at the virtual network level.
- - **Custom**: You can configure your own DNS server to resolve names across multiple virtual networks. Enter the IP address of the server you want to use as a DNS server. The DNS server address you specify is assigned only to this network interface and overrides any DNS setting for the virtual network the network interface is assigned to.
-
- >[!NOTE]
- >If the VM uses a NIC that's part of an availability set, all the DNS servers that are specified for each of the VMs from all NICs that are part of the availability set are inherited.
+   The Azure-provided DNS server can resolve hostnames for resources in the same virtual network. You must use a fully qualified domain name (FQDN) to resolve names for resources in other virtual networks.
+
+ >[!NOTE]
+ >If a VM uses a NIC that's part of an availability set, the DNS servers for all NICs for all VMs that are part of the availability set are inherited.
+
+ - **Custom**: You can configure your own DNS server to resolve names across multiple virtual networks. Enter the IP address of the server you want to use as a DNS server. The DNS server address you specify is assigned only to this NIC and overrides any DNS setting for the virtual network the NIC is assigned to.
-5. Select **Save**.
+1. Select **Save**.
-# [**PowerShell**](#tab/network-interface-powershell)
+# [Azure CLI](#tab/azure-cli)
+
+Use [az network nic update](/cli/azure/network/nic#az-network-nic-update) to change the DNS server setting from inherited to a custom setting. Replace the DNS server IP addresses with your custom IP addresses.
+
+```azurecli-interactive
+az network nic update \
+ --name myNIC \
+ --resource-group myResourceGroup \
+ --dns-servers 192.168.1.100 192.168.1.101
+```
+
+To remove the DNS servers and revert to inheriting the DNS setting from the virtual network, use the following command:
+
+```azurecli-interactive
+az network nic update \
+ --name myNIC \
+ --resource-group myResourceGroup \
+ --dns-servers null
+```
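To verify the result, you can list the DNS servers currently set on the NIC; an empty list means the NIC inherits the virtual network setting. A minimal sketch:

```azurecli-interactive
# List custom DNS servers on the NIC. An empty result means the
# setting is inherited from the virtual network.
az network nic show \
  --name myNIC \
  --resource-group myResourceGroup \
  --query dnsSettings.dnsServers
```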
+
+# [PowerShell](#tab/azure-powershell)
Use [Set-AzNetworkInterface](/powershell/module/az.network/set-aznetworkinterface) to change the DNS server setting from inherited to a custom setting. Replace the DNS server IP addresses with your custom IP addresses.
-```azurepowershell
+```azurepowershell-interactive
## Place the network interface configuration into a variable. ##
$nic = Get-AzNetworkInterface -Name myNIC -ResourceGroupName myResourceGroup
$nic | Set-AzNetworkInterface
To remove the DNS servers and change the setting to inherit from the virtual network, use the following command. Replace the DNS server IP addresses with your custom IP addresses.
-```azurepowershell
+```azurepowershell-interactive
## Place the network interface configuration into a variable. ##
$nic = Get-AzNetworkInterface -Name myNIC -ResourceGroupName myResourceGroup
$nic.DnsSettings.DnsServers.Remove("192.168.1.101")
$nic | Set-AzNetworkInterface
```
-# [**Azure CLI**](#tab/network-interface-cli)
-
-Use [az network nic update](/cli/azure/network/nic#az-network-nic-update) to change the DNS server setting from inherited to a custom setting. Replace the DNS server IP addresses with your custom IP addresses.
-
-```azurecli
-az network nic update \
- --name myNIC \
- --resource-group myResourceGroup \
- --dns-servers 192.168.1.100 192.168.1.101
-```
-
-To remove the DNS servers and change the setting to virtual network setting inheritance, use the following command.
-
-```azurecli
-az network nic update \
- --name myNIC \
- --resource-group myResourceGroup \
- --dns-servers ""
-```
-
-## Enable or disable IP forwarding
+### Enable or disable IP forwarding
-IP forwarding enables the virtual machine network interface to:
+IP forwarding enables a NIC attached to a VM to:
-- Receive network traffic not destined for one of the IP addresses assigned to any of the IP configurations assigned to the network interface.
+- Receive network traffic not destined for any of the IP addresses assigned in any of the NIC's IP configurations.
+- Send network traffic with a different source IP address than is assigned in any of the NIC's IP configurations.
-- Send network traffic with a different source IP address than the one assigned to one of a network interface's IP configurations.
+You must enable IP forwarding for every NIC attached to the VM that needs to forward traffic. A VM can forward traffic whether it has multiple NICs or a single NIC attached to it.
-The setting must be enabled for every network interface that is attached to the virtual machine that receives traffic that the virtual machine needs to forward. A virtual machine can forward traffic whether it has multiple network interfaces or a single network interface attached to it. While IP forwarding is an Azure setting, the virtual machine must also run an application able to forward the traffic, such as firewall, WAN optimization, and load balancing applications.
+IP forwarding is typically used with user-defined routes. For more information, see [User-defined routes](virtual-networks-udr-overview.md).
-When a virtual machine is running network applications, the virtual machine is often referred to as a network virtual appliance. You can view a list of ready to deploy network virtual appliances in the [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/category/networking?page=1&subcategories=appliances). IP forwarding is typically used with user-defined routes. To learn more about user-defined routes, see [User-defined routes](virtual-networks-udr-overview.md).
+While IP forwarding is an Azure setting, the VM must also run an application that's able to forward the traffic, such as a firewall, WAN optimization, or load balancing application. A VM that runs network applications is often called a network virtual appliance (NVA). You can view a list of ready-to-deploy NVAs in the [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps?search=network%20virtual%20appliances).
-# [**Portal**](#tab/network-interface-portal)
+# [Portal](#tab/azure-portal)
-1. Sign-in to the [Azure portal](https://portal.azure.com).
+1. On the NIC's page, select **IP configurations** in the left navigation.
+1. On the **IP configurations** page, under **IP forwarding settings**, select **Enabled** or **Disabled** (the default) to change the setting.
+1. Select **Save**.
-2. In the search box at the top of the portal, enter **Network interface**. Select **Network interfaces** in the search results.
+# [Azure CLI](#tab/azure-cli)
-3. Select the network interface you want to view or change settings for from the list.
+Use [az network nic update](/cli/azure/network/nic#az-network-nic-update) to enable or disable the IP forwarding setting.
-4. In **Settings**, select **IP configurations**.
+To enable IP forwarding, use the following command:
-5. Select **Enabled** or **Disabled** (default setting) to change the setting.
+```azurecli-interactive
+az network nic update \
+ --name myNIC \
+ --resource-group myResourceGroup \
+ --ip-forwarding true
+```
+
+To disable IP forwarding, use the following command:
-6. Select **Save**.
+```azurecli-interactive
+az network nic update \
+ --name myNIC \
+ --resource-group myResourceGroup \
+ --ip-forwarding false
+```
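To confirm the current state after an update, you can query the `enableIpForwarding` property. A minimal sketch:

```azurecli-interactive
# Check whether IP forwarding is currently enabled on the NIC.
az network nic show \
  --name myNIC \
  --resource-group myResourceGroup \
  --query enableIpForwarding \
  --output tsv
```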
-# [**PowerShell**](#tab/network-interface-powershell)
+# [PowerShell](#tab/azure-powershell)
Use [Set-AzNetworkInterface](/powershell/module/az.network/set-aznetworkinterface) to enable or disable the IP forwarding setting.

To enable IP forwarding, use the following command:
-```azurepowershell
+```azurepowershell-interactive
## Place the network interface configuration into a variable. ##
$nic = Get-AzNetworkInterface -Name myNIC -ResourceGroupName myResourceGroup
$nic | Set-AzNetworkInterface
To disable IP forwarding, use the following command:
-```azurepowershell
+```azurepowershell-interactive
## Place the network interface configuration into a variable. ##
$nic = Get-AzNetworkInterface -Name myNIC -ResourceGroupName myResourceGroup
$nic | Set-AzNetworkInterface
```
-# [**Azure CLI**](#tab/network-interface-cli)
-
-Use [az network nic update](/cli/azure/network/nic#az-network-nic-update) to enable or disable the IP forwarding setting.
-
-To enable IP forwarding, use the following command:
-
-```azurecli
-az network nic update \
- --name myNIC \
- --resource-group myResourceGroup \
- --ip-forwarding true
-```
-
-To disable IP forwarding, use the following command:
-
-```azurecli
-az network nic update \
- --name myNIC \
- --resource-group myResourceGroup \
- --ip-forwarding false
-```
-
-## Change subnet assignment
+### Change subnet assignment
-You can change the subnet, but not the virtual network, that a network interface is assigned to.
+You can change the subnet, but not the virtual network, that a NIC is assigned to.
-# [**Portal**](#tab/network-interface-portal)
+# [Portal](#tab/azure-portal)
-1. Sign-in to the [Azure portal](https://portal.azure.com).
+1. On the NIC's page, select **IP configurations** in the left navigation.
+1. On the **IP configurations** page, under **IP configurations**, if any private IP addresses listed have **(Static)** next to them, change the IP address assignment method to dynamic. All private IP addresses must be assigned with the dynamic assignment method to change the subnet assignment for the NIC.
-2. In the search box at the top of the portal, enter **Network interface**. Select **Network interfaces** in the search results.
+ To change the assignment method to dynamic:
-3. Select the network interface you want to view or change settings for from the list.
+ 1. Select the IP configuration you want to change from the list of IP configurations.
+ 1. On the IP configuration page, select **Dynamic** under **Assignment**.
+ 1. Select **Save**.
-4. In **Settings**, select **IP configurations**.
+1. When all private IP addresses are set to **Dynamic**, under **Subnet**, select the subnet you want to move the NIC to.
+1. Select **Save**. New dynamic addresses are assigned from the new subnet's address range.
-5. If any private IP addresses for any IP configurations listed have **(Static)** next to them, you must change the IP address assignment method to dynamic. All private IP addresses must be assigned with the dynamic assignment method to change the subnet assignment for the network interface. Skip to step 6 if your private IPs are set to dynamic.
+After assigning the NIC to a new subnet, you can assign a static IPv4 address from the new subnet address range if you choose. For more information about adding, changing, and removing IP addresses for a NIC, see [Configure IP addresses for an Azure network interface](./ip-services/virtual-network-network-interface-addresses.md).
- Complete the following steps to change the assignment method to dynamic:
-
- - Select the IP configuration you want to change the IPv4 address assignment method for from the list of IP configurations.
-
- - Select **Dynamic** for the private IP address in **Assignment**.
-
- - Select **Save**.
+# [Azure CLI](#tab/azure-cli)
-6. Select the subnet you want to move the network interface to from the **Subnet** drop-down list.
+Use [az network nic ip-config update](/cli/azure/network/nic#az-network-nic-ip-config-update) to change the subnet of the NIC.
-5. Select **Save**.
-
-New dynamic addresses are assigned from the subnet address range for the new subnet. After assigning the network interface to a new subnet, you can assign a static IPv4 address from the new subnet address range if you choose. To learn more about adding, changing, and removing IP addresses for a network interface, see [Manage IP addresses](./ip-services/virtual-network-network-interface-addresses.md).
+```azurecli-interactive
+az network nic ip-config update \
+ --name ipv4config \
+ --nic-name myNIC \
+ --resource-group myResourceGroup \
+ --subnet mySubnet \
+ --vnet-name myVNet
+```
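After the move, you can optionally pin a static IPv4 address from the new subnet's range. A sketch, where 10.0.1.10 is a placeholder that's assumed to be an unassigned address in the new subnet:

```azurecli-interactive
# Assign a static private IPv4 address from the new subnet's range.
# 10.0.1.10 is a placeholder; use an unassigned address in your subnet.
az network nic ip-config update \
  --name ipv4config \
  --nic-name myNIC \
  --resource-group myResourceGroup \
  --private-ip-address 10.0.1.10
```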
-# [**PowerShell**](#tab/network-interface-powershell)
+# [PowerShell](#tab/azure-powershell)
-Use [Set-AzNetworkInterfaceIpConfig](/powershell/module/az.network/set-aznetworkinterfaceipconfig) to change the subnet of the network interface.
+Use [Set-AzNetworkInterfaceIpConfig](/powershell/module/az.network/set-aznetworkinterfaceipconfig) to change the subnet of the NIC.
-```azurepowershell
+```azurepowershell-interactive
## Place the virtual network into a variable. ##
$net = @{
    Name = 'myVNet'
$nic | Set-AzNetworkInterface
```
-# [**Azure CLI**](#tab/network-interface-cli)
-
-Use [az network nic ip-config update](/cli/azure/network/nic#az-network-nic-ip-config-update) to change the subnet of the network interface.
-
-```azurecli
-az network nic ip-config update \
- --name ipv4config \
- --nic-name myNIC \
- --resource-group myResourceGroup \
- --subnet mySubnet \
- --vnet-name myVNet
-```
-
-## Add or remove from application security groups
-
-You can only add a network interface, or remove a network interface from an application security group using the portal if the network interface is attached to a virtual machine.
+### Add or remove from application security groups
-You can use PowerShell or the Azure CLI to add a network interface to, or remove a network interface from an application security group regardless of virtual machine configuration. Learn more about [Application security groups](./network-security-groups-overview.md#application-security-groups) and how to [create an application security group](manage-network-security-group.md).
+You can add NICs only to application security groups in the same virtual network and location as the NIC.
-# [**Portal**](#tab/network-interface-portal)
+You can use the portal to add a NIC to or remove it from an application security group only if the NIC is attached to a VM. Otherwise, use PowerShell or Azure CLI. For more information, see [Application security groups](./network-security-groups-overview.md#application-security-groups) and [How to create an application security group](manage-network-security-group.md).
-1. Sign-in to the [Azure portal](https://portal.azure.com).
+# [Portal](#tab/azure-portal)
-2. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
+To add a NIC to or remove a NIC from an application security group on a VM, use the following procedure:
-3. Select the virtual machine you want to view or change settings for from the list.
+1. In the [Azure portal](https://portal.azure.com), search for and select *virtual machines*.
+1. On the **Virtual machines** page, select the VM you want to configure from the list.
+1. On the VM's page, select **Networking** from the left navigation.
+1. On the **Networking** page, under the **Application security groups** tab, select **Configure the application security groups**.
-4. In **Settings**, select **Networking**.
+ :::image type="content" source="./media/virtual-network-network-interface/application-security-group.png" alt-text="Screenshot of application security group configuration.":::
-5. Select the **Application security groups** tab.
+1. Select the application security groups you want to add the NIC to, or deselect the application security groups you want to remove the NIC from.
+1. Select **Save**.
-6. Select **Configure the application security groups**.
+# [Azure CLI](#tab/azure-cli)
- :::image type="content" source="./media/virtual-network-network-interface/application-security-group.png" alt-text="Screenshot of application security group configuration.":::
-
-7. Select the application security groups that you want to add the network interface to, or unselect the application security groups that you want to remove the network interface from.
+Use [az network nic ip-config update](/cli/azure/network/nic#az-network-nic-ip-config-update) to set the application security group.
-8. Select **Save**.
+```azurecli-interactive
+az network nic ip-config update \
+ --name ipv4config \
+ --nic-name myNIC \
+ --resource-group myResourceGroup \
+ --application-security-groups myASG
+```
-# [**PowerShell**](#tab/network-interface-powershell)
+# [PowerShell](#tab/azure-powershell)
Use [Set-AzNetworkInterfaceIpConfig](/powershell/module/az.network/set-aznetworkinterfaceipconfig) to set the application security group.
-```azurepowershell
+```azurepowershell-interactive
## Place the virtual network into a variable. ##
$net = @{
    Name = 'myVNet'
$nic | Set-AzNetworkInterfaceIpConfig @IP
$nic | Set-AzNetworkInterface
```
-# [**Azure CLI**](#tab/network-interface-cli)
-
-Use [az network nic ip-config update](/cli/azure/network/nic#az-network-nic-ip-config-update) to set the application security group.
-
-```azurecli
-az network nic ip-config update \
- --name ipv4config \
- --nic-name myNIC \
- --resource-group myResourceGroup \
- --application-security-groups myASG
-```
-
-Only network interfaces that exist in the same virtual network can be added to the same application security group. The application security group must exist in the same location as the network interface.
+### Associate or dissociate a network security group
-## Associate or dissociate a network security group
+# [Portal](#tab/azure-portal)
-# [**Portal**](#tab/network-interface-portal)
+1. On the NIC's page, select **Network security group** in the left navigation.
+1. On the **Network security group** page, select the network security group you want to associate, or select **None** to dissociate the NSG.
+1. Select **Save**.
-1. Sign-in to the [Azure portal](https://portal.azure.com).
+# [Azure CLI](#tab/azure-cli)
-2. In the search box at the top of the portal, enter **Network interface**. Select **Network interfaces** in the search results.
+Use [az network nic update](/cli/azure/network/nic#az-network-nic-update) to set the network security group for the NIC.
-3. Select the network interface you want to view or change settings for from the list.
-
-4. In **Settings**, select **Network security group**.
-
-5. Select the network security group in the pull-down box.
-
-6. Select **Save**.
+```azurecli-interactive
+az network nic update \
+ --name myNIC \
+ --resource-group myResourceGroup \
+ --network-security-group myNSG
+```
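The article doesn't show a CLI command to dissociate an NSG. One approach, a sketch that assumes the generic `--remove` argument that `az network nic update` supports:

```azurecli-interactive
# Dissociate the NSG by clearing the networkSecurityGroup property.
# Using the generic --remove argument here is an assumption,
# not a command taken from this article.
az network nic update \
  --name myNIC \
  --resource-group myResourceGroup \
  --remove networkSecurityGroup
```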
-# [**PowerShell**](#tab/network-interface-powershell)
+# [PowerShell](#tab/azure-powershell)
-Use [Set-AzNetworkInterface](/powershell/module/az.network/set-aznetworkinterface) to set the network security group for the network interface.
+Use [Set-AzNetworkInterface](/powershell/module/az.network/set-aznetworkinterface) to set the network security group for the NIC.
-```azurepowershell
+```azurepowershell-interactive
## Place the network interface configuration into a variable. ##
$nic = Get-AzNetworkInterface -Name myNIC -ResourceGroupName myResourceGroup
$nic.NetworkSecurityGroup = $nsg
$nic | Set-AzNetworkInterface
```
-# [**Azure CLI**](#tab/network-interface-cli)
-
-Use [az network nic update](/cli/azure/network/nic#az-network-nic-update) to set the network security group for the network interface.
-
-```azurecli
-az network nic update \
- --name myNIC \
- --resource-group myResourceGroup \
- --network-security-group myNSG
-```
- ## Delete a network interface
-You can delete a network interface if it's not attached to a virtual machine. If a network interface is attached to a virtual machine, you must first place the virtual machine in the stopped (deallocated) state, then detach the network interface from the virtual machine.
-
-To detach a network interface from a virtual machine, complete the steps in [Detach a network interface from a virtual machine](virtual-network-network-interface-vm.md#remove-a-network-interface-from-a-vm). You can't detach a network interface from a virtual machine if it's the only network interface attached to the virtual machine however. A virtual machine must always have at least one network interface attached to it.
-
-# [**Portal**](#tab/network-interface-portal)
-
-1. Sign-in to the [Azure portal](https://portal.azure.com).
+You can delete a NIC if it's not attached to a VM. If the NIC is attached to a VM, you must first stop and deallocate the VM, then detach the NIC.
-2. In the search box at the top of the portal, enter **Network interface**. Select **Network interfaces** in the search results.
+To detach the NIC from the VM, complete the steps in [Remove a network interface from a VM](virtual-network-network-interface-vm.md#remove-a-network-interface-from-a-vm). A VM must always have at least one NIC attached to it, so you can't delete the only NIC from a VM.
-3. Select the network interface you want to view or change settings for from the list.
+# [Portal](#tab/azure-portal)
-4. In **Overview**, select **Delete**.
+To delete a NIC, on the **Overview** page for the NIC you want to delete, select **Delete** from the top menu bar, and then select **Yes**.
-# [**PowerShell**](#tab/network-interface-powershell)
+# [Azure CLI](#tab/azure-cli)
-Use [Remove-AzNetworkInterface](/powershell/module/az.network/remove-aznetworkinterface) to delete the network interface.
+Use [az network nic delete](/cli/azure/network/nic#az-network-nic-delete) to delete the NIC.
-```azurepowershell
-Remove-AzNetworkInterface -Name myNIC -ResourceGroupName myResourceGroup
+```azurecli-interactive
+az network nic delete --name myNIC --resource-group myResourceGroup
```
-# [**Azure CLI**](#tab/network-interface-cli)
+# [PowerShell](#tab/azure-powershell)
-Use [az network nic delete](/cli/azure/network/nic#az-network-nic-delete) to delete the network interface.
+Use [Remove-AzNetworkInterface](/powershell/module/az.network/remove-aznetworkinterface) to delete the NIC.
-```azurecli
-az network nic delete --name myNIC --resource-group myResourceGroup
+```azurepowershell-interactive
+Remove-AzNetworkInterface -Name myNIC -ResourceGroupName myResourceGroup
```

## Resolve connectivity issues
-If you're experiencing communication problems with a virtual machine, network security group rules or effective routes may be causing the problem. You have the following options to help resolve the issue:
+If you have communication problems with a VM, network security group rules or effective routes might be the cause. Use the following options to help resolve the issue.
### View effective security rules
-The effective security rules for each network interface attached to a virtual machine are a combination of the rules you've created in a network security group and [default security rules](./network-security-groups-overview.md#default-security-rules). Understanding the effective security rules for a network interface may help you determine why you're unable to communicate to or from a virtual machine. You can view the effective rules for any network interface that is attached to a running virtual machine.
-
-# [**Portal**](#tab/network-interface-portal)
-
-1. Sign-in to the [Azure portal](https://portal.azure.com).
+The effective security rules for each NIC attached to a VM are a combination of the rules you created in an NSG and [default security rules](./network-security-groups-overview.md#default-security-rules). Understanding the effective security rules for a NIC might help you determine why you're unable to communicate to or from a VM. You can view the effective rules for any NIC that's attached to a running VM.
-2. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
+# [Portal](#tab/azure-portal)
-3. Select the virtual machine you want to view or change settings for from the list.
+1. In the [Azure portal](https://portal.azure.com), search for and select *virtual machines*.
+1. On the **Virtual machines** page, select the VM you want to view settings for.
+1. On the VM page, select **Networking** from the left navigation.
+1. On the **Networking** page, select the name of the network interface.
+1. On the NIC's page, select **Effective security rules** under **Help** in the left navigation.
+1. Review the list of effective security rules to determine if the rules are correct for your required inbound and outbound communications. For more information about security rules, see [Network security group overview](network-security-groups-overview.md).
-4. In **Settings**, select **Networking**.
+# [Azure CLI](#tab/azure-cli)
-5. Select the name of the network interface.
-
-6. Select **Effective security rules**.
+Use [az network nic list-effective-nsg](/cli/azure/network/nic#az-network-nic-list-effective-nsg) to view the list of effective security rules.
-7. Review the list of effective security rules to determine if the correct rules exist for your required inbound and outbound communication. For more information about security rules, see [Network security group overview](./network-security-groups-overview.md).
+```azurecli-interactive
+az network nic list-effective-nsg --name myNIC --resource-group myResourceGroup
+```
-# [**PowerShell**](#tab/network-interface-powershell)
+# [PowerShell](#tab/azure-powershell)
Use [Get-AzEffectiveNetworkSecurityGroup](/powershell/module/az.network/get-azeffectivenetworksecuritygroup) to view the list of effective security rules.
-```azurepowershell
+```azurepowershell-interactive
Get-AzEffectiveNetworkSecurityGroup -NetworkInterfaceName myNIC -ResourceGroupName myResourceGroup
```
-# [**Azure CLI**](#tab/network-interface-cli)
-
-Use [az network nic list-effective-nsg](/cli/azure/network/nic#az-network-nic-list-effective-nsg) to view the list of effective security rules.
-
-```azurecli
-az network nic list-effective-nsg --name myNIC --resource-group myResourceGroup
-```
- ### View effective routes
-The effective routes for the network interface or interfaces attached to a virtual machine are a combination of:
+The effective routes for the NIC or NICs attached to a VM are a combination of:
- Default routes
+- User-defined routes
+- Routes propagated from on-premises networks via BGP through an Azure virtual network gateway
-- User created routes
+Understanding the effective routes for a NIC might help you determine why you can't communicate with a VM. You can view the effective routes for any NIC that's attached to a running VM.
-- Routes propagated from on-premises networks via BGP through an Azure virtual network gateway.
+# [Portal](#tab/azure-portal)
-Understanding the effective routes for a network interface may help you determine why you're unable to communicate to or from a virtual machine. You can view the effective routes for any network interface that is attached to a running virtual machine.
+1. On the page for the NIC that's attached to the VM, select **Effective routes** under **Help** in the left navigation.
+1. Review the list of effective routes to see if the routes are correct for your required inbound and outbound communications. For more information about routing, see [Routing overview](virtual-networks-udr-overview.md).
-# [**Portal**](#tab/network-interface-portal)
+# [Azure CLI](#tab/azure-cli)
-1. Sign-in to the [Azure portal](https://portal.azure.com).
-
-2. In the search box at the top of the portal, enter **Network interface**. Select **Network interfaces** in the search results.
-
-3. Select the network interface you want to view or change settings for from the list.
-
-4. In **Help**, select **Effective routes**.
+Use [az network nic show-effective-route-table](/cli/azure/network/nic#az-network-nic-show-effective-route-table) to view a list of the effective routes.
-5. Review the list of effective routes to determine if the correct routes exist for your required inbound and outbound communication. For more information about routing, see [Routing overview](virtual-networks-udr-overview.md).
+```azurecli-interactive
+az network nic show-effective-route-table --name myNIC --resource-group myResourceGroup
+```
-# [**PowerShell**](#tab/network-interface-powershell)
+# [PowerShell](#tab/azure-powershell)
Use [Get-AzEffectiveRouteTable](/powershell/module/az.network/get-azeffectiveroutetable) to view a list of the effective routes.
-```azurepowershell
+```azurepowershell-interactive
Get-AzEffectiveRouteTable -NetworkInterfaceName myNIC -ResourceGroupName myResourceGroup
```
-# [**Azure CLI**](#tab/network-interface-cli)
-
-Use [az network nic show-effective-route-table](/cli/azure/network/nic#az-network-nic-show-effective-route-table) to view a list of the effective routes.
-
-```azurecli
-az network nic show-effective-route-table --name myNIC --resource-group myResourceGroup
-```
-
-The next hop feature of Azure Network Watcher can also help you determine if routes are preventing communication between a virtual machine and an endpoint. To learn more, see [Next hop](../network-watcher/diagnose-vm-network-routing-problem.md?toc=%2fazure%2fvirtual-network%2ftoc.json).
-
-## Permissions
-
-To perform tasks on network interfaces, your account must be assigned to the [network contributor](../role-based-access-control/built-in-roles.md?toc=%2fazure%2fvirtual-network%2ftoc.json#network-contributor) role or to a [custom](../role-based-access-control/custom-roles.md?toc=%2fazure%2fvirtual-network%2ftoc.json) role that is assigned the appropriate permissions listed in the following table:
-
-| Action | Name |
-| | - |
-| Microsoft.Network/networkInterfaces/read | Get network interface |
-| Microsoft.Network/networkInterfaces/write | Create or update network interface |
-| Microsoft.Network/networkInterfaces/join/action | Attach a network interface to a virtual machine |
-| Microsoft.Network/networkInterfaces/delete | Delete network interface |
-| Microsoft.Network/networkInterfaces/joinViaPrivateIp/action | Join a resource to a network interface via private ip |
-| Microsoft.Network/networkInterfaces/effectiveRouteTable/action | Get network interface effective route table |
-| Microsoft.Network/networkInterfaces/effectiveNetworkSecurityGroups/action | Get network interface effective security groups |
-| Microsoft.Network/networkInterfaces/loadBalancers/read | Get network interface load balancers |
-| Microsoft.Network/networkInterfaces/serviceAssociations/read | Get service association |
-| Microsoft.Network/networkInterfaces/serviceAssociations/write | Create or update a service association |
-| Microsoft.Network/networkInterfaces/serviceAssociations/delete | Delete service association |
-| Microsoft.Network/networkInterfaces/serviceAssociations/validate/action | Validate service association |
-| Microsoft.Network/networkInterfaces/ipconfigurations/read | Get network interface IP configuration |
+The next hop feature of Azure Network Watcher can also help you determine if routes are preventing communication between a VM and an endpoint. For more information, see [Tutorial: Diagnose a virtual machine network routing problem by using the Azure portal](/azure/network-watcher/diagnose-vm-network-routing-problem).
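For example, a hedged sketch of the next hop check with the Azure CLI, where the VM name and both IP addresses are placeholders:

```azurecli-interactive
# Ask Network Watcher which next hop Azure uses for traffic from the VM
# to a destination IP. The VM name and IP addresses are placeholders.
az network watcher show-next-hop \
  --resource-group myResourceGroup \
  --vm myVM \
  --source-ip 10.0.0.4 \
  --dest-ip 13.107.21.200
```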
## Next steps

-- Create a VM with multiple NICs using the [Azure CLI](../virtual-machines/linux/multiple-nics.md?toc=%2fazure%2fvirtual-network%2ftoc.json) or [PowerShell](../virtual-machines/windows/multiple-nics.md?toc=%2fazure%2fvirtual-network%2ftoc.json)
-
-- Create a single NIC VM with multiple IPv4 addresses using the [Azure CLI](./ip-services/virtual-network-multiple-ip-addresses-cli.md) or [PowerShell](./ip-services/virtual-network-multiple-ip-addresses-powershell.md)
+For other network interface tasks, see the following articles:
-- Create a single NIC VM with a private IPv6 address (behind an Azure Load Balancer) using the [Azure CLI](../load-balancer/load-balancer-ipv6-internet-cli.md?toc=%2fazure%2fvirtual-network%2ftoc.json), [PowerShell](../load-balancer/load-balancer-ipv6-internet-ps.md?toc=%2fazure%2fvirtual-network%2ftoc.json), or [Azure Resource Manager template](../load-balancer/load-balancer-ipv6-internet-template.md?toc=%2fazure%2fvirtual-network%2ftoc.json)
+|Task|Article|
+|-|-|
+|Add, change, or remove IP addresses for a network interface.|[Configure IP addresses for an Azure network interface](./ip-services/virtual-network-network-interface-addresses.md)|
+|Add or remove network interfaces for VMs.|[Add network interfaces to or remove network interfaces from virtual machines](virtual-network-network-interface-vm.md)|
+|Create a VM with multiple NICs.|- [How to create a Linux virtual machine in Azure with multiple network interface cards](/azure/virtual-machines/linux/multiple-nics?toc=%2fazure%2fvirtual-network%2ftoc.json)<br>- [Create and manage a Windows virtual machine that has multiple NICs](/azure/virtual-machines/windows/multiple-nics)|
+|Create a single NIC VM with multiple IPv4 addresses.|- [Assign multiple IP addresses to virtual machines by using the Azure CLI](./ip-services/virtual-network-multiple-ip-addresses-cli.md)<br>- [Assign multiple IP addresses to virtual machines by using Azure PowerShell](./ip-services/virtual-network-multiple-ip-addresses-powershell.md)|
+|Create a single NIC VM with a private IPv6 address behind Azure Load Balancer.|- [Create a public load balancer with IPv6 by using Azure CLI](/azure/load-balancer/load-balancer-ipv6-internet-cli?toc=%2fazure%2fvirtual-network%2ftoc.json)<br>- [Create an internet facing load balancer with IPv6 by using PowerShell](/azure/load-balancer/load-balancer-ipv6-internet-ps?toc=%2fazure%2fvirtual-network%2ftoc.json)<br>- [Deploy an internet-facing load-balancer solution with IPv6 by using a template](/azure/load-balancer/load-balancer-ipv6-internet-template?toc=%2fazure%2fvirtual-network%2ftoc.json)|
virtual-network Virtual Network Nsg Manage Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-nsg-manage-log.md
Title: Diagnostic resource logging for a network security group-+ description: Learn how to enable event and rule counter diagnostic resource logs for an Azure network security group.
Previously updated : 06/04/2018 Last updated : 03/22/2023 ms.devlang: azurecli
ms.devlang: azurecli
# Resource logging for a network security group
-A network security group (NSG) includes rules that allow or deny traffic to a virtual network subnet, network interface, or both.
+A network security group (NSG) includes rules that allow or deny traffic to a virtual network subnet, network interface, or both.
When you enable logging for an NSG, you can gather the following types of resource log information:
-* **Event:** Entries are logged for which NSG rules are applied to VMs, based on MAC address.
-* **Rule counter:** Contains entries for how many times each NSG rule is applied to deny or allow traffic. The status for these rules is collected every 300 seconds.
+- **Event:** Entries are logged to show which NSG rules are applied to virtual machines, based on MAC address.
+- **Rule counter:** Contains entries for how many times each NSG rule is applied to allow or deny traffic. The status for these rules is collected every 300 seconds.
-Resource logs are only available for NSGs deployed through the Azure Resource Manager deployment model. You cannot enable resource logging for NSGs deployed through the classic deployment model. For a better understanding of the two models, see [Understanding Azure deployment models](../azure-resource-manager/management/deployment-models.md?toc=%2fazure%2fvirtual-network%2ftoc.json).
+Resource logs are only available for NSGs deployed through the Azure Resource Manager deployment model. You can't enable resource logging for NSGs deployed through the classic deployment model. For more information, see [Understand deployment models](../azure-resource-manager/management/deployment-models.md).
-Resource logging is enabled separately for *each* NSG you want to collect diagnostic data for. If you're interested in activity (operational) logs instead, see Azure [activity logging](../azure-monitor/essentials/platform-logs-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json). If you're interested in IP traffic flowing through NSGs see Azure Network Watcher [NSG Flow logs](../network-watcher/network-watcher-nsg-flow-logging-overview.md)
+Resource logging is enabled separately for *each* NSG for which to collect diagnostic data. If you're interested in *activity*, or *operational*, logs instead, see [Overview of Azure platform logs](../azure-monitor/essentials/platform-logs-overview.md). If you're interested in IP traffic flowing through NSGs, see [Flow logs for network security groups](../network-watcher/network-watcher-nsg-flow-logging-overview.md).
## Enable logging
-You can use the [Azure portal](#azure-portal), [PowerShell](#powershell), or the [Azure CLI](#azure-cli) to enable resource logging.
+You can use the [Azure portal](#azure-portal), [Azure PowerShell](#azure-powershell), or the [Azure CLI](#azure-cli) to enable resource logging.
### Azure portal
-1. Sign in to the [portal](https://portal.azure.com).
-2. Select **All services**, then type *network security groups*. When **Network security groups** appear in the search results, select it.
-3. Select the NSG you want to enable logging for.
-4. Under **MONITORING**, select **Diagnostics logs**, and then select **Turn on diagnostics**, as shown in the following picture:
+1. Sign in to [the Azure portal](https://portal.azure.com).
+1. In the search box at the top of the Azure portal, enter *network security groups*. Select **Network security groups** in the search results.
+1. Select the NSG for which to enable logging.
+1. Under **Monitoring**, select **Diagnostic settings**, and then select **Add diagnostic setting**:
- ![Turn on diagnostics](./media/virtual-network-nsg-manage-log/turn-on-diagnostics.png)
+ :::image type="content" source="./media/virtual-network-nsg-manage-log/turn-on-diagnostics.png" alt-text="Screenshot shows the diagnostic settings for an NSG with Add diagnostic setting highlighted." lightbox="./media/virtual-network-nsg-manage-log/turn-on-diagnostics.png":::
-5. Under **Diagnostics settings**, enter, or select the following information, and then select **Save**:
+1. In **Diagnostic setting**, enter a name, such as *myNsgDiagnostic*.
+1. For **Logs**, select **allLogs** or select individual categories of logs. For more information about each category, see [Log categories](#log-categories).
+1. Under **Destination details**, select one or more destinations:
- | Setting | Value |
- | | |
- | Name | A name of your choosing. For example: *myNsgDiagnostics* |
- | **Archive to a storage account**, **Stream to an event hub**, and **Send to Log Analytics** | You can select as many destinations as you choose. To learn more about each, see [Log destinations](#log-destinations). |
- | LOG | Select either, or both log categories. To learn more about the data logged for each category, see [Log categories](#log-categories). |
-6. View and analyze logs. For more information, see [View and analyze logs](#view-and-analyze-logs).
+ - Send to Log Analytics workspace
+ - Archive to a storage account
+ - Stream to an event hub
+ - Send to partner solution
-### PowerShell
+ For more information, see [Log destinations](#log-destinations).
+
+1. Select **Save**.
+
+1. View and analyze logs. For more information, see [View and analyze logs](#view-and-analyze-logs).
+
+### Azure PowerShell
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
-You can run the commands that follow in the [Azure Cloud Shell](https://shell.azure.com/powershell), or by running PowerShell from your computer. The Azure Cloud Shell is a free interactive shell. It has common Azure tools preinstalled and configured to use with your account. If you run PowerShell from your computer, you need the Azure PowerShell module, version 1.0.0 or later. Run `Get-Module -ListAvailable Az` on your computer, to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to sign in to Azure with an account that has the [necessary permissions](virtual-network-network-interface.md#permissions).
+You can run the commands in this section in the [Azure Cloud Shell](https://shell.azure.com/powershell), or by running PowerShell from your computer. The Azure Cloud Shell is a free interactive shell. It has common Azure tools preinstalled and configured to use with your account.
+
+If you run PowerShell from your computer, you need the Azure PowerShell module, version 1.0.0 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you run PowerShell locally, you also need to run the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet to sign in to Azure with an account that has the [necessary permissions](virtual-network-network-interface.md#permissions).
-To enable resource logging, you need the ID of an existing NSG. If you don't have an existing NSG, you can create one with [New-AzNetworkSecurityGroup](/powershell/module/az.network/new-aznetworksecuritygroup).
+To enable resource logging, you need the ID of an existing NSG. If you don't have an existing NSG, create one by using the [New-AzNetworkSecurityGroup](/powershell/module/az.network/new-aznetworksecuritygroup) cmdlet.
-Retrieve the network security group that you want to enable resource logging for with [Get-AzNetworkSecurityGroup](/powershell/module/az.network/get-aznetworksecuritygroup). For example, to retrieve an NSG named *myNsg* that exists in a resource group named *myResourceGroup*, enter the following command:
+Get the network security group that you want to enable resource logging for by using the [Get-AzNetworkSecurityGroup](/powershell/module/az.network/get-aznetworksecuritygroup) cmdlet. Store the NSG in a variable for later use. For example, to retrieve an NSG named *myNsg* that exists in a resource group named *myResourceGroup*, enter the following command:
```azurepowershell-interactive
$Nsg=Get-AzNetworkSecurityGroup `
  -Name myNsg `
  -ResourceGroupName myResourceGroup
```
-You can write resource logs to three destination types. For more information, see [Log destinations](#log-destinations). In this article, logs are sent to the *Log Analytics* destination, as an example. Retrieve an existing Log Analytics workspace with [Get-AzOperationalInsightsWorkspace](/powershell/module/az.operationalinsights/get-azoperationalinsightsworkspace). For example, to retrieve an existing workspace named *myWorkspace* in a resource group named *myWorkspaces*, enter the following command:
+You can write resource logs to different destination types. For more information, see [Log destinations](#log-destinations). In this article, logs are sent to a *Log Analytics workspace* destination. If you don't have an existing workspace, you can create one by using the [New-AzOperationalInsightsWorkspace](/powershell/module/az.operationalinsights/new-azoperationalinsightsworkspace) cmdlet.
+
+Retrieve an existing Log Analytics workspace with the [Get-AzOperationalInsightsWorkspace](/powershell/module/az.operationalinsights/get-azoperationalinsightsworkspace) cmdlet. For example, to get and store an existing workspace named *myWorkspace* in a resource group named *myWorkspaces*, enter the following command:
```azurepowershell-interactive
$Oms=Get-AzOperationalInsightsWorkspace `
  -ResourceGroupName myWorkspaces `
  -Name myWorkspace
```
-If you don't have an existing workspace, you can create one with [New-AzOperationalInsightsWorkspace](/powershell/module/az.operationalinsights/new-azoperationalinsightsworkspace).
-
-There are two categories of logging you can enable logs for. For more information, see [Log categories](#log-categories). Enable resource logging for the NSG with [Set-AzDiagnosticSetting](/powershell/module/az.monitor/set-azdiagnosticsetting). The following example logs both event and counter category data to the workspace for an NSG, using the IDs for the NSG and workspace you retrieved previously:
+There are two categories of logging that you can enable. For more information, see [Log categories](#log-categories). Enable resource logging for the NSG with the [New-AzDiagnosticSetting](/powershell/module/az.monitor/new-azdiagnosticsetting) cmdlet. The following example logs both event and counter category data to the workspace for an NSG. It uses the IDs for the NSG and workspace that you got with the previous commands:
```azurepowershell-interactive
-Set-AzDiagnosticSetting `
- -ResourceId $Nsg.Id `
- -WorkspaceId $Oms.ResourceId `
- -Enabled $true
+New-AzDiagnosticSetting `
+ -Name myDiagnosticSetting `
+ -ResourceId $Nsg.Id `
+ -WorkspaceId $Oms.ResourceId
```
-If you only want to log data for one category or the other, rather than both, add the `-Categories` option to the previous command, followed by *NetworkSecurityGroupEvent* or *NetworkSecurityGroupRuleCounter*. If you want to log to a different [destination](#log-destinations) than a Log Analytics workspace, use the appropriate parameters for an Azure [Storage account](../azure-monitor/essentials/resource-logs.md?toc=%2fazure%2fvirtual-network%2ftoc.json#send-to-azure-storage) or [Event Hubs](../azure-monitor/essentials/resource-logs.md?toc=%2fazure%2fvirtual-network%2ftoc.json#send-to-azure-event-hubs).
+If you want to log to a different [destination](#log-destinations) than a Log Analytics workspace, use an appropriate parameter in the command. For more information, see [Azure resource logs](../azure-monitor/essentials/resource-logs.md).
+
+For more information about settings, see [New-AzDiagnosticSetting](/powershell/module/az.monitor/new-azdiagnosticsetting).
View and analyze logs. For more information, see [View and analyze logs](#view-and-analyze-logs).

### Azure CLI
-You can run the commands that follow in the [Azure Cloud Shell](https://shell.azure.com/bash), or by running the Azure CLI from your computer. The Azure Cloud Shell is a free interactive shell. It has common Azure tools preinstalled and configured to use with your account. If you run the CLI from your computer, you need version 2.0.38 or later. Run `az --version` on your computer, to find the installed version. If you need to upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). If you are running the CLI locally, you also need to run `az login` to sign in to Azure with an account that has the [necessary permissions](virtual-network-network-interface.md#permissions).
+You can run the commands in this section in the [Azure Cloud Shell](https://shell.azure.com/bash), or by running the Azure CLI from your computer. The Azure Cloud Shell is a free interactive shell. It has common Azure tools preinstalled and configured to use with your account.
+
+If you run the CLI from your computer, you need version 2.0.38 or later. Run `az --version` on your computer to find the installed version. If you need to upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). If you run the CLI locally, you also need to run `az login` to sign in to Azure with an account that has the [necessary permissions](virtual-network-network-interface.md#permissions).
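For example, a minimal version check and sign-in sequence for a locally installed CLI might look like this:

```azurecli-interactive
# Check the installed Azure CLI version.
az --version

# Sign in when running the CLI locally (not needed in the Cloud Shell).
az login
```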
-To enable resource logging, you need the ID of an existing NSG. If you don't have an existing NSG, you can create one with [az network nsg create](/cli/azure/network/nsg#az-network-nsg-create).
+To enable resource logging, you need the ID of an existing NSG. If you don't have an existing NSG, create one by using [az network nsg create](/cli/azure/network/nsg#az-network-nsg-create).
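As an illustration, here's a minimal sketch that creates a test NSG; the names *myNsg* and *myResourceGroup* are placeholders used throughout this section:

```azurecli-interactive
# Create an NSG to test resource logging with.
az network nsg create \
  --name myNsg \
  --resource-group myResourceGroup
```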
-Retrieve the network security group that you want to enable resource logging for with [az network nsg show](/cli/azure/network/nsg#az-network-nsg-show). For example, to retrieve an NSG named *myNsg* that exists in a resource group named *myResourceGroup*, enter the following command:
+Get and store the network security group that you want to enable resource logging for with [az network nsg show](/cli/azure/network/nsg#az-network-nsg-show). For example, to retrieve an NSG named *myNsg* that exists in a resource group named *myResourceGroup*, enter the following command:
```azurecli-interactive
nsgId=$(az network nsg show \
  --name myNsg \
  --resource-group myResourceGroup \
  --query id \
  --output tsv)
```
-You can write resource logs to three destination types. For more information, see [Log destinations](#log-destinations). In this article, logs are sent to the *Log Analytics* destination, as an example. For more information, see [Log categories](#log-categories).
+You can write resource logs to different destination types. For more information, see [Log destinations](#log-destinations). In this article, logs are sent to a *Log Analytics workspace* destination, as an example. For more information, see [Log categories](#log-categories).
-Enable resource logging for the NSG with [az monitor diagnostic-settings create](/cli/azure/monitor/diagnostic-settings#az-monitor-diagnostic-settings-create). The following example logs both event and counter category data to an existing workspace named *myWorkspace*, which exists in a resource group named *myWorkspaces*, and the ID of the NSG you retrieved previously:
+Enable resource logging for the NSG with [az monitor diagnostic-settings create](/cli/azure/monitor/diagnostic-settings#az-monitor-diagnostic-settings-create). The following example logs both event and counter category data to an existing workspace named *myWorkspace*, which exists in a resource group named *myWorkspaces*. It uses the ID of the NSG that you saved by using the previous command.
```azurecli-interactive
az monitor diagnostic-settings create \
  --name myNsgDiagnostics \
  --resource $nsgId \
  --logs '[{"category": "NetworkSecurityGroupEvent", "enabled": true}, {"category": "NetworkSecurityGroupRuleCounter", "enabled": true}]' \
  --workspace myWorkspace \
  --resource-group myWorkspaces
```
-If you don't have an existing workspace, you can create one using the [Azure portal](../azure-monitor/logs/quick-create-workspace.md?toc=%2fazure%2fvirtual-network%2ftoc.json) or [PowerShell](/powershell/module/az.operationalinsights/new-azoperationalinsightsworkspace). There are two categories of logging you can enable logs for.
+If you don't have an existing workspace, create one using the [Azure portal](../azure-monitor/logs/quick-create-workspace.md) or [Azure PowerShell](/powershell/module/az.operationalinsights/new-azoperationalinsightsworkspace). There are two categories of logging for which you can enable logs.
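You can also create the workspace directly from the Azure CLI. A minimal sketch, assuming the same example names used earlier:

```azurecli-interactive
# Create a Log Analytics workspace to receive the NSG logs.
az monitor log-analytics workspace create \
  --resource-group myWorkspaces \
  --workspace-name myWorkspace
```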
-If you only want to log data for one category or the other, remove the category you don't want to log data for in the previous command. If you want to log to a different [destination](#log-destinations) than a Log Analytics workspace, use the appropriate parameters for an Azure [Storage account](../azure-monitor/essentials/resource-logs.md?toc=%2fazure%2fvirtual-network%2ftoc.json#send-to-azure-storage) or [Event Hubs](../azure-monitor/essentials/resource-logs.md?toc=%2fazure%2fvirtual-network%2ftoc.json#send-to-azure-event-hubs).
+If you only want to log data for one category or the other, remove the category you don't want to log data for in the previous command. If you want to log to a different [destination](#log-destinations) than a Log Analytics workspace, use an appropriate parameter. For more information, see [Azure resource logs](../azure-monitor/essentials/resource-logs.md).
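As an illustration, the following sketch logs only the rule counter category and archives it to a storage account instead; the account name *mynsglogs* is a hypothetical placeholder:

```azurecli-interactive
# Get the ID of the storage account that receives the logs.
storageId=$(az storage account show \
  --name mynsglogs \
  --resource-group myResourceGroup \
  --query id \
  --output tsv)

# Log only the rule counter category to the storage account.
az monitor diagnostic-settings create \
  --name myNsgDiagnostics \
  --resource $nsgId \
  --storage-account $storageId \
  --logs '[{"category": "NetworkSecurityGroupRuleCounter", "enabled": true}]'
```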
View and analyze logs. For more information, see [View and analyze logs](#view-and-analyze-logs).

## Log destinations
-Diagnostics data can be:
-- [Written to an Azure Storage account](../azure-monitor/essentials/resource-logs.md?toc=%2fazure%2fvirtual-network%2ftoc.json#send-to-azure-storage), for auditing or manual inspection. You can specify the retention time (in days) using resource diagnostic settings.
-- [Streamed to an Event hub](../azure-monitor/essentials/resource-logs.md?toc=%2fazure%2fvirtual-network%2ftoc.json#send-to-azure-event-hubs) for ingestion by a third-party service, or custom analytics solution, such as Power BI.
-- [Written to Azure Monitor logs](../azure-monitor/essentials/resource-logs.md?toc=%2fazure%2fvirtual-network%2ftoc.json#send-to-azure-storage).
+You can send diagnostics data to the following options:
+
+- [Log Analytics workspace](../azure-monitor/essentials/resource-logs.md#send-to-log-analytics-workspace)
+- [Azure Event Hubs](../azure-monitor/essentials/resource-logs.md#send-to-azure-event-hubs)
+- [Azure Storage](../azure-monitor/essentials/resource-logs.md#send-to-azure-storage)
+- [Azure Monitor partner integrations](../azure-monitor/essentials/resource-logs.md#azure-monitor-partner-integrations)
## Log categories
-JSON-formatted data is written for the following log categories:
+JSON-formatted data is written for the following log categories: event and rule counter.
### Event
-The event log contains information about which NSG rules are applied to VMs, based on MAC address. The following data is logged for each event. In the following example, the data is logged for a virtual machine with the IP address 192.168.1.4 and a MAC address of 00-0D-3A-92-6A-7C:
+The event log contains information about which NSG rules are applied to virtual machines, based on MAC address. The following data is logged for each event. In the following example, the data is logged for a virtual machine with the IP address 192.168.1.4 and a MAC address of 00-0D-3A-92-6A-7C:
```json
{
- "time": "[DATE-TIME]",
- "systemId": "[ID]",
- "category": "NetworkSecurityGroupEvent",
- "resourceId": "/SUBSCRIPTIONS/[SUBSCRIPTION-ID]/RESOURCEGROUPS/[RESOURCE-GROUP-NAME]/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/[NSG-NAME]",
- "operationName": "NetworkSecurityGroupEvents",
- "properties": {
- "vnetResourceGuid":"[ID]",
- "subnetPrefix":"192.168.1.0/24",
- "macAddress":"00-0D-3A-92-6A-7C",
- "primaryIPv4Address":"192.168.1.4",
- "ruleName":"[SECURITY-RULE-NAME]",
- "direction":"[DIRECTION-SPECIFIED-IN-RULE]",
- "priority":"[PRIORITY-SPECIFIED-IN-RULE]",
- "type":"[ALLOW-OR-DENY-AS-SPECIFIED-IN-RULE]",
- "conditions":{
- "protocols":"[PROTOCOLS-SPECIFIED-IN-RULE]",
- "destinationPortRange":"[PORT-RANGE-SPECIFIED-IN-RULE]",
- "sourcePortRange":"[PORT-RANGE-SPECIFIED-IN-RULE]",
- "sourceIP":"[SOURCE-IP-OR-RANGE-SPECIFIED-IN-RULE]",
- "destinationIP":"[DESTINATION-IP-OR-RANGE-SPECIFIED-IN-RULE]"
- }
- }
+ "time": "[DATE-TIME]",
+ "systemId": "[ID]",
+ "category": "NetworkSecurityGroupEvent",
+ "resourceId": "/SUBSCRIPTIONS/[SUBSCRIPTION-ID]/RESOURCEGROUPS/[RESOURCE-GROUP-NAME]/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/[NSG-NAME]",
+ "operationName": "NetworkSecurityGroupEvents",
+ "properties": {
+ "vnetResourceGuid":"[ID]",
+ "subnetPrefix":"192.168.1.0/24",
+ "macAddress":"00-0D-3A-92-6A-7C",
+ "primaryIPv4Address":"192.168.1.4",
+ "ruleName":"[SECURITY-RULE-NAME]",
+ "direction":"[DIRECTION-SPECIFIED-IN-RULE]",
+ "priority":"[PRIORITY-SPECIFIED-IN-RULE]",
+ "type":"[ALLOW-OR-DENY-AS-SPECIFIED-IN-RULE]",
+ "conditions":{
+ "protocols":"[PROTOCOLS-SPECIFIED-IN-RULE]",
+ "destinationPortRange":"[PORT-RANGE-SPECIFIED-IN-RULE]",
+ "sourcePortRange":"[PORT-RANGE-SPECIFIED-IN-RULE]",
+ "sourceIP":"[SOURCE-IP-OR-RANGE-SPECIFIED-IN-RULE]",
+ "destinationIP":"[DESTINATION-IP-OR-RANGE-SPECIFIED-IN-RULE]"
+ }
+ }
}
```
The rule counter log contains information about each rule applied to resources.
```json
{
- "time": "[DATE-TIME]",
- "systemId": "[ID]",
- "category": "NetworkSecurityGroupRuleCounter",
- "resourceId": "/SUBSCRIPTIONS/[SUBSCRIPTION ID]/RESOURCEGROUPS/[RESOURCE-GROUP-NAME]/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/[NSG-NAME]",
- "operationName": "NetworkSecurityGroupCounters",
- "properties": {
- "vnetResourceGuid":"[ID]",
- "subnetPrefix":"192.168.1.0/24",
- "macAddress":"00-0D-3A-92-6A-7C",
- "primaryIPv4Address":"192.168.1.4",
- "ruleName":"[SECURITY-RULE-NAME]",
- "direction":"[DIRECTION-SPECIFIED-IN-RULE]",
- "type":"[ALLOW-OR-DENY-AS-SPECIFIED-IN-RULE]",
- "matchedConnections":125
- }
+ "time": "[DATE-TIME]",
+ "systemId": "[ID]",
+ "category": "NetworkSecurityGroupRuleCounter",
+ "resourceId": "/SUBSCRIPTIONS/[SUBSCRIPTION ID]/RESOURCEGROUPS/[RESOURCE-GROUP-NAME]/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/[NSG-NAME]",
+ "operationName": "NetworkSecurityGroupCounters",
+ "properties": {
+ "vnetResourceGuid":"[ID]",
+ "subnetPrefix":"192.168.1.0/24",
+ "macAddress":"00-0D-3A-92-6A-7C",
+ "primaryIPv4Address":"192.168.1.4",
+ "ruleName":"[SECURITY-RULE-NAME]",
+ "direction":"[DIRECTION-SPECIFIED-IN-RULE]",
+ "type":"[ALLOW-OR-DENY-AS-SPECIFIED-IN-RULE]",
+ "matchedConnections":125
+ }
}
```

> [!NOTE]
-> The source IP address for the communication is not logged. You can enable [NSG flow logging](../network-watcher/network-watcher-nsg-flow-logging-portal.md) for an NSG however, which logs all of the rule counter information, as well as the source IP address that initiated the communication. NSG flow log data is written to an Azure Storage account. You can analyze the data with the [traffic analytics](../network-watcher/traffic-analytics.md) capability of Azure Network Watcher.
+> The source IP address for the communication is not logged. You can enable [NSG flow logging](../network-watcher/network-watcher-nsg-flow-logging-portal.md) for an NSG, which logs all of the rule counter information and the source IP address that initiated the communication. NSG flow log data is written to an Azure Storage account. You can analyze the data with the [traffic analytics](../network-watcher/traffic-analytics.md) capability of Azure Network Watcher.
## View and analyze logs
-To learn how to view resource log data, see [Azure platform logs overview](../azure-monitor/essentials/platform-logs-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json). If you send diagnostics data to:
-- **Azure Monitor logs**: You can use the [network security group analytics](../azure-monitor/insights/azure-networking-analytics.md?toc=%2fazure%2fvirtual-network%2ftoc.json
-) solution for enhanced insights. The solution provides visualizations for NSG rules that allow or deny traffic, per MAC address, of the network interface in a virtual machine.
-- **Azure Storage account**: Data is written to a PT1H.json file. You can find the:
- - Event log in the following path: `insights-logs-networksecuritygroupevent/resourceId=/SUBSCRIPTIONS/[ID]/RESOURCEGROUPS/[RESOURCE-GROUP-NAME-FOR-NSG]/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/[NSG NAME]/y=[YEAR]/m=[MONTH/d=[DAY]/h=[HOUR]/m=[MINUTE]`
- - Rule counter log in the following path: `insights-logs-networksecuritygrouprulecounter/resourceId=/SUBSCRIPTIONS/[ID]/RESOURCEGROUPS/[RESOURCE-GROUP-NAME-FOR-NSG]/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/[NSG NAME]/y=[YEAR]/m=[MONTH/d=[DAY]/h=[HOUR]/m=[MINUTE]`
+If you send diagnostics data to:
+
+- **Azure Monitor logs**: You can use the [network security group analytics](../azure-monitor/insights/azure-networking-analytics.md?toc=%2fazure%2fvirtual-network%2ftoc.json) solution for enhanced insights. The solution provides visualizations for NSG rules that allow or deny traffic, per MAC address, of the network interface in a virtual machine.
+- **Azure Storage account**: Data is written to a *PT1H.json* file (see the CLI sketch after this list). You can find the:
+
+  - Event log in the following path: *insights-logs-networksecuritygroupevent/resourceId=/SUBSCRIPTIONS/[ID]/RESOURCEGROUPS/[RESOURCE-GROUP-NAME-FOR-NSG]/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/[NSG NAME]/y=[YEAR]/m=[MONTH]/d=[DAY]/h=[HOUR]/m=[MINUTE]*
+  - Rule counter log in the following path: *insights-logs-networksecuritygrouprulecounter/resourceId=/SUBSCRIPTIONS/[ID]/RESOURCEGROUPS/[RESOURCE-GROUP-NAME-FOR-NSG]/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/[NSG NAME]/y=[YEAR]/m=[MONTH]/d=[DAY]/h=[HOUR]/m=[MINUTE]*
+
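The following sketch shows one way to locate those *PT1H.json* blobs from the Azure CLI; the storage account name *mynsglogs* is a hypothetical placeholder:

```azurecli-interactive
# List the event log blobs that the diagnostic setting wrote.
az storage blob list \
  --account-name mynsglogs \
  --container-name insights-logs-networksecuritygroupevent \
  --query "[].name" \
  --output tsv
```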
+To learn how to view resource log data, see [Azure platform logs overview](../azure-monitor/essentials/platform-logs-overview.md).
## Next steps

-- Learn more about [Activity logging](../azure-monitor/essentials/platform-logs-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json). Activity logging is enabled by default for NSGs created through either Azure deployment model. To determine which operations were completed on NSGs in the activity log, look for entries that contain the following resource types:
+- For more information about Activity logging, see [Overview of Azure platform logs](../azure-monitor/essentials/platform-logs-overview.md).
+
+ Activity logging is enabled by default for NSGs created through either Azure deployment model. To determine which operations were completed on NSGs in the activity log, look for entries that contain the following resource types:
+
  - Microsoft.ClassicNetwork/networkSecurityGroups
  - Microsoft.ClassicNetwork/networkSecurityGroups/securityRules
  - Microsoft.Network/networkSecurityGroups
  - Microsoft.Network/networkSecurityGroups/securityRules
-- To learn how to log diagnostic information, to include the source IP address for each flow, see [NSG flow logging](../network-watcher/network-watcher-nsg-flow-logging-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json).
+
+- To learn how to log diagnostic information, see [Log network traffic to and from a virtual machine using the Azure portal](../network-watcher/network-watcher-nsg-flow-logging-portal.md).
virtual-network Virtual Network Optimize Network Bandwidth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-optimize-network-bandwidth.md
Title: Optimize VM network throughput
-description: Optimize network throughput for Microsoft Azure Windows and Linux VMs, including major distributions such as Ubuntu, CentOS, and Red Hat.
+ Title: Optimize Azure VM network throughput
+description: Optimize network throughput for Microsoft Azure Windows and Linux virtual machines, including major distributions such as Ubuntu, CentOS, and Red Hat.
Previously updated : 10/06/2020 Last updated : 03/24/2023

# Optimize network throughput for Azure virtual machines
-Azure virtual machines (VM) have default network settings that can be further optimized for network throughput. This article describes how to optimize network throughput for Microsoft Azure Windows and Linux VMs, including major distributions such as Ubuntu, CentOS, and Red Hat.
+Azure Virtual Machines (VMs) have default network settings that can be further optimized for network throughput. This article describes how to optimize network throughput for Microsoft Azure Windows and Linux VMs, including major distributions such as Ubuntu, CentOS, and Red Hat.
-## Windows VM
+## Windows virtual machines
-If your Windows VM supports [Accelerated Networking](create-vm-accelerated-networking-powershell.md), enabling that feature would be the optimal configuration for throughput. For all other Windows VMs, using Receive Side Scaling (RSS) can reach higher maximal throughput than a VM without RSS. RSS may be disabled by default in a Windows VM. To determine whether RSS is enabled, and enable it if it's currently disabled, complete the following steps:
+If your Windows virtual machine supports *accelerated networking*, enable that feature for optimal throughput. For more information, see [Create a Windows VM with accelerated networking](create-vm-accelerated-networking-powershell.md).
-1. See if RSS is enabled for a network adapter with the `Get-NetAdapterRss` PowerShell command. In the following example output returned from the `Get-NetAdapterRss`, RSS is not enabled.
+For all other Windows virtual machines, using Receive Side Scaling (RSS) can reach higher maximal throughput than a VM without RSS. RSS might be disabled by default in a Windows VM. To determine whether RSS is enabled, and enable it if it's currently disabled, complete the following steps:
- ```powershell
- Name : Ethernet
- InterfaceDescription : Microsoft Hyper-V Network Adapter
- Enabled : False
- ```
-2. To enable RSS, enter the following command:
+1. See if RSS is enabled for a network adapter with the [Get-NetAdapterRss](/powershell/module/netadapter/get-netadapterrss) PowerShell command. In the following example output returned from the `Get-NetAdapterRss`, RSS isn't enabled.
- ```powershell
- Get-NetAdapter | % {Enable-NetAdapterRss -Name $_.Name}
- ```
- The previous command does not have an output. The command changed NIC settings, causing temporary connectivity loss for about one minute. A Reconnecting dialog box appears during the connectivity loss. Connectivity is typically restored after the third attempt.
-3. Confirm that RSS is enabled in the VM by entering the `Get-NetAdapterRss` command again. If successful, the following example output is returned:
+ ```powershell
+ Name : Ethernet
+ InterfaceDescription : Microsoft Hyper-V Network Adapter
+ Enabled : False
+ ```
- ```powershell
- Name : Ethernet
- InterfaceDescription : Microsoft Hyper-V Network Adapter
- Enabled : True
- ```
+1. To enable RSS, enter the following command:
-## Linux VM
+ ```powershell
+ Get-NetAdapter | % {Enable-NetAdapterRss -Name $_.Name}
+ ```
+
+ This command doesn't have an output. The command changes NIC settings. It causes temporary connectivity loss for about one minute. A *Reconnecting* dialog appears during the connectivity loss. Connectivity is typically restored after the third attempt.
+
+1. Confirm that RSS is enabled in the VM by entering the `Get-NetAdapterRss` command again. If successful, the following example output is returned:
+
+ ```powershell
+ Name : Ethernet
+ InterfaceDescription : Microsoft Hyper-V Network Adapter
+ Enabled : True
+ ```
+
+## Linux virtual machines
RSS is always enabled by default in an Azure Linux VM. Linux kernels released since October 2017 include new network optimization options that enable a Linux VM to achieve higher network throughput.
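As an optional check, here's a sketch that assumes the primary network interface is named *eth0*; it shows the receive queues that RSS can spread traffic across:

```bash
# Show channel (queue) counts for the NIC; more combined channels
# generally means receive work can be spread across more CPUs.
ethtool -l eth0
```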
After the creation is complete, enter the following commands to get the latest updates.

```bash
#run as root or preface with sudo
-apt-get -y update
-apt-get -y upgrade
-apt-get -y dist-upgrade
+sudo apt-get -y update
+sudo apt-get -y upgrade
+sudo apt-get -y dist-upgrade
```
-The following optional command set may be helpful for existing Ubuntu deployments that already have the Azure kernel but that have failed to further updates with errors.
+If an existing Ubuntu deployment already has the Azure kernel but fails to update with errors, this optional command set might be helpful.
```bash
-#optional steps may be helpful in existing deployments with the Azure kernel
+#optional steps might be helpful in existing deployments with the Azure kernel
#run as root or preface with sudo
-apt-get -f install
-apt-get --fix-missing install
-apt-get clean
-apt-get -y update
-apt-get -y upgrade
-apt-get -y dist-upgrade
+sudo apt-get -f install
+sudo apt-get --fix-missing install
+sudo apt-get clean
+sudo apt-get -y update
+sudo apt-get -y upgrade
+sudo apt-get -y dist-upgrade
```

#### Ubuntu Azure kernel upgrade for existing VMs
-Significant throughput performance can be achieved by upgrading to the Azure Linux kernel. To verify whether you have this kernel, check your kernel version. It should be the same or later than the example.
+You can get significant throughput performance by upgrading to the Azure Linux kernel. To verify whether you have this kernel, check your kernel version. It should be the same or later than the example.
```bash
#Azure kernel name ends with "-azure"
uname -r
#4.13.0-1007-azure
```
-If your VM does not have the Azure kernel, the version number usually begins with "4.4." If the VM does not have the Azure kernel, run the following commands as root:
+If your virtual machine doesn't have the Azure kernel, the version number usually begins with "4.4." If the VM doesn't have the Azure kernel, run the following commands as root:
```bash
#run as root or preface with sudo
-apt-get update
-apt-get upgrade -y
-apt-get dist-upgrade -y
-apt-get install "linux-azure"
-reboot
+sudo apt-get update
+sudo apt-get upgrade -y
+sudo apt-get dist-upgrade -y
+sudo apt-get install "linux-azure"
+sudo reboot
```

### CentOS
-In order to get the latest optimizations, it is best to create a VM with the latest supported version by specifying the following parameters:
+In order to get the latest optimizations, we recommend that you create a virtual machine with the latest supported version by specifying the following parameters:
```json "Publisher": "OpenLogic",
In order to get the latest optimizations, it is best to create a VM with the lat
"Version": "latest" ```
-New and existing VMs can benefit from installing the latest Linux Integration Services (LIS). The throughput optimization is in LIS, starting from 4.2.2-2, although later versions contain further improvements. Enter the following
+Both new and existing VMs can benefit from installing the latest Linux Integration Services (LIS). The throughput optimization is in LIS, starting from 4.2.2-2. Later versions contain further improvements. Enter the following
commands to install the latest LIS:

```bash
sudo yum install microsoft-hyper-v
```

### Red Hat
-In order to get the optimizations, it is best to create a VM with the latest supported version by specifying the following parameters:
+In order to get the optimizations, we recommend that you create a virtual machine with the latest supported version by specifying the following parameters:
```json "Publisher": "RedHat"
In order to get the optimizations, it is best to create a VM with the latest sup
"Version": "latest" ```
-New and existing VMs can benefit from installing the latest Linux Integration Services (LIS). The throughput optimization is in LIS, starting from 4.2. Enter the following commands to download and install LIS:
+Both new and existing VMs can benefit from installing the latest LIS. The throughput optimization is in LIS, starting from 4.2. Enter the following commands to download and install LIS:
```bash
wget https://aka.ms/lis
cd LISISO
sudo ./install.sh #or upgrade.sh if prior LIS was previously installed
```
-Learn more about Linux Integration Services Version 4.2 for Hyper-V by viewing the [download page](https://www.microsoft.com/download/details.aspx?id=55106).
+Learn more about Linux Integration Services Version 4.3 for Hyper-V by viewing the [download page](https://www.microsoft.com/download/details.aspx?id=55106).
## Next steps
-* Deploy VMs close to each other for low latency with [Proximity Placement Group](../virtual-machines/co-location.md)
-* See the optimized result with [Bandwidth/Throughput testing Azure VM](virtual-network-bandwidth-testing.md) for your scenario.
-* Read about how [bandwidth is allocated to virtual machines](virtual-machine-network-throughput.md)
-* Learn more with [Azure Virtual Network frequently asked questions (FAQ)](virtual-networks-faq.md)
+
+- Deploy VMs close to each other for low latency with [proximity placement groups](../virtual-machines/co-location.md).
+- See the optimized result with [Bandwidth/Throughput testing](virtual-network-bandwidth-testing.md) for your scenario.
+- Read about how [bandwidth is allocated to virtual machines](virtual-machine-network-throughput.md).
+- Learn more with [Azure Virtual Network frequently asked questions](virtual-networks-faq.md).
virtual-network Virtual Network Test Latency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-test-latency.md
Title: Test Azure virtual machine network latency in an Azure virtual network
-description: Learn how to test network latency between Azure virtual machines on a virtual network
+ Title: Test network latency between Azure VMs
+description: Learn how to test network latency between Azure virtual machines on a virtual network.
Previously updated : 10/29/2019 Last updated : 03/23/2023
-# Test VM network latency
+# Test network latency between Azure VMs
-To achieve the most accurate results, measure your Azure virtual machine (VM) network latency with a tool that's designed for the task. Publicly available tools such as SockPerf (for Linux) and latte.exe (for Windows) can isolate and measure network latency while excluding other types of latency, such as application latency. These tools focus on the kind of network traffic that affects application performance (namely, Transmission Control Protocol [TCP] and User Datagram Protocol [UDP] traffic).
+This article describes how to test network latency between Azure virtual machines (VMs) by using the publicly available tools [Latte](https://github.com/microsoft/latte) for Windows or [SockPerf](https://github.com/mellanox/sockperf) for Linux.
-Other common connectivity tools, such as Ping, might measure latency, but their results might not represent the network traffic that's used in real workloads. That's because most of these tools employ the Internet Control Message Protocol (ICMP), which can be treated differently from application traffic and whose results might not apply to workloads that use TCP and UDP.
+For the most accurate results, you should measure VM network latency with a tool that's designed for the task and excludes other types of latency, such as application latency. Latte and SockPerf provide the most relevant network latency results by focusing on Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) traffic. Most applications use these protocols, and this traffic has the largest effect on application performance.
-For accurate network latency testing of the protocols used by most applications, SockPerf (for Linux) and latte.exe (for Windows) produce the most relevant results. This article covers both of these tools.
+Many other common network latency test tools, such as Ping, don't measure TCP or UDP traffic. Tools like Ping use Internet Control Message Protocol (ICMP), which applications don't use. ICMP traffic can be treated differently from application traffic and doesn't directly affect application performance. ICMP test results don't directly apply to workloads that use TCP and UDP.
-## Overview
+Latte and SockPerf measure only TCP or UDP payload delivery times. These tools use the following approach to measure network latency between two physical or virtual computers:
-By using two VMs, one as sender and one as receiver, you create a two-way communications channel. With this approach, you can send and receive packets in both directions and measure the round-trip time (RTT).
+1. Create a two-way communications channel between the computers by designating one as sender and one as receiver.
+1. Send and receive packets in both directions and measure the round-trip time (RTT).
-You can use this approach to measure network latency between two VMs or even between two physical computers. Latency measurements can be useful for the following scenarios:
+## Tips and best practices to optimize network latency
-- Establish a benchmark for network latency between the deployed VMs.
-- Compare the effects of changes in network latency after related changes are made to:
- - Operating system (OS) or network stack software, including configuration changes.
- - A VM deployment method, such as deploying to an availability zone or proximity placement group (PPG).
- - VM properties, such as Accelerated Networking or size changes.
- - A virtual network, such as routing or filtering changes.
+To optimize VMs for network latency, observe the following recommendations when you create the VMs:
-### Tools for testing
-To measure latency, you have two different tool options:
-
-* For Windows-based systems: [latte.exe (Windows)](https://github.com/microsoft/latte/releases/download/v0/latte.exe)
-* For Linux-based systems: [SockPerf (Linux)](https://github.com/mellanox/sockperf)
-
-By using these tools, you help ensure that only TCP or UDP payload delivery times are measured and not ICMP (Ping) or other packet types that aren't used by applications and don't affect their performance.
-
-### Tips for creating an optimal VM configuration
-
-When you create your VM configuration, keep in mind the following recommendations:
- Use the latest version of Windows or Linux.
-- Enable Accelerated Networking for best results.
-- Deploy VMs with an [Azure proximity placement group](../virtual-machines/co-location.md).
-- Larger VMs generally perform better than smaller VMs.
+- Enable [Accelerated Networking](accelerated-networking-overview.md) for increased performance.
+- Deploy VMs within an [Azure proximity placement group](/azure/virtual-machines/co-location).
+- Create larger VMs for better performance.
-### Tips for analysis
+Use the following best practices to test and analyze network latency:
-As you're analyzing test results, keep in mind the following recommendations:
+1. As soon as you finish deploying, configuring, and optimizing network VMs, take baseline network latency measurements between deployed VMs to establish benchmarks.
-- Establish a baseline early, as soon as deployment, configuration, and optimizations are complete.
-- Always compare new results to a baseline or, otherwise, from one test to another with controlled changes.
-- Repeat tests whenever changes are observed or planned.
+1. Test the effects on network latency of changing any of the following components:
+ - Operating system (OS) or network stack software, including configuration changes.
+ - VM deployment method, such as deploying to an availability zone or proximity placement group (PPG).
+ - VM properties, such as Accelerated Networking or size changes.
+ - Virtual network configuration, such as routing or filtering changes.
+1. Always compare new test results to the baseline or to the latest test results before controlled changes.
-## Test VMs that are running Windows
+1. Repeat tests whenever you observe or deploy changes.
-### Get latte.exe onto the VMs
+## Test VMs with Latte or SockPerf
-Download the [latest version of latte.exe](https://github.com/microsoft/latte/releases/download/v0/latte.exe).
+Use the following procedures to install and test network latency with [Latte](https://github.com/microsoft/latte) for Windows or [SockPerf](https://github.com/mellanox/sockperf) for Linux.
-Consider putting latte.exe in separate folder, such as *c:\tools*.
+# [Windows](#tab/windows)
-### Allow latte.exe through Windows Defender Firewall
+### Install Latte and configure VMs
-On the *receiver*, create an Allow rule on Windows Defender Firewall to allow the latte.exe traffic to arrive. It's easiest to allow the entire latte.exe program by name rather than to allow specific TCP ports inbound.
+1. [Download the latest version of latte.exe](https://github.com/microsoft/latte/releases/download/v0/latte.exe) to both VMs, into a separate folder such as *c:\\tools*.
-Allow latte.exe through Windows Defender Firewall by running the following command:
+1. On the *receiver* VM, create a Windows Defender Firewall `allow` rule to allow the Latte traffic to arrive. It's easier to allow the *latte.exe* program by name than to allow specific inbound TCP ports. In the command, replace the `<path>` placeholder with the path you downloaded *latte.exe* to, such as *c:\\tools\\*.
-```cmd
-netsh advfirewall firewall add rule program=<path>\latte.exe name="Latte" protocol=any dir=in action=allow enable=yes profile=ANY
-```
-
-For example, if you copied latte.exe to the *c:\tools* folder, this would be the command:
+ ```cmd
+ netsh advfirewall firewall add rule program=<path>latte.exe name="Latte" protocol=any dir=in action=allow enable=yes profile=ANY
+ ```
-`netsh advfirewall firewall add rule program=c:\tools\latte.exe name="Latte" protocol=any dir=in action=allow enable=yes profile=ANY`
+### Run Latte on the VMs
-### Run latency tests
+Run *latte.exe* from the Windows command line, not from PowerShell.
-* On the *receiver*, start latte.exe (run it from the CMD window, not from PowerShell):
+1. On the receiver VM, run the following command, replacing the `<receiver IP address>`, `<port>`, and `<iterations>` placeholders with your own values.
- ```cmd
- latte -a <Receiver IP address>:<port> -i <iterations>
- ```
+ ```cmd
+ latte -a <receiver IP address>:<port> -i <iterations>
+ ```
- Around 65,000 iterations is long enough to return representative results.
+ - Around 65,000 iterations are enough to return representative results.
+ - Any available port number is fine.
- Any available port number is fine.
+ The following example shows the command for a VM with an IP address of `10.0.0.4`:<br><br>`latte -a 10.0.0.4:5005 -i 65100`
- If the VM has an IP address of 10.0.0.4, the command would look like this:
+1. On the *sender* VM, run the same command as on the receiver, except with `-c` added to indicate the *client* or sender VM. Again replace the `<receiver IP address>`, `<port>`, and `<iterations>` placeholders with your own values.
- `latte -a 10.0.0.4:5005 -i 65100`
+ ```cmd
+ latte -c -a <receiver IP address>:<port> -i <iterations>
+ ```
-* On the *sender*, start latte.exe (run it from the CMD window, not from PowerShell):
+ For example:
+
+ `latte -c -a 10.0.0.4:5005 -i 65100`
- ```cmd
- latte -c -a <Receiver IP address>:<port> -i <iterations>
- ```
+1. Wait for the results. Depending on how far apart the VMs are, the test could take a few minutes to finish. Consider starting with fewer iterations to test for success before running longer tests.
- The resulting command is the same as on the receiver, except with the addition of&nbsp;*-c* to indicate that this is the *client*, or *sender*:
+# [Linux](#tab/linux)
- `latte -c -a 10.0.0.4:5005 -i 65100`
+### Prepare VMs
-Wait for the results. Depending on how far apart the VMs are, the test could take a few minutes to finish. Consider starting with fewer iterations to test for success before running longer tests.
+On both the *sender* and *receiver* Linux VMs, run the following commands to prepare for SockPerf, depending on your Linux distro.
-## Test VMs that are running Linux
+- Red Hat Enterprise Linux (RHEL) or CentOS:
-To test VMs that are running Linux, use [SockPerf](https://github.com/mellanox/sockperf).
+ ```bash
+ #RHEL/CentOS - Install Git and other helpful tools
+ sudo yum install gcc -y -q
+ sudo yum install git -y -q
+ sudo yum install gcc-c++ -y
+ sudo yum install ncurses-devel -y
+ sudo yum install -y automake
+ sudo yum install -y autoconf
+ sudo yum install -y libtool
+ ```
-### Install SockPerf on the VMs
+- Ubuntu:
-On the Linux VMs, both *sender* and *receiver*, run the following commands to prepare SockPerf on the VMs. Commands are provided for the major distros.
+ ```bash
+ #Ubuntu - Install Git and other helpful tools
+ sudo apt-get install build-essential -y
+ sudo apt-get install git -y -q
+ sudo apt-get install -y autotools-dev
+ sudo apt-get install -y automake
+ sudo apt-get install -y autoconf
+ sudo apt-get install -y libtool
+ sudo apt update
+ sudo apt upgrade
+ ```
-#### For Red Hat Enterprise Linux (RHEL)/CentOS
+### Copy, compile, and install SockPerf
-Run the following commands:
-
-```bash
-#RHEL/CentOS - Install Git and other helpful tools
- sudo yum install gcc -y -q
- sudo yum install git -y -q
- sudo yum install gcc-c++ -y
- sudo yum install ncurses-devel -y
- sudo yum install -y automake
- sudo yum install -y autoconf
- sudo yum install -y libtool
-```
-
-#### For Ubuntu
-
-Run the following commands:
-
-```bash
-#Ubuntu - Install Git and other helpful tools
- sudo apt-get install build-essential -y
- sudo apt-get install git -y -q
- sudo apt-get install -y autotools-dev
- sudo apt-get install -y automake
- sudo apt-get install -y autoconf
- sudo apt-get install -y libtool
- sudo apt update
- sudo apt upgrade
-```
-
-#### For all distros
-
-Copy, compile, and install SockPerf according to the following steps:
+Copy, compile, and install SockPerf by running the following commands:
```bash
#Bash - all distros
cd sockperf/
./autogen.sh
./configure --prefix=
-#make is slower, may take several minutes
+#make is slow, may take several minutes
make
#make install is fast
sudo make install
```

### Run SockPerf on the VMs
-After the SockPerf installation is complete, the VMs are ready to run the latency tests.
-
-First, start SockPerf on the *receiver*.
-
-Any available port number is fine. In this example, we use port 12345:
-
-```bash
-#Server/Receiver - assumes server's IP is 10.0.0.4:
-sudo sockperf sr --tcp -i 10.0.0.4 -p 12345
-```
+1. After the SockPerf installation is complete, start SockPerf on the *receiver* VM. Any available port number is fine. The following example uses port `12345`. Replace the example IP address of `10.0.0.4` with the IP address of your receiver VM.
-Now that the server is listening, the client can begin sending packets to the server on the port on which it is listening (in this case, 12345).
+ ```bash
+ #Server/Receiver for IP 10.0.0.4:
+ sudo sockperf sr --tcp -i 10.0.0.4 -p 12345
+ ```
-About 100 seconds is long enough to return representative results, as shown in the following example:
+1. Now that the receiver is listening, run the following command on the *sender* or client computer to send packets to the receiver on the listening port, in this case `12345`.
-```bash
-#Client/Sender - assumes server's IP is 10.0.0.4:
-sockperf ping-pong -i 10.0.0.4 --tcp -m 350 -t 101 -p 12345 --full-rtt
-```
+ ```bash
+ #Client/Sender for IP 10.0.0.4:
+ sockperf ping-pong -i 10.0.0.4 --tcp -m 350 -t 101 -p 12345 --full-rtt
+ ```
-Wait for the results. Depending on how far apart the VMs are, the number of iterations will vary. To test for success before you run longer tests, consider starting with shorter tests of about 5 seconds.
+ - The `-t` option sets testing time in seconds. About 100 seconds is long enough to return representative results.
+   - The `-m` option denotes the message size in bytes. A 350-byte message size is typical for an average packet. You can adjust the size to more accurately represent your VM's workloads.
-This SockPerf example uses a 350-byte message size, which is typical for an average packet. You can adjust the size higher or lower to achieve results that more accurately represent the workload that's running on your VMs.
+1. Wait for the results. Depending on how far apart the VMs are, the number of iterations varies. To test for success before you run longer tests, consider starting with shorter tests of about five seconds.
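As a concrete illustration of that shorter run, the following sketch reuses the example IP address and port from the previous steps and reduces `-t` to five seconds:

```bash
# Five-second smoke test before committing to a longer run.
sockperf ping-pong -i 10.0.0.4 --tcp -m 350 -t 5 -p 12345 --full-rtt
```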
+

## Next steps
-* Improve latency with an [Azure proximity placement group](../virtual-machines/co-location.md).
-* Learn how to [Optimize networking for VMs](../virtual-network/virtual-network-optimize-network-bandwidth.md) for your scenario.
-* Read about [how bandwidth is allocated to virtual machines](../virtual-network/virtual-machine-network-throughput.md).
-* For more information, see [Azure Virtual Network FAQ](../virtual-network/virtual-networks-faq.md).
+
+- Reduce latency with an [Azure proximity placement group](/azure/virtual-machines/co-location).
+- [Optimize network throughput for Azure virtual machines](virtual-network-optimize-network-bandwidth.md).
+- Allocate [virtual machine network bandwidth](virtual-machine-network-throughput.md).
+- [Test bandwidth and throughput](virtual-network-bandwidth-testing.md).
+- For more information about Azure virtual networking, see [Azure Virtual Network FAQ](virtual-networks-faq.md).
virtual-network Virtual Networks Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-networks-faq.md
No. Multicast and broadcast are not supported.
You can use TCP, UDP, and ICMP TCP/IP protocols within VNets. Unicast is supported within VNets. Multicast, broadcast, IP-in-IP encapsulated packets, and Generic Routing Encapsulation (GRE) packets are blocked within VNets. You cannot use Dynamic Host Configuration Protocol (DHCP) via unicast (source port UDP/68, destination port UDP/67) or UDP source port 65330, which is reserved for the host. See ["Can I deploy a DHCP server in a VNet"](#can-i-deploy-a-dhcp-server-in-a-vnet) for more detail on what is and isn't supported for DHCP.

### Can I deploy a DHCP server in a VNet?
-Azure VNets provide DHCP service and DNS to VMs and client/server DHCP (source port UDP/68, destination port UDP/67) not supported in a VNet. You cannot deploy your own DHCP service to receive and provide unicast/broadcast client/server DHCP traffic for endpoints inside a VNet. You can deploy a DHCP server on a VM with the intent to receive unicast DHCP relay (source port UDP/67, destination port UDP/67) DHCP traffic. A possible scenario is configuring DHCP relay from devices on-premises to an Azure VM running a DHCP server. Customer is responsible for configuring on-premise devices (for example, router configuration) to create this DHCP relay traffic to the VM's IP in Azure.
+Azure VNets provide DHCP service and DNS to VMs and client/server DHCP (source port UDP/68, destination port UDP/67) not supported in a VNet. You cannot deploy your own DHCP service to receive and provide unicast/broadcast client/server DHCP traffic for endpoints inside a VNet. It is also an *unsupported* scenario to deploy a DHCP server VM with the intent to receive unicast DHCP relay (source port UDP/67, destination port UDP/67) DHCP traffic.
### Can I ping default gateway within a VNet?

No. The Azure-provided default gateway doesn't respond to ping. But you can use ping in your VNets to check connectivity and troubleshoot connectivity between VMs.
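For example, from one VM you can ping another VM's private IP address; the address here is a hypothetical placeholder:

```bash
# Test connectivity to another VM in the same VNet.
ping 10.0.0.5
```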
virtual-wan About Virtual Hub Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/about-virtual-hub-routing.md
Each connection is associated to one route table. Associating a connection to a
By default, all connections are associated to a **Default route table** in a virtual hub. Each virtual hub has its own Default route table, which can be edited to add static routes. Routes added statically take precedence over dynamically learned routes for the same prefixes.

### <a name="propagation"></a>Propagation
Connections dynamically propagate routes to a route table. With a VPN connection
A **None route table** is also available for each virtual hub. Propagating to the None route table implies that no routes are required to be propagated from the connection. VPN, ExpressRoute, and User VPN connections propagate routes to the same set of route tables.

### <a name="labels"></a>Labels
web-application-firewall Waf Front Door Geo Filtering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-geo-filtering.md
You can configure a geo-filtering policy for your Front Door by using [Azure Pow
| TG | Togo|
| TH | Thailand|
| TN | Tunisia|
-| TR | Turkey|
+| TR | Türkiye|
| TT | Trinidad and Tobago|
| TW | Taiwan|
| TZ | Tanzania, United Republic of|
web-application-firewall Geomatch Custom Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/geomatch-custom-rules.md
If you are using the Geomatch operator, the selectors can be any of the followin
| TG | Togo|
| TH | Thailand|
| TN | Tunisia|
-| TR | Turkey|
+| TR | Türkiye |
| TT | Trinidad and Tobago|
| TW | Taiwan|
| TZ | Tanzania, United Republic of|