Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
active-directory-b2c | Configure Authentication In Sample Node Web App With Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-in-sample-node-web-app-with-api.md | |
active-directory-b2c | Data Residency | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/data-residency.md | Data resides in the **United States** for the following locations: Data resides in **Europe** for the following locations: -> Algeria (DZ), Austria (AT), Azerbaijan (AZ), Bahrain (BH), Belarus (BY), Belgium (BE), Bulgaria (BG), Croatia (HR), Cyprus (CY), Czech Republic (CZ), Denmark (DK), Egypt (EG), Estonia (EE), Finland (FT), France (FR), Germany (DE), Greece (GR), Hungary (HU), Iceland (IS), Ireland (IE), Israel (IL), Italy (IT), Jordan (JO), Kazakhstan (KZ), Kenya (KE), Kuwait (KW), Latvia (LV), Lebanon (LB), Liechtenstein (LI), Lithuania (LT), Luxembourg (LU), North Macedonia (ML), Malta (MT), Montenegro (ME), Morocco (MA), Netherlands (NL), Nigeria (NG), Norway (NO), Oman (OM), Pakistan (PK), Poland (PL), Portugal (PT), Qatar (QA), Romania (RO), Russia (RU), Saudi Arabia (SA), Serbia (RS), Slovakia (SK), Slovenia (ST), South Africa (ZA), Spain (ES), Sweden (SE), Switzerland (CH), Tunisia (TN), Turkey (TR), Ukraine (UA), United Arab Emirates (AE) and United Kingdom (GB) +> Algeria (DZ), Austria (AT), Azerbaijan (AZ), Bahrain (BH), Belarus (BY), Belgium (BE), Bulgaria (BG), Croatia (HR), Cyprus (CY), Czech Republic (CZ), Denmark (DK), Egypt (EG), Estonia (EE), Finland (FT), France (FR), Germany (DE), Greece (GR), Hungary (HU), Iceland (IS), Ireland (IE), Israel (IL), Italy (IT), Jordan (JO), Kazakhstan (KZ), Kenya (KE), Kuwait (KW), Latvia (LV), Lebanon (LB), Liechtenstein (LI), Lithuania (LT), Luxembourg (LU), North Macedonia (ML), Malta (MT), Montenegro (ME), Morocco (MA), Netherlands (NL), Nigeria (NG), Norway (NO), Oman (OM), Pakistan (PK), Poland (PL), Portugal (PT), Qatar (QA), Romania (RO), Russia (RU), Saudi Arabia (SA), Serbia (RS), Slovakia (SK), Slovenia (ST), South Africa (ZA), Spain (ES), Sweden (SE), Switzerland (CH), Tunisia (TN), Türkiye (TR), Ukraine (UA), United Arab Emirates (AE) and United Kingdom (GB) Data resides in **Asia Pacific** for the following locations: |
active-directory-b2c | Enable Authentication Angular Spa App Options | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-angular-spa-app-options.md | |
active-directory-b2c | Enable Authentication Angular Spa App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-angular-spa-app.md | |
active-directory-b2c | Enable Authentication Ios App Options | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-ios-app-options.md | |
active-directory-b2c | Enable Authentication Ios App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-ios-app.md | Review the prerequisites and integration instructions in [Configure authenticati ## Create an iOS Swift app project -If you don't already have an iOS Swift application, set up a new project by doing the following: +If you don't already have an iOS Swift application, set up a new project by doing the following steps: 1. Open [Xcode](https://developer.apple.com/xcode/), and then select **File** > **New** > **Project**. 1. For iOS apps, select **iOS** > **App**, and then select **Next**. If you don't already have an iOS Swift application, set up a new project by doin ## Step 1: Install the MSAL library -1. Use [CocoaPods](https://cocoapods.org/) to install the MSAL library. In the same folder as your project's *.xcodeproj* file, if the *podfile* file doesn't exist, create an empty file called *podfile*. Add the following code to the *podfile* file: +1. Use [CocoaPods](https://cocoapods.org/) to install the MSAL library. In the same folder as your project's *.xcodeproj* file, if the *podfile* file doesn't exist, create an empty file and name it *podfile*. Add the following code to the *podfile* file: ``` use_frameworks! The [sample code](configure-authentication-sample-ios-app.md#step-4-get-the-ios- - Contains information about your Azure AD B2C identity provider. The app uses this information to establish a trust relationship with Azure AD B2C. - Contains the authentication code to authenticate users, acquire tokens, and validate them. -Choose a `UIViewController` where users will authenticate. In your `UIViewController`, merge the code with the [code that's provided in GitHub](https://github.com/Azure-Samples/active-directory-b2c-ios-swift-native-msal/blob/vNext/MSALiOS/ViewController.swift). +Choose a `UIViewController` where users authenticate. In your `UIViewController`, merge the code with the [code that's provided in GitHub](https://github.com/Azure-Samples/active-directory-b2c-ios-swift-native-msal/blob/vNext/MSALiOS/ViewController.swift). ## Step 4: Configure your iOS Swift app Authorization: Bearer <access-token> When users [authenticate interactively](#step-62-start-an-interactive-authorization-request), the app gets an access token in the `acquireToken` closure. For subsequent web API calls, use the acquire token silent (`acquireTokenSilent`) method, as described in this section. -The `acquireTokenSilent` method does the following: +The `acquireTokenSilent` method does the following actions: 1. It attempts to fetch an access token with the requested scopes from the token cache. If the token is present and hasn't expired, the token is returned. 1. If the token isn't present in the token cache or it has expired, the MSAL library attempts to use the refresh token to acquire a new access token. |
active-directory-b2c | Enable Authentication Spa App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-spa-app.md | app.listen(port, () => { ## Step 4: Create the SPA user interface -Add the SAP app `https://docsupdatetracker.net/index.html` file. This file implements a user interface that's built with a Bootstrap framework, and it imports script files for configuration, authentication, and web API calls. +Add the SPA app `https://docsupdatetracker.net/index.html` file. This file implements a user interface that's built with a Bootstrap framework, and it imports script files for configuration, authentication, and web API calls. The resources referenced by the *https://docsupdatetracker.net/index.html* file are detailed in the following table: |
active-directory-b2c | Localization String Ids | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/localization-string-ids.md | The following example shows the use of some of the user interface elements in th <LocalizedString ElementType="UxElement" StringId="error_phone_throttled">You hit the limit on the number of call attempts. Try again shortly.</LocalizedString> <LocalizedString ElementType="UxElement" StringId="error_throttled">You hit the limit on the number of verification attempts. Try again shortly.</LocalizedString> <LocalizedString ElementType="UxElement" StringId="error_incorrect_code">The verification code you have entered does not match our records. Please try again, or request a new code.</LocalizedString>- <LocalizedString ElementType="UxElement" StringId="countryList">{"DEFAULT":"Country/Region","AF":"Afghanistan","AX":"Åland Islands","AL":"Albania","DZ":"Algeria","AS":"American Samoa","AD":"Andorra","AO":"Angola","AI":"Anguilla","AQ":"Antarctica","AG":"Antigua and Barbuda","AR":"Argentina","AM":"Armenia","AW":"Aruba","AU":"Australia","AT":"Austria","AZ":"Azerbaijan","BS":"Bahamas","BH":"Bahrain","BD":"Bangladesh","BB":"Barbados","BY":"Belarus","BE":"Belgium","BZ":"Belize","BJ":"Benin","BM":"Bermuda","BT":"Bhutan","BO":"Bolivia","BQ":"Bonaire","BA":"Bosnia and Herzegovina","BW":"Botswana","BV":"Bouvet Island","BR":"Brazil","IO":"British Indian Ocean Territory","VG":"British Virgin Islands","BN":"Brunei","BG":"Bulgaria","BF":"Burkina Faso","BI":"Burundi","CV":"Cabo Verde","KH":"Cambodia","CM":"Cameroon","CA":"Canada","KY":"Cayman Islands","CF":"Central African Republic","TD":"Chad","CL":"Chile","CN":"China","CX":"Christmas Island","CC":"Cocos (Keeling) Islands","CO":"Colombia","KM":"Comoros","CG":"Congo","CD":"Congo (DRC)","CK":"Cook Islands","CR":"Costa Rica","CI":"Côte d'Ivoire","HR":"Croatia","CU":"Cuba","CW":"Curaçao","CY":"Cyprus","CZ":"Czech Republic","DK":"Denmark","DJ":"Djibouti","DM":"Dominica","DO":"Dominican Republic","EC":"Ecuador","EG":"Egypt","SV":"El Salvador","GQ":"Equatorial Guinea","ER":"Eritrea","EE":"Estonia","ET":"Ethiopia","FK":"Falkland Islands","FO":"Faroe Islands","FJ":"Fiji","FI":"Finland","FR":"France","GF":"French Guiana","PF":"French Polynesia","TF":"French Southern Territories","GA":"Gabon","GM":"Gambia","GE":"Georgia","DE":"Germany","GH":"Ghana","GI":"Gibraltar","GR":"Greece","GL":"Greenland","GD":"Grenada","GP":"Guadeloupe","GU":"Guam","GT":"Guatemala","GG":"Guernsey","GN":"Guinea","GW":"Guinea-Bissau","GY":"Guyana","HT":"Haiti","HM":"Heard Island and McDonald Islands","HN":"Honduras","HK":"Hong Kong SAR","HU":"Hungary","IS":"Iceland","IN":"India","ID":"Indonesia","IR":"Iran","IQ":"Iraq","IE":"Ireland","IM":"Isle of Man","IL":"Israel","IT":"Italy","JM":"Jamaica","JP":"Japan","JE":"Jersey","JO":"Jordan","KZ":"Kazakhstan","KE":"Kenya","KI":"Kiribati","KR":"Korea","KW":"Kuwait","KG":"Kyrgyzstan","LA":"Laos","LV":"Latvia","LB":"Lebanon","LS":"Lesotho","LR":"Liberia","LY":"Libya","LI":"Liechtenstein","LT":"Lithuania","LU":"Luxembourg","MO":"Macao SAR","MK":"North Macedonia","MG":"Madagascar","MW":"Malawi","MY":"Malaysia","MV":"Maldives","ML":"Mali","MT":"Malta","MH":"Marshall Islands","MQ":"Martinique","MR":"Mauritania","MU":"Mauritius","YT":"Mayotte","MX":"Mexico","FM":"Micronesia","MD":"Moldova","MC":"Monaco","MN":"Mongolia","ME":"Montenegro","MS":"Montserrat","MA":"Morocco","MZ":"Mozambique","MM":"Myanmar","NA":"Namibia","NR":"Nauru","NP":"Nepal","NL":"Netherlands","NC":"New Caledonia","NZ":"New 
Zealand","NI":"Nicaragua","NE":"Niger","NG":"Nigeria","NU":"Niue","NF":"Norfolk Island","KP":"North Korea","MP":"Northern Mariana Islands","NO":"Norway","OM":"Oman","PK":"Pakistan","PW":"Palau","PS":"Palestinian Authority","PA":"Panama","PG":"Papua New Guinea","PY":"Paraguay","PE":"Peru","PH":"Philippines","PN":"Pitcairn Islands","PL":"Poland","PT":"Portugal","PR":"Puerto Rico","QA":"Qatar","RE":"Réunion","RO":"Romania","RU":"Russia","RW":"Rwanda","BL":"Saint Barthélemy","KN":"Saint Kitts and Nevis","LC":"Saint Lucia","MF":"Saint Martin","PM":"Saint Pierre and Miquelon","VC":"Saint Vincent and the Grenadines","WS":"Samoa","SM":"San Marino","ST":"São Tomé and Príncipe","SA":"Saudi Arabia","SN":"Senegal","RS":"Serbia","SC":"Seychelles","SL":"Sierra Leone","SG":"Singapore","SX":"Sint Maarten","SK":"Slovakia","SI":"Slovenia","SB":"Solomon Islands","SO":"Somalia","ZA":"South Africa","GS":"South Georgia and South Sandwich Islands","SS":"South Sudan","ES":"Spain","LK":"Sri Lanka","SH":"St Helena, Ascension, Tristan da Cunha","SD":"Sudan","SR":"Suriname","SJ":"Svalbard","SZ":"Swaziland","SE":"Sweden","CH":"Switzerland","SY":"Syria","TW":"Taiwan","TJ":"Tajikistan","TZ":"Tanzania","TH":"Thailand","TL":"Timor-Leste","TG":"Togo","TK":"Tokelau","TO":"Tonga","TT":"Trinidad and Tobago","TN":"Tunisia","TR":"Turkey","TM":"Turkmenistan","TC":"Turks and Caicos Islands","TV":"Tuvalu","UM":"U.S. Outlying Islands","VI":"U.S. Virgin Islands","UG":"Uganda","UA":"Ukraine","AE":"United Arab Emirates","GB":"United Kingdom","US":"United States","UY":"Uruguay","UZ":"Uzbekistan","VU":"Vanuatu","VA":"Vatican City","VE":"Venezuela","VN":"Vietnam","WF":"Wallis and Futuna","YE":"Yemen","ZM":"Zambia","ZW":"Zimbabwe"}</LocalizedString> + <LocalizedString ElementType="UxElement" StringId="countryList">{"DEFAULT":"Country/Region","AF":"Afghanistan","AX":"Åland Islands","AL":"Albania","DZ":"Algeria","AS":"American Samoa","AD":"Andorra","AO":"Angola","AI":"Anguilla","AQ":"Antarctica","AG":"Antigua and Barbuda","AR":"Argentina","AM":"Armenia","AW":"Aruba","AU":"Australia","AT":"Austria","AZ":"Azerbaijan","BS":"Bahamas","BH":"Bahrain","BD":"Bangladesh","BB":"Barbados","BY":"Belarus","BE":"Belgium","BZ":"Belize","BJ":"Benin","BM":"Bermuda","BT":"Bhutan","BO":"Bolivia","BQ":"Bonaire","BA":"Bosnia and Herzegovina","BW":"Botswana","BV":"Bouvet Island","BR":"Brazil","IO":"British Indian Ocean Territory","VG":"British Virgin Islands","BN":"Brunei","BG":"Bulgaria","BF":"Burkina Faso","BI":"Burundi","CV":"Cabo Verde","KH":"Cambodia","CM":"Cameroon","CA":"Canada","KY":"Cayman Islands","CF":"Central African Republic","TD":"Chad","CL":"Chile","CN":"China","CX":"Christmas Island","CC":"Cocos (Keeling) Islands","CO":"Colombia","KM":"Comoros","CG":"Congo","CD":"Congo (DRC)","CK":"Cook Islands","CR":"Costa Rica","CI":"Côte d'Ivoire","HR":"Croatia","CU":"Cuba","CW":"Curaçao","CY":"Cyprus","CZ":"Czech Republic","DK":"Denmark","DJ":"Djibouti","DM":"Dominica","DO":"Dominican Republic","EC":"Ecuador","EG":"Egypt","SV":"El Salvador","GQ":"Equatorial Guinea","ER":"Eritrea","EE":"Estonia","ET":"Ethiopia","FK":"Falkland Islands","FO":"Faroe Islands","FJ":"Fiji","FI":"Finland","FR":"France","GF":"French Guiana","PF":"French Polynesia","TF":"French Southern Territories","GA":"Gabon","GM":"Gambia","GE":"Georgia","DE":"Germany","GH":"Ghana","GI":"Gibraltar","GR":"Greece","GL":"Greenland","GD":"Grenada","GP":"Guadeloupe","GU":"Guam","GT":"Guatemala","GG":"Guernsey","GN":"Guinea","GW":"Guinea-Bissau","GY":"Guyana","HT":"Haiti","HM":"Heard Island and McDonald 
Islands","HN":"Honduras","HK":"Hong Kong SAR","HU":"Hungary","IS":"Iceland","IN":"India","ID":"Indonesia","IR":"Iran","IQ":"Iraq","IE":"Ireland","IM":"Isle of Man","IL":"Israel","IT":"Italy","JM":"Jamaica","JP":"Japan","JE":"Jersey","JO":"Jordan","KZ":"Kazakhstan","KE":"Kenya","KI":"Kiribati","KR":"Korea","KW":"Kuwait","KG":"Kyrgyzstan","LA":"Laos","LV":"Latvia","LB":"Lebanon","LS":"Lesotho","LR":"Liberia","LY":"Libya","LI":"Liechtenstein","LT":"Lithuania","LU":"Luxembourg","MO":"Macao SAR","MK":"North Macedonia","MG":"Madagascar","MW":"Malawi","MY":"Malaysia","MV":"Maldives","ML":"Mali","MT":"Malta","MH":"Marshall Islands","MQ":"Martinique","MR":"Mauritania","MU":"Mauritius","YT":"Mayotte","MX":"Mexico","FM":"Micronesia","MD":"Moldova","MC":"Monaco","MN":"Mongolia","ME":"Montenegro","MS":"Montserrat","MA":"Morocco","MZ":"Mozambique","MM":"Myanmar","NA":"Namibia","NR":"Nauru","NP":"Nepal","NL":"Netherlands","NC":"New Caledonia","NZ":"New Zealand","NI":"Nicaragua","NE":"Niger","NG":"Nigeria","NU":"Niue","NF":"Norfolk Island","KP":"North Korea","MP":"Northern Mariana Islands","NO":"Norway","OM":"Oman","PK":"Pakistan","PW":"Palau","PS":"Palestinian Authority","PA":"Panama","PG":"Papua New Guinea","PY":"Paraguay","PE":"Peru","PH":"Philippines","PN":"Pitcairn Islands","PL":"Poland","PT":"Portugal","PR":"Puerto Rico","QA":"Qatar","RE":"Réunion","RO":"Romania","RU":"Russia","RW":"Rwanda","BL":"Saint Barthélemy","KN":"Saint Kitts and Nevis","LC":"Saint Lucia","MF":"Saint Martin","PM":"Saint Pierre and Miquelon","VC":"Saint Vincent and the Grenadines","WS":"Samoa","SM":"San Marino","ST":"São Tomé and Príncipe","SA":"Saudi Arabia","SN":"Senegal","RS":"Serbia","SC":"Seychelles","SL":"Sierra Leone","SG":"Singapore","SX":"Sint Maarten","SK":"Slovakia","SI":"Slovenia","SB":"Solomon Islands","SO":"Somalia","ZA":"South Africa","GS":"South Georgia and South Sandwich Islands","SS":"South Sudan","ES":"Spain","LK":"Sri Lanka","SH":"St Helena, Ascension, Tristan da Cunha","SD":"Sudan","SR":"Suriname","SJ":"Svalbard","SZ":"Swaziland","SE":"Sweden","CH":"Switzerland","SY":"Syria","TW":"Taiwan","TJ":"Tajikistan","TZ":"Tanzania","TH":"Thailand","TL":"Timor-Leste","TG":"Togo","TK":"Tokelau","TO":"Tonga","TT":"Trinidad and Tobago","TN":"Tunisia","TR":"Türkiye","TM":"Turkmenistan","TC":"Turks and Caicos Islands","TV":"Tuvalu","UM":"U.S. Outlying Islands","VI":"U.S. Virgin Islands","UG":"Uganda","UA":"Ukraine","AE":"United Arab Emirates","GB":"United Kingdom","US":"United States","UY":"Uruguay","UZ":"Uzbekistan","VU":"Vanuatu","VA":"Vatican City","VE":"Venezuela","VN":"Vietnam","WF":"Wallis and Futuna","YE":"Yemen","ZM":"Zambia","ZW":"Zimbabwe"}</LocalizedString> <LocalizedString ElementType="UxElement" StringId="error_448">The phone number you provided is unreachable.</LocalizedString> <LocalizedString ElementType="UxElement" StringId="error_449">User has exceeded the number of retry attempts.</LocalizedString> <LocalizedString ElementType="UxElement" StringId="verification_code_input_placeholder_text">Verification code</LocalizedString> |
active-directory | Customize Application Attributes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/customize-application-attributes.md | There are four different mapping types supported: - **Direct** – the target attribute is populated with the value of an attribute of the linked object in Azure AD. - **Constant** – the target attribute is populated with a specific string you specified.-- **Expression** - the target attribute is populated based on the result of a script-like expression.- For more information, see [Writing Expressions for Attribute-Mappings in Azure Active Directory](../app-provisioning/functions-for-customizing-application-data.md). +- **Expression** - the target attribute is populated based on the result of a script-like expression. For more information about expressions, see [Writing Expressions for Attribute-Mappings in Azure Active Directory](../app-provisioning/functions-for-customizing-application-data.md). - **None** - the target attribute is left unmodified. However, if the target attribute is ever empty, it's populated with the Default value that you specify. Along with these four basic types, custom attribute-mappings support the concept of an optional **default** value assignment. The default value assignment ensures that a target attribute is populated with a value if there's not a value in Azure AD or on the target object. The most common configuration is to leave this blank. ### Understanding attribute-mapping properties -In the previous section, you were already introduced to the attribute-mapping type property. -Along with this property, attribute-mappings also support the following attributes: +In the previous section, you were introduced to the attribute-mapping type property. +Along with this property, attribute-mappings also supports the attributes: - **Source attribute** - The user attribute from the source system (example: Azure Active Directory). - **Target attribute** – The user attribute in the target system (example: ServiceNow).-- **Default value if null (optional)** - The value that is passed to the target system if the source attribute is null. This value is only provisioned when a user is created. The "default value when null" won't be provisioned when updating an existing user. If for example, you provision all existing users in the target system with a particular Job Title (when it's null in the source system), you'll use the following [expression](../app-provisioning/functions-for-customizing-application-data.md): Switch(IsPresent([jobTitle]), "DefaultValue", "True", [jobTitle]). Make sure to replace the "Default Value" with the value to provision when null in the source system. +- **Default value if null (optional)** - The value that is passed to the target system if the source attribute is null. This value is only provisioned when a user is created. The "default value when null" isn't provisioned when updating an existing user. For example, add a default value for job title, when creating a user, with the expression: `Switch(IsPresent([jobTitle]), "DefaultValue", "True", [jobTitle])`. For more information about expressions, see [Reference for writing expressions for attribute mappings in Azure Active Directory](../app-provisioning/functions-for-customizing-application-data.md). - **Match objects using this attribute** – Whether this mapping should be used to uniquely identify users between the source and target systems. 
It's typically set on the userPrincipalName or mail attribute in Azure AD, which is typically mapped to a username field in a target application.-- **Matching precedence** – Multiple matching attributes can be set. When there are multiple, they're evaluated in the order defined by this field. As soon as a match is found, no further matching attributes are evaluated. While you can set as many matching attributes as you would like, consider whether the attributes you're using as matching attributes are truly unique and need to be matching attributes. Generally customers have 1 or 2 matching attributes in their configuration. +- **Matching precedence** – Multiple matching attributes can be set. When there are multiple, they're evaluated in the order defined by this field. As soon as a match is found, no further matching attributes are evaluated. While you can set as many matching attributes as you would like, consider whether the attributes you're using as matching attributes are truly unique and need to be matching attributes. Generally customers have one or two matching attributes in their configuration. - **Apply this mapping** - **Always** – Apply this mapping on both user creation and update actions. - **Only during creation** - Apply this mapping only on user creation actions. Along with this property, attribute-mappings also support the following attribut The Azure AD provisioning service can be deployed in both "green field" scenarios (where users don't exist in the target system) and "brownfield" scenarios (where users already exist in the target system). To support both scenarios, the provisioning service uses the concept of matching attributes. Matching attributes allow you to determine how to uniquely identify a user in the source and match the user in the target. As part of planning your deployment, identify the attribute that can be used to uniquely identify a user in the source and target systems. Things to note: - **Matching attributes should be unique:** Customers often use attributes such as userPrincipalName, mail, or object ID as the matching attribute.-- **Multiple attributes can be used as matching attributes:** You can define multiple attributes to be evaluated when matching users and the order in which they're evaluated (defined as matching precedence in the UI). If for example, you define three attributes as matching attributes, and a user is uniquely matched after evaluating the first two attributes, the service won't evaluate the third attribute. The service will evaluate matching attributes in the order specified and stop evaluating when a match is found. +- **Multiple attributes can be used as matching attributes:** You can define multiple attributes to be evaluated when matching users and the order in which they're evaluated (defined as matching precedence in the UI). If for example, you define three attributes as matching attributes, and a user is uniquely matched after evaluating the first two attributes, the service won't evaluate the third attribute. The service evaluates matching attributes in the order specified and stops evaluating when a match is found. - **The value in the source and the target don't have to match exactly:** The value in the target can be a function of the value in the source. So, one could have an emailAddress attribute in the source and the userPrincipalName in the target, and match by a function of the emailAddress attribute that replaces some characters with some constant value. 
- **Matching based on a combination of attributes isn't supported:** Most applications don't support querying based on two properties. Therefore, it's not possible to match based on a combination of attributes. It's possible to evaluate single properties on after another. - **All users must have a value for at least one matching attribute:** If you define one matching attribute, all users must have a value for that attribute in the source system. If for example, you define userPrincipalName as the matching attribute, all users must have a userPrincipalName. If you define multiple matching attributes (for example, both extensionAttribute1 and mail), not all users have to have the same matching attribute. One user could have a extensionAttribute1 but not mail while another user could have mail but no extensionAttribute1. Applications and systems that support customization of the attribute list includ - ServiceNow - Workday to Active Directory / Workday to Azure Active Directory - SuccessFactors to Active Directory / SuccessFactors to Azure Active Directory-- Azure Active Directory ([Azure AD Graph API default attributes](/previous-versions/azure/ad/graph/api/entity-and-complex-type-reference#user-entity) and custom directory extensions are supported). Learn more about [creating extensions](./user-provisioning-sync-attributes-for-mapping.md) and [known limitations](./known-issues.md). +- Azure Active Directory ([Azure AD Graph API default attributes](/previous-versions/azure/ad/graph/api/entity-and-complex-type-reference#user-entity) and custom directory extensions are supported). For more information about creating extensions, see [Syncing extension attributes for Azure Active Directory Application Provisioning](./user-provisioning-sync-attributes-for-mapping.md) and [Known issues for provisioning in Azure Active Directory](./known-issues.md). - Apps that support [SCIM 2.0](https://tools.ietf.org/html/rfc7643)-- For Azure Active Directory writeback to Workday or SuccessFactors, it's supported to update relevant metadata for supported attributes (XPATH and JSONPath), but isn't supported to add new Workday or SuccessFactors attributes beyond those included in the default schema+- Azure Active Directory supports writeback to Workday or SuccessFactors for XPATH and JSONPath metadata. Azure Active Directory doesn't support new Workday or SuccessFactors attributes not included in the default schema. > [!NOTE] The SCIM RFC defines a core user and group schema, while also allowing for exten 4. Select **Edit attribute list for AppName**. 5. At the bottom of the attribute list, enter information about the custom attribute in the fields provided. Then select **Add Attribute**. -For SCIM applications, the attribute name must follow the pattern shown in the example below. The "CustomExtensionName" and "CustomAttribute" can be customized per your application's requirements, for example: urn:ietf:params:scim:schemas:extension:CustomExtensionName:2.0:User:CustomAttribute +For SCIM applications, the attribute name must follow the pattern shown in the example. The "CustomExtensionName" and "CustomAttribute" can be customized per your application's requirements, for example: urn:ietf:params:scim:schemas:extension:CustomExtensionName:2.0:User:CustomAttribute These instructions are only applicable to SCIM-enabled applications. Applications such as ServiceNow and Salesforce aren't integrated with Azure AD using SCIM, and therefore they don't require this specific namespace when adding a custom attribute. 
Custom attributes can't be referential attributes, multi-value or complex-typed ## Provisioning a role to a SCIM app-Use the steps below to provision roles for a user to your application. Note that the description below is specific to custom SCIM applications. For gallery applications such as Salesforce and ServiceNow, use the pre-defined role mappings. The bullets below describe how to transform the AppRoleAssignments attribute to the format your application expects. +Use the steps in the example to provision roles for a user to your application. Note that the description is specific to custom SCIM applications. For gallery applications such as Salesforce and ServiceNow, use the predefined role mappings. The bullets describe how to transform the AppRoleAssignments attribute to the format your application expects. - Mapping an appRoleAssignment in Azure AD to a role in your application requires that you transform the attribute using an [expression](../app-provisioning/functions-for-customizing-application-data.md). The appRoleAssignment attribute **shouldn't be mapped directly** to a role attribute without using an expression to parse the role details. The request formats in the PATCH and POST differ. To ensure that POST and PATCH <br> - Then use the AppRoleAssignmentsComplex expression to map to the custom role attribute as shown in the image below: + Then use the AppRoleAssignmentsComplex expression to map to the custom role attribute as shown in the image: <br> - **Things to consider** The request formats in the PATCH and POST differ. To ensure that POST and PATCH ## Provisioning a multi-value attribute-Certain attributes such as phoneNumbers and emails are multi-value attributes where you may need to specify different types of phone numbers or emails. Use the expression below for multi-value attributes. It allows you to specify the attribute type and map that to the corresponding Azure AD user attribute for the value. +Certain attributes such as phoneNumbers and emails are multi-value attributes where you may need to specify different types of phone numbers or emails. Use the expression for multi-value attributes. It allows you to specify the attribute type and map that to the corresponding Azure AD user attribute for the value. * phoneNumbers[type eq "work"].value * phoneNumbers[type eq "mobile"].value Selecting this option will effectively force a resynchronization of all users wh - The attribute IsSoftDeleted is often part of the default mappings for an application. IsSoftdeleted can be true in one of four scenarios (the user is out of scope due to being unassigned from the application, the user is out of scope due to not meeting a scoping filter, the user has been soft deleted in Azure AD, or the property AccountEnabled is set to false on the user). It's not recommended to remove the IsSoftDeleted attribute from your attribute mappings. - The Azure AD provisioning service doesn't support provisioning null values. - They primary key, typically "ID", shouldn't be included as a target attribute in your attribute mappings. -- The role attribute typically needs to be mapped using an expression, rather than a direct mapping. See section above for more details on role mapping. +- The role attribute typically needs to be mapped using an expression, rather than a direct mapping. For more information about role mapping, see [Provisioning a role to a SCIM app](#Provisioning a role to a SCIM app). - While you can disable groups from your mappings, disabling users isn't supported. 
## Next steps |
active-directory | Concept Authentication Passwordless | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-passwordless.md | The following providers offer FIDO2 security keys of different form factors that | Fortinet | ![n] | ![y]| ![n]| ![n]| ![n] | https://www.fortinet.com/ | | Giesecke + Devrient (G+D) | ![y] | ![y]| ![y]| ![y]| ![n] | https://www.gi-de.com/en/identities/enterprise-security/hardware-based-authentication | | GoTrustID Inc. | ![n] | ![y]| ![y]| ![y]| ![n] | https://www.gotrustid.com/idem-key |-| HID | ![n] | ![y]| ![y]| ![n]| ![n] | https://www.hidglobal.com/contact-us | +| HID | ![n] | ![y]| ![y]| ![n]| ![n] | https://www.hidglobal.com/products/crescendo-key | | Hypersecu | ![n] | ![y]| ![n]| ![n]| ![n] | https://www.hypersecu.com/hyperfido |+| Hypr | ![y] | ![y]| ![n]| ![y]| ![n] | https://www.hypr.com/true-passwordless-mfa | | Identiv | ![n] | ![y]| ![y]| ![n]| ![n] | https://www.identiv.com/products/logical-access-control/utrust-fido2-security-keys/nfc | | IDmelon Technologies Inc. | ![y] | ![y]| ![y]| ![y]| ![n] | https://www.idmelon.com/#idmelon | | Kensington | ![y] | ![y]| ![n]| ![n]| ![n] | https://www.kensington.com/solutions/product-category/why-biometrics/ | The following providers offer FIDO2 security keys of different form factors that | Thales Group | ![n] | ![y]| ![y]| ![n]| ![y] | https://cpl.thalesgroup.com/access-management/authenticators/fido-devices | | Thetis | ![y] | ![y]| ![y]| ![y]| ![n] | https://thetis.io/collections/fido2 | | Token2 Switzerland | ![y] | ![y]| ![y]| ![n]| ![n] | https://www.token2.swiss/shop/product/token2-t2f2-alu-fido2-u2f-and-totp-security-key |+| Token Ring | ![y] | ![n]| ![y]| ![n]| ![n] | https://www.tokenring.com/ | | TrustKey Solutions | ![y] | ![y]| ![n]| ![n]| ![n] | https://www.trustkeysolutions.com/security-keys/ | | VinCSS | ![n] | ![y]| ![n]| ![n]| ![n] | https://passwordless.vincss.net |+| WiSECURE Technologies | ![n] | ![y]| ![n]| ![n]| ![n] | https://wisecure-tech.com/en-us/zero-trust/fido/authtron | | Yubico | ![y] | ![y]| ![y]| ![n]| ![y] | https://www.yubico.com/solutions/passwordless/ | + <!--Image references--> [y]: ./media/fido2-compatibility/yes.png [n]: ./media/fido2-compatibility/no.png |
active-directory | Concept Fido2 Hardware Vendor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-fido2-hardware-vendor.md | The following table lists partners who are Microsoft-compatible FIDO2 security k | Fortinet | ![n] | ![y]| ![n]| ![n]| ![n] | https://www.fortinet.com/ | | Giesecke + Devrient (G+D) | ![y] | ![y]| ![y]| ![y]| ![n] | https://www.gi-de.com/en/identities/enterprise-security/hardware-based-authentication | | GoTrustID Inc. | ![n] | ![y]| ![y]| ![y]| ![n] | https://www.gotrustid.com/idem-key |-| HID | ![n] | ![y]| ![y]| ![n]| ![n] | https://www.hidglobal.com/contact-us | +| HID | ![n] | ![y]| ![y]| ![n]| ![n] | https://www.hidglobal.com/products/crescendo-key | | Hypersecu | ![n] | ![y]| ![n]| ![n]| ![n] | https://www.hypersecu.com/hyperfido |+| Hypr | ![y] | ![y]| ![n]| ![y]| ![n] | https://www.hypr.com/true-passwordless-mfa | +| Identiv | ![n] | ![y]| ![y]| ![n]| ![n] | https://www.identiv.com/products/logical-access-control/utrust-fido2-security-keys/nfc | | IDmelon Technologies Inc. | ![y] | ![y]| ![y]| ![y]| ![n] | https://www.idmelon.com/#idmelon | | Kensington | ![y] | ![y]| ![n]| ![n]| ![n] | https://www.kensington.com/solutions/product-category/why-biometrics/ | | KONA I | ![y] | ![n]| ![y]| ![y]| ![n] | https://konai.com/business/security/fido |+| Movenda | ![y] | ![n]| ![y]| ![y]| ![n] | https://www.movenda.com/en/authentication/fido2/overview | | NeoWave | ![n] | ![y]| ![y]| ![n]| ![n] | https://neowave.fr/en/products/fido-range/ | | Nymi | ![y] | ![n]| ![y]| ![n]| ![n] | https://www.nymi.com/nymi-band | | Octatco | ![y] | ![y]| ![n]| ![n]| ![n] | https://octatco.com/ | The following table lists partners who are Microsoft-compatible FIDO2 security k | Thales Group | ![n] | ![y]| ![y]| ![n]| ![y] | https://cpl.thalesgroup.com/access-management/authenticators/fido-devices | | Thetis | ![y] | ![y]| ![y]| ![y]| ![n] | https://thetis.io/collections/fido2 | | Token2 Switzerland | ![y] | ![y]| ![y]| ![n]| ![n] | https://www.token2.swiss/shop/product/token2-t2f2-alu-fido2-u2f-and-totp-security-key |+| Token Ring | ![y] | ![n]| ![y]| ![n]| ![n] | https://www.tokenring.com/ | | TrustKey Solutions | ![y] | ![y]| ![n]| ![n]| ![n] | https://www.trustkeysolutions.com/security-keys/ | | VinCSS | ![n] | ![y]| ![n]| ![n]| ![n] | https://passwordless.vincss.net |+| WiSECURE Technologies | ![n] | ![y]| ![n]| ![n]| ![n] | https://wisecure-tech.com/en-us/zero-trust/fido/authtron | | Yubico | ![y] | ![y]| ![y]| ![n]| ![y] | https://www.yubico.com/solutions/passwordless/ | ++ <!--Image references--> [y]: ./media/fido2-compatibility/yes.png [n]: ./media/fido2-compatibility/no.png |
active-directory | Howto Mfa Getstarted | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-getstarted.md | For more information, and additional Azure AD Multi-Factor Authentication report ### Troubleshoot Azure AD Multi-Factor Authentication See [Troubleshooting Azure AD Multi-Factor Authentication](https://support.microsoft.com/help/2937344/troubleshooting-azure-multi-factor-authentication-issues) for common issues. +## Guided walkthrough ++For a guided walkthrough of many of the recommendations in this article, see the [Microsoft 365 Configure multifactor authentication guided walkthrough](https://go.microsoft.com/fwlink/?linkid=2221401). + ## Next steps [Deploy other identity features](../fundamentals/active-directory-deployment-plans.md) |
active-directory | Howto Sspr Deployment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-sspr-deployment.md | For more information about pricing, see [Azure Active Directory pricing](https:/ * An account with Global Administrator privileges. +### Guided walkthrough ++For a guided walkthrough of many of the recommendations in this article, see the [Plan your self-service password reset deployment](https://go.microsoft.com/fwlink/?linkid=2221600) guide. ### Training resources |
active-directory | Concept Continuous Access Evaluation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-continuous-access-evaluation.md | Networks and network services used by clients connecting to identity and resourc CAE only has insight into [IP-based named locations](../conditional-access/location-condition.md#ipv4-and-ipv6-address-ranges). CAE doesn't have insight into other location conditions like [MFA trusted IPs](../authentication/howto-mfa-mfasettings.md#trusted-ips) or country-based locations. When a user comes from an MFA trusted IP, trusted location that includes MFA Trusted IPs, or country location, CAE won't be enforced after that user moves to a different location. In those cases, Azure AD will issue a one-hour access token without instant IP enforcement check. > [!IMPORTANT]-> If you want your location policies to be enforced in real time by continuous access evaluation, use only the [IP based Conditional Access location condition](../conditional-access/location-condition.md) and configure all IP addresses, **including both IPv4 and IPv6**, that can be seen by your identity provider and resources provider. Do not use country location conditions or the trusted ips feature that is available in Azure AD Multifactor Authentication's service settings page. +> If you want your location policies to be enforced in real time by continuous access evaluation, use only the [IP based Conditional Access location condition](../conditional-access/location-condition.md) and configure all IP addresses, **including both IPv4 and IPv6**, that can be seen by your identity provider and resources provider. Do not use country/region location conditions or the trusted ips feature that is available in Azure AD Multifactor Authentication's service settings page. ### Named location limitations |
active-directory | Concept Token Protection | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-token-protection.md | description: Learn how to use token protection in Conditional Access policies. Previously updated : 03/09/2023 Last updated : 03/24/2023 Token protection creates a cryptographically secure tie between the token and th > [!IMPORTANT] > Token protection is currently in public preview. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). -With this preview, we're giving you the ability to create a Conditional Access policy to require token protection for sign-in tokens for specific services. We support token protection for sign-in tokens in Conditional Access for desktop applications accessing Exchange Online and SharePoint Online on Windows devices. +With this preview, we're giving you the ability to create a Conditional Access policy to require token protection for sign-in tokens (refresh tokens) for specific services. We support token protection for sign-in tokens in Conditional Access for desktop applications accessing Exchange Online and SharePoint Online on Windows devices. ++> [!NOTE] +> We may interchange sign in tokens and refresh tokens in this content. This preview doesn't currently support access tokens or web cookies. :::image type="content" source="media/concept-token-protection/complete-policy-components-session.png" alt-text="Screenshot showing a Conditional Access policy requiring token protection as the session control"::: |
active-directory | Location Condition | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/location-condition.md | The location found using the public IP address a client provides to Azure Active ## Named locations -Locations exist in the Azure portal under **Azure Active Directory** > **Security** > **Conditional Access** > **Named locations**. These named network locations may include locations like an organization's headquarters network ranges, VPN network ranges, or ranges that you wish to block. Named locations are defined by IPv4 and IPv6 address ranges or by countries. +Locations exist in the Azure portal under **Azure Active Directory** > **Security** > **Conditional Access** > **Named locations**. These named network locations may include locations like an organization's headquarters network ranges, VPN network ranges, or ranges that you wish to block. Named locations are defined by IPv4 and IPv6 address ranges or by countries/regions.  Locations such as your organization's public network ranges can be marked as tru Organizations can determine country location by IP address or GPS coordinates. -To define a named location by country, you need to provide: +To define a named location by country/region, you need to provide: - A **Name** for the location. - Choose to determine location by IP address or GPS coordinates.-- Add one or more countries.+- Add one or more countries/regions. - Optionally choose to **Include unknown countries/regions**.  -If you select **Determine location by IP address**, the system collects the IP address of the device the user is signing into. When a user signs in, Azure AD resolves the user's IPv4 or [IPv6](/troubleshoot/azure/active-directory/azure-ad-ipv6-support) address (starting April 3, 2023) to a country or region, and the mapping updates periodically. Organizations can use named locations defined by countries to block traffic from countries where they don't do business. +If you select **Determine location by IP address**, the system collects the IP address of the device the user is signing into. When a user signs in, Azure AD resolves the user's IPv4 or [IPv6](/troubleshoot/azure/active-directory/azure-ad-ipv6-support) address (starting April 3, 2023) to a country or region, and the mapping updates periodically. Organizations can use named locations defined by countries/regions to block traffic from countries/regions where they don't do business. If you select **Determine location by GPS coordinates**, the user needs to have the Microsoft Authenticator app installed on their mobile device. Every hour, the system contacts the user's Microsoft Authenticator app to collect the GPS location of the user's mobile device. |
active-directory | Multi Service Web App Access Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/multi-service-web-app-access-storage.md | To create a general-purpose v2 storage account in the Azure portal, follow these 1. On the Azure portal menu, select **All services**. In the list of resources, enter **Storage Accounts**. As you begin typing, the list filters based on your input. Select **Storage Accounts**. -1. In the **Storage Accounts** window that appears, select **Add**. +1. In the **Storage Accounts** window that appears, select **Create**. 1. Select the subscription in which to create the storage account. To create a general-purpose v2 storage account in the Azure portal, follow these 1. Select a location for your storage account, or use the default location. -1. Leave these fields set to their default values: +1. For **Performance**, select the **Standard** option. - |Field|Value| - |--|--| - |Deployment model|Resource Manager| - |Performance|Standard| - |Account kind|StorageV2 (general-purpose v2)| - |Replication|Read-access geo-redundant storage (RA-GRS)| - |Access tier|Hot| +1. For **Redundancy**, select the **Locally-redundant storage (LRS)** option from the dropdown. -1. Select **Review + Create** to review your storage account settings and create the account. +1. Select **Review** to review your storage account settings and create the account. 1. Select **Create**. To create a Blob Storage container in Azure Storage, follow these steps. 1. Go to your new storage account in the Azure portal. -1. In the left menu for the storage account, scroll to the **Blob service** section, and then select **Containers**. +1. In the left menu for the storage account, scroll to the **Data storage** section, and then select **Containers**. 1. Select the **+ Container** button. To create a Blob Storage container in Azure Storage, follow these steps. 1. Set the level of public access to the container. The default level is **Private (no anonymous access)**. -1. Select **OK** to create the container. +1. Select **Create** to create the container. # [PowerShell](#tab/azure-powershell) You need to grant your web app access to the storage account before you can crea In the [Azure portal](https://portal.azure.com), go into your storage account to grant your web app access. Select **Access control (IAM)** in the left pane, and then select **Role assignments**. You'll see a list of who has access to the storage account. Now you want to add a role assignment to a robot, the app service that needs access to the storage account. Select **Add** > **Add role assignment** to open the **Add role assignment** page. -Assign the **Storage Blob Data Contributor** role to the **App Service** at subscription scope. For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md). +1. In the **Assignment type** tab, select **Job function type** and then **Next**. ++1. In the **Role** tab, select **Storage Blob Data Contributor** role from the dropdown and then select **Next**. ++1. In the **Members** tab, select **Assign access to** -> **Managed identity** and then select **Members** -> **Select members**. In the **Select managed identities** window, find and select the managed identity created for your App Service in the **Managed identity** dropdown. Select the **Select** button. ++1. Select **Review and assign** and then select **Review and assign** once more. 
+ +For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md). Your web app now has access to your storage account. |
active-directory | Scenario Desktop Acquire Token | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-desktop-acquire-token.md | There are various ways you can acquire tokens in a desktop application. - [Device code flow](scenario-desktop-acquire-token-device-code-flow.md) ++> [!IMPORTANT] +If users need to use multi-factor authentication (MFA) to log in to the application, they will be blocked instead. + ## Next steps Move on to the next article in this scenario, |
active-directory | Single Sign Out Saml Protocol | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/single-sign-out-saml-protocol.md | Per section 3.7 of the [SAML 2.0 core specification](http://docs.oasis-open.org/ The `Issuer` element in a `LogoutRequest` must exactly match one of the **ServicePrincipalNames** in the cloud service in Azure AD. Typically, this is set to the **App ID URI** that is specified during application registration. ### NameID-The value of the `NameID` element must exactly match the `NameID` of the user that is being signed out. +The value of the `NameID` element must exactly match the `NameID` of the user that is being signed out. ++> [!NOTE] +> During SAML logout request, the `NameID` value is not considered by Azure Active Directory. +> If a single user session is active, Azure Active Directory will automatically select that session and the SAML logout will proceed. +> If multiple user sessions are active, Azure Active Directory will enumerate the active sessions for user selection. After user selection, the SAML logout will proceed. ## LogoutResponse Azure AD sends a `LogoutResponse` in response to a `LogoutRequest` element. The following excerpt shows a sample `LogoutResponse`. |
active-directory | Supported Accounts Validation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/supported-accounts-validation.md | -# required metadata Title: Validation differences by supported account types description: Learn about the validation differences of various properties for different supported account types when registering your app with the Microsoft identity platform. Previously updated : 09/29/2021 Last updated : 03/24/2023 -+ If you change this property you may need to change other properties first. See the following table for the validation differences of various properties for different supported account types. -| Property | `AzureADMyOrg` | `AzureADMultipleOrgs` | `AzureADandPersonalMicrosoftAccount` and `PersonalMicrosoftAccount` | -| | | - | | -| Application ID URI (`identifierURIs`) | Must be unique in the tenant <br><br> urn:// schemes are supported <br><br> Wildcards aren't supported <br><br> Query strings and fragments are supported <br><br> Maximum length of 255 characters <br><br> No limit\* on number of identifierURIs | Must be globally unique <br><br> urn:// schemes are supported <br><br> Wildcards aren't supported <br><br> Query strings and fragments are supported <br><br> Maximum length of 255 characters <br><br> No limit\* on number of identifierURIs | Must be globally unique <br><br> urn:// schemes aren't supported <br><br> Wildcards, fragments, and query strings aren't supported <br><br> Maximum length of 120 characters <br><br> Maximum of 50 identifierURIs | -| Certificates (`keyCredentials`) | Symmetric signing key | Symmetric signing key | Encryption and asymmetric signing key | -| Client secrets (`passwordCredentials`) | No limit\* | No limit\* | If liveSDK is enabled: Maximum of two client secrets | -| Redirect URIs (`replyURLs`) | See [Redirect URI/reply URL restrictions and limitations](reply-url.md) for more info. | | | -| API permissions (`requiredResourceAccess`) | No more than 50 APIs (resource apps) from the same tenant as the application, no more than 10 APIs from other tenants, and no more than 400 permissions total across all APIs. | No more than 50 APIs (resource apps) from the same tenant as the application, no more than 10 APIs from other tenants, and no more than 400 permissions total across all APIs. | Maximum of 50 resources per application and 30 permissions per resource (for example, Microsoft Graph). Total limit of 200 per application (resources x permissions). 
| -| Scopes defined by this API (`oauth2Permissions`) | Maximum scope name length of 120 characters <br><br> No limit\* on the number of scopes defined | Maximum scope name length of 120 characters <br><br> No limit\* on the number of scopes defined | Maximum scope name length of 40 characters <br><br> Maximum of 100 scopes defined | -| Authorized client applications (`preAuthorizedApplications`) | No limit\* | No limit\* | Total maximum of 500 <br><br> Maximum of 100 client apps defined <br><br> Maximum of 30 scopes defined per client | -| appRoles | Supported <br> No limit\* | Supported <br> No limit\* | Not supported | -| Front-channel logout URL | https://localhost is allowed <br><br> `http` scheme isn't allowed <br><br> Maximum length of 255 characters | https://localhost is allowed <br><br> `http` scheme isn't allowed <br><br> Maximum length of 255 characters | https://localhost is allowed, http://localhost fails <br><br> `http` scheme isn't allowed <br><br> Maximum length of 255 characters <br><br> Wildcards aren't supported | -| Display name | Maximum length of 120 characters | Maximum length of 120 characters | Maximum length of 90 characters | -| Tags | Individual tag size must be between 1 and 256 characters (inclusive) <br><br> No whitespaces or duplicate tags allowed <br><br> No limit\* on number of tags | Individual tag size must be between 1 and 256 characters (inclusive) <br><br> No whitespaces or duplicate tags allowed <br><br> No limit\* on number of tags | Individual tag size must be between 1 and 256 characters (inclusive) <br><br> No whitespaces or duplicate tags allowed <br><br> No limit\* on number of tags | +| Property | `AzureADMyOrg` | `AzureADMultipleOrgs` | `AzureADandPersonalMicrosoftAccount` and `PersonalMicrosoftAccount` | +| -- | | | -- | +| Application ID URI (`identifierURIs`) | Must be unique in the tenant <br><br> `urn://` schemes are supported <br><br> Wildcards aren't supported <br><br> Query strings and fragments are supported <br><br> Maximum length of 255 characters <br><br> No limit\* on number of identifierURIs | Must be globally unique <br><br> `urn://` schemes are supported <br><br> Wildcards aren't supported <br><br> Query strings and fragments are supported <br><br> Maximum length of 255 characters <br><br> No limit\* on number of identifierURIs | Must be globally unique <br><br> urn:// schemes aren't supported <br><br> Wildcards, fragments, and query strings aren't supported <br><br> Maximum length of 120 characters <br><br> Maximum of 50 identifierURIs | +| National clouds | Supported | Supported | Not supported | +| Certificates (`keyCredentials`) | Symmetric signing key | Symmetric signing key | Encryption and asymmetric signing key | +| Client secrets (`passwordCredentials`) | No limit\* | No limit\* | If liveSDK is enabled: Maximum of two client secrets | +| Redirect URIs (`replyURLs`) | See [Redirect URI/reply URL restrictions and limitations](reply-url.md) for more info. | | | +| API permissions (`requiredResourceAccess`) | No more than 50 APIs (resource apps) from the same tenant as the application, no more than 10 APIs from other tenants, and no more than 400 permissions total across all APIs. | No more than 50 APIs (resource apps) from the same tenant as the application, no more than 10 APIs from other tenants, and no more than 400 permissions total across all APIs. | Maximum of 50 resources per application and 30 permissions per resource (for example, Microsoft Graph). Total limit of 200 per application (resources x permissions). 
| +| Scopes defined by this API (`oauth2Permissions`) | Maximum scope name length of 120 characters <br><br> No limit\* on the number of scopes defined | Maximum scope name length of 120 characters <br><br> No limit\* on the number of scopes defined | Maximum scope name length of 40 characters <br><br> Maximum of 100 scopes defined | +| Authorized client applications (`preAuthorizedApplications`) | No limit\* | No limit\* | Total maximum of 500 <br><br> Maximum of 100 client apps defined <br><br> Maximum of 30 scopes defined per client | +| appRoles | Supported <br> No limit\* | Supported <br> No limit\* | Not supported | +| Front-channel logout URL | `https://localhost` is allowed <br><br> `http` scheme isn't allowed <br><br> Maximum length of 255 characters | `https://localhost` is allowed <br><br> `http` scheme isn't allowed <br><br> Maximum length of 255 characters | `https://localhost` is allowed, `http://localhost` fails <br><br> `http` scheme isn't allowed <br><br> Maximum length of 255 characters <br><br> Wildcards aren't supported | +| Display name | Maximum length of 120 characters | Maximum length of 120 characters | Maximum length of 90 characters | +| Tags | Individual tag size must be between 1 and 256 characters (inclusive) <br><br> No whitespaces or duplicate tags allowed <br><br> No limit\* on number of tags | Individual tag size must be between 1 and 256 characters (inclusive) <br><br> No whitespaces or duplicate tags allowed <br><br> No limit\* on number of tags | Individual tag size must be between 1 and 256 characters (inclusive) <br><br> No whitespaces or duplicate tags allowed <br><br> No limit\* on number of tags | \* There's a global limit of about 1000 items across all the collection properties on the app object. |
active-directory | Licensing Service Plan Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-service-plan-reference.md | When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic - **Service plans included (friendly names)**: A list of service plans (friendly names) in the product that correspond to the string ID and GUID >[!NOTE]->This information last updated on February 16th, 2023.<br/>You can also download a CSV version of this table [here](https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv). +>This information last updated on March 23rd, 2023.<br/>You can also download a CSV version of this table [here](https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv). ><br/> | Product name | String ID | GUID | Service plans included | Service plans included (friendly names) | When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic | Power BI Pro for GCC | POWERBI_PRO_GOV | f0612879-44ea-47fb-baf0-3d76d9235576 | EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>BI_AZURE_P_2_GOV (944e9726-f011-4353-b654-5f7d2663db76) | Exchange Foundation for Government (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>Power BI Pro for Government (944e9726-f011-4353-b654-5f7d2663db76) | | Power Virtual Agent | VIRTUAL_AGENT_BASE | e4e55366-9635-46f4-a907-fc8c3b5ec81f | CDS_VIRTUAL_AGENT_BASE (0a0a23fa-fea1-4195-bb89-b4789cb12f7f)<br/>FLOW_VIRTUAL_AGENT_BASE (4b81a949-69a1-4409-ad34-9791a6ec88aa)<br/>VIRTUAL_AGENT_BASE (f6934f16-83d3-4f3b-ad27-c6e9c187b260) | Common Data Service for Virtual Agent Base (0a0a23fa-fea1-4195-bb89-b4789cb12f7f)<br/>Power Automate for Virtual Agent (4b81a949-69a1-4409-ad34-9791a6ec88aa)<br/>Virtual Agent Base (f6934f16-83d3-4f3b-ad27-c6e9c187b260) | | Power Virtual Agents Viral Trial | CCIBOTS_PRIVPREV_VIRAL | 606b54a9-78d8-4298-ad8b-df6ef4481c80 | DYN365_CDS_CCI_BOTS (cf7034ed-348f-42eb-8bbd-dddeea43ee81)<br/>CCIBOTS_PRIVPREV_VIRAL (ce312d15-8fdf-44c0-9974-a25a177125ee)<br/>FLOW_CCI_BOTS (5d798708-6473-48ad-9776-3acc301c40af) | Common Data Service for CCI Bots (cf7034ed-348f-42eb-8bbd-dddeea43ee81)<br/>Dynamics 365 AI for Customer Service Virtual Agents Viral (ce312d15-8fdf-44c0-9974-a25a177125ee)<br/>Flow for CCI Bots (5d798708-6473-48ad-9776-3acc301c40af) |+| Privacy Management ΓÇô risk| PRIVACY_MANAGEMENT_RISK | e42bc969-759a-4820-9283-6b73085b68e6 | MIP_S_Exchange (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>PRIVACY_MANGEMENT_RISK (f281fb1f-99a7-46ab-9edb-ffd74e260ed3)<br/>PRIVACY_MANGEMENT_RISK_EXCHANGE (ebb17a6e-6002-4f65-acb0-d386480cebc1) | Data Classification in Microsoft 365 (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>Priva - Risk (f281fb1f-99a7-46ab-9edb-ffd74e260ed3)<br/>Priva - Risk (Exchange) (ebb17a6e-6002-4f65-acb0-d386480cebc1) | +| Privacy Management - risk for EDU | PRIVACY_MANAGEMENT_RISK_EDU | dcdbaae7-d8c9-40cb-8bb1-62737b9e5a86 | MIP_S_Exchange (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>PRIVACY_MANGEMENT_RISK (f281fb1f-99a7-46ab-9edb-ffd74e260ed3)<br/>PRIVACY_MANGEMENT_RISK_EXCHANGE (ebb17a6e-6002-4f65-acb0-d386480cebc1) | Data Classification in Microsoft 365 (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>Priva - Risk (f281fb1f-99a7-46ab-9edb-ffd74e260ed3)<br/>Priva - Risk (Exchange) (ebb17a6e-6002-4f65-acb0-d386480cebc1) | +| Privacy 
Management - risk GCC | PRIVACY_MANAGEMENT_RISK_GCC | 046f7d3b-9595-4685-a2e8-a2832d2b26aa | MIP_S_Exchange (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>PRIVACY_MANGEMENT_RISK (f281fb1f-99a7-46ab-9edb-ffd74e260ed3)<br/>PRIVACY_MANGEMENT_RISK_EXCHANGE (ebb17a6e-6002-4f65-acb0-d386480cebc1) | Data Classification in Microsoft 365 (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>Priva - Risk (f281fb1f-99a7-46ab-9edb-ffd74e260ed3)<br/>Priva - Risk (Exchange) (ebb17a6e-6002-4f65-acb0-d386480cebc1) | +| Privacy Management - risk_USGOV_DOD | PRIVACY_MANAGEMENT_RISK_USGOV_DOD | 83b30692-0d09-435c-a455-2ab220d504b9 | MIP_S_Exchange (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>PRIVACY_MANGEMENT_RISK (f281fb1f-99a7-46ab-9edb-ffd74e260ed3)<br/>PRIVACY_MANGEMENT_RISK_EXCHANGE (ebb17a6e-6002-4f65-acb0-d386480cebc1) | Data Classification in Microsoft 365 (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>Priva - Risk (f281fb1f-99a7-46ab-9edb-ffd74e260ed3)<br/>Priva - Risk (Exchange) (ebb17a6e-6002-4f65-acb0-d386480cebc1) | +| Privacy Management - risk_USGOV_GCCHIGH | PRIVACY_MANAGEMENT_RISK_USGOV_GCCHIGH | 787d7e75-29ca-4b90-a3a9-0b780b35367c | MIP_S_Exchange (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>PRIVACY_MANGEMENT_RISK (f281fb1f-99a7-46ab-9edb-ffd74e260ed3)<br/>PRIVACY_MANGEMENT_RISK_EXCHANGE (ebb17a6e-6002-4f65-acb0-d386480cebc1) | Data Classification in Microsoft 365 (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>Priva - Risk (f281fb1f-99a7-46ab-9edb-ffd74e260ed3)<br/>Priva - Risk (Exchange) (ebb17a6e-6002-4f65-acb0-d386480cebc1) | +| Privacy Management - subject rights request (1) | PRIVACY_MANAGEMENT_SUB_RIGHTS_REQ_1_V2 | d9020d1c-94ef-495a-b6de-818cbbcaa3b8 | MIP_S_EXCHANGE_CO (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>PRIVACY_MANGEMENT_DSR_EXCHANGE_1 (93d24177-c2c3-408a-821d-3d25dfa66e7a)<br/>PRIVACY_MANGEMENT_DSR_1 (07a4098c-3f2d-427f-bfe2-5889ed75dd7b) | Data Classification in Microsoft 365 - Company Level (MIP_S_EXCHANGE_CO)<br/>Privacy Management - Subject Rights Request (1 - Exchange) (PRIVACY_MANGEMENT_DSR_EXCHANGE_1)<br/>Privacy Management - Subject Rights Request (1) (PRIVACY_MANGEMENT_DSR_1) | +| Privacy Management - subject rights request (1) for EDU | PRIVACY_MANAGEMENT_SUB_RIGHTS_REQ_1_EDU_V2 | 475e3e81-3c75-4e07-95b6-2fed374536c8 | MIP_S_EXCHANGE_CO (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>PRIVACY_MANGEMENT_DSR_EXCHANGE_1 (93d24177-c2c3-408a-821d-3d25dfa66e7a)<br/>PRIVACY_MANGEMENT_DSR_1 (07a4098c-3f2d-427f-bfe2-5889ed75dd7b) | Data Classification in Microsoft 365 - Company Level (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>Privacy Management - Subject Rights Request (1 - Exchange) (93d24177-c2c3-408a-821d-3d25dfa66e7a)<br/>Privacy Management - Subject Rights Request (1) (07a4098c-3f2d-427f-bfe2-5889ed75dd7b) | +| Privacy Management - subject rights request (1) GCC | PRIVACY_MANAGEMENT_SUB_RIGHTS_REQ_1_V2_GCC | 017fb6f8-00dd-4025-be2b-4eff067cae72 | MIP_S_EXCHANGE_CO (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>PRIVACY_MANGEMENT_DSR_EXCHANGE_1 (93d24177-c2c3-408a-821d-3d25dfa66e7a)<br/>PRIVACY_MANGEMENT_DSR_1 (07a4098c-3f2d-427f-bfe2-5889ed75dd7b) | Data Classification in Microsoft 365 - Company Level (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>Privacy Management - Subject Rights Request (1 - Exchange) (93d24177-c2c3-408a-821d-3d25dfa66e7a)<br/Privacy Management - Subject Rights Request (1) (07a4098c-3f2d-427f-bfe2-5889ed75dd7b) | +| Privacy Management - subject rights request (1) USGOV_DOD | PRIVACY_MANAGEMENT_SUB_RIGHTS_REQ_1_V2_USGOV_DOD | d3c841f3-ea93-4da2-8040-6f2348d20954 | MIP_S_EXCHANGE_CO 
(5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>PRIVACY_MANGEMENT_DSR_EXCHANGE_1 (93d24177-c2c3-408a-821d-3d25dfa66e7a)<br/>PRIVACY_MANGEMENT_DSR_1 (07a4098c-3f2d-427f-bfe2-5889ed75dd7b) | Data Classification in Microsoft 365 - Company Level (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>Privacy Management - Subject Rights Request (1 - Exchange) (93d24177-c2c3-408a-821d-3d25dfa66e7a)<br/>Privacy Management - Subject Rights Request (1) (07a4098c-3f2d-427f-bfe2-5889ed75dd7b) | +| Privacy Management - subject rights request (1) USGOV_GCCHIGH | PRIVACY_MANAGEMENT_SUB_RIGHTS_REQ_1_V2_USGOV_GCCHIGH | 706d2425-6170-4818-ba08-2ad8f1d2d078 | MIP_S_EXCHANGE_CO (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>PRIVACY_MANGEMENT_DSR_EXCHANGE_1 (93d24177-c2c3-408a-821d-3d25dfa66e7a)<br/>PRIVACY_MANGEMENT_DSR_1 (07a4098c-3f2d-427f-bfe2-5889ed75dd7b) | Data Classification in Microsoft 365 - Company Level (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>Privacy Management - Subject Rights Request (1 - Exchange) (93d24177-c2c3-408a-821d-3d25dfa66e7a)<br/>Privacy Management - Subject Rights Request (1) (07a4098c-3f2d-427f-bfe2-5889ed75dd7b) | +| Privacy Management - subject rights request (10) | PRIVACY_MANAGEMENT_SUB_RIGHTS_REQ_10_V2 | 78ea43ac-9e5d-474f-8537-4abb82dafe27 | MIP_S_EXCHANGE_CO (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>PRIVACY_MANGEMENT_DSR_EXCHANGE_10 (f0241705-7b44-4401-a6b6-7055062b5b03)<br/>PRIVACY_MANGEMENT_DSR_10 (74853901-d7a9-428e-895d-f4c8687a9f0b) | Data Classification in Microsoft 365 - Company Level (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>Privacy Management - Subject Rights Request (10 - Exchange) (f0241705-7b44-4401-a6b6-7055062b5b03)<br/>Privacy Management - Subject Rights Request (10) (74853901-d7a9-428e-895d-f4c8687a9f0b) | +| Privacy Management - subject rights request (10) for EDU | PRIVACY_MANAGEMENT_SUB_RIGHTS_REQ_10_EDU_V2 | e001d9f1-5047-4ebf-8927-148530491f83 | MIP_S_EXCHANGE_CO (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>PRIVACY_MANGEMENT_DSR_EXCHANGE_10 (f0241705-7b44-4401-a6b6-7055062b5b03)<br/>PRIVACY_MANGEMENT_DSR_10 (74853901-d7a9-428e-895d-f4c8687a9f0b) | Data Classification in Microsoft 365 - Company Level (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>Privacy Management - Subject Rights Request (10 - Exchange) (f0241705-7b44-4401-a6b6-7055062b5b03)<br/>Privacy Management - Subject Rights Request (10) (74853901-d7a9-428e-895d-f4c8687a9f0b) | +| Privacy Management - subject rights request (10) GCC | PRIVACY_MANAGEMENT_SUB_RIGHTS_REQ_10_V2_GCC | a056b037-1fa0-4133-a583-d05cff47d551 | MIP_S_EXCHANGE_CO (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>PRIVACY_MANGEMENT_DSR_EXCHANGE_10 (f0241705-7b44-4401-a6b6-7055062b5b03)<br/>PRIVACY_MANGEMENT_DSR_10 (74853901-d7a9-428e-895d-f4c8687a9f0b) | Data Classification in Microsoft 365 - Company Level (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>Privacy Management - Subject Rights Request (10 - Exchange) (f0241705-7b44-4401-a6b6-7055062b5b03)<br/>Privacy Management - Subject Rights Request (10) (74853901-d7a9-428e-895d-f4c8687a9f0b) | +| Privacy Management - subject rights request (10) USGOV_DOD | PRIVACY_MANAGEMENT_SUB_RIGHTS_REQ_10_V2_USGOV_DOD | ab28dfa1-853a-4f54-9315-f5146975ac9a | MIP_S_EXCHANGE_CO (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>PRIVACY_MANGEMENT_DSR_EXCHANGE_10 (f0241705-7b44-4401-a6b6-7055062b5b03)<br/>PRIVACY_MANGEMENT_DSR_10 (74853901-d7a9-428e-895d-f4c8687a9f0b) | Data Classification in Microsoft 365 - Company Level (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>Privacy Management - Subject Rights Request (10 - Exchange) 
(f0241705-7b44-4401-a6b6-7055062b5b03)<br/>Privacy Management - Subject Rights Request (10) (74853901-d7a9-428e-895d-f4c8687a9f0b) | +| Privacy Management - subject rights request (10) USGOV_GCCHIGH | PRIVACY_MANAGEMENT_SUB_RIGHTS_REQ_10_V2_USGOV_GCCHIGH | f6aa3b3d-62f4-4c1d-a44f-0550f40f729c | MIP_S_EXCHANGE_CO (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>PRIVACY_MANGEMENT_DSR_EXCHANGE_10 (f0241705-7b44-4401-a6b6-7055062b5b03)<br/>PRIVACY_MANGEMENT_DSR_10 (74853901-d7a9-428e-895d-f4c8687a9f0b) | Data Classification in Microsoft 365 - Company Level (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>Privacy Management - Subject Rights Request (10 - Exchange) (f0241705-7b44-4401-a6b6-7055062b5b03)<br/>Privacy Management - Subject Rights Request (10) (74853901-d7a9-428e-895d-f4c8687a9f0b) | +| Privacy Management - subject rights request (50) | PRIVACY_MANAGEMENT_SUB_RIGHTS_REQ_50 | c416b349-a83c-48cb-9529-c420841dedd6 | MIP_S_EXCHANGE_CO (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>PRIVACY_MANGEMENT_DSR (8bbd1fea-6dc6-4aef-8abc-79af22d746e4)<br/>PRIVACY_MANGEMENT_DSR_EXCHANGE (7ca7f875-98db-4458-ab1b-47503826dd73) | Data Classification in Microsoft 365 - Company Level (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>Privacy Management - Subject Rights Request (8bbd1fea-6dc6-4aef-8abc-79af22d746e4)<br/>Privacy Management - Subject Rights Request (Exchange) (7ca7f875-98db-4458-ab1b-47503826dd73) | +| Privacy Management - subject rights request (50) | PRIVACY_MANAGEMENT_SUB_RIGHTS_REQ_50_V2 | f6c82f13-9554-4da1-bed3-c024cc906e02 | MIP_S_EXCHANGE_CO (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>PRIVACY_MANGEMENT_DSR (8bbd1fea-6dc6-4aef-8abc-79af22d746e4)<br/>PRIVACY_MANGEMENT_DSR_EXCHANGE (7ca7f875-98db-4458-ab1b-47503826dd73) | Data Classification in Microsoft 365 - Company Level (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>Privacy Management - Subject Rights Request (8bbd1fea-6dc6-4aef-8abc-79af22d746e4)<br/>Privacy Management - Subject Rights Request (Exchange) (7ca7f875-98db-4458-ab1b-47503826dd73) | +| Privacy Management - subject rights request (50) for EDU | PRIVACY_MANAGEMENT_SUB_RIGHTS_REQ_50_EDU_V2 | ed45d397-7d61-4110-acc0-95674917bb14 | MIP_S_EXCHANGE_CO (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>PRIVACY_MANGEMENT_DSR (8bbd1fea-6dc6-4aef-8abc-79af22d746e4)<br/>PRIVACY_MANGEMENT_DSR_EXCHANGE (7ca7f875-98db-4458-ab1b-47503826dd73) | Data Classification in Microsoft 365 - Company Level (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>Privacy Management - Subject Rights Request (8bbd1fea-6dc6-4aef-8abc-79af22d746e4)<br/>Privacy Management - Subject Rights Request (Exchange) (7ca7f875-98db-4458-ab1b-47503826dd73) | +| Privacy Management - subject rights request (100) | PRIVACY_MANAGEMENT_SUB_RIGHTS_REQ_100_V2 | cf4c6c3b-f863-4940-97e8-1d25e912f4c4 | MIP_S_EXCHANGE_CO (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>PRIVACY_MANGEMENT_DSR_EXCHANGE_100 (5c221cec-2c39-435b-a1e2-7cdd7fac5913)<br/>PRIVACY_MANGEMENT_DSR_100 (500f440d-167e-4030-a3a7-8cd35421fbd8) | Data Classification in Microsoft 365 - Company Level (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>Privacy Management - Subject Rights Request (100 - Exchange) (5c221cec-2c39-435b-a1e2-7cdd7fac5913)<br/>Privacy Management - Subject Rights Request (100) (500f440d-167e-4030-a3a7-8cd35421fbd8) | +| Privacy Management - subject rights request (100) for EDU | PRIVACY_MANAGEMENT_SUB_RIGHTS_REQ_100_EDU_V2 | 9b85b4f0-92d9-4c3d-b230-041520cb1046 | MIP_S_EXCHANGE_CO (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>PRIVACY_MANGEMENT_DSR_EXCHANGE_100 
(5c221cec-2c39-435b-a1e2-7cdd7fac5913)<br/>PRIVACY_MANGEMENT_DSR_100 (500f440d-167e-4030-a3a7-8cd35421fbd8) | Data Classification in Microsoft 365 - Company Level (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>Privacy Management - Subject Rights Request (100 - Exchange) (5c221cec-2c39-435b-a1e2-7cdd7fac5913)<br/>Privacy Management - Subject Rights Request (100) (500f440d-167e-4030-a3a7-8cd35421fbd8) | +| Privacy Management - subject rights request (100) GCC | PRIVACY_MANAGEMENT_SUB_RIGHTS_REQ_100_V2_GCC | 91bbc479-4c2c-4210-9c88-e5b468c35b83 | MIP_S_EXCHANGE_CO (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>PRIVACY_MANGEMENT_DSR_EXCHANGE_100 (5c221cec-2c39-435b-a1e2-7cdd7fac5913)<br/>PRIVACY_MANGEMENT_DSR_100 (500f440d-167e-4030-a3a7-8cd35421fbd8) | Data Classification in Microsoft 365 - Company Level (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>Privacy Management - Subject Rights Request (100 - Exchange) (5c221cec-2c39-435b-a1e2-7cdd7fac5913)<br/>Privacy Management - Subject Rights Request (100) (500f440d-167e-4030-a3a7-8cd35421fbd8) | +| Privacy Management - subject rights request (100) USGOV_DOD | PRIVACY_MANAGEMENT_SUB_RIGHTS_REQ_100_V2_USGOV_DOD | ba6e69d5-ba2e-47a7-b081-66c1b8e7e7d4 | MIP_S_EXCHANGE_CO (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>PRIVACY_MANGEMENT_DSR_EXCHANGE_100 (5c221cec-2c39-435b-a1e2-7cdd7fac5913)<br/>PRIVACY_MANGEMENT_DSR_100 (500f440d-167e-4030-a3a7-8cd35421fbd8) | Data Classification in Microsoft 365 - Company Level (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>Privacy Management - Subject Rights Request (100 - Exchange) (5c221cec-2c39-435b-a1e2-7cdd7fac5913)<br/>Privacy Management - Subject Rights Request (100) (500f440d-167e-4030-a3a7-8cd35421fbd8) | +| Privacy Management - subject rights request (100) USGOV_GCCHIGH | PRIVACY_MANAGEMENT_SUB_RIGHTS_REQ_100_V2_USGOV_GCCHIGH | cee36ce4-cc31-481f-8cab-02765d3e441f | MIP_S_EXCHANGE_CO (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>PRIVACY_MANGEMENT_DSR_EXCHANGE_100 (5c221cec-2c39-435b-a1e2-7cdd7fac5913)<br/>PRIVACY_MANGEMENT_DSR_100 (500f440d-167e-4030-a3a7-8cd35421fbd8) | Data Classification in Microsoft 365 - Company Level (5b96ffc4-3853-4cf4-af50-e38505080f6b)<br/>Privacy Management - Subject Rights Request (100 - Exchange) (5c221cec-2c39-435b-a1e2-7cdd7fac5913)<br/>Privacy Management - Subject Rights Request (100) (500f440d-167e-4030-a3a7-8cd35421fbd8) | | Project for Office 365 | PROJECTCLIENT | a10d5e58-74da-4312-95c8-76be4e5b75a0 | PROJECT_CLIENT_SUBSCRIPTION (fafd7243-e5c1-4a3a-9e40-495efcb1d3c3) | PROJECT ONLINE DESKTOP CLIENT (fafd7243-e5c1-4a3a-9e40-495efcb1d3c3) | | Project Online Essentials | PROJECTESSENTIALS | 776df282-9fc0-4862-99e2-70e561b9909e | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>PROJECT_ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Forms (Plan E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>Office for the web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Project Online Essentials (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SharePoint (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97) | | Project Online Essentials for Faculty | PROJECTESSENTIALS_FACULTY | e433b246-63e7-4d0b-9efa-7940fa3264d6 | EXCHANGE_S_FOUNDATION 
(113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>OFFICE_FORMS_PLAN_2 (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>PROJECT_ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINTENTERPRISE_EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Forms (Plan 2) (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>Office for the Web for Education (e03c7e47-402c-463c-ab25-949079bedb21)<br/>Project Online Essentials (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SharePoint (Plan 2) for Education (63038b2c-28d0-45f6-bc36-33062963b498)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97) | |
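Because the licensing table above maps product string IDs and GUIDs to their service plans, it can help to pull the same identifiers directly from a tenant and match them against the table. The sketch below is an illustrative example only, using the Microsoft Graph `subscribedSkus` endpoint; `GRAPH_TOKEN` is an assumed placeholder for an access token with a directory read permission such as `Organization.Read.All`.

```python
# Sketch: list the tenant's subscribed SKUs so their string IDs and GUIDs can be matched
# against the licensing reference table above. GRAPH_TOKEN is a placeholder you supply.
import os
import requests

GRAPH_TOKEN = os.environ["GRAPH_TOKEN"]  # placeholder: acquire via MSAL or the Azure CLI

resp = requests.get(
    "https://graph.microsoft.com/v1.0/subscribedSkus",
    headers={"Authorization": f"Bearer {GRAPH_TOKEN}"},
    timeout=30,
)
resp.raise_for_status()

for sku in resp.json().get("value", []):
    # skuPartNumber corresponds to the "String ID" column; skuId to the "GUID" column.
    print(f"{sku['skuPartNumber']}  {sku['skuId']}")
    for plan in sku.get("servicePlans", []):
        # servicePlanName / servicePlanId correspond to the "Service plans included" columns.
        print(f"    {plan['servicePlanName']}  {plan['servicePlanId']}")
```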
active-directory | Azure Ad Account | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/azure-ad-account.md | -Azure Active Directory is available as an identity provider option for [B2B collaboration](what-is-b2b.md#integrate-with-identity-providers) by default. If an external guest user has an Azure AD account through work or school, they can redeem your B2B collaboration invitations or complete your sign-up user flows using their Azure AD account. +Azure Active Directory is available as an identity provider option for B2B collaboration by default. If an external guest user has an Azure AD account through work or school, they can redeem your B2B collaboration invitations or complete your sign-up user flows using their Azure AD account. ## Guest sign-in using Azure Active Directory accounts -Azure Active Directory is available in the list of External Identities identity providers by default. No further configuration is needed to allow guest users to sign in with their Azure AD account using either the [invitation flow](redemption-experience.md#invitation-redemption-flow) or a [self-service sign-up user flow](self-service-sign-up-overview.md). +Azure Active Directory is available in the list of External Identities identity providers by default. No further configuration is needed to allow guest users to sign in with their Azure AD account using either the invitation flow or a self-service sign-up user flow. :::image type="content" source="media/azure-ad-account/azure-ad-account-identity-provider.png" alt-text="Screenshot of Azure AD account in the identity provider list." lightbox="media/azure-ad-account/azure-ad-account-identity-provider.png"::: |
active-directory | Configure Saas Apps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/configure-saas-apps.md | - Title: Configure SaaS apps for B2B collaboration -description: Learn how to configure SaaS apps for Azure Active Directory B2B collaboration and view additional available resources. ----- Previously updated : 05/23/2017---------# Configure SaaS apps for B2B collaboration --Azure Active Directory (Azure AD) B2B collaboration works with most apps that integrate with Azure AD. In this section, we walk through instructions for configuring some popular SaaS apps for use with Azure AD B2B. --Before you look at app-specific instructions, here are some rules of thumb: --* For most of the apps, user setup needs to happen manually. That is, users must be created manually in the app as well. --* For apps that support automatic setup, such as Dropbox, separate invitations are created from the apps. Users must be sure to accept each invitation. --* In the user attributes, to mitigate any issues with mangled user profile disk (UPD) in guest users, always set **User Identifier** to **user.mail**. ---## Dropbox Business --To enable users to sign in using their organization account, you must manually configure Dropbox Business to use Azure AD as a Security Assertion Markup Language (SAML) identity provider. If Dropbox Business has not been configured to do so, it cannot prompt or otherwise allow users to sign in using Azure AD. --1. To add the Dropbox Business app into Azure AD, select **Enterprise applications** in the left pane, and then click **Add**. --  --2. In the **Add an application** window, enter **dropbox** in the search box, and then select **Dropbox for Business** in the results list. --  --3. On the **Single sign-on** page, select **Single sign-on** in the left pane, and then enter **user.mail** in the **User Identifier** box. (It's set as UPN by default.) --  --4. To download the certificate to use for Dropbox configuration, select **Configure DropBox**, and then select **SAML Single Sign On Service URL** in the list. --  --5. Sign in to Dropbox with the sign-on URL from the **Single sign-on** page. --  --6. On the menu, select **Admin Console**. --  --7. In the **Authentication** dialog box, select **More**, upload the certificate and then, in the **Sign in URL** box, enter the SAML single sign-on URL. --  --  --8. To configure automatic user setup in the Azure portal, select **Provisioning** in the left pane, select **Automatic** in the **Provisioning Mode** box, and then select **Authorize**. --  --After guest or member users have been set up in the Dropbox app, they receive a separate invitation from Dropbox. To use Dropbox single sign-on, invitees must accept the invitation by clicking a link in it. --## Box -You can enable users to authenticate Box guest users with their Azure AD account by using federation that's based on the SAML protocol. In this procedure, you upload metadata to Box.com. --1. Add the Box app from the enterprise apps. --2. Configure single sign-on in the following order: --  -- a. In the **Sign on URL** box, ensure that the sign-on URL is set appropriately for Box in the Azure portal. This URL is the URL of your Box.com tenant. It should follow the naming convention *https://.box.com*. - The **Identifier** does not apply to this app, but it still appears as a mandatory field. -- b. In the **User identifier** box, enter **user.mail** (for SSO for guest accounts). -- c. 
Under **SAML Signing Certificate**, click **Create new certificate**. -- d. To begin configuring your Box.com tenant to use Azure AD as an identity provider, download the metadata file and then save it to your local drive. -- e. Forward the metadata file to the Box support team, which configures single sign-on for you. --3. For Azure AD automatic user setup, in the left pane, select **Provisioning**, and then select **Authorize**. --  --Like Dropbox invitees, Box invitees must redeem their invitation from the Box app. --## Next steps --See the following articles on Azure AD B2B collaboration: --- [What is Azure AD B2B collaboration?](what-is-b2b.md)-- [Dynamic groups and B2B collaboration](use-dynamic-groups.md)-- [B2B collaboration user claims mapping](claims-mapping.md)-- [Microsoft 365 external sharing](o365-external-user.md)- |
active-directory | Reset Redemption Status | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/reset-redemption-status.md | ContentType: application/json - [Add Azure Active Directory B2B collaboration users by using PowerShell](customize-invitation-api.md#powershell) - [Properties of an Azure AD B2B guest user](user-properties.md)-- [B2B for Azure AD integrated apps](configure-saas-apps.md) |
active-directory | Concept Secure Remote Workers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/concept-secure-remote-workers.md | The guidance helps: This guide assumes that your cloud-only or hybrid identities have already been established in Azure AD. For help with choosing your identity type, see [Choose the right authentication method for your Azure Active Directory hybrid identity solution](../hybrid/choose-ad-authn.md) +### Guided walkthrough ++For a guided walkthrough of many of the recommendations in this article, see the [Set up Azure AD](https://go.microsoft.com/fwlink/?linkid=2221308) guide. + ## Guidance for Azure AD Free, Office 365, or Microsoft 365 customers There are many recommendations that Azure AD Free, Office 365, or Microsoft 365 app customers should follow to protect their user identities. The following table highlights key actions for these license subscriptions: |
active-directory | How To Customize Branding | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/how-to-customize-branding.md | -The default sign-in experience is the global look and feel that applies across all sign-ins to your tenant. Before you customize any settings, the default Microsoft branding appears in your sign-in pages. You can customize this default experience with a custom background image or color, favicon, layout, header, and footer. You can also upload a custom CSS. +The default sign-in experience is the global look and feel that applies across all sign-ins to your tenant. Before you customize any settings, the default Microsoft branding appears in your sign-in pages. You can customize this default experience with a custom background image and/or color, favicon, layout, header, and footer. You can also upload a custom CSS. > [!NOTE] > Instructions for the legacy company branding customization process can be found in the **[Customize branding](customize-branding.md)** article.<br><br>The updated experience for adding company branding covered in this article is available as an Azure AD preview feature. To opt in and explore the new experience, go to **Azure AD** > **Preview features** and enable the **Enhanced Company Branding** feature. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). > -## User experience --You can customize the sign-in pages when users access your organization's tenant-specific apps. For Microsoft and SaaS applications (multi-tenant apps) such as <https://myapps.microsoft.com>, or <https://outlook.com> the customized sign-in page appears only after the user types their **Email**, or **Phone**, and select **Next**. --Some of the Microsoft applications support the home realm discovery `whr` query string parameter, or a domain variable. With the home realm discovery and domain parameter, the customized sign-in page appears immediately in the first step. --In the following examples replace the contoso.com with your own tenant name, or verified domain name: --- For Microsoft Outlook `https://outlook.com/contoso.com` -- For SharePoint online `https://contoso.sharepoint.com`-- For my app portal `https://myapps.microsoft.com/?whr=contoso.com` -- Self-service password reset `https://passwordreset.microsoftonline.com/?whr=contoso.com`--## Role and license requirements +## License requirements Adding custom branding requires one of the following licenses: The **Global Administrator** role is required to customize company branding. **Use Microsoft Graph with Azure AD company branding.** Company branding can be viewed and managed using Microsoft Graph on the `/beta` endpoint and the `organizationalBranding` resource type. For more information, see the [organizational branding API documentation](/graph/api/resources/organizationalbranding?view=graph-rest-beta&preserve-view=true). +The branding elements are called out in the following example. Text descriptions are provided following the image. +++1. **Favicon**: Small icon that appears on the left side of the browser tab. +1. **Header logo**: Space across the top of the web page, below the web browser navigation area. +1. **Background image** and **page background color**: The entire space behind the sign-in box. +1. **Banner logo**: The logo that appears in the upper-left corner of the sign-in box. +1. 
**Username hint and text**: The text that appears before a user enters their information. +1. **Sign-in page text**: Additional text you can add below the username field. +1. **Self-service password reset**: A link you can add below the sign-in page text for password resets. +1. **Template**: The layout of the page and sign-in boxes. +1. **Footer**: Text in the lower-right corner of the page where you can add Terms of use or privacy information. ++### User experience ++When customizing the sign-in pages that users see when accessing your organization's tenant-specific applications, there are some user experience scenarios you may need to consider. ++For Microsoft, Software as a Service (SaaS), and multi-tenant applications such as <https://myapps.microsoft.com> or <https://outlook.com>, the customized sign-in page appears only after the user types their **Email** or **Phone number** and selects the **Next** button. ++Some Microsoft applications support [Home Realm Discovery](../manage-apps/home-realm-discovery-policy.md) for authentication. In these scenarios, when a customer signs in to an Azure AD common sign-in page, Azure AD can use the customer's user name to determine where they should sign in. ++For customers who access applications from a custom URL, the `whr` query string parameter or a domain variable can be used to apply company branding at the initial sign-in screen, not just after adding the email or phone number. For example, `whr=contoso.com` would appear in the custom URL for the app. With the Home Realm Discovery and domain parameters included, the company branding appears immediately in the first sign-in step. Other domain hints can be included. ++In the following examples, replace contoso.com with your own tenant name or verified domain name: ++- For Microsoft Outlook `https://outlook.com/contoso.com` +- For SharePoint Online `https://contoso.sharepoint.com` +- For the My Apps portal `https://myapps.microsoft.com/?whr=contoso.com` +- For self-service password reset `https://passwordreset.microsoftonline.com/?whr=contoso.com` ++> [!NOTE] +> The settings to manage the 'Stay signed in?' prompt can now be found in the User settings area of Azure AD. Go to **Azure AD** > **Users** > **User settings**. +<br><br> +For more information on the 'Stay signed in?' prompt, see [How to manage user profile information](how-to-manage-user-profile-info.md#learn-about-the-stay-signed-in-prompt). + ## How to navigate the company branding process 1. Sign in to the [Azure portal](https://portal.azure.com/) using a Global Administrator account for the directory. The sign-in experience process is grouped into sections. At the end of each sect - **Favicon**: Select a PNG or JPG of your logo that appears in the web browser tab. +  + - **Background image**: Select a PNG or JPG to display as the main image on your sign-in page. This image scales and crops according to the window size, but may be partially blocked by the sign-in prompt. - **Page background color**: If the background image isn't able to load because of a slower connection, your selected background color appears instead. The sign-in experience process is grouped into sections. At the end of each sect - Choose one of two **Templates**: Full-screen or partial-screen background. The full-screen background could obscure your background image, so choose the partial-screen background if your background image is important. 
- The details of the **Header** and **Footer** options are set on the next two sections of the process.+ +  -- **Custom CSS**: Upload custom CSS to replace the Microsoft default style of the page. [Download the CSS template](https://download.microsoft.com/download/7/2/7/727f287a-125d-4368-a673-a785907ac5ab/custom-styles-template-013023.css).+- **Custom CSS**: Upload custom CSS to replace the Microsoft default style of the page. + - [Download the CSS template](https://download.microsoft.com/download/7/2/7/727f287a-125d-4368-a673-a785907ac5ab/custom-styles-template-013023.css). + - View the [CSS template reference guide](reference-company-branding-css-template.md). ## Header If you haven't enabled the header, go to the **Layout** section and select **Show header**. Once enabled, select a PNG or JPG to display in the header of the sign-in page. + + ## Footer If you haven't enabled the footer, go to the **Layout** section and select **Show footer**. Once enabled, adjust the following settings. If you haven't enabled the footer, go to the **Layout** section and select **Sho Uncheck this option to hide the default Microsoft link. Optionally provide your own **Display text** and **URL**. The text and links don't have to be related to privacy and cookies. -- **Show 'Terms of Use'**: This option is also elected by default and displays the [Microsoft 'Terms of Use'](https://www.microsoft.com/servicesagreement/) link.+- **Show 'Terms of Use'**: This option is also selected by default and displays the [Microsoft 'Terms of Use'](https://www.microsoft.com/servicesagreement/) link. Uncheck this option to hide the default Microsoft link. Optionally provide your own **Display text** and **URL**. The text and links don't have to be related to your terms of use. To create an inclusive experience for all of your users, you can customize the s The process for customizing the experience is the same as the [default sign-in experience](#basics) process, except you must select a language from the dropdown list in the **Basics** section. We recommend adding custom text in the same areas as your default sign-in experience. +Azure AD supports right-to-left functionality for languages such as Arabic and Hebrew that are read right-to-left. The layout adjusts automatically, based on the user's browser settings. ++ + ## Next steps +- [View the CSS template reference guide](reference-company-branding-css-template.md). - [Learn more about default user permissions in Azure AD](../fundamentals/users-default-permissions.md)--- [Manage the 'stay signed in' prompt](active-directory-users-profile-azure-portal.md#learn-about-the-stay-signed-in-prompt)+- [Manage the 'stay signed in' prompt](how-to-manage-user-profile-info.md#learn-about-the-stay-signed-in-prompt) |
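The branding article above notes that company branding can also be viewed and managed through Microsoft Graph (`organizationalBranding` on the `/beta` endpoint). As an illustration only, the sketch below reads a few of the default branding settings; `GRAPH_TOKEN` and `TENANT_ID` are assumed placeholders for an access token (for example, with `Organization.Read.All`) and the directory (tenant) ID.

```python
# Sketch: read the tenant's default company branding settings via Microsoft Graph (beta).
# GRAPH_TOKEN and TENANT_ID are placeholders you supply; the call returns 404 if no
# custom branding has been configured yet.
import os
import requests

GRAPH_TOKEN = os.environ["GRAPH_TOKEN"]
TENANT_ID = os.environ["TENANT_ID"]

resp = requests.get(
    f"https://graph.microsoft.com/beta/organization/{TENANT_ID}/branding",
    headers={"Authorization": f"Bearer {GRAPH_TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
branding = resp.json()

# A few of the elements described in the article; image assets (favicon, logos) are
# served as separate streams rather than in this JSON payload.
for prop in ("backgroundColor", "signInPageText", "usernameHintText"):
    print(f"{prop}: {branding.get(prop)}")
```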
active-directory | How To Manage User Profile Info | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/how-to-manage-user-profile-info.md | + + Title: How to manage user profile information +description: Instructions about how to manage a user's profile and settings in Azure Active Directory. ++++++++ Last updated : 03/23/2023++++++# Add or update a user's profile information and settings +A user's profile information and settings can be managed on an individual basis and for all users in your directory. When you look at these settings together, you can see how permissions, restrictions, and other connections work together. ++This article covers how to add user profile information, such as a profile picture and job-specific information. You can also choose to allow users to connect their LinkedIn accounts or restrict access to the Azure AD administration portal. Some settings may be managed in more than one area of Azure AD. For more information about adding new users, see [How to add or delete users in Azure Active Directory](add-users-azure-active-directory.md). ++## Add or change profile information +When new users are created, only some details are added to their user profile. If your organization needs more details, they can be added after the user is created. ++1. Sign in to the [Azure portal](https://portal.azure.com/) in the User Administrator role for the organization. ++1. Go to **Azure Active Directory** > **Users** and select a user. + +1. There are two ways to edit user profile details. Either select **Edit properties** from the top of the page or select **Properties**. ++  ++1. After making any changes, select the **Save** button. ++If you selected the **Edit properties** option: + - The full list of properties appears in edit mode on the **All** category. + - To edit properties based on the category, select a category from the top of the page. + - Select the **Save** button at the bottom of the page to save any changes. + +  + +If you selected the **Properties** tab: + - The full list of properties appears for you to review. + - To edit a property, select the pencil icon next to the category heading. + - Select the **Save** button at the bottom of the page to save any changes. + +  ++### Profile categories +There are six categories of profile details you may be able to edit. ++- **Identity:** Add or update other identity values for the user, such as a married last name. You can set this name independently from the values of First name and Last name. For example, you could use it to include initials, a company name, or to change the sequence of names shown. If you have two users with the same name, such as 'Chris Green,' you could use the Identity string to set their names to 'Chris B. Green' and 'Chris R. Green.' ++- **Job information:** Add any job-related information, such as the user's job title, department, or manager. ++- **Contact info:** Add any relevant contact information for the user. ++- **Parental controls:** For organizations like K-12 school districts, the user's age group may need to be provided. *Minors* are 12 and under, *Not adult* are 13-18 years old, and *Adults* are 18 and over. The combination of age group and consent provided by parent options determines the Legal age group classification. The Legal age group classification may limit the user's access and authority. ++- **Settings:** Decide whether the user can sign in to the Azure Active Directory tenant. 
You can also specify the user's global location. ++- **On-premises:** Accounts synced from Windows Server Active Directory include other values not applicable to Azure AD accounts. ++ >[!Note] + >You must use Windows Server Active Directory to update the identity, contact info, or job info for users whose source of authority is Windows Server Active Directory. After you complete your update, you must wait for the next synchronization cycle to complete before you'll see the changes. ++### Add or edit the profile picture +On the user's overview page, select the camera icon in the lower-right corner of the user's thumbnail. If no image has been added, the user's initials appear here. This picture appears in Azure Active Directory and on the user's personal pages, such as the myapps.microsoft.com page. ++All your changes are saved for the user. ++>[!Note] +> If you're having issues updating a user's profile picture, please ensure that your Office 365 Exchange Online Enterprise App is Enabled for users to sign in. ++## Manage settings for all users +In the **User settings** area of Azure AD, you can adjust several settings that affect all users, such as restricting access to the Azure AD administration portal, how external collaboration is managed, and providing users the option to connect their LinkedIn account. Some settings are managed in a separate area of Azure AD and linked from this page. ++Go to **Azure AD** > **User settings**. ++### Learn about the 'Stay signed in?' prompt ++The **Stay signed in?** prompt appears after a user successfully signs in. This process is known as **Keep me signed in** (KMSI). If a user answers **Yes** to this prompt, a persistent authentication cookie is issued. The cookie must be stored in session for KMSI to work. KMSI won't work with locally stored cookies. If KMSI isn't enabled, a non-persistent cookie is issued and lasts for 24 hours or until the browser is closed. ++The following diagram shows the user sign-in flow for a managed tenant and federated tenant using the KMSI in prompt. This flow contains smart logic so that the **Stay signed in?** option won't be displayed if the machine learning system detects a high-risk sign-in or a sign-in from a shared device. For federated tenants, the prompt will show after the user successfully authenticates with the federated identity service. ++The KMSI setting is available in **User settings**. Some features of SharePoint Online and Office 2010 depend on users being able to choose to remain signed in. If you uncheck the **Show option to remain signed in** option, your users may see other unexpected prompts during the sign-in process. ++ ++Configuring the 'keep me signed in' (KMSI) option requires one of the following licenses: ++- Azure AD Premium 1 +- Azure AD Premium 2 +- Office 365 (for Office apps) +- Microsoft 365 ++#### Troubleshoot 'Stay signed in?' issues ++If a user doesn't act on the **Stay signed in?** prompt but abandons the sign-in attempt, a sign-in log entry appears in the Azure AD **Sign-ins** page. The prompt the user sees is called an "interrupt." ++ ++Details about the sign-in error are found in the **Sign-in logs** in Azure AD. Select the impacted user from the list and locate the following error code details in the **Basic info** section. ++* **Sign in error code**: 50140 +* **Failure reason**: This error occurred due to "Keep me signed in" interrupt when the user was signing in. 
++You can stop users from seeing the interrupt by setting the **Show option to remain signed in** setting to **No** in the user settings. This setting disables the KMSI prompt for all users in your Azure AD directory. ++You also can use the [persistent browser session controls in Conditional Access](../conditional-access/howto-conditional-access-session-lifetime.md) to prevent users from seeing the KMSI prompt. This option allows you to disable the KMSI prompt for a select group of users (such as the global administrators) without affecting sign-in behavior for everyone else in the directory. ++To ensure that the KMSI prompt is shown only when it can benefit the user, the KMSI prompt is intentionally not shown in the following scenarios: ++* User is signed in via seamless SSO and integrated Windows authentication (IWA) +* User is signed in via Active Directory Federation Services and IWA +* User is a guest in the tenant +* User's risk score is high +* Sign-in occurs during user or admin consent flow +* Persistent browser session control is configured in a conditional access policy ++## Next steps +- [Add or delete users](add-users-azure-active-directory.md) ++- [Assign roles to users](active-directory-users-assign-role-azure-portal.md) ++- [Create a basic group and add members](active-directory-groups-create-azure-portal.md) ++- [View Azure AD enterprise user management documentation](../enterprise-users/index.yml). |
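The profile properties described above (such as job title and department) can also be updated outside the portal. The following sketch, not taken from the article, patches one user's job information through Microsoft Graph; `GRAPH_TOKEN` and `USER_ID` are assumed placeholders for a token with `User.ReadWrite.All` and the target user's object ID or UPN, and the sample values are purely illustrative.

```python
# Sketch: update the "Job information" category for a single user via Microsoft Graph.
# GRAPH_TOKEN and USER_ID are placeholders you supply; the jobTitle/department values
# below are illustrative only.
import os
import requests

GRAPH_TOKEN = os.environ["GRAPH_TOKEN"]
USER_ID = os.environ["USER_ID"]  # object ID or UPN, e.g. "chris.green@contoso.com"

resp = requests.patch(
    f"https://graph.microsoft.com/v1.0/users/{USER_ID}",
    headers={
        "Authorization": f"Bearer {GRAPH_TOKEN}",
        "Content-Type": "application/json",
    },
    json={"jobTitle": "Marketing Manager", "department": "Marketing"},
    timeout=30,
)
resp.raise_for_status()  # Microsoft Graph returns 204 No Content on a successful update
print("Profile updated")
```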
active-directory | Reference Company Branding Css Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/reference-company-branding-css-template.md | + + Title: CSS reference guide for customizing company branding - Azure AD +description: Learn about the CSS template selectors for customizing company branding. ++++++++ Last updated : 03/24/2023++++++# CSS template reference guide ++Configuring your company branding for the user sign-in process provides a seamless experience in your applications that use Azure Active Directory (Azure AD) as the identity and access management service. Use this CSS reference guide if you're using the [CSS template](https://download.microsoft.com/download/7/2/7/727f287a-125d-4368-a673-a785907ac5ab/custom-styles-template-013023.css) as part of the [customize company branding](how-to-customize-branding.md) process. +++## HTML selectors ++The following CSS styles become the default body and link styles for the whole page. Applying styles to other links or text overrides these default selectors. ++- `body` - Styles for the whole page +- Styles for links: + - `a, a:link` - All links + - `a:hover` - When the mouse is over the link + - `a:focus` - When the link has focus + - `a:focus:hover` - When the link has focus *and* the mouse is over the link + - `a:active` - When the link is being clicked ++## Azure AD CSS selectors ++Use the following CSS selectors to configure the details of the sign-in experience. ++- `.ext-background-image` - Container that includes the background image in the default lightbox template +- `.ext-header` - Header at the top of the container +- `.ext-header-logo` - Header logo at the top of the container ++  ++- `.ext-middle` - Style for the full-screen background that aligns the sign-in box vertically to the middle and horizontally to the center +- `.ext-vertical-split-main-section` - Style for the container of the partial-screen background in the vertical split template that contains both a sign-in box and a background (This style is also known as the Active Directory Federation Services (ADFS) template.) 
+- `.ext-vertical-split-background-image-container` - Sign-in box background in the vertical split/ADFS template +- `.ext-sign-in-box` - Sign-in box container ++  ++- `.ext-title` - Title text ++  ++- `.ext-subtitle` - Subtitle text ++- Styles for primary buttons: + - `.ext-button.ext-primary` - Primary button default style + - `.ext-button.ext-primary:hover` - When the mouse is over the button + - `.ext-button.ext-primary:focus` - When the button has focus + - `.ext-button.ext-primary:focus:hover` - When the button has focus *and* the mouse is over the button + - `.ext-button.ext-primary:active` - When the button is being clicked ++  ++- Styles for secondary buttons: + - `.ext-button.ext-secondary` - Secondary buttons + - `.ext-button.ext-secondary:hover` - When the mouse is over the button + - `.ext-button.ext-secondary:focus` When the button has focus + - `.ext-button.ext-secondary:focus:hover` - When the button has focus *and* the mouse is over the button + - `.ext-button.ext-secondary:active` - When the button is being clicked ++  ++- `.ext-error` - Error text ++  ++- Styles for text boxes: + - `.ext-input.ext-text-box` - Text boxes + - `.ext-input.ext-text-box.ext-has-error` - When there's a validation error associated with the text box + - `.ext-input.ext-text-box:hover` - When the mouse is over the text box + - `.ext-input.ext-text-box:focus` - When the text box has focus + - `.ext-input.ext-text-box:focus:hover` - When the text box has focus *and* the mouse is over the text box ++  ++- `.ext-boilerplate-text` - Custom message text at the bottom of the sign-in box ++  ++- `.ext-promoted-fed-cred-box` - Sign-in options text box ++  + +- Styles for the footer: + - `.ext-footer` - Footer area at the bottom of the page + - `.ext-footer-links` - Links area in the footer at the bottom of the page + - `.ext-footer-item` - Link items (such as "Terms of use" or "Privacy & cookies") in the footer at the bottom of the page + - `.ext-debug-item` - Debug details ellipsis in the footer at the bottom of the page + |
active-directory | Users Default Permissions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/users-default-permissions.md | The set of default permissions depends on whether the user is a native member of | | - | - Users and contacts | <ul><li>Enumerate the list of all users and contacts<li>Read all public properties of users and contacts</li><li>Invite guests<li>Change their own password<li>Manage their own mobile phone number<li>Manage their own photo<li>Invalidate their own refresh tokens</li></ul> | <ul><li>Read their own properties<li>Read display name, email, sign-in name, photo, user principal name, and user type properties of other users and contacts<li>Change their own password<li>Search for another user by object ID (if allowed)<li>Read manager and direct report information of other users</li></ul> | <ul><li>Read their own properties<li>Change their own password</li><li>Manage their own mobile phone number</li></ul> Groups | <ul><li>Create security groups<li>Create Microsoft 365 groups<li>Enumerate the list of all groups<li>Read all properties of groups<li>Read non-hidden group memberships<li>Read hidden Microsoft 365 group memberships for joined groups<li>Manage properties, ownership, and membership of groups that the user owns<li>Add guests to owned groups<li>Manage dynamic membership settings<li>Delete owned groups<li>Restore owned Microsoft 365 groups</li></ul> | <ul><li>Read properties of non-hidden groups, including membership and ownership (even non-joined groups)<li>Read hidden Microsoft 365 group memberships for joined groups<li>Search for groups by display name or object ID (if allowed)</li></ul> | <ul><li>Read object ID for joined groups<li>Read membership and ownership of joined groups in some Microsoft 365 apps (if allowed)</li></ul>-Applications | <ul><li>Register (create) new applications<li>Enumerate the list of all applications<li>Read properties of registered and enterprise applications<li>List permissions granted to applications<li>Manage application properties, assignments, and credentials for owned applications<li>Create or delete application passwords for users<li>Delete owned applications<li>Restore owned applications<li>List permissions granted to applications</ul> | <ul><li>Read properties of registered and enterprise applications<li>List permissions granted to applications</ul> | <ul><li>Read properties of registered and enterprise applications</li><li>List permissions granted to applications</li></ul> +Applications | <ul><li>Register (create) new applications<li>Enumerate the list of all applications<li>Read properties of registered and enterprise applications<li>Manage application properties, assignments, and credentials for owned applications<li>Create or delete application passwords for users<li>Delete owned applications<li>Restore owned applications<li>List permissions granted to applications</ul> | <ul><li>Read properties of registered and enterprise applications<li>List permissions granted to applications</ul> | <ul><li>Read properties of registered and enterprise applications</li><li>List permissions granted to applications</li></ul> Devices</li></ul> | <ul><li>Enumerate the list of all devices<li>Read all properties of devices<li>Manage all properties of owned devices</li></ul> | No permissions | No permissions Organization | <ul><li>Read all company information<li>Read all domains<li>Read configuration of certificate-based authentication<li>Read all partner contracts</li></ul> | <ul><li>Read 
company display name<li>Read all domains<li>Read configuration of certificate-based authentication</li></ul> | <ul><li>Read company display name<li>Read all domains</li></ul> Roles and scopes | <ul><li>Read all administrative roles and memberships<li>Read all properties and membership of administrative units</li></ul> | No permissions | No permissions |
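Several of the default member permissions listed above (for example, registering applications or creating security groups) are governed by the tenant's authorization policy. As an illustration under that assumption, the sketch below reads those default-user toggles through Microsoft Graph; `GRAPH_TOKEN` is an assumed placeholder for an access token with `Policy.Read.All`.

```python
# Sketch: read the tenant's authorization policy to see which default user permissions
# (app registration, group creation, reading other users) are currently allowed.
# GRAPH_TOKEN is a placeholder you supply.
import os
import requests

GRAPH_TOKEN = os.environ["GRAPH_TOKEN"]

resp = requests.get(
    "https://graph.microsoft.com/v1.0/policies/authorizationPolicy",
    headers={"Authorization": f"Bearer {GRAPH_TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
data = resp.json()

# v1.0 returns a single policy object; the beta endpoint wraps it in a "value" collection.
policy = data["value"][0] if "value" in data else data
defaults = policy["defaultUserRolePermissions"]

for key in ("allowedToCreateApps", "allowedToCreateSecurityGroups", "allowedToReadOtherUsers"):
    print(f"{key}: {defaults.get(key)}")
```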
active-directory | Whats New Archive | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new-archive.md | The What's new in Azure Active Directory? release notes provide information abou - Deprecated functionality - Plans for changes +++## September 2022 ++### General Availability - SSPR writeback is now available for disconnected forests using Azure AD Connect cloud sync ++++**Type:** New feature +**Service category:** Azure AD Connect Cloud Sync +**Product capability:** Identity Lifecycle Management ++Azure AD Connect Cloud Sync Password writeback now provides customers the ability to synchronize Azure AD password changes made in the cloud to an on-premises directory in real time. This can be accomplished using the lightweight Azure AD cloud provisioning agent. For more information, see: [Tutorial: Enable cloud sync self-service password reset writeback to an on-premises environment](../authentication/tutorial-enable-cloud-sync-sspr-writeback.md). ++++### General Availability - Device-based conditional access on Linux Desktops ++++**Type:** New feature +**Service category:** Conditional Access +**Product capability:** SSO ++++This feature empowers users on Linux clients to register their devices with Azure AD, enroll into Intune management, and satisfy device-based Conditional Access policies when accessing their corporate resources. ++- Users can register their Linux devices with Azure AD. +- Users can enroll in Mobile Device Management (Intune), which can be used to provide compliance decisions based upon policy definitions to allow device-based Conditional Access on Linux Desktops. +- If compliant, users can use Microsoft Edge Browser to enable single sign-on to M365/Azure resources and satisfy device-based Conditional Access policies. ++For more information, see: ++- [Azure AD registered devices](../devices/concept-azure-ad-register.md) +- [Plan your Azure Active Directory device deployment](../devices/plan-device-deployment.md) ++++### General Availability - Azure AD SCIM Validator ++++**Type:** New feature +**Service category:** Provisioning +**Product capability:** Outbound to SaaS Applications ++++Independent Software Vendors (ISVs) and developers can self-test their SCIM endpoints for compatibility: we have made it easier for ISVs to validate that their endpoints are compatible with the SCIM-based Azure AD provisioning services. This is now in general availability (GA) status. ++For more information, see: [Tutorial: Validate a SCIM endpoint](../app-provisioning/scim-validator-tutorial.md) ++++### General Availability - prevent accidental deletions ++++**Type:** New feature +**Service category:** Provisioning +**Product capability:** Outbound to SaaS Applications ++++Accidental deletion of users in any system could be disastrous. We're excited to announce the general availability of the accidental deletions prevention capability as part of the Azure AD provisioning service. When the number of deletions to be processed in a single provisioning cycle spikes above a customer-defined threshold, the Azure AD provisioning service pauses, provides you with visibility into the potential deletions, and allows you to accept or reject the deletions. This functionality has historically been available for Azure AD Connect and Azure AD Connect Cloud Sync. It's now available across the various provisioning flows, including both HR-driven provisioning and application provisioning. 
++For more information, see: [Enable accidental deletions prevention in the Azure AD provisioning service](../app-provisioning/accidental-deletions.md) ++++### General Availability - Identity Protection Anonymous and Malicious IP for ADFS on-premises logins ++++**Type:** New feature +**Service category:** Identity Protection +**Product capability:** Identity Security & Protection ++++Identity protection expands its Anonymous and Malicious IP detections to protect ADFS sign-ins. This automatically applies to all customers who have AD Connect Health deployed and enabled, and show up as the existing "Anonymous IP" or "Malicious IP" detections with a token issuer type of "AD Federation Services". ++For more information, see: [What is risk?](../identity-protection/concept-identity-protection-risks.md) +++++### New Federated Apps available in Azure AD Application gallery - September 2022 ++++**Type:** New feature +**Service category:** Enterprise Apps +**Product capability:** 3rd Party Integration ++++In September 2022 we've added the following 15 new applications in our App gallery with Federation support: ++[RocketReach SSO](../saas-apps/rocketreach-sso-tutorial.md), [Arena EU](../saas-apps/arena-eu-tutorial.md), [Zola](../saas-apps/zola-tutorial.md), [FourKites SAML2.0 SSO for Tracking](../saas-apps/fourkites-tutorial.md), [Syniverse Customer Portal](../saas-apps/syniverse-customer-portal-tutorial.md), [Rimo](https://rimo.app/), [Q Ware CMMS](https://qware.app/), [Mapiq (OIDC)](https://app.mapiq.com/), [NICE Cxone](../saas-apps/nice-cxone-tutorial.md), [dominKnow|ONE](../saas-apps/dominknowone-tutorial.md), [Waynbo for Azure AD](https://webportal-eu.waynbo.com/Login), [innDex](https://web.inndex.co.uk/azure/authorize), [Profiler Software](https://www.profiler.net.au/), [Trotto go links](https://trot.to/_/auth/login), [AsignetSSOIntegration](../saas-apps/asignet-sso-tutorial.md). 
++You can also find the documentation of all the applications from here https://aka.ms/AppsTutorial, ++For listing your application in the Azure AD app gallery, read the details here: https://aka.ms/AzureADAppRequest +++ ## August 2022 IT Admins can start using the new "Hybrid Admin" role as the least privileged ro In May 2020, we've added the following 36 new applications in our App gallery with Federation support: -[Moula](https://moula.com.au/pay/merchants), [Surveypal](https://www.surveypal.com/app), [Kbot365](https://www.konverso.ai/), [TackleBox](https://tacklebox.in/), [Powell Teams](https://powell-software.com/en/powell-teams-en/), [Talentsoft Assistant](https://msteams.talent-soft.com/), [ASC Recording Insights](https://teams.asc-recording.app/product), [GO1](https://www.go1.com/), [B-Engaged](https://b-engaged.se/), [Competella Contact Center Workgroup](http://www.competella.com/), [Asite](http://www.asite.com/), [ImageSoft Identity](https://identity.imagesoftinc.com/), [My IBISWorld](https://identity.imagesoftinc.com/), [insuite](../saas-apps/insuite-tutorial.md), [Change Process Management](../saas-apps/change-process-management-tutorial.md), [Cyara CX Assurance Platform](../saas-apps/cyara-cx-assurance-platform-tutorial.md), [Smart Global Governance](../saas-apps/smart-global-governance-tutorial.md), [Prezi](../saas-apps/prezi-tutorial.md), [Mapbox](../saas-apps/mapbox-tutorial.md), [Datava Enterprise Service Platform](../saas-apps/datava-enterprise-service-platform-tutorial.md), [Whimsical](../saas-apps/whimsical-tutorial.md), [Trelica](../saas-apps/trelica-tutorial.md), [EasySSO for Confluence](../saas-apps/easysso-for-confluence-tutorial.md), [EasySSO for BitBucket](../saas-apps/easysso-for-bitbucket-tutorial.md), [EasySSO for Bamboo](../saas-apps/easysso-for-bamboo-tutorial.md), [Torii](../saas-apps/torii-tutorial.md), [Axiad Cloud](../saas-apps/axiad-cloud-tutorial.md), [Humanage](../saas-apps/humanage-tutorial.md), [ColorTokens ZTNA](../saas-apps/colortokens-ztna-tutorial.md), [CCH Tagetik](../saas-apps/cch-tagetik-tutorial.md), [ShareVault](../saas-apps/sharevault-tutorial.md), [Vyond](../saas-apps/vyond-tutorial.md), [TextExpander](../saas-apps/textexpander-tutorial.md), [Anyone Home CRM](../saas-apps/anyone-home-crm-tutorial.md), [askSpoke](../saas-apps/askspoke-tutorial.md), [ice Contact Center](../saas-apps/ice-contact-center-tutorial.md) +[Moula](https://moula.com.au/pay/merchants), [Surveypal](https://www.surveypal.com/app), [Kbot365](https://www.konverso.ai/), [Powell Teams](https://powell-software.com/en/powell-teams-en/), [Talentsoft Assistant](https://msteams.talent-soft.com/), [ASC Recording Insights](https://teams.asc-recording.app/product), [GO1](https://www.go1.com/), [B-Engaged](https://b-engaged.se/), [Competella Contact Center Workgroup](http://www.competella.com/), [Asite](http://www.asite.com/), [ImageSoft Identity](https://identity.imagesoftinc.com/), [My IBISWorld](https://identity.imagesoftinc.com/), [insuite](../saas-apps/insuite-tutorial.md), [Change Process Management](../saas-apps/change-process-management-tutorial.md), [Cyara CX Assurance Platform](../saas-apps/cyara-cx-assurance-platform-tutorial.md), [Smart Global Governance](../saas-apps/smart-global-governance-tutorial.md), [Prezi](../saas-apps/prezi-tutorial.md), [Mapbox](../saas-apps/mapbox-tutorial.md), [Datava Enterprise Service Platform](../saas-apps/datava-enterprise-service-platform-tutorial.md), [Whimsical](../saas-apps/whimsical-tutorial.md), 
[Trelica](../saas-apps/trelica-tutorial.md), [EasySSO for Confluence](../saas-apps/easysso-for-confluence-tutorial.md), [EasySSO for BitBucket](../saas-apps/easysso-for-bitbucket-tutorial.md), [EasySSO for Bamboo](../saas-apps/easysso-for-bamboo-tutorial.md), [Torii](../saas-apps/torii-tutorial.md), [Axiad Cloud](../saas-apps/axiad-cloud-tutorial.md), [Humanage](../saas-apps/humanage-tutorial.md), [ColorTokens ZTNA](../saas-apps/colortokens-ztna-tutorial.md), [CCH Tagetik](../saas-apps/cch-tagetik-tutorial.md), [ShareVault](../saas-apps/sharevault-tutorial.md), [Vyond](../saas-apps/vyond-tutorial.md), [TextExpander](../saas-apps/textexpander-tutorial.md), [Anyone Home CRM](../saas-apps/anyone-home-crm-tutorial.md), [askSpoke](../saas-apps/askspoke-tutorial.md), [ice Contact Center](../saas-apps/ice-contact-center-tutorial.md) You can also find the documentation of all the applications from here https://aka.ms/AppsTutorial. For more information about group-based licensing, see [What is group-based licen In November 2018, we've added these 26 new apps with Federation support to the app gallery: -[CoreStack](https://cloud.corestack.io/site/login), [HubSpot](../saas-apps/hubspot-tutorial.md), [GetThere](../saas-apps/getthere-tutorial.md), [Gra-Pe](../saas-apps/grape-tutorial.md), [eHour](https://getehour.com/try-now), [Consent2Go](../saas-apps/consent2go-tutorial.md), [Appinux](../saas-apps/appinux-tutorial.md), [DriveDollar](https://azuremarketplace.microsoft.com/marketplace/apps/savitas.drivedollar-azuread?tab=Overview), [Useall](../saas-apps/useall-tutorial.md), [Infinite Campus](../saas-apps/infinitecampus-tutorial.md), [Alaya](https://alayagood.com), [HeyBuddy](../saas-apps/heybuddy-tutorial.md), [Wrike SAML](../saas-apps/wrike-tutorial.md), [Drift](../saas-apps/drift-tutorial.md), [Zenegy for Business Central 365](https://accounting.zenegy.com/), [Everbridge Member Portal](../saas-apps/everbridge-tutorial.md), [Ivanti Service Manager (ISM)](../saas-apps/ivanti-service-manager-tutorial.md), [Peakon](../saas-apps/peakon-tutorial.md), [Allbound SSO](../saas-apps/allbound-sso-tutorial.md), [Plex Apps - Classic Test](https://test.plexonline.com/signon), [Plex Apps ΓÇô Classic](https://www.plexonline.com/signon), [Plex Apps - UX Test](https://test.cloud.plex.com/sso), [Plex Apps ΓÇô UX](https://cloud.plex.com/sso), [Plex Apps ΓÇô IAM](https://accounts.plex.com/), [CRAFTS - Childcare Records, Attendance, & Financial Tracking System](https://getcrafts.ca/craftsregistration) +[CoreStack](https://cloud.corestack.io/site/login), [HubSpot](../saas-apps/hubspot-tutorial.md), [GetThere](../saas-apps/getthere-tutorial.md), [Gra-Pe](../saas-apps/grape-tutorial.md), [eHour](https://getehour.com/try-now), [Consent2Go](../saas-apps/consent2go-tutorial.md), [Appinux](../saas-apps/appinux-tutorial.md), [DriveDollar](https://azuremarketplace.microsoft.com/marketplace/apps/savitas.drivedollar-azuread?tab=Overview), [Useall](../saas-apps/useall-tutorial.md), [Infinite Campus](../saas-apps/infinitecampus-tutorial.md), [Alaya](https://alayagood.com), [HeyBuddy](../saas-apps/heybuddy-tutorial.md), [Wrike SAML](../saas-apps/wrike-tutorial.md), [Drift](../saas-apps/drift-tutorial.md), [Zenegy for Business Central 365](https://accounting.zenegy.com/), [Everbridge Member Portal](../saas-apps/everbridge-tutorial.md), [Ivanti Service Manager (ISM)](../saas-apps/ivanti-service-manager-tutorial.md), [Peakon](../saas-apps/peakon-tutorial.md), [Allbound SSO](../saas-apps/allbound-sso-tutorial.md), [Plex Apps - Classic 
Test](https://test.plexonline.com/signon), [Plex Apps ΓÇô Classic](https://www.plexonline.com/signon), [Plex Apps - UX Test](https://test.cloud.plex.com/sso), [Plex Apps ΓÇô UX](https://cloud.plex.com/sso), [Plex Apps ΓÇô IAM](https://accounts.plex.com/) For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../manage-apps/v2-howto-app-gallery-listing.md). This connector version is gradually being rolled out through November. This new For more information, see [Understand Azure AD Application Proxy connectors](../app-proxy/application-proxy-connectors.md). --## February 2018 --### Improved navigation for managing users and groups --**Type:** Plan for change -**Service category:** Directory Management -**Product capability:** Directory --The navigation experience for managing users and groups has been streamlined. You can now navigate from the directory overview directly to the list of all users, with easier access to the list of deleted users. You can also navigate from the directory overview directly to the list of all groups, with easier access to group management settings. And also from the directory overview page, you can search for a user, group, enterprise application, or app registration. ----### Availability of sign-ins and audit reports in Microsoft Azure operated by 21Vianet (Azure China 21Vianet) --**Type:** New feature -**Service category:** Azure Stack -**Product capability:** Monitoring & Reporting --Azure AD Activity log reports are now available in Microsoft Azure operated by 21Vianet (Azure China 21Vianet) instances. The following logs are included: --- **Sign-ins activity logs** - Includes all the sign-ins logs associated with your tenant.--- **Self service Password Audit Logs** - Includes all the SSPR audit logs.--- **Directory Management Audit logs** - Includes all the directory management-related audit logs like User management, App Management, and others.--With these logs, you can gain insights into how your environment is doing. The provided data enables you to: --- Determine how your apps and services are utilized by your users.--- Troubleshoot issues preventing your users from getting their work done.--For more information about how to use these reports, see [Azure Active Directory reporting](../reports-monitoring/overview-reports.md). ----### Use "Reports Reader" role (non-admin role) to view Azure AD Activity Reports --**Type:** New feature -**Service category:** Reporting -**Product capability:** Monitoring & Reporting --As part of customers feedback to enable non-admin roles to have access to Azure AD activity logs, we've enabled the ability for users who are in the "Reports Reader" role to access Sign-ins and Audit activity within the Azure portal as well as using the Microsoft Graph API. --For more information, how to use these reports, see [Azure Active Directory reporting](../reports-monitoring/overview-reports.md). ----### EmployeeID claim available as user attribute and user identifier --**Type:** New feature -**Service category:** Enterprise Apps -**Product capability:** SSO --You can configure **EmployeeID** as the User identifier and User attribute for member users and B2B guests in SAML-based sign-on applications from the Enterprise application UI. 
--For more information, see [Customizing claims issued in the SAML token for enterprise applications in Azure Active Directory](../develop/active-directory-saml-claims-customization.md). ----### Simplified Application Management using Wildcards in Azure AD Application Proxy --**Type:** New feature -**Service category:** App Proxy -**Product capability:** User Authentication --To make application deployment easier and reduce your administrative overhead, we now support the ability to publish applications using wildcards. To publish a wildcard application, you can follow the standard application publishing flow, but use a wildcard in the internal and external URLs. --For more information, see [Wildcard applications in the Azure Active Directory application proxy](../app-proxy/application-proxy-wildcard.md) ----### New cmdlets to support configuration of Application Proxy --**Type:** New feature -**Service category:** App Proxy -**Product capability:** Platform --The latest release of the AzureAD PowerShell Preview module contains new cmdlets that allow customers to configure Application Proxy Applications using PowerShell. --The new cmdlets are: --- Get-AzureADApplicationProxyApplication-- Get-AzureADApplicationProxyApplicationConnectorGroup-- Get-AzureADApplicationProxyConnector-- Get-AzureADApplicationProxyConnectorGroup-- Get-AzureADApplicationProxyConnectorGroupMembers-- Get-AzureADApplicationProxyConnectorMemberOf-- New-AzureADApplicationProxyApplication-- New-AzureADApplicationProxyConnectorGroup-- Remove-AzureADApplicationProxyApplication-- Remove-AzureADApplicationProxyApplicationConnectorGroup-- Remove-AzureADApplicationProxyConnectorGroup-- Set-AzureADApplicationProxyApplication-- Set-AzureADApplicationProxyApplicationConnectorGroup-- Set-AzureADApplicationProxyApplicationCustomDomainCertificate-- Set-AzureADApplicationProxyApplicationSingleSignOn-- Set-AzureADApplicationProxyConnector-- Set-AzureADApplicationProxyConnectorGroup----### New cmdlets to support configuration of groups --**Type:** New feature -**Service category:** App Proxy -**Product capability:** Platform --The latest release of the AzureAD PowerShell module contains cmdlets to manage groups in Azure AD. These cmdlets were previously available in the AzureADPreview module and are now added to the AzureAD module --The Group cmdlets that are now release for General Availability are: --- Get-AzureADMSGroup-- New-AzureADMSGroup-- Remove-AzureADMSGroup-- Set-AzureADMSGroup-- Get-AzureADMSGroupLifecyclePolicy-- New-AzureADMSGroupLifecyclePolicy-- Remove-AzureADMSGroupLifecyclePolicy-- Add-AzureADMSLifecyclePolicyGroup-- Remove-AzureADMSLifecyclePolicyGroup-- Reset-AzureADMSLifeCycleGroup-- Get-AzureADMSLifecyclePolicyGroup----### A new release of Azure AD Connect is available --**Type:** New feature -**Service category:** AD Sync -**Product capability:** Platform --Azure AD Connect is the preferred tool to synchronize data between Azure AD and on premises data sources, including Windows Server Active Directory and LDAP. -->[!Important] ->This build introduces schema and sync rule changes. The Azure AD Connect Synchronization Service triggers a Full Import and Full Synchronization steps after an upgrade. For information on how to change this behavior, see [How to defer full synchronization after upgrade](../hybrid/how-to-upgrade-previous-version.md#how-to-defer-full-synchronization-after-upgrade). 
--This release has the following updates and changes: --**Fixed issues** --- Fix timing window on background tasks for Partition Filtering page when switching to next page.--- Fixed a bug that caused Access violation during the ConfigDB custom action.--- Fixed a bug to recover from sql connection timeout.--- Fixed a bug where certificates with SAN wildcards fail pre-req check.--- Fixed a bug that causes miiserver.exe crash during Azure AD connector export.--- Fixed a bug where a bad password attempt logged on DC when running caused the Azure AD connect wizard to change configuration--**New features and improvements** --- Application telemetry - Administrators can switch this class of data on/off.--- Azure AD Health data - Administrators must visit the health portal to control their health settings. Once the service policy has been changed, the agents will read and enforce it.--- Added device writeback configuration actions and a progress bar for page initialization.--- Improved general diagnostics with HTML report and full data collection in a ZIP-Text / HTML Report.--- Improved reliability of auto upgrade and added additional telemetry to ensure the health of the server can be determined.--- Restrict permissions available to privileged accounts on AD Connector account. For new installations, the wizard restricts the permissions that privileged accounts have on the MSOL account after creating the MSOL account. The changes affect express installations and custom installations with Auto-Create account.--- Changed the installer to not require SA privilege on clean install of AADConnect.--- New utility to troubleshoot synchronization issues for a specific object. Currently, the utility checks for the following things:-- - UserPrincipalName mismatch between synchronized user object and the user account in Azure AD Tenant. -- - If the object is filtered from synchronization due to domain filtering -- - If the object is filtered from synchronization due to organizational unit (OU) filtering --- New utility to synchronize the current password hash stored in the on-premises Active Directory for a specific user account. The utility does not require a password change.----### Applications supporting Intune App Protection policies added for use with Azure AD application-based Conditional Access --**Type:** Changed feature -**Service category:** Conditional Access -**Product capability:** Identity Security & Protection --We have added more applications that support application-based Conditional Access. Now, you can get access to Office 365 and other Azure AD-connected cloud apps using these approved client apps. --The following applications will be added by the end of February: --- Microsoft Power BI--- Microsoft Launcher--- Microsoft Invoicing--For more information, see: --- [Approved client app requirement](../conditional-access/concept-conditional-access-conditions.md#client-apps)-- [Azure AD app-based Conditional Access](../conditional-access/app-based-conditional-access.md)----### Terms of use update to mobile experience --**Type:** Changed feature -**Service category:** Terms of use -**Product capability:** Compliance --When the terms of use are displayed, you can now select **Having trouble viewing? Click here**. Clicking this link opens the terms of use natively on your device. Regardless of the font size in the document or the screen size of device, you can zoom and read the document as needed. 
----## January 2018 --### New Federated Apps available in Azure AD app gallery --**Type:** New feature -**Service category:** Enterprise Apps -**Product capability:** 3rd Party Integration --In January 2018, the following new apps with federation support were added in the app gallery: --[IBM OpenPages](../saas-apps/ibmopenpages-tutorial.md), [OneTrust Privacy Management Software](../saas-apps/onetrust-tutorial.md), [Dealpath](../saas-apps/dealpath-tutorial.md), [IriusRisk Federated Directory, and [Fidelity NetBenefits](../saas-apps/fidelitynetbenefits-tutorial.md). --For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). --For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../manage-apps/v2-howto-app-gallery-listing.md). ----### Sign in with additional risk detected --**Type:** New feature -**Service category:** Identity Protection -**Product capability:** Identity Security & Protection --The insight you get for a detected risk detection is tied to your Azure AD subscription. With the Azure AD Premium P2 edition, you get the most detailed information about all underlying detections. --With the Azure AD Premium P1 edition, detections that aren't covered by your license appear as the risk detection Sign-in with additional risk detected. --For more information, see [Azure Active Directory risk detections](../identity-protection/overview-identity-protection.md). ----### Hide Office 365 applications from end user's access panels --**Type:** New feature -**Service category:** My Apps -**Product capability:** SSO --You can now better manage how Office 365 applications show up on your user's access panels through a new user setting. This option is helpful for reducing the number of apps in a user's access panels if you prefer to only show Office apps in the Office portal. The setting is located in the **User Settings** and is labeled, **Users can only see Office 365 apps in the Office 365 portal**. --For more information, see [Hide an application from user's experience in Azure Active Directory](../manage-apps/hide-application-from-user-portal.md). ----### Seamless sign into apps enabled for Password SSO directly from app's URL --**Type:** New feature -**Service category:** My Apps -**Product capability:** SSO --The My Apps browser extension is now available via a convenient tool that gives you the My Apps single-sign on capability as a shortcut in your browser. After installing, user's will see a waffle icon in their browser that provides them quick access to apps. Users can now take advantage of: --- The ability to directly sign in to password-SSO based apps from the app's sign-in page-- Launch any app using the quick search feature-- Shortcuts to recently used apps from the extension-- The extension is available for Microsoft Edge, Chrome, and Firefox.--For more information, see [My Apps Secure Sign-in Extension](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510#download-and-install-the-my-apps-secure-sign-in-extension). ----### Azure AD administration experience in Azure Classic Portal has been retired --**Type:** Deprecated -**Service category:** Azure AD -**Product capability:** Directory --As of January 8, 2018, the Azure AD administration experience in the Azure classic portal has been retired. 
This took place in conjunction with the retirement of the Azure classic portal itself. In the future, you should use the [Azure portal](https://portal.azure.com) for all your portal-based administration of Azure AD. ----### The PhoneFactor web portal has been retired --**Type:** Deprecated -**Service category:** Azure AD -**Product capability:** Directory --As of January 8, 2018, the PhoneFactor web portal has been retired. This portal was used for the administration of multi-factor authentication (MFA) server, but those functions have been moved into the Azure portal at portal.azure.com. --The multifactor authentication (MFA) configuration is located at: **Azure Active Directory \> multi-factor authentication (MFA) Server** ----### Deprecate Azure AD reports --**Type:** Deprecated -**Service category:** Reporting -**Product capability:** Identity Lifecycle Management ---With the general availability of the new Azure Active Directory Administration console and new APIs now available for both activity and security reports, the report APIs under "/reports" endpoint have been retired as of end of December 31, 2017. --**What's available?** --As part of the transition to the new admin console, we have made 2 new APIs available for retrieving Azure AD Activity Logs. The new set of APIs provides richer filtering and sorting functionality in addition to providing richer audit and sign-in activities. The data previously available through the security reports can now be accessed through the Identity Protection risk detections API in Microsoft Graph. --For more information, see: --- [Get started with the Azure Active Directory reporting API](../reports-monitoring/concept-reporting-api.md)--- [Get started with Azure Active Directory Identity Protection and Microsoft Graph](../identity-protection/howto-identity-protection-graph-api.md)-- |
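The February 2018 entries in the archive above list new AzureAD PowerShell Preview cmdlets for configuring Application Proxy. As a rough, hypothetical sketch only (not taken from the release notes), publishing an app with those cmdlets might look like the following; it assumes the AzureADPreview module is installed and `Connect-AzureAD` has already been run, and the display name and URLs are placeholders:

```powershell
# Hypothetical sketch: publish an Application Proxy app with the cmdlets listed above.
# Assumes the AzureADPreview module is installed and Connect-AzureAD has been run.
# The display name and URLs are placeholders, not values from the release notes.
New-AzureADApplicationProxyApplication `
    -DisplayName "Contoso intranet" `
    -InternalUrl "https://intranet.contoso.com/" `
    -ExternalUrl "https://intranet-contoso.msappproxy.net/"

# List the connector groups in the tenant with another cmdlet from the same release.
Get-AzureADApplicationProxyConnectorGroup
```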
active-directory | How To Connect Group Writeback V2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-group-writeback-v2.md | To see the default behavior in your environment for newly created groups, use th You can also use the PowerShell cmdlet [AzureADDirectorySetting](../enterprise-users/groups-settings-cmdlets.md). -> Example: `(Get-AzureADDirectorySetting | ? { $_.DisplayName -eq "Group.Unified"} | FL *).values` +> Example: `Get-AzureADDirectorySetting | ? { $_.DisplayName -eq "Group.Unified"} | Select-Object -ExpandProperty Values` > If nothing is returned, you're using the default directory settings. Newly created Microsoft 365 groups *will automatically* be written back. |
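Building on the one-liner in the row above, a slightly fuller sketch of the same check might look like this. It's an illustration under stated assumptions (AzureADPreview module installed, `Connect-AzureAD` session already open), not text from the source article:

```powershell
# Sketch only: check whether a Group.Unified directory setting exists and inspect its values.
# Assumes the AzureADPreview module is installed and Connect-AzureAD has been run.
$setting = Get-AzureADDirectorySetting | Where-Object { $_.DisplayName -eq "Group.Unified" }

if ($null -eq $setting) {
    # Nothing returned: the tenant uses the default directory settings, so newly
    # created Microsoft 365 groups are written back automatically.
    "No Group.Unified setting found - default directory settings are in effect."
}
else {
    # Inspect the individual name/value pairs, including any group writeback defaults.
    $setting.Values
}
```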
active-directory | How To Connect Install Prerequisites | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-install-prerequisites.md | To read more about securing your Active Directory environment, see [Best practic #### Installation prerequisites -- Azure AD Connect must be installed on a domain-joined Windows Server 2016 or later - **note that Windows Server 2022 is not yet supported**. You can deploy Azure AD Connect on Windows Server 2016 but since Windows Server 2016 is in extended support, you may require [a paid support program](/lifecycle/policies/fixed#extended-support) if you require support for this configuration. We recommend the usage of domain joined Windows Server 2019.+- Azure AD Connect must be installed on a domain-joined Windows Server 2016 or later. You can deploy Azure AD Connect on Windows Server 2016 but since Windows Server 2016 is in extended support, you may require [a paid support program](/lifecycle/policies/fixed#extended-support) if you require support for this configuration. We recommend the usage of domain joined Windows Server 2022. - The minimum .NET Framework version required is 4.6.2, and newer versions of .Net are also supported. - Azure AD Connect can't be installed on Small Business Server or Windows Server Essentials before 2019 (Windows Server Essentials 2019 is supported). The server must be using Windows Server standard or better. - The Azure AD Connect server must have a full GUI installed. Installing Azure AD Connect on Windows Server Core isn't supported. |
active-directory | Reference Connect Health Version History | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-health-version-history.md | The Azure Active Directory team regularly updates Azure AD Connect Health with n Azure AD Connect Health for Sync is integrated with Azure AD Connect installation. Read more about [Azure AD Connect release history](./reference-connect-version-history.md) For feature feedback, vote at [Connect Health User Voice channel](https://feedback.azure.com/d365community/forum/22920db1-ad25-ec11-b6e6-000d3a4f0789) +## 27 March 2023 +**Agent Update** ++Azure AD Connect Health ADDS and ADFS Health Agents (version 3.2.2256.26) ++- We created a fix so that the agents are FIPS compliant. + - The change has the agents set `CloudStorageAccount.UseV1MD5 = false` so the agent uses only FIPS-compliant cryptography; otherwise, the Azure blob client causes FIPS exceptions to be thrown. +- Updated the Newtonsoft.Json library from 12.0.1 to 13.0.1 to resolve a component governance alert. +- In the ADFS health agent, the TestADFSDuplicateSPN test was disabled because it was unreliable; it generated misleading alerts when the server experienced transient connectivity issues. + ## 19 January 2023 **Agent Update** - Azure AD Connect Health agent for Azure AD Connect (version 3.2.2188.23) |
active-directory | Reference Connect Sync Attributes Synchronized | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-sync-attributes-synchronized.md | In this case, start with the list of attributes in this topic and identify those | targetAddress |X |X | | | | telephoneAssistant |X |X | | | | telephoneNumber |X |X | | |-| thumbnailphoto |X |X | |synced only once from Azure AD to Exchange Online after which Exchange Online becomes source of authority for this attribute and any later changes can't be synced from on-premises. See ([KB](https://support.microsoft.com/help/3062745/user-photos-aren-t-synced-from-the-on-premises-environment-to-exchange)) for more.| +| thumbnailphoto |X |X | |Synced to M365 profile photo periodically. Admins can set the frequency of the sync by changing the Azure AD Connect value. Please note that if users change their photo both on-premises and in cloud in a time span that is less than the Azure AD Connect value, we do not guarantee that the latest photo will be served.| | title |X |X | | | | unauthOrig |X |X |X | | | usageLocation |X | | |mechanical property. The userΓÇÖs country/region. Used for license assignment. | In this case, start with the list of attributes in this topic and identify those | targetAddress |X |X | | | | telephoneAssistant |X |X | | | | telephoneNumber |X |X | | |-| thumbnailphoto |X |X | |synced only once from Azure AD to Exchange Online after which Exchange Online becomes source of authority for this attribute and any later changes can't be synced from on-premises. See ([KB](https://support.microsoft.com/help/3062745/user-photos-aren-t-synced-from-the-on-premises-environment-to-exchange)) for more.| +| thumbnailphoto |X |X | |Synced to M365 profile photo periodically. Admins can set the frequency of the sync by changing the Azure AD Connect value. Please note that if users change their photo both on-premises and in cloud in a time span that is less than the Azure AD Connect value, we do not guarantee that the latest photo will be served.| | title |X |X | | | | unauthOrig |X |X |X | | | url |X |X | | | In this case, start with the list of attributes in this topic and identify those | st |X |X | | | | streetAddress |X |X | | | | telephoneNumber |X |X | | |-| thumbnailphoto |X |X | |synced only once from Azure AD to Exchange Online after which Exchange Online becomes source of authority for this attribute and any later changes can't be synced from on-premises. See ([KB](https://support.microsoft.com/help/3062745/user-photos-aren-t-synced-from-the-on-premises-environment-to-exchange)) for more.| +| thumbnailphoto |X |X | |Synced to M365 profile photo periodically. Admins can set the frequency of the sync by changing the Azure AD Connect value. Please note that if users change their photo both on-premises and in cloud in a time span that is less than the Azure AD Connect value, we do not guarantee that the latest photo will be served.| | title |X |X | | | | usageLocation |X | | |mechanical property. The userΓÇÖs country/region. Used for license assignment. | | userPrincipalName |X | | |UPN is the login ID for the user. Most often the same as [mail] value. | |
active-directory | Howto Enforce Signed Saml Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/howto-enforce-signed-saml-authentication.md | If enabled Azure Active Directory will validate the requests against the public - Key identifier in request is missing and two most recently added certificates don't match with the request signature. - Request signed but algorithm missing. - No certificate matching with provided key identifier. -- Signature algorithm not allowed. Only RSA-SHA256 is supported. +- Signature algorithm not allowed. Only RSA-SHA256 is supported. ++> [!NOTE] +> A `Signature` element in `AuthnRequest` elements is optional. If `Require Verification certificates` is not checked, Azure AD does not validate signed authentication requests if a signature is present. Requestor verification is provided by responding only to registered Assertion Consumer Service URLs. ++> If `Require Verification certificates` is checked, SAML Request Signature Verification works for SP-initiated (service provider/relying party initiated) authentication requests only. Only the application configured by the service provider has access to the private and public keys for signing the incoming SAML authentication requests from the application. The public key should be uploaded to allow verification of the request, in which case Azure AD has access to only the public key. ++> Enabling `Require Verification certificates` does not allow IDP-initiated authentication requests (like the SSO testing feature, MyApps, or the M365 app launcher) to be validated, because the IDP doesn't possess the same private keys as the registered application. ## To configure SAML Request Signature Verification in the Azure portal |
active-directory | What Is Application Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/what-is-application-management.md | Your Azure AD reporting and monitoring solution depends on your legal, security, You can clean up access to applications. For example, [removing a user's access](methods-for-removing-user-access.md). You can also [disable how a user signs in](disable-user-sign-in-portal.md). And finally, you can delete the application if it's no longer needed for the organization. For more information on how to delete an enterprise application from your Azure AD tenant, see [Quickstart: Delete an enterprise application](delete-application-portal.md). +## Guided walkthrough ++For a guided walkthrough of many of the recommendations in this article, see the [Microsoft 365 Secure your cloud apps with Single Sign On (SSO) guided walkthrough](https://go.microsoft.com/fwlink/?linkid=2221502). + ## Next steps - Get started by adding your first enterprise application with the [Quickstart: Add an enterprise application](add-application-portal.md). |
active-directory | Concept All Sign Ins | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-all-sign-ins.md | Title: Sign-in logs (preview) in Azure Active Directory -description: Conceptual information about Azure AD sign-in logs, including new features in preview. + Title: Sign-in logs (preview) +description: Conceptual information about sign-in logs, including new features in preview. You can customize the list view by clicking **Columns** in the toolbar.  +#### Considerations for MFA sign-ins ++When a user signs in with MFA, several separate MFA events are actually taking place. For example, if a user enters the wrong validation code or doesn't respond in time, additional MFA events are sent to reflect the latest status of the sign-in attempt. These sign-in events appear as one line item in the Azure AD sign-in logs. That same sign-in event in Azure Monitor, however, appears as multiple line items. These events all have the same `correlationId`. + ### Non-interactive user sign-ins -Like interactive user sign-ins, non-interactive sign-ins are done on behalf of a user. These sign-ins were performed by a client app or OS components on behalf of a user and don't require the user to provide an authentication factor. Instead, the device or client app uses a token or code to authenticate or access a resource on behalf of a user. In general, the user will perceive these sign-ins as happening in the background. +Like interactive user sign-ins, non-interactive sign-ins are done on behalf of a user. These sign-ins were performed by a client app or OS components on behalf of a user and don't require the user to provide an authentication factor. Instead, the device or client app uses a token or code to authenticate or access a resource on behalf of a user. In general, the user perceives these sign-ins as happening in the background. **Report size:** Large </br> **Examples:** You can't customize the fields shown in this report. To make it easier to digest the data, non-interactive sign-in events are grouped. Clients often create many non-interactive sign-ins on behalf of the same user in a short time period. The non-interactive sign-ins share the same characteristics except for the time the sign-in was attempted. For example, a client may get an access token once per hour on behalf of a user. If the state of the user or client doesn't change, the IP address, resource, and all other information is the same for each access token request. The only state that does change is the date and time of the sign-in. -When Azure AD logs multiple sign-ins that are identical other than time and date, those sign-ins will be from the same entity and are aggregated into a single row. A row with multiple identical sign-ins (except for date and time issued) will have a value greater than 1 in the *# sign-ins* column. These aggregated sign-ins may also appear to have the same time stamps. The **Time aggregate** filter can set to 1 hour, 6 hours, or 24 hours. You can expand the row to see all the different sign-ins and their different time stamps. +When Azure AD logs multiple sign-ins that are identical other than time and date, those sign-ins are from the same entity and are aggregated into a single row. A row with multiple identical sign-ins (except for date and time issued) have a value greater than 1 in the *# sign-ins* column. These aggregated sign-ins may also appear to have the same time stamps. 
The **Time aggregate** filter can be set to 1 hour, 6 hours, or 24 hours. You can expand the row to see all the different sign-ins and their different time stamps. Sign-ins are aggregated in the non-interactive users when the following data matches: The IP address of non-interactive sign-ins doesn't match the actual source IP of ### Service principal sign-ins -Unlike interactive and non-interactive user sign-ins, service principal sign-ins don't involve a user. Instead, they're sign-ins by any non-user account, such as apps or service principals (except managed identity sign-ins, which are included only in the managed identity sign-in log). In these sign-ins, the app or service provides its own credential, such as a certificate or app secret to authenticate or access resources. +Unlike interactive and non-interactive user sign-ins, service principal sign-ins don't involve a user. Instead, they're sign-ins by any nonuser account, such as apps or service principals (except managed identity sign-ins, which are included only in the managed identity sign-in log). In these sign-ins, the app or service provides its own credential, such as a certificate or app secret to authenticate or access resources. **Report size:** Large </br> Select the **Add filters** option from the top of the table to get started.  -There are several filter options to choose from. Below are some notable options and details. +There are several filter options to choose from: - **User:** The *user principal name* (UPN) of the user in question. - **Status:** Options are *Success*, *Failure*, and *Interrupted*. |
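The MFA note in the sign-in logs row above says a single sign-in attempt can surface in Azure Monitor as several records that share a `correlationId`. As an illustrative sketch only (not from the article), the following assumes the Microsoft Graph PowerShell SDK is installed and uses a placeholder UPN to group a user's recent sign-in records by correlation ID:

```powershell
# Sketch only: group a user's recent sign-in records by CorrelationId to see the
# separate MFA-related events that belong to the same sign-in attempt.
# Assumes the Microsoft Graph PowerShell SDK (Microsoft.Graph.Reports); the UPN is a placeholder.
Connect-MgGraph -Scopes "AuditLog.Read.All"

Get-MgAuditLogSignIn -Filter "userPrincipalName eq 'b.simon@contoso.com'" -Top 50 |
    Group-Object -Property CorrelationId |
    Where-Object { $_.Count -gt 1 } |
    Select-Object Count, Name
```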
active-directory | Concept Sign Ins | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-sign-ins.md | Select the **Add filters** option from the top of the table to get started.  -There are several filter options to choose from. Below are some notable options and details. +There are several filter options to choose from: - **User:** The *user principal name* (UPN) of the user in question. - **Status:** Options are *Success*, *Failure*, and *Interrupted*. There are several filter options to choose from. Below are some notable options - *Not applied:* No policy applied to the user and application during sign-in. - *Success:* One or more CA policies applied to the user and application (but not necessarily the other conditions) during sign-in. - *Failure:* The sign-in satisfied the user and application condition of at least one CA policy and grant controls are either not satisfied or set to block access.-- **IP addresses:** There is no definitive connection between an IP address and where the computer with that address is physically located. Mobile providers and VPNs issue IP addresses from central pools that are often far from where the client device is actually used. Currently, converting IP address to a physical location is a best effort based on traces, registry data, reverse lookups and other information.+- **IP addresses:** There's no definitive connection between an IP address and where the computer with that address is physically located. Mobile providers and VPNs issue IP addresses from central pools that are often far from where the client device is actually used. Currently, converting IP address to a physical location is a best effort based on traces, registry data, reverse lookups and other information. The following table provides the options and descriptions for the **Client app** filter option. Now that your sign-in logs table is formatted appropriately, you can more effect ### Sign-in error codes -If a sign-in failed, you can get more information about the reason in the **Basic info** section of the related log item. The error code and associated failure reason appear in the details. Because of the complexity of some Azure AD environments, we cannot document every possible error code and resolution. Some errors may require [submitting a support request](../fundamentals/how-to-get-support.md) to resolve the issue. +If a sign-in failed, you can get more information about the reason in the **Basic info** section of the related log item. The error code and associated failure reason appear in the details. Because of the complexity of some Azure AD environments, we can't document every possible error code and resolution. Some errors may require [submitting a support request](../fundamentals/how-to-get-support.md) to resolve the issue.  When analyzing authentication details, take note of the following details: - The **Primary authentication** row isn't initially logged. - If you're unsure of a detail in the logs, gather the **Request ID** and **Correlation ID** to use for further analyzing or troubleshooting. +#### Considerations for MFA sign-ins ++When a user signs in with MFA, several separate MFA events are actually taking place. For example, if a user enters the wrong validation code or doesn't respond in time, additional MFA events are sent to reflect the latest status of the sign-in attempt. These sign-in events appear as one line item in the Azure AD sign-in logs. 
That same sign-in event in Azure Monitor, however, appears as multiple line items. These events all have the same `correlationId`. + ## Sign-in data used by other services Sign-in data is used by several services in Azure to monitor risky sign-ins and provide insight into application usage. |
active-directory | Recommendation Migrate Apps From Adfs To Azure Ad | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/recommendation-migrate-apps-from-adfs-to-azure-ad.md | -[Azure AD recommendations](overview-recommendations.md) is a feature that provides you with personalized insights and actionable guidance to align your tenant with recommended best practices. +[Azure AD recommendations](overview-recommendations.md) provides you with personalized insights and actionable guidance to align your tenant with recommended best practices. This article covers the recommendation to migrate apps from Active Directory Federated Services (AD FS) to Azure Active Directory (Azure AD). This recommendation is called `adfsAppsMigration` in the recommendations API in Microsoft Graph. Using Azure AD gives you granular per-application access controls to secure acce ## Action plan 1. [Install Azure AD Connect Health](../hybrid/how-to-connect-install-roadmap.md) on your AD FS server. +1. [Review the AD FS application activity report](../manage-apps/migrate-adfs-application-activity.md) to get insights about your AD FS applications. +1. Read the solution guide for [migrating applications to Azure AD](../manage-apps/migrate-adfs-apps-to-azure.md). +1. Migrate applications to Azure AD. For more information, see the article [Migrate from federation to cloud authentication](../hybrid/migrate-from-federation-to-cloud-authentication.md). -2. [Review the AD FS application activity report](../manage-apps/migrate-adfs-application-activity.md) to get insights about your AD FS applications. +### Guided walkthrough -3. Read the solution guide for [migrating applications to Azure AD](../manage-apps/migrate-adfs-apps-to-azure.md). +For a guided walkthrough of many of the recommendations in this article, see the migration guide [Migrate from AD FS to Microsoft Azure Active Directory for identity management](https://setup.microsoft.com/azure/migrate-ad-fs-to-microsoft-azure-ad). -4. Migrate applications to Azure AD. For more information, use [the deployment plan for enabling single sign-on](https://go.microsoft.com/fwlink/?linkid=2110877&clcid=0x409). - ## Next steps - [Review the Azure AD recommendations overview](overview-recommendations.md) |
active-directory | Citi Program Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/citi-program-tutorial.md | + + Title: Azure Active Directory SSO integration with CITI Program +description: Learn how to configure single sign-on between Azure Active Directory and CITI Program. ++++++++ Last updated : 03/26/2023+++++# Azure Active Directory SSO integration with CITI Program ++In this article, you learn how to integrate CITI Program with Azure Active Directory (Azure AD). The CITI Program identifies education and training needs in the communities we serve and provides high quality, peer-reviewed, web-based educational materials to meet those needs. When you integrate CITI Program with Azure AD, you can: ++* Control in Azure AD who has access to CITI Program. +* Enable your users to be automatically signed-in to CITI Program with their Azure AD accounts. +* Manage your accounts in one central location - the Azure portal. ++You configure and test Azure AD single sign-on for CITI Program in a test environment. CITI Program supports **SP** initiated single sign-on and **Just In Time** user provisioning. ++> [!NOTE] +> Identifier of this application is a fixed string value so only one instance can be configured in one tenant. ++## Prerequisites ++To integrate Azure Active Directory with CITI Program, you need: ++* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). +* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal. +* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). +* CITI Program single sign-on (SSO) enabled subscription. ++## Add application and assign a test user ++Before you begin the process of configuring single sign-on, you need to add the CITI Program application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration. ++### Add CITI Program from the Azure AD gallery ++Add CITI Program from the Azure AD application gallery to configure single sign-on with CITI Program. For more information on how to add application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md). ++### Create and assign Azure AD test user ++Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon. ++Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides). ++## Configure Azure AD SSO ++Complete the following steps to enable Azure AD single sign-on in the Azure portal. ++1. In the Azure portal, on the **CITI Program** application integration page, find the **Manage** section and select **single sign-on**. +1. On the **Select a single sign-on method** page, select **SAML**. +1. 
On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings. ++  ++1. On the **Basic SAML Configuration** section, perform the following steps: ++ a. In the **Identifier** textbox, type the URL: + `https://www.citiprogram.org/shibboleth` ++ b. In the **Reply URL** textbox, type the URL: + `https://www.citiprogram.org/Shibboleth.sso/SAML2/POST` ++ c. In the **Sign on URL** textbox, type a URL using the following pattern: + `https://www.citiprogram.org/Shibboleth.sso/Login?target=https://www.citiprogram.org/Secure/Welcome.cfm?inst=<InstitutionID>&entityID=<EntityID>` ++ > [!NOTE] + > This value is not real. Update this value with the actual Sign on URL. Contact [CITI Program support team](mailto:shibboleth@citiprogram.org) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. ++1. CITI Program application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes. ++  ++1. In addition to above, CITI Program application expects few more attributes to be passed back in SAML response, which are shown below. These attributes are also pre populated but you can review them as per your requirements. ++ | Name | Source Attribute| + | | | + | urn:oid:1.3.6.1.4.1.5923.1.1.1.6 | user.userprincipalname | + | urn:oid:0.9.2342.19200300.100.1.3 | user.userprincipalname | + | urn:oid:2.5.4.42 | user.givenname | + | urn:oid:2.5.4.4 | user.surname | ++1. On the **Set-up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer. ++  ++1. On the **Set up CITI Program** section, copy the appropriate URL(s) based on your requirement. ++  ++## Configure CITI Program SSO ++To configure single sign-on on **CITI Program** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [CITI Program support team](mailto:shibboleth@citiprogram.org). They set this setting to have the SAML SSO connection set properly on both sides. ++### Create CITI Program test user ++In this section, a user called B.Simon is created in CITI Program. CITI Program supports just-in-time user provisioning, which is enabled by default. There's no action item for you in this section. If a user doesn't already exist in CITI Program, a new one is commonly created after authentication. ++## Test SSO ++In this section, you test your Azure AD single sign-on configuration with following options. ++* Click on **Test this application** in Azure portal. This will redirect to CITI Program Sign-on URL where you can initiate the login flow. ++* Go to CITI Program Sign-on URL directly and initiate the login flow from there. ++* You can use Microsoft My Apps. When you click the CITI Program tile in the My Apps, this will redirect to CITI Program Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md). ++## Additional resources ++* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) +* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md). 
++## Next steps ++Once you configure CITI Program, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad). |
active-directory | Infor Cloudsuite Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/infor-cloudsuite-provisioning-tutorial.md | This section guides you through the steps to configure the Azure AD provisioning 9. Review the user attributes that are synchronized from Azure AD to Infor CloudSuite in the **Attribute Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Infor CloudSuite for update operations. Select the **Save** button to commit any changes. -  + |Attribute|Type|Supported for filtering|Required by Infor CloudSuite| + ||||| + |userName|String|✓|✓ + |active|Boolean|| + |displayName|String|| + |externalId|String|| + |name.familyName|String|| + |name.givenName|String|| + |displayName|String|| + |title|String|| + |emails[type eq "work"].value|String|| + |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:manager|String|| + |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department|String|| + |urn:ietf:params:scim:schemas:extension:infor:2.0:User:actorId|String|| + |urn:ietf:params:scim:schemas:extension:infor:2.0:User:federationId|String|| + |urn:ietf:params:scim:schemas:extension:infor:2.0:User:ifsPersonId|String|| + |urn:ietf:params:scim:schemas:extension:infor:2.0:User:inUser|String|| + |urn:ietf:params:scim:schemas:extension:infor:2.0:User:userAlias|String|| + 10. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Infor CloudSuite**. This section guides you through the steps to configure the Azure AD provisioning 11. Review the group attributes that are synchronized from Azure AD to Infor CloudSuite in the **Attribute Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Infor CloudSuite for update operations. Select the **Save** button to commit any changes. -  + |Attribute|Type|Supported for filtering|Required by Infor CloudSuite| + ||||| + |displayName|String|✓|✓ + |members|Reference|| + |externalId|String|| 12. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). This section guides you through the steps to configure the Azure AD provisioning  -This operation starts the initial synchronization of all users and/or groups defined in **Scope** in the **Settings** section. The initial sync takes longer to perform than subsequent syncs, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running. You can use the **Synchronization Details** section to monitor progress and follow links to provisioning activity report, which describes all actions performed by the Azure AD provisioning service on Infor CloudSuite. +This operation starts the initial synchronization cycle of all users defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running. ++## Step 6. 
Monitor your deployment +Once you've configured provisioning, use the following resources to monitor your deployment: ++* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully +* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it's to completion +* If the provisioning configuration seems to be in an unhealthy state, the application goes into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md). + -For more information on how to read the Azure AD provisioning logs, see [Reporting on automatic user account provisioning](../app-provisioning/check-status-user-account-provisioning.md). +## Change log +02/15/2023 - Added support for custom extension user attributes **urn:ietf:params:scim:schemas:extension:infor:2.0:User:actorId**, **urn:ietf:params:scim:schemas:extension:infor:2.0:User:federationId**, **urn:ietf:params:scim:schemas:extension:infor:2.0:User:ifsPersonId**, **urn:ietf:params:scim:schemas:extension:infor:2.0:User:inUser**, and **urn:ietf:params:scim:schemas:extension:infor:2.0:User:userAlias**. -## Additional resources +## More resources * [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md) * [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) |
active-directory | Intradiem Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/intradiem-tutorial.md | + + Title: Azure Active Directory SSO integration with Intradiem +description: Learn how to configure single sign-on between Azure Active Directory and Intradiem. ++++++++ Last updated : 03/26/2023+++++# Azure Active Directory SSO integration with Intradiem ++In this article, you learn how to integrate Intradiem with Azure Active Directory (Azure AD). AI-Powered Productivity Solution that Integrates with Call Center and Workforce Management Software to Improve Savings, Productivity, and Engagement. When you integrate Intradiem with Azure AD, you can: ++* Control in Azure AD who has access to Intradiem. +* Enable your users to be automatically signed-in to Intradiem with their Azure AD accounts. +* Manage your accounts in one central location - the Azure portal. ++You configure and test Azure AD single sign-on for Intradiem in a test environment. Intradiem supports only **SP** initiated single sign-on. ++## Prerequisites ++To integrate Azure Active Directory with Intradiem, you need: ++* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). +* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal. +* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). +* Intradiem single sign-on (SSO) enabled subscription. ++## Add application and assign a test user ++Before you begin the process of configuring single sign-on, you need to add the Intradiem application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration. ++### Add Intradiem from the Azure AD gallery ++Add Intradiem from the Azure AD application gallery to configure single sign-on with Intradiem. For more information on how to add application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md). ++### Create and assign Azure AD test user ++Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon. ++Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides). ++## Configure Azure AD SSO ++Complete the following steps to enable Azure AD single sign-on in the Azure portal. ++1. In the Azure portal, on the **Intradiem** application integration page, find the **Manage** section and select **single sign-on**. +1. On the **Select a single sign-on method** page, select **SAML**. +1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings. ++  ++1. On the **Basic SAML Configuration** section, perform the following steps: ++ a. 
In the **Identifier** textbox, type a URL using one of the following patterns: ++ | **Identifier** | + || + | `https://<CustomerName>.intradiem.com/auth/realms/<CustomerName>` | + | `https://<CustomerName>auth.intradiem.com/auth/realms/<CustomerName>` | ++ b. In the **Reply URL** textbox, type a URL using one of the following patterns: ++ | **Reply URL** | + || + | `https://<CustomerName>auth.intradiem.com/auth/realms/<CustomerName>/broker/<CustomerName>/endpoint` | + | `https://<CustomerName>.intradiem.com/auth/realms/<CustomerName>/broker/<CustomerName>/endpoint` | ++ c. In the **Sign on URL** textbox, type a URL using one of the following patterns: + + | **Sign on URL** | + |-| + | `https://<CustomerName>auth.intradiem.com` | + | `https://<CustomerName>.intradiem.com` | ++ > [!NOTE] + > These values are not real. Update these values with the actual Identifier, Reply URL, and Sign on URL. Contact [Intradiem support team](mailto:support@intradiem.com) to get these values. You can also refer to the patterns shown in the Basic SAML Configuration section in the Azure portal. ++1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy **App Federation Metadata Url** and save it on your computer. ++  ++## Configure Intradiem SSO ++To configure single sign-on on the **Intradiem** side, you need to send the **App Federation Metadata Url** to [Intradiem support team](mailto:support@intradiem.com). They set this setting to have the SAML SSO connection set properly on both sides. ++### Create Intradiem test user ++In this section, you create a user called Britta Simon in Intradiem. Work with [Intradiem support team](mailto:support@intradiem.com) to add the users to the Intradiem platform. Users must be created and activated before you use single sign-on. ++## Test SSO ++In this section, you test your Azure AD single sign-on configuration with the following options. ++* Click **Test this application** in the Azure portal. This redirects you to the Intradiem Sign-on URL, where you can initiate the login flow. ++* Go to the Intradiem Sign-on URL directly and initiate the login flow from there. ++* You can use Microsoft My Apps. When you click the Intradiem tile in My Apps, you're redirected to the Intradiem Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md). ++## Additional resources ++* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) +* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md). ++## Next steps ++Once you configure Intradiem, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad). |
active-directory | Lambda Test Single Sign On Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/lambda-test-single-sign-on-tutorial.md | + + Title: Azure Active Directory SSO integration with LambdaTest Single Sign on +description: Learn how to configure single sign-on between Azure Active Directory and LambdaTest Single Sign on. ++++++++ Last updated : 03/26/2023+++++# Azure Active Directory SSO integration with LambdaTest Single Sign on ++In this article, you learn how to integrate LambdaTest Single Sign on with Azure Active Directory (Azure AD). LambdaTest's Single Sign-on application enables you to self-configure SSO with your Azure AD instance. When you integrate LambdaTest Single Sign on with Azure AD, you can: ++* Control in Azure AD who has access to LambdaTest Single Sign on. +* Enable your users to be automatically signed-in to LambdaTest Single Sign on with their Azure AD accounts. +* Manage your accounts in one central location - the Azure portal. ++You configure and test Azure AD single sign-on for LambdaTest Single Sign on in a test environment. LambdaTest Single Sign on supports both **SP** and **IDP** initiated single sign-on and **Just In Time** user provisioning. ++## Prerequisites ++To integrate Azure Active Directory with LambdaTest Single Sign on, you need: ++* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). +* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal. +* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). +* LambdaTest Single Sign on single sign-on (SSO) enabled subscription. ++## Add application and assign a test user ++Before you begin the process of configuring single sign-on, you need to add the LambdaTest Single Sign on application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration. ++### Add LambdaTest Single Sign on from the Azure AD gallery ++Add LambdaTest Single Sign on from the Azure AD application gallery to configure single sign-on with LambdaTest Single Sign on. For more information on how to add application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md). ++### Create and assign Azure AD test user ++Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon. ++Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides). ++## Configure Azure AD SSO ++Complete the following steps to enable Azure AD single sign-on in the Azure portal. ++1. In the Azure portal, on the **LambdaTest Single Sign on** application integration page, find the **Manage** section and select **single sign-on**. +1. On the **Select a single sign-on method** page, select **SAML**. +1. 
On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings. ++  ++1. On the **Basic SAML Configuration** section, perform the following steps: ++ a. In the **Identifier** textbox, type a value using the following pattern: + `urn:auth0:lambdatest:<CustomerName>` ++ b. In the **Reply URL** textbox, type a URL using the following pattern: + `https://lambdatest.auth0.com/login/callback?connection=<CustomerName>` ++1. If you wish to configure the application in **SP** initiated mode, then perform the following step: ++ In the **Sign on URL** textbox, type the URL: + `https://accounts.lambdatest.com/auth0/login` ++ > [!NOTE] + > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [LambdaTest Single Sign on Client support team](mailto:support@lambdatest.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. ++1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer. ++  ++1. On the **Set up LambdaTest Single Sign on** section, copy the appropriate URL(s) based on your requirement. ++  ++## Configure LambdaTest Single Sign on SSO ++To configure single sign-on on **LambdaTest Single Sign on** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [LambdaTest Single Sign on support team](mailto:support@lambdatest.com). They set this setting to have the SAML SSO connection set properly on both sides. ++### Create LambdaTest Single Sign on test user ++In this section, a user called B.Simon is created in LambdaTest Single Sign on. LambdaTest Single Sign on supports just-in-time user provisioning, which is enabled by default. There's no action item for you in this section. If a user doesn't already exist in LambdaTest Single Sign on, a new one is commonly created after authentication. ++## Test SSO ++In this section, you test your Azure AD single sign-on configuration with following options. ++#### SP initiated: ++* Click on **Test this application** in Azure portal. This will redirect to LambdaTest Single Sign on Sign-on URL where you can initiate the login flow. ++* Go to LambdaTest Single Sign on Sign-on URL directly and initiate the login flow from there. ++#### IDP initiated: ++* Click on **Test this application** in Azure portal and you should be automatically signed in to the LambdaTest Single Sign on for which you set up the SSO. ++You can also use Microsoft My Apps to test the application in any mode. When you click the LambdaTest Single Sign on tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the LambdaTest Single Sign on for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md). ++## Additional resources ++* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) +* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md). 
++## Next steps ++Once you configure LambdaTest Single Sign on, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad). |
active-directory | Sauce Labs Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sauce-labs-tutorial.md | + + Title: Azure Active Directory SSO integration with Sauce Labs +description: Learn how to configure single sign-on between Azure Active Directory and Sauce Labs. ++++++++ Last updated : 03/26/2023+++++# Azure Active Directory SSO integration with Sauce Labs ++In this article, you learn how to integrate Sauce Labs with Azure Active Directory (Azure AD). App integration for single sign-on and automatic account provisioning at Sauce Labs. When you integrate Sauce Labs with Azure AD, you can: ++* Control in Azure AD who has access to Sauce Labs. +* Enable your users to be automatically signed-in to Sauce Labs with their Azure AD accounts. +* Manage your accounts in one central location - the Azure portal. ++You configure and test Azure AD single sign-on for Sauce Labs in a test environment. Sauce Labs supports both **SP** and **IDP** initiated single sign-on and **Just In Time** user provisioning. ++> [!NOTE] +> Identifier of this application is a fixed string value so only one instance can be configured in one tenant. ++## Prerequisites ++To integrate Azure Active Directory with Sauce Labs, you need: ++* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). +* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal. +* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). +* Sauce Labs single sign-on (SSO) enabled subscription. ++## Add application and assign a test user ++Before you begin the process of configuring single sign-on, you need to add the Sauce Labs application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration. ++### Add Sauce Labs from the Azure AD gallery ++Add Sauce Labs from the Azure AD application gallery to configure single sign-on with Sauce Labs. For more information on how to add application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md). ++### Create and assign Azure AD test user ++Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon. ++Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides). ++## Configure Azure AD SSO ++Complete the following steps to enable Azure AD single sign-on in the Azure portal. ++1. In the Azure portal, on the **Sauce Labs** application integration page, find the **Manage** section and select **single sign-on**. +1. On the **Select a single sign-on method** page, select **SAML**. +1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings. ++  ++1. 
On the **Basic SAML Configuration** section, you don't have to perform any steps because the app is already preintegrated with Azure. ++1. If you wish to configure the application in **SP** initiated mode, then perform the following step: ++ In the **Sign on URL** textbox, type the URL: + `https://accounts.saucelabs.com/` ++1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer. ++  ++1. On the **Set up Sauce Labs** section, copy the appropriate URL(s) based on your requirement. ++  ++## Configure Sauce Labs SSO ++To configure single sign-on on the **Sauce Labs** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [Sauce Labs support team](mailto:support@saucelabs.com). They set this setting to have the SAML SSO connection set properly on both sides. ++### Create Sauce Labs test user ++In this section, a user called B.Simon is created in Sauce Labs. Sauce Labs supports just-in-time user provisioning, which is enabled by default. There's no action item for you in this section. If a user doesn't already exist in Sauce Labs, a new one is commonly created after authentication. ++## Test SSO ++In this section, you test your Azure AD single sign-on configuration with the following options. ++#### SP initiated: ++* Click **Test this application** in the Azure portal. This redirects you to the Sauce Labs Sign-on URL, where you can initiate the login flow. ++* Go to the Sauce Labs Sign-on URL directly and initiate the login flow from there. ++#### IDP initiated: ++* Click **Test this application** in the Azure portal, and you should be automatically signed in to the Sauce Labs for which you set up the SSO. ++You can also use Microsoft My Apps to test the application in any mode. When you click the Sauce Labs tile in My Apps, if it's configured in SP mode, you're redirected to the application sign-on page to initiate the login flow. If it's configured in IDP mode, you should be automatically signed in to the Sauce Labs for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md). ++## Additional resources ++* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) +* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md). ++## Next steps ++Once you configure Sauce Labs, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad). |
aks | Load Balancer Standard | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/load-balancer-standard.md | This article covers integration with a public load balancer on AKS. For internal ## Before you begin -Azure Load Balancer is available in two SKUs: *Basic* and *Standard*. The *Standard* SKU is used by default when you create an AKS cluster. The *Standard* SKU gives you access to added functionality, such as a larger backend pool, [multiple node pools](use-multiple-node-pools.md), [Availability Zones](availability-zones.md), and is [secure by default][azure-lb]. It's the recommended load balancer SKU for AKS. --For more information on the *Basic* and *Standard* SKUs, see [Azure Load Balancer SKU comparison][azure-lb-comparison]. --This article assumes you have an AKS cluster with the *Standard* SKU Azure Load Balancer. If you need an AKS cluster, you can create one [using Azure CLI][aks-quickstart-cli], [Azure PowerShell][aks-quickstart-powershell], or [the Azure portal][aks-quickstart-portal]. +* Azure Load Balancer is available in two SKUs: *Basic* and *Standard*. The *Standard* SKU is used by default when you create an AKS cluster. The *Standard* SKU gives you access to added functionality, such as a larger backend pool, [multiple node pools](use-multiple-node-pools.md), [Availability Zones](availability-zones.md), and is [secure by default][azure-lb]. It's the recommended load balancer SKU for AKS. For more information on the *Basic* and *Standard* SKUs, see [Azure Load Balancer SKU comparison][azure-lb-comparison]. +* This article assumes you have an AKS cluster with the *Standard* SKU Azure Load Balancer. If you need an AKS cluster, you can create one [using Azure CLI][aks-quickstart-cli], [Azure PowerShell][aks-quickstart-powershell], or [the Azure portal][aks-quickstart-portal]. +* AKS manages the lifecycle and operations of agent nodes. Modifying the IaaS resources associated with the agent nodes isn't supported. An example of an unsupported operation is making manual changes to the load balancer resource group. > [!IMPORTANT] > If you'd prefer to use your own gateway, firewall, or proxy to provide outbound connection, you can skip the creation of the load balancer outbound pool and respective frontend IP by using [**outbound type as UserDefinedRouting (UDR)**](egress-outboundtype.md). The outbound type defines the egress method for a cluster and defaults to type `LoadBalancer`. |
aks | Private Clusters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/private-clusters.md | Private cluster is available in public regions, Azure Government, and Azure Chin * The `aks-preview` extension 0.5.29 or higher. * If using Azure Resource Manager (ARM) or the Azure REST API, the AKS API version must be 2021-05-01 or higher. * Azure Private Link service is supported on Standard Azure Load Balancer only. Basic Azure Load Balancer isn't supported. -* To use a custom DNS server, add the Azure public IP address 168.63.129.16 as the upstream DNS server in the custom DNS server. For more information about the Azure IP address, see [What is IP address 168.63.129.16?][virtual-networks-168.63.129.16] +* To use a custom DNS server, add the Azure public IP address 168.63.129.16 as the upstream DNS server in the custom DNS server, and make sure to add this public IP address as the *first* DNS server. For more information about the Azure IP address, see [What is IP address 168.63.129.16?][virtual-networks-168.63.129.16] ## Limitations az aks create -n <private-cluster-name> -g <private-cluster-resource-group> --lo The API server endpoint has no public IP address. To manage the API server, you'll need to use a VM that has access to the AKS cluster's Azure Virtual Network (VNet). There are several options for establishing network connectivity to the private cluster. -* Create a VM in the same Azure Virtual Network (VNet) as the AKS cluster. +* Create a VM in the same Azure Virtual Network (VNet) as the AKS cluster using the [`az vm create`][az-vm-create] command with the `--vnet-name` parameter. * Use a VM in a separate network and set up [Virtual network peering][virtual-network-peering]. See the section below for more information on this option. * Use an [Express Route or VPN][express-route-or-VPN] connection. * Use the [AKS `command invoke` feature][command-invoke]. * Use a [private endpoint][private-endpoint-service] connection. -Creating a VM in the same VNET as the AKS cluster is the easiest option. Express Route and VPNs add costs and require additional networking complexity. Virtual network peering requires you to plan your network CIDR ranges to ensure there are no overlapping ranges. +Creating a VM in the same VNet as the AKS cluster is the easiest option. Express Route and VPNs add costs and require additional networking complexity. Virtual network peering requires you to plan your network CIDR ranges to ensure there are no overlapping ranges. ## Virtual network peering Virtual network peering is one way to access your private cluster. To use virtua 1. In the Azure portal, navigate to the resource group that contains your cluster's virtual network. 1. In the right pane, select the virtual network. The virtual network name is in the form *aks-vnet-\**. 1. In the left pane, select **Peerings**. -1. Select **Add**, add the virtual network of the VM, and then create the peering. -1. Go to the virtual network where you have the VM and select **Peerings**. Select the AKS virtual network, and then create the peering. If the address ranges on the AKS virtual network and the VM's virtual network clash, peering fails. For more information, see [Virtual network peering][virtual-network-peering]. +1. Select **Add**, add the virtual network of the VM, and then create the peering. For more information, see [Virtual network peering][virtual-network-peering]. 
## Hub and spoke with custom DNS For associated best practices, see [Best practices for network connectivity and [install-azure-cli]: /cli/azure/install-azure-cli [private-dns-zone-contributor-role]: ../role-based-access-control/built-in-roles.md#dns-zone-contributor [network-contributor-role]: ../role-based-access-control/built-in-roles.md#network-contributor+[az-vm-create]: /cli/azure/vm#az-vm-create |
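As a quick illustration of the first connectivity option listed above (a jumpbox VM in the cluster's virtual network), the following is a minimal, hedged sketch only. The resource group, VM, VNet, and subnet names are placeholders, and the image alias and credentials are assumptions rather than values from the article:

```bash
# Hedged sketch: create a jumpbox VM in the AKS cluster's VNet so it can reach the private API server.
# <jumpbox-rg>, <jumpbox-vm>, <aks-vnet-name>, and <aks-subnet-name> are placeholders.
az vm create \
  --resource-group <jumpbox-rg> \
  --name <jumpbox-vm> \
  --image Ubuntu2204 \
  --admin-username azureuser \
  --generate-ssh-keys \
  --vnet-name <aks-vnet-name> \
  --subnet <aks-subnet-name>
```

From that VM, you can then install the Azure CLI and kubectl, run `az aks get-credentials`, and manage the private cluster as usual.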
aks | Stop Api Upgrade | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/stop-api-upgrade.md | + + Title: Stop cluster upgrades on API breaking changes in Azure Kubernetes Service (AKS) (preview) +description: Learn how to stop minor version change Azure Kubernetes Service (AKS) cluster upgrades on API breaking changes. +++ Last updated : 03/24/2023+++# Stop cluster upgrades on API breaking changes in Azure Kubernetes Service (AKS) ++To stay within a supported Kubernetes version, you usually have to upgrade your version at least once per year and prepare for all possible disruptions. These disruptions include ones caused by API breaking changes and deprecations and dependencies such as Helm and CSI. It can be difficult to anticipate these disruptions and migrate critical workloads without experiencing any downtime. ++Azure Kubernetes Service (AKS) now supports fail fast on minor version change cluster upgrades. This feature alerts you with an error message if it detects usage on deprecated APIs in the goal version. +++## Fail fast on control plane minor version manual upgrades in AKS (preview) ++AKS will fail fast on minor version change cluster manual upgrades if it detects usage on deprecated APIs in the goal version. This will only happen if the following criteria are true: ++- It's a minor version change for the cluster control plane. +- Your Kubernetes goal version is >= 1.26.0. +- The PUT MC request uses a preview API version of >= 2023-01-02-preview. +- The usage is performed within the last 1-12 hours. We record usage hourly, so usage within the last hour isn't guaranteed to appear in the detection. ++If the previous criteria are true and you attempt an upgrade, you'll receive an error message similar to the following example error message: ++``` +Bad Request({ ++ "code": "ValidationError", ++ "message": "Control Plane upgrade is blocked due to recent usage of a Kubernetes API deprecated in the specified version. Please refer to https://kubernetes.io/docs/reference/using-api/deprecation-guide to migrate the usage. To bypass this error, set IgnoreKubernetesDeprecations in upgradeSettings.overrideSettings. Bypassing this error without migrating usage will result in the deprecated Kubernetes API calls failing. Usage details: 1 error occurred:\n\t* usage has been detected on API flowcontrol.apiserver.k8s.io.prioritylevelconfigurations.v1beta1, and was recently seen at: 2023-03-23 20:57:18 +0000 UTC, which will be removed in 1.26\n\n", ++ "subcode": "UpgradeBlockedOnDeprecatedAPIUsage" ++}) +``` ++After receiving the error message, you have two options: ++- Remove usage on your end and wait 12 hours for the current record to expire. +- Bypass the validation to ignore API changes. ++### Remove usage on API breaking changes ++Remove usage on API breaking changes using the following steps: ++1. Remove the deprecated API, which is listed in the error message. +2. Wait 12 hours for the current record to expire. +3. Retry your cluster upgrade. ++### Bypass validation to ignore API changes ++To bypass validation to ignore API breaking changes, update the `"properties":` block of `Microsoft.ContainerService/ManagedClusters` `PUT` operation with the following settings: ++> [!NOTE] +> The date and time you specify for `"until"` has to be in the future. `Z` stands for timezone. The following example is in GMT. For more information, see [Combined date and time representations](https://en.wikipedia.org/wiki/ISO_8601#Combined_date_and_time_representations). 
++``` +{ + "properties": { + "upgradeSettings": { + "overrideSettings": { + "controlPlaneOverrides": [ + "IgnoreKubernetesDeprecations" + ], + "until": "2023-04-01T13:00:00Z" + } + } + } +} +``` ++## Next steps ++In this article, you learned how AKS detects deprecated APIs before an update is triggered and fails the upgrade operation upfront. To learn more about AKS cluster upgrades, see: ++- [Upgrade an AKS cluster][upgrade-cluster] +- [Use Planned Maintenance to schedule and control upgrades for your AKS clusters (preview)][planned-maintenance-aks] ++<!-- INTERNAL LINKS --> +[upgrade-cluster]: upgrade-cluster.md +[planned-maintenance-aks]: planned-maintenance.md |
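If you prefer to drive the `PUT` operation from the command line rather than a raw REST client, one possible approach is `az rest`. This is only a hedged sketch: the subscription, resource group, and cluster names are placeholders, and the request body must be the full managed cluster resource (fetched first and then edited to include the `upgradeSettings.overrideSettings` block shown above), not the fragment by itself:

```bash
# Placeholders: <subscription-id>, <resource-group>, <cluster-name>.
URL="https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ContainerService/managedClusters/<cluster-name>?api-version=2023-01-02-preview"

# Fetch the current cluster resource.
az rest --method get --url "$URL" > cluster.json

# Edit cluster.json to add the upgradeSettings.overrideSettings block under "properties",
# then submit the updated resource.
az rest --method put --url "$URL" --body @cluster.json
```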
aks | Use Metrics Server Vertical Pod Autoscaler | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-metrics-server-vertical-pod-autoscaler.md | Title: Configure Metrics Server VPA in Azure Kubernetes Service (AKS) description: Learn how to vertically autoscale your Metrics Server pods on an Azure Kubernetes Service (AKS) cluster. Previously updated : 03/21/2023 Last updated : 03/27/2023 # Configure Metrics Server VPA in Azure Kubernetes Service (AKS) To update the coefficient values, create a ConfigMap in the overlay *kube-system 1. Create a ConfigMap file named *metrics-server-config.yaml* and copy in the following manifest. ```yml- apiVersion: v1 - kind: ConfigMap - metadata: - name: metrics-server-config - namespace: kube-system - labels: - kubernetes.io/cluster-service: "true" - addonmanager.kubernetes.io/mode: EnsureExists - data: - NannyConfiguration: |- - apiVersion: nannyconfig/v1alpha1 - kind: NannyConfiguration - baseCPU: 100m - cpuPerNode: 1m - baseMemory: 100Mi - memoryPerNode: 8Mi + apiVersion: v1 + kind: ConfigMap + metadata: + name: metrics-server-config + namespace: kube-system + labels: + kubernetes.io/cluster-service: "true" + addonmanager.kubernetes.io/mode: EnsureExists + data: + NannyConfiguration: |- + apiVersion: nannyconfig/v1alpha1 + kind: NannyConfiguration + baseCPU: 100m + cpuPerNode: 1m + baseMemory: 100Mi + memoryPerNode: 8Mi ``` In the ConfigMap example, the resource limit and request are changed to the following: If you would like to bypass VPA for Metrics Server and manually control its reso 1. Create a ConfigMap file named *metrics-server-config.yaml* and copy in the following manifest. ```yml- apiVersion: v1 - kind: ConfigMap - metadata: - name: metrics-server-config - namespace: kube-system - labels: - kubernetes.io/cluster-service: "true" - addonmanager.kubernetes.io/mode: EnsureExists - data: - NannyConfiguration: |- - apiVersion: nannyconfig/v1alpha1 - kind: NannyConfiguration - baseCPU: 100m - cpuPerNode: 0m - baseMemory: 100Mi - memoryPerNode: 0Mi + apiVersion: v1 + kind: ConfigMap + metadata: + name: metrics-server-config + namespace: kube-system + labels: + kubernetes.io/cluster-service: "true" + addonmanager.kubernetes.io/mode: EnsureExists + data: + NannyConfiguration: |- + apiVersion: nannyconfig/v1alpha1 + kind: NannyConfiguration + baseCPU: 100m + cpuPerNode: 0m + baseMemory: 100Mi + memoryPerNode: 0Mi ``` In this ConfigMap example, it changes the resource limit and request to the following: If you would like to bypass VPA for Metrics Server and manually control its reso kubectl -n kube-system delete po metrics-server-pod-name ``` -4. To verify the updated resources took affect, run the following command to review the Metrics Server VPA log. +4. To verify the updated resources took effect, run the following command to review the Metrics Server VPA log. ```bash kubectl -n kube-system logs metrics-server-pod-name -c metrics-server-vpa If you would like to bypass VPA for Metrics Server and manually control its reso 1. If you use the following configmap, the Metrics Server VPA customizations aren't applied. You need add a unit for `baseCPU`. 
```yml- apiVersion: v1 - kind: ConfigMap - metadata: - name: metrics-server-config - namespace: kube-system - labels: - kubernetes.io/cluster-service: "true" - addonmanager.kubernetes.io/mode: EnsureExists - data: - NannyConfiguration: |- - apiVersion: nannyconfig/v1alpha1 - kind: NannyConfiguration - baseCPU: 100 - cpuPerNode: 1m - baseMemory: 100Mi - memoryPerNode: 8Mi + apiVersion: v1 + kind: ConfigMap + metadata: + name: metrics-server-config + namespace: kube-system + labels: + kubernetes.io/cluster-service: "true" + addonmanager.kubernetes.io/mode: EnsureExists + data: + NannyConfiguration: |- + apiVersion: nannyconfig/v1alpha1 + kind: NannyConfiguration + baseCPU: 100 + cpuPerNode: 1m + baseMemory: 100Mi + memoryPerNode: 8Mi ``` The following example output resembles the results showing the updated throttling settings aren't applied. Metrics Server is a component in the core metrics pipeline. For more information [metrics-server-api-design]: https://github.com/kubernetes/design-proposals-archive/blob/main/instrumentation/resource-metrics-api.md <! INTERNAL LINKS >-[horizontal-pod-autoscaler]: concepts-scale.md#horizontal-pod-autoscaler +[horizontal-pod-autoscaler]: concepts-scale.md#horizontal-pod-autoscaler |
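To round out the workflow described above, here's a hedged sketch of applying the ConfigMap and restarting Metrics Server so the new nanny coefficients take effect. The `k8s-app=metrics-server` label selector is an assumption about how the pods are labeled; the article itself identifies the pods by name:

```bash
# Apply the ConfigMap that holds the nanny coefficients.
kubectl apply -f metrics-server-config.yaml

# Find the metrics-server pods (the k8s-app=metrics-server label is an assumption).
kubectl -n kube-system get pods -l k8s-app=metrics-server

# Delete a pod so it's recreated and the nanny re-reads the configuration.
kubectl -n kube-system delete pod <metrics-server-pod-name>
```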
aks | Workload Identity Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-overview.md | Title: Use an Azure AD workload identities (preview) on Azure Kubernetes Service (AKS) description: Learn about Azure Active Directory workload identity (preview) for Azure Kubernetes Service (AKS) and how to migrate your application to authenticate using this identity. Previously updated : 03/14/2023 Last updated : 03/27/2023 Azure AD workload identity supports the following mappings related to a service If you've used [Azure AD pod-managed identity][use-azure-ad-pod-identity], think of a service account as an Azure Identity, except a service account is part of the core Kubernetes API, rather than a [Custom Resource Definition][custom-resource-definition] (CRD). The following describes a list of available labels and annotations that can be used to configure the behavior when exchanging the service account token for an Azure AD access token. -### Service account labels --|Label |Description |Recommended value |Required | -||||| -|`azure.workload.identity/use` |Represents the service account<br> is to be used for workload identity. |true |Yes | - ### Service account annotations |Annotation |Description |Default | If you've used [Azure AD pod-managed identity][use-azure-ad-pod-identity], think |Label |Description |Recommended value |Required | |||||-|`azure.workload.identity/use` | Represents the pod is to be used for workload identity. |true |Yes | +|`azure.workload.identity/use` | This label is required in the pod template spec. Only pods with this label will be mutated by the azure-workload-identity mutating admission webhook to inject the Azure specific environment variables and the projected service account token volume. |true |Yes | ### Pod annotations |Annotation |Description |Default | |--||--|-|`azure.workload.identity/use` |Represents the service account<br> is to be used for workload identity. | | |`azure.workload.identity/service-account-token-expiration` |Represents the `expirationSeconds` field for the projected service account token. It's an optional field that you configure to prevent any downtime caused by errors during service account token refresh. Kubernetes service account token expiry isn't correlated with Azure AD tokens. Azure AD tokens expire in 24 hours after they're issued. <sup>1</sup> |3600<br> Supported range is 3600-86400. | |`azure.workload.identity/skip-containers` |Represents a semi-colon-separated list of containers to skip adding projected service account token volume. For example `container1;container2`. |By default, the projected service account token volume is added to all containers if the service account is labeled with `azure.workload.identity/use: true`. | |`azure.workload.identity/inject-proxy-sidecar` |Injects a proxy init container and proxy sidecar into the pod. The proxy sidecar is used to intercept token requests to IMDS and acquire an Azure AD token on behalf of the user with federated identity credential. |true | |
api-management | Api Management Features | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-features.md | Each API Management [pricing tier](https://aka.ms/apimpricing) offers a distinct | -- | -- | | -- | -- | - | | Azure AD integration<sup>1</sup> | No | Yes | No | Yes | Yes | | Virtual Network (VNet) support | No | Yes | No | No | Yes |+| Private endpoint support for inbound connections | No | Yes | Yes | Yes | Yes | | Multi-region deployment | No | No | No | No | Yes | | Availability zones | No | No | No | No | Yes | | Multiple custom domain names | No | Yes | No | No | Yes | Each API Management [pricing tier](https://aka.ms/apimpricing) offers a distinct | Built-in cache | No | Yes | Yes | Yes | Yes | | Built-in analytics | No | Yes | Yes | Yes | Yes | | [Self-hosted gateway](self-hosted-gateway-overview.md)<sup>3</sup> | No | Yes | No | No | Yes |+| [Workspaces](workspaces-overview.md) | No | Yes | No | Yes | Yes | | [TLS settings](api-management-howto-manage-protocols-ciphers.md) | Yes | Yes | Yes | Yes | Yes | | [External cache](./api-management-howto-cache-external.md) | Yes | Yes | Yes | Yes | Yes | | [Client certificate authentication](api-management-howto-mutual-certificates-for-clients.md) | Yes | Yes | Yes | Yes | Yes |-| [Policies](api-management-howto-policies.md)<sup>4</sup> | Yes | Yes | Yes | Yes | Yes | +| [Policies](api-management-howto-policies.md)<sup>4</sup> | Yes | Yes | Yes | Yes | Yes | | [Backup and restore](api-management-howto-disaster-recovery-backup-restore.md) | No | Yes | Yes | Yes | Yes | | [Management over Git](api-management-configuration-repository-git.md) | No | Yes | Yes | Yes | Yes | | Direct management API | No | Yes | Yes | Yes | Yes | Each API Management [pricing tier](https://aka.ms/apimpricing) offers a distinct <sup>1</sup> Enables the use of Azure AD (and Azure AD B2C) as an identity provider for user sign in on the developer portal.<br/> <sup>2</sup> Including related functionality such as users, groups, issues, applications, and email templates and notifications.<br/> <sup>3</sup> See [Gateway overview](api-management-gateways-overview.md#feature-comparison-managed-versus-self-hosted-gateways) for a feature comparison of managed versus self-hosted gateways. In the Developer tier self-hosted gateways are limited to a single gateway node. <br/>-<sup>4</sup> The following policies aren't available in the Consumption tier: rate limit by key and quota by key. <br/> +<sup>4</sup> See [Gateway overview](api-management-gateways-overview.md#policies) for differences in policy support in the dedicated, consumption, and self-hosted gateways. <br/> <sup>5</sup> GraphQL subscriptions aren't supported in the Consumption tier. |
api-management | Api Management Gateways Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-gateways-overview.md | |
api-management | Private Endpoint | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/private-endpoint.md | Title: Set up private endpoint for Azure API Management Preview -description: Learn how to restrict access to an Azure API Management instance by using an Azure private endpoint and Azure Private Link. + Title: Set up inbound private endpoint for Azure API Management +description: Learn how to restrict inbound access to an Azure API Management instance by using an Azure private endpoint and Azure Private Link. Previously updated : 03/31/2022 Last updated : 03/20/2023 -# Connect privately to API Management using a private endpoint +# Connect privately to API Management using an inbound private endpoint -You can configure a [private endpoint](../private-link/private-endpoint-overview.md) for your API Management instance to allow clients in your private network to securely access the instance over [Azure Private Link](../private-link/private-link-overview.md). +You can configure an inbound [private endpoint](../private-link/private-endpoint-overview.md) for your API Management instance to allow clients in your private network to securely access the instance over [Azure Private Link](../private-link/private-link-overview.md). -* The private endpoint uses an IP address from your Azure VNet address space. +* The private endpoint uses an IP address from an Azure VNet in which it's hosted. * Network traffic between a client on your private network and API Management traverses over the VNet and a Private Link on the Microsoft backbone network, eliminating exposure from the public internet. * Configure custom DNS settings or an Azure DNS private zone to map the API Management hostname to the endpoint's private IP address. --With a private endpoint and Private Link, you can: --- Create multiple Private Link connections to an API Management instance. --- Use the private endpoint to send inbound traffic on a secure connection. --- Use policy to distinguish traffic that comes from the private endpoint. --- Limit incoming traffic only to private endpoints, preventing data exfiltration. [!INCLUDE [api-management-private-endpoint](../../includes/api-management-private-endpoint.md)] With a private endpoint and Private Link, you can: ## Limitations -* Only the API Management instance's Gateway endpoint currently supports Private Link connections. -* Each API Management instance currently supports at most 100 Private Link connections. -* Connections are not supported on the [self-hosted gateway](self-hosted-gateway-overview.md). +* Only the API Management instance's Gateway endpoint supports inbound Private Link connections. +* Each API Management instance supports at most 100 Private Link connections. +* Connections aren't supported on the [self-hosted gateway](self-hosted-gateway-overview.md). ## Prerequisites When you use the Azure portal to create a private endpoint, as shown in the next 1. In the left-hand menu, select **Network**. -1. Select **Private endpoint connections** > **+ Add endpoint**. +1. Select **Inbound private endpoint connections** > **+ Add endpoint**. :::image type="content" source="media/private-endpoint/add-endpoint-from-instance.png" alt-text="Add a private endpoint using Azure portal"::: When you use the Azure portal to create a private endpoint, as shown in the next | Subscription | Select your subscription. | | Resource group | Select an existing resource group, or create a new one. 
It must be in the same region as your virtual network.| | **Instance details** | |- | Name | Enter a name for the endpoint such as **myPrivateEndpoint**. | + | Name | Enter a name for the endpoint such as *myPrivateEndpoint*. | + | Network Interface Name | Enter a name for the network interface, such as *myInterface* | | Region | Select a location for the private endpoint. It must be in the same region as your virtual network. It may differ from the region where your API Management instance is hosted. | 1. Select the **Resource** tab or the **Next: Resource** button at the bottom of the page. The following information about your API Management instance is already populated: When you use the Azure portal to create a private endpoint, as shown in the next :::image type="content" source="media/private-endpoint/create-private-endpoint.png" alt-text="Create a private endpoint in Azure portal"::: -1. Select the **Configuration** tab or the **Next: Configuration** button at the bottom of the screen. +1. Select the **Virtual Network** tab or the **Next: Virtual Network** button at the bottom of the screen. -1. In **Configuration**, enter or select this information: +1. In **Networking**, enter or select this information: | Setting | Value | | - | -- |- | **Networking** | | | Virtual network | Select your virtual network. | | Subnet | Select your subnet. |- | **Private DNS integration** | | + | Private IP configuration | In most cases, select **Dynamically allocate IP address.** | + | Application security group | Optionally select an [application security group](../virtual-network/application-security-groups.md). | ++1. Select the **DNS** tab or the **Next: DNS** button at the bottom of the screen. ++1. In **Private DNS integration**, enter or select this information: ++ | Setting | Value | + | - | -- | | Integrate with private DNS zone | Leave the default of **Yes**. | | Subscription | Select your subscription. | | Resource group | Select your resource group. |- | Private DNS zones | Leave the default of **(new) privatelink.azure-api.net**. + | Private DNS zones | The default value is displayed: **(new) privatelink.azure-api.net**. -1. Select **Review + create**. +1. Select the **Tags** tab or the **Next: Tabs** button at the bottom of the screen. If you desire, enter tags to organize your Azure resources. ++1. Select **Review + create**. 1. Select **Create**. ### List private endpoint connections to the instance -After the private endpoint is created, it appears in the list on the API Management instance's **Private endpoint connections** page in the portal. +After the private endpoint is created, it appears in the list on the API Management instance's **Inbound private endpoint connections** page in the portal. You can also use the [Private Endpoint Connection - List By Service](/rest/api/apimanagement/current-ga/private-endpoint-connection/list-by-service) REST API to list private endpoint connections to the service instance. Use the following JSON body: After the private endpoint is created, confirm its DNS settings in the portal: -1. In the portal, navigate to the **Private Link Center**. -1. Select **Private endpoints** and select the private endpoint you created. +1. Navigate to your API Management service in the [Azure portal](https://portal.azure.com/). ++1. In the left-hand menu, select **Network** > **Inbound private endpoint connections**, and select the private endpoint you created. + 1. In the left-hand navigation, select **DNS configuration**.+ 1. 
Review the DNS records and IP address of the private endpoint. The IP address is a private address in the address space of the subnet where the private endpoint is configured. ### Test in virtual network To connect to 'Microsoft.ApiManagement/service/my-apim-service', please use the ## Next steps * Use [policy expressions](api-management-policy-expressions.md#ref-context-request) with the `context.request` variable to identify traffic from the private endpoint.-* Learn more about [private endpoints](../private-link/private-endpoint-overview.md) and [Private Link](../private-link/private-link-overview.md). +* Learn more about [private endpoints](../private-link/private-endpoint-overview.md) and [Private Link](../private-link/private-link-overview.md), including [Private Link pricing](https://azure.microsoft.com/pricing/details/private-link/). * Learn more about [managing private endpoint connections](../private-link/manage-private-endpoint.md). * [Troubleshoot Azure private endpoint connectivity problems](../private-link/troubleshoot-private-endpoint-connectivity.md). * Use a [Resource Manager template](https://azure.microsoft.com/resources/templates/api-management-private-endpoint/) to create an API Management instance and a private endpoint with private DNS integration. |
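The steps above use the Azure portal. If you'd rather script the inbound private endpoint, a hedged Azure CLI sketch follows. All names are placeholders; the `Gateway` group ID reflects the article's note that only the Gateway endpoint supports inbound Private Link connections, and private DNS integration still needs to be configured separately:

```bash
# Hedged sketch: create an inbound private endpoint for an API Management instance.
az network private-endpoint create \
  --resource-group <resource-group> \
  --name myPrivateEndpoint \
  --vnet-name <vnet-name> \
  --subnet <subnet-name> \
  --private-connection-resource-id $(az apim show --name <apim-name> --resource-group <apim-resource-group> --query id -o tsv) \
  --group-id Gateway \
  --connection-name myConnection
```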
api-management | Virtual Network Concepts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/virtual-network-concepts.md | API Management provides several options to secure access to your API Management You can choose one of two integration modes: *external* or *internal*. They differ in whether inbound connectivity to the gateway and other API Management endpoints is allowed from the internet or only from within the virtual network. -* **Enabling secure and private connectivity** to the API Management gateway using a *private endpoint* (preview). +* **Enabling secure and private inbound connectivity** to the API Management gateway using a *private endpoint*. The following table compares virtual networking options. For more information, see later sections of this article and links to detailed guidance. The following table compares virtual networking options. For more information, s |||||-| |**[Virtual network - external](#virtual-network-integration)** | Developer, Premium | Azure portal, gateway, management plane, and Git repository | Inbound and outbound traffic can be allowed to internet, peered virtual networks, Express Route, and S2S VPN connections. | External access to private and on-premises backends |**[Virtual network - internal](#virtual-network-integration)** | Developer, Premium | Developer portal, gateway, management plane, and Git repository. | Inbound and outbound traffic can be allowed to peered virtual networks, Express Route, and S2S VPN connections. | Internal access to private and on-premises backends-|**[Private endpoint (preview)](#private-endpoint)** | Developer, Basic, Standard, Premium | Gateway only (managed gateway supported, self-hosted gateway not supported). | Only inbound traffic can be allowed from internet, peered virtual networks, Express Route, and S2S VPN connections. | Secure client connection to API Management gateway | +|**[Inbound private endpoint](#inbound-private-endpoint)** | Developer, Basic, Standard, Premium | Gateway only (managed gateway supported, self-hosted gateway not supported). | Only inbound traffic can be allowed from internet, peered virtual networks, Express Route, and S2S VPN connections. | Secure client connection to API Management gateway | ## Virtual network integration With Azure virtual networks (VNets), you can place ("inject") your API Management instance in a non-internet-routable network to which you control access. In a virtual network, your API Management instance can securely access other networked Azure resources and also connect to on-premises networks using various VPN technologies. To learn more about Azure VNets, start with the information in the [Azure Virtual Network Overview](../virtual-network/virtual-networks-overview.md). Some virtual network limitations differ depending on the version (`stv2` or `stv * A subnet containing API Management instances can't be moved across subscriptions. * For multi-region API Management deployments configured in internal VNet mode, users own the routing and are responsible for managing the load balancing across multiple regions. * To import an API to API Management from an [OpenAPI specification](import-and-publish.md), the specification URL must be hosted at a publicly accessible internet address.-* Due to platform limitations, connectivity between a resource in a globally peered VNet in another region and an API Management service in internal mode won't work. 
For more information, see the [virtual network documentation](../virtual-network/virtual-network-manage-peering.md#requirements-and-constraints). +* Due to platform limitations, connectivity between a resource in a globally peered VNet in another region and an API Management service in internal mode doesn't work. For more information, see the [virtual network documentation](../virtual-network/virtual-network-manage-peering.md#requirements-and-constraints). -## Private endpoint +## Inbound private endpoint -API Management supports [private endpoints](../private-link/private-endpoint-overview.md). A private endpoint enables secure client connectivity to your API Management instance using a private IP address from your virtual network and Azure Private Link. +API Management supports [private endpoints](../private-link/private-endpoint-overview.md) for secure inbound client connections to your API Management instance. Each secure connection uses a private IP address from your virtual network and Azure Private Link. :::image type="content" source="media/virtual-network-concepts/api-management-private-endpoint.png" alt-text="Diagram showing a secure connection to API Management using private endpoint." lightbox="media/virtual-network-concepts/api-management-private-endpoint.png"::: -With a private endpoint and Private Link, you can: --* Create multiple Private Link connections to an API Management instance. -* Use the private endpoint to send inbound traffic on a secure connection. -* Use policy to distinguish traffic that comes from the private endpoint. -* Limit incoming traffic only to private endpoints, preventing data exfiltration. - [!INCLUDE [api-management-private-endpoint](../../includes/api-management-private-endpoint.md)] -For more information, see [Connect privately to API Management using a private endpoint](private-endpoint.md). +For more information, see [Connect privately to API Management using an inbound private endpoint](private-endpoint.md). ## Advanced networking configurations |
api-management | Workspaces Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/workspaces-overview.md | Therefore, the following sample scenarios aren't currently supported in workspac * Specifying API authorization server information (for example, for the developer portal) +Workspace APIs can't be published to self-hosted gateways. + All resources in an API Management service need to have unique names, even if they are located in different workspaces. ## Next steps |
app-service | Create From Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/create-from-template.md | If you want to make an ASE, use this Resource Manager template [ASEv2][quickstar * *existingVirtualNetworkResourceGroup*: his parameter defines the resource group name of the existing virtual network and subnet where ASE will reside. * *subnetName*: This parameter defines the subnet name of the existing virtual network and subnet where ASE will reside. * *internalLoadBalancingMode*: In most cases, set this to 3, which means both HTTP/HTTPS traffic on ports 80/443, and the control/data channel ports listened to by the FTP service on the ASE, will be bound to an ILB-allocated virtual network internal address. If this property is set to 2, only the FTP service-related ports (both control and data channels) are bound to an ILB address. If this property is set to 0, the HTTP/HTTPS traffic remains on the public VIP.-* *dnsSuffix*: This parameter defines the default root domain that's assigned to the ASE. In the public variation of Azure App Service, the default root domain for all web apps is *azurewebsites.net*. Because an ILB ASE is internal to a customer's virtual network, it doesn't make sense to use the public service's default root domain. Instead, an ILB ASE should have a default root domain that makes sense for use within a company's internal virtual network. For example, Contoso Corporation might use a default root domain of *internal-contoso.com* for apps that are intended to be resolvable and accessible only within Contoso's virtual network. +* *dnsSuffix*: This parameter defines the default root domain that's assigned to the ASE. In the public variation of Azure App Service, the default root domain for all web apps is *azurewebsites.net*. Because an ILB ASE is internal to a customer's virtual network, it doesn't make sense to use the public service's default root domain. Instead, an ILB ASE should have a default root domain that makes sense for use within a company's internal virtual network. For example, Contoso Corporation might use a default root domain of *internal-contoso.com* for apps that are intended to be resolvable and accessible only within Contoso's virtual network. To specify custom root domain you need to use api version `2018-11-01` or earlier versions. * *ipSslAddressCount*: This parameter automatically defaults to a value of 0 in the *azuredeploy.json* file because ILB ASEs only have a single ILB address. There are no explicit IP-SSL addresses for an ILB ASE. Hence, the IP-SSL address pool for an ILB ASE must be set to zero. Otherwise, a provisioning error occurs. After the *azuredeploy.parameters.json* file is filled in, create the ASE by using the PowerShell code snippet. Change the file paths to match the Resource Manager template-file locations on your machine. Remember to supply your own values for the Resource Manager deployment name and the resource group name: |
app-service | Create Ilb Ase | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/create-ilb-ase.md | description: Learn how to create an App Service environment with an internal loa ms.assetid: 0f4c1fa4-e344-46e7-8d24-a25e247ae138 Previously updated : 02/28/2023 Last updated : 03/27/2023 To learn more about how to configure your ILB ASE with a WAF device, see [Confi ## ILB ASEs made before May 2019 -ILB ASEs that were made before May 2019 required you to set the domain suffix during ASE creation. They also required you to upload a default certificate that was based on that domain suffix. Also, with an older ILB ASE you can't perform single sign-on to the Kudu console with apps in that ILB ASE. When configuring DNS for an older ILB ASE, you need to set the wildcard A record in a zone that matches to your domain suffix. +ILB ASEs that were made before May 2019 required you to set the domain suffix during ASE creation. They also required you to upload a default certificate that was based on that domain suffix. Also, with an older ILB ASE you can't perform single sign-on to the Kudu console with apps in that ILB ASE. When configuring DNS for an older ILB ASE, you need to set the wildcard A record in a zone that matches to your domain suffix. Creating or changing ILB ASE with custom domain suffix requires you to use Azure Resource Manager templates and an api version prior to 2019. Last support api version is `2018-11-01`. ## Get started ## |
app-service | Using | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/using.md | Title: Use an App Service Environment description: Learn how to use your App Service Environment to host isolated applications. Previously updated : 02/14/2022 Last updated : 03/27/2023 To configure DNS in Azure DNS private zones: 1. Create an A record in that zone that points @ to the inbound IP address. 1. Create an A record in that zone that points *.scm to the inbound IP address. -The DNS settings for the default domain suffix of your App Service Environment don't restrict your apps to only being accessible by those names. You can set a custom domain name without any validation on your apps in an App Service Environment. If you then want to create a zone named `contoso.net`, you can do so and point it to the inbound IP address. The custom domain name works for app requests, but doesn't work for the `scm` site. The `scm` site is only available at *<appname>.scm.<asename>.appserviceenvironment.net*. +The DNS settings for the default domain suffix of your App Service Environment don't restrict your apps to only being accessible by those names. You can set a custom domain name without any validation on your apps in an App Service Environment. If you then want to create a zone named `contoso.net`, you can do so and point it to the inbound IP address. The custom domain name works for app requests, and if the custom domain suffix certificate includes a wildcard SAN for scm, custom domain name also work for `scm` site and you can create a `*.scm` record and point it to the inbound IP address. ## Publishing If you have multiple App Service Environments, you might want some of them to be - **None**: Azure upgrades in no particular batch. This value is the default. - **Early**: Upgrade in the first half of the App Service upgrades. - **Late**: Upgrade in the second half of the App Service upgrades.+- **Manual**: Get [15 days window](./how-to-upgrade-preference.md) to deploy the upgrade manually. Select the value you want, and then select **Save**. |
app-service | Manage Backup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-backup.md | The **Backups** page shows you the status of each backup. To get log details reg | A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server). | Check that the connection string is valid. Allow the app's [outbound IPs](overview-inbound-outbound-ips.md) in the database server settings. | | Cannot open server "\<name>" requested by the login. The login failed. | Check that the connection string is valid. | | Missing mandatory parameters for valid Shared Access Signature. | Delete the backup schedule and reconfigure it. |-| SSL connection is required. Please specify SSL options and retry. when trying to connect. | SSL connectivity to Azure Database for MySQL and Azure Database for PostgreSQL isn't supported for database backups. Use the native backup feature in the respective database instead. | +| SSL connection is required. Please specify SSL options and retry when trying to connect. | SSL connectivity to Azure Database for MySQL and Azure Database for PostgreSQL isn't supported for database backups. Use the native backup feature in the respective database instead. | ## Automate with scripts |
app-service | Manage Create Arc Environment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-create-arc-environment.md | Title: 'Set up Azure Arc for App Service, Functions, and Logic Apps' description: For your Azure Arc-enabled Kubernetes clusters, learn how to enable App Service apps, function apps, and logic apps. Previously updated : 11/02/2021 Last updated : 03/24/2023 # Set up an Azure Arc-enabled Kubernetes cluster to run App Service, Functions, and Logic Apps (Preview) The [custom location](../azure-arc/kubernetes/custom-locations.md) in Azure is u <!-- --kubeconfig ~/.kube/config # needed for non-Azure -->+ > [!NOTE] + > If you experience issues creating a custom location on your cluster, you may need to [enable the custom location feature on your cluster](../azure-arc/kubernetes/custom-locations.md#enable-custom-locations-on-your-cluster). This is required if you're signed in to the CLI with a service principal, or if you're signed in as an Azure Active Directory user with restricted permissions on the cluster resource. + > 3. Validate that the custom location is successfully created with the following command. The output should show the `provisioningState` property as `Succeeded`. If not, run it again after a minute. |
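For reference, a rough sketch of the step that the note above points to, using the `connectedk8s` CLI extension; the cluster name, resource group, custom location name, and the custom locations object ID (obtained as described in the linked article) are placeholders:

```azurecli-interactive
# Enable the cluster-connect and custom-locations features on the connected cluster
az extension add --name connectedk8s
az connectedk8s enable-features \
  --name <cluster-name> \
  --resource-group <group-name> \
  --features cluster-connect custom-locations \
  --custom-locations-oid <custom-locations-rp-object-id>

# Check that the custom location reached the Succeeded state
az customlocation show --resource-group <group-name> --name <custom-location-name> --query provisioningState --output tsv
```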
app-service | Monitor Instances Health Check | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/monitor-instances-health-check.md | In addition to configuring the Health check options, you can also configure the | App setting name | Allowed values | Description | |-|-|-| |`WEBSITE_HEALTHCHECK_MAXPINGFAILURES` | 2 - 10 | The required number of failed requests for an instance to be deemed unhealthy and removed from the load balancer. For example, when set to `2`, your instances will be removed after `2` failed pings. (Default value is `10`) |-|`WEBSITE_HEALTHCHECK_MAXUNHEALTHYWORKERPERCENT` | 1 - 100 | By default, no more than half of the instances will be excluded from the load balancer at one time to avoid overwhelming the remaining healthy instances. For example, if an App Service Plan is scaled to four instances and three are unhealthy, two will be excluded. The other two instances (one healthy and one unhealthy) will continue to receive requests. In the worst-case scenario where all instances are unhealthy, none will be excluded. <br /> To override this behavior, set app setting to a value between `0` and `100`. A higher value means more unhealthy instances will be removed (default value is `50`). | +|`WEBSITE_HEALTHCHECK_MAXUNHEALTHYWORKERPERCENT` | 1 - 100 | By default, no more than half of the instances will be excluded from the load balancer at one time to avoid overwhelming the remaining healthy instances. For example, if an App Service Plan is scaled to four instances and three are unhealthy, two will be excluded. The other two instances (one healthy and one unhealthy) will continue to receive requests. In the worst-case scenario where all instances are unhealthy, none will be excluded. <br /> To override this behavior, set app setting to a value between `1` and `100`. A higher value means more unhealthy instances will be removed (default value is `50`). | #### Authentication and security |
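Because these are regular app settings, one way to apply them is through the Azure CLI. A minimal sketch with placeholder names and example values (remove an instance after 2 failed pings, allow up to 50 percent of instances to be excluded):

```azurecli-interactive
az webapp config appsettings set \
  --resource-group <group-name> \
  --name <app-name> \
  --settings WEBSITE_HEALTHCHECK_MAXPINGFAILURES=2 WEBSITE_HEALTHCHECK_MAXUNHEALTHYWORKERPERCENT=50
```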
app-service | Overview Inbound Outbound Ips | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-inbound-outbound-ips.md | az webapp show --resource-group <group_name> --name <app_name> --query possibleO ``` ## Get a static outbound IP-You can control the IP address of outbound traffic from your app by using regional VNet integration together with a virtual network NAT gateway to direct traffic through a static public IP address. [Regional VNet integration](./overview-vnet-integration.md) is available on **Standard**, **Premium**, **PremiumV2** and **PremiumV3** App Service plans. To learn more about this setup, see [NAT gateway integration](./networking/nat-gateway-integration.md). +You can control the IP address of outbound traffic from your app by using regional VNet integration together with a virtual network NAT gateway to direct traffic through a static public IP address. [Regional VNet integration](./overview-vnet-integration.md) is available on **Basic**, **Standard**, **Premium**, **PremiumV2** and **PremiumV3** App Service plans. To learn more about this setup, see [NAT gateway integration](./networking/nat-gateway-integration.md). ## Next steps Learn how to restrict inbound traffic by source IP addresses. > [!div class="nextstepaction"]-> [Static IP restrictions](app-service-ip-restrictions.md) +> [Static IP restrictions](app-service-ip-restrictions.md) |
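A rough Azure CLI sketch of the NAT gateway setup referenced above, assuming your app already uses regional VNet integration on the subnet shown; all resource names are placeholders:

```azurecli-interactive
# Create a static public IP and a NAT gateway that uses it
az network public-ip create --resource-group <group-name> --name <public-ip-name> --sku Standard
az network nat gateway create --resource-group <group-name> --name <nat-gateway-name> --public-ip-addresses <public-ip-name>

# Attach the NAT gateway to the regional VNet integration subnet
az network vnet subnet update \
  --resource-group <group-name> \
  --vnet-name <vnet-name> \
  --name <integration-subnet-name> \
  --nat-gateway <nat-gateway-name>
```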
app-service | Overview Name Resolution | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-name-resolution.md | When your app needs to resolve a domain name using DNS, the app sends a name res The individual app allows you to override the DNS configuration by specifying the `dnsServers` property in the `dnsConfiguration` site property object. You can specify up to five custom DNS servers. You can configure custom DNS servers using the Azure CLI: ```azurecli-interactive-az resource update --resource-group <group-name> --name <app-name> --resource-type "Microsoft.Web/sites" --set properties.dnsConfiguration.dnsServers="['168.63.169.16','1.1.1.1']" +az resource update --resource-group <group-name> --name <app-name> --resource-type "Microsoft.Web/sites" --set properties.dnsConfiguration.dnsServers="['168.63.129.16','xxx.xxx.xxx.xxx']" ``` You can still use the existing `WEBSITE_DNS_SERVER` app setting, and you can add custom DNS servers with either setting. If you want to add multiple DNS servers using the app setting, you must separate the servers by commas with no blank spaces added. |
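For example, a sketch of the equivalent `WEBSITE_DNS_SERVER` app setting with two servers, comma-separated and without spaces; the second address is a hypothetical custom DNS server:

```azurecli-interactive
az webapp config appsettings set \
  --resource-group <group-name> \
  --name <app-name> \
  --settings WEBSITE_DNS_SERVER="168.63.129.16,10.0.0.10"
```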
app-service | Troubleshoot Diagnostic Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/troubleshoot-diagnostic-logs.md | This article uses the [Azure portal](https://portal.azure.com) and Azure CLI to | Failed request tracing | Windows | App Service file system | Detailed tracing information on failed requests, including a trace of the IIS components used to process the request and the time taken in each component. It's useful if you want to improve site performance or isolate a specific HTTP error. One folder is generated for each failed request, which contains the XML log file, and the XSL stylesheet to view the log file with. | | Deployment logging | Windows, Linux | App Service file system | Logs for when you publish content to an app. Deployment logging happens automatically and there are no configurable settings for deployment logging. It helps you determine why a deployment failed. For example, if you use a [custom deployment script](https://github.com/projectkudu/kudu/wiki/Custom-Deployment-Script), you might use deployment logging to determine why the script is failing. | +When stored in the App Service file system, logs are subject to the available storage for your pricing tier (see [App Service limits](../azure-resource-manager/management/azure-subscription-service-limits.md#app-service-limits)). + > [!NOTE] > App Service provides a dedicated, interactive diagnostics tool to help you troubleshoot your application. For more information, see [Azure App Service diagnostics overview](overview-diagnostics.md). > |
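As an illustrative sketch (placeholder names; the logging level is just an example), file-system logging can be switched on and watched from the CLI:

```azurecli-interactive
# Enable application and web server logging to the App Service file system
az webapp log config \
  --resource-group <group-name> \
  --name <app-name> \
  --application-logging filesystem \
  --web-server-logging filesystem \
  --level information

# Stream the log output to verify logs are being written
az webapp log tail --resource-group <group-name> --name <app-name>
```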
app-service | Tutorial Connect App Access Sql Database As User Dotnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-app-access-sql-database-as-user-dotnet.md | + + Title: 'Tutorial - Web app accesses SQL Database as the user' +description: Secure database connectivity with Azure Active Directory authentication from .NET web app, using the signed-in user. Learn how to apply it to other Azure services. ++++++ms.devlang: csharp + Last updated : 04/21/2023++# Tutorial: Connect an App Service app to SQL Database on behalf of the signed-in user ++This tutorial shows you how to enable [built-in authentication](overview-authentication-authorization.md) in an [App Service](overview.md) app using the Azure Active Directory authentication provider, then extend it by connecting it to a back-end Azure SQL Database by impersonating the signed-in user (also known as the [on-behalf-of flow](../active-directory/develop/v2-oauth2-on-behalf-of-flow.md)). This is a more advanced connectivity approach to [Tutorial: Access data with managed identity](tutorial-connect-msi-sql-database.md) and has the following advantages in enterprise scenarios: ++- Eliminates connection secrets to back-end services, just like the managed identity approach. +- Gives the back-end database (or any other Azure service) more control over who or how much to grant access to its data and functionality. +- Lets the app tailor its data presentation to the signed-in user. ++In this tutorial, you add Azure Active Directory authentication to the sample web app you deployed in one of the following tutorials: ++- [Tutorial: Build an ASP.NET app in Azure with Azure SQL Database](app-service-web-tutorial-dotnet-sqldatabase.md) +- [Tutorial: Build an ASP.NET Core and Azure SQL Database app in Azure App Service](tutorial-dotnetcore-sqldb-app.md) ++When you're finished, your sample app will authenticate users connect to SQL Database securely on behalf of the signed-in user. +++> [!NOTE] +> The steps covered in this tutorial support the following versions: +> +> - .NET Framework 4.8 and higher +> - .NET 6.0 and higher +> ++What you will learn: ++> [!div class="checklist"] +> * Enable built-in authentication for Azure SQL Database +> * Disable other authentication options in Azure SQL Database +> * Enable App Service authentication +> * Use Azure Active Directory as the identity provider +> * Access Azure SQL Database on behalf of the signed-in Azure AD user ++> [!NOTE] +>Azure AD authentication is _different_ from [Integrated Windows authentication](/previous-versions/windows/it-pro/windows-server-2003/cc758557(v=ws.10)) in on-premises Active Directory (AD DS). AD DS and Azure AD use completely different authentication protocols. For more information, see [Azure AD Domain Services documentation](../active-directory-domain-services/index.yml). +++## Prerequisites ++This article continues where you left off in either one of the following tutorials: ++- [Tutorial: Build an ASP.NET app in Azure with SQL Database](app-service-web-tutorial-dotnet-sqldatabase.md) +- [Tutorial: Build an ASP.NET Core and SQL Database app in Azure App Service](tutorial-dotnetcore-sqldb-app.md). ++If you haven't already, follow one of the two tutorials first. Alternatively, you can adapt the steps for your own .NET app with SQL Database. ++Prepare your environment for the Azure CLI. +++## 1. 
Configure database server with Azure AD authentication ++First, enable Azure Active Directory authentication to SQL Database by assigning an Azure AD user as the admin of the server. This user is different from the Microsoft account you used to sign up for your Azure subscription. It must be a user that you created, imported, synced, or invited into Azure AD. For more information on allowed Azure AD users, see [Azure AD features and limitations in SQL Database](/azure/azure-sql/database/authentication-aad-overview#azure-ad-features-and-limitations). ++1. If your Azure AD tenant doesn't have a user yet, create one by following the steps at [Add or delete users using Azure Active Directory](../active-directory/fundamentals/add-users-azure-active-directory.md). ++1. Find the object ID of the Azure AD user using the [`az ad user list`](/cli/azure/ad/user#az_ad_user_list) and replace *\<user-principal-name>*. The result is saved to a variable. ++ ```azurecli-interactive + azureaduser=$(az ad user list --filter "userPrincipalName eq '<user-principal-name>'" --query [].id --output tsv) + ``` ++ > [!TIP] + > To see the list of all user principal names in Azure AD, run `az ad user list --query [].userPrincipalName`. + > ++1. Add this Azure AD user as an Active Directory admin using [`az sql server ad-admin create`](/cli/azure/sql/server/ad-admin#az_sql_server_ad_admin_create) command in the Cloud Shell. In the following command, replace *\<server-name>* with the server name (without the `.database.windows.net` suffix). ++ ```azurecli-interactive + az sql server ad-admin create --resource-group <group-name> --server-name <server-name> --display-name ADMIN --object-id $azureaduser + ``` ++1. Restrict the database server authentication to Active Directory authentication. This step effectively disables SQL authentication. ++ ```azurecli-interactive + az sql server ad-only-auth enable --resource-group <group-name> --server-name <server-name> + ``` ++For more information on adding an Active Directory admin, see [Provision Azure AD admin (SQL Database)](/azure/azure-sql/database/authentication-aad-configure#provision-azure-ad-admin-sql-database). ++## 2. Enable user authentication for your app ++You enable authentication with Azure Active Directory as the identity provider. For more information, see [Configure Azure Active Directory authentication for your App Services application](configure-authentication-provider-aad.md). ++1. In the [Azure portal](https://portal.azure.com) menu, select **Resource groups** or search for and select *Resource groups* from any page. ++1. In **Resource groups**, find and select your resource group, then select your app. ++1. In your app's left menu, select **Authentication**, and then select **Add identity provider**. ++1. In the **Add an identity provider** page, select **Microsoft** as the **Identity provider** to sign in Microsoft and Azure AD identities. ++1. Accept the default settings and select **Add**. ++ :::image type="content" source="./media/tutorial-connect-app-access-sql-database-as-user-dotnet/add-azure-ad-provider.png" alt-text="Screenshot showing the add identity provider page." lightbox="./media/tutorial-connect-app-access-sql-database-as-user-dotnet/add-azure-ad-provider.png"::: ++> [!TIP] +> If you run into errors and reconfigure your app's authentication settings, the tokens in the token store may not be regenerated from the new settings. To make sure your tokens are regenerated, you need to sign out and sign back in to your app. 
An easy way to do it is to use your browser in private mode, and close and reopen the browser in private mode after changing the settings in your apps. ++## 3. Configure user impersonation to SQL Database ++Currently, your Azure app connects to SQL Database using SQL authentication (username and password) managed as app settings. In this step, you give the app permissions to access SQL Database on behalf of the signed-in Azure AD user. ++1. In the **Authentication** page for the app, select your app name under **Identity provider**. This app registration was automatically generated for you. Select **API permissions** in the left menu. ++1. Select **Add a permission**, then select **APIs my organization uses**. ++1. Type *Azure SQL Database* in the search box and select the result. ++1. In the **Request API permissions** page for Azure SQL Database, select **Delegated permissions** and **user_impersonation**, then select **Add permissions**. ++ :::image type="content" source="./media/tutorial-connect-app-access-sql-database-as-user-dotnet/select-permission.png" alt-text="Screenshot of the Request API permissions page showing Delegated permissions, user_impersonation, and the Add permission button selected." lightbox="./media/tutorial-connect-app-access-sql-database-as-user-dotnet/select-permission.png"::: ++## 4. Configure App Service to return a usable access token ++The app registration in Azure Active Directory now has the required permissions to connect to SQL Database by impersonating the signed-in user. Next, you configure your App Service app to give you a usable access token. ++In the Cloud Shell, run the following commands on the app to add the `scope` parameter to the authentication setting `identityProviders.azureActiveDirectory.login.loginParameters`. ++```azurecli-interactive +authSettings=$(az webapp auth show --resource-group <group-name> --name <app-name>) +authSettings=$(echo "$authSettings" | jq '.properties' | jq '.identityProviders.azureActiveDirectory.login += {"loginParameters":["scope=openid profile email offline_access https://database.windows.net/user_impersonation"]}') +az webapp auth set --resource-group <group-name> --name <app-name> --body "$authSettings" +``` ++The commands effectively add a `loginParameters` property with extra custom scopes. Here's an explanation of the requested scopes: ++- `openid`, `profile`, and `email` are requested by App Service by default already. For information, see [OpenID Connect Scopes](../active-directory/develop/v2-permissions-and-consent.md#openid-connect-scopes). +- `https://database.windows.net/user_impersonation` refers to Azure SQL Database. It's the scope that gives you a JWT token that includes SQL Database as a [token audience](https://wikipedia.org/wiki/JSON_Web_Token). +- [offline_access](../active-directory/develop/v2-permissions-and-consent.md#offline_access) is included here for convenience (in case you want to [refresh tokens](#what-happens-when-access-tokens-expire)). ++> [!TIP] +> To configure the required scopes using a web interface instead, see the Microsoft steps at [Refresh auth tokens](configure-authentication-oauth-tokens.md#refresh-auth-tokens). ++Your apps are now configured. The app can now generate a token that SQL Database accepts. ++## 5. Use the access token in your application code ++The steps you follow for your project depend on whether you're using [Entity Framework](/ef/ef6/) (default for ASP.NET) or [Entity Framework Core](/ef/core/) (default for ASP.NET Core). 
++# [Entity Framework](#tab/ef) ++1. In Visual Studio, open the Package Manager Console and update Entity Framework: ++ ```powershell + Update-Package EntityFramework + ``` ++1. In your DbContext object (in *Models/MyDbContext.cs*), add the following code to the default constructor. ++ ```csharp + var conn = (System.Data.SqlClient.SqlConnection)Database.Connection; + conn.AccessToken = System.Web.HttpContext.Current.Request.Headers["X-MS-TOKEN-AAD-ACCESS-TOKEN"]; + ``` ++# [Entity Framework Core](#tab/efcore) ++In your `DbContext` object (in *Models/MyDbContext.cs*), change the default constructor to the following. ++```csharp +public MyDatabaseContext (DbContextOptions<MyDatabaseContext> options, IHttpContextAccessor accessor) + : base(options) +{ + var conn = Database.GetDbConnection() as SqlConnection; + conn.AccessToken = accessor.HttpContext.Request.Headers["X-MS-TOKEN-AAD-ACCESS-TOKEN"]; +} +``` ++-- ++> [!NOTE] +> The code adds the access token supplied by App Service authentication to the connection object. +> +> This code change doesn't work locally. For more information, see [How do I debug locally when using App Service authentication?](#how-do-i-debug-locally-when-using-app-service-authentication). ++## 6. Publish your changes ++# [ASP.NET](#tab/dotnet) ++1. **If you came from [Tutorial: Build an ASP.NET app in Azure with SQL Database](app-service-web-tutorial-dotnet-sqldatabase.md)**, you set a connection string in App Service using SQL authentication, with a username and password. Use the following command to remove the connection secrets, but replace *\<group-name>*, *\<app-name>*, *\<db-server-name>*, and *\<db-name>* with yours. ++ ```azurecli-interactive + az webapp config connection-string set --resource-group <group-name> --name <app-name> --type SQLAzure --settings MyDbConnection="server=tcp:<db-server-name>.database.windows.net;database=<db-name>;" + ``` ++1. Publish your changes in Visual Studio. In the **Solution Explorer**, right-click your **DotNetAppSqlDb** project and select **Publish**. ++ :::image type="content" source="./media/app-service-web-tutorial-dotnet-sqldatabase/solution-explorer-publish.png" alt-text="Screenshot showing how to publish from the Solution Explorer in Visual Studio." lightbox="./media/app-service-web-tutorial-dotnet-sqldatabase/solution-explorer-publish.png"::: ++1. In the publish page, select **Publish**. ++# [ASP.NET Core](#tab/dotnetcore) ++1. **If you came from [Tutorial: Build an ASP.NET Core and SQL Database app in Azure App Service](tutorial-dotnetcore-sqldb-app.md)**, you have a connection string called `defaultConnection` in App Service using SQL authentication, with a username and password. Use the following command to remove the connection secrets, but replace *\<group-name>*, *\<app-name>*, *\<db-server-name>*, and *\<db-name>* with yours. ++ ```azurecli-interactive + az webapp config connection-string set --resource-group <group-name> --name <app-name> --type SQLAzure --settings defaultConnection="server=tcp:<db-server-name>.database.windows.net;database=<db-name>;" + ``` ++1. You would have made your code changes in your GitHub fork, with Visual Studio Code in the browser. From the left menu, select **Source Control**. ++1. Type in a commit message like `OBO connect` and select **Commit**. ++ The commit triggers a GitHub Actions deployment to App Service. Wait a few minutes for the deployment to finish. 
++-- ++When the new webpage shows your to-do list, your app is connecting to the database on behalf of the signed-in Azure AD user. ++ ++You should now be able to edit the to-do list as before. ++## 7. Clean up resources ++In the preceding steps, you created Azure resources in a resource group. If you don't expect to need these resources in the future, delete the resource group by running the following command in the Cloud Shell: ++```azurecli-interactive +az group delete --name <group-name> +``` ++This command may take a minute to run. ++## Frequently asked questions ++- [Why do I get a `Login failed for user '<token-identified principal>'.` error?](#why-do-i-get-a-login-failed-for-user-token-identified-principal-error) +- [How do I add other Azure AD users or groups in Azure SQL Database?](#how-do-i-add-other-azure-ad-users-or-groups-in-azure-sql-database) +- [How do I debug locally when using App Service authentication?](#how-do-i-debug-locally-when-using-app-service-authentication) +- [What happens when access tokens expire?](#what-happens-when-access-tokens-expire) ++#### Why do I get a `Login failed for user '<token-identified principal>'.` error? ++The most common causes of this error are: ++- You're running the code locally, and there's no valid token in the `X-MS-TOKEN-AAD-ACCESS-TOKEN` request header. See [How do I debug locally when using App Service authentication?](#how-do-i-debug-locally-when-using-app-service-authentication). +- Azure AD authentication isn't configured on your SQL Database. +- The signed-in user isn't permitted to connect to the database. See [How do I add other Azure AD users or groups in Azure SQL Database?](#how-do-i-add-other-azure-ad-users-or-groups-in-azure-sql-database). ++#### How do I add other Azure AD users or groups in Azure SQL Database? ++1. Connect to your database server, such as with [sqlcmd](/azure/azure-sql/database/authentication-aad-configure#sqlcmd) or [SSMS](/azure/azure-sql/database/authentication-aad-configure#connect-to-the-database-using-ssms-or-ssdt). +1. [Create contained users mapped to Azure AD identities](/azure/azure-sql/database/authentication-aad-configure#create-contained-users-mapped-to-azure-ad-identities) in SQL Database documentation. ++ The following Transact-SQL example adds an Azure AD identity to SQL Server and gives it some database roles: ++ ```sql + CREATE USER [<user-or-group-name>] FROM EXTERNAL PROVIDER; + ALTER ROLE db_datareader ADD MEMBER [<user-or-group-name>]; + ALTER ROLE db_datawriter ADD MEMBER [<user-or-group-name>]; + ALTER ROLE db_ddladmin ADD MEMBER [<user-or-group-name>]; + GO + ``` ++#### How do I debug locally when using App Service authentication? ++Because App Service authentication is a feature in Azure, it's not possible for the same code to work in your local environment. Unlike the app running in Azure, your local code doesn't benefit from the authentication middleware from App Service. You have a few alternatives: ++- Connect to SQL Database from your local environment with [`Active Directory Interactive`](/sql/connect/ado-net/sql/azure-active-directory-authentication#using-active-directory-interactive-authentication). The authentication flow doesn't sign in the user to the app itself, but it does connect to the back-end database with the signed-in user, and allows you to test database authorization locally. +- Manually copy the access token from `https://<app-name>.azurewebsites.net/.auth/me` into your code, in place of the `X-MS-TOKEN-AAD-ACCESS-TOKEN` request header. 
+- If you deploy from Visual Studio, use remote debugging of your App Service app. ++#### What happens when access tokens expire? ++Your access token expires after some time. For information on how to refresh your access tokens without requiring users to reauthenticate with your app, see [Refresh identity provider tokens](configure-authentication-oauth-tokens.md#refresh-auth-tokens). ++## Next steps ++What you learned: ++> [!div class="checklist"] +> * Enable built-in authentication for Azure SQL Database +> * Disable other authentication options in Azure SQL Database +> * Enable App Service authentication +> * Use Azure Active Directory as the identity provider +> * Access Azure SQL Database on behalf of the signed-in Azure AD user ++> [!div class="nextstepaction"] +> [Map an existing custom DNS name to Azure App Service](app-service-web-tutorial-custom-domain.md) ++> [!div class="nextstepaction"] +> [Tutorial: Access Microsoft Graph from a secured .NET app as the app](scenario-secure-app-access-microsoft-graph-as-app.md) ++> [!div class="nextstepaction"] +> [Tutorial: Isolate back-end communication with Virtual Network integration](tutorial-networking-isolate-vnet.md) |
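To double-check that the `scope` login parameter from step 4 was applied, one option is to read the auth settings back; this is a sketch with placeholder names, and the query path mirrors the `jq` path used in the tutorial:

```azurecli-interactive
az webapp auth show \
  --resource-group <group-name> \
  --name <app-name> \
  --query "properties.identityProviders.azureActiveDirectory.login.loginParameters"
```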
app-service | Tutorial Connect Msi Sql Database | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-msi-sql-database.md | -[App Service](overview.md) provides a highly scalable, self-patching web hosting service in Azure. It also provides a [managed identity](overview-managed-identity.md) for your app, which is a turn-key solution for securing access to [Azure SQL Database](/azure/sql-database/) and other Azure services. Managed identities in App Service make your app more secure by eliminating secrets from your app, such as credentials in the connection strings. In this tutorial, you'll add managed identity to the sample web app you built in one of the following tutorials: +[App Service](overview.md) provides a highly scalable, self-patching web hosting service in Azure. It also provides a [managed identity](overview-managed-identity.md) for your app, which is a turn-key solution for securing access to [Azure SQL Database](/azure/sql-database/) and other Azure services. Managed identities in App Service make your app more secure by eliminating secrets from your app, such as credentials in the connection strings. In this tutorial, you add managed identity to the sample web app you built in one of the following tutorials: - [Tutorial: Build an ASP.NET app in Azure with Azure SQL Database](app-service-web-tutorial-dotnet-sqldatabase.md) - [Tutorial: Build an ASP.NET Core and Azure SQL Database app in Azure App Service](tutorial-dotnetcore-sqldb-app.md) The steps you follow for your project depends on whether you're using [Entity Fr conn.AccessToken = token.Token; ``` - This code uses [Azure.Identity.DefaultAzureCredential](/dotnet/api/azure.identity.defaultazurecredential) to get a useable token for SQL Database from Azure Active Directory and then adds it to the database connection. While you can customize `DefaultAzureCredential`, by default it's already very versatile. When running in App Service, it uses app's system-assigned managed identity. When running locally, it can get a token using the logged-in identity of Visual Studio, Visual Studio Code, Azure CLI, and Azure PowerShell. + This code uses [Azure.Identity.DefaultAzureCredential](/dotnet/api/azure.identity.defaultazurecredential) to get a useable token for SQL Database from Azure Active Directory and then adds it to the database connection. While you can customize `DefaultAzureCredential`, by default it's already versatile. When it runs in App Service, it uses app's system-assigned managed identity. When it runs locally, it can get a token using the logged-in identity of Visual Studio, Visual Studio Code, Azure CLI, and Azure PowerShell. 1. In *Web.config*, find the connection string called `MyDbConnection` and replace its `connectionString` value with `"server=tcp:<server-name>.database.windows.net;database=<db-name>;"`. Replace _\<server-name>_ and _\<db-name>_ with your server name and database name. This connection string is used by the default constructor in *Models/MyDbContext.cs*. - That's every thing you need to connect to SQL Database. When debugging in Visual Studio, your code uses the Azure AD user you configured in [2. Set up your dev environment](#2-set-up-your-dev-environment). You'll set up SQL Database later to allow connection from the managed identity of your App Service app. + That's every thing you need to connect to SQL Database. When you debug in Visual Studio, your code uses the Azure AD user you configured in [2. 
Set up your dev environment](#2-set-up-your-dev-environment). You'll set up SQL Database later to allow connection from the managed identity of your App Service app. 1. Type `Ctrl+F5` to run the app again. The same CRUD app in your browser is now connecting to the Azure SQL Database directly, using Azure AD authentication. This setup lets you run database migrations from Visual Studio. The steps you follow for your project depends on whether you're using [Entity Fr > The [Active Directory Default](/sql/connect/ado-net/sql/azure-active-directory-authentication#using-active-directory-default-authentication) authentication type can be used both on your local machine and in Azure App Service. The driver attempts to acquire a token from Azure Active Directory using various means. If the app is deployed, it gets a token from the app's managed identity. If the app is running locally, it tries to get a token from Visual Studio, Visual Studio Code, and Azure CLI. > - That's everything you need to connect to SQL Database. When debugging in Visual Studio, your code uses the Azure AD user you configured in [2. Set up your dev environment](#2-set-up-your-dev-environment). You'll set up SQL Database later to allow connection from the managed identity of your App Service app. The `DefaultAzureCredential` class caches the token in memory and retrieves it from Azure AD just before expiration. You don't need any custom code to refresh the token. + That's everything you need to connect to SQL Database. When you debug in Visual Studio, your code uses the Azure AD user you configured in [2. Set up your dev environment](#2-set-up-your-dev-environment). You'll set up SQL Database later to allow connection from the managed identity of your App Service app. The `DefaultAzureCredential` class caches the token in memory and retrieves it from Azure AD just before expiration. You don't need any custom code to refresh the token. 1. Type `Ctrl+F5` to run the app again. The same CRUD app in your browser is now connecting to the Azure SQL Database directly, using Azure AD authentication. This setup lets you run database migrations from Visual Studio. What you learned: > [!div class="nextstepaction"] > [Secure with custom domain and certificate](tutorial-secure-domain-certificate.md) +> [!div class="nextstepaction"] +> [Tutorial: Connect an App Service app to SQL Database on behalf of the signed-in user](tutorial-connect-app-access-sql-database-as-user-dotnet.md) + > [!div class="nextstepaction"] > [Tutorial: Connect to Azure databases from App Service without secrets using a managed identity](tutorial-connect-msi-azure-database.md) |
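The managed-identity tutorial above assumes the app's system-assigned identity is enabled. A minimal sketch with placeholder names for turning it on and reading back its principal ID:

```azurecli-interactive
# Enable the system-assigned managed identity on the App Service app
az webapp identity assign --resource-group <group-name> --name <app-name>

# Read back the identity's principal ID
az webapp identity show --resource-group <group-name> --name <app-name> --query principalId --output tsv
```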
application-gateway | Application Gateway Diagnostics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-diagnostics.md | You can monitor Azure Application Gateway resources in the following ways: * [Logs](#diagnostic-logging): Logs allow for performance, access, and other data to be saved or consumed from a resource for monitoring purposes. -* [Metrics](application-gateway-metrics.md): Application Gateway has several metrics which help you verify that your system is performing as expected. +* [Metrics](application-gateway-metrics.md): Application Gateway has several metrics that help you verify your system is performing as expected. [!INCLUDE [updated-for-az](../../includes/updated-for-az.md)] The access log is generated only if you've enabled it on each Application Gatewa |httpVersion | HTTP version of the request. | |receivedBytes | Size of packet received, in bytes. | |sentBytes| Size of packet sent, in bytes.|-|clientResponseTime| Length of time (in **seconds**) that it takes for the first byte of a client request to be processed and the first byte sent in the response to the client. | +|clientResponseTime| Time difference (in **seconds**) between first byte received from the backend to first byte sent to the client. | |timeTaken| Length of time (in **seconds**) that it takes for the first byte of a client request to be processed and its last-byte sent in the response to the client. It's important to note that the Time-Taken field usually includes the time that the request and response packets are traveling over the network. | |WAFEvaluationTime| Length of time (in **seconds**) that it takes for the request to be processed by the WAF. | |WAFMode| Value can be either Detection or Prevention | The access log is generated only if you've enabled it on each Application Gatewa |sentBytes| Size of packet sent, in bytes.| |timeTaken| Length of time (in milliseconds) that it takes for a request to be processed and its response to be sent. This is calculated as the interval from the time when Application Gateway receives the first byte of an HTTP request to the time when the response send operation finishes. It's important to note that the Time-Taken field usually includes the time that the request and response packets are traveling over the network. | |sslEnabled| Whether communication to the backend pools used TLS/SSL. Valid values are on and off.|-|host| The hostname with which the request has been sent to the backend server. If backend hostname is being overridden, this name will reflect that.| +|host| The hostname with which the request has been sent to the backend server. If backend hostname is being overridden, this name reflects that.| |originalHost| The hostname with which the request was received by the Application Gateway from the client.| ```json |
application-gateway | Features | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/features.md | For more information, see [Application Gateway redirect overview](redirect-overv ## Session affinity -The cookie-based session affinity feature is useful when you want to keep a user session on the same server. By using gateway-managed cookies, the Application Gateway can direct subsequent traffic from a user session to the same server for processing. This is important in cases where session state is saved locally on the server for a user session. +The cookie-based session affinity feature is useful when you want to keep a user session on the same server. Using gateway-managed cookies, the Application Gateway can direct subsequent traffic from a user session to the same server for processing. This is important in cases where session state is saved locally on the server for a user session. For more information, see [How an application gateway works](how-application-gateway-works.md#modifications-to-the-request). For more information, see [WebSocket support](application-gateway-websocket.md) ## Connection draining -Connection draining helps you achieve graceful removal of backend pool members during planned service updates or problems with backend health. This setting is enabled via the [Backend Setting](configuration-http-settings.md) and is applied to all backend pool members during rule creation. Once enabled, the aplication gateway ensures all deregistering instances of a backend pool don't receive any new requests while allowing existing requests to complete within a configured time limit. It applies to cases where backend instances are -- explicitly removed from the backend pool after a configuration change by a user,+Connection draining helps you achieve graceful removal of backend pool members during planned service updates or problems with backend health. This setting is enabled via the [Backend Setting](configuration-http-settings.md) and is applied to all backend pool members during rule creation. Once enabled, the application gateway ensures all deregistering instances of a backend pool don't receive any new requests while allowing existing requests to complete within a configured time limit. It applies to cases where backend instances are: +- explicitly removed from the backend pool after a configuration change by a user - reported as unhealthy by the health probes, or-- removed during a scale-in operation.+- removed during a scale-in operation -The only exception is when requests continue to be proxied to the deregistering instances because of gateway-managed session affinity. +The only exception is when requests continue to be proxied to the deregistering instances because of gateway-managed session affinity. -The connection draining is honored for WebSocket connections as well. For information on time limits, see [Backend Settings configuration](configuration-http-settings.md#connection-draining). +The connection draining is honored for WebSocket connections as well. Connection draining is invoked for every single update to the gateway. To prevent connection loss to existing members of the backend pool, make sure to enable connection draining. ++For information on time limits, see [Backend Settings configuration](configuration-http-settings.md#connection-draining). ## Custom error pages HTTP headers allow the client and server to pass additional information with the - Removing response header fields that can reveal sensitive information. 
- Stripping port information from X-Forwarded-For headers. -Application Gateway and WAF v2 SKU supports the capability to add, remove, or update HTTP request and response headers, while the request and response packets move between the client and backend pools. You can also rewrite URLs, query string parameters and host name. With URL rewrite and URL path-based routing, you can choose to either route requests to one of the backend pools based on the original path or the rewritten path, using the re-evaluate path map option. +Application Gateway and WAF v2 SKU supports the capability to add, remove, or update HTTP request and response headers, while the request and response packets move between the client and backend pools. You can also rewrite URLs, query string parameters and host name. With URL rewrite and URL path-based routing, you can choose to either route requests to one of the backend pools based on the original path or the rewritten path, using the reevaluate path map option. It also provides you with the capability to add conditions to ensure the specified headers or URL are rewritten only when certain conditions are met. These conditions are based on the request and response information. |
applied-ai-services | Concept Custom Classifier | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-custom-classifier.md | recommendations: false **This article applies to:**  **Form Recognizer v3.0**. +> [!IMPORTANT] +> +> Custom classification model is currently in public preview. Features, approaches, and processes may change, prior to General Availability (GA), based on user feedback. +> + Custom classification models are deep-learning-model types that combine layout and language features to accurately detect and identify documents you process within your application. Custom classification models can classify each page in an input file to identify the document(s) within and can also identify multiple documents or multiple instances of a single document within an input file. ## Model capabilities |
applied-ai-services | Concept Invoice | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-invoice.md | See how data, including customer information, vendor details, and line items, is | Supported languages | Details | |:-|:|-| • English (en) | United States (us), Australia (-au), Canada (-ca), Great Britain (-gb), India (-in)| +| • English (en) | United States (us), Australia (-au), Canada (-ca), United Kingdom (-uk), India (-in)| | • Spanish (es) |Spain (es)| | • German (de) | Germany (de)| | • French (fr) | France (fr) | |
applied-ai-services | Build A Custom Classifier | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/how-to-guides/build-a-custom-classifier.md | monikerRange: 'form-recog-3.0.0' recommendations: false -# Build and train a custom classification model +# Build and train a custom classification model (preview) [!INCLUDE [applies to v3.0](../includes/applies-to-v3-0.md)] +> [!IMPORTANT] +> +> Custom classification model is currently in public preview. Features, approaches, and processes may change, prior to General Availability (GA), based on user feedback. +> + Custom classification models can classify each page in an input file to identify the document(s) within. Classifier models can also identify multiple documents or multiple instances of a single document in the input file. Form Recognizer custom models require as few as five training documents per document class to get started. To get started training a custom classification model, you need at least **five documents** for each class and **two classes** of documents. ## Custom classification model input requirements The Form Recognizer Studio provides and orchestrates all the API calls required :::image type="content" source="../media/how-to/studio-select-storage.png" alt-text="Screenshot showing how to select the Form Recognizer resource."::: -1. Training a custom classifier requires the output from the Layout model for each document in your dataset. Run layout on all documents as an optional step to speed up the model training process. +1. **Training a custom classifier requires the output from the Layout model for each document in your dataset**. Run layout on all documents prior to the model training process. 1. Finally, review your project settings and select **Create Project** to create a new project. You should now be in the labeling window and see the files in your dataset listed. Once the model training is complete, you can test your model by selecting the mo Congratulations, you've trained a custom classification model in the Form Recognizer Studio! Your model is ready for use with the REST API or the SDK to analyze documents. +## Troubleshoot ++The [classification model](../concept-custom-classifier.md) requires results from the [layout model](../concept-layout.md) for each training document. If you haven't provided the layout results, the Studio attempts to run the layout model for each document prior to training the classifier. This process is throttled and can result in a 429 response. ++In the Studio, prior to training with the classification model, run the [layout model](https://formrecognizer.appliedai.azure.com/studio/layout) on each document and upload it to the same location as the original document. Once the layout results are added, you can train the classifier model with your documents. + ## Next steps > [!div class="nextstepaction"] |
applied-ai-services | Resource Customer Stories | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/resource-customer-stories.md | The following customers and partners have adopted Form Recognizer across a wide ||-|-| | **Acumatica** | [**Acumatica**](https://www.acumatica.com/) is a technology provider that develops cloud and browser-based enterprise resource planning (ERP) software for small and medium-sized businesses (SMBs). To bring expense claims into the modern age, Acumatica incorporated Form Recognizer into its native application. The Form Recognizer's prebuilt-receipt API and machine learning capabilities are used to automatically extract data from receipts. Acumatica's customers can file multiple, error-free claims in a matter of seconds, freeing up more time to focus on other important tasks. | [Customer story](https://customers.microsoft.com/story/762684-acumatica-partner-professional-services-azure) | | **Air Canada** | In September 2021, [**Air Canada**](https://www.aircanada.com/) was tasked with verifying the COVID-19 vaccination status of thousands of worldwide employees in only two months. After realizing manual verification would be too costly and complex within the time constraint, Air Canada turned to its internal AI team for an automated solution. The AI team partnered with Microsoft and used Form Recognizer to roll out a fully functional, accurate solution within weeks. This partnership met the government mandate on time and saved thousands of hours of manual work. | [Customer story](https://customers.microsoft.com/story/1505667713938806113-air-canada-travel-transportation-azure-form-recognizer)|-|**Arkas Logistics** | [**Arkas Logistics**](http://www.arkaslojistik.com.tr/) is operates under the umbrella of Arkas Holding, Turkey's leading holding institution and operating in 23 countries. During the COVID-19 crisis, the company has been able to provide outstanding, complete logistical services thanks to its focus on contactless operation and digitalization steps. Form Recognizer powers a solution that maintains the continuity of the supply chain and allows for uninterrupted service. | [Customer story](https://customers.microsoft.com/story/842149-arkas-logistics-transportation-azure-en-turkey ) | +|**Arkas Logistics** | [**Arkas Logistics**](http://www.arkaslojistik.com.tr/) operates under the umbrella of Arkas Holding, Türkiye's leading holding institution and operating in 23 countries. During the COVID-19 crisis, the company has been able to provide outstanding, complete logistical services thanks to its focus on contactless operation and digitalization steps. Form Recognizer powers a solution that maintains the continuity of the supply chain and allows for uninterrupted service. | [Customer story](https://customers.microsoft.com/story/842149-arkas-logistics-transportation-azure-en-turkey ) | |**Automation Anywhere**| [**Automation Anywhere**](https://www.automationanywhere.com/) is on a singular and unwavering mission to democratize automation by liberating teams from mundane, repetitive tasks, and allowing more time for innovation and creativity with cloud-native robotic process automation (RPA) software. To protect the citizens of the United Kingdom, healthcare providers must process tens of thousands of COVID-19 tests daily, each one accompanied by a form for the World Health Organization (WHO). 
Manually completing and processing these forms would potentially slow testing and divert resources away from patient care. In response, Automation Anywhere built an AI-powered bot to help a healthcare provider automatically process and submit the COVID-19 test forms at scale. | [Customer story](https://customers.microsoft.com/story/811346-automation-anywhere-partner-professional-services-azure-cognitive-services) | |**AvidXchange**| [**AvidXchange**](https://www.avidxchange.com/) has developed an accounts payable automation solution applying Form Recognizer. AvidXchange partners with Azure Cognitive Services to deliver an accounts payable automation solution for the middle market. Customers benefit from faster invoice processing times and increased accuracy to ensure their suppliers are paid the right amount, at the right time. | [Blog](https://techcommunity.microsoft.com/t5/azure-ai/form-recognizer-now-reads-more-languages-processes-ids-and/ba-p/2179428)| |**Blue Prism**| [**Blue Prism**](https://www.blueprism.com/) Decipher is an AI-powered document processing capability that's directly embedded into the company's connected-RPA platform. Decipher works with Form Recognizer to help organizations process forms faster and with less human effort. One of Blue Prism's customers has been testing the solution to automate invoice handling as part of its procurement process. | [Customer story](https://customers.microsoft.com/story/737482-blue-prism-partner-professional-services-azure) | |
applied-ai-services | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/whats-new.md | Form Recognizer service is updated on an ongoing basis. Bookmark this page to st * Portuguese - Brazil (pt-BR) * Prebuilt invoice model - added languages supported. The invoice model now supports these added languages and locales- * English - United States (en-US), Australia (en-AU), Canada (en-CA), Great Britain (en-GB), India (en-IN) + * English - United States (en-US), Australia (en-AU), Canada (en-CA), United Kingdom (en-UK), India (en-IN) * Spanish - Spain (es-ES) * French - France (fr-FR) * Italian - Italy (it-IT) Form Recognizer service is updated on an ongoing basis. Bookmark this page to st The **prebuilt invoice model** now has added support for the following languages: - * English - Australia (en-AU), Canada (en-CA), Great Britain (en-GB), India (en-IN) + * English - Australia (en-AU), Canada (en-CA), United Kingdom (en-UK), India (en-IN) * Portuguese - Brazil (pt-BR) The **prebuilt invoice model** now has added support for the following field extractions: |
applied-ai-services | Language Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/immersive-reader/language-support.md | This article lists supported human languages for Immersive Reader features. | Thai | th | | Thai (Thailand) | th-TH | | Turkish | tr |-| Turkish (Turkey) | tr-TR | +| Turkish (Türkiye) | tr-TR | | Ukrainian | uk | | Ukrainian (Ukraine) | uk-UA | | Urdu | ur | This article lists supported human languages for Immersive Reader features. | Tigrinya | ti | | Tongan | to | | Turkish | tr |-| Turkish (Turkey) | tr-TR | +| Turkish (Türkiye) | tr-TR | | Turkmen | tk | | Ukrainian | uk | | UpperSorbian | hsb | This article lists supported human languages for Immersive Reader features. | Thai | th | | Thai (Thailand) | th-TH | | Turkish | tr |-| Turkish (Turkey) | tr-TR | +| Turkish (Türkiye) | tr-TR | | Ukrainian | uk | | Vietnamese | vi | | Vietnamese (Vietnam) | vi-VN | This article lists supported human languages for Immersive Reader features. | Swedish | sv | | Swedish (Sweden) | sv-SE | | Turkish | tr |-| Turkish (Turkey) | tr-TR | +| Turkish (Türkiye) | tr-TR | | Ukrainian | uk | | Welsh | cy | |
automation | Automation Graphical Authoring Intro | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-graphical-authoring-intro.md | Title: Author graphical runbooks in Azure Automation description: This article tells how to author a graphical runbook without working with code. Previously updated : 10/21/2021 Last updated : 03/07/2023 The following example uses output from an activity called `Get Twitter Connectio ## Authenticate to Azure resources -Runbooks in Azure Automation that manage Azure resources require authentication to Azure. The [Run As account](./automation-security-overview.md), also referred to as a service principal, is the default mechanism that an Automation runbook uses to access Azure Resource Manager resources in your subscription. You can add this functionality to a graphical runbook by adding the `AzureRunAsConnection` connection asset, which uses the PowerShell [Get-AutomationConnection](/system-center/smlet. This scenario is illustrated in the following example. +Runbooks in Azure Automation that manage Azure resources require authentication to Azure. [Managed Identities](enable-managed-identity-for-automation.md) is the default mechanism that an Automation runbook uses to access Azure Resource Manager resources in your subscription. You can add this functionality to a graphical runbook by importing the following runbook into the automation account, which leverages the system-assigned Managed Identity of the automation account to authenticate and access Azure resources. - --The `Get Run As Connection` activity, or `Get-AutomationConnection`, is configured with a constant value data source named `AzureRunAsConnection`. -- --The next activity, `Connect-AzAccount`, adds the authenticated Run As account for use in the runbook. -- -->[!NOTE] ->For PowerShell runbooks, `Add-AzAccount` and `Add-AzureRMAccount` are aliases for `Connect-AzAccount`. Note that these aliases are not available for your graphical runbooks. A graphical runbook can only use `Connect-AzAccount` itself. --For the parameter fields **APPLICATIONID**, **CERTIFICATETHUMBPRINT**, and **TENANTID**, specify the name of the property for the field path, since the activity outputs an object with multiple properties. Otherwise, when the runbook executes, it fails while attempting to authenticate. This is what you need at a minimum to authenticate your runbook with the Run As account. --Some subscribers create an Automation account using an [Azure AD user account](./shared-resources/credentials.md) to manage Azure classic deployment or for Azure Resource Manager resources. To maintain backward compatibility for these subscribers, the authentication mechanism to use in your runbook is the `Add-AzureAccount` cmdlet with a [credential asset](./shared-resources/credentials.md). The asset represents an Active Directory user with access to the Azure account. --You can enable this functionality for your graphical runbook by adding a credential asset to the canvas, followed by an `Add-AzureAccount` activity that uses the credential asset for its input. See the following example. -- --The runbook must authenticate at its start and after each checkpoint. Thus you must use an `Add-AzureAccount` activity after any `Checkpoint-Workflow` activity. You do not need to use an additional credential activity. 
-- +```powershell-interactive +wget https://raw.githubusercontent.com/azureautomation/runbooks/master/Utility/AzMI/AzureAutomationTutorialWithIdentityGraphical.graphrunbook -outfile AzureAutomationTutorialWithIdentityGraphical.graphrunbook +``` ## Export a graphical runbook |
automation | Automation Hybrid Runbook Worker | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-hybrid-runbook-worker.md | Title: Azure Automation Hybrid Runbook Worker overview description: Know about Hybrid Runbook Worker. How to install and run the runbooks on machines in your local datacenter or cloud provider. Previously updated : 03/15/2023 Last updated : 03/21/2023 There are two types of Runbook Workers - system and user. The following table de |Type | Description | |--|-| |**System** |Supports a set of hidden runbooks used by the Update Management feature that are designed to install user-specified updates on Windows and Linux machines.<br> This type of Hybrid Runbook Worker isn't a member of a Hybrid Runbook Worker group, and therefore doesn't run runbooks that target a Runbook Worker group. |-|**User** |Supports user-defined runbooks intended to run directly on the Windows and Linux machine that are members of one or more Runbook Worker groups. | +|**User** |Supports user-defined runbooks intended to run directly on the Windows and Linux machines. | Agent-based (V1) Hybrid Runbook Workers rely on the [Log Analytics agent](../azure-monitor/agents/log-analytics-agent.md) reporting to an Azure Monitor [Log Analytics workspace](../azure-monitor/logs/log-analytics-workspace-overview.md). The workspace isn't only to collect monitoring data from the machine, but also to download the components required to install the agent-based Hybrid Runbook Worker. |
automation | Automation Security Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-security-overview.md | description: This article provides an overview of Azure Automation account authe keywords: automation security, secure automation; automation authentication Previously updated : 11/05/2021 Last updated : 03/07/2023 For details on using managed identities, see [Enable managed identity for Azure ## Run As accounts +> [!IMPORTANT] +> Azure Automation Run As Account will retire on September 30, 2023, and will be replaced with managed identities. Before that date, you'll need to start migrating your runbooks to use [managed identities](automation-security-overview.md#managed-identities). For more information, see [migrating from existing Run As accounts to managed identities](https://learn.microsoft.com/azure/automation/migrate-run-as-accounts-managed-identity?tabs=run-as-account#sample-scripts) to start migrating your runbooks from Run As accounts to managed identities before 30 September 2023. + Run As accounts in Azure Automation provide authentication for managing Azure Resource Manager resources or resources deployed on the classic deployment model. There are two types of Run As accounts in Azure Automation: - Azure Run As Account - Azure Classic Run As Account To create or renew a Run As account, permissions are needed at three levels: - Azure Active Directory (Azure AD), and - Automation account -> [!NOTE] -> Azure Automation does not automatically create the Run As account, it has been replaced by using managed identities. However, we continue to support a RunAs account for existing and new Automation accounts. You can [create a Run As account](create-run-as-account.md) in your Automation account from the Azure portal or by using PowerShell. ### Subscription permissions When you create a Run As account, it performs the following tasks: ### Azure Classic Run As account +> [!IMPORTANT] +> Azure Automation Run As Account will retire on September 30, 2023, and will be replaced with managed identities. Before that date, you'll need to start migrating your runbooks to use [managed identities](automation-security-overview.md#managed-identities). For more information, see [migrating from existing Run As accounts to managed identities](https://learn.microsoft.com/azure/automation/migrate-run-as-accounts-managed-identity?tabs=run-as-account#sample-scripts) to start migrating your runbooks from Run As accounts to managed identities before 30 September 2023. + When you create an Azure Classic Run As account, it performs the following tasks: > [!NOTE] |
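To make the retirement guidance above concrete, the following sketch shows the managed-identity sign-in pattern that replaces a Run As connection inside a PowerShell runbook. It assumes the Automation account's system-assigned managed identity is enabled and has been granted the Azure RBAC roles the runbook needs; the `Get-AzVM` call is only a placeholder workload.

```powershell-interactive
# Sign in with the Automation account's system-assigned managed identity
# instead of a Run As account (service principal).
Disable-AzContextAutosave -Scope Process          # don't inherit a cached context
$azContext = (Connect-AzAccount -Identity).Context

# Placeholder workload: list the VMs the identity has permission to read.
Get-AzVM -DefaultProfile $azContext | Select-Object Name, ResourceGroupName
```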
automation | Automation Solution Vm Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-solution-vm-management.md | -> Start/Stop VM during off-hours version 1 is unavailable in the marketplace now as it will retire by 30 September 2023. We recommend you start using [version 2](../azure-functions/start-stop-vms/overview.md), which is now generally available. The new version offers all existing capabilities and provides new features, such as multi-subscription support from a single Start/Stop instance. If you have the version 1 solution already deployed, you can still use the feature, and we will provide support until 30 September 2023. The details of the announcement will be shared soon. +> Start/Stop VM during off-hours version 1 is unavailable in the marketplace now as it will retire by 30 September 2023. We recommend you start using [version 2](../azure-functions/start-stop-vms/overview.md) which is now generally available. The new version offers all existing capabilities and provides new features, such as multi-subscription support from a single Start/Stop instance. If you have the version 1 solution already deployed, you can still use the feature, and we will provide support until 30 September 2023. The details of the announcement will be shared soon. The Start/Stop VMs during off-hours feature starts or stops enabled Azure VMs. It starts or stops machines on user-defined schedules, provides insights through Azure Monitor logs, and sends optional emails by using [action groups](../azure-monitor/alerts/action-groups.md). The feature can be enabled on both Azure Resource Manager and classic VMs for most scenarios. |
automation | Migrate Existing Agent Based Hybrid Worker To Extension Based Workers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/migrate-existing-agent-based-hybrid-worker-to-extension-based-workers.md | The purpose of the Extension-based approach is to simplify the installation and ### Supported operating systems -| Windows | Linux (x64)| +| Windows | Linux | |||-| ● Windows Server 2022 (including Server Core) <br> ● Windows Server 2019 (including Server Core) <br> ● Windows Server 2016, version 1709 and 1803 (excluding Server Core) <br> ● Windows Server 2012, 2012 R2 <br> ● Windows 10 Enterprise (including multi-session) and Pro| ● Debian GNU/Linux 10 and 11 <br> ● Ubuntu 22.04 LTS <br> ● SUSE Linux Enterprise Server 15.2, and 15.3 <br> ● Red Hat Enterprise Linux Server 7 and 8 | +| ● Windows Server 2022 (including Server Core) <br> ● Windows Server 2019 (including Server Core) <br> ● Windows Server 2016, version 1709 and 1803 (excluding Server Core) <br> ● Windows Server 2012, 2012 R2 <br> ● Windows 10 Enterprise (including multi-session) and Pro| ● Debian GNU/Linux 8, 9, 10, and 11 <br> ● Ubuntu 18.04 LTS, 20.04 LTS, and 22.04 LTS <br> ● SUSE Linux Enterprise Server 15.2, and 15.3 <br> ● Red Hat Enterprise Linux Server 7 and 8 </br> *The Hybrid Worker extension follows the support timelines of the OS vendor. | ### Other Requirements -| Windows | Linux (x64)| +| Windows | Linux | ||| | Windows PowerShell 5.1 (download WMF 5.1). PowerShell Core isn't supported.| Linux Hardening must not be enabled. | | .NET Framework 4.6.2 or later. | | |
automation | Certificates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/shared-resources/certificates.md | The following example shows how to access certificates in Python 2 runbooks. ```python # get a reference to the Azure Automation certificate-cert = automationassets.get_automation_certificate("AzureRunAsCertificate") +cert = automationassets.get_automation_certificate("MyCertificate") # returns the binary cert content print cert The following example shows how to access certificates in Python 3 runbooks (pre ```python # get a reference to the Azure Automation certificate-cert = automationassets.get_automation_certificate("AzureRunAsCertificate") +cert = automationassets.get_automation_certificate("MyCertificate") # returns the binary cert content print (cert) |
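The Python snippets in the row above read a certificate asset by name. For comparison, in a PowerShell runbook the same asset is typically retrieved with the internal `Get-AutomationCertificate` cmdlet, which returns an `X509Certificate2` object rather than raw bytes. The asset name `MyCertificate` simply mirrors the example above; treat the rest as a minimal sketch.

```powershell-interactive
# Get a reference to the Azure Automation certificate asset.
$cert = Get-AutomationCertificate -Name "MyCertificate"

# Inspect the certificate object returned by the cmdlet.
$cert.Thumbprint
$cert.NotAfter
```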
azure-cache-for-redis | Cache Best Practices Scale | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-best-practices-scale.md | description: Learn how to scale your Azure Cache for Redis. Previously updated : 04/06/2022 Last updated : 03/28/2023 If you're using TLS and you have a high number of connections, consider scaling ## Scaling and memory -You can scale your cache instances in the Azure portal. Also, you can programatically scale your cache using PowerShell cmdlets, Azure CLI, and by using the Microsoft Azure Management Libraries (MAML). +You can scale your cache instances in the Azure portal. Also, you can programmatically scale your cache using PowerShell cmdlets, Azure CLI, and by using the Microsoft Azure Management Libraries (MAML). When you scale a cache up or down in the portal, both `maxmemory-reserved` and `maxfragmentationmemory-reserved` settings automatically scale in proportion to the cache size. For example, if `maxmemory-reserved` is set to 3 GB on a 6-GB cache, and you scale to a 12-GB cache, the settings are automatically updated to 6 GB during scaling. When you scale down, the reverse happens. When you scale a cache up or down programmatically, using PowerShell, CLI, or REST API, any `maxmemory-reserved` or `maxfragmentationmemory-reserved` are ignored as part of the update request. Only your scaling change is honored. You can update these memory settings after the scaling operation has completed. -For more information on scaling and memory, see [How to automate a scaling operation](cache-how-to-scale.md#how-to-automate-a-scaling-operation). +For more information on scaling and memory, depending on your tier, see either: +- [How to scale - Basic, Standard, and Premium tiers](cache-how-to-scale.md#how-to-scalebasic-standard-and-premium-tiers), or +- [How to scale up and out - Enterprise and Enterprise Flash tiers](cache-how-to-scale.md#how-to-scale-up-and-outenterprise-and-enterprise-flash-tiers). > [!NOTE] > When you scale a cache up or down programmatically, any `maxmemory-reserved` or `maxfragmentationmemory-reserved` are ignored as part of the update request. Only your scaling change is honored. You can update these memory settings after the scaling operation has completed. |
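As a quick illustration of the programmatic scaling described above, here's a minimal PowerShell sketch for a Premium tier cache; the cache and resource group names are placeholders, and the same change can be made with the Azure CLI or the management libraries.

```powershell-interactive
# Scale an existing Premium cache to a larger size (for example, P1 -> P2).
# Note: maxmemory-reserved / maxfragmentationmemory-reserved are ignored during
# this request and can be updated after the scaling operation completes.
Set-AzRedisCache -ResourceGroupName "myResourceGroup" -Name "myCache" -Sku "Premium" -Size "P2"
```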
azure-cache-for-redis | Cache How To Encryption | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-encryption.md | + + Title: Configure active encryption for Enterprise Azure Cache for Redis instances +description: Learn about encryption for your Azure Cache for Redis Enterprise instances across Azure regions. ++++ Last updated : 03/24/2023+++++# Configure disk encryption for Azure Cache for Redis instances using customer managed keys (preview) ++In this article, you learn how to configure disk encryption using Customer Managed Keys (CMK). The Enterprise and Enterprise Flash tiers of Azure Cache for Redis offer the ability to encrypt the OS and data persistence disks with customer-managed key encryption. Platform-managed keys (PMKs), also known as Microsoft-managed keys (MMKs), are used to encrypt the data. However, customer managed keys (CMK) can also be used to wrap the MMKs to control access to these keys. This makes the CMK a _key encryption key_ or KEK. For more information, see [key management in Azure](/azure/security/fundamentals/key-management). ++Data in a Redis server is stored in memory by default. This data isn't encrypted. You can implement your own encryption on the data before writing it to the cache. In some cases, data can reside on-disk, either due to the operations of the operating system, or because of deliberate actions to persist data using [export](cache-how-to-import-export-data.md) or [data persistence](cache-how-to-premium-persistence.md). ++> [!NOTE] +> Operating system disk encryption is more important on the Premium tier because open-source Redis can page cache data to disk. The Enterprise and Enterprise Flash tiers don't page cache data to disk, which is an advantage of those tiers. +> ++## Scope of availability for CMK disk encryption ++| Tier | Basic, Standard, Premium | Enterprise, Enterprise Flash | +|--||| +|Microsoft managed keys (MMK) | Yes | Yes | +|Customer managed keys (CMK) | No | Yes (preview) | ++> [!NOTE] +> By default, all Azure Cache for Redis tiers use Microsoft managed keys to encrypt disks mounted to cache instances. However, in the Basic and Standard tiers, the C0 and C1 SKUs do not support any disk encryption. +> ++> [!IMPORTANT] +> On the Premium tier, data persistence streams data directly to Azure Storage, so disk encryption is less important. Azure Storage offers a [variety of encryption methods](../storage/common/storage-service-encryption.md) to be used instead. +> ++## Encryption coverage ++### Enterprise tiers ++In the **Enterprise** tier, disk encryption is used to encrypt the persistence disk, temporary files, and the OS disk: ++- persistence disk: holds persisted RDB or AOF files as part of [data persistence](cache-how-to-premium-persistence.md) +- temporary files used in _export_: temporary data used during export is encrypted. When you [export](cache-how-to-import-export-data.md) data, the encryption of the final exported data is controlled by settings in the storage account. +- the OS disk ++MMK is used to encrypt these disks by default, but CMK can also be used. ++In the **Enterprise Flash** tier, keys and values are also partially stored on-disk using nonvolatile memory express (NVMe) flash storage. However, this disk isn't the same as the one used for persisted data. Instead, it's ephemeral, and data isn't persisted after the cache is stopped, deallocated, or rebooted. Only MMK is supported on this disk because this data is transient and ephemeral. 
++| Data stored |Disk |Encryption Options | +|-||-| +|Persistence files | Persistence disk | MMK or CMK | +|RDB files waiting to be exported | OS disk and Persistence disk | MMK or CMK | +|Keys & values (Enterprise Flash tier only) | Transient NVMe disk | MMK | ++### Other tiers ++In the **Basic, Standard, and Premium** tiers, the OS disk is encrypted using MMK. There's no persistence disk mounted and Azure Storage is used instead. ++## Prerequisites and limitations ++### General prerequisites and limitations ++- Disk encryption isn't available in the Basic and Standard tiers for the C0 or C1 SKUs +- Only user assigned managed identity is supported to connect to Azure Key Vault +- Changing between MMK and CMK on an existing cache instance triggers a long-running maintenance operation. We don't recommend this for production use because a service disruption occurs. ++### Azure Key Vault prerequisites and limitations ++- The Azure Key Vault resource containing the customer managed key must be in the same region as the cache resource. +- [Purge protection and soft-delete](../key-vault/general/soft-delete-overview.md) must be enabled in the Azure Key Vault instance. Purge protection isn't enabled by default. +- When you use firewall rules in the Azure Key Vault, the Key Vault instance must be configured to [allow trusted services](/azure/key-vault/general/network-security). +- Only RSA keys are supported +- The user assigned managed identity must be given the permissions _Get_, _Unwrap Key_, and _Wrap Key_ in the Key Vault access policies, or the equivalent permissions within Azure Role Based Access Control. A recommended built-in role definition with the least privileges needed for this scenario is called [KeyVault Crypto Service Encryption User](../role-based-access-control/built-in-roles.md#key-vault-crypto-service-encryption-user). ++## How to configure CMK encryption on Enterprise caches ++### Use the portal to create a new cache with CMK enabled ++1. Sign in to the [Azure portal](https://portal.azure.com) and start the [Create a Redis Enterprise cache](quickstart-create-redis-enterprise.md) quickstart guide. ++1. On the **Advanced** page, go to the section titled **Customer-managed key encryption at rest** and enable the **Use a customer-managed key** option. ++ :::image type="content" source="media/cache-how-to-encryption/cache-use-key-encryption.png" alt-text="Screenshot of the advanced settings with customer-managed key encryption checked and in a red box."::: ++1. Select **Add** to assign a [user assigned managed identity](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md) to the resource. This managed identity is used to connect to the [Azure Key Vault](../key-vault/general/overview.md) instance that holds the customer managed key. ++ :::image type="content" source="media/cache-how-to-encryption/cache-managed-identity-user-assigned.png" alt-text="Screenshot showing user managed identity in the working pane."::: ++1. Select your chosen user assigned managed identity, and then choose the key input method to use. ++1. If using the **Select Azure key vault and key** input method, choose the Key Vault instance that holds your customer managed key. This instance must be in the same region as your cache. ++ > [!NOTE] + > For instructions on how to set up an Azure Key Vault instance, see the [Azure Key Vault quickstart guide](../key-vault/secrets/quick-create-portal.md). 
You can also select the _Create a key vault_ link beneath the Key Vault selection to create a new Key Vault instance. ++1. Choose the specific key and version using the **Customer-managed key (RSA)** and **Version** drop-downs. ++ :::image type="content" source="media/cache-how-to-encryption/cache-managed-identity-version.png" alt-text="Screenshot showing the select identity and key fields completed."::: ++1. If using the **URI** input method, enter the Key Identifier URI for your chosen key from Azure Key Vault. ++1. When you've entered all the information for your cache, select **Review + create**. ++### Add CMK encryption to an existing Enterprise cache ++1. Go to the **Encryption** in the Resource menu of your cache instance. If CMK is already set up, you see the key information. ++1. If you haven't set up or if you want to change CMK settings, select **Change encryption settings** + :::image type="content" source="media/cache-how-to-encryption/cache-encryption-existing-use.png" alt-text="Screenshot encryption selected in the Resource menu for an Enterprise tier cache."::: ++1. Select **Use a customer-managed key** to see your configuration options. ++1. Select **Add** to assign a [user assigned managed identity](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md) to the resource. This managed identity is used to connect to the [Azure Key Vault](../key-vault/general/overview.md) instance that holds the customer managed key. ++1. Select your chosen user assigned managed identity, and then choose which key input method to use. ++1. If using the **Select Azure key vault and key** input method, choose the Key Vault instance that holds your customer managed key. This instance must be in the same region as your cache. ++ > [!NOTE] + > For instructions on how to set up an Azure Key Vault instance, see the [Azure Key Vault quickstart guide](../key-vault/secrets/quick-create-portal.md). You can also select the _Create a key vault_ link beneath the Key Vault selection to create a new Key Vault instance. ++1. Choose the specific key using the **Customer-managed key (RSA)** drop-down. If there are multiple versions of the key to choose from, use the **Version** drop-down. + :::image type="content" source="media/cache-how-to-encryption/cache-encryption-existing-key.png" alt-text="Screenshot showing the select identity and key fields completed for Encryption."::: + +1. If using the **URI** input method, enter the Key Identifier URI for your chosen key from Azure Key Vault. ++1. Select **Save** ++## Next steps ++Learn more about Azure Cache for Redis features: ++- [Data persistence](cache-how-to-premium-persistence.md) +- [Import/Export](cache-how-to-import-export-data.md) |
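The Key Vault prerequisites listed above (purge protection enabled, plus the _Get_, _Unwrap Key_, and _Wrap Key_ permissions for the user-assigned managed identity) can also be prepared from the command line. The following is a minimal, hedged PowerShell sketch; the vault, resource group, location, and identity names are placeholders.

```powershell-interactive
# Create a Key Vault with purge protection enabled (soft delete is on by default).
New-AzKeyVault -Name "myKeyVault" -ResourceGroupName "myResourceGroup" `
    -Location "westus2" -EnablePurgeProtection

# Grant the user-assigned managed identity the minimum key permissions
# needed for customer-managed key (CMK) encryption.
$identity = Get-AzUserAssignedIdentity -ResourceGroupName "myResourceGroup" -Name "myCacheIdentity"
Set-AzKeyVaultAccessPolicy -VaultName "myKeyVault" `
    -ObjectId $identity.PrincipalId `
    -PermissionsToKeys get,unwrapKey,wrapKey
```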
azure-cache-for-redis | Cache How To Import Export Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-import-export-data.md | Title: Import and Export data in Azure Cache for Redis description: Learn how to import and export data to and from blob storage with your premium Azure Cache for Redis instances + Previously updated : 03/10/2023 Last updated : 03/24/2023 # Import and Export data in Azure Cache for Redis -Import/Export is an Azure Cache for Redis data management operation. It allows you to import data into a cache instance or export data from a cache instance. You import and export an Azure Cache for Redis Database (RDB) snapshot from a cache to a blob in an Azure Storage Account. Import/Export is supported in the Premium, Enterprise, and Enterprise Flash tiers. +Use the import and export functionality in Azure Cache for Redis as a data management operation. You import data into your cache instance or export data from a cache instance using an Azure Cache for Redis Database (RDB) snapshot. The snapshots are imported or exported using a blob in an Azure Storage Account. -- **Export** - you can export your Azure Cache for Redis RDB snapshots to a Page Blob.-- **Import** - you can import your Azure Cache for Redis RDB snapshots from either a Page Blob or a Block Blob.+Import/Export is supported in the Premium, Enterprise, and Enterprise Flash tiers: +- _Export_ - you can export your Azure Cache for Redis RDB snapshots to a Page Blob (Premium tier) or Block Blob (Enterprise tiers). +- _Import_ - you can import your Azure Cache for Redis RDB snapshots from either a Page Blob or a Block Blob. -Import/Export enables you to migrate between different Azure Cache for Redis instances or populate the cache with data before use. +You can use Import/Export to migrate between different Azure Cache for Redis instances or populate the cache with data before use. This article provides a guide for importing and exporting data with Azure Cache for Redis and provides the answers to commonly asked questions. -For information on which Azure Cache for Redis tiers support import and export, see [feature comparison](cache-overview.md#feature-comparison). +## Scope of availability ++|Tier | Basic, Standard | Premium |Enterprise, Enterprise Flash | +||||| +|Available | No | Yes | Yes | ++## Compatibility ++- Data is exported as an RDB page blob in the _Premium_ tier. In the _Enterprise_ and _Enterprise Flash_ tiers, data is exported as a .gz block blob. +- Caches running Redis 4.0 support RDB version 8 and below. Caches running Redis 6.0 support RDB version 9 and below. +- Exported backups from newer versions of Redis (for example, Redis 6.0) can't be imported into older versions of Redis (for example, Redis 4.0) +- RDB files from _Premium_ tier caches can be imported into _Enterprise_ and _Enterprise Flash_ tier caches. ## Import Use import to bring Redis compatible RDB files from any Redis server running in :::image type="content" source="./media/cache-how-to-import-export-data/cache-import-blobs.png" alt-text="Screenshot showing the Import button to select to begin the import."::: You can monitor the progress of the import operation by following the notifications from the Azure portal, or by viewing the events in the [audit log](../azure-monitor/essentials/activity-log.md).+ + > [!IMPORTANT] + > Audit log support is not yet available in the Enterprise tiers. 
+ > :::image type="content" source="./media/cache-how-to-import-export-data/cache-import-data-import-complete.png" alt-text="Screenshot showing the import progress in the notifications area."::: Export allows you to export the data stored in Azure Cache for Redis to Redis co This section contains frequently asked questions about the Import/Export feature. -- [What pricing tiers can use Import/Export?](#what-pricing-tiers-can-use-importexport)+- [Which tiers support Import/Export?](#which-tiers-support-importexport) - [Can I import data from any Redis server?](#can-i-import-data-from-any-redis-server) - [What RDB versions can I import?](#what-rdb-versions-can-i-import) - [Is my cache available during an Import/Export operation?](#is-my-cache-available-during-an-importexport-operation) This section contains frequently asked questions about the Import/Export feature - [I got an error when exporting my data to Azure Blob Storage. What happened?](#i-got-an-error-when-exporting-my-data-to-azure-blob-storage-what-happened) - [How to export if I have firewall enabled on my storage account?](#how-to-export-if-i-have-firewall-enabled-on-my-storage-account) -### What pricing tiers can use Import/Export? -Import/Export is available in the Premium, Enterprise and Enterprise Flash tiers. +### Which tiers support Import/Export? ++The _import_ and _export_ features are available only in the _Premium_, _Enterprise_, and _Enterprise Flash_ tiers. ### Can I import data from any Redis server? -Yes, you can import data that was exported from Azure Cache for Redis instances. You can import RDB files from any Redis server running in any cloud or environment. The environments include Linux, Windows, or cloud providers such as Amazon Web Services. To import this data, upload the RDB file from the Redis server you want into a page or block blob in an Azure Storage Account. Then, import it into your premium Azure Cache for Redis instance. For example, you might want to export the data from your production cache and import it into a cache that is used as part of a staging environment for testing or migration. +Yes, you can import data that was exported from Azure Cache for Redis instances. You can import RDB files from any Redis server running in any cloud or environment. The environments include Linux, Windows, or cloud providers such as Amazon Web Services. To import this data, upload the RDB file from the Redis server you want into a page or block blob in an Azure Storage Account. Then, import it into your premium Azure Cache for Redis instance. ++For example, you might want to: ++1. Export the data from your production cache. ++1. Then, import it into a cache that is used as part of a staging environment for testing or migration. > [!IMPORTANT] > To successfully import data exported from Redis servers other than Azure Cache for Redis when using a page blob, the page blob size must be aligned on a 512 byte boundary. For sample code to perform any required byte padding, see [Sample page blob upload](https://github.com/JimRoberts-MS/SamplePageBlobUpload). Yes, you can import data that was exported from Azure Cache for Redis instances. ### What RDB versions can I import? -Azure Cache for Redis supports RDB import up through RDB version 7. +For more information on supported RDB versions used with import, see the [compatibility section](#compatibility). ### Is my cache available during an Import/Export operation? 
Some pricing tiers have different [databases limits](cache-configure.md#database ### How is Import/Export different from Redis persistence? -Azure Cache for Redis persistence allows you to persist data stored in Redis to Azure Storage. When persistence is configured, Azure Cache for Redis persists a snapshot the cache data in a Redis binary format to disk based on a configurable backup frequency. If a catastrophic event occurs that disables both the primary and replica cache, the cache data is restored automatically using the most recent snapshot. For more information, see [How to configure data persistence for a Premium Azure Cache for Redis](cache-how-to-premium-persistence.md). +The Azure Cache for Redis _persistence_ feature is primarily a data durability feature. Conversely, the _import/export_ functionality is designed as a method to make periodic data backups for point-in-time recovery. +When _persistence_ is configured, your cache persists a snapshot of the data to disk, based on a configurable backup frequency. The data is written with a Redis-proprietary binary format. If a catastrophic event occurs that disables both the primary and the replica caches, the cache data is restored automatically using the most recent snapshot. ++Data persistence is designed for disaster recovery. It isn't intended as a point-in-time recovery mechanism. ++- On the Premium tier, the data persistence file is stored in Azure Storage, but the file can't be imported into a different cache. +- On the Enterprise tiers, the data persistence file is stored in a mounted disk that isn't user-accessible. -Import/ Export allows you to bring data into or export from Azure Cache for Redis. It doesn't configure backup and restore using Redis persistence. +If you want to make periodic data backups for point-in-time recovery, we recommend using the _import/export_ functionality. For more information, see [How to configure data persistence for Azure Cache for Redis](cache-how-to-premium-persistence.md). ### Can I automate Import/Export using PowerShell, CLI, or other management clients? -Yes, for PowerShell instructions see [To import an Azure Cache for Redis](cache-how-to-manage-redis-cache-powershell.md#to-import-an-azure-cache-for-redis) and [To export an Azure Cache for Redis](cache-how-to-manage-redis-cache-powershell.md#to-export-an-azure-cache-for-redis). +Yes, see the following instructions for the _Premium_ tier: ++- PowerShell instructions [to import Redis data](cache-how-to-manage-redis-cache-powershell.md#to-import-an-azure-cache-for-redis) and [to export Redis data](cache-how-to-manage-redis-cache-powershell.md#to-export-an-azure-cache-for-redis). +- Azure CLI instructions to [import Redis data](/cli/azure/redis#az-redis-import) and [export Redis data](/cli/azure/redis#az-redis-export) ++For the _Enterprise_ and _Enterprise Flash_ tiers: ++- PowerShell instructions [to import Redis data](/powershell/module/az.redisenterprisecache/import-azredisenterprisecache) and [to export Redis data](/powershell/module/az.redisenterprisecache/export-azredisenterprisecache). +- Azure CLI instructions to [import Redis data](/cli/azure/redisenterprise/database#az-redisenterprise-database-import) and [export Redis data](/cli/azure/redisenterprise/database#az-redisenterprise-database-export) ### I received a timeout error during my Import/Export operation. What does it mean? |
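To complement the PowerShell and CLI references above, here's a minimal sketch of an export followed by an import for Premium tier caches. The cache names are placeholders, and the container and file values are SAS URIs that you supply; see the linked reference pages for the full parameter sets.

```powershell-interactive
# Export the source cache's RDB snapshot(s) to a blob container.
Export-AzRedisCache -ResourceGroupName "myResourceGroup" -Name "sourceCache" `
    -Prefix "backup" -Container "<container SAS URI>"

# Import the exported blobs into another Premium cache.
Import-AzRedisCache -ResourceGroupName "myResourceGroup" -Name "targetCache" `
    -Files @("<blob SAS URI>") -Force
```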
azure-cache-for-redis | Cache How To Monitor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-monitor.md | In contrast, for clustered caches, we recommend using the metrics with the suffi - The total number of commands processed per second by the cache server during the specified reporting interval. This value maps to "instantaneous_ops_per_sec" from the Redis INFO command. - Server Load - The percentage of cycles in which the Redis server is busy processing and not waiting idle for messages. If this counter reaches 100, the Redis server has hit a performance ceiling, and the CPU can't process work any faster. If you're seeing high Redis Server Load, you'll see timeout exceptions in the client. In this case, you should consider scaling up or partitioning your data into multiple caches.+ +> [!CAUTION] +> The Server Load metric can present incorrect data for Enterprise and Enterprise Flash tier caches. Sometimes Server Load is represented as being over 100. We are investigating this issue. We recommend using the CPU metric instead in the meantime. ++ - Sets - The number of set operations to the cache during the specified reporting interval. This value is the sum of the following values from the Redis INFO all command: `cmdstat_set`, `cmdstat_hset`, `cmdstat_hmset`, `cmdstat_hsetnx`, `cmdstat_lset`, `cmdstat_mset`, `cmdstat_msetnx`, `cmdstat_setbit`, `cmdstat_setex`, `cmdstat_setrange`, and `cmdstat_setnx`. - Total Keys |
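If you want to watch the Server Load metric outside the portal, a hedged PowerShell sketch using Azure Monitor is shown below. The resource ID is a placeholder, and the `serverLoad` metric name is an assumption for the `Microsoft.Cache/Redis` resource type; list the metric definitions first to confirm which names your cache exposes.

```powershell-interactive
$id = "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Cache/Redis/myCache"

# Discover which metric names are available for this resource.
Get-AzMetricDefinition -ResourceId $id | Select-Object -ExpandProperty Name

# Pull the last hour of Server Load at one-minute granularity.
Get-AzMetric -ResourceId $id -MetricName "serverLoad" -TimeGrain 00:01:00 `
    -StartTime (Get-Date).AddHours(-1) -EndTime (Get-Date)
```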
azure-cache-for-redis | Cache How To Premium Persistence | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-premium-persistence.md | -# Configure data persistence for a Premium Azure Cache for Redis instance +# Configure data persistence for an Azure Cache for Redis instance -[Redis persistence](https://redis.io/topics/persistence) allows you to persist data stored in Redis. You can also take snapshots and back up the data. If there's a hardware failure, you load the data. The ability to persist data is a huge advantage over the Basic or Standard tiers where all the data is stored in memory. Data loss is possible if a failure occurs where Cache nodes are down. +[Redis persistence](https://redis.io/topics/persistence) allows you to persist data stored in cache instance. If there's a hardware failure, the cache instance is rehydrated with data from the persistence file when it comes back online. The ability to persist data is an important way to boost the durability of a cache instance because all cache data is stored in memory. Data loss is possible if a failure occurs when cache nodes are down. Persistence should be a key part of your [high availability and disaster recovery](cache-high-availability.md) strategy with Azure Cache for Redis. -> [!IMPORTANT] +> [!WARNING] >-> Check to see if your storage account has soft delete enabled before using the data persistence feature. Using data persistence with soft delete causes very high storage costs. For more information, see [should I enable soft delete?](#how-frequently-does-rdb-and-aof-persistence-write-to-my-blobs-and-should-i-enable-soft-delete). +> If you are using persistence on the Premium tier, check to see if your storage account has soft delete enabled before using the data persistence feature. Using data persistence with soft delete causes very high storage costs. For more information, see [should I enable soft delete?](#how-frequently-does-rdb-and-aof-persistence-write-to-my-blobs-and-should-i-enable-soft-delete). > -Azure Cache for Redis offers Redis persistence using the Redis database (RDB) and Append only File (AOF): +## Scope of availability -- **RDB persistence** - When you use RDB persistence, Azure Cache for Redis persists a snapshot of your cache in a binary format. The snapshot is saved in an Azure Storage account. The configurable backup frequency determines how often to persist the snapshot. If a catastrophic event occurs that disables both the primary and replica cache, the cache is reconstructed using the most recent snapshot. Learn more about the [advantages](https://redis.io/topics/persistence#rdb-advantages) and [disadvantages](https://redis.io/topics/persistence#rdb-disadvantages) of RDB persistence.-- **AOF persistence** - When you use AOF persistence, Azure Cache for Redis saves every write operation to a log. The log is saved at least once per second into an Azure Storage account. If a catastrophic event occurs that disables both the primary and replica cache, the cache is reconstructed using the stored write operations. 
Learn more about the [advantages](https://redis.io/topics/persistence#aof-advantages) and [disadvantages](https://redis.io/topics/persistence#aof-disadvantages) of AOF persistence.+|Tier | Basic, Standard | Premium |Enterprise, Enterprise Flash | +||||| +|Available | No | Yes | Yes (preview) | -Azure Cache for Redis persistence features are intended to be used to restore data to the same cache after data loss and the RDB/AOF persisted data files can't be imported to a new cache. +## Types of data persistence in Redis -To move data across caches, use the Import/Export feature. For more information, see [Import and Export data in Azure Cache for Redis](cache-how-to-import-export-data.md). +You have two options for persistence with Azure Cache for Redis: the _Redis database_ (RDB) format and _Append only File_ (AOF) format: -To generate any backups of data that can be added to a new cache, you can write automated scripts using PowerShell or CLI to export data periodically. +- _RDB persistence_ - When you use RDB persistence, Azure Cache for Redis persists a snapshot of your cache in a binary format. The snapshot is saved in an Azure Storage account. The configurable backup frequency determines how often to persist the snapshot. If a catastrophic event occurs that disables both the primary and replica cache, the cache is reconstructed using the most recent snapshot. Learn more about the [advantages](https://redis.io/topics/persistence#rdb-advantages) and [disadvantages](https://redis.io/topics/persistence#rdb-disadvantages) of RDB persistence. +- _AOF persistence_ - When you use AOF persistence, Azure Cache for Redis saves every write operation to a log. The log is saved at least once per second in an Azure Storage account. If a catastrophic event occurs that disables both the primary and replica caches, the cache is reconstructed using the stored write operations. Learn more about the [advantages](https://redis.io/topics/persistence#aof-advantages) and [disadvantages](https://redis.io/topics/persistence#aof-disadvantages) of AOF persistence. -> [!NOTE] -> Persistence features are intended to be used to restore data to the same cache after data loss. -> -> - RDB/AOF persisted data files cannot be imported to a new cache. -> - Use the Import/Export feature to move data across caches. -> - Write automated scripts using PowerShell or CLI to create a backup of data that can be added to a new cache. +Azure Cache for Redis persistence features are intended to be used to restore data to the same cache after data loss. The RDB/AOF persisted data files can't be imported to a new cache. To move data across caches, use the _Import and Export_ feature. For more information, see [Import and Export data in Azure Cache for Redis](cache-how-to-import-export-data.md). -Persistence writes Redis data into an Azure Storage account that you own and manage. You configure the **New Azure Cache for Redis** on the left during cache creation. For existing premium caches, use the **Resource menu**. +To generate any backups of data that can be added to a new cache, you can write automated scripts using PowerShell or CLI that export data periodically. -> [!NOTE] +## Prerequisites and limitations ++Persistence features are intended to be used to restore data to the same cache after data loss. ++- RDB/AOF persisted data files can't be imported to a new cache. Use the [Import/Export](cache-how-to-import-export-data.md) feature instead. 
+- Persistence isn't supported with caches using [passive geo-replication](cache-how-to-geo-replication.md) or [active geo-replication](cache-how-to-active-geo-replication.md). +- On the _Premium_ tier, AOF persistence isn't supported with [multiple replicas](cache-how-to-multi-replicas.md). +- On the _Premium_ tier, data must be persisted to a storage account in the same region as the cache instance. ++## Differences between persistence in the Premium and Enterprise tiers ++On the **Premium** tier, data is persisted directly to an [Azure Storage](../storage/common/storage-introduction.md) account that you own and manage. Azure Storage automatically encrypts data when it's persisted, but you can also use your own keys for the encryption. For more information, see [Customer-managed keys for Azure Storage encryption](../storage/common/customer-managed-keys-overview.md). ++> [!WARNING] +> +> If you are using persistence on the Premium tier, check to see if your storage account has soft delete enabled before using the data persistence feature. Using data persistence with soft delete causes very high storage costs. For more information, see [should I enable soft delete?](#how-frequently-does-rdb-and-aof-persistence-write-to-my-blobs-and-should-i-enable-soft-delete). >-> Azure Storage automatically encrypts data when it is persisted. You can use your own keys for the encryption. For more information, see [Customer-managed keys with Azure Key Vault](../storage/common/storage-service-encryption.md). -## Set up data persistence +On the **Enterprise** and **Enterprise Flash** tiers, data is persisted to a managed disk attached directly to the cache instance. The location isn't configurable nor accessible to the user. Using a managed disk increases the performance of persistence. The disk is encrypted using Microsoft managed keys (MMK) by default, but customer managed keys (CMK) can also be used. For more information, see [managing data encryption](#managing-data-encryption). ++## How to set up data persistence using the Azure portal ++### [Using the portal (Premium tier)](#tab/premium) -1. To create a premium cache, sign in to the [Azure portal](https://portal.azure.com) and select **Create a resource**. You can create caches in the Azure portal. You can also create them using Resource Manager templates, PowerShell, or Azure CLI. For more information about creating an Azure Cache for Redis, see [Create a cache](cache-dotnet-how-to-use-azure-redis-cache.md#create-a-cache). +1. To create a Premium cache, sign in to the [Azure portal](https://portal.azure.com) and select **Create a resource**. You can create caches in the Azure portal. You can also create them using Resource Manager templates, PowerShell, or Azure CLI. For more information about creating an Azure Cache for Redis, see [Create a cache](cache-dotnet-how-to-use-azure-redis-cache.md#create-a-cache). :::image type="content" source="media/cache-how-to-premium-persistence/create-resource.png" alt-text="Screenshot that shows a form to create an Azure Cache for Redis resource."::: Persistence writes Redis data into an Azure Storage account that you own and man | Setting | Suggested value | Description | | | - | -- |- | **DNS name** | Enter a globally unique name. | The cache name must be a string between 1 and 63 characters that contain only numbers, letters, or hyphens. The name must start and end with a number or letter, and can't contain consecutive hyphens. Your cache instance's *host name* will be *\<DNS name>.redis.cache.windows.net*. 
| + | **DNS name** | Enter a globally unique name. | The cache name must be a string between 1 and 63 characters that contain only numbers, letters, or hyphens. The name must start and end with a number or letter, and can't contain consecutive hyphens. The *host name* for your cache instance's is `\<DNS name>.redis.cache.windows.net`. | | **Subscription** | Drop-down and select your subscription. | The subscription under which to create this new Azure Cache for Redis instance. | | **Resource group** | Drop-down and select a resource group, or select **Create new** and enter a new resource group name. | Name for the resource group in which to create your cache and other resources. By putting all your app resources in one resource group, you can easily manage or delete them together. |- | **Location** | Drop-down and select a location. | Select a [region](https://azure.microsoft.com/regions/) near other services that will use your cache. | + | **Location** | Drop-down and select a location. | Select a [region](https://azure.microsoft.com/regions/) near other services that use your cache. | | **Cache type** | Drop-down and select a premium cache to configure premium features. For details, see [Azure Cache for Redis pricing](https://azure.microsoft.com/pricing/details/cache/). | The pricing tier determines the size, performance, and features that are available for the cache. For more information, see [Azure Cache for Redis Overview](cache-overview.md). | 4. Select the **Networking** tab or select the **Networking** button at the bottom of the page. Persistence writes Redis data into an Azure Storage account that you own and man | Setting | Suggested value | Description | | | - | -- | | **Backup Frequency** | Drop-down and select a backup interval. Choices include **15 Minutes**, **30 minutes**, **60 minutes**, **6 hours**, **12 hours**, and **24 hours**. | This interval starts counting down after the previous backup operation successfully completes. When it elapses, a new backup starts. |- | **Storage Account** | Drop-down and select your storage account. | Choose a storage account in the same region and subscription as the cache. A **Premium Storage** account is recommended because it has higher throughput. Also, using the soft delete feature on the storage account is strongly discouraged as it leads to increased storage costs. For more information, see [Pricing and billing](../storage/blobs/soft-delete-blob-overview.md). | + | **Storage Account** | Drop-down and select your storage account. | Choose a storage account in the same region and subscription as the cache. A **Premium Storage** account is recommended because it has higher throughput. Also, we _strongly_ recommend that you disable the soft delete feature on the storage account as it leads to increased storage costs. For more information, see [Pricing and billing](../storage/blobs/soft-delete-blob-overview.md). | | **Storage Key** | Drop-down and choose either the **Primary key** or **Secondary key** to use. | If the storage key for your persistence account is regenerated, you must reconfigure the key from the **Storage Key** drop-down. | The first backup starts once the backup frequency interval elapses. Persistence writes Redis data into an Azure Storage account that you own and man | Setting | Suggested value | Description | | | - | -- |- | **First Storage Account** | Drop-down and select your storage account. | Choose a storage account in the same region and subscription as the cache. 
A **Premium Storage** account is recommended because it has higher throughput. Also, using the soft delete feature on the storage account is strongly discouraged as it leads to increased storage costs. For more information, see [Pricing and billing](../storage/blobs/soft-delete-blob-overview.md). | + | **First Storage Account** | Drop-down and select your storage account. | Choose a storage account in the same region and subscription as the cache. A **Premium Storage** account is recommended because it has higher throughput. Also, we _strongly_ recommend that you disable the soft delete feature on the storage account as it leads to increased storage costs. For more information, see [Pricing and billing](/azure/storage/blobs/soft-delete-blob-overview). | | **First Storage Key** | Drop-down and choose either the **Primary key** or **Secondary key** to use. | If the storage key for your persistence account is regenerated, you must reconfigure the key from the **Storage Key** drop-down. | | **Second Storage Account** | (Optional) Drop-down and select your secondary storage account. | You can optionally configure another storage account. If a second storage account is configured, the writes to the replica cache are written to this second storage account. | | **Second Storage Key** | (Optional) Drop-down and choose either the **Primary key** or **Secondary key** to use. | If the storage key for your persistence account is regenerated, you must reconfigure the key from the **Storage Key** drop-down. | Persistence writes Redis data into an Azure Storage account that you own and man It takes a while for the cache to create. You can monitor progress on the Azure Cache for Redis **Overview** page. When **Status** shows as **Running**, the cache is ready to use. +### [Using the portal (Enterprise tiers)](#tab/enterprise) ++1. Sign in to the [Azure portal](https://portal.azure.com) and start following the instructions in the [Enterprise tier quickstart guide](quickstart-create-redis-enterprise.md). ++1. When you reach the **Advanced** tab, select either _RDB_ or _AOF_ options in the **(PREVIEW) Data Persistence** section. ++ :::image type="content" source="media/cache-how-to-premium-persistence/cache-advanced-persistence.png" alt-text="Screenshot that shows the Enterprise tier Advanced tab and Data persistence is highlighted with a red box."::: ++1. To enable RDB persistence, select **RDB** and configure the settings. ++ | Setting | Suggested value | Description | + | | - | -- | + | **Backup Frequency** | Use the drop-down and select a backup interval. Choices include **60 Minutes**, **6 hours**, and **12 hours**. | This interval starts counting down after the previous backup operation successfully completes. When it elapses, a new backup starts. | ++1. To enable AOF persistence, select **AOF** and configure the settings. ++ | Setting | Suggested value | Description | + | | - | -- | + | **Backup Frequency** | Drop down and select a backup interval. Choices include **Write every second** and **Always write**. | The _Always write_ option will append new entries to the AOF file after every write to the cache. This choice offers the best durability but does lower cache performance. | + +1. Finish creating the cache by following the rest of the instructions in the [Enterprise tier quickstart guide](quickstart-create-redis-enterprise.md). ++> [!NOTE] +> You can add persistence to a previously created Enterprise tier cache at any time by navigating to the **Advanced settings** in the Resource menu. 
+> ++++## How to set up data persistence using PowerShell and Azure CLI ++### [Using PowerShell (Premium tier)](#tab/premium) ++The [New-AzRedisCache](/powershell/module/az.rediscache/new-azrediscache) command can be used to create a new Premium-tier cache using data persistence. See examples for [RDB persistence](/powershell/module/az.rediscache/new-azrediscache#example-5-configure-data-persistence-for-a-premium-azure-cache-for-redis) and [AOF persistence](/powershell/module/az.rediscache/new-azrediscache#example-6-configure-data-persistence-for-a-premium-azure-cache-for-redis-aof-backup-enabled) ++Existing caches can be updated using the [Set-AzRedisCache](/powershell/module/az.rediscache/set-azrediscache) command. See examples of [adding persistence to an existing cache](/powershell/module/az.rediscache/set-azrediscache#example-3-modify-azure-cache-for-redis-if-you-want-to-add-data-persistence-after-azure-redis-cache-created). +++### [Using PowerShell (Enterprise tier)](#tab/enterprise) ++The [New-AzRedisEnterpriseCache](/powershell/module/az.redisenterprisecache/new-azredisenterprisecache) command can be used to create a new Enterprise-tier cache using data persistence. Use the `RdbPersistenceEnabled`, `RdbPersistenceFrequency`, `AofPersistenceEnabled`, and `AofPersistenceFrequency` parameters to configure the persistence setup. This example creates a new E10 Enterprise tier cache using RDB persistence with one hour frequency: ++```powershell-interactive +New-AzRedisEnterpriseCache -Name "MyCache" -ResourceGroupName "MyGroup" -Location "West US" -Sku "Enterprise_E10" -RdbPersistenceEnabled -RdbPersistenceFrequency "1h" +``` ++Existing caches can be updated using the [Update-AzRedisEnterpriseCacheDatabase](/powershell/module/az.redisenterprisecache/update-azredisenterprisecachedatabase) command. This example adds RDB persistence with 12 hour frequency to an existing cache instance: ++```powershell-interactive +Update-AzRedisEnterpriseCacheDatabase -Name "MyCache" -ResourceGroupName "MyGroup" -RdbPersistenceEnabled -RdbPersistenceFrequency "12h" +``` ++++### [Using Azure CLI (Premium tier)](#tab/premium) ++The [az redis create](/cli/azure/redis#az-redis-create) command can be used to create a new Premium-tier cache using data persistence. For instance: ++```azurecli +az redis create --location westus2 --name MyRedisCache --resource-group MyResourceGroup --sku Premium --vm-size p1 --redis-configuration @"config_rdb.json" +``` ++Existing caches can be updated using the [az redis update](/cli/azure/redis#az-redis-update) command. For instance: ++```azurecli +az redis update --name MyRedisCache --resource-group MyResourceGroup --set "redisConfiguration.rdb-storage-connection-string"="BlobEndpoint=https//..." "redisConfiguration.rdb-backup-enabled"="true" "redisConfiguration.rdb-backup-frequency"="15" "redisConfiguration.rdb-backup-max-snapshot-count"="1" +``` ++### [Using Azure CLI (Enterprise tier)](#tab/enterprise) ++The [az redisenterprise create](/cli/azure/redisenterprise#az-redisenterprise-create) command can be used to create a new Enterprise-tier cache using data persistence. Use the `rdb-enabled`, `rdb-frequency`, `aof-enabled`, and `aof-frequency` parameters to configure the persistence setup. 
This example creates a new E10 Enterprise tier cache using RDB persistence with one hour frequency: ++```azurecli +az redisenterprise create --cluster-name "cache1" --resource-group "rg1" --location "East US" --sku "Enterprise_E10" --persistence rdb-enabled=true rdb-frequency="1h" +``` ++Existing caches can be updated using the [az redisenterprise update](/cli/azure/redisenterprise#az-redisenterprise-update) command. This example adds RDB persistence with 12 hour frequency to an existing cache instance: ++```azurecli +az redisenterprise database update --cluster-name "cache1" --resource-group "rg1" --persistence rdb-enabled=true rdb-frequency="12h" +``` ++++## Managing data encryption +Because Redis persistence creates data at rest, encrypting this data is an important concern for many users. Encryption options vary based on the tier of Azure Cache for Redis being used. ++With the **Premium** tier, data is streamed directly from the cache instance to Azure Storage when persistence is initiated. Various encryption methods can be used with Azure Storage, including Microsoft-managed keys, customer-managed keys, and customer-provided keys. For information on encryption methods, see [Azure Storage encryption for data at rest](../storage/common/storage-service-encryption.md). ++With the **Enterprise** and **Enterprise Flash** tiers, data is stored on a managed disk mounted to the cache instance. By default, the disk holding the persistence data, and the OS disk are encrypted using Microsoft-managed keys. A customer-managed key (CMK) can also be used to control data encryption. See [Encryption on Enterprise tier caches](cache-how-to-encryption.md) for instructions. + ## Persistence FAQ The following list contains answers to commonly asked questions about Azure Cache for Redis persistence. The following list contains answers to commonly asked questions about Azure Cach ### AOF persistence - [When should I use a second storage account?](#when-should-i-use-a-second-storage-account)-- [Does AOF persistence affect throughout, latency, or performance of my cache?](#does-aof-persistence-affect-throughout-latency-or-performance-of-my-cache)+- [Does AOF persistence affect throughput, latency, or performance of my cache?](#does-aof-persistence-affect-throughput-latency-or-performance-of-my-cache) - [How can I remove the second storage account?](#how-can-i-remove-the-second-storage-account) - [What is a rewrite and how does it affect my cache?](#what-is-a-rewrite-and-how-does-it-affect-my-cache) - [What should I expect when scaling a cache with AOF enabled?](#what-should-i-expect-when-scaling-a-cache-with-aof-enabled) The following list contains answers to commonly asked questions about Azure Cach ### Can I enable persistence on a previously created cache? -Yes, Redis persistence can be configured both at cache creation and on existing premium caches. +Yes, persistence can be configured both at cache creation and on existing Premium, Enterprise, or Enterprise Flash caches. ### Can I enable AOF and RDB persistence at the same time? No, you can enable RDB or AOF, but not both at the same time. ### How does persistence work with geo-replication? -If you enable data persistence, geo-replication can't be enabled for your premium cache. +If you enable data persistence, geo-replication can't be enabled for your cache. ### Which persistence model should I choose? 
AOF persistence saves every write to a log, which has a significant effect on th - Learn more about the [advantages](https://redis.io/topics/persistence#rdb-advantages) and [disadvantages](https://redis.io/topics/persistence#rdb-disadvantages) of RDB persistence. - Learn more about the [advantages](https://redis.io/topics/persistence#aof-advantages) and [disadvantages](https://redis.io/topics/persistence#aof-disadvantages) of AOF persistence. -For more information on performance when using AOF persistence, see [Does AOF persistence affect throughout, latency, or performance of my cache?](#does-aof-persistence-affect-throughout-latency-or-performance-of-my-cache) +For more information on performance when using AOF persistence, see [Does AOF persistence affect throughput, latency, or performance of my cache?](#does-aof-persistence-affect-throughput-latency-or-performance-of-my-cache) ++### Does AOF persistence affect throughput, latency, or performance of my cache? ++AOF persistence does affect throughput. AOF runs on both the primary and replica processes, so you see higher CPU and Server Load for a cache with AOF persistence than for an identical cache without AOF persistence. AOF offers the best consistency with the data in memory because each write and delete is persisted with only a few seconds of delay. The trade-off is that AOF is more compute intensive. ++As long as CPU and Server Load are both less than 90%, there's a throughput penalty, but the cache otherwise operates normally. Above 90% CPU and Server Load, the throughput penalty can get much higher, and the latency of all commands processed by the cache increases. This is because AOF persistence runs on both the primary and replica processes, increasing the load on the node in use and putting persistence on the critical path of data. ### What happens if I've scaled to a different size and a backup is restored that was made before the scaling operation? For both RDB and AOF persistence: - If you've scaled to a larger size, there's no effect. - If you've scaled to a smaller size, and you have a custom [databases](cache-configure.md#databases) setting that is greater than the [databases limit](cache-configure.md#databases) for your new size, data in those databases isn't restored. For more information, see [Is my custom databases setting affected during scaling?](cache-how-to-scale.md#is-my-custom-databases-setting-affected-during-scaling)-- If you've scaled to a smaller size, and there isn't enough room in the smaller size to hold all of the data from the last backup, keys are evicted during the restore process. Typically, keys are evicted using the [allkeys-lru](https://redis.io/topics/lru-cache) eviction policy.+- If you've scaled to a smaller size, and there isn't enough room in the smaller size to hold all of the data from the last backup, keys are evicted during the restore process. Typically, keys are evicted using the [allkeys-lru](https://redis.io/topics/lru-cache) eviction policy. ### Can I use the same storage account for persistence across two different caches? Yes, you can use the same storage account for persistence across two different caches. -### Will I be charged for the storage being used in Data Persistence? +### Will I be charged for the storage being used in data persistence? -Yes, you'll be charged for the storage being used as per the pricing model of the storage account being used. 
+- For **Premium** caches, you're charged for the storage being used per the pricing model of the storage account being used. +- For **Enterprise** and **Enterprise Flash** caches, you aren't charged for the managed disk storage. It's included in the price. ### How frequently does RDB and AOF persistence write to my blobs, and should I enable soft delete? -Enabling soft delete on storage accounts is strongly discouraged when used with Azure Cache for Redis data persistence. RDB and AOF persistence can write to your blobs as frequently as every hour, every few minutes, or every second. Also, enabling soft delete on a storage account means Azure Cache for Redis can't minimize storage costs by deleting the old backup data. Soft delete quickly becomes expensive with the typical data sizes of a cache and write operations every second. For more information on soft delete costs, see [Pricing and billing](../storage/blobs/soft-delete-blob-overview.md). +We recommend that you avoid enabling soft delete on storage accounts that are used with Azure Cache for Redis data persistence on the Premium tier. RDB and AOF persistence can write to your blobs as frequently as every hour, every few minutes, or every second. Also, enabling soft delete on a storage account means Azure Cache for Redis can't minimize storage costs by deleting the old backup data. ++Soft delete quickly becomes expensive with the typical data sizes of a cache that also performs write operations every second. For more information on soft delete costs, see [Pricing and billing](../storage/blobs/soft-delete-blob-overview.md). ### Can I change the RDB backup frequency after I create the cache? -Yes, you can change the backup frequency for RDB persistence on the **Data persistence** on the left. For instructions, see Configure Redis persistence. +Yes, you can change the backup frequency for RDB persistence using the Azure portal, CLI, or PowerShell. ### Why is there more than 60 minutes between backups when I have an RDB backup frequency of 60 minutes? The RDB persistence backup frequency interval doesn't start until the previous b ### What happens to the old RDB backups when a new backup is made? -All RDB persistence backups, except for the most recent one, are automatically deleted. This deletion might not happen immediately, but older backups aren't persisted indefinitely. If soft delete is turned on for your storage account, the soft delete setting applies and existing backups continue to reside in the soft delete state. +All RDB persistence backups, except for the most recent one, are automatically deleted. This deletion might not happen immediately, but older backups aren't persisted indefinitely. If you're using the Premium tier for persistence, and soft delete is turned on for your storage account, the soft delete setting applies, and existing backups continue to reside in the soft delete state. ### When should I use a second storage account? -Use a second storage account for AOF persistence when you believe you've higher than expected set operations on the cache. Setting up the secondary storage account helps ensure your cache doesn't reach storage bandwidth limits. +Use a second storage account for AOF persistence when you think you have higher than expected set operations on the cache. Setting up the secondary storage account helps ensure your cache doesn't reach storage bandwidth limits. This option is only available for Premium tier caches. -### Does AOF persistence affect throughout, latency, or performance of my cache?
-AOF persistence affects throughput by about 15% – 20% when the cache is below maximum load (CPU and Server Load both under 90%). There shouldn't be latency issues when the cache is within these limits. However, the cache does reach these limits sooner with AOF enabled. ### How can I remove the second storage account? -You can remove the AOF persistence secondary storage account by setting the second storage account to be the same as the first storage account. For existing caches, the **Data persistence** on the left is accessed from the **Resource menu** for your cache. To disable AOF persistence, select **Disabled**. +You can remove the AOF persistence secondary storage account by setting the second storage account to be the same as the first storage account. For existing caches, access **Data persistence** from the **Resource menu** for your cache. To disable AOF persistence, select **Disabled**. ### What is a rewrite and how does it affect my cache? -When the AOF file becomes large enough, a rewrite is automatically queued on the cache. The rewrite resizes the AOF file with the minimal set of operations needed to create the current data set. During rewrites, you can expect to reach performance limits sooner, especially when dealing with large datasets. Rewrites occur less often as the AOF file becomes larger, but will take a significant amount of time when it happens. +When the AOF file becomes large enough, a rewrite is automatically queued on the cache. The rewrite resizes the AOF file with the minimal set of operations needed to create the current data set. During rewrites, you can expect to reach performance limits sooner, especially when dealing with large datasets. Rewrites occur less often as the AOF file becomes larger, but take a significant amount of time when they happen. ### What should I expect when scaling a cache with AOF enabled? -If the AOF file at the time of scaling is large, then expect the scale operation to take longer than expected because it will be reloading the file after scaling has finished. +If the AOF file at the time of scaling is large, then expect the scale operation to take longer than expected because it reloads the file after scaling has finished. For more information on scaling, see [What happens if I've scaled to a different size and a backup is restored that was made before the scaling operation?](#what-happens-if-ive-scaled-to-a-different-size-and-a-backup-is-restored-that-was-made-before-the-scaling-operation) ### How is my AOF data organized in storage? -Data stored in AOF files is divided into multiple page blobs per node to increase performance of saving the data to storage. The following table displays how many page blobs are used for each pricing tier: +When you use the Premium tier, data stored in AOF files is divided into multiple page blobs per node to increase performance of saving the data to storage. The following table displays how many page blobs are used for each pricing tier: | Premium tier | Blobs | |--|-| When clustering is enabled, each shard in the cache has its own set of page blob After a rewrite, two sets of AOF files exist in storage. Rewrites occur in the background and append to the first set of files. Set operations, sent to the cache during the rewrite, append to the second set. A backup is temporarily stored during rewrites if there's a failure. The backup is promptly deleted after a rewrite finishes.
If soft delete is turned on for your storage account, the soft delete setting applies and existing backups continue to stay in the soft delete state. -### Will having firewall exceptions on the storage account affect persistence +### Will having firewall exceptions on the storage account affect persistence? -Using managed identity adds the cache instance to the [trusted services list](../storage/common/storage-network-security.md?tabs=azure-portal), making firewall exceptions easier to carry out. If you aren't using managed identity and instead authorizing to a storage account using a key, then having firewall exceptions on the storage account tends to break the persistence process. +Using managed identity adds the cache instance to the [trusted services list](../storage/common/storage-network-security.md?tabs=azure-portal), making firewall exceptions easier to carry out. If you aren't using managed identity and instead authorizing to a storage account using a key, then having firewall exceptions on the storage account tends to break the persistence process. This only applies to persistence in the Premium tier. ### Can I have AOF persistence enabled if I have more than one replica? -No, you can't use Append-only File (AOF) persistence with multiple replicas (more than one replica). +With the Premium tier, you can't use Append-only File (AOF) persistence with multiple replicas. In the Enterprise and Enterprise Flash tiers, replica architecture is more complicated, but AOF persistence is supported when Enterprise caches are used in zone redundant deployment. ### How do I check if soft delete is enabled on my storage account? |
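A minimal sketch of one way to check this, assuming the Azure CLI and placeholder storage account and resource group names (the article doesn't prescribe a specific method):

```azurecli
# Show the blob soft delete (delete retention) policy for a storage account.
# "enabled": true in the output means soft delete is turned on.
az storage account blob-service-properties show \
    --account-name mystorageaccount \
    --resource-group myGroup \
    --query deleteRetentionPolicy
```

If soft delete turns out to be enabled on an account that backs Premium-tier persistence, weigh the storage cost implications described earlier before leaving it on.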
azure-cache-for-redis | Cache How To Scale | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-scale.md | -Azure Cache for Redis has different cache offerings that provide flexibility in the choice of cache size and features. For a Basic, Standard or Premium cache, you can change its size and tier after creating it to match your application needs. This article shows you how to scale your cache using the Azure portal, and tools such as Azure PowerShell, and Azure CLI. +Azure Cache for Redis has different tier offerings that provide flexibility in the choice of cache size and features. Through scaling, you can change the size, tier, and number of nodes after creating a cache instance to match your application needs. This article shows you how to scale your cache using the Azure portal, plus tools such as Azure PowerShell and Azure CLI. -## When to scale +## Types of scaling -You can use the [monitoring](cache-how-to-monitor.md) features of Azure Cache for Redis to monitor the health and performance of your cache. Use that information determine when to scale the cache. +There are fundamentally two ways to scale an Azure Cache for Redis Instance: -You can monitor the following metrics to help determine if you need to scale. +- _Scaling up_ increases the size of the Virtual Machine (VM) running the Redis server, adding more memory, Virtual CPUs (vCPUs), and network bandwidth. Scaling up is also called _vertical scaling_. The opposite of scaling up is _Scaling down_. -- Redis Server Load- - Redis server is a single threaded process. High Redis server load means that the server is unable to keep pace with the requests from all the client connections. In such situations, it helps to enable clustering or increase shard count so overhead functions are distributed across multiple Redis processes. Clustering and larger shard counts distribute TLS encryption and decryption, and distribute TLS connection and disconnection. - - For more information, see [Set up clustering](cache-how-to-premium-clustering.md#set-up-clustering). -- Memory Usage- - High memory usage indicates that your data size is too large for the current cache size. Consider scaling to a cache size with larger memory. -- Client connections- - Each cache size has a limit to the number of client connections it can support. If your client connections are close to the limit for the cache size, consider scaling up to a larger tier. Scaling out using clustering does not increase the number of supported client connections. - - For more information on connection limits by cache size, see [Azure Cache for Redis Pricing](https://azure.microsoft.com/pricing/details/cache/). -- Network Bandwidth- - If the Redis server exceeds the available bandwidth, clients requests could time out because the server can't push data to the client fast enough. Check "Cache Read" and "Cache Write" metrics to see how much server-side bandwidth is being used. If your Redis server is exceeding available network bandwidth, you should consider scaling up to a larger cache size with higher network bandwidth. - - For more information on network available bandwidth by cache size, see [Azure Cache for Redis planning FAQs](./cache-planning-faq.yml). +- _Scaling out_ divides the cache instance into more nodes of the same size, increasing memory, vCPUs, and network bandwidth through parallelization. Scaling out is also referred to as _horizontal scaling_ or _sharding_. The opposite of scaling out is **Scaling in**. 
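As a rough, hedged illustration of the difference (placeholder names; the exact property names for the generic `--set` syntax can vary by CLI version, and tier-specific support for each operation is covered later in this article):

```azurecli
# Scale up (vertical): move an existing cache to a larger size within its tier
az redis update --name myCache --resource-group myGroup --set "sku.capacity"="2"

# Scale out (horizontal): change the shard count on a clustered Premium cache
az redis update --name myCache --resource-group myGroup --set shardCount=3
```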
In the Redis community, scaling out is frequently called [_clustering_](https://redis.io/docs/management/scaling/). -If you determine your cache is no longer meeting your application's requirements, you can scale to an appropriate cache pricing tier for your application. You can choose a larger or smaller cache to match your needs. -For more information on determining the cache pricing tier to use, see [Choosing the right tier](cache-overview.md#choosing-the-right-tier) and [Azure Cache for Redis planning FAQs](./cache-planning-faq.yml). +## Scope of availability -## Scale a cache +|Tier | Basic and Standard | Premium | Enterprise and Enterprise Flash | +||||-| +|Scale Up | Yes | Yes | Yes (preview) | +|Scale Down | Yes | Yes | No | +|Scale Out | No | Yes | Yes (preview) | +|Scale In | No | Yes | No | -1. To scale your cache, [browse to the cache](cache-configure.md#configure-azure-cache-for-redis-settings) in the [Azure portal](https://portal.azure.com) and select **Scale** on the left. +## When to scale - :::image type="content" source="media/cache-how-to-scale/scale-a-cache.png" alt-text="scale on the resource menu"::: +You can use the [monitoring](cache-how-to-monitor.md) features of Azure Cache for Redis to monitor the health and performance of your cache. Use that information to determine when to scale the cache. -1. Choose a pricing tier on the right and then choose **Select**. - - :::image type="content" source="media/cache-how-to-scale/select-a-tier.png" alt-text="Azure Cache for Redis tiers"::: +You can monitor the following metrics to determine if you need to scale. ++- **Redis Server Load** + - High Redis server load means that the server is unable to keep pace with requests from all the clients. Because a Redis server is a single threaded process, it's typically more helpful to _scale out_ rather than _scale up_. Scaling out by enabling clustering helps distribute overhead functions across multiple Redis processes. Scaling out also helps distribute TLS encryption/decryption and connection/disconnection, speeding up cache instances using TLS. + - Scaling up can still be helpful in reducing server load because background tasks can take advantage of the more vCPUs and free up the thread for the main Redis server process. + - The Enterprise and Enterprise Flash tiers use Redis Enterprise rather than open source Redis. One of the advantages of these tiers is that the Redis server process can take advantage of multiple vCPUs. Because of that, both scaling up and scaling out in these tiers can be helpful in reducing server load. For more information, see [Best Practices for the Enterprise and Enterprise Flash tiers of Azure Cache for Redis](cache-best-practices-enterprise-tiers.md). + - For more information, see [Set up clustering](cache-how-to-premium-clustering.md#set-up-clustering). +- **Memory Usage** + - High memory usage indicates that your data size is too large for the current cache size. Consider scaling to a cache size with larger memory. Either _scaling up_ or _scaling out_ is effective here. +- **Client connections** + - Each cache size has a limit to the number of client connections it can support. If your client connections are close to the limit for the cache size, consider _scaling up_ to a larger tier. _Scaling out_ doesn't increase the number of supported client connections. + - For more information on connection limits by cache size, see [Azure Cache for Redis Pricing](https://azure.microsoft.com/pricing/details/cache/). 
+- **Network Bandwidth** + - If the Redis server exceeds the available bandwidth, client requests could time out because the server can't push data to the client fast enough. Check "Cache Read" and "Cache Write" metrics to see how much server-side bandwidth is being used. If your Redis server is exceeding available network bandwidth, you should consider scaling out or scaling up to a larger cache size with higher network bandwidth. + - For Enterprise tier caches using the _Enterprise cluster policy_, scaling out doesn't increase network bandwidth. + - For more information on network available bandwidth by cache size, see [Azure Cache for Redis planning FAQs](./cache-planning-faq.yml). ++For more information on determining the cache pricing tier to use, see [Choosing the right tier](cache-overview.md#choosing-the-right-tier) and [Azure Cache for Redis planning FAQs](./cache-planning-faq.yml). > [!NOTE]-> Scaling is currently not available with Enterprise Tier. +> For more information on how to optimize the scaling process, see the [best practices for scaling guide](cache-best-practices-scale.md). > -You can scale to a different pricing tier with the following restrictions: ++## Prerequisites/limitations of scaling Azure Cache for Redis ++You can scale up/down to a different pricing tier with the following restrictions: - You can't scale from a higher pricing tier to a lower pricing tier.+ - You can't scale from an **Enterprise** or **Enterprise Flash** cache down to any other tier. - You can't scale from a **Premium** cache down to a **Standard** or a **Basic** cache. - You can't scale from a **Standard** cache down to a **Basic** cache. - You can scale from a **Basic** cache to a **Standard** cache but you can't change the size at the same time. If you need a different size, you can later do a scaling operation to the size you want. - You can't scale from a **Basic** cache directly to a **Premium** cache. First, scale from **Basic** to **Standard** in one scaling operation, and then from **Standard** to **Premium** in the next scaling operation. - You can't scale from a larger size down to the **C0 (250 MB)** size. However, you can scale down to any other size within the same pricing tier. For example, you can scale down from C5 Standard to C1 Standard.+- You can't scale from a **Premium**, **Standard**, or **Basic** cache up to an **Enterprise** or **Enterprise Flash** cache. +- You can't scale between **Enterprise** and **Enterprise Flash**. -While the cache is scaling to the new tier, a **Scaling Redis Cache** notification is displayed. +You can scale out/in with the following restrictions: +- _Scale out_ is only supported on the **Premium**, **Enterprise**, and **Enterprise Flash** tiers. +- _Scale in_ is only supported on the **Premium** tier. +- On the **Premium** tier, clustering must be enabled first before scaling in or out. +- Only the **Enterprise** and **Enterprise Flash** tiers can scale up and scale out simultaneously. -When scaling is complete, the status changes from **Scaling** to **Running**. ## How to scale - Basic, Standard, and Premium tiers -## How to automate a scaling operation ### [Scale up and down with Basic, Standard, and Premium](#tab/scale-up-and-down-with-basic-standard-and-premium) -You can scale your cache instances in the Azure portal. And, you can scale using PowerShell cmdlets, Azure CLI, and by using the Microsoft Azure Management Libraries (MAML).
+#### Scale up and down using the Azure portal -When you scale a cache up or down, both `maxmemory-reserved` and `maxfragmentationmemory-reserved` settings automatically scale in proportion to the cache size. For example, if `maxmemory-reserved` is set to 3 GB on a 6-GB cache, and you scale to 12-GB cache, the settings automatically get updated to 6 GB during scaling. When you scale down, the reverse happens. +1. To scale your cache, [browse to the cache](cache-configure.md#configure-azure-cache-for-redis-settings) in the [Azure portal](https://portal.azure.com) and select **Scale** from the Resource menu. -> [!NOTE] -> When you scale a cache up or down programmatically, any `maxmemory-reserved` or `maxfragmentationmemory-reserved` are ignored as part of the update request. Only your scaling change is honored. You can update these memory settings after the scaling operation has completed. + :::image type="content" source="media/cache-how-to-scale/scale-a-cache.png" alt-text="Screenshot showing Scale on the resource menu."::: +1. Choose a pricing tier in the working pane and then choose **Select**. + + :::image type="content" source="media/cache-how-to-scale/select-a-tier.png" alt-text="Screenshot showing the Azure Cache for Redis tiers."::: -- [Scale using PowerShell](#scale-using-powershell)-- [Scale using Azure CLI](#scale-using-azure-cli)-- [Scale using MAML](#scale-using-maml)+1. While the cache is scaling to the new tier, a **Scaling Redis Cache** notification is displayed. -### Scale using PowerShell + :::image type="content" source="media/cache-how-to-scale/scaling-notification.png" alt-text="Screenshot showing the notification of scaling."::: ++1. When scaling is complete, the status changes from **Scaling** to **Running**. ++ > [!NOTE] + > When you scale a cache up or down using the portal, both `maxmemory-reserved` and `maxfragmentationmemory-reserved` settings automatically scale in proportion to the cache size. + > For example, if `maxmemory-reserved` is set to 3 GB on a 6-GB cache, and you scale to 12-GB cache, the settings automatically get updated to 6 GB during scaling. + > When you scale down, the reverse happens. + > ++#### Scale up and down using PowerShell [!INCLUDE [updated-for-az](../../includes/updated-for-az.md)] -You can scale your Azure Cache for Redis instances with PowerShell by using the [Set-AzRedisCache](/powershell/module/az.rediscache/set-azrediscache) cmdlet when the `Size`, `Sku`, or `ShardCount` properties are modified. The following example shows how to scale a cache named `myCache` to a 2.5-GB cache. +You can scale your Azure Cache for Redis instances with PowerShell by using the [Set-AzRedisCache](/powershell/module/az.rediscache/set-azrediscache) cmdlet when the `Size` or `Sku` properties are modified. The following example shows how to scale a cache named `myCache` to a 6-GB cache in the same tier. ```powershell- Set-AzRedisCache -ResourceGroupName myGroup -Name myCache -Size 2.5GB + Set-AzRedisCache -ResourceGroupName myGroup -Name myCache -Size 6GB +``` +For more information on scaling with PowerShell, see [To scale an Azure Cache for Redis using PowerShell](cache-how-to-manage-redis-cache-powershell.md#scale). ++#### Scale up and down using Azure CLI ++To scale your Azure Cache for Redis instances using Azure CLI, call the [az redis update](/cli/azure/redis#az-redis-update) command.
Use the `sku.capacity` property to scale within a tier, for example from a Standard C0 to Standard C1 cache: ++```azurecli +az redis update --name myCache --resource-group myGroup --set "sku.capacity"="2" +``` ++Use the `sku.name` and `sku.family` properties to scale up to a different tier, for instance from a Standard C1 cache to a Premium P1 cache: ++```azurecli +az redis update --name myCache --resource-group myGroup --set "sku.name"="Premium" "sku.capacity"="1" "sku.family"="P" +``` ++For more information on scaling with Azure CLI, see [Change settings of an existing Azure Cache for Redis](cache-manage-cli.md#scale). ++> [!NOTE] +> When you scale a cache up or down programmatically (e.g. using PowerShell or Azure CLI), any `maxmemory-reserved` or `maxfragmentationmemory-reserved` are ignored as part of the update request. Only your scaling change is honored. You can update these memory settings after the scaling operation has completed. +> ++### [Scale out and in - Premium only](#tab/scale-out-and-inpremium-only) ++#### Create a new cache that is scaled out using clustering ++Clustering is enabled during cache creation from the working pane, when you create a new Azure Cache for Redis. ++1. Use the [_Create an open-source Redis cache_ quickstart guide](quickstart-create-redis.md) to start creating a new cache using the Azure portal. ++1. In the **Advanced** tab for a **premium** cache instance, configure the settings for non-TLS port, clustering, and data persistence. To enable clustering, select **Enable**. ++ :::image type="content" source="media/cache-how-to-premium-clustering/redis-cache-clustering.png" alt-text="Clustering toggle."::: ++ You can have up to 10 shards in the cluster. After selecting **Enable**, slide the slider or type a number between 1 and 10 for **Shard count** and select **OK**. ++ Each shard is a primary/replica cache pair managed by Azure. The total size of the cache is calculated by multiplying the number of shards by the cache size selected in the pricing tier. ++ :::image type="content" source="media/cache-how-to-premium-clustering/redis-cache-clustering-selected.png" alt-text="Clustering toggle selected."::: ++ Once the cache is created, you connect to it and use it just like a nonclustered cache. Redis distributes the data throughout the cache shards. If diagnostics is [enabled](cache-how-to-monitor.md#use-a-storage-account-to-export-cache-metrics), metrics are captured separately for each shard, and can be [viewed](cache-how-to-monitor.md) in Azure Cache for Redis using the Resource menu. ++1. Finish creating the cache using the [quickstart guide](quickstart-create-redis.md). ++It takes a while for the cache to create. You can monitor progress on the Azure Cache for Redis **Overview** page. When **Status** shows as **Running**, the cache is ready to use. ++> [!NOTE] +> +> There are some minor differences required in your client application when clustering is configured. For more information, see [Do I need to make any changes to my client application to use clustering?](#do-i-need-to-make-any-changes-to-my-client-application-to-use-clustering) +> +++For sample code on working with clustering with the StackExchange.Redis client, see the [clustering.cs](https://github.com/rustd/RedisSamples/blob/master/HelloWorld/Clustering.cs) portion of the [Hello World](https://github.com/rustd/RedisSamples/tree/master/HelloWorld) sample.
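If you prefer to create the clustered cache from the command line instead of the portal, a minimal sketch with the Azure CLI looks like the following (cache name, resource group, and location are placeholders; `--shard-count` sets the initial number of shards):

```azurecli
# Create a Premium P1 cache with clustering enabled and three shards
az redis create \
    --name myCache \
    --resource-group myGroup \
    --location eastus \
    --sku Premium \
    --vm-size p1 \
    --shard-count 3
```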
++#### Scale a running Premium cache in or out ++To change the cluster size on a premium cache that you created earlier and that is already running with clustering enabled, select **Cluster size** from the Resource menu. +++To change the cluster size, use the slider or type a number between 1 and 10 in the **Shard count** text box. Then, select **OK** to save. ++Increasing the cluster size increases max throughput and cache size. Increasing the cluster size doesn't increase the maximum number of connections available to clients. ++#### Scale out and in using PowerShell ++You can scale out your Azure Cache for Redis instances with PowerShell by using the [Set-AzRedisCache](/powershell/module/az.rediscache/set-azrediscache) cmdlet when the `ShardCount` property is modified. The following example shows how to scale a cache named `myCache` out to use three shards (that is, scale out by a factor of three). ++```powershell + Set-AzRedisCache -ResourceGroupName myGroup -Name myCache -ShardCount 3 ``` For more information on scaling with PowerShell, see [To scale an Azure Cache for Redis using PowerShell](cache-how-to-manage-redis-cache-powershell.md#scale). -### Scale using Azure CLI +#### Scale out and in using Azure CLI ++To scale your Azure Cache for Redis instances using Azure CLI, call the [az redis update](/cli/azure/redis#az-redis-update) command and use the `shard-count` property. The following example shows how to scale out a cache named `myCache` to use three shards (that is, scale out by a factor of three). -To scale your Azure Cache for Redis instances using Azure CLI, call the `azure rediscache set` command and pass in the configuration changes you want that include a new size, sku, or cluster size, depending on the scaling operation you wish. +```azurecli +az redis update --name myCache --resource-group myGroup --set shard-count=3 +``` For more information on scaling with Azure CLI, see [Change settings of an existing Azure Cache for Redis](cache-manage-cli.md#scale). -### Scale using MAML +> [!NOTE] +> When you scale a cache up or down programmatically (e.g. using PowerShell or Azure CLI), any `maxmemory-reserved` or `maxfragmentationmemory-reserved` are ignored as part of the update request. Only your scaling change is honored. You can update these memory settings after the scaling operation has completed. +> ++> [!NOTE] +> Scaling a cluster runs the [MIGRATE](https://redis.io/commands/migrate) command, which is an expensive command. For minimal impact, consider running this operation during non-peak hours. During the migration process, you see a spike in server load. Scaling a cluster is a long running process and the amount of time taken depends on the number of keys and size of the values associated with those keys. +> +> ++## How to scale up and out - Enterprise and Enterprise Flash tiers ++The Enterprise and Enterprise Flash tiers are able to scale up and scale out in one operation. Other tiers require separate operations for each action. ++> [!CAUTION] +> The Enterprise and Enterprise Flash tiers do not yet support _scale down_ or _scale in_ operations. +> +++### Scale using the Azure portal ++1. To scale your cache, [browse to the cache](cache-configure.md#configure-azure-cache-for-redis-settings) in the [Azure portal](https://portal.azure.com) and select **Scale** from the Resource menu. + :::image type="content" source="media/cache-how-to-scale/cache-enterprise-scale.png" alt-text="Screenshot showing Scale selected in the Resource menu for an Enterprise cache."::: ++1.
To scale up, choose a different **Cache type** and then choose **Save**. + > [!IMPORTANT] + > You can only scale up at this time. You cannot scale down. ++ :::image type="content" source="media/cache-how-to-scale/cache-enterprise-scale-up.png" alt-text="Screenshot showing the Enterprise tiers in the working pane."::: ++1. To scale out, increase the **Capacity** slider. Capacity increases in increments of two. This number reflects how many underlying Redis Enterprise nodes are being added. This number is always a multiple of two to reflect nodes being added for both primary and replica shards. + > [!IMPORTANT] + > You can only scale out, increasing capacity, at this time. You cannot scale in. ++ :::image type="content" source="media/cache-how-to-scale/cache-enterprise-capacity.png" alt-text="Screenshot showing Capacity in the working pane a red box around it."::: ++1. While the cache is scaling to the new tier, a **Scaling Redis Cache** notification is displayed. ++ :::image type="content" source="media/cache-how-to-scale/cache-enterprise-notifications.png" alt-text="Screenshot showing notification of scaling an Enterprise cache."::: + -To scale your Azure Cache for Redis instances using the [Microsoft Azure Management Libraries (MAML)](https://azure.microsoft.com/updates/management-libraries-for-net-release-announcement/), call the `IRedisOperations.CreateOrUpdate` method and pass in the new size for the `RedisProperties.SKU.Capacity`. +1. When scaling is complete, the status changes from **Scaling** to **Running**. -```csharp - static void Main(string[] args) - { - // For instructions on getting the access token, see - // https://azure.microsoft.com/documentation/articles/cache-configure/#access-keys - string token = GetAuthorizationHeader(); - TokenCloudCredentials creds = new TokenCloudCredentials(subscriptionId,token); +### Scale using PowerShell - RedisManagementClient client = new RedisManagementClient(creds); - var redisProperties = new RedisProperties(); - // To scale, set a new size for the redisSKUCapacity parameter. - redisProperties.Sku = new Sku(redisSKUName,redisSKUFamily,redisSKUCapacity); - redisProperties.RedisVersion = redisVersion; - var redisParams = new RedisCreateOrUpdateParameters(redisProperties, redisCacheRegion); - client.Redis.CreateOrUpdate(resourceGroupName,cacheName, redisParams); - } +You can scale your Azure Cache for Redis instances with PowerShell by using the [Update-AzRedisEnterpriseCache](/powershell/module/az.redisenterprisecache/update-azredisenterprisecache) cmdlet. You can modify the `Sku` property to scale the instance up. You can modify the `Capacity` property to scale out the instance. The following example shows how to scale a cache named `myCache` to an Enterprise E20 (25 GB) instance with capacity of 4. ++```powershell + Update-AzRedisEnterpriseCache -ResourceGroupName myGroup -Name myCache -Sku Enterprise_E20 -Capacity 4 ``` -For more information, see the [Manage Azure Cache for Redis using MAML](https://github.com/rustd/RedisSamples/tree/master/ManageCacheUsingMAML) sample. +### Scale using Azure CLI ++To scale your Azure Cache for Redis instances using Azure CLI, call the [az redisenterprise update](/cli/azure/redisenterprise#az-redisenterprise-update) command. You can modify the `sku` property to scale the instance up. You can modify the `capacity` property to scale out the instance. The following example shows how to scale a cache named `myCache` to an Enterprise E20 (25 GB) instance with capacity of 4. 
++```azurecli +az redisenterprise update --cluster-name "myCache" --resource-group "myGroup" --sku "Enterprise_E20" --capacity 4 +``` ## Scaling FAQ The following list contains answers to commonly asked questions about Azure Cach - [Can I scale to, from, or within a Premium cache?](#can-i-scale-to-from-or-within-a-premium-cache) - [After scaling, do I have to change my cache name or access keys?](#after-scaling-do-i-have-to-change-my-cache-name-or-access-keys) - [How does scaling work?](#how-does-scaling-work)-- [Will I lose data from my cache during scaling?](#will-i-lose-data-from-my-cache-during-scaling)+- [Do I lose data from my cache during scaling?](#do-i-lose-data-from-my-cache-during-scaling) - [Is my custom databases setting affected during scaling?](#is-my-custom-databases-setting-affected-during-scaling)-- [Will my cache be available during scaling?](#will-my-cache-be-available-during-scaling)+- [Is my cache available during scaling?](#is-my-cache-available-during-scaling) - [Are there scaling limitations with geo-replication?](#are-there-scaling-limitations-with-geo-replication) - [Operations that aren't supported](#operations-that-arent-supported) - [How long does scaling take?](#how-long-does-scaling-take) - [How can I tell when scaling is complete?](#how-can-i-tell-when-scaling-is-complete)+- [Do I need to make any changes to my client application to use clustering?](#do-i-need-to-make-any-changes-to-my-client-application-to-use-clustering) +- [How are keys distributed in a cluster?](#how-are-keys-distributed-in-a-cluster) +- [What is the largest cache size I can create?](#what-is-the-largest-cache-size-i-can-create) +- [Do all Redis clients support clustering?](#do-all-redis-clients-support-clustering) +- [How do I connect to my cache when clustering is enabled?](#how-do-i-connect-to-my-cache-when-clustering-is-enabled) +- [Can I directly connect to the individual shards of my cache?](#can-i-directly-connect-to-the-individual-shards-of-my-cache) +- [Can I configure clustering for a previously created cache?](#can-i-configure-clustering-for-a-previously-created-cache) +- [Can I configure clustering for a basic or standard cache?](#can-i-configure-clustering-for-a-basic-or-standard-cache) +- [Can I use clustering with the Redis ASP.NET Session State and Output Caching providers?](#can-i-use-clustering-with-the-redis-aspnet-session-state-and-output-caching-providers) +- [I'm getting MOVE exceptions when using StackExchange.Redis and clustering, what should I do?](#im-getting-move-exceptions-when-using-stackexchangeredis-and-clustering-what-should-i-do) +- [What is the difference between OSS Clustering and Enterprise Clustering on Enterprise-tier caches?](#what-is-the-difference-between-oss-clustering-and-enterprise-clustering-on-enterprise-tier-caches) +- [How many shards do Enterprise tier caches use?](#how-many-shards-do-enterprise-tier-caches-use) ### Can I scale to, from, or within a Premium cache? - You can't scale from a **Premium** cache down to a **Basic** or **Standard** pricing tier. - You can scale from one **Premium** cache pricing tier to another. - You can't scale from a **Basic** cache directly to a **Premium** cache. First, scale from **Basic** to **Standard** in one scaling operation, and then from **Standard** to **Premium** in a later scaling operation.+- You can't scale from a **Premium** cache to an **Enterprise** or **Enterprise Flash** cache.
- If you enabled clustering when you created your **Premium** cache, you can [change the cluster size](cache-how-to-premium-clustering.md#set-up-clustering). If your cache was created without clustering enabled, you can configure clustering at a later time. -For more information, see [How to configure clustering for a Premium Azure Cache for Redis](cache-how-to-premium-clustering.md). - ### After scaling, do I have to change my cache name or access keys? No, your cache name and keys are unchanged during a scaling operation. ### How does scaling work? -- When you scale a **Basic** cache to a different size, it's shut down and a new cache is provisioned using the new size. During this time, the cache is unavailable and all data in the cache is lost.+- When you scale a **Basic** cache to a different size, it's shut down, and a new cache is provisioned using the new size. During this time, the cache is unavailable and all data in the cache is lost. - When you scale a **Basic** cache to a **Standard** cache, a replica cache is provisioned and the data is copied from the primary cache to the replica cache. The cache remains available during the scaling process.-- When you scale a **Standard** cache to a different size or to a **Premium** cache, one of the replicas is shut down and reprovisioned to the new size and the data transferred over, and then the other replica does a failover before it's reprovisioned, similar to the process that occurs during a failure of one of the cache nodes.+- When you scale a **Standard**, **Premium**, **Enterprise**, or **Enterprise Flash** cache to a different size, one of the replicas is shut down and reprovisioned to the new size and the data transferred over, and then the other replica does a failover before it's reprovisioned, similar to the process that occurs during a failure of one of the cache nodes. - When you scale out a clustered cache, new shards are provisioned and added to the Redis server cluster. Data is then resharded across all shards. - When you scale in a clustered cache, data is first resharded and then cluster size is reduced to required shards.-- In some cases, such as scaling or migrating your cache to a different cluster, the underlying IP address of the cache can change. The DNS record for the cache changes and is transparent to most applications. However, if you use an IP address to configure the connection to your cache, or to configure NSGs, or firewalls allowing traffic to the cache, your application might have trouble connecting sometime after that the DNS record updates.+- In some cases, such as scaling or migrating your cache to a different cluster, the underlying IP address of the cache can change. The DNS record for the cache changes and is transparent to most applications. However, if you use an IP address to configure the connection to your cache, or to configure NSGs, or firewalls allowing traffic to the cache, your application might have trouble connecting sometime after the DNS record updates. -### Will I lose data from my cache during scaling? +### Do I lose data from my cache during scaling? - When you scale a **Basic** cache to a new size, all data is lost and the cache is unavailable during the scaling operation. - When you scale a **Basic** cache to a **Standard** cache, the data in the cache is typically preserved.-- When you scale a **Standard** cache to a larger size or tier, or a **Premium** cache is scaled to a larger size, all data is typically preserved. 
When you scale a Standard or Premium cache to a smaller size, data can be lost if the data size exceeds the new smaller size when it's scaled down. If data is lost when scaling down, keys are evicted using the [allkeys-lru](https://redis.io/topics/lru-cache) eviction policy.+- When you scale a **Standard**, **Premium**, **Enterprise**, or **Enterprise Flash** cache to a larger size, all data is typically preserved. When you scale a Standard or Premium cache to a smaller size, data can be lost if the data size exceeds the new smaller size when it's scaled down. If data is lost when scaling down, keys are evicted using the [allkeys-lru](https://redis.io/topics/lru-cache) eviction policy. ### Is my custom databases setting affected during scaling? If you configured a custom value for the `databases` setting during cache creati - If you're using a custom number of `databases` that exceeds the limits of the new tier, the `databases` setting is lowered to the limits of the new tier and all data in the removed databases is lost. - When you scale to a pricing tier with the same or higher `databases` limit than the current tier, your `databases` setting is kept and no data is lost. -While Standard and Premium caches have a 99.9% SLA for availability, there's no SLA for data loss. +While Standard, Premium, Enterprise, and Enterprise Flash caches have an SLA for availability, there's no SLA for data loss. -### Will my cache be available during scaling? +### Is my cache available during scaling? -- **Standard** and **Premium** caches remain available during the scaling operation. However, connection blips can occur while scaling Standard and Premium caches, and also while scaling from Basic to Standard caches. These connection blips are expected to be small and redis clients can generally re-establish their connection instantly.+- **Standard**, **Premium**, **Enterprise**, and **Enterprise Flash** caches remain available during the scaling operation. However, connection blips can occur while scaling these caches, and also while scaling from **Basic** to **Standard** caches. These connection blips are expected to be small and Redis clients can generally re-establish their connection instantly. +- For Enterprise and Enterprise Flash caches using active geo-replication, scaling only a subset of linked caches can introduce issues over time in some cases. We recommend scaling all caches in the geo-replication group together where possible. - **Basic** caches are offline during scaling operations to a different size. Basic caches remain available when scaling from **Basic** to **Standard** but might experience a small connection blip. If a connection blip occurs, Redis clients can generally re-establish their connection instantly. ### Are there scaling limitations with geo-replication? -With geo-replication configured, you might notice that you can't scale a cache or change the shards in a cluster. A geo-replication link between two caches prevents you from scaling operation or changing the number of shards in a cluster. You must unlink the cache to issue these commands. For more information, see [Configure Geo-replication](cache-how-to-geo-replication.md). +With [passive geo-replication](cache-how-to-geo-replication.md) configured, you might notice that you can't scale a cache or change the shards in a cluster. A geo-replication link between two caches prevents you from running a scaling operation or changing the number of shards in a cluster. You must unlink the cache to issue these commands.
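As a hedged sketch (cache and linked cache names are placeholders), removing a passive geo-replication link with the Azure CLI before scaling might look like this:

```azurecli
# Remove the geo-replication link so the primary cache can be scaled
az redis server-link delete \
    --name myPrimaryCache \
    --resource-group myGroup \
    --linked-server-name mySecondaryCache
```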
For more information, see [Configure Geo-replication](cache-how-to-geo-replication.md). ++With [active geo-replication](cache-how-to-active-geo-replication.md) configured, you can't scale a cache. All caches in a geo-replication group must be the same size and capacity. ### Operations that aren't supported With geo-replication configured, you might notice that you can't scale a cache - You can't scale from a **Standard** cache down to a **Basic** cache. - You can scale from a **Basic** cache to a **Standard** cache but you can't change the size at the same time. If you need a different size, you can do a scaling operation to the size you want at a later time. - You can't scale from a **Basic** cache directly to a **Premium** cache. First scale from **Basic** to **Standard** in one scaling operation, and then scale from **Standard** to **Premium** in a later operation.+- You can't scale from a **Premium** cache to an **Enterprise** or **Enterprise Flash** cache. - You can't scale from a larger size down to the **C0 (250 MB)** size. If a scaling operation fails, the service tries to revert the operation, and the cache will revert to the original size. Generally, when you scale a cache with no data, it takes approximately 20 minute ### How can I tell when scaling is complete? In the Azure portal, you can see the scaling operation in progress. When scaling is complete, the status of the cache changes to **Running**.++### Do I need to make any changes to my client application to use clustering? ++* When clustering is enabled, only database 0 is available. If your client application uses multiple databases and it tries to read or write to a database other than 0, the following exception is thrown: `Unhandled Exception: StackExchange.Redis.RedisConnectionException: ProtocolFailure on GET >` `StackExchange.Redis.RedisCommandException: Multiple databases are not supported on this server; cannot switch to database: 6` + + For more information, see [Redis Cluster Specification - Implemented subset](https://redis.io/topics/cluster-spec#implemented-subset). +* If you're using [StackExchange.Redis](https://www.nuget.org/packages/StackExchange.Redis/), you must use 1.0.481 or later. You connect to the cache using the same [endpoints, ports, and keys](cache-configure.md#properties) that you use when connecting to a cache where clustering is disabled. The only difference is that all reads and writes must be done to database 0. + + Other clients may have different requirements. See [Do all Redis clients support clustering?](#do-all-redis-clients-support-clustering) +* If your application uses multiple key operations batched into a single command, all keys must be located in the same shard. To locate keys in the same shard, see [How are keys distributed in a cluster?](#how-are-keys-distributed-in-a-cluster) +* If you're using the Redis ASP.NET Session State provider, you must use 2.0.1 or higher. See [Can I use clustering with the Redis ASP.NET Session State and Output Caching providers?](#can-i-use-clustering-with-the-redis-aspnet-session-state-and-output-caching-providers) ++> [!IMPORTANT] +> When using the Enterprise or Enterprise Flash tiers, you are given the choice of _OSS Cluster Mode_ or _Enterprise Cluster Mode_. OSS Cluster Mode is the same as clustering on the Premium tier and follows the open source clustering specification. Enterprise Cluster Mode can be less performant, but uses Redis Enterprise clustering, which doesn't require any client changes to use.
For more information, see [Clustering on Enterprise](cache-best-practices-enterprise-tiers.md#clustering-on-enterprise). +> +> ++### How are keys distributed in a cluster? ++Per the Redis documentation on [Keys distribution model](https://redis.io/topics/cluster-spec#keys-distribution-model): The key space is split into 16,384 slots. Each key is hashed and assigned to one of these slots, which are distributed across the nodes of the cluster. You can configure which part of the key is hashed to ensure that multiple keys are located in the same shard using hash tags. ++* Keys with a hash tag - if any part of the key is enclosed in `{` and `}`, only that part of the key is hashed for the purposes of determining the hash slot of a key. For example, the following three keys would be located in the same shard: `{key}1`, `{key}2`, and `{key}3` since only the `key` part of the name is hashed. For a complete list of keys hash tag specifications, see [Keys hash tags](https://redis.io/topics/cluster-spec#keys-hash-tags). +* Keys without a hash tag - the entire key name is used for hashing, resulting in a statistically even distribution across the shards of the cache. ++For best performance and throughput, we recommend distributing the keys evenly. If you're using keys with a hash tag, it's the application's responsibility to ensure the keys are distributed evenly. ++For more information, see [Keys distribution model](https://redis.io/topics/cluster-spec#keys-distribution-model), [Redis Cluster data sharding](https://redis.io/topics/cluster-tutorial#redis-cluster-data-sharding), and [Keys hash tags](https://redis.io/topics/cluster-spec#keys-hash-tags). ++For sample code about working with clustering and locating keys in the same shard with the StackExchange.Redis client, see the [clustering.cs](https://github.com/rustd/RedisSamples/blob/master/HelloWorld/Clustering.cs) portion of the [Hello World](https://github.com/rustd/RedisSamples/tree/master/HelloWorld) sample. ++### What is the largest cache size I can create? ++The largest cache size you can have is 4.5 TB. This result is a clustered F1500 cache with capacity 9. For more information, see [Azure Cache for Redis Pricing](https://azure.microsoft.com/pricing/details/cache/). ++### Do all Redis clients support clustering? ++Many client libraries support Redis clustering, but not all of them do. Check the documentation for the library you're using to verify you're using a library and version that support clustering. StackExchange.Redis is one library that does support clustering, in its newer versions. For more information on other clients, see the [Playing with the cluster](https://redis.io/topics/cluster-tutorial#playing-with-the-cluster) section of the [Redis cluster tutorial](https://redis.io/topics/cluster-tutorial). ++The Redis clustering protocol requires each client to connect to each shard directly in clustering mode, and also defines new error responses such as 'MOVED' and 'CROSSSLOT'. When you attempt to use a client library that doesn't support clustering with a cluster mode cache, the result can be many [MOVED redirection exceptions](https://redis.io/topics/cluster-spec#moved-redirection) or broken application behavior if you're doing cross-slot multi-key requests. ++> [!NOTE] +> If you're using StackExchange.Redis as your client, verify that you are using the latest version of [StackExchange.Redis](https://www.nuget.org/packages/StackExchange.Redis/) 1.0.481 or later for clustering to work correctly.
For more information on any issues with move exceptions, see [move exceptions](#im-getting-move-exceptions-when-using-stackexchangeredis-and-clustering-what-should-i-do). +> +### How do I connect to my cache when clustering is enabled? ++You can connect to your cache using the same [endpoints](cache-configure.md#properties), [ports](cache-configure.md#properties), and [keys](cache-configure.md#access-keys) that you use when connecting to a cache that doesn't have clustering enabled. Redis manages the clustering on the backend so you don't have to manage it from your client. ++### Can I directly connect to the individual shards of my cache? ++The clustering protocol requires the client to make the correct shard connections, so the client should make the shard connections for you. With that said, each shard consists of a primary/replica cache pair, collectively known as a cache instance. You can connect to these cache instances using the redis-cli utility in the [unstable](https://redis.io/download) branch of the Redis repository at GitHub. This version implements basic support when started with the `-c` switch. For more information, see [Playing with the cluster](https://redis.io/topics/cluster-tutorial#playing-with-the-cluster) on [https://redis.io](https://redis.io) in the [Redis cluster tutorial](https://redis.io/topics/cluster-tutorial). ++You need to use the `-p` switch to specify the correct port to connect to. Use the [CLUSTER NODES](https://redis.io/commands/cluster-nodes/) command to determine the exact ports used for the primary and replica nodes. The following port ranges are used: ++- For non-TLS Premium tier caches, ports are available in the `130XX` range +- For TLS enabled Premium tier caches, ports are available in the `150XX` range +- For Enterprise and Enterprise Flash caches using OSS clustering, the initial connection is through port 10000. Connecting to individual nodes can be done using ports in the 85XX range. The 85XX ports can change over time and shouldn't be hardcoded into your application. ++### Can I configure clustering for a previously created cache? ++Yes. First, ensure that your cache is premium by scaling it up. Next, you can see the cluster configuration options, including an option to enable clustering. Change the cluster size after the cache is created, or after you have enabled clustering for the first time. ++>[!IMPORTANT] +>You can't undo enabling clustering. And a cache with clustering enabled and only one shard behaves *differently* than a cache of the same size with *no* clustering. ++All Enterprise and Enterprise Flash tier caches are always clustered. ++### Can I configure clustering for a basic or standard cache? ++Clustering is only available for Premium, Enterprise, and Enterprise Flash caches. ++### Can I use clustering with the Redis ASP.NET Session State and Output Caching providers? ++* **Redis Output Cache provider** - no changes required. +* **Redis Session State provider** - to use clustering, you must use [RedisSessionStateProvider](https://www.nuget.org/packages/Microsoft.Web.RedisSessionStateProvider) 2.0.1 or higher; otherwise, an exception is thrown, which is a breaking change. For more information, see [v2.0.0 Breaking Change Details](https://github.com/Azure/aspnet-redis-providers/wiki/v2.0.0-Breaking-Change-Details). ++### I'm getting MOVE exceptions when using StackExchange.Redis and clustering, what should I do?
+If you're using StackExchange.Redis and receive `MOVE` exceptions when using clustering, ensure that you're using [StackExchange.Redis 1.1.603](https://www.nuget.org/packages/StackExchange.Redis/) or later. For instructions on configuring your .NET applications to use StackExchange.Redis, see [Configure the cache clients](cache-dotnet-how-to-use-azure-redis-cache.md#configure-the-cache-client). ++### What is the difference between OSS Clustering and Enterprise Clustering on Enterprise tier caches? ++OSS Cluster Mode is the same as clustering on the Premium tier and follows the open source clustering specification. Enterprise Cluster Mode can be less performant, but uses Redis Enterprise clustering, which doesn't require any client changes to use. For more information, see [Clustering on Enterprise](cache-best-practices-enterprise-tiers.md#clustering-on-enterprise). ++### How many shards do Enterprise tier caches use? ++Unlike Basic, Standard, and Premium tier caches, Enterprise and Enterprise Flash caches can take advantage of multiple shards on a single node. For more information, see [Sharding and CPU utilization](cache-best-practices-enterprise-tiers.md#sharding-and-cpu-utilization). ++## Next steps ++- [Configure your maxmemory-reserved setting](cache-best-practices-memory-management.md#configure-your-maxmemory-reserved-setting) +- [Best practices for scaling](cache-best-practices-scale.md) |
azure-cache-for-redis | Cache Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-overview.md | Azure Cache for Redis improves application performance by supporting common appl | | -- | | [Data cache](cache-web-app-cache-aside-leaderboard.md) | Databases are often too large to load directly into a cache. It's common to use the [cache-aside](/azure/architecture/patterns/cache-aside) pattern to load data into the cache only as needed. When the system makes changes to the data, the system can also update the cache, which is then distributed to other clients. Additionally, the system can set an expiration on data, or use an eviction policy to trigger data updates into the cache.| | [Content cache](cache-aspnet-output-cache-provider.md) | Many web pages are generated from templates that use static content such as headers, footers, banners. These static items shouldn't change often. Using an in-memory cache provides quick access to static content compared to backend datastores. This pattern reduces processing time and server load, allowing web servers to be more responsive. It can allow you to reduce the number of servers needed to handle loads. Azure Cache for Redis provides the Redis Output Cache Provider to support this pattern with ASP.NET.|-| [Session store](cache-aspnet-session-state-provider.md) | This pattern is commonly used with shopping carts and other user history data that a web application might associate with user cookies. Storing too much in a cookie can have a negative effect on performance as the cookie size grows and is passed and validated with every request. A typical solution uses the cookie as a key to query the data in a database. Using an in-memory cache, like Azure Cache for Redis, to associate information with a user is much faster than interacting with a full relational database. | +| [Session store](cache-aspnet-session-state-provider.md) | This pattern is commonly used with shopping carts and other user history data that a web application might associate with user cookies. Storing too much in a cookie can have a negative effect on performance as the cookie size grows and is passed and validated with every request. A typical solution uses the cookie as a key to query the data in a database. When you use an in-memory cache, like Azure Cache for Redis, associating information with a user is faster than interacting with a full relational database. | | Job and message queuing | Applications often add tasks to a queue when the operations associated with the request take time to execute. Longer running operations are queued to be processed in sequence, often by another server. This method of deferring work is called task queuing. Azure Cache for Redis provides a distributed queue to enable this pattern in your application.| | Distributed transactions | Applications sometimes require a series of commands against a backend data-store to execute as a single atomic operation. All commands must succeed, or all must be rolled back to the initial state. Azure Cache for Redis supports executing a batch of commands as a single [transaction](https://redis.io/topics/transactions). | Azure Cache for Redis is available in these tiers: | Tier | Description | |||-| Basic | An OSS Redis cache running on a single VM. This tier has no service-level agreement (SLA) and is ideal for development/test and non-critical workloads. | +| Basic | An OSS Redis cache running on a single VM.
This tier has no service-level agreement (SLA) and is ideal for development/test and noncritical workloads. | | Standard | An OSS Redis cache running on two VMs in a replicated configuration. | | Premium | High-performance OSS Redis caches. This tier offers higher throughput, lower latency, better availability, and more features. Premium caches are deployed on more powerful VMs compared to the VMs for Basic or Standard caches. | | Enterprise | High-performance caches powered by Redis Inc.'s Redis Enterprise software. This tier supports Redis modules including RediSearch, RedisBloom, RedisJSON, and RedisTimeSeries. Also, it offers even higher availability than the Premium tier. |-| Enterprise Flash | Cost-effective large caches powered by Redis Inc.'s Redis Enterprise software. This tier extends Redis data storage to non-volatile memory, which is cheaper than DRAM, on a VM. It reduces the overall per-GB memory cost. | +| Enterprise Flash | Cost-effective large caches powered by Redis Inc.'s Redis Enterprise software. This tier extends Redis data storage to nonvolatile memory, which is cheaper than DRAM, on a VM. It reduces the overall per-GB memory cost. | ### Feature comparison Consider the following options when choosing an Azure Cache for Redis tier: - **Memory**: The Basic and Standard tiers offer 250 MB ΓÇô 53 GB; the Premium tier 6 GB - 1.2 TB; the Enterprise tiers 12 GB - 14 TB. To create a Premium tier cache larger than 120 GB, you can use Redis OSS clustering. For more information, see [Azure Cache for Redis Pricing](https://azure.microsoft.com/pricing/details/cache/). For more information, see [How to configure clustering for a Premium Azure Cache for Redis](cache-how-to-premium-clustering.md). - **Performance**: Caches in the Premium and Enterprise tiers are deployed on hardware that has faster processors, giving better performance compared to the Basic or Standard tier. Premium tier Caches have higher throughput and lower latencies. For more information, see [Azure Cache for Redis performance](./cache-planning-faq.yml#azure-cache-for-redis-performance).-- **Dedicated core for Redis server**: All caches except C0 run dedicated VM cores. Redis, by design, uses only one thread for command processing. Azure Cache for Redis uses other cores for I/O processing. Having more cores improves throughput performance even though it may not produce linear scaling. Furthermore, larger VM sizes typically come with higher bandwidth limits than smaller ones. That helps you avoid network saturation, which will cause timeouts in your application.+- **Dedicated core for Redis server**: All caches except C0 run dedicated VM cores. Redis, by design, uses only one thread for command processing. Azure Cache for Redis uses other cores for I/O processing. Having more cores improves throughput performance even though it may not produce linear scaling. Furthermore, larger VM sizes typically come with higher bandwidth limits than smaller ones. That helps you avoid network saturation that cause timeouts in your application. - **Network performance**: If you have a workload that requires high throughput, the Premium or Enterprise tier offers more bandwidth compared to Basic or Standard. Also within each tier, larger size caches have more bandwidth because of the underlying VM that hosts the cache. For more information, see [Azure Cache for Redis performance](./cache-planning-faq.yml#azure-cache-for-redis-performance). 
- **Maximum number of client connections**: The Premium and Enterprise tiers offer the maximum numbers of clients that can connect to Redis, offering higher numbers of connections for larger sized caches. Clustering increases the total amount of network bandwidth available for a clustered cache. - **High availability**: Azure Cache for Redis provides multiple [high availability](cache-high-availability.md) options. It guarantees that a Standard, Premium, or Enterprise cache is available according to our [SLA](https://azure.microsoft.com/support/legal/sla/cache/v1_0/). The SLA only covers connectivity to the cache endpoints. The SLA doesn't cover protection from data loss. We recommend using the Redis data persistence feature in the Premium and Enterprise tiers to increase resiliency against data loss. Consider the following options when choosing an Azure Cache for Redis tier: - **Network isolation**: Azure Private Link and Virtual Network (VNET) deployments provide enhanced security and traffic isolation for your Azure Cache for Redis. VNET allows you to further restrict access through network access control policies. For more information, see [Azure Cache for Redis with Azure Private Link](cache-private-link.md) and [How to configure Virtual Network support for a Premium Azure Cache for Redis](cache-how-to-premium-vnet.md). - **Redis Modules**: Enterprise tiers support [RediSearch](https://docs.redis.com/latest/modules/redisearch/), [RedisBloom](https://docs.redis.com/latest/modules/redisbloom/), [RedisTimeSeries](https://docs.redis.com/latest/modules/redistimeseries/), and [RedisJSON](https://docs.redis.com/latest/modules/redisjson/) (preview). These modules add new data types and functionality to Redis. -You can scale your cache from the Basic tier up to Premium after it has been created. Scaling down to a lower tier isn't supported currently. For step-by-step scaling instructions, see [How to Scale Azure Cache for Redis](cache-how-to-scale.md) and [How to automate a scaling operation](cache-how-to-scale.md#how-to-automate-a-scaling-operation). +You can scale your cache from the Basic tier up to Premium after it has been created. Scaling down to a lower tier isn't supported currently. For step-by-step scaling instructions, see [How to Scale Azure Cache for Redis](cache-how-to-scale.md) and [How to scale - Basic, Standard, and Premium tiers](cache-how-to-scale.md#how-to-scalebasic-standard-and-premium-tiers). ### Special considerations for Enterprise tiers |
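The cache-aside pattern described in the data cache row of this article can be sketched roughly as follows. The `ProductCatalog` type, the `product:` key prefix, and the ten-minute expiration are illustrative choices, not part of the documented sample.

```csharp
using System;
using System.Threading.Tasks;
using StackExchange.Redis;

public class ProductCatalog
{
    private readonly IDatabase _cache;

    public ProductCatalog(IDatabase cache) => _cache = cache;

    // Cache-aside: try the cache first, fall back to the database, then populate
    // the cache with an expiration so stale entries age out.
    public async Task<string> GetProductAsync(string productId)
    {
        string cacheKey = $"product:{productId}";

        RedisValue cached = await _cache.StringGetAsync(cacheKey);
        if (cached.HasValue)
        {
            return cached;
        }

        string product = await LoadProductFromDatabaseAsync(productId);
        await _cache.StringSetAsync(cacheKey, product, TimeSpan.FromMinutes(10));
        return product;
    }

    // Placeholder for the real data access layer.
    private Task<string> LoadProductFromDatabaseAsync(string productId) =>
        Task.FromResult($"{{\"id\":\"{productId}\",\"name\":\"example\"}}");
}
```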
azure-cache-for-redis | Cache Private Link | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-private-link.md | You can restrict public access to the private endpoint of your cache by disablin >[!Important] > There is a `publicNetworkAccess` flag which is `Disabled` by default. > You can set the value to `Disabled` or `Enabled`. When set to enabled, this flag allows both public and private endpoint access to the cache. When set to `Disabled`, it allows only private endpoint access. For more information on how to change the value, see the [FAQ](#how-can-i-change-my-private-endpoint-to-be-disabled-or-enabled-from-public-network-access).++>[!Important] +> Private endpoint is supported on cache tiers Basic, Standard, Premium, and Enterprise. We recommend using private endpoint instead of VNets. Private endpoints are easy to set up or remove, are supported on all tiers, and can connect your cache to multiple different VNets at once. > > |
azure-functions | Create First Function Cli Node | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-cli-node.md | zone_pivot_groups: functions-nodejs-model In this article, you use command-line tools to create a JavaScript function that responds to HTTP requests. After testing the code locally, you deploy it to the serverless environment of Azure Functions. ->[!NOTE] ->The v4 programming model for authoring Functions in Node.js is currently in Preview. Compared to the current v3 model, the v4 model is designed to have a more idiomatic and intuitive experience for JavaScript and TypeScript developers. To learn more, see the [Developer Reference Guide](functions-reference-node.md). -Use the selector at the top to choose the programming model of your choice for completing this quickstart. Note that completion will incur a small cost of a few USD cents or less in your Azure account. +Note that completion will incur a small cost of a few USD cents or less in your Azure account. There is also a [Visual Studio Code-based version](create-first-function-vs-code-node.md) of this article. Before you begin, you must have the following: ::: zone-end ::: zone pivot="nodejs-model-v4" -+ The [Azure Functions Core Tools](./functions-run-local.md#v2) version v4.0.5085 or above ++ The [Azure Functions Core Tools](./functions-run-local.md#v2) version v4.0.5095 or above ::: zone-end + One of the following tools for creating Azure resources: Verify your prerequisites, which depend on whether you are using Azure CLI or Az ::: zone-end ::: zone pivot="nodejs-model-v4" -+ In a terminal or command window, run `func --version` to check that the Azure Functions Core Tools are version v4.0.4915 or above. ++ In a terminal or command window, run `func --version` to check that the Azure Functions Core Tools are version v4.0.5095 or above. ::: zone-end + Run `az --version` to check that the Azure CLI version is 2.4 or later. Verify your prerequisites, which depend on whether you are using Azure CLI or Az ::: zone-end ::: zone pivot="nodejs-model-v4" -+ In a terminal or command window, run `func --version` to check that the Azure Functions Core Tools are version v4.0.4915 or above. ++ In a terminal or command window, run `func --version` to check that the Azure Functions Core Tools are version v4.0.5095 or above. ::: zone-end + Run `(Get-Module -ListAvailable Az).Version` and verify version 5.0 or later. |
azure-functions | Create First Function Cli Typescript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-cli-typescript.md | zone_pivot_groups: functions-nodejs-model In this article, you use command-line tools to create a TypeScript function that responds to HTTP requests. After testing the code locally, you deploy it to the serverless environment of Azure Functions. ->[!NOTE] ->The v4 programming model for authoring Functions in Node.js is currently in Preview. Compared to the current v3 model, the v4 model is designed to have a more idiomatic and intuitive experience for JavaScript and TypeScript developers. To learn more, see the [Developer Reference Guide](functions-reference-node.md). -Use the selector at the top to choose the programming model of your choice for completing this quickstart. Note that completion will incur a small cost of a few USD cents or less in your Azure account. +Note that completion will incur a small cost of a few USD cents or less in your Azure account. There's also a [Visual Studio Code-based version](create-first-function-vs-code-typescript.md) of this article. Before you begin, you must have the following: + The [Azure Functions Core Tools](./functions-run-local.md#v2) version 4.x. ::: zone-end ::: zone pivot="nodejs-model-v4" -+ The [Azure Functions Core Tools](./functions-run-local.md#v2) version v4.0.5085 or above ++ The [Azure Functions Core Tools](./functions-run-local.md#v2) version v4.0.5095 or above ::: zone-end + One of the following tools for creating Azure resources: Verify your prerequisites, which depend on whether you're using Azure CLI or Azu ::: zone-end ::: zone pivot="nodejs-model-v4" -+ In a terminal or command window, run `func --version` to check that the Azure Functions Core Tools are version v4.0.4915 or above. ++ In a terminal or command window, run `func --version` to check that the Azure Functions Core Tools are version v4.0.5095 or above. ::: zone-end + Run `az --version` to check that the Azure CLI version is 2.4 or later. Verify your prerequisites, which depend on whether you're using Azure CLI or Azu ::: zone-end ::: zone pivot="nodejs-model-v4" -+ In a terminal or command window, run `func --version` to check that the Azure Functions Core Tools are version v4.0.4915 or above. ++ In a terminal or command window, run `func --version` to check that the Azure Functions Core Tools are version v4.0.5095 or above. ::: zone-end + Run `(Get-Module -ListAvailable Az).Version` and verify version 5.0 or later. |
azure-functions | Create First Function Vs Code Node | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-node.md | zone_pivot_groups: functions-nodejs-model Use Visual Studio Code to create a JavaScript function that responds to HTTP requests. Test the code locally, then deploy it to the serverless environment of Azure Functions. ->[!NOTE] ->The v4 programming model for authoring Functions in Node.js is currently in Preview. Compared to the current v3 model, the v4 model is designed to have a more idiomatic and intuitive experience for JavaScript and TypeScript developers. To learn more, see the [Developer Reference Guide](functions-reference-node.md). -Use the selector at the top to choose the programming model of your choice for completing this quickstart. Note that completion will incur a small cost of a few USD cents or less in your Azure account. +Note that completion will incur a small cost of a few USD cents or less in your Azure account. There's also a [CLI-based version](create-first-function-cli-node.md) of this article. |
azure-functions | Create First Function Vs Code Typescript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-typescript.md | zone_pivot_groups: functions-nodejs-model In this article, you use Visual Studio Code to create a TypeScript function that responds to HTTP requests. After testing the code locally, you deploy it to the serverless environment of Azure Functions. ->[!NOTE] ->The v4 programming model for authoring Functions in Node.js is currently in Preview. Compared to the current v3 model, the v4 model is designed to have a more idiomatic and intuitive experience for JavaScript and TypeScript developers. To learn more, see the [Developer Reference Guide](functions-reference-node.md). -Use the selector at the top to choose the programming model of your choice for completing this quickstart. Note that completion will incur a small cost of a few USD cents or less in your Azure account. +Note that completion will incur a small cost of a few USD cents or less in your Azure account. There's also a [CLI-based version](create-first-function-cli-typescript.md) of this article. Before you get started, make sure you have the following requirements in place: + [Azure Functions Core Tools 4.x](functions-run-local.md#install-the-azure-functions-core-tools). ::: zone-end ::: zone pivot="nodejs-model-v4" -+ [Azure Functions Core Tools v4.0.5085 or above](functions-run-local.md#install-the-azure-functions-core-tools). ++ [Azure Functions Core Tools v4.0.5095 or above](functions-run-local.md#install-the-azure-functions-core-tools). ::: zone-end ## <a name="create-an-azure-functions-project"></a>Create your local project |
azure-functions | Durable Functions Cloud Backup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-cloud-backup.md | -> [!NOTE] -> The new programming model for authoring Functions in Node.js (V4) is currently in preview. Compared to the current model, the new experience is designed to be more idiomatic and intuitive for JavaScript and TypeScript developers. To learn more, see the Azure Functions Node.js [developer guide](../functions-reference-node.md?pivots=nodejs-model-v4). -> -> In the following code snippets, JavaScript (PM4) denotes programming model V4, the new experience. [!INCLUDE [durable-functions-prerequisites](../../../includes/durable-functions-prerequisites.md)] |
azure-functions | Durable Functions Error Handling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-error-handling.md | ms.devlang: csharp, javascript, powershell, python, java Durable Function orchestrations are implemented in code and can use the programming language's built-in error-handling features. There really aren't any new concepts you need to learn to add error handling and compensation into your orchestrations. However, there are a few behaviors that you should be aware of. -> [!NOTE] -> The new programming model for authoring Functions in Node.js (V4) is currently in preview. Compared to the current model, the new experience is designed to be more idiomatic and intuitive for JavaScript and TypeScript developers. To learn more, see the Azure Functions Node.js [developer guide](../functions-reference-node.md?pivots=nodejs-model-v4). -> -> In the following code snippets, JavaScript (PM4) denotes programming model V4, the new experience. ## Errors in activity functions |
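As a rough illustration of the error-handling behavior this row describes, the following C# orchestrator catches an activity failure and runs a compensating activity. The activity names and the `TransferOperation` type are hypothetical; they only sketch how exceptions thrown by activities surface in the orchestrator.

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;

public static class TransferFundsOrchestration
{
    [FunctionName("TransferFunds")]
    public static async Task Run(
        [OrchestrationTrigger] IDurableOrchestrationContext context)
    {
        var input = context.GetInput<TransferOperation>();

        // Debit first; if the credit fails, compensate by refunding the source account.
        await context.CallActivityAsync("DebitAccount", input);
        try
        {
            await context.CallActivityAsync("CreditAccount", input);
        }
        catch
        {
            // Activity exceptions propagate to the orchestrator, so it can roll back earlier steps.
            await context.CallActivityAsync("RefundAccount", input);
            throw;
        }
    }

    public class TransferOperation
    {
        public string SourceAccount { get; set; }
        public string DestinationAccount { get; set; }
        public decimal Amount { get; set; }
    }
}
```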
azure-functions | Durable Functions Orchestrations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-orchestrations.md | When an orchestration function is given more work to do (for example, a response The event-sourcing behavior of the Durable Task Framework is closely coupled with the orchestrator function code you write. Suppose you have an activity-chaining orchestrator function, like the following orchestrator function: -> [!NOTE] -> The new programming model for authoring Functions in Node.js (V4) is currently in preview. Compared to the current model, the new experience is designed to be more idiomatic and intuitive for JavaScript and TypeScript developers. To learn more, see the Azure Functions Node.js [developer guide](../functions-reference-node.md?pivots=nodejs-model-v4). -> -> In the following code snippets, JavaScript (PM4) denotes programming model V4, the new experience. # [C# (InProc)](#tab/csharp-inproc) |
azure-functions | Durable Functions Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-overview.md | Durable Functions is designed to work with all Azure Functions programming langu | Java | Functions 4.0+ | Java 8+ | 4.x bundles | > [!NOTE]-> The new programming models for authoring Functions in Python (V2) and Node.js (V4) are currently in preview. Compared to the current models, the new experiences are designed to be more idiomatic and intuitive for Python and JavaScript/TypeScript developers. To learn more, see Azure Functions [Python developer guide](../functions-reference-python.md?pivots=python-mode-decorators) and [Node.js developer guide](../functions-reference-node.md?pivots=nodejs-model-v4). +> The new programming models for authoring Functions in Python (V2) and Node.js (V4) are currently in preview. Compared to the current models, the new experiences are designed to be more flexible and intuitive for Python and JavaScript/TypeScript developers. Learn more about the differences between the models in the [Python developer guide](../functions-reference-python.md?pivots=python-mode-decorators) and [Node.js upgrade guide](../functions-node-upgrade-v4.md). > > In the following code snippets, Python (PM2) denotes programming model V2, and JavaScript (PM4) denotes programming model V4, the new experiences. |
azure-functions | Durable Functions Phone Verification | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-phone-verification.md | This sample demonstrates how to build a [Durable Functions](durable-functions-ov This sample implements an SMS-based phone verification system. These types of flows are often used when verifying a customer's phone number or for multi-factor authentication (MFA). It is a powerful example because the entire implementation is done using a couple small functions. No external data store, such as a database, is required. -> [!NOTE] -> The new programming model for authoring Functions in Node.js (V4) is currently in preview. Compared to the current model, the new experience is designed to be more idiomatic and intuitive for JavaScript and TypeScript developers. To learn more, see the Azure Functions Node.js [developer guide](../functions-reference-node.md?pivots=nodejs-model-v4). -> -> In the following code snippets, JavaScript (PM4) denotes programming model V4, the new experience. [!INCLUDE [durable-functions-prerequisites](../../../includes/durable-functions-prerequisites.md)] |
azure-functions | Durable Functions Sequence | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-sequence.md | Function chaining refers to the pattern of executing a sequence of functions in [!INCLUDE [durable-functions-prerequisites](../../../includes/durable-functions-prerequisites.md)] -> [!NOTE] -> The new programming model for authoring Functions in Node.js (V4) is currently in preview. Compared to the current model, the new experience is designed to be more idiomatic and intuitive for JavaScript and TypeScript developers. To learn more, see the Azure Functions Node.js [developer guide](../functions-reference-node.md?pivots=nodejs-model-v4). -> -> In the following code snippets, JavaScript (PM4) denotes programming model V4, the new experience. ## The functions |
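A minimal C# sketch of the function-chaining pattern this article covers might look like the following. The `HelloSequence` and `SayHello` names and the city inputs are illustrative, assuming the in-process Durable Functions extension.

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;

public static class HelloSequence
{
    // Orchestrator: calls the same activity three times in order and collects the
    // outputs, illustrating the chaining pattern.
    [FunctionName("HelloSequence")]
    public static async Task<List<string>> Run(
        [OrchestrationTrigger] IDurableOrchestrationContext context)
    {
        var outputs = new List<string>
        {
            await context.CallActivityAsync<string>("SayHello", "Tokyo"),
            await context.CallActivityAsync<string>("SayHello", "Seattle"),
            await context.CallActivityAsync<string>("SayHello", "London")
        };
        return outputs;
    }

    // Activity: does the actual work for one step of the chain.
    [FunctionName("SayHello")]
    public static string SayHello([ActivityTrigger] string name) => $"Hello, {name}!";
}
```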
azure-functions | Durable Functions Sub Orchestrations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-sub-orchestrations.md | Sub-orchestrator functions behave just like activity functions from the caller's > [!NOTE] > Sub-orchestrations are not yet supported in PowerShell. -> [!NOTE] -> The new programming model for authoring Functions in Node.js (V4) is currently in preview. Compared to the current model, the new experience is designed to be more idiomatic and intuitive for JavaScript and TypeScript developers. To learn more, see the Azure Functions Node.js [developer guide](../functions-reference-node.md?pivots=nodejs-model-v4). -> -> In the following code snippets, JavaScript (PM4) denotes programming model V4, the new experience. ## Example |
azure-functions | Quickstart Js Vscode | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/quickstart-js-vscode.md | zone_pivot_groups: functions-nodejs-model In this article, you learn how to use the Visual Studio Code Azure Functions extension to locally create and test a "hello world" durable function. This function will orchestrate and chain together calls to other functions. You then publish the function code to Azure. -->[!NOTE] ->The v4 programming model for authoring Functions in Node.js is currently in preview. Compared to the current v3 model, the v4 model is designed to have a more idiomatic and intuitive experience for JavaScript and TypeScript developers. To learn more, see the [Developer Reference Guide](../functions-reference-node.md). -> ->Use the selector at the top to choose the programming model of your choice for completing this quickstart.  To complete this tutorial: * Make sure you have the latest version of the [Azure Functions Core Tools](../functions-run-local.md). ::: zone-end ::: zone pivot="nodejs-model-v4"-* Make sure you have [Azure Functions Core Tools](../functions-run-local.md) version `v4.0.5085` or above. +* Make sure you have [Azure Functions Core Tools](../functions-run-local.md) version `v4.0.5095` or above. ::: zone-end * Durable Functions require an Azure storage account. You need an Azure subscription. |
azure-functions | Quickstart Ts Vscode | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/quickstart-ts-vscode.md | zone_pivot_groups: functions-nodejs-model In this article, you learn how to use the Visual Studio Code Azure Functions extension to locally create and test a "hello world" durable function. This function will orchestrate and chain together calls to other functions. You then publish the function code to Azure. -->[!NOTE] ->The v4 programming model for authoring Functions in Node.js is currently in Preview. Compared to the current v3 model, the v4 model is designed to have a more idiomatic and intuitive experience for JavaScript and TypeScript developers. To learn more, see the [Developer Reference Guide](../functions-reference-node.md). -> ->Use the selector at the top to choose the programming model of your choice for completing this quickstart.  To complete this tutorial: * Make sure you have the latest version of the [Azure Functions Core Tools](../functions-run-local.md). ::: zone-end ::: zone pivot="nodejs-model-v4"-* Make sure you have [Azure Functions Core Tools](../functions-run-local.md) version `v4.0.5085` or above. +* Make sure you have [Azure Functions Core Tools](../functions-run-local.md) version `v4.0.5095` or above. ::: zone-end * Durable Functions require an Azure storage account. You need an Azure subscription. |
azure-functions | Functions Bindings Twilio | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-twilio.md | You can add the extension to your project by explicitly installing the [NuGet pa ::: zone-end --- ## Example Unless otherwise noted, these examples are specific to version 2.x and later version of the Functions runtime. public static async Task Run(string myQueueItem, IAsyncCollector<CreateMessageOp - ::: zone-end ::: zone pivot="programming-language-javascript" The following example shows a Twilio output binding in a *function.json* file and a [JavaScript function](functions-reference-node.md) that uses the binding. In version 2.x, you set the `to` value in your code. > [Learn more about Azure functions triggers and bindings](functions-triggers-bindings.md) [extension bundle]: ./functions-bindings-register.md#extension-bundles-[Update your extensions]: ./functions-bindings-register.md +[Update your extensions]: ./functions-bindings-register.md |
azure-functions | Functions How To Use Azure Function App Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-how-to-use-azure-function-app-settings.md | You can use either the Azure portal or Azure CLI commands to migrate a function + Migration isn't supported on Linux. + The source plan and the target plan must be in the same resource group and geographical region. For more information, see [Move an app to another App Service plan](../app-service/app-service-plan-manage.md#move-an-app-to-another-app-service-plan). + The specific CLI commands depend on the direction of the migration.++ Downtime in your function executions occurs as the function app is migrated between plans.++ State and other app-specific content is maintained, since the same Azure Files share is used by the app both before and after migration. ### Migration in the portal |
azure-functions | Functions Node Upgrade V4 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-node-upgrade-v4.md | Version 4 was designed with the following goals in mind: Version 4 of the Node.js programming model requires the following minimum versions: -- [`@azure/functions`](https://www.npmjs.com/package/@azure/functions) npm package v4.0.0-alpha.8++- [`@azure/functions`](https://www.npmjs.com/package/@azure/functions) npm package v4.0.0-alpha.9+ - [Node.js](https://nodejs.org/en/download/releases/) v18+ - [TypeScript](https://www.typescriptlang.org/) v4+ - [Azure Functions Runtime](./functions-versions.md) v4.16+-- [Azure Functions Core Tools](./functions-run-local.md) v4.0.4915+ (if running locally)+- [Azure Functions Core Tools](./functions-run-local.md) v4.0.5095+ (if running locally) ++## Enable v4 programming model ++The following application setting is required to run the v4 programming model while it is in preview: +- Name: `AzureWebJobsFeatureFlags` +- Value: `EnableWorkerIndexing` ++If you're running locally using [Azure Functions Core Tools](functions-run-local.md), you should add this setting to your `local.settings.json` file. If you're running in Azure, follow these steps with the tool of your choice: ++# [Azure CLI](#tab/azure-cli-set-indexing-flag) ++Replace `<FUNCTION_APP_NAME>` and `<RESOURCE_GROUP_NAME>` with the name of your function app and resource group, respectively. ++```azurecli +az functionapp config appsettings set --name <FUNCTION_APP_NAME> --resource-group <RESOURCE_GROUP_NAME> --settings AzureWebJobsFeatureFlags=EnableWorkerIndexing +``` ++# [Azure PowerShell](#tab/azure-powershell-set-indexing-flag) ++Replace `<FUNCTION_APP_NAME>` and `<RESOURCE_GROUP_NAME>` with the name of your function app and resource group, respectively. ++```azurepowershell +Update-AzFunctionAppSetting -Name <FUNCTION_APP_NAME> -ResourceGroupName <RESOURCE_GROUP_NAME> -AppSetting @{"AzureWebJobsFeatureFlags" = "EnableWorkerIndexing"} +``` ++# [VS Code](#tab/vs-code-set-indexing-flag) ++1. Make sure you have the [Azure Functions extension for VS Code](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) installed +1. Press <kbd>F1</kbd> to open the command palette. In the command palette, search for and select `Azure Functions: Add New Setting...`. +1. Choose your subscription and function app when prompted +1. For the name, type `AzureWebJobsFeatureFlags` and press <kbd>Enter</kbd>. +1. For the value, type `EnableWorkerIndexing` and press <kbd>Enter</kbd>. ++ ## Include the npm package The http request and response types are now a subset of the [fetch standard](htt ## Troubleshooting -If you see the following error, make sure you [set the `EnableWorkerIndexing` flag](./functions-reference-node.md#enable-v4-programming-model) and you're using the minimum version of all [requirements](#requirements): +If you see the following error, make sure you [set the `EnableWorkerIndexing` flag](#enable-v4-programming-model) and you're using the minimum version of all [requirements](#requirements): > No job functions found. Try making your job classes and methods public. If you're using binding extensions (e.g. Azure Storage, ServiceBus, Timers, etc.) make sure you've called the registration method for the extension(s) in your startup code (e.g. builder.AddAzureStorage(), builder.AddServiceBus(), builder.AddTimers(), etc.). |
azure-functions | Functions Reference Node | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-node.md | The following table shows each version of the Node.js programming model along wi | [Programming Model Version](https://www.npmjs.com/package/@azure/functions?activeTab=versions) | Support Level | [Functions Runtime Version](./functions-versions.md) | [Node.js Version](https://github.com/nodejs/release#release-schedule) | Description | | - | - | | | |-| 4.x | Preview | 4.x | 18.x | Supports a flexible file structure and code-centric approach to triggers and bindings. | +| 4.x | Preview | 4.16+ | 18.x | Supports a flexible file structure and code-centric approach to triggers and bindings. | | 3.x | GA | 4.x | 18.x, 16.x, 14.x | Requires a specific file structure with your triggers and bindings declared in a "function.json" file | | 2.x | GA (EOL) | 3.x | 14.x, 12.x, 10.x | Reached end of life (EOL) on December 13, 2022. See [Functions Versions](./functions-versions.md) for more info. | | 1.x | GA (EOL) | 2.x | 10.x, 8.x | Reached end of life (EOL) on December 13, 2022. See [Functions Versions](./functions-versions.md) for more info. | At the root of the project, there's a shared [host.json](functions-host-json.md) ::: zone pivot="nodejs-model-v4" -## Enable v4 programming model --The following application setting is required to run the v4 programming model while it is in preview: -- Name: `AzureWebJobsFeatureFlags`-- Value: `EnableWorkerIndexing`--If you're running locally using [Azure Functions Core Tools](functions-run-local.md), you should add this setting to your `local.settings.json` file. If you're running in Azure, follow these steps with the tool of your choice: --# [Azure CLI](#tab/azure-cli-set-indexing-flag) --Replace `<FUNCTION_APP_NAME>` and `<RESOURCE_GROUP_NAME>` with the name of your function app and resource group, respectively. --```azurecli -az functionapp config appsettings set --name <FUNCTION_APP_NAME> --resource-group <RESOURCE_GROUP_NAME> --settings AzureWebJobsFeatureFlags=EnableWorkerIndexing -``` --# [Azure PowerShell](#tab/azure-powershell-set-indexing-flag) --Replace `<FUNCTION_APP_NAME>` and `<RESOURCE_GROUP_NAME>` with the name of your function app and resource group, respectively. --```azurepowershell -Update-AzFunctionAppSetting -Name <FUNCTION_APP_NAME> -ResourceGroupName <RESOURCE_GROUP_NAME> -AppSetting @{"AzureWebJobsFeatureFlags" = "EnableWorkerIndexing"} -``` --# [VS Code](#tab/vs-code-set-indexing-flag) --1. Make sure you have the [Azure Functions extension for VS Code](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) installed -1. Press <kbd>F1</kbd> to open the command palette. In the command palette, search for and select `Azure Functions: Add New Setting...`. -1. Choose your subscription and function app when prompted -1. For the name, type `AzureWebJobsFeatureFlags` and press <kbd>Enter</kbd>. -1. For the value, type `EnableWorkerIndexing` and press <kbd>Enter</kbd>. --- ## Folder structure The recommended folder structure for a JavaScript project looks like the following example: |
azure-maps | Geocoding Coverage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/geocoding-coverage.md | -However, the [Search service] doesn't have the same level of information and accuracy for all regions and countries. Use this article to determine what kind of locations you can reliably search for in each region. +However, the [Search service] doesn't have the same level of information and accuracy for all countries/regions. Use this article to determine what kind of locations you can reliably search for in each region. The ability to geocode in a country/region is dependent upon the road data coverage and geocoding precision of the geocoding service. The following categorizations are used to specify the level of geocoding support in each country/region. The ability to geocode in a country/region is dependent upon the road data cover | Sweden | | ✓ | ✓ | ✓ | ✓ | | Switzerland | ✓ | ✓ | ✓ | ✓ | ✓ | | Tajikistan | | | ✓ | ✓ | ✓ |-| Turkey | ✓ | ✓ | ✓ | ✓ | ✓ | +| Türkiye | ✓ | ✓ | ✓ | ✓ | ✓ | | Turkmenistan | | | | ✓ | ✓ | | Ukraine | ✓ | ✓ | ✓ | ✓ | ✓ | | United Kingdom | ✓ | ✓ | ✓ | ✓ | ✓ | |
azure-maps | Render Coverage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/render-coverage.md | The render coverage tables below list the countries that support Azure Maps road | Spain | ✓ | | Sweden | ✓ | | Switzerland | ✓ |-| Turkey | ✓ | +| Türkiye | ✓ | | Ukraine | ✓ | | United Kingdom | ✓ | | Vatican City | ✓ | |
azure-maps | Routing Coverage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/routing-coverage.md | The following tables provide coverage information for Azure Maps routing. | Sweden | ✓ | ✓ | ✓ | | Switzerland | ✓ | ✓ | ✓ | | Tajikistan | ✓ | | |-| Turkey | ✓ | ✓ | ✓ | +| Türkiye | ✓ | ✓ | ✓ | | Turkmenistan | ✓ | | | | Ukraine | ✓ | ✓ | | | United Kingdom | ✓ | ✓ | ✓ | |
azure-maps | Traffic Coverage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/traffic-coverage.md | The following tables provide information about what kind of traffic information | Spain | ✓ | ✓ | | Sweden | ✓ | ✓ | | Switzerland | ✓ | ✓ |-| Turkey | ✓ | ✓ | +| Türkiye | ✓ | ✓ | | Ukraine | ✓ | ✓ | | United Kingdom | ✓ | ✓ | |
azure-maps | Weather Coverage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/weather-coverage.md | Radar tiles, showing areas of rain, snow, ice and mixed conditions, are returned | Svalbard | ✓ | | | ✓ | | Sweden | ✓ | ✓ | ✓ | ✓ | | Switzerland | ✓ | ✓ | ✓ | ✓ |-| Turkey | ✓ | ✓ | | ✓ | +| Türkiye | ✓ | ✓ | | ✓ | | Ukraine | ✓ | ✓ | | ✓ | | United Kingdom | ✓ | ✓ | ✓ | ✓ | | Vatican City | ✓ | | ✓ | ✓ | |
azure-monitor | Agents Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md | description: Overview of the Azure Monitor Agent, which collects monitoring data Previously updated : 2/21/2023 Last updated : 3/24/2023 In addition to the generally available data collection listed above, Azure Monit | Azure Monitor feature | Current support | Other extensions installed | More information | | : | : | : | : |-| [VM insights](../vm/vminsights-overview.md) | Public preview | Dependency Agent extension, if you're using the Map Services feature | [Enable VM Insights overview](../vm/vminsights-enable-overview.md) | +| [VM insights](../vm/vminsights-overview.md) | Public preview | Dependency Agent extension, if you're using the Map Services feature | [Enable VM Insights](../vm/vminsights-enable-overview.md) | +| [Container insights](../containers/container-insights-overview.md) | Public preview | Containerized Azure Monitor agent | [Enable Container Insights](../containers/container-insights-onboard.md) | In addition to the generally available data collection listed above, Azure Monitor Agent also supports these Azure services in preview: In addition to the generally available data collection listed above, Azure Monit | [Change Tracking](../../automation/change-tracking/overview.md) | Public preview | Change Tracking extension | [Change Tracking and Inventory using Azure Monitor Agent](../../automation/change-tracking/overview-monitoring-agent.md) | | [Update Management](../../automation/update-management/overview.md) (available without Azure Monitor Agent) | Use Update Management v2 - Public preview | None | [Update management center (Public preview) documentation](../../update-center/index.yml) | | [Network Watcher](../../network-watcher/network-watcher-monitoring-overview.md) | Connection Monitor: Public preview | Azure NetworkWatcher extension | [Monitor network connectivity by using Azure Monitor Agent](../../network-watcher/azure-monitor-agent-with-connection-monitor.md) |+| Azure Stack HCI Insights | Private preview | No additional extension installed | [Sign up here](https://aka.ms/amadcr-privatepreviews) | +| Azure Virtual Desktop (AVD) Insights | Private preview | No additional extension installed | [Sign up here](https://aka.ms/amadcr-privatepreviews) | > [!NOTE] > Features and services listed above in preview **may not be available in Azure Government and China clouds**. They will be available typically within a month *after* the features/services become generally available. In addition to the generally available data collection listed above, Azure Monit ## Supported regions -Azure Monitor Agent is available in all public regions and Azure Government clouds, for generally available features. It's not yet supported in air-gapped clouds. For more information, see [Product availability by region](https://azure.microsoft.com/global-infrastructure/services/?products=monitor&rar=true&regions=all). +Azure Monitor Agent is available in all public regions, Azure Government, and China clouds for generally available features. It's not yet supported in air-gapped clouds. For more information, see [Product availability by region](https://azure.microsoft.com/global-infrastructure/services/?products=monitor&rar=true&regions=all). ## Costs -There's no cost for the Azure Monitor Agent, but you might incur charges for the data ingested.
For information on Log Analytics data collection and retention and for customer metrics, see [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/). +There's no cost for the Azure Monitor Agent, but you might incur charges for the data ingested and stored. For information on Log Analytics data collection and retention and for customer metrics, see [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/). ## Compare to legacy agents |
azure-monitor | Alerts Manage Alert Rules | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-manage-alert-rules.md | Manage your alert rules in the Azure portal, or using the CLI or PowerShell. ## Manage alert rules in the Azure portal 1. In the [portal](https://portal.azure.com/), select **Monitor**, then **Alerts**.-1. From the top command bar, select **Alert rules**. The page shows all your alert rules across on all subscriptions. +1. From the top command bar, select **Alert rules**. The page shows all your alert rules on all subscriptions. :::image type="content" source="media/alerts-managing-alert-instances/alerts-rules-page.png" alt-text="Screenshot of alerts rules page."::: Manage your alert rules in the Azure portal, or using the CLI or PowerShell. > [!NOTE] > If you filter on a `target resource type` scope, the alerts rules list doesnΓÇÖt include resource health alert rules. To see the resource health alert rules, remove the `Target resource type` filter, or filter the rules based on the `Resource group` or `Subscription`. -1. Select the alert rule that you want to edit. You can select multiple alert rules and enable or disable them. Multi-selecting rules can be useful when you want to perform maintenance on specific resources. -1. Edit any of the fields in the following sections. You can't edit the **Alert Rule Name**, or the **Signal type** of an existing alert rule. +1. Select an alert rule or use the checkboxes on the left to select multiple alert rules. +1. If you select multiple alert rules, you can enable or disable the selected rules. Selecting multiple rules can be useful when you want to perform maintenance on specific resources. +1. If you select a single alert rule, you can edit, disable, duplicate, or delete the rule in the alert rule pane. ++ :::image type="content" source="media/alerts-managing-alert-instances/alerts-rules-pane.png" alt-text="Screenshot of alerts rules pane."::: ++1. To edit an alert rule, select **Edit**, and then edit any of the fields in the following sections. You can't edit the **Alert Rule Name**, or the **Signal type** of an existing alert rule. - **Scope**. You can edit the scope for all alert rules **other than**: - Log alert rules - Metric alert rules that monitor a custom metric |
azure-monitor | Itsmc Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-overview.md | Depending on your integration, start connecting to your ITSM tool with these ste - For ServiceNow ITSM, use the ITSM action: 1. Connect to your ITSM. For more information, see the [ServiceNow connection instructions](./itsmc-connections-servicenow.md).- 1. (Optional) Set up the IP ranges. To list the ITSM IP addresses to allow ITSM connections from partner ITSM tools, list the whole public IP range of an Azure region where the Log Analytics workspace belongs. For more information, see the [Microsoft Download Center](https://www.microsoft.com/en-us/download/details.aspx?id=56519). For regions EUS/WEU/EUS2/WUS2/US South Central, the customer can list the ActionGroup network tag only. + 1. (Optional) Set up the IP ranges. To list the ITSM IP addresses to allow ITSM connections from partner ITSM tools, list the whole public IP range of an Azure region where the Log Analytics workspace belongs. For more information, see the [Microsoft Download Center](https://www.microsoft.com/en-us/download/details.aspx?id=56519). For regions EUS/WEU/WUS2/US South Central, the customer can list the ActionGroup network tag only. 1. [Configure your Azure ITSM solution and create the ITSM connection](./itsmc-definition.md#install-it-service-management-connector). 1. [Configure an action group to use the ITSM connector](./itsmc-definition.md#define-a-template). |
azure-monitor | App Insights Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-insights-overview.md | Application Insights is an extension of [Azure Monitor](../overview.md) and prov 1. *Proactively* understand how an application is performing. 1. *Reactively* review application execution data to determine the cause of an incident. + In addition to collecting [Metrics](standard-metrics.md) and application [Telemetry](data-model-complete.md) data, which describe application activities and health, Application Insights can also be used to collect and store application [trace logging data](asp-net-trace-logs.md). The [log trace](asp-net-trace-logs.md) is associated with other telemetry to give a detailed view of the activity. Adding trace logging to existing apps only requires providing a destination for the logs; the logging framework rarely needs to be changed. - Application Insights provides other features including, but not limited to: - [Live Metrics](live-stream.md) ΓÇô observe activity from your deployed application in real time with no effect on the host environment |
azure-monitor | Ilogger | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/ilogger.md | In this article, you'll learn how to capture logs with Application Insights in . ## ASP.NET Core applications -To add Application Insights logging to ASP.NET Core applications, use the `Microsoft.Extensions.Logging.ApplicationInsights` NuGet provider package. +To add Application Insights logging to ASP.NET Core applications: -1. Install the [`Microsoft.Extensions.Logging.ApplicationInsights`][nuget-ai] NuGet package. +1. Install the [`Microsoft.Extensions.Logging.ApplicationInsights`][nuget-ai]. 1. Add `ApplicationInsightsLoggerProvider`: - ```csharp - using Microsoft.AspNetCore.Hosting; - using Microsoft.Extensions.DependencyInjection; - using Microsoft.Extensions.Hosting; - using Microsoft.Extensions.Logging; - using Microsoft.Extensions.Logging.ApplicationInsights; - - namespace WebApplication +# [.NET 6.0+](#tab/dotnet6) ++```csharp +using Microsoft.Extensions.Logging.ApplicationInsights; ++var builder = WebApplication.CreateBuilder(args); ++// Add services to the container. ++builder.Services.AddControllers(); +// Learn more about configuring Swagger/OpenAPI at https://aka.ms/aspnetcore/swashbuckle +builder.Services.AddEndpointsApiExplorer(); +builder.Services.AddSwaggerGen(); ++builder.Logging.AddApplicationInsights( + configureTelemetryConfiguration: (config) => + config.ConnectionString = builder.Configuration.GetConnectionString("APPLICATIONINSIGHTS_CONNECTION_STRING"), + configureApplicationInsightsLoggerOptions: (options) => { } + ); ++builder.Logging.AddFilter<ApplicationInsightsLoggerProvider>("your-category", LogLevel.Trace); ++var app = builder.Build(); ++// Configure the HTTP request pipeline. +if (app.Environment.IsDevelopment()) +{ + app.UseSwagger(); + app.UseSwaggerUI(); +} ++app.UseHttpsRedirection(); ++app.UseAuthorization(); ++app.MapControllers(); ++app.Run(); +``` ++# [.NET 5.0](#tab/dotnet5) ++```csharp +using Microsoft.AspNetCore.Hosting; +using Microsoft.Extensions.DependencyInjection; +using Microsoft.Extensions.Hosting; +using Microsoft.Extensions.Logging; +using Microsoft.Extensions.Logging.ApplicationInsights; ++namespace WebApplication +{ + public class Program {- public class Program + public static void Main(string[] args) {- public static void Main(string[] args) - { - var host = CreateHostBuilder(args).Build(); - - var logger = host.Services.GetRequiredService<ILogger<Program>>(); - logger.LogInformation("From Program, running the host now."); - - host.Run(); - } - - public static IHostBuilder CreateHostBuilder(string[] args) => - Host.CreateDefaultBuilder(args) - .ConfigureWebHostDefaults(webBuilder => - { - webBuilder.UseStartup<Startup>(); - }) - .ConfigureLogging((context, builder) => - { - builder.AddApplicationInsights( - configureTelemetryConfiguration: (config) => config.ConnectionString = context.Configuration["APPLICATIONINSIGHTS_CONNECTION_STRING"], - configureApplicationInsightsLoggerOptions: (options) => { } - ); - - // Capture all log-level entries from Startup - builder.AddFilter<ApplicationInsightsLoggerProvider>( - typeof(Startup).FullName, LogLevel.Trace); - }); + var host = CreateHostBuilder(args).Build(); ++ var logger = host.Services.GetRequiredService<ILogger<Program>>(); + logger.LogInformation("From Program, running the host now."); ++ host.Run(); }++ public static IHostBuilder CreateHostBuilder(string[] args) => + Host.CreateDefaultBuilder(args) + .ConfigureWebHostDefaults(webBuilder => + { + 
webBuilder.UseStartup<Startup>(); + }) + .ConfigureLogging((context, builder) => + { + builder.AddApplicationInsights( + configureTelemetryConfiguration: (config) => config.ConnectionString = context.Configuration["APPLICATIONINSIGHTS_CONNECTION_STRING"], + configureApplicationInsightsLoggerOptions: (options) => { } + ); ++ // Capture all log-level entries from Startup + builder.AddFilter<ApplicationInsightsLoggerProvider>( + typeof(Startup).FullName, LogLevel.Trace); + }); }- ``` +} +``` ++ With the NuGet package installed, and the provider being registered with dependency injection, the app is ready to log. With constructor injection, either <xref:Microsoft.Extensions.Logging.ILogger> or the generic-type alternative <xref:Microsoft.Extensions.Logging.ILogger%601> is required. When these implementations are resolved, `ApplicationInsightsLoggerProvider` will provide them. Logged messages or exceptions will be sent to Application Insights. For more information, see [Logging in ASP.NET Core](/aspnet/core/fundamentals/lo ## Console application -To add Application Insights logging to console applications, first install the [`Microsoft.Extensions.Logging.ApplicationInsights`][nuget-ai] NuGet provider package. +To add Application Insights logging to console applications, first install the following NuGet packages: ++* [`Microsoft.Extensions.Logging.ApplicationInsights`][nuget-ai] +* [`Microsoft.Extensions.DependencyInjection`][nuget-ai] The following example uses the Microsoft.Extensions.Logging.ApplicationInsights package and demonstrates the default behavior for a console application. The Microsoft.Extensions.Logging.ApplicationInsights package should be used in a console application or whenever you want a bare minimum implementation of Application Insights without the full feature set such as metrics, distributed tracing, sampling, and telemetry initializers. -Here are the installed packages: +# [.NET 6.0+](#tab/dotnet6) ++```csharp +using Microsoft.ApplicationInsights.Channel; +using Microsoft.ApplicationInsights.Extensibility; +using Microsoft.Extensions.DependencyInjection; +using Microsoft.Extensions.Logging; ++using var channel = new InMemoryChannel(); ++try +{ + IServiceCollection services = new ServiceCollection(); + services.Configure<TelemetryConfiguration>(config => config.TelemetryChannel = channel); + services.AddLogging(builder => + { + // Only Application Insights is registered as a logger provider + builder.AddApplicationInsights( + configureTelemetryConfiguration: (config) => config.ConnectionString = "<YourConnectionString>", + configureApplicationInsightsLoggerOptions: (options) => { } + ); + }); ++ IServiceProvider serviceProvider = services.BuildServiceProvider(); + ILogger<Program> logger = serviceProvider.GetRequiredService<ILogger<Program>>(); -```xml -<ItemGroup> - <PackageReference Include="Microsoft.Extensions.DependencyInjection" Version="5.0.0" /> - <PackageReference Include="Microsoft.Extensions.Logging.ApplicationInsights" Version="2.17.0"/> -</ItemGroup> + logger.LogInformation("Logger is working..."); +} +finally +{ + // Explicitly call Flush() followed by Delay, as required in console apps. + // This ensures that even if the application terminates, telemetry is sent to the back end. 
+ channel.Flush(); ++ await Task.Delay(TimeSpan.FromMilliseconds(1000)); +} ``` +# [.NET 5.0](#tab/dotnet5) + ```csharp using Microsoft.ApplicationInsights.Channel; using Microsoft.ApplicationInsights.Extensibility; namespace ConsoleApp ``` ++ ## Frequently asked questions ### Why do some ILogger logs not have the same properties as others? |
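To show how the registered provider is consumed, here is a small, assumed ASP.NET Core controller that takes `ILogger<T>` by constructor injection; once `ApplicationInsightsLoggerProvider` is registered as above, these log calls flow to Application Insights. The controller name and route are illustrative.

```csharp
using System;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;

[ApiController]
[Route("api/[controller]")]
public class ValuesController : ControllerBase
{
    private readonly ILogger<ValuesController> _logger;

    // ApplicationInsightsLoggerProvider supplies this ILogger<T> once the provider
    // is registered, so logged messages and exceptions reach Application Insights.
    public ValuesController(ILogger<ValuesController> logger) => _logger = logger;

    [HttpGet]
    public IActionResult Get()
    {
        _logger.LogInformation("Returning values at {Time}", DateTimeOffset.UtcNow);
        return Ok(new[] { "value1", "value2" });
    }
}
```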
azure-monitor | Ip Collection | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/ip-collection.md | Content-Length: 54 ## Telemetry initializer -If you need a more flexible alternative than `DisableIpMasking`, you can use a [telemetry initializer](./api-filtering-sampling.md#addmodify-properties-itelemetryinitializer) to copy all or part of the IP address to a custom field. --# [.NET](#tab/net) --### ASP.NET or ASP.NET Core +If you need a more flexible alternative than `DisableIpMasking`, you can use a [telemetry initializer](./api-filtering-sampling.md#addmodify-properties-itelemetryinitializer) to copy all or part of the IP address to a custom field. The code for this class is the same across .NET versions. ```csharp using Microsoft.ApplicationInsights.Channel; namespace MyWebApp > [!NOTE] > If you can't access `ISupportProperties`, make sure you're running the latest stable release of the Application Insights SDK. `ISupportProperties` is intended for high cardinality values. `GlobalProperties` is more appropriate for low cardinality values like region name and environment name. -### Enable the telemetry initializer for ASP.NET ++# [.NET 6.0+](#tab/framework) ```csharp-using Microsoft.ApplicationInsights.Extensibility; + using Microsoft.ApplicationInsights.Extensibility; + using CustomInitializer.Telemetry; ++builder.services.AddSingleton<ITelemetryInitializer, CloneIPAddress>(); +``` +# [.NET 5.0](#tab/dotnet5) ++```csharp + using Microsoft.ApplicationInsights.Extensibility; + using CustomInitializer.Telemetry; ++ public void ConfigureServices(IServiceCollection services) +{ + services.AddSingleton<ITelemetryInitializer, CloneIPAddress>(); +} +``` ++# [ASP.NET Framework](#tab/dotnet6) ++```csharp +using Microsoft.ApplicationInsights.Extensibility; namespace MyWebApp { namespace MyWebApp ``` -### Enable the telemetry initializer for ASP.NET Core --You can create your telemetry initializer the same way for ASP.NET Core as for ASP.NET. To enable the initializer, use the following example for reference: --```csharp - using Microsoft.ApplicationInsights.Extensibility; - using CustomInitializer.Telemetry; - public void ConfigureServices(IServiceCollection services) -{ - services.AddSingleton<ITelemetryInitializer, CloneIPAddress>(); -} -``` + # [Node.js](#tab/nodejs) |
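For reference, a telemetry initializer along the lines of the `CloneIPAddress` class registered in the snippets above might look like the following sketch; the `client-ip` custom property name is illustrative. It copies `Location.Ip` into a custom property before the default pipeline masks the IP address.

```csharp
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;

namespace MyWebApp
{
    public class CloneIPAddress : ITelemetryInitializer
    {
        public void Initialize(ITelemetry telemetry)
        {
            // Only telemetry items that expose custom properties can carry the copied value.
            if (telemetry is ISupportProperties propTelemetry &&
                !propTelemetry.Properties.ContainsKey("client-ip"))
            {
                string clientIp = telemetry.Context.Location.Ip;
                if (!string.IsNullOrEmpty(clientIp))
                {
                    propTelemetry.Properties.Add("client-ip", clientIp);
                }
            }
        }
    }
}
```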
azure-monitor | Activity Log Schema | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/activity-log-schema.md | This category contains the record of all create, update, delete, and action oper | eventName | Friendly name of the Administrative event. | | category | Always "Administrative" | | httpRequest |Blob describing the Http Request. Usually includes the "clientRequestId", "clientIpAddress" and "method" (HTTP method. For example, PUT). |-| level |Level of the event. One of the following values: "Critical", "Error", "Warning", and "Informational" | +| level |[Severity level](#severity-level) of the event. | | resourceGroupName |Name of the resource group for the impacted resource. | | resourceProviderName |Name of the resource provider for the impacted resource | | resourceType | The type of resource that was affected by an Administrative event. | This category contains the record of any resource health events that have occurr | eventDataId |Unique identifier of the alert event. | | category | Always "ResourceHealth" | | eventTimestamp |Timestamp when the event was generated by the Azure service processing the request corresponding the event. |-| level |Level of the event. One of the following values: "Critical", or "Informational" (other levels are not supported) | +| level |[Severity level](#severity-level) of the event. | | operationId |A GUID shared among the events that correspond to a single operation. | | operationName |Name of the operation. | | resourceGroupName |Name of the resource group that contains the resource. | This category contains the record of all activations of classic Azure alerts. An | description |Static text description of the alert event. | | eventDataId |Unique identifier of the alert event. | | category | Always "Alert" |-| level |Level of the event. One of the following values: "Critical", "Error", "Warning", and "Informational" | +| level |[Severity level](#severity-level) of the event. | | resourceGroupName |Name of the resource group for the impacted resource if it is a metric alert. For other alert types, it is the name of the resource group that contains the alert itself. | | resourceProviderName |Name of the resource provider for the impacted resource if it is a metric alert. For other alert types, it is the name of the resource provider for the alert itself. | | resourceId | Name of the resource ID for the impacted resource if it is a metric alert. For other alert types, it is the resource ID of the alert resource itself. | This category contains the record of any events related to the operation of the | correlationId | A GUID in the string format. | | description |Static text description of the autoscale event. | | eventDataId |Unique identifier of the autoscale event. |-| level |Level of the event. One of the following values: "Critical", "Error", "Warning", and "Informational" | +| level |[Severity level](#severity-level) of the event. | | resourceGroupName |Name of the resource group for the autoscale setting. | | resourceProviderName |Name of the resource provider for the autoscale setting. | | resourceId |Resource ID of the autoscale setting. | This category contains the record any alerts generated by Microsoft Defender for | eventName |Friendly name of the security event. | | category | Always "Security" | | ID |Unique resource identifier of the security event. |-| level |Level of the event. 
One of the following values: "Critical", "Error", "Warning", or "Informational" | +| level |[Severity level](#severity-level) of the event.| | resourceGroupName |Name of the resource group for the resource. | | resourceProviderName |Name of the resource provider for Microsoft Defender for Cloud. Always "Microsoft.Security". | | resourceType |The type of resource that generated the security event, such as "Microsoft.Security/locations/alerts" | This category contains the record of any new recommendations that are generated | eventDataId | Unique identifier of the recommendation event. | | category | Always "Recommendation" | | ID |Unique resource identifier of the recommendation event. |-| level |Level of the event. One of the following values: "Critical", "Error", "Warning", or "Informational" | +| level |[Severity level](#severity-level) of the event.| | operationName |Name of the operation. Always "Microsoft.Advisor/generateRecommendations/action"| | resourceGroupName |Name of the resource group for the resource. | | resourceProviderName |Name of the resource provider for the resource that this recommendation applies to, such as "MICROSOFT.COMPUTE" | This category contains records of all effect action operations performed by [Azu | category | Declares the activity log event as belonging to "Policy". | | eventTimestamp | Timestamp when the event was generated by the Azure service processing the request corresponding the event. | | ID | Unique identifier of the event on the specific resource. |-| level | Level of the event. Audit uses "Warning" and Deny uses "Error". An auditIfNotExists or deployIfNotExists error can generate "Warning" or "Error" depending on severity. All other Policy events use "Informational". | +| level | [Severity level](#severity-level) of the event. Audit uses "Warning" and Deny uses "Error". An auditIfNotExists or deployIfNotExists error can generate "Warning" or "Error" depending on severity. All other Policy events use "Informational". | | operationId | A GUID shared among the events that correspond to a single operation. | | operationName | Name of the operation and directly correlates to the Policy effect. | | resourceGroupName | Name of the resource group for the evaluated resource. 
| Following is an example of an event using this schema: "records": [ { "time": "2019-01-21T22:14:26.9792776Z",- "resourceId": "/subscriptions/s1/resourceGroups/MSSupportGroup/providers/microsoft.support/supporttickets/115012112305841", + "resourceId": "/subscriptions/s1/resourceGroups/MSSupportGroup/providers/microsoft.support/supporttickets/123456112305841", "operationName": "microsoft.support/supporttickets/write", "category": "Write", "resultType": "Success", Following is an example of an event using this schema: "callerIpAddress": "111.111.111.11", "correlationId": "c776f9f4-36e5-4e0e-809b-c9b3c3fb62a8", "identity": {- "authorization": { - "scope": "/subscriptions/s1/resourceGroups/MSSupportGroup/providers/microsoft.support/supporttickets/115012112305841", - "action": "microsoft.support/supporttickets/write", - "evidence": { - "role": "Subscription Admin" - } - }, + "authorization": { + "scope": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-001/providers/Microsoft.Storage/storageAccounts/ msftstorageaccount", + "action": "Microsoft.Storage/storageAccounts/listAccountSas/action", + "evidence": { + "role": "Azure Eventhubs Service Role", + "roleAssignmentScope": "/subscriptions/00000000-0000-0000-0000-000000000000", + "roleAssignmentId": "123abc2a6c314b0ab03a891259123abc", + "roleDefinitionId": "123456789de042a6a64b29b123456789", + "principalId": "abcdef038c6444c18f1c31311fabcdef", + "principalType": "ServicePrincipal" + } + }, "claims": { "aud": "https://management.core.windows.net/",- "iss": "https://sts.windows.net/72f988bf-86f1-41af-91ab-2d7cd011db47/", + "iss": "https://sts.windows.net/abcde123-86f1-41af-91ab-abcde1234567/", "iat": "1421876371", "nbf": "1421876371", "exp": "1421880271", "ver": "1.0", "http://schemas.microsoft.com/identity/claims/tenantid": "00000000-0000-0000-0000-000000000000", "http://schemas.microsoft.com/claims/authnmethodsreferences": "pwd",- "http://schemas.microsoft.com/identity/claims/objectidentifier": "2468adf0-8211-44e3-95xq-85137af64708", + "http://schemas.microsoft.com/identity/claims/objectidentifier": "123abc45-8211-44e3-95xq-85137af64708", "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn": "admin@contoso.com", "puid": "20030000801A118C",- "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier": "9vckmEGF7zDKk1YzIY8k0t1_EAPaXoeHyPRn6f413zM", + "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier": "9876543210DKk1YzIY8k0t1_EAPaXoeHyPRn6f413zM", "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname": "John", "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname": "Smith", "name": "John Smith",- "groups": "cacfe77c-e058-4712-83qw-f9b08849fd60,7f71d11d-4c41-4b23-99d2-d32ce7aa621c,31522864-0578-4ea0-9gdc-e66cc564d18c", + "groups": "12345678-cacfe77c-e058-4712-83qw-f9b08849fd60,12345678-4c41-4b23-99d2-d32ce7aa621c,12345678-0578-4ea0-9gdc-e66cc564d18c", "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name": " admin@contoso.com",- "appid": "c44b4083-3bq0-49c1-b47d-974e53cbdf3c", + "appid": "12345678-3bq0-49c1-b47d-974e53cbdf3c", "appidacr": "2", "http://schemas.microsoft.com/identity/claims/scope": "user_impersonation", "http://schemas.microsoft.com/claims/authnclassreference": "1" Following is an example of an event using this schema: "location": "global", "properties": { "statusCode": "Created",- "serviceRequestId": "50d5cddb-8ca0-47ad-9b80-6cde2207f97c" + "serviceRequestId": "12345678-8ca0-47ad-9b80-6cde2207f97c" } } ] Following is an example of an event using 
this schema: -- ## Next steps * [Learn more about the Activity Log](./platform-logs-overview.md) * [Create a diagnostic setting to send Activity Log to Log Analytics workspace, Azure storage, or event hubs](./diagnostic-settings.md) |
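The `level` field described in the Activity Log Schema entry above carries each event's severity. The following is a minimal sketch for inspecting it, assuming a placeholder subscription ID and a one-day window; the Activity Log REST API call through `Invoke-AzRestMethod` is an illustration, not taken from the article.

```powershell
# Minimal sketch: list recent activity log events and show their severity level.
# The subscription ID is a placeholder; the time window is an assumption.
$subscriptionId = "00000000-0000-0000-0000-000000000000"
$start  = (Get-Date).AddDays(-1).ToUniversalTime().ToString("o")
$filter = [uri]::EscapeDataString("eventTimestamp ge '$start'")
$path   = "/subscriptions/$subscriptionId/providers/Microsoft.Insights/eventtypes/management/values?api-version=2015-04-01&`$filter=$filter"

$events = (Invoke-AzRestMethod -Path $path -Method GET).Content | ConvertFrom-Json
$events.value |
    Select-Object eventTimestamp, level, @{ n = 'operation'; e = { $_.operationName.value } } |
    Format-Table -AutoSize
```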
azure-monitor | Diagnostic Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/diagnostic-settings.md | This section discusses requirements and limitations. ### Time before telemetry gets to destination -Once you have set up a diagnostic setting, data should start flowing to your selected destination(s) with 90 minutes. If you get no information within 24 hours, then either +Once you have set up a diagnostic setting, data should start flowing to your selected destination(s) within 90 minutes. If you get no information within 24 hours, then either - no logs are being generated or - something is wrong in the underlying routing mechanism. Try disabling the configuration and then reenabling it. Contact Azure support through the Azure portal if you continue to have issues. |
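Before disabling and re-enabling a setting as the Diagnostic Settings entry above suggests, it can help to confirm what the setting currently routes. A minimal sketch follows, assuming a placeholder resource ID; the REST path and API version are assumptions rather than values from the article.

```powershell
# Minimal sketch: list the diagnostic settings on a resource and show what each routes.
# The resource ID below is a placeholder for your own resource.
$resourceId = "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-001/providers/Microsoft.KeyVault/vaults/my-vault"
$path = "$resourceId/providers/Microsoft.Insights/diagnosticSettings?api-version=2021-05-01-preview"

$settings = (Invoke-AzRestMethod -Path $path -Method GET).Content | ConvertFrom-Json
$settings.value | ForEach-Object {
    [pscustomobject]@{
        Name       = $_.name
        Workspace  = $_.properties.workspaceId
        Storage    = $_.properties.storageAccountId
        EventHub   = $_.properties.eventHubAuthorizationRuleId
        Categories = ($_.properties.logs | Where-Object { $_.enabled } | ForEach-Object { $_.category }) -join ", "
    }
}
```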
azure-monitor | Custom Logs Migrate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/custom-logs-migrate.md | If all of these conditions aren't true, then you can use DCR-based log collectio ## Migration procedure If the table that you're targeting with DCR-based log collection fits the criteria above, then you must perform the following steps: -1. Configure your data collection rule (DCR) following procedures at [Send custom logs to Azure Monitor Logs using Resource Manager templates](tutorial-logs-ingestion-api.md) or [Add transformation in workspace data collection rule to Azure Monitor using resource manager templates](tutorial-workspace-transformations-api.md). +1. Configure your data collection rule (DCR) following procedures at [Send data to Azure Monitor using Logs ingestion API (Resource Manager templates)](tutorial-logs-ingestion-api.md) or [Add transformation in workspace data collection rule to Azure Monitor using Resource Manager templates](tutorial-workspace-transformations-api.md). -1. If using the Logs ingestion API, also [configure the data collection endpoint (DCE)](tutorial-logs-ingestion-api.md#create-a-data-collection-endpoint) and the agent or component that will be sending data to the API. +1. If using the Logs ingestion API, also [configure the data collection endpoint (DCE)](tutorial-logs-ingestion-api.md#create-data-collection-endpoint) and the agent or component that will be sending data to the API. 1. Issue the following API call against your table. This call is idempotent, so there will be no effect if the table has already been migrated. |
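The first migration step in the Custom Logs Migrate entry above points to ARM-template tutorials for the DCR (and, for the Logs ingestion API, the DCE). A rough sketch of deploying such a template is shown below; the file name, resource group, and parameter values are placeholders, not values from the article.

```powershell
# Minimal sketch: deploy a DCR or DCE ARM template to a resource group.
# The template file, resource group, and parameter values are placeholders.
New-AzResourceGroupDeployment `
    -ResourceGroupName "rg-001" `
    -TemplateFile ".\dcr-template.json" `
    -TemplateParameterObject @{
        workspaceResourceId = "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-001/providers/Microsoft.OperationalInsights/workspaces/myworkspace"
    }
```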
azure-monitor | Logs Dedicated Clusters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-dedicated-clusters.md | eligible for commitment tier discount. Availability zones aren't currently supported in all regions. New clusters you create in supported regions have availability zones enabled by default. ## Cluster pricing model-Log Analytics Dedicated Clusters use a commitment tier pricing model of at least 500 GB/day. Any usage above the tier level incurs charges based on the per-GB rate of that commitment tier. See [Azure Monitor Logs pricing details](cost-logs.md#dedicated-clusters) for pricing details for dedicated clusters. +Log Analytics Dedicated Clusters use a commitment tier pricing model of at least 500 GB/day. Any usage above the tier level incurs charges based on the per-GB rate of that commitment tier. See [Azure Monitor Logs pricing details](cost-logs.md#dedicated-clusters) for pricing details for dedicated clusters. The commitment tiers have a 31-day commitment period from the time a commitment tier is selected. + ## Required permissions To perform cluster-related actions, you need these permissions: The same as for 'clusters in a resource group', but in subscription scope. ## Update commitment tier in cluster -When the data volume to your linked workspaces changes over time, you can update the Commitment Tier level appropriately. The tier is specified in units of GB and can have values of 500, 1000, 2000 or 5000 GB/day. You don't have to provide the full REST request body, but you must include the sku. +When the data volume to linked workspaces changes over time, you can update the Commitment Tier level appropriately to optimize cost. The tier is specified in units of Gigabytes (GB) and can have values of 500, 1000, 2000 or 5000 GB per day. You don't have to provide the full REST request body, but you must include the sku. ++During the commitment period, you can change to a higher commitment tier, which restarts the 31-day commitment period. You can't move back to pay-as-you-go or to a lower commitment tier until after you finish the commitment period. #### [CLI](#tab/cli) Content-type: application/json ### Unlink a workspace from cluster -You can unlink a workspace from a cluster at any time. The workspace pricing tier is changed to per-GB, data ingested to cluster before the unlink operation remains in the cluster, and new data to workspace get ingested to Log Analytics. You can query data as usual and the service performs cross-cluster queries seamlessly. If cluster was configured with Customer-managed key (CMK), data remains encrypted with your key and accessible, while your key and permissions to Key Vault remain. +You can unlink a workspace from a cluster at any time. The workspace pricing tier is changed to per-GB, data ingested to cluster before the unlink operation remains in the cluster, and new data to workspace get ingested to Log Analytics. ++> [!WARNING] +> Unlinking a workspace does not move workspace data out of the cluster. Any data collected for workspace while linked to cluster, remains in cluster for the retention period defined in workspace, and accessible as long as cluster isn't deleted. ++Queries aren't affected when workspace is unlinked and service performs cross-cluster queries seamlessly. If cluster was configured with Customer-managed key (CMK), data ingested to workspace while was linked, remains encrypted with your key and accessible, while your key and permissions to Key Vault remain. 
> [!NOTE] -> There is a limit of two link operations for a specific workspace within a month to prevent data distribution across clusters. Contact support if you reach limit. +> - There is a limit of two link operations for a specific workspace within a month to prevent data distribution across clusters. Contact support if you reach the limit. +> - Unlinked workspaces are moved to Pay-As-You-Go pricing tier. Use the following commands to unlink a workspace from cluster: N/A You need to have *write* permissions on the cluster resource. -When deleting a cluster, you're losing access to all data, which was ingested from workspaces that were linked to it. This operation isn't reversible. -The cluster's billing stops when cluster is deleted, regardless of the 30-days commitment tier defined in cluster. +Cluster deletion operation should be done with caution, since operation is non-recoverable. All ingested data to cluster from linked workspaces, gets permanently deleted. ++The cluster's billing stops when cluster is deleted, regardless of the 31-days commitment tier defined in cluster. -If you delete your cluster while workspaces are linked, workspaces get automatically unlinked from the cluster before the cluster delete, and new data to workspaces gets ingested to Log Analytics clusters instead. You can query workspace for the time range before it was linked to the cluster, and after the unlink, and the service performs cross-cluster queries seamlessly. +If you delete a cluster that has linked workspaces, workspaces get automatically unlinked from the cluster, workspaces are moved to Pay-As-You-Go pricing tier, and new data to workspaces is ingested to Log Analytics clusters instead. You can query workspace for the time range before it was linked to the cluster, and after the unlink, and the service performs cross-cluster queries seamlessly. > [!NOTE] > - There is a limit of seven clusters per subscription and region, five active, plus two that were deleted in past two weeks.-> - Cluster's name remain reserved for 14 days after deletion, and can't be used for creating a new cluster. +> - Cluster's name remain reserved two weeks after deletion, and can't be used for creating a new cluster. Use the following commands to delete a cluster: Authorization: Bearer <token> - If you create a cluster and get an error "region-name doesn't support Double Encryption for clusters.", you can still create the cluster without Double encryption by adding `"properties": {"isDoubleEncryptionEnabled": false}` in the REST request body. - Double encryption setting can't be changed after the cluster has been created. -- Deleting a linked workspace is permitted while linked to cluster. If you decide to [recover](./delete-workspace.md#recover-a-workspace) a workspace during the [soft-delete](./delete-workspace.md#soft-delete-behavior) period, it returns to previous state and remains linked to cluster.+- Deleting a workspace is permitted while linked to cluster. If you decide to [recover](./delete-workspace.md#recover-a-workspace) the workspace during the [soft-delete](./delete-workspace.md#soft-delete-behavior) period, workspace returns to previous state and remains linked to cluster. ++- During the commitment period, you can change to a higher commitment tier, which restarts the 31-day commitment period. You can't move back to pay-as-you-go or to a lower commitment tier until after you finish the commitment period. ## Troubleshooting |
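For the commitment-tier change described in the Logs Dedicated Clusters entry above, only the `sku` portion of the request body needs to be sent. A minimal sketch using the clusters REST API follows; the subscription, resource group, cluster name, and API version are assumptions to adapt.

```powershell
# Minimal sketch: raise a dedicated cluster's commitment tier to 1000 GB/day.
# Subscription, resource group, cluster name, and API version are placeholders.
$clusterPath = "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-001/providers/Microsoft.OperationalInsights/clusters/my-cluster?api-version=2021-06-01"
$body = @{
    sku = @{
        name     = "CapacityReservation"
        capacity = 1000   # commitment tiers: 500, 1000, 2000, or 5000 GB/day
    }
} | ConvertTo-Json

Invoke-AzRestMethod -Path $clusterPath -Method PATCH -Payload $body
```

Moving to a higher tier restarts the 31-day commitment period, so it's worth confirming the target tier before sending the request.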
azure-monitor | Logs Ingestion Api Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-ingestion-api-overview.md | Title: Logs Ingestion API in Azure Monitor -description: Send data to a Log Analytics workspace by using a REST API. +description: Send data to a Log Analytics workspace using REST API or client libraries. Last updated 06/27/2022 # Logs Ingestion API in Azure Monitor+The Logs Ingestion API in Azure Monitor lets you send data to a Log Analytics workspace using either a [REST API call](#rest-api-call) or [client libraries](#client-libraries). By using this API, you can send data to [supported Azure tables](#supported-tables) or to [custom tables that you create](../logs/create-custom-table.md#create-a-custom-table). You can even [extend the schema of Azure tables with custom columns](../logs/create-custom-table.md#add-or-delete-a-custom-column) to accept additional data. -The Logs Ingestion API in Azure Monitor lets you send data to a Log Analytics workspace from any REST API client. By using this API, you can send data from almost any source to [supported Azure tables](#supported-tables) or to [custom tables that you create](../logs/create-custom-table.md#create-a-custom-table). You can even [extend the schema of Azure tables with custom columns](../logs/create-custom-table.md#add-or-delete-a-custom-column). --> [!NOTE] -> The Logs Ingestion API was previously referred to as the custom logs API. ## Basic operation Your application sends data to a [data collection endpoint (DCE)](../essentials/data-collection-endpoint-overview.md), which is a unique connection point for your subscription. The payload of your API call includes the source data formatted in JSON. The call: - Specifies a [data collection rule (DCR)](../essentials/data-collection-rule-overview.md) that understands the format of the source data.-- Potentially filters and transforms it for the target table.-- Directs it to a specific table in a specific workspace.+- Potentially filters and transforms the data for the target table. +- Directs the data to a specific table in a specific workspace. -You can modify the target table and workspace by modifying the DCR without any change to the REST API call or source data. +You can modify the target table and workspace by modifying the DCR without any change to the API call or source data. :::image type="content" source="media/data-ingestion-api-overview/data-ingestion-api-overview.png" lightbox="media/data-ingestion-api-overview/data-ingestion-api-overview.png" alt-text="Diagram that shows an overview of logs ingestion API."::: > [!NOTE] > To migrate solutions from the [Data Collector API](data-collector-api.md), see [Migrate from Data Collector API and custom fields-enabled tables to DCR-based custom logs](custom-logs-migrate.md). -## Supported tables --### Custom tables +## Components -The Logs Ingestion API can send data to any custom table that you create and to certain Azure tables in your Log Analytics workspace. The target table must exist before you can send data to it. Custom tables must have the `_CL` suffix. +The Log ingestion API requires the following components to be created before you can send data. Each of these components must all be located in the same region. -### Azure tables +| Component | Description | +|:|:| +| Data collection endpoint (DCE) | The DCE provides an endpoint for the application to send to. A single DCE can support multiple DCRs. 
| +| Data collection rule (DCR) | [Data collection rules](../essentials/data-collection-rule-overview.md) define data collected by Azure Monitor and specify how and where that data should be sent or stored. The API call must specify a DCR to use. The DCR must understand the structure of the input data and the structure of the target table. If the two don't match, it can include a [transformation](../essentials/data-collection-transformations.md) to convert the source data to match the target table. You can also use the transformation to filter source data and perform any other calculations or conversions. +| Log Analytics workspace | The Log Analytics workspace contains the tables that will receive the data. The target tables are specific in the DCR. See [Support tables](#supported-tables) for the tables that the ingestion API can send to. | -The Logs Ingestion API can send data to the following Azure tables. Other tables may be added to this list as support for them is implemented. +## Supported tables +The following tables can receive data from the ingestion API. -- [CommonSecurityLog](/azure/azure-monitor/reference/tables/commonsecuritylog)-- [SecurityEvents](/azure/azure-monitor/reference/tables/securityevent)-- [Syslog](/azure/azure-monitor/reference/tables/syslog)-- [WindowsEvents](/azure/azure-monitor/reference/tables/windowsevent)+| Tables | Description | +|:|:| +| Custom tables | The Logs Ingestion API can send data to any custom table that you create in your Log Analytics workspace. The target table must exist before you can send data to it. Custom tables must have the `_CL` suffix. | +| Azure tables | The Logs Ingestion API can send data to the following Azure tables. Other tables may be added to this list as support for them is implemented.<br><br>- [CommonSecurityLog](/azure/azure-monitor/reference/tables/commonsecuritylog)<br>- [SecurityEvents](/azure/azure-monitor/reference/tables/securityevent)<br>- [Syslog](/azure/azure-monitor/reference/tables/syslog)<br>- [WindowsEvents](/azure/azure-monitor/reference/tables/windowsevent) > [!NOTE] > Column names must start with a letter and can consist of up to 45 alphanumeric characters and the characters `_` and `-`. The following are reserved column names: `Type`, `TenantId`, `resource`, `resourceid`, `resourcename`, `resourcetype`, `subscriptionid`, `tenanted`. Custom columns you add to an Azure table must have the suffix `_CF`. Authentication for the Logs Ingestion API is performed at the DCE, which uses st The source data sent by your application is formatted in JSON and must match the structure expected by the DCR. It doesn't necessarily need to match the structure of the target table because the DCR can include a [transformation](../essentials//data-collection-transformations.md) to convert the data to match the table's structure. -## Data collection rule --[Data collection rules](../essentials/data-collection-rule-overview.md) define data collected by Azure Monitor and specify how and where that data should be sent or stored. The REST API call must specify a DCR to use. A single DCE can support multiple DCRs, so you can specify a different DCR for different sources and target tables. +## Client libraries +You can use the following client libraries to send data to the Logs ingestion API. -The DCR must understand the structure of the input data and the structure of the target table. 
If the two don't match, it can use a [transformation](../essentials/data-collection-transformations.md) to convert the source data to match the target table. You can also use the transformation to filter source data and perform any other calculations or conversions. +- [.NET](/dotnet/api/overview/azure/Monitor.Ingestion-readme) +- [Java](/java/api/overview/azure/monitor-ingestion-readme) +- [JavaScript](/javascript/api/overview/azure/monitor-ingestion-readme) +- [Python](/python/api/overview/azure/monitor-ingestion-readme) -## Send data -To send data to Azure Monitor with the Logs Ingestion API, make a POST call to the DCE over HTTP. Details of the call are described in the following sections. +## REST API call +To send data to Azure Monitor with a REST API call, make a POST call to the DCE over HTTP. Details of the call are described in the following sections. ### Endpoint URI- The endpoint URI uses the following format, where the `Data Collection Endpoint` and `DCR Immutable ID` identify the DCE and DCR. `Stream Name` refers to the [stream](../essentials/data-collection-rule-structure.md#custom-logs) in the DCR that should handle the custom data. ``` The endpoint URI uses the following format, where the `Data Collection Endpoint` The body of the call includes the custom data to be sent to Azure Monitor. The shape of the data must be a JSON object or array with a structure that matches the format expected by the stream in the DCR. Additionally, it is important to ensure that the request body is properly encoded in UTF-8 to prevent any issues with data transmission. -## Sample call -For sample data and an API call using the Logs Ingestion API, see either [Send custom logs to Azure Monitor Logs using the Azure portal](tutorial-logs-ingestion-portal.md) or [Send custom logs to Azure Monitor Logs using Resource Manager templates](tutorial-logs-ingestion-api.md). ## Limits and restrictions For limits related to the Logs Ingestion API, see [Azure Monitor service limits] ## Next steps -- [Walk through a tutorial sending custom logs using the Azure portal](tutorial-logs-ingestion-portal.md)+- [Walk through a tutorial configuring the Logs ingestion API using the Azure portal](tutorial-logs-ingestion-portal.md) - [Walk through a tutorial sending custom logs using Resource Manager templates and REST API](tutorial-logs-ingestion-api.md) - Get guidance on using the client libraries for the Logs ingestion API for [.NET](/dotnet/api/overview/azure/Monitor.Ingestion-readme), [Java](/java/api/overview/azure/monitor-ingestion-readme), [JavaScript](/javascript/api/overview/azure/monitor-ingestion-readme), or [Python](/python/api/overview/azure/monitor-ingestion-readme). |
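The ingestion URI described in the Logs Ingestion API overview above needs the DCR's immutable ID rather than its resource name. One way to look it up is sketched below; the DCR resource ID and API version are assumptions to adapt.

```powershell
# Minimal sketch: read a DCR's immutable ID, which the ingestion endpoint URI requires.
# The DCR resource ID below is a placeholder for your own rule.
$dcrResourceId = "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-001/providers/Microsoft.Insights/dataCollectionRules/my-dcr"
$dcr = (Invoke-AzRestMethod -Path "${dcrResourceId}?api-version=2022-06-01" -Method GET).Content | ConvertFrom-Json

$dcr.properties.immutableId                 # value like dcr-00000000000000000000000000000000
$dcr.properties.dataCollectionEndpointId    # resource ID of the DCE the rule is linked to
```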
azure-monitor | Manage Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/manage-access.md | In addition to using the built-in roles for a Log Analytics workspace, you can c ## Set table-level read access -To create a [custom role](../../role-based-access-control/custom-roles.md) that lets specific users or groups read data from specific tables in a workspace: --1. Create a custom role that grants users permission to execute queries in the Log Analytics workspace, based on the built-in Azure Monitor Logs **Reader** role: - - 1. Navigate to your workspace and select **Access control (IAM)** > **Roles**. - - 1. Right-click the **Reader** role and select **Clone**. - - :::image type="content" source="media/manage-access/access-control-clone-role.png" alt-text="Screenshot that shows the Roles tab of the Access control screen with the clone button highlighted for the Reader role." lightbox="media/manage-access/access-control-clone-role.png"::: - - This opens the **Create a custom role** screen. -- 1. On the **Basics** tab of the screen, enter a **Custom role name** value and, optionally, provide a description. -- :::image type="content" source="media/manage-access/manage-access-create-custom-role.png" alt-text="Screenshot that shows the Basics tab of the Create a custom role screen with the Custom role name and Description fields highlighted." lightbox="media/manage-access/manage-access-create-custom-role.png"::: -- 1. Select the **JSON** tab > **Edit**:: - - 1. In the `"actions"` section, add: - - - `Microsoft.OperationalInsights/workspaces/read` - - `Microsoft.OperationalInsights/workspaces/query/read` - - `Microsoft.OperationalInsights/workspaces/analytics/query/action` - - `Microsoft.OperationalInsights/workspaces/search/action` - - 1. In the `"not actions"` section, add `Microsoft.OperationalInsights/workspaces/sharedKeys/read`. -- :::image type="content" source="media/manage-access/manage-access-create-custom-role-json.png" alt-text="Screenshot that shows the JSON tab of the Create a custom role screen with the actions section of the JSON file highlighted." lightbox="media/manage-access/manage-access-create-custom-role-json.png"::: - - 1. Select **Save** > **Review + Create** at the bottom of the screen, and then **Create** on the next page. --1. Assign your custom role to the relevant users or groups: - 1. Select **Access control (AIM)** > **Add** > **Add role assignment**. - - :::image type="content" source="media/manage-access/manage-access-add-role-assignment-button.png" alt-text="Screenshot that shows the Access control screen with the Add role assignment button highlighted." lightbox="media/manage-access/manage-access-add-role-assignment-button.png"::: -- 1. Select the custom role you created and select **Next**. - - :::image type="content" source="media/manage-access/manage-access-add-role-assignment-screen.png" alt-text="Screenshot that shows the Add role assignment screen with a custom role and the Next button highlighted." lightbox="media/manage-access/manage-access-add-role-assignment-screen.png"::: - - - This opens the **Members** tab of the **Add custom role assignment** screen. - - 1. Click **+ Select members** to open the **Select members** screen. - - :::image type="content" source="media/manage-access/manage-access-add-role-assignment-select-members.png" alt-text="Screenshot that shows the Select members screen." lightbox="media/manage-access/manage-access-add-role-assignment-select-members.png"::: - - 1. 
Search for and select the relevant user or group and click **Select**. - 1. Select **Review and assign**. --1. Grant the users or groups read access to specific tables in a workspace by calling the `https://management.azure.com/batch?api-version=2020-06-01` POST API and sending the following details in the request body: -- ```json - { - "requests": [ - { - "content": { - "Id": "<GUID_1>", - "Properties": { - "PrincipalId": "<user_object_ID>", - "PrincipalType": "User", - "RoleDefinitionId": "/providers/Microsoft.Authorization/roleDefinitions/acdd72a7-3385-48ef-bd42-f606fba81ae7", - "Scope": "/subscriptions/<subscription_ID>/resourceGroups/<resource_group_name>/providers/Microsoft.OperationalInsights/workspaces/<workspace_name>/Tables/<table_name>", - "Condition": null, - "ConditionVersion": null - } - }, - "httpMethod": "PUT", - "name": "<GUID_2>", - "requestHeaderDetails": { - "commandName": "Microsoft_Azure_AD." - }, - "url": "/subscriptions/<subscription_ID>/resourceGroups/<resource_group_name>/providers/Microsoft.OperationalInsights/workspaces/<workspace_name>/Tables/<table_name>/providers/Microsoft.Authorization/roleAssignments/<GUID_1>?api-version=2020-04-01-preview" - } - ] - } - ``` -- Where: - - You can generate a GUID for `<GUID 1>` and `<GUID 2>` using any GUID generator. - - `<user_object_ID>` is the object ID of the user to which you want to grant table read access. - - `<subscription_ID>` is the ID of the subscription related to the workspace. - - `<resource_group_name>` is the resource group of the workspace. - - `<workspace_name>` is the name of the workspace. - - `<table_name>` is the name of the table to which you want to assign the user or group permission to read data from. --### Legacy method of setting table-level read access --[Azure custom roles](../../role-based-access-control/custom-roles.md) let you grant access to specific tables in the workspace, although we recommend defining [table-level read access](#set-table-level-read-access) as described above. --Azure custom roles apply to workspaces with either workspace-context or resource-context [access control modes](#access-control-mode) regardless of the user's [access mode](#access-mode). +[Azure custom roles](../../role-based-access-control/custom-roles.md) let you grant specific users or groups access to specific tables in the workspace. Azure custom roles apply to workspaces with either workspace-context or resource-context [access control modes](#access-control-mode) regardless of the user's [access mode](#access-mode). To define access to a particular table, create a [custom role](../../role-based-access-control/custom-roles.md): To define access to a particular table, create a [custom role](../../role-based- * Use `Microsoft.OperationalInsights/workspaces/query/*` to grant access to all tables. * To exclude access to specific tables when you use a wildcard in **Actions**, list the tables excluded tables in the **NotActions** section of the role definition. -#### Examples +### Examples Here are examples of custom role actions to grant and deny access to specific tables. Grant access to all tables except the _SecurityAlert_ table: ], ``` -#### Custom tables +### Custom tables - Custom tables store data you collect from data sources such as [text logs](../agents/data-sources-custom-logs.md) and the [HTTP Data Collector API](data-collector-api.md). To identify the table type, [view table information in Log Analytics](./log-analytics-tutorial.md#view-table-information). 
+Custom tables store data you collect from data sources such as [text logs](../agents/data-sources-custom-logs.md) and the [HTTP Data Collector API](data-collector-api.md). To identify the table type, [view table information in Log Analytics](./log-analytics-tutorial.md#view-table-information). > [!NOTE] > Tables created by the [Logs ingestion API](../essentials/../logs/logs-ingestion-api-overview.md) don't yet support table-level RBAC. - You can't grant access to individual custom log tables, but you can grant access to all custom logs. To create a role with access to all custom log tables, create a custom role by using the following actions: +You can't grant access to individual custom log tables at the table level, but you can grant access to all custom log tables. To create a role with access to all custom log tables, create a custom role by using the following actions: ``` "Actions": [ Some custom logs come from sources that aren't directly associated to a specific For example, if a specific firewall is sending custom logs, create a resource group called *MyFireWallLogs*. Make sure that the API requests contain the resource ID of *MyFireWallLogs*. The firewall log records are then accessible only to users who were granted access to *MyFireWallLogs* or those users with full workspace access. -#### Considerations +### Considerations - If a user is granted global read permission with the standard Reader or Contributor roles that include the _\*/read_ action, it will override the per-table access control and give them access to all log data. - If a user is granted per-table access but no other permissions, they can access log data from the API but not from the Azure portal. To provide access from the Azure portal, use Log Analytics Reader as its base role. |
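To turn the role actions discussed in the Manage Access entry above into an assignable role, one option is a JSON role definition created with Az PowerShell. The sketch below is a minimal example; the role name, the `Heartbeat` table, and all scopes are placeholders rather than values from the article.

```powershell
# Minimal sketch: create a custom role that can query only one table, then assign it.
# Role name, table (Heartbeat), subscription, and workspace are placeholders.
$roleJson = @'
{
  "Name": "Log Analytics Heartbeat Reader Example",
  "Id": null,
  "IsCustom": true,
  "Description": "Can run queries against the Heartbeat table only.",
  "Actions": [
    "Microsoft.OperationalInsights/workspaces/read",
    "Microsoft.OperationalInsights/workspaces/query/read",
    "Microsoft.OperationalInsights/workspaces/query/Heartbeat/read"
  ],
  "NotActions": [],
  "AssignableScopes": [
    "/subscriptions/00000000-0000-0000-0000-000000000000"
  ]
}
'@
Set-Content -Path .\tableReaderRole.json -Value $roleJson
New-AzRoleDefinition -InputFile .\tableReaderRole.json

# Assign the new role to a user at workspace scope.
New-AzRoleAssignment -SignInName "user@contoso.com" `
    -RoleDefinitionName "Log Analytics Heartbeat Reader Example" `
    -Scope "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-001/providers/Microsoft.OperationalInsights/workspaces/myworkspace"
```

As the article notes, per-table access granted this way works for API access; pair it with Log Analytics Reader as a base role if the user also needs the Azure portal experience.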
azure-monitor | Tutorial Logs Ingestion Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-logs-ingestion-api.md | Title: 'Tutorial: Send data to Azure Monitor Logs using REST API (Resource Manager templates)' -description: Tutorial on how to send data to a Log Analytics workspace in Azure Monitor by using the REST API Azure Resource Manager template version. + Title: 'Tutorial: Send data to Azure Monitor Logs with Logs ingestion API (Resource Manager templates)' +description: Tutorial on how to send data to a Log Analytics workspace in Azure Monitor using the Logs ingestion API. Supporting components configured using Resource Manager templates. Previously updated : 02/01/2023 Last updated : 03/20/2023 -# Tutorial: Send data to Azure Monitor Logs using REST API (Resource Manager templates) -The [Logs Ingestion API](logs-ingestion-api-overview.md) in Azure Monitor allows you to send external data to a Log Analytics workspace with a REST API. This tutorial uses Azure Resource Manager templates (ARM templates) to walk through configuration of a new table and a sample application to send log data to Azure Monitor. +# Tutorial: Send data to Azure Monitor using Logs ingestion API (Resource Manager templates) +The [Logs Ingestion API](logs-ingestion-api-overview.md) in Azure Monitor allows you to send custom data to a Log Analytics workspace. This tutorial uses Azure Resource Manager templates (ARM templates) to walk through configuration of the components required to support the API and then provides a sample application using both the REST API and client libraries for [.NET](/dotnet/api/overview/azure/Monitor.Ingestion-readme), [Java](/java/api/overview/azure/monitor-ingestion-readme), [JavaScript](/javascript/api/overview/azure/monitor-ingestion-readme), and [Python](/python/api/overview/azure/monitor-ingestion-readme). > [!NOTE]-> This tutorial uses ARM templates and a REST API to configure custom logs. For a similar tutorial using the Azure portal, see [Tutorial: Send data to Azure Monitor Logs using REST API (Azure portal)](tutorial-logs-ingestion-portal.md). -> +> This tutorial uses ARM templates to configure the components required to support the Logs ingestion API. See [Tutorial: Send data to Azure Monitor Logs with Logs ingestion API (Azure portal)](tutorial-logs-ingestion-portal.md) for a similar tutorial that uses the Azure portal to configure these components. -In this tutorial, you learn to: --> [!div class="checklist"] -> * Create a custom table in a Log Analytics workspace. -> * Create a data collection endpoint (DCE) to receive data over HTTP. -> * Create a data collection rule (DCR) that transforms incoming data to match the schema of the target table. -> * Create a sample application to send custom data to Azure Monitor. --> [!NOTE] -> This tutorial uses PowerShell from Azure Cloud Shell to make REST API calls by using the Azure Monitor **Tables** API and the Azure portal to install ARM templates. You can use any other method to make these calls. -> -> See [.NET](/dotnet/api/overview/azure/Monitor.Ingestion-readme), [Java](/java/api/overview/azure/monitor-ingestion-readme), [JavaScript](/javascript/api/overview/azure/monitor-ingestion-readme), or [Python](/python/api/overview/azure/monitor-ingestion-readme) for guidance on using the Logs ingestion API client libraries for other languages. The steps required to configure the Logs ingestion API are as follows: +1. 
[Create an Azure AD application](#create-azure-ad-application) to authenticate against the API. +3. [Create a data collection endpoint (DCE)](#create-data-collection-endpoint) to receive data. +2. [Create a custom table in a Log Analytics workspace](#create-new-table-in-log-analytics-workspace). This is the table you'll be sending data to. +4. [Create a data collection rule (DCR)](#create-data-collection-rule) to direct the data to the target table. +5. [Give the AD application access to the DCR](#assign-permissions-to-a-dcr). +6. See [Sample code to send data to Azure Monitor using Logs ingestion API](tutorial-logs-ingestion-code.md) for sample code to send data to using the Logs ingestion API. ## Prerequisites To complete this tutorial, you need: To complete this tutorial, you need: - A Log Analytics workspace where you have at least [contributor rights](manage-access.md#azure-rbac). - [Permissions to create DCR objects](../essentials/data-collection-rule-overview.md#permissions) in the workspace. + ## Collect workspace details Start by gathering information that you'll need from your workspace. Go to your workspace in the **Log Analytics workspaces** menu in the Azure porta :::image type="content" source="media/tutorial-logs-ingestion-api/workspace-resource-id.png" lightbox="media/tutorial-logs-ingestion-api/workspace-resource-id.png" alt-text="Screenshot that shows the workspace resource ID."::: -## Configure an application +## Create Azure AD application Start by registering an Azure Active Directory application to authenticate against the API. Any Resource Manager authentication scheme is supported, but this tutorial follows the [Client Credential Grant Flow scheme](../../active-directory/develop/v2-oauth2-client-creds-grant-flow.md). 1. On the **Azure Active Directory** menu in the Azure portal, select **App registrations** > **New registration**. Start by registering an Azure Active Directory application to authenticate again :::image type="content" source="media/tutorial-logs-ingestion-portal/new-app-secret-value.png" lightbox="media/tutorial-logs-ingestion-portal/new-app-secret-value.png" alt-text="Screenshot that shows the secret value for the new app."::: -## Create a new table in a Log Analytics workspace -The custom table must be created before you can send data to it. The table for this tutorial will include three columns, as described in the following schema. The `name`, `type`, and `description` properties are mandatory for each column. The properties `isHidden` and `isDefaultDisplay` both default to `false` if not explicitly specified. Possible data types are `string`, `int`, `long`, `real`, `boolean`, `dateTime`, `guid`, and `dynamic`. --Use the **Tables - Update** API to create the table with the following PowerShell code. --> [!IMPORTANT] -> Custom tables must use a suffix of `_CL`. --1. Select the **Cloud Shell** button in the Azure portal and ensure the environment is set to **PowerShell**. -- :::image type="content" source="media/tutorial-workspace-transformations-api/open-cloud-shell.png" lightbox="media/tutorial-workspace-transformations-api/open-cloud-shell.png" alt-text="Screenshot that shows opening Cloud Shell."::: --1. Copy the following PowerShell code and replace the **Path** parameter with the appropriate values for your workspace in the `Invoke-AzRestMethod` command. Paste it into the Cloud Shell prompt to run it. 
-- ```PowerShell - $tableParams = @' - { - "properties": { - "schema": { - "name": "MyTable_CL", - "columns": [ - { - "name": "TimeGenerated", - "type": "datetime", - "description": "The time at which the data was generated" - }, - { - "name": "AdditionalContext", - "type": "dynamic", - "description": "Additional message properties" - }, - { - "name": "CounterName", - "type": "string", - "description": "Name of the counter" - }, - { - "name": "CounterValue", - "type": "real", - "description": "Value collected for the counter" - } - ] - } - } - } - '@ -- Invoke-AzRestMethod -Path "/subscriptions/{subscription}/resourcegroups/{resourcegroup}/providers/microsoft.operationalinsights/workspaces/{workspace}/tables/MyTable_CL?api-version=2021-12-01-preview" -Method PUT -payload $tableParams - ``` --## Create a data collection endpoint -A [DCE](../essentials/data-collection-endpoint-overview.md) is required to accept the data being sent to Azure Monitor. After you configure the DCE and link it to a DCR, you can send data over HTTP from your application. The DCE must be located in the same region as the Log Analytics workspace where the data will be sent. +## Create data collection endpoint +A [DCE](../essentials/data-collection-endpoint-overview.md) is required to accept the data being sent to Azure Monitor. After you configure the DCE and link it to a DCR, you can send data over HTTP from your application. The DCE must be located in the same region as the DCR and the Log Analytics workspace where the data will be sent. 1. In the Azure portal's search box, enter **template** and then select **Deploy a custom template**. A [DCE](../essentials/data-collection-endpoint-overview.md) is required to accep "location": { "type": "string", "defaultValue": "westus2",- "allowedValues": [ - "westus2", - "eastus2", - "eastus2euap" - ], "metadata": {- "description": "Specifies the location in which to create the Data Collection Endpoint." + "description": "Specifies the location for the Data Collection Endpoint." } } }, A [DCE](../essentials/data-collection-endpoint-overview.md) is required to accep 1. Select **Review + create** and then select **Create** after you review the details. -1. After the DCE is created, select it so that you can view its properties. Note the **Logs ingestion URI** because you'll need it in a later step. +1. Select **JSON View** to view other details for the DCE. Copy the **Resource ID** and the **logsIngestion endpoint** which you'll need in a later step. - :::image type="content" source="media/tutorial-logs-ingestion-api/data-collection-endpoint-overview.png" lightbox="media/tutorial-logs-ingestion-api/data-collection-endpoint-overview.png" alt-text="Screenshot that shows the DCE URI."::: + :::image type="content" source="media/tutorial-logs-ingestion-api/data-collection-endpoint-json.png" lightbox="media/tutorial-logs-ingestion-api/data-collection-endpoint-json.png" alt-text="Screenshot that shows the DCE resource ID."::: -1. Select **JSON View** to view other details for the DCE. Copy the **Resource ID** because you'll need it in a later step. - :::image type="content" source="media/tutorial-logs-ingestion-api/data-collection-endpoint-json.png" lightbox="media/tutorial-logs-ingestion-api/data-collection-endpoint-json.png" alt-text="Screenshot that shows the DCE resource ID."::: +## Create new table in Log Analytics workspace +The custom table must be created before you can send data to it. The table for this tutorial will include five columns shown in the schema below. 
The `name`, `type`, and `description` properties are mandatory for each column. The properties `isHidden` and `isDefaultDisplay` both default to `false` if not explicitly specified. Possible data types are `string`, `int`, `long`, `real`, `boolean`, `dateTime`, `guid`, and `dynamic`. -## Create a data collection rule -The [DCR](../essentials/data-collection-rule-overview.md) defines the schema of data that's being sent to the HTTP endpoint and the [transformation](../essentials/data-collection-transformations.md) that will be applied to it before it's sent to the workspace. The DCR also defines the destination workspace and table the transformed data will be sent to. +> [!NOTE] +> This tutorial uses PowerShell from Azure Cloud Shell to make REST API calls by using the Azure Monitor **Tables** API. You can use any other valid method to make these calls. ++> [!IMPORTANT] +> Custom tables must use a suffix of `_CL`. ++1. Select the **Cloud Shell** button in the Azure portal and ensure the environment is set to **PowerShell**. ++ :::image type="content" source="media/tutorial-workspace-transformations-api/open-cloud-shell.png" lightbox="media/tutorial-workspace-transformations-api/open-cloud-shell.png" alt-text="Screenshot that shows opening Cloud Shell."::: ++1. Copy the following PowerShell code and replace the variables in the **Path** parameter with the appropriate values for your workspace in the `Invoke-AzRestMethod` command. Paste it into the Cloud Shell prompt to run it. ++ ```PowerShell + $tableParams = @' + { + "properties": { + "schema": { + "name": "MyTable_CL", + "columns": [ + { + "name": "TimeGenerated", + "type": "datetime", + "description": "The time at which the data was generated" + }, + { + "name": "Computer", + "type": "string", + "description": "The computer that generated the data" + }, + { + "name": "AdditionalContext", + "type": "dynamic", + "description": "Additional message properties" + }, + { + "name": "CounterName", + "type": "string", + "description": "Name of the counter" + }, + { + "name": "CounterValue", + "type": "real", + "description": "Value collected for the counter" + } + ] + } + } + } + '@ ++ Invoke-AzRestMethod -Path "/subscriptions/{subscription}/resourcegroups/{resourcegroup}/providers/microsoft.operationalinsights/workspaces/{workspace}/tables/MyTable_CL?api-version=2021-12-01-preview" -Method PUT -payload $tableParams + ``` ++## Create data collection rule +The [DCR](../essentials/data-collection-rule-overview.md) defines how the data will be handled once it's received. This includes: ++- Schema of data that's being sent to the endpoint +- [Transformation](../essentials/data-collection-transformations.md) that will be applied to the data before it's sent to the workspace +- Destination workspace and table the transformed data will be sent to 1. In the Azure portal's search box, enter **template** and then select **Deploy a custom template**. :::image type="content" source="media/tutorial-workspace-transformations-api/deploy-custom-template.png" lightbox="media/tutorial-workspace-transformations-api/deploy-custom-template.png" alt-text="Screenshot that shows how to deploy a custom template."::: -1. Select **Build your own template in the editor**. +2. Select **Build your own template in the editor**. 
:::image type="content" source="media/tutorial-workspace-transformations-api/build-custom-template.png" lightbox="media/tutorial-workspace-transformations-api/build-custom-template.png" alt-text="Screenshot that shows how to build a template in the editor."::: -1. Paste the following ARM template into the editor and then select **Save**. +3. Paste the following ARM template into the editor and then select **Save**. :::image type="content" source="media/tutorial-workspace-transformations-api/edit-template.png" lightbox="media/tutorial-workspace-transformations-api/edit-template.png" alt-text="Screenshot that shows how to edit an ARM template."::: Notice the following details in the DCR defined in this template: - - `dataCollectionEndpointId`: Identifies the Resource ID of the data collection endpoint. - - `streamDeclarations`: Defines the columns of the incoming data. - - `destinations`: Specifies the destination workspace. + - `dataCollectionEndpointId`: Resource ID of the data collection endpoint. + - `streamDeclarations`: Column definitions of the incoming data. + - `destinations`: Destination workspace. - `dataFlows`: Matches the stream with the destination workspace and specifies the transformation query and the destination table. The output of the destination query is what will be sent to the destination table. ```json The [DCR](../essentials/data-collection-rule-overview.md) defines the schema of "logAnalytics": [ { "workspaceResourceId": "[parameters('workspaceResourceId')]",- "name": "clv2ws1" + "name": "myworkspace" } ] }, The [DCR](../essentials/data-collection-rule-overview.md) defines the schema of "Custom-MyTableRawData" ], "destinations": [- "clv2ws1" + "myworkspace" ], "transformKql": "source | extend jsonContext = parse_json(AdditionalContext) | project TimeGenerated = Time, Computer, AdditionalContext = jsonContext, CounterName=tostring(jsonContext.CounterName), CounterValue=toreal(jsonContext.CounterValue)", "outputStream": "Custom-MyTable_CL" The [DCR](../essentials/data-collection-rule-overview.md) defines the schema of } ``` -1. On the **Custom deployment** screen, specify a **Subscription** and **Resource group** to store the DCR. Then provide values defined in the template. The values include a **Name** for the DCR and the **Workspace Resource ID** that you collected in a previous step. The **Location** should be the same location as the workspace. The **Region** will already be populated and will be used for the location of the DCR. +4. On the **Custom deployment** screen, specify a **Subscription** and **Resource group** to store the DCR. Then provide values defined in the template. The values include a **Name** for the DCR and the **Workspace Resource ID** that you collected in a previous step. The **Location** should be the same location as the workspace. The **Region** will already be populated and will be used for the location of the DCR. :::image type="content" source="media/tutorial-workspace-transformations-api/custom-deployment-values.png" lightbox="media/tutorial-workspace-transformations-api/custom-deployment-values.png" alt-text="Screenshot that shows how to edit custom deployment values."::: -1. Select **Review + create** and then select **Create** after you review the details. +5. Select **Review + create** and then select **Create** after you review the details. -1. When the deployment is complete, expand the **Deployment details** box and select your DCR to view its details. Select **JSON View**. +6. 
When the deployment is complete, expand the **Deployment details** box and select your DCR to view its details. Select **JSON View**. :::image type="content" source="media/tutorial-workspace-transformations-api/data-collection-rule-details.png" lightbox="media/tutorial-workspace-transformations-api/data-collection-rule-details.png" alt-text="Screenshot that shows DCR details."::: After the DCR has been created, the application needs to be given permission to :::image type="content" source="media/tutorial-logs-ingestion-portal/add-role-assignment-save.png" lightbox="media/tutorial-logs-ingestion-portal/add-role-assignment-save.png" alt-text="Screenshot that shows saving the DCR role assignment."::: -## Send sample data -The following PowerShell code sends data to the endpoint by using HTTP REST fundamentals. --> [!NOTE] -> This tutorial uses commands that require PowerShell v7.0 or later. Make sure your local installation of PowerShell is up to date or execute this script by using Azure Cloud Shell. --1. Run the following PowerShell command, which adds a required assembly for the script. -- ```powershell - Add-Type -AssemblyName System.Web - ``` --1. Replace the parameters in the **Step 0** section with values from the resources that you created. You might also want to replace the sample data in the **Step 2** section with your own. -- ```powershell - ################## - ### Step 0: Set parameters required for the rest of the script. - ################## - #information needed to authenticate to AAD and obtain a bearer token - $tenantId = "00000000-0000-0000-0000-000000000000"; #Tenant ID the data collection endpoint resides in - $appId = "00000000-0000-0000-0000-000000000000"; #Application ID created and granted permissions - $appSecret = "00000000000000000000000"; #Secret created for the application -- #information needed to send data to the DCR endpoint - $dcrImmutableId = "dcr-000000000000000"; #the immutableId property of the DCR object - $dceEndpoint = "https://my-dcr-name.westus2-1.ingest.monitor.azure.com"; #the endpoint property of the Data Collection Endpoint object - $streamName = "Custom-MyTableRawData"; #name of the stream in the DCR that represents the destination table -- ################## - ### Step 1: Obtain a bearer token used later to authenticate against the DCE. - ################## - $scope= [System.Web.HttpUtility]::UrlEncode("https://monitor.azure.com//.default") - $body = "client_id=$appId&scope=$scope&client_secret=$appSecret&grant_type=client_credentials"; - $headers = @{"Content-Type"="application/x-www-form-urlencoded"}; - $uri = "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/token" -- $bearerToken = (Invoke-RestMethod -Uri $uri -Method "Post" -Body $body -Headers $headers).access_token -- ################## - ### Step 2: Load up some sample data. - ################## - $currentTime = Get-Date ([datetime]::UtcNow) -Format O - $staticData = @" - [ - { - "Time": "$currentTime", - "Computer": "Computer1", - "AdditionalContext": { - "InstanceName": "user1", - "TimeZone": "Pacific Time", - "Level": 4, - "CounterName": "AppMetric1", - "CounterValue": 15.3 - } - }, - { - "Time": "$currentTime", - "Computer": "Computer2", - "AdditionalContext": { - "InstanceName": "user2", - "TimeZone": "Central Time", - "Level": 3, - "CounterName": "AppMetric1", - "CounterValue": 23.5 - } - } - ] - "@; -- ################## - ### Step 3: Send the data to Log Analytics via the DCE. 
- ################## - $body = $staticData; - $headers = @{"Authorization"="Bearer $bearerToken";"Content-Type"="application/json"}; - $uri = "$dceEndpoint/dataCollectionRules/$dcrImmutableId/streams/$($streamName)?api-version=2021-11-01-preview" -- $uploadResponse = Invoke-RestMethod -Uri $uri -Method "Post" -Body $body -Headers $headers - ``` -- > [!NOTE] - > If you receive an `Unable to find type [System.Web.HttpUtility].` error, run the last line in section 1 of the script for a fix and execute it. Executing it uncommented as part of the script won't resolve the issue. The command must be executed separately. --1. After you execute this script, you should see an `HTTP - 204` response. In a few minutes, the data arrives to your Log Analytics workspace. --## Troubleshooting -This section describes different error conditions you might receive and how to correct them. --### Script returns error code 403 -Ensure that you have the correct permissions for your application to the DCR. You might also need to wait up to 30 minutes for permissions to propagate. --### Script returns error code 413 or warning of TimeoutExpired with the message ReadyBody_ClientConnectionAbort in the response -The message is too large. The maximum message size is currently 1 MB per call. --### Script returns error code 429 -API limits have been exceeded. For information on the current limits, see [Service limits for the Logs Ingestion API](../service-limits.md#logs-ingestion-api). --### Script returns error code 503 -Ensure that you have the correct permissions for your application to the DCR. You might also need to wait up to 30 minutes for permissions to propagate. -### You don't receive an error, but data doesn't appear in the workspace -The data might take some time to be ingested, especially if this is the first time data is being sent to a particular table. It shouldn't take longer than 15 minutes. +## Sample code +See [Sample code to send data to Azure Monitor using Logs ingestion API](tutorial-logs-ingestion-code.md) for sample code using the components created in this tutorial. -### IntelliSense in Log Analytics doesn't recognize the new table -The cache that drives IntelliSense might take up to 24 hours to update. ## Next steps |
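The role-assignment step in the Tutorial Logs Ingestion Api entry above can also be scripted rather than done in the portal. A minimal sketch follows, assuming the **Monitoring Metrics Publisher** role (the role commonly granted on a DCR for ingestion; confirm the role the tutorial specifies); the application ID and DCR resource ID are placeholders.

```powershell
# Minimal sketch: grant the Azure AD application's service principal a role on the DCR.
# The application ID, DCR resource ID, and role name are placeholders/assumptions.
$appId         = "00000000-0000-0000-0000-000000000000"   # application (client) ID
$dcrResourceId = "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-001/providers/Microsoft.Insights/dataCollectionRules/my-dcr"

# Resolve the service principal behind the app registration.
$sp = Get-AzADServicePrincipal -ApplicationId $appId

New-AzRoleAssignment -ObjectId $sp.Id `
    -RoleDefinitionName "Monitoring Metrics Publisher" `
    -Scope $dcrResourceId
```

Role assignments can take up to about 30 minutes to take effect, which is worth remembering before troubleshooting 403 responses from the ingestion endpoint.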
azure-monitor | Tutorial Logs Ingestion Code | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-logs-ingestion-code.md | + + Title: 'Sample code to send data to Azure Monitor using Logs ingestion API' +description: Sample code using REST API and client libraries for Logs ingestion API in Azure Monitor. + Last updated : 03/21/2023+++# Sample code to send data to Azure Monitor using Logs ingestion API +This article provides sample code using the [Logs ingestion API](logs-ingestion-api-overview.md). Each sample requires the following components to be created before the code is run. See [Tutorial: Send data to Azure Monitor using Logs ingestion API (Resource Manager templates)](tutorial-logs-ingestion-api.md) for a complete walkthrough of creating these components configured to support each of these samples. +++- Custom table in a Log Analytics workspace +- Data collection endpoint (DCE) to receive data +- Data collection rule (DCR) to direct the data to the target table +- AD application with access to the DCR ++## Sample code ++## [PowerShell](#tab/powershell) ++The following PowerShell code sends data to the endpoint by using HTTP REST fundamentals. ++> [!NOTE] +> This sample requires PowerShell v7.0 or later. ++1. Run the following sample PowerShell command, which adds a required assembly for the script. ++ ```powershell + Add-Type -AssemblyName System.Web + ``` ++1. Replace the parameters in the **Step 0** section with values from your application, DCE, and DCR. You might also want to replace the sample data in the **Step 2** section with your own. ++ ```powershell + ### Step 0: Set variables required for the rest of the script. + + # information needed to authenticate to AAD and obtain a bearer token + $tenantId = "00000000-0000-0000-00000000000000000" #Tenant ID the data collection endpoint resides in + $appId = " 000000000-0000-0000-00000000000000000" #Application ID created and granted permissions + $appSecret = "0000000000000000000000000000000000000000" #Secret created for the application + + # information needed to send data to the DCR endpoint + $dceEndpoint = "https://logs-ingestion-rzmk.eastus2-1.ingest.monitor.azure.com" #the endpoint property of the Data Collection Endpoint object + $dcrImmutableId = "dcr-00000000000000000000000000000000" #the immutableId property of the DCR object + $streamName = "Custom-MyTableRawData" #name of the stream in the DCR that represents the destination table + + + ### Step 1: Obtain a bearer token used later to authenticate against the DCE. + + $scope= [System.Web.HttpUtility]::UrlEncode("https://monitor.azure.com//.default") + $body = "client_id=$appId&scope=$scope&client_secret=$appSecret&grant_type=client_credentials"; + $headers = @{"Content-Type"="application/x-www-form-urlencoded"}; + $uri = "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/token" + + $bearerToken = (Invoke-RestMethod -Uri $uri -Method "Post" -Body $body -Headers $headers).access_token + + + ### Step 2: Create some sample data. 
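+    # Note: the property names in the sample payload below (Time, Computer, AdditionalContext) are assumed
+    # to match the columns declared for the Custom-MyTableRawData stream in your DCR. If your DCR declares a
+    # different stream schema, adjust the payload to match it.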
+ + $currentTime = Get-Date ([datetime]::UtcNow) -Format O + $staticData = @" + [ + { + "Time": "$currentTime", + "Computer": "Computer1", + "AdditionalContext": { + "InstanceName": "user1", + "TimeZone": "Pacific Time", + "Level": 4, + "CounterName": "AppMetric1", + "CounterValue": 15.3 + } + }, + { + "Time": "$currentTime", + "Computer": "Computer2", + "AdditionalContext": { + "InstanceName": "user2", + "TimeZone": "Central Time", + "Level": 3, + "CounterName": "AppMetric1", + "CounterValue": 23.5 + } + } + ] + "@; + + + ### Step 3: Send the data to the Log Analytics workspace via the DCE. + + $body = $staticData; + $headers = @{"Authorization"="Bearer $bearerToken";"Content-Type"="application/json"}; + $uri = "$dceEndpoint/dataCollectionRules/$dcrImmutableId/streams/$($streamName)?api-version=2021-11-01-preview" + + $uploadResponse = Invoke-RestMethod -Uri $uri -Method "Post" -Body $body -Headers $headers + ``` ++ > [!NOTE] + > If you receive an `Unable to find type [System.Web.HttpUtility].` error, run the last line in section 1 of the script for a fix and execute it. Executing it uncommented as part of the script won't resolve the issue. The command must be executed separately. ++3. Execute the script, and you should see an `HTTP - 204` response. The data should arrive in your Log Analytics workspace within a few minutes. +++## [Python](#tab/python) ++The following sample code uses the [Azure Monitor Ingestion client library for Python](/python/api/overview/azure/monitor-ingestion-readme). +++1. Use [pip](https://pypi.org/project/pip/) to install the Azure Monitor Ingestion and Azure Identity client libraries for Python. The Azure Identity library is required for the authentication used in this sample. ++ ```bash + pip install azure-monitor-ingestion + pip install azure-identity + ``` ++2. Create the following environment variables with values for your Azure AD application. These values are used by `DefaultAzureCredential` in the Azure Identity library. ++ - AZURE_TENANT_ID + - AZURE_CLIENT_ID + - AZURE_CLIENT_SECRET ++3. Replace the variables in the following sample code with values from your DCE and DCR. You might also want to replace the sample data in the **Step 2** section with your own. 
+++ ```python + # information needed to send data to the DCR endpoint + dce_endpoint = "https://logs-ingestion-rzmk.eastus2-1.ingest.monitor.azure.com" # ingestion endpoint of the Data Collection Endpoint object + dcr_immutableid = "dcr-00000000000000000000000000000000" # immutableId property of the Data Collection Rule + stream_name = "Custom-MyTableRawData" #name of the stream in the DCR that represents the destination table + + # Import required modules + import os + from azure.identity import DefaultAzureCredential + from azure.monitor.ingestion import LogsIngestionClient + from azure.core.exceptions import HttpResponseError + + credential = DefaultAzureCredential() + client = LogsIngestionClient(endpoint=dce_endpoint, credential=credential, logging_enable=True) + + body = [ + { + "Time": "2023-03-12T15:04:48.423211Z", + "Computer": "Computer1", + "AdditionalContext": { + "InstanceName": "user1", + "TimeZone": "Pacific Time", + "Level": 4, + "CounterName": "AppMetric2", + "CounterValue": 35.3 + } + }, + { + "Time": "2023-03-12T15:04:48.794972Z", + "Computer": "Computer2", + "AdditionalContext": { + "InstanceName": "user2", + "TimeZone": "Central Time", + "Level": 3, + "CounterName": "AppMetric2", + "CounterValue": 43.5 + } + } + ] + + try: + client.upload(rule_id=dcr_immutableid, stream_name=stream_name, logs=body) + except HttpResponseError as e: + print(f"Upload failed: {e}") + ``` ++3. Execute the code, and the data should arrive in your Log Analytics workspace within a few minutes. ++## [JavaScript](#tab/javascript) ++The following sample code uses the [Azure Monitor Ingestion client library for JavaScript](/javascript/api/overview/azure/monitor-ingestion-readme). +++1. Use [npm](https://www.npmjs.com/) to install the Azure Monitor Ingestion and Azure Identity client libraries for JavaScript. The Azure Identity library is required for the authentication used in this sample. +++ ```bash + npm install --save @azure/monitor-ingestion + npm install --save @azure/identity + ``` ++2. Create the following environment variables with values for your Azure AD application. These values are used by `DefaultAzureCredential` in the Azure Identity library. ++ - AZURE_TENANT_ID + - AZURE_CLIENT_ID + - AZURE_CLIENT_SECRET ++3. Replace the variables in the following sample code with values from your DCE and DCR. You might also want to replace the sample data with your own. 
++ ```javascript + const { isAggregateLogsUploadError, DefaultAzureCredential } = require("@azure/identity"); + const { LogsIngestionClient } = require("@azure/monitor-ingestion"); + + require("dotenv").config(); + + async function main() { + const logsIngestionEndpoint = "https://logs-ingestion-rzmk.eastus2-1.ingest.monitor.azure.com"; + const ruleId = "dcr-00000000000000000000000000000000"; + const streamName = "Custom-MyTableRawData"; + const credential = new DefaultAzureCredential(); + const client = new LogsIngestionClient(logsIngestionEndpoint, credential); + const logs = [ + { + Time: "2021-12-08T23:51:14.1104269Z", + Computer: "Computer1", + AdditionalContext: { + "InstanceName": "user1", + "TimeZone": "Pacific Time", + "Level": 4, + "CounterName": "AppMetric2", + "CounterValue": 35.3 + } + }, + { + Time: "2021-12-08T23:51:14.1104269Z", + Computer: "Computer2", + AdditionalContext: { + "InstanceName": "user2", + "TimeZone": "Pacific Time", + "Level": 4, + "CounterName": "AppMetric2", + "CounterValue": 43.5 + } + }, + ]; + try{ + await client.upload(ruleId, streamName, logs); + } + catch(e){ + let aggregateErrors = isAggregateLogsUploadError(e) ? e.errors : []; + if (aggregateErrors.length > 0) { + console.log("Some logs have failed to complete ingestion"); + for (const error of aggregateErrors) { + console.log(`Error - ${JSON.stringify(error.cause)}`); + console.log(`Log - ${JSON.stringify(error.failedLogs)}`); + } + } else { + console.log(e); + } + } + } + + main().catch((err) => { + console.error("The sample encountered an error:", err); + process.exit(1); + }); + ``` ++4. Execute the code, and the data should arrive in your Log Analytics workspace within a few minutes. ++## [Java](#tab/java) +The following sample code uses the [Azure Monitor Ingestion client library for Java](/java/api/overview/azure/monitor-ingestion-readme). +++1. Include the Logs ingestion package and the `azure-identity` package from the [Azure Identity library](https://github.com/Azure/azure-sdk-for-java/tree/azure-monitor-ingestion_1.0.1/sdk/identity/azure-identity). The Azure Identity library is required for the authentication used in this sample. ++ > [!NOTE] + > See the Maven repositories for [Microsoft Azure Client Library For Identity](https://mvnrepository.com/artifact/com.azure/azure-identity) and [Microsoft Azure SDK For Azure Monitor Data Ingestion](https://mvnrepository.com/artifact/com.azure/azure-monitor-ingestion) for the latest versions. ++ ```xml + <dependency> + <groupId>com.azure</groupId> + <artifactId>azure-monitor-ingestion</artifactId> + <version>{get-latest-version}</version> + <dependency> + <groupId>com.azure</groupId> + <artifactId>azure-identity</artifactId> + <version>{get-latest-version}</version> + </dependency> + ``` +++3. Create the following environment variables with values for your Azure AD application. These values are used by `DefaultAzureCredential` in the Azure Identity library. ++ - AZURE_TENANT_ID + - AZURE_CLIENT_ID + - AZURE_CLIENT_SECRET ++4. Replace the variables in the following sample code with values from your DCE and DCR. You may also want to replace the sample data with your own. 
++ ```java + import com.azure.identity.DefaultAzureCredentialBuilder; + import com.azure.monitor.ingestion.models.LogsUploadException; ++ import java.time.OffsetDateTime; + import java.util.Arrays; + import java.util.List; ++ public class LogsUploadSample { + public static void main(String[] args) { + + LogsIngestionClient client = new LogsIngestionClientBuilder() + .endpoint("https://logs-ingestion-rzmk.eastus2-1.ingest.monitor.azure.com") + .credential(new DefaultAzureCredentialBuilder().build()) + .buildClient(); + + List<Object> dataList = Arrays.asList( + new Object() { + OffsetDateTime time = OffsetDateTime.now(); + String computer = "Computer1"; + Object additionalContext = new Object() { + String instanceName = "user4"; + String timeZone = "Pacific Time"; + int level = 4; + String counterName = "AppMetric1"; + double counterValue = 15.3; + }; + }, + new Object() { + OffsetDateTime time = OffsetDateTime.now(); + String computer = "Computer2"; + Object additionalContext = new Object() { + String instanceName = "user2"; + String timeZone = "Central Time"; + int level = 3; + String counterName = "AppMetric2"; + double counterValue = 43.5; + }; + }); + + try { + client.upload("dcr-00000000000000000000000000000000", "Custom-MyTableRawData", dataList); + System.out.println("Logs uploaded successfully"); + } catch (LogsUploadException exception) { + System.out.println("Failed to upload logs "); + exception.getLogsUploadErrors() + .forEach(httpError -> System.out.println(httpError.getMessage())); + } + } + } + ``` ++5. Execute the code, and the data should arrive in your Log Analytics workspace within a few minutes. +++## [.NET](#tab/net) ++The following script uses the [Azure Monitor Ingestion client library for .NET](/dotnet/api/overview/azure/Monitor.Ingestion-readme). ++1. Install the Azure Monitor Ingestion client library and the Azure Identity library. The Azure Identity library is required for the authentication used in this sample. + + ```dotnetcli + dotnet add package Azure.Identity + dotnet add package Azure.Monitor.Ingestion + ``` ++3. Create the following environment variables with values for your Azure AD application. These values are used by `DefaultAzureCredential` in the Azure Identity library. ++ - AZURE_TENANT_ID + - AZURE_CLIENT_ID + - AZURE_CLIENT_SECRET ++2. Replace the variables in the following sample code with values from your DCE and DCR. You may also want to replace the sample data with your own. 
++ ```csharp + using Azure; + using Azure.Core; + using Azure.Identity; + using Azure.Monitor.Ingestion; ++ // Initialize variables + var endpoint = new Uri("https://logs-ingestion-rzmk.eastus2-1.ingest.monitor.azure.com"); + var ruleId = "dcr-00000000000000000000000000000000"; + var streamName = "Custom-MyTableRawData"; + + // Create credential and client + var credential = new DefaultAzureCredential(); + LogsIngestionClient client = new(endpoint, credential); + + DateTimeOffset currentTime = DateTimeOffset.UtcNow; + + // Use BinaryData to serialize instances of an anonymous type into JSON + BinaryData data = BinaryData.FromObjectAsJson( +     new[] { +         new +         { +             Time = currentTime, +             Computer = "Computer1", +             AdditionalContext = new +             { +                 InstanceName = "user1", +                 TimeZone = "Pacific Time", +                 Level = 4, +                 CounterName = "AppMetric1", +                 CounterValue = 15.3 +             } +         }, +         new +         { +             Time = currentTime, +             Computer = "Computer2", +             AdditionalContext = new +             { +                 InstanceName = "user2", +                 TimeZone = "Central Time", +                 Level = 3, +                 CounterName = "AppMetric1", +                 CounterValue = 23.5 +             } +         }, +     }); + + // Upload logs + try + { +     Response response = client.Upload(ruleId, streamName, RequestContent.Create(data)); + } + catch (Exception ex) + { +     Console.WriteLine("Upload failed with Exception " + ex.Message); + } + + // Logs can also be uploaded in a List + var entries = new List<Object>(); + for (int i = 0; i < 10; i++) + { +     entries.Add( +         new { +             Time = recordingNow, +             Computer = "Computer" + i.ToString(), +             AdditionalContext = i +         } +     ); + } + + // Make the request + LogsUploadOptions options = new LogsUploadOptions(); + bool isTriggered = false; + options.UploadFailed += Options_UploadFailed; + await client.UploadAsync(TestEnvironment.DCRImmutableId, TestEnvironment.StreamName, entries, options).ConfigureAwait(false); + + Task Options_UploadFailed(LogsUploadFailedEventArgs e) + { +     isTriggered = true; +     Console.WriteLine(e.Exception); +     foreach (var log in e.FailedLogs) +     { +         Console.WriteLine(log); +     } +     return Task.CompletedTask; + } + ``` ++3. Execute the code, and the data should arrive in your Log Analytics workspace within a few minutes. ++++++## Troubleshooting +This section describes different error conditions you might receive and how to correct them. ++### Script returns error code 403 +Ensure that you have the correct permissions for your application to the DCR. You might also need to wait up to 30 minutes for permissions to propagate. ++### Script returns error code 413 or warning of TimeoutExpired with the message ReadyBody_ClientConnectionAbort in the response +The message is too large. The maximum message size is currently 1 MB per call. ++### Script returns error code 429 +API limits have been exceeded. The limits are currently set to 500 MB of data per minute for both compressed and uncompressed data and 300,000 requests per minute. Retry after the duration listed in the `Retry-After` header in the response. ++### Script returns error code 503 +Ensure that you have the correct permissions for your application to the DCR. 
You might also need to wait up to 30 minutes for permissions to propagate. ++### You don't receive an error, but data doesn't appear in the workspace +The data might take some time to be ingested, especially if this is the first time data is being sent to a particular table. It shouldn't take longer than 15 minutes. ++### IntelliSense in Log Analytics doesn't recognize the new table +The cache that drives IntelliSense might take up to 24 hours to update. ++## Next steps ++- [Learn more about data collection rules](../essentials/data-collection-rule-overview.md) +- [Learn more about writing transformation queries](../essentials/data-collection-transformations.md) + |
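If the PowerShell sample hits the 429 throttling response described in the troubleshooting section above, one option is to let PowerShell retry the call itself. A minimal sketch, assuming PowerShell 7 (which the sample already requires) and reusing the `$uri`, `$body`, and `$headers` variables from the script:

```powershell
# Retry the upload on transient failures such as HTTP 429.
# -MaximumRetryCount and -RetryIntervalSec are built into Invoke-RestMethod in PowerShell 6.1 and later.
$uploadResponse = Invoke-RestMethod -Uri $uri -Method "Post" -Body $body -Headers $headers `
    -MaximumRetryCount 3 -RetryIntervalSec 30
```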
azure-monitor | Tutorial Logs Ingestion Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-logs-ingestion-portal.md | Title: 'Tutorial: Send data to Azure Monitor Logs by using a REST API (Azure portal)' -description: Tutorial on how to send data to a Log Analytics workspace in Azure Monitor by using a REST API (Azure portal version). + Title: 'Tutorial: Send data to Azure Monitor Logs with Logs ingestion API (Azure portal)' +description: Tutorial on how to send data to a Log Analytics workspace in Azure Monitor using the Logs ingestion API. Supporting components are configured using the Azure portal. + Last updated : 03/20/2023 - Previously updated : 07/15/2022+ -# Tutorial: Send data to Azure Monitor Logs by using a REST API (Azure portal) -The [Logs Ingestion API](logs-ingestion-api-overview.md) in Azure Monitor allows you to send external data to a Log Analytics workspace with a REST API. This tutorial uses the Azure portal to walk through configuration of a new table and a sample application to send log data to Azure Monitor. +# Tutorial: Send data to Azure Monitor Logs with Logs ingestion API (Azure portal) +The [Logs Ingestion API](logs-ingestion-api-overview.md) in Azure Monitor allows you to send external data to a Log Analytics workspace with a REST API. This tutorial uses the Azure portal to walk through configuration of a new table and a sample application to send log data to Azure Monitor. The sample application collects entries from a text file and > [!NOTE]-> This tutorial uses the Azure portal. For a similar tutorial that uses Azure Resource Manager templates, see [Tutorial: Send data to Azure Monitor Logs by using a REST API (Resource Manager templates)](tutorial-logs-ingestion-api.md). +> This tutorial uses the Azure portal to configure the components to support the Logs ingestion API. See [Tutorial: Send data to Azure Monitor using Logs ingestion API (Resource Manager templates)](tutorial-logs-ingestion-api.md) for a similar tutorial that uses Azure Resource Manager templates to configure these components and that has sample code for client libraries for [.NET](/dotnet/api/overview/azure/Monitor.Ingestion-readme), [Java](/java/api/overview/azure/monitor-ingestion-readme), [JavaScript](/javascript/api/overview/azure/monitor-ingestion-readme), and [Python](/python/api/overview/azure/monitor-ingestion-readme). -In this tutorial, you learn to: -> [!div class="checklist"] -> * Create a custom table in a Log Analytics workspace. -> * Create a data collection endpoint (DCE) to receive data over HTTP. -> * Create a data collection rule (DCR) that transforms incoming data to match the schema of the target table. -> * Create a sample application to send custom data to Azure Monitor. +The steps required to configure the Logs ingestion API are as follows: ++1. [Create an Azure AD application](#create-azure-ad-application) to authenticate against the API. +2. [Create a data collection endpoint (DCE)](#create-data-collection-endpoint) to receive data. +3. [Create a custom table in a Log Analytics workspace](#create-new-table-in-log-analytics-workspace). This is the table you'll be sending data to. As part of this process, you create a data collection rule (DCR) to direct the data to the target table. +4. [Give the Azure AD application access to the DCR](#assign-permissions-to-the-dcr). +5. [Use sample code to send data using the Logs ingestion API](#send-sample-data). 
-> [!NOTE] -> This tutorial uses PowerShell to call the Logs ingestion API. See [.NET](/dotnet/api/overview/azure/Monitor.Ingestion-readme), [Java](/java/api/overview/azure/monitor-ingestion-readme), [JavaScript](/javascript/api/overview/azure/monitor-ingestion-readme), or [Python](/python/api/overview/azure/monitor-ingestion-readme) for guidance on using the client libraries for other languages. ## Prerequisites To complete this tutorial, you need: In this tutorial, you'll use a PowerShell script to send sample Apache access lo After the configuration is finished, you'll send sample data from the command line, and then inspect the results in Log Analytics. -## Configure the application +## Create Azure AD application Start by registering an Azure Active Directory application to authenticate against the API. Any Resource Manager authentication scheme is supported, but this tutorial will follow the [Client Credential Grant Flow scheme](../../active-directory/develop/v2-oauth2-client-creds-grant-flow.md). 1. On the **Azure Active Directory** menu in the Azure portal, select **App registrations** > **New registration**. Start by registering an Azure Active Directory application to authenticate again :::image type="content" source="media/tutorial-logs-ingestion-portal/new-app-secret-value.png" lightbox="media/tutorial-logs-ingestion-portal/new-app-secret-value.png" alt-text="Screenshot that shows the secret value for the new app."::: -## Create a data collection endpoint -A [data collection endpoint](../essentials/data-collection-endpoint-overview.md) is required to accept the data from the script. After you configure the DCE and link it to a DCR, you can send data over HTTP from your application. The DCE must be located in the same region as the VM being associated, but it does not need to be in the same region as the Log Analytics workspace where the data will be sent or the data collection rule being used. +## Create data collection endpoint +A [data collection endpoint](../essentials/data-collection-endpoint-overview.md) is required to accept the data from the script. After you configure the DCE and link it to a DCR, you can send data over HTTP from your application. The DCE does not need to be in the same region as the Log Analytics workspace where the data will be sent or the data collection rule being used. 1. To create a new DCE, go to the **Monitor** menu in the Azure portal. Select **Data Collection Endpoints** and then select **Create**. A [data collection endpoint](../essentials/data-collection-endpoint-overview.md) :::image type="content" source="media/tutorial-logs-ingestion-portal/data-collection-endpoint-uri.png" lightbox="media/tutorial-logs-ingestion-portal/data-collection-endpoint-uri.png" alt-text="Screenshot that shows DCE URI."::: -## Generate sample data -> [!IMPORTANT] -> You must be using PowerShell version 7.2 or later. --The following PowerShell script generates sample data to configure the custom table and sends sample data to the logs ingestion API to test the configuration. --1. Run the following PowerShell command, which adds a required assembly for the script: -- ```powershell - Add-Type -AssemblyName System.Web - ``` --1. Update the values of `$tenantId`, `$appId`, and `$appSecret` with the values you noted for **Directory (tenant) ID**, **Application (client) ID**, and secret **Value**. Then save it with the file name *LogGenerator.ps1*. 
-- ``` PowerShell - param ([Parameter(Mandatory=$true)] $Log, $Type="file", $Output, $DcrImmutableId, $DceURI, $Table) - ################ - ##### Usage - ################ - # LogGenerator.ps1 - # -Log <String> - Log file to be forwarded - # [-Type "file|API"] - Whether the script should generate sample JSON file or send data via - # API call. Data will be written to a file by default. - # [-Output <String>] - Path to resulting JSON sample - # [-DcrImmutableId <string>] - DCR immutable ID - # [-DceURI] - Data collection endpoint URI - # [-Table] - The name of the custom log table, including "_CL" suffix --- ##### >>>> PUT YOUR VALUES HERE <<<<< - # Information needed to authenticate to Azure Active Directory and obtain a bearer token - $tenantId = "<put tenant ID here>"; #the tenant ID in which the Data Collection Endpoint resides - $appId = "<put application ID here>"; #the app ID created and granted permissions - $appSecret = "<put secret value here>"; #the secret created for the above app - never store your secrets in the source code - ##### >>>> END <<<<< --- $file_data = Get-Content $Log - if ("file" -eq $Type) { - ############ - ## Convert plain log to JSON format and output to .json file - ############ - # If not provided, get output file name - if ($null -eq $Output) { - $Output = Read-Host "Enter output file name" - }; -- # Form file payload - $payload = @(); - $records_to_generate = [math]::min($file_data.count, 500) - for ($i=0; $i -lt $records_to_generate; $i++) { - $log_entry = @{ - # Define the structure of log entry, as it will be sent - Time = Get-Date ([datetime]::UtcNow) -Format O - Application = "LogGenerator" - RawData = $file_data[$i] - } - $payload += $log_entry - } - # Write resulting payload to file - New-Item -Path $Output -ItemType "file" -Value ($payload | ConvertTo-Json) -Force -- } else { - ############ - ## Send the content to the data collection endpoint - ############ - if ($null -eq $DcrImmutableId) { - $DcrImmutableId = Read-Host "Enter DCR Immutable ID" - }; -- if ($null -eq $DceURI) { - $DceURI = Read-Host "Enter data collection endpoint URI" - } - if ($null -eq $Table) { - $Table = Read-Host "Enter the name of custom log table" - } -- ## Obtain a bearer token used to authenticate against the data collection endpoint - $scope = [System.Web.HttpUtility]::UrlEncode("https://monitor.azure.com//.default") - $body = "client_id=$appId&scope=$scope&client_secret=$appSecret&grant_type=client_credentials"; - $headers = @{"Content-Type" = "application/x-www-form-urlencoded" }; - $uri = "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/token" - $bearerToken = (Invoke-RestMethod -Uri $uri -Method "Post" -Body $body -Headers $headers).access_token -- ## Generate and send some data - foreach ($line in $file_data) { - # We are going to send log entries one by one with a small delay - $log_entry = @{ - # Define the structure of log entry, as it will be sent - Time = Get-Date ([datetime]::UtcNow) -Format O - Application = "LogGenerator" - RawData = $line - } - # Sending the data to Log Analytics via the DCR! 
- $body = $log_entry | ConvertTo-Json -AsArray; - $headers = @{"Authorization" = "Bearer $bearerToken"; "Content-Type" = "application/json" }; - $uri = "$DceURI/dataCollectionRules/$DcrImmutableId/streams/Custom-$Table"+"?api-version=2021-11-01-preview"; - $uploadResponse = Invoke-RestMethod -Uri $uri -Method "Post" -Body $body -Headers $headers; -- # Let's see how the response looks - Write-Output $uploadResponse - Write-Output "" -- # Pausing for 1 second before processing the next entry - Start-Sleep -Seconds 1 - } - } - ``` --1. Copy the sample log data from [sample data](#sample-data) or copy your own Apache log data into a file called `sample_access.log`. --1. To read the data in the file and create a JSON file called `data_sample.json` that you can send to the logs ingestion API, run: -- ```PowerShell - .\LogGenerator.ps1 -Log "sample_access.log" -Type "file" -Output "data_sample.json" - ``` --## Add a custom log table +## Create new table in Log Analytics workspace Before you can send data to the workspace, you need to create the custom table where the data will be sent. 1. Go to the **Log Analytics workspaces** menu in the Azure portal and select **Tables**. The tables in the workspace will appear. Select **Create** > **New custom log (DCR based)**. The final step is to give the application permission to use the DCR. Any applica :::image type="content" source="media/tutorial-logs-ingestion-portal/add-role-assignment-save.png" lightbox="media/tutorial-logs-ingestion-portal/add-role-assignment-save.png" alt-text="Screenshot that shows saving the DCR role assignment."::: +## Generate sample data ++The following PowerShell script generates sample data to configure the custom table and sends sample data to the logs ingestion API to test the configuration. ++1. Run the following PowerShell command, which adds a required assembly for the script: ++ ```powershell + Add-Type -AssemblyName System.Web + ``` ++1. Update the values of `$tenantId`, `$appId`, and `$appSecret` with the values you noted for **Directory (tenant) ID**, **Application (client) ID**, and secret **Value**. Then save it with the file name *LogGenerator.ps1*. ++ ``` PowerShell + param ([Parameter(Mandatory=$true)] $Log, $Type="file", $Output, $DcrImmutableId, $DceURI, $Table) + ################ + ##### Usage + ################ + # LogGenerator.ps1 + # -Log <String> - Log file to be forwarded + # [-Type "file|API"] - Whether the script should generate sample JSON file or send data via + # API call. Data will be written to a file by default. 
+ # [-Output <String>] - Path to resulting JSON sample + # [-DcrImmutableId <string>] - DCR immutable ID + # [-DceURI] - Data collection endpoint URI + # [-Table] - The name of the custom log table, including "_CL" suffix +++ ##### >>>> PUT YOUR VALUES HERE <<<<< + # Information needed to authenticate to Azure Active Directory and obtain a bearer token + $tenantId = "<put tenant ID here>"; #the tenant ID in which the Data Collection Endpoint resides + $appId = "<put application ID here>"; #the app ID created and granted permissions + $appSecret = "<put secret value here>"; #the secret created for the above app - never store your secrets in the source code + ##### >>>> END <<<<< +++ $file_data = Get-Content $Log + if ("file" -eq $Type) { + ############ + ## Convert plain log to JSON format and output to .json file + ############ + # If not provided, get output file name + if ($null -eq $Output) { + $Output = Read-Host "Enter output file name" + }; ++ # Form file payload + $payload = @(); + $records_to_generate = [math]::min($file_data.count, 500) + for ($i=0; $i -lt $records_to_generate; $i++) { + $log_entry = @{ + # Define the structure of log entry, as it will be sent + Time = Get-Date ([datetime]::UtcNow) -Format O + Application = "LogGenerator" + RawData = $file_data[$i] + } + $payload += $log_entry + } + # Write resulting payload to file + New-Item -Path $Output -ItemType "file" -Value ($payload | ConvertTo-Json) -Force ++ } else { + ############ + ## Send the content to the data collection endpoint + ############ + if ($null -eq $DcrImmutableId) { + $DcrImmutableId = Read-Host "Enter DCR Immutable ID" + }; ++ if ($null -eq $DceURI) { + $DceURI = Read-Host "Enter data collection endpoint URI" + } ++ if ($null -eq $Table) { + $Table = Read-Host "Enter the name of custom log table" + } ++ ## Obtain a bearer token used to authenticate against the data collection endpoint + $scope = [System.Web.HttpUtility]::UrlEncode("https://monitor.azure.com//.default") + $body = "client_id=$appId&scope=$scope&client_secret=$appSecret&grant_type=client_credentials"; + $headers = @{"Content-Type" = "application/x-www-form-urlencoded" }; + $uri = "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/token" + $bearerToken = (Invoke-RestMethod -Uri $uri -Method "Post" -Body $body -Headers $headers).access_token ++ ## Generate and send some data + foreach ($line in $file_data) { + # We are going to send log entries one by one with a small delay + $log_entry = @{ + # Define the structure of log entry, as it will be sent + Time = Get-Date ([datetime]::UtcNow) -Format O + Application = "LogGenerator" + RawData = $line + } + # Sending the data to Log Analytics via the DCR! + $body = $log_entry | ConvertTo-Json -AsArray; + $headers = @{"Authorization" = "Bearer $bearerToken"; "Content-Type" = "application/json" }; + $uri = "$DceURI/dataCollectionRules/$DcrImmutableId/streams/Custom-$Table"+"?api-version=2021-11-01-preview"; + $uploadResponse = Invoke-RestMethod -Uri $uri -Method "Post" -Body $body -Headers $headers; ++ # Let's see how the response looks + Write-Output $uploadResponse + Write-Output "" ++ # Pausing for 1 second before processing the next entry + Start-Sleep -Seconds 1 + } + } + ``` ++1. Copy the sample log data from [sample data](#sample-data) or copy your own Apache log data into a file called `sample_access.log`. ++1. 
To read the data in the file and create a JSON file called `data_sample.json` that you can send to the logs ingestion API, run: ++ ```PowerShell + .\LogGenerator.ps1 -Log "sample_access.log" -Type "file" -Output "data_sample.json" + ``` ++ ## Send sample data Allow at least 30 minutes for the configuration to take effect. You might also experience increased latency for the first few entries, but this activity should normalize. Allow at least 30 minutes for the configuration to take effect. You might also e 1. From Log Analytics, query your newly created table to verify that data arrived and that it's transformed properly. ## Troubleshooting-This section describes different error conditions you might receive and how to correct them. --### Script returns error code 403 -Ensure that you have the correct permissions for your application to the DCR. You might also need to wait up to 30 minutes for permissions to propagate. --### Script returns error code 413 or warning of TimeoutExpired with the message ReadyBody_ClientConnectionAbort in the response -The message is too large. The maximum message size is currently 1 MB per call. --### Script returns error code 429 -API limits have been exceeded. The limits are currently set to 500 MB of data per minute for both compressed and uncompressed data and 300,000 requests per minute. Retry after the duration listed in the `Retry-After` header in the response. --### Script returns error code 503 -Ensure that you have the correct permissions for your application to the DCR. You might also need to wait up to 30 minutes for permissions to propagate. --### You don't receive an error, but data doesn't appear in the workspace -The data might take some time to be ingested, especially if this is the first time data is being sent to a particular table. It shouldn't take longer than 15 minutes. --### IntelliSense in Log Analytics doesn't recognize the new table -The cache that drives IntelliSense might take up to 24 hours to update. +See the [Troubleshooting](tutorial-logs-ingestion-code.md#troubleshooting) section of the sample code article if your code doesn't work as expected. ## Sample data You can use the following sample data for the tutorial. Alternatively, you can use your own data if you have your own Apache access logs. |
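After the table, DCE, DCR, and role assignment above are in place, the same *LogGenerator.ps1* script can also send the log entries straight to the Logs ingestion API by using its `API` mode instead of writing a JSON file. A minimal sketch; the DCR immutable ID, DCE URI, and table name are placeholders that you'd replace with values from your own resources:

```powershell
# Send each line of the sample log to the DCE (the script posts one entry per second).
.\LogGenerator.ps1 -Log "sample_access.log" -Type "API" `
    -DcrImmutableId "dcr-00000000000000000000000000000000" `
    -DceURI "https://my-dce.eastus2-1.ingest.monitor.azure.com" `
    -Table "MyTable_CL"
```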
azure-monitor | Profiler Azure Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-azure-functions.md | In this article, you'll use the Azure portal to: ||-| |APPINSIGHTS_PROFILERFEATURE_VERSION | 1.0.0 | |DiagnosticServices_EXTENSION_VERSION | ~3 |+|APPINSIGHTS_INSTRUMENTATIONKEY | Unique value from your App Insights resource. | ## Add app settings to your Azure Functions app |
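The table above lists the app settings that the article adds through the Azure portal. If you'd rather script that step, the sketch below uses the `Update-AzFunctionAppSetting` cmdlet from the Az.Functions PowerShell module; the function app name, resource group, and instrumentation key are placeholders.

```powershell
# Add the Application Insights Profiler settings to an existing Azure Functions app.
# Replace the placeholders with your function app name, resource group, and instrumentation key.
Update-AzFunctionAppSetting -Name "my-function-app" -ResourceGroupName "my-resource-group" -AppSetting @{
    "APPINSIGHTS_PROFILERFEATURE_VERSION"  = "1.0.0"
    "DiagnosticServices_EXTENSION_VERSION" = "~3"
    "APPINSIGHTS_INSTRUMENTATIONKEY"       = "00000000-0000-0000-0000-000000000000"
}
```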
azure-netapp-files | Large Volumes Requirements Considerations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/large-volumes-requirements-considerations.md | To enroll in the preview for large volumes, use the [large volumes preview sign- * You can't create a large volume with application volume groups. * Large volumes aren't currently supported with cross-zone replication. * The SDK for large volumes isn't currently available. -* Large volumes aren't currently supported with cool access tier. +* Currently, large volumes are not suited for database (HANA, Oracle, SQL Server, etc.) data and log volumes. For database workloads requiring more than a single volume's throughput limit, consider deploying multiple regular volumes. * Throughput ceilings for the three performance tiers (Standard, Premium, and Ultra) of large volumes are based on the existing 100-TiB maximum capacity targets. You're able to grow to 500 TiB with the throughput ceiling per the following table: | Capacity tier | Volume size (TiB) | Throughput (MiB/s) | |
azure-netapp-files | Performance Linux Mount Options | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/performance-linux-mount-options.md | For example, [Deploy a SAP HANA scale-out system with standby node on Azure VMs ``` sudo vi /etc/fstab # Add the following entries-10.23.1.5:/HN1-data-mnt00001 /hana/data/HN1/mnt00001 nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0 -10.23.1.6:/HN1-data-mnt00002 /hana/data/HN1/mnt00002 nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0 -10.23.1.4:/HN1-log-mnt00001 /hana/log/HN1/mnt00001 nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0 -10.23.1.6:/HN1-log-mnt00002 /hana/log/HN1/mnt00002 nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0 -10.23.1.4:/HN1-shared/shared /hana/shared nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0 +10.23.1.5:/HN1-data-mnt00001 /hana/data/HN1/mnt00001 nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,noatime,_netdev,sec=sys 0 0 +10.23.1.6:/HN1-data-mnt00002 /hana/data/HN1/mnt00002 nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,noatime,_netdev,sec=sys 0 0 +10.23.1.4:/HN1-log-mnt00001 /hana/log/HN1/mnt00001 nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,noatime,_netdev,sec=sys 0 0 +10.23.1.6:/HN1-log-mnt00002 /hana/log/HN1/mnt00002 nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,noatime,_netdev,sec=sys 0 0 +10.23.1.4:/HN1-shared/shared /hana/shared nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,noatime,_netdev,sec=sys 0 0 ``` For example, SAS Viya recommends a 256-KiB read and write sizes, and [SAS GRID](https://communities.sas.com/t5/Administration-and-Deployment/Azure-NetApp-Files-A-shared-file-system-to-use-with-SAS-Grid-on/m-p/606973/highlight/true#M17740) limits the `r/wsize` to 64 KiB while augmenting read performance with increased read-ahead for the NFS mounts. See [NFS read-ahead best practices for Azure NetApp Files](performance-linux-nfs-read-ahead.md) for details. |
azure-resource-manager | Concepts Custom Role Definition | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/concepts-custom-role-definition.md | - Title: Overview of custom role definitions -description: Describes the concept of creating custom role definitions for managed applications. --- Previously updated : 09/16/2019---# Custom role definition artifact in Azure Managed Applications --Custom role definition is an optional artifact in managed applications. It's used to determine what permissions the managed application needs to perform its functions. --This article provides an overview of the custom role definition artifact and its capabilities. --## Custom role definition artifact --You need to name the custom role definition artifact customRoleDefinition.json. Place it at the same level as createUiDefinition.json and mainTemplate.json in the .zip package that creates a managed application definition. To learn how to create the .zip package and publish a managed application definition, see [Publish a managed application definition.](publish-service-catalog-app.md) --## Custom role definition schema --The customRoleDefinition.json file has a top-level `roles` property that's an array of roles. These roles are the permissions that the managed application needs to function. Currently, only built-in roles are allowed, but you can specify multiple roles. A role can be referenced by the ID of the role definition or by the role name. --Sample JSON for custom role definition: --```json -{ - "contentVersion": "0.0.0.1", - "roles": [ - { - "properties": { - "roleName": "Contributor" - } - }, - { - "id": "acdd72a7-3385-48ef-bd42-f606fba81ae7" - }, - { - "id": "/providers/Microsoft.Authorization/roledefinitions/9980e02c-c2be-4d73-94e8-173b1dc7cf3c" - } - ] -} -``` --## Roles --A role is composed of either a `$.properties.roleName` or an `id`: --```json -{ - "id": null, - "properties": { - "roleName": "Contributor" - } -} -``` --> [!NOTE] -> You can use either the `id` or `roleName` field. Only one is required. These fields are used to look up the role definition that should be applied. If both are supplied, the `id` field will be used. --|Property|Required?|Description| -|||| -|id|Yes|The ID of the built-in role. You can use the full ID or just the GUID.| -|roleName|Yes|The name of the built-in role.| |
azure-web-pubsub | Quickstart Use Client Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/quickstart-use-client-sdk.md | Title: Quickstart - Pub-sub using Azure Web PubSub client SDK + Title: Quickstart - Create a client using the Azure Web PubSub client SDK (preview) description: Quickstart showing how to use the Azure Web PubSub client SDK Previously updated : 02/7/2023 Last updated : 03/15/2023 ms.devlang: azurecli -# Quickstart: Pub-sub using Web PubSub client SDK +# Quickstart: Create a client using the Azure Web PubSub client SDK (preview) ++Get started with the Azure Web PubSub client SDK for .NET or JavaScript to create a Web PubSub client +that: ++* connects to a Web PubSub service instance +* subscribes a Web PubSub group. +* publishes a message to the Web PubSub group. ++[API reference documentation](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/web-pubsub/web-pubsub-client) | [Library source code](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/web-pubsub/web-pubsub-client/src) | [Package (JavaScript npm)](https://www.npmjs.com/package/@azure/web-pubsub-client) | [Samples](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/web-pubsub/web-pubsub-client/samples-dev/helloworld.ts) ++[API reference documentation](https://github.com/Azure/azure-sdk-for-net#azure-sdk-for-net) | [Library source code](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/webpubsub/Azure.Messaging.WebPubSub.Client/src) | [Package (NuGet)](https://www.nuget.org/packages/Azure.Messaging.WebPubSub.Client) | [Samples](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/webpubsub/Azure.Messaging.WebPubSub.Client/samples) -This quickstart guide demonstrates how to construct a project using the Web PubSub client SDK, connect to the Web PubSub, subscribe to messages from groups and publish a message to the group. > [!NOTE] -> The client SDK is still in preview version. The interface may change in later versions +> The client SDK is still in preview version. The interface may change in later versions. ## Prerequisites -- A Web PubSub instance. If you haven't created one, you can follow the guidance: [Create a Web PubSub instance from Azure portal](./howto-develop-create-instance.md)+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - A file editor such as Visual Studio Code. -Install the dependencies for the language you're using: +## Setting up ++### Create an Azure Web PubSub service instance ++1. In the Azure portal **Home** page, select **Create a resource**. +1. In the **Search the Marketplace** box, enter *Web PubSub*. +1. Select **Web PubSub** from the results. +1. Select **Create**. +1. Create a new resource group + 1. Select **Create new**. + 1. Enter the name and select **OK**. +1. Enter a **Resource Name** for the service instance. +1. Select **Pricing tier**. You can choose **Free** for testing. +1. Select **Create**, then **Create** again to confirm the new service instance. +1. Once deployment is complete, select **Go to resource**. ++### Generate the client URL ++A client uses a Client Access URL to connect and authenticate with the service, which follows a pattern of `wss://<service_name>.webpubsub.azure.com/client/hubs/<hub_name>?access_token=<token>`. ++To give the client permission to send messages to and join a specific group, you must generate a Client Access URL with the **Send To Groups** and **Join/Leave Groups** permissions. ++1. 
In the Azure portal, go to your Web PubSub service resource page. +1. Select **Keys** from the menu. +1. In the **Client URL Generator** section: + 1. Select **Send To Groups** + 1. Select **Allow Sending To Specific Groups**. + 1. Enter *group1* in the **Group Name** field and select **Add**. + 1. Select **Join/Leave Groups**. + 1. Select **Allow Joining/Leaving Specific Groups**. + 1. Enter *group1* in the **Group Name** field and select **Add**. + 1. Copy and save the **Client Access URL** for use later in this article. +++### Install programming language ++This quickstart uses the Azure Web PubSub client SDK for JavaScript or C#. Open a terminal window and install the dependencies for the language you're using. # [JavaScript](#tab/javascript) Install both the .NET Core SDK and dotnet runtime. -## Add the Web PubSub client SDK +### Install the package ++Install the Azure Web PubSub client SDK for the language you're using. # [JavaScript](#tab/javascript) -The SDK is available as an [npm module](https://www.npmjs.com/package/@azure/web-pubsub-client) +The SDK is available as an [npm module](https://www.npmjs.com/package/@azure/web-pubsub-client). ++Open a terminal window and install the Web PubSub client SDK using the following command. ```bash npm install @azure/web-pubsub-client ``` +Note that the SDK is available as an [npm module](https://www.npmjs.com/package/@azure/web-pubsub-client). + # [C#](#tab/csharp) -The SDK is available as an [NuGet packet](https://www.nuget.org/packages/Azure.Messaging.WebPubSub.Client) +Open a terminal window to create your project and install the Web PubSub client SDK. ```bash+# create project directory +mkdir webpubsub-client ++# change to the project directory +cd webpubsub-client + # Add a new .net project dotnet new console dotnet new console dotnet add package Azure.Messaging.WebPubSub.Client --prerelease ``` +Note that the SDK is available as a [NuGet packet](https://www.nuget.org/packages/Azure.Messaging.WebPubSub.Client). + -## Connect to Web PubSub +## Code examples -A client uses a Client Access URL to connect and authenticate with the service, which follows a pattern of `wss://<service_name>.webpubsub.azure.com/client/hubs/<hub_name>?access_token=<token>`. A client can have a few ways to obtain the Client Access URL. For this quick start, you can copy and paste one from Azure portal shown as the following diagram. - +### Create and connect to the Web PubSub service -As shown in the diagram above, the client has the permissions to send messages to and join a specific group named `group1`. +This code example creates a Web PubSub client that connects to the Web PubSub service instance. A client uses a Client Access URL to connect and authenticate with the service. It's best practice to not hard code the Client Access URL in your code. In the production world, we usually set up an app server to return this URL on demand. +For this example, you can use the Client Access URL you generated in the portal. # [JavaScript](#tab/javascript) -Add a file with name `index.js` and add following codes: +In the terminal window, create a new directory for your project and change to that directory. ++```bash +mkdir webpubsub-client +cd webpubsub-client +``` ++Create a file with name `index.js` and enter following code: ```javascript const { WebPubSubClient } = require("@azure/web-pubsub-client");-// Instantiates the client object. <client-access-url> is copied from Azure portal mentioned above. 
-const client = new WebPubSubClient("<client-access-url>"); +// Instantiates the client object. env.process.env.WebPubSubClientURL +// env.process.env.WebPubSubClientURL is the Client Access URL from Azure portal +const client = new WebPubSubClient(env.process.env.WebPubSubClientURL); ``` # [C#](#tab/csharp) -Edit the `Program.cs` file and add following codes: +Edit the `Program.cs` file and add following code: ```csharp using Azure.Messaging.WebPubSub.Clients;-// Instantiates the client object. <client-access-uri> is copied from Azure portal mentioned above. -var client = new WebPubSubClient(new Uri("<client-access-uri>")); +// Client Access URL from Azure portal +var clientURL = Environment.GetEnvironmentVariable("WebPubSubClientURL")); +// Instantiates the client object. +var client = new WebPubSubClient(new Uri(clientURL)); ``` -## Subscribe to a group +### Subscribe to a group -To receive message from groups, you need to add a callback to handle messages you receive from the group, and you must join the group before you can receive messages from it. The following code subscribes the client to a group called `group1`. +To receive message from a group, you need to subscribe to the group and add a callback to handle messages you receive from the group. The following code subscribes the client to a group called `group1`. # [JavaScript](#tab/javascript) +Add this following code to the `index.js` file: + ```javascript // callback to group messages. client.on("group-message", (e) => { client.joinGroup("group1"); # [C#](#tab/csharp) +Add the following code to the `Program.cs` file: + ```csharp // callback to group messages. client.GroupMessageReceived += eventArgs => await client.StartAsync(); // join a group to subscribe message from the group await client.JoinGroupAsync("group1"); ```+ -## Publish a message to a group +### Publish a message to a group -Then you can send messages to the group and as the client has joined the group before, you can receive the message you've sent. +After your client has subscribed to the group, it can send messages to and receive the message from the group. # [JavaScript](#tab/javascript) +Add the following code to the `index.js` file: + ```javascript client.sendToGroup("group1", "Hello World", "text"); ``` # [C#](#tab/csharp) +Add the following code to the `Program.cs` file: + ```csharp await client.SendToGroupAsync("group1", BinaryData.FromString("Hello World"), WebPubSubDataType.Text); ``` -## Repository and Samples +## Run the code ++Run the client in your terminal. To verify the client is sending and receiving messages, you can open a second terminal and start the client from the same directory. You can see the message you sent from the second client in the first client's terminal window. ++# [JavaScript](#tab/javascript) ++To start the client go the terminal and run the following command. Replace the `<Client Access URL>` with the client access URL you copied from the portal. ++```bash +export WebPubSubClientURL="<Client Access URL>" +node index.js +``` ++# [C#](#tab/csharp) ++To start the client, run the following command in your terminal replacing the `<client-access-url>` with the client access URL you copied from the portal: ++```bash +export WebPubSubClientURL="<Client Access URL>" +dotnet run <client-access-url> +``` ++++## Clean up resources ++To delete the resources you created in this quickstart, you can delete the resource group you created. Go to the Azure portal, select your resource group, and select **Delete resource group**. 
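The clean-up step above uses the Azure portal. If you'd rather remove everything from a shell, a minimal sketch using Azure PowerShell; the resource group name is a placeholder for the group you created earlier:

```powershell
# Deletes the resource group and everything in it, including the Web PubSub resource.
Remove-AzResourceGroup -Name "my-resource-group"
```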
++## Next steps ++To learn more the Web PubSub service client SDKs, see the following resources: # [JavaScript](#tab/javascript) await client.SendToGroupAsync("group1", BinaryData.FromString("Hello World"), We [.NET SDK repository on GitHub](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/webpubsub/Azure.Messaging.WebPubSub.Client) [Log streaming sample](https://github.com/Azure/azure-webpubsub/tree/main/samples/csharp/logstream/sdk)----## Next steps --This quickstart provides you with a basic idea of how to connect to the Web PubSub with client SDK and how to subscribe to group messages and publish messages to groups. - |
azure-web-pubsub | Quickstart Use Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/quickstart-use-sdk.md | Azure Web PubSub helps you manage WebSocket clients. This quickstart shows you h ## Prerequisites - An Azure subscription, if you don't have one, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- a Bash and PowerShell command shell. The Python, Javascript and Java samples require a Bash command shell.+- a Bash and PowerShell command shell. The Python, JavaScript and Java samples require a Bash command shell. - A file editor such as VSCode. - Azure CLI: [install the Azure CLI](/cli/azure/install-azure-cli) Install both the .NET Core SDK and the `aspnetcore` and dotnet runtime. ## 1. Setup -To sign in to Azure from the CLI, run the following command and follow the prompts to complete the authentication process. If you are using Cloud Shell it is not necessary to sign in. +To sign in to Azure from the CLI, run the following command and follow the prompts to complete the authentication process. If you're using Cloud Shell, it isn't necessary to sign in. ```azurecli az login The connection to the Web PubSub service is established when you see a JSON mess ## 4. Publish messages using service SDK You'll use the Azure Web PubSub SDK to publish a message to all the clients connected to the hub. -You can choose between C#, JavaScript, Python and Java. The dependencies for each language are installed in the steps for that language. Note that Python, JavaScript and Java require a bash shell to run the commands in this quickstart. +You can choose between C#, JavaScript, Python and Java. The dependencies for each language are installed in the steps for that language. Python, JavaScript and Java require a bash shell to run the commands in this quickstart. ### Set up the project to publish messages 1. Open a new command shell for this project.-1. Save the connection string from the client shell: +1. Save the connection string from the client shell. Replace the `<your_connection_string>` placeholder with the connection string you displayed in an earlier step. # [Bash](#tab/bash) ```azurecli- Connection_String="<your_connection_string>" + connection_string="<your_connection_string>" ``` # [Azure PowerShell](#tab/azure-powershell) |
backup | Backup Center Actions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-center-actions.md | To stop protection, navigate to the Backup center and select the **Backup Instan  - [Learn more](backup-azure-manage-vms.md#stop-protecting-a-vm) about stopping backup for Azure Virtual Machines.-- [Learn more](manage-azure-managed-disks.md#stop-protection-preview) about stopping backup for a disk.+- [Learn more](manage-azure-managed-disks.md#stop-protection) about stopping backup for a disk. - [Learn more](manage-azure-database-postgresql.md#stop-protection) about stopping backup for Azure Database for PostgreSQL Server. ## Resume backup |
backup | Blob Backup Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/blob-backup-overview.md | You won't incur any management charges or instance fee when using operational ba ### Vaulted backup (preview) -You won't incur backup storage charges or instance fees during the preview. However, you'll incur the source side cost, [associated with Object replication](/storage/blobs/object-replication-overview#billing), on the backed-up source account. +You won't incur backup storage charges or instance fees during the preview. However, you'll incur the source side cost, [associated with Object replication](../storage/blobs/object-replication-overview.md#billing), on the backed-up source account. ## Next steps |
backup | Manage Azure Managed Disks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/manage-azure-managed-disks.md | Title: Manage Azure Managed Disks description: Learn about managing Azure Managed Disk from the Azure portal. Previously updated : 01/20/2023 Last updated : 03/27/2023 After you trigger the restore operation, the backup service creates a job for tr This section describes several Azure Backup supported management operations that make it easy to manage Azure Managed disks. -### Stop Protection (Preview) +### Stop Protection There are three ways by which you can stop protecting an Azure Disk: There are three ways by which you can stop protecting an Azure Disk: 1. From the list of disk backup instances, select the instance that you want to retain. -1. Select **Stop Backup (Preview)**. +1. Select **Stop Backup**. :::image type="content" source="./media/manage-azure-managed-disks/select-disk-backup-instance-to-stop-inline.png" alt-text="Screenshot showing the selection of the Azure disk backup instance to be stopped." lightbox="./media/manage-azure-managed-disks/select-disk-backup-instance-to-stop-expanded.png"::: |
cdn | Cdn Pop Locations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-pop-locations.md | This article lists current metros containing point-of-presence (POP) locations, | Africa | Johannesburg, South Africa <br/> Nairobi, Kenya | South Africa | | Middle East | Muscat, Oman<br />Fujirah, United Arab Emirates | Qatar<br />United Arab Emirates | | India | Bengaluru (Bangalore), India<br />Chennai, India<br />Mumbai, India<br />New Delhi, India<br /> | India |-| Asia | Hong Kong<br />Jakarta, Indonesia<br />Osaka, Japan<br />Tokyo, Japan<br />Singapore<br />Kaohsiung, Taiwan<br />Taipei, Taiwan <br />Manila, Philippines | Hong Kong<br />Indonesia<br />Israel<br />Japan<br />Macau<br />Malaysia<br />Philippines<br />Singapore<br />South Korea<br />Taiwan<br />Thailand<br />Turkey<br />Vietnam | +| Asia | Hong Kong<br />Jakarta, Indonesia<br />Osaka, Japan<br />Tokyo, Japan<br />Singapore<br />Kaohsiung, Taiwan<br />Taipei, Taiwan <br />Manila, Philippines | Hong Kong<br />Indonesia<br />Israel<br />Japan<br />Macau<br />Malaysia<br />Philippines<br />Singapore<br />South Korea<br />Taiwan<br />Thailand<br />Türkiye<br />Vietnam | | Australia and New Zealand | Melbourne, Australia<br />Sydney, Australia<br />Auckland, New Zealand | Australia<br />New Zealand | ## Next steps |
cdn | Cdn Restrict Access By Country Region | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-restrict-access-by-country-region.md | In the country/region filtering rules table, select the delete icon next to a ru * Only one rule can be applied to the same relative path. That is, you can't create multiple country/region filters that point to the same relative path. However, because country/region filters are recursive, a folder can have multiple country/region filters. In other words, a subfolder of a previously configured folder can be assigned a different country/region filter. -* The geo-filtering feature uses [country/region codes](microsoft-pop-abbreviations.md) codes to define the countries/regions from which a request is allowed or blocked for a secured directory. **Azure CDN from Verizon** and **Azure CDN from Akamai** profiles use ISO 3166-1 alpha-2 country codes to define the countries from which a request are allowed or blocked for a secured directory. +* The geo-filtering feature uses [country/region codes](microsoft-pop-abbreviations.md) codes to define the countries/regions from which a request is allowed or blocked for a secured directory. **Azure CDN from Verizon** and **Azure CDN from Akamai** profiles use ISO 3166-1 alpha-2 country codes to define the countries/regions from which a request are allowed or blocked for a secured directory. |
cloud-services | Cloud Services Guestos Msrc Releases | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-msrc-releases.md | The following tables show the Microsoft Security Response Center (MSRC) updates | MS16-139 |[3199720] |Security Update for Windows Kernel |2.57 |Nov 8.2016 | | MS16-140 |[3193479] |Security Update For Boot Manager |5.3, 4.38, 3.45 |Nov 8, 2016 | | MS16-142 |[3198467] |Cumulative Security Update for Internet Explorer |2.57, 4.38, 5.3 |Nov 8, 2016 |-| N/A |[3192321] |Turkey ends DST observance |5.3, 4.38, 3.45, 2.57 |Nov 8, 2016 | +| N/A |[3192321] |Türkiye ends DST observance |5.3, 4.38, 3.45, 2.57 |Nov 8, 2016 | | N/A |[3185330] |October 2016 security monthly quality rollup for Windows 7 SP1 and Windows Server 2008 R2 SP1 |2.57 |Nov 8, 2016 | | N/A |[3192403] |October 2016 Preview of Monthly Quality Rollup for Windows 7 SP1 and Windows Server 2008 R2 SP1 |2.57 |Nov 8, 2016 | | N/A |[3177467] |Servicing stack update for Windows 7 SP1 and Windows Server 2008 R2 SP1: September 20, 2016 |2.57 |Nov 8, 2016 | |
cognitive-services | Language Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Custom-Search/language-support.md | The `Accept-Language` header and the `setLang` query parameter are mutually excl |Sweden|SE| |Switzerland|CH| |Taiwan|TW|-|Turkey|TR| +|Türkiye|TR| |United Kingdom|GB| |United States|US| The `Accept-Language` header and the `setLang` query parameter are mutually excl |Switzerland|French|fr-CH| |Switzerland|German|de-CH| |Taiwan|Traditional Chinese|zh-TW|-|Turkey|Turkish|tr-TR| +|Türkiye|Turkish|tr-TR| |United Kingdom|English|en-GB| |United States|English|en-US| |United States|Spanish|es-US| |
cognitive-services | Language Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Image-Search/language-support.md | Alternatively, you can specify the country/region using the `cc` query parameter |Sweden|SE| |Switzerland|CH| |Taiwan|TW|-|Turkey|TR| +|Türkiye|TR| |United Kingdom|GB| |United States|US| Alternatively, you can specify the country/region using the `cc` query parameter |Switzerland|French|fr-CH| |Switzerland|German|de-CH| |Taiwan|Traditional Chinese|zh-TW|-|Turkey|Turkish|tr-TR| +|Türkiye|Turkish|tr-TR| |United Kingdom|English|en-GB| |United States|English|en-US| |United States|Spanish|es-US| |
cognitive-services | Language Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-News-Search/language-support.md | For a list of country/region codes that you may specify in the `cc` query parame |Brazil|Portuguese|pt-BR| |Russia|Russian|ru-RU| |Sweden|Swedish|sv-SE| -|Turkey|Turkish|tr-TR| +|Türkiye|Turkish|tr-TR| ## Supported markets for news endpoint For the `/news` endpoint, the following table lists the market code values that you may use to specify the `mkt` query parameter. Bing returns content for only these markets. The list is subject to change. The following are the country/region codes that you may specify in the `cc` quer |Sweden|SE| |Switzerland|CH| |Taiwan|TW| -|Turkey|TR| +|Türkiye|TR| |United Kingdom|GB| |United States|US| |
cognitive-services | Language Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Web-Search/language-support.md | Alternatively, you can specify the market with the `mkt` query parameter, and a |Sweden|SE| |Switzerland|CH| |Taiwan|TW|-|Turkey|TR| +|Türkiye|TR| |United Kingdom|GB| |United States|US| Alternatively, you can specify the market with the `mkt` query parameter, and a |Switzerland|French|fr-CH| |Switzerland|German|de-CH| |Taiwan|Traditional Chinese|zh-TW|-|Turkey|Turkish|tr-TR| +|Türkiye|Turkish|tr-TR| |United Kingdom|English|en-GB| |United States|English|en-US| |United States|Spanish|es-US| |
cognitive-services | Batch Transcription Audio Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/batch-transcription-audio-data.md | You could otherwise specify individual files in the container. You must generate - [Batch transcription overview](batch-transcription.md) - [Create a batch transcription](batch-transcription-create.md) - [Get batch transcription results](batch-transcription-get.md)+- [See batch transcription code samples at GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/batch/) |
cognitive-services | Batch Transcription Create | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/batch-transcription-create.md | The [Trusted Azure services security mechanism](batch-transcription-audio-data.m - [Batch transcription overview](batch-transcription.md) - [Locate audio files for batch transcription](batch-transcription-audio-data.md)-- [Get batch transcription results](batch-transcription-get.md)+- [Get batch transcription results](batch-transcription-get.md) +- [See batch transcription code samples at GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/batch/) |
cognitive-services | Batch Transcription Get | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/batch-transcription-get.md | Depending in part on the request parameters set when you created the transcripti - [Batch transcription overview](batch-transcription.md) - [Locate audio files for batch transcription](batch-transcription-audio-data.md) - [Create a batch transcription](batch-transcription-create.md)+- [See batch transcription code samples at GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/batch/) |
cognitive-services | Batch Transcription | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/batch-transcription.md | Batch transcription jobs are scheduled on a best-effort basis. You can't estimat - [Locate audio files for batch transcription](batch-transcription-audio-data.md) - [Create a batch transcription](batch-transcription-create.md) - [Get batch transcription results](batch-transcription-get.md)+- [See batch transcription code samples at GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/batch/) |
cognitive-services | Custom Neural Voice | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/custom-neural-voice.md | Custom Neural Voice (CNV) is a text-to-speech feature that lets you create a one > [!IMPORTANT] > Custom Neural Voice access is [limited](/legal/cognitive-services/speech-service/custom-neural-voice/limited-access-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext) based on eligibility and usage criteria. Request access on the [intake form](https://aka.ms/customneural).+> +> Access to [Custom Neural Voice (CNV) Lite](custom-neural-voice-lite.md) is available for anyone to demo and evaluate CNV before investing in professional recordings to create a higher-quality voice. Out of the box, [text-to-speech](text-to-speech.md) can be used with prebuilt neural voices for each [supported language](language-support.md?tabs=tts). The prebuilt neural voices work very well in most text-to-speech scenarios if a unique voice isn't required. |
cognitive-services | How To Migrate To Prebuilt Neural Voice | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-migrate-to-prebuilt-neural-voice.md | More than 75 prebuilt standard voices are available in over 45 languages and loc | Tamil (India) | `ta-IN` | Male | `ta-IN-Valluvar`| | Telugu (India) | `te-IN` | Female | `te-IN-Chitra`| | Thai (Thailand) | `th-TH` | Male | `th-TH-Pattara`|-| Turkish (Turkey) | `tr-TR` | Female | `tr-TR-SedaRUS`| +| Turkish (Türkiye) | `tr-TR` | Female | `tr-TR-SedaRUS`| | Vietnamese (Vietnam) | `vi-VN` | Male | `vi-VN-An` | > [!IMPORTANT] |
cognitive-services | Create Sas Tokens | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/how-to-guides/create-sas-tokens.md | -In this article, you'll learn how to create user delegation, shared access signature (SAS) tokens, using the Azure portal or Azure Storage Explorer. User delegation SAS tokens are secured with Azure AD credentials. SAS tokens provide secure, delegated access to resources in your Azure storage account. +In this article, you learn how to create user delegation, shared access signature (SAS) tokens, using the Azure portal or Azure Storage Explorer. User delegation SAS tokens are secured with Azure AD credentials. SAS tokens provide secure, delegated access to resources in your Azure storage account. ++>[!TIP] +> +> [Managed identities](create-use-managed-identities.md) provide an alternate method for you to grant access to your storage data without the need to include SAS tokens with your HTTP requests. *See*, [Managed identities for Document Translation](create-use-managed-identities.md). +> +> * You can use managed identities to grant access to any resource that supports Azure AD authentication, including your own applications. +> * Using managed identities replaces the requirement for you to include shared access signature tokens (SAS) with your source and target URLs. +> * There's no added cost to use managed identities in Azure. At a high level, here's how SAS tokens work: Azure Blob Storage offers three resource types: ## Prerequisites -To get started, you'll need the following resources: +To get started, you need the following resources: * An active [Azure account](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [create a free account](https://azure.microsoft.com/free/). * A [Translator](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) resource. -* A **standard performance** [Azure Blob Storage account](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM). You'll create containers to store and organize your files within your storage account. If you don't know how to create an Azure storage account with a storage container, follow these quickstarts: +* A **standard performance** [Azure Blob Storage account](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM). You also need to create containers to store and organize your files within your storage account. If you don't know how to create an Azure storage account with a storage container, follow these quickstarts: * [Create a storage account](../../../../storage/common/storage-account-create.md). When you create your storage account, select **Standard** performance in the **Instance details** > **Performance** field. * [Create a container](../../../../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container). When you create your container, set **Public access level** to **Container** (anonymous read access for containers and files) in the **New Container** window. Go to the [Azure portal](https://portal.azure.com/#home) and navigate to your co 1. Specify the signed key **Start** and **Expiry** times. * When you create a shared access signature (SAS), the default duration is 48 hours. After 48 hours, you'll need to create a new token.- * Consider setting a longer duration period for the time you'll be using your storage account for Translator Service operations. 
+ * Consider setting a longer duration period for the time you're using your storage account for Translator Service operations. * The value for the expiry time is a maximum of seven days from the creation of the SAS token. -1. The **Allowed IP addresses** field is optional and specifies an IP address or a range of IP addresses from which to accept requests. If the request IP address doesn't match the IP address or address range specified on the SAS token, it won't be authorized. +1. The **Allowed IP addresses** field is optional and specifies an IP address or a range of IP addresses from which to accept requests. If the request IP address doesn't match the IP address or address range specified on the SAS token, authorization fails. 1. The **Allowed protocols** field is optional and specifies the protocol permitted for a request made with the SAS. The default value is HTTPS. 1. Review then select **Generate SAS token and URL**. -1. The **Blob SAS token** query string and **Blob SAS URL** will be displayed in the lower area of window. +1. The **Blob SAS token** query string and **Blob SAS URL** are displayed in the lower area of window. 1. **Copy and paste the Blob SAS token and URL values in a secure location. They'll only be displayed once and cannot be retrieved once the window is closed.** Go to the [Azure portal](https://portal.azure.com/#home) and navigate to your co Azure Storage Explorer is a free standalone app that enables you to easily manage your Azure cloud storage resources from your desktop. -* You'll need the [**Azure Storage Explorer**](../../../../vs-azure-tools-storage-manage-with-storage-explorer.md) app installed in your Windows, macOS, or Linux development environment. +* You need the [**Azure Storage Explorer**](../../../../vs-azure-tools-storage-manage-with-storage-explorer.md) app installed in your Windows, macOS, or Linux development environment. * After the Azure Storage Explorer app is installed, [connect it to the storage account](../../../../vs-azure-tools-storage-manage-with-storage-explorer.md?tabs=windows#connect-to-a-storage-account-or-service) you're using for Document Translation. Follow these steps to create tokens for a storage container or specific blob file: Azure Storage Explorer is a free standalone app that enables you to easily manag * Define your container **Permissions** by checking and/or clearing the appropriate check box. * Review and select **Create**. -1. A new window will appear with the **Container** name, **URI**, and **Query string** for your container. +1. A new window appears with the **Container** name, **URI**, and **Query string** for your container. 1. **Copy and paste the container, URI, and query string values in a secure location. They'll only be displayed once and can't be retrieved once the window is closed.** 1. To [construct a SAS URL](#use-your-sas-url-to-grant-access), append the SAS token (URI) to the URL for a storage service. Azure Storage Explorer is a free standalone app that enables you to easily manag * Select **key1** or **key2**. * Review and select **Create**. -1. A new window will appear with the **Blob** name, **URI**, and **Query string** for your blob. +1. A new window appears with the **Blob** name, **URI**, and **Query string** for your blob. 1. **Copy and paste the blob, URI, and query string values in a secure location. They will only be displayed once and cannot be retrieved once the window is closed.** 1. 
To [construct a SAS URL](#use-your-sas-url-to-grant-access), append the SAS token (URI) to the URL for a storage service. Azure Storage Explorer is a free standalone app that enables you to easily manag ### Use your SAS URL to grant access -The SAS URL includes a special set of [query parameters](/rest/api/storageservices/create-user-delegation-sas#assign-permissions-with-rbac). Those parameters indicate how the resources may be accessed by the client. +The SAS URL includes a special set of [query parameters](/rest/api/storageservices/create-user-delegation-sas#assign-permissions-with-rbac). Those parameters indicate how the client accesses the resources. You can include your SAS URL with REST API requests in two ways: |
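To make the "append the SAS token to the URL" step concrete, here's a hedged sketch of what a finished SAS URL looks like and how you might smoke-test read access with `curl`. The storage account, container, blob, and token values are placeholders, not values from the article.

```bash
# Placeholder SAS token copied from the portal or Storage Explorer (query string only, no leading '?').
sas_token="sv=2022-11-02&ss=b&srt=co&sp=rl&se=2023-06-30T00:00:00Z&sig=<signature>"

# A SAS URL is simply the resource URL followed by '?' and the token.
container_sas_url="https://<storage-account>.blob.core.windows.net/<container>?${sas_token}"
blob_sas_url="https://<storage-account>.blob.core.windows.net/<container>/<blob-name>?${sas_token}"

# Quick read test: download the blob using only the SAS URL (no account key needed).
curl "${blob_sas_url}" --output downloaded-blob
```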
cognitive-services | Create Use Managed Identities | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/how-to-guides/create-use-managed-identities.md | -Managed identities for Azure resources are service principals that create an Azure Active Directory (Azure AD) identity and specific permissions for Azure managed resources. Managed identities are a safer way to grant access to data without having SAS tokens included with your HTTP requests. + Managed identities for Azure resources are service principals that create an Azure Active Directory (Azure AD) identity and specific permissions for Azure managed resources. Managed identities are a safer way to grant access to data without the need to include SAS tokens with your HTTP requests. :::image type="content" source="../media/managed-identity-rbac-flow.png" alt-text="Screenshot of managed identity flow (RBAC)."::: |
cognitive-services | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/overview.md | recommendations: false Document Translation is a cloud-based feature of the [Azure Translator](../translator-overview.md) service and is part of the Azure Cognitive Service family of REST APIs. The Document Translation API can be used to translate multiple and complex documents across all [supported languages and dialects](../../language-support.md), while preserving original document structure and data format. -This documentation contains the following article types: --* [**Quickstarts**](get-started-with-document-translation.md) are getting-started instructions to guide you through making requests to the service. -* [**How-to guides**](create-sas-tokens.md) contain instructions for using the feature in more specific or customized ways. -* [**Reference**](reference/rest-api-guide.md) provide REST API settings, values, keywords, and configuration. - ## Document Translation key features | Feature | Description | You can add Document Translation to your applications using the REST API or a cl ## Get started -In our how-to guide, you'll learn how to quickly get started using Document Translation. To begin, you'll need an active [Azure account](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [create a free account](https://azure.microsoft.com/free). +In our quickstart, you learn how to rapidly get started using Document Translation. To begin, you need an active [Azure account](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [create a free account](https://azure.microsoft.com/free). > [!div class="nextstepaction"] > [Start here](get-started-with-document-translation.md "Learn how to use Document Translation with HTTP REST") ## Supported document formats -The following document file types are supported by Document Translation: +Document Translation supports the following document file types: | File type| File extension|Description| |||--|-|Adobe PDF|pdf|Portable document file format. Document Translation uses optical character recognition (OCR) technology to extract and translate text in scanned PDF document while retaining the original layout.| -|Comma-Separated Values |csv| A comma-delimited raw-data file used by spreadsheet programs.| -|HTML|html, htm|Hyper Text Markup Language.| +|Adobe PDF|`pdf`|Portable document file format. Document Translation uses optical character recognition (OCR) technology to extract and translate text in scanned PDF document while retaining the original layout.| +|Comma-Separated Values |`csv`| A comma-delimited raw-data file used by spreadsheet programs.| +|HTML|`html`, `htm`|Hyper Text Markup Language.| |Localization Interchange File Format|xlf| A parallel document format, export of Translation Memory systems. 
The languages used are defined inside the file.|-|Markdown| markdown, mdown, mkdn, md, mkd, mdwn, mdtxt, mdtext, rmd| A lightweight markup language for creating formatted text.| -|MHTML|mthml, mht| A web page archive format used to combine HTML code and its companion resources.| -|Microsoft Excel|xls, xlsx|A spreadsheet file for data analysis and documentation.| -|Microsoft Outlook|msg|An email message created or saved within Microsoft Outlook.| -|Microsoft PowerPoint|ppt, pptx| A presentation file used to display content in a slideshow format.| -|Microsoft Word|doc, docx| A text document file.| -|OpenDocument Text|odt|An open-source text document file.| -|OpenDocument Presentation|odp|An open-source presentation file.| -|OpenDocument Spreadsheet|ods|An open-source spreadsheet file.| -|Rich Text Format|rtf|A text document containing formatting.| -|Tab Separated Values/TAB|tsv/tab| A tab-delimited raw-data file used by spreadsheet programs.| -|Text|txt| An unformatted text document.| +|Markdown| `markdown`, `mdown`, `mkdn`, `md`, `mkd`, `mdwn`, `mdtxt`, `mdtext`, `rmd`| A lightweight markup language for creating formatted text.| +|M​HTML|`mthml`, `mht`| A web page archive format used to combine HTML code and its companion resources.| +|Microsoft Excel|`xls`, `xlsx`|A spreadsheet file for data analysis and documentation.| +|Microsoft Outlook|`msg`|An email message created or saved within Microsoft Outlook.| +|Microsoft PowerPoint|`ppt`, `pptx`| A presentation file used to display content in a slideshow format.| +|Microsoft Word|`doc`, `docx`| A text document file.| +|OpenDocument Text|`odt`|An open-source text document file.| +|OpenDocument Presentation|`odp`|An open-source presentation file.| +|OpenDocument Spreadsheet|`ods`|An open-source spreadsheet file.| +|Rich Text Format|`rtf`|A text document containing formatting.| +|Tab Separated Values/TAB|`tsv`/`tab`| A tab-delimited raw-data file used by spreadsheet programs.| +|Text|`txt`| An unformatted text document.| ### Legacy file types -Source file types will be preserved during the document translation with the following **exceptions**: +Source file types are preserved during the document translation with the following **exceptions**: | Source file extension | Translated file extension| | | | Source file types will be preserved during the document translation with the fol ## Supported glossary formats -The following glossary file types are supported by Document Translation: +Document Translation supports the following glossary file types: | File type| File extension|Description| |||--|-|Comma-Separated Values| csv |A comma-delimited raw-data file used by spreadsheet programs.| -|Localization Interchange File Format| xlf , xliff| A parallel document format, export of Translation Memory systems The languages used are defined inside the file.| -|Tab-Separated Values/TAB|tsv, tab| A tab-delimited raw-data file used by spreadsheet programs.| +|Comma-Separated Values| `csv` |A comma-delimited raw-data file used by spreadsheet programs.| +|Localization Interchange File Format| `xlf` , `xliff`| A parallel document format, export of Translation Memory systems The languages used are defined inside the file.| +|Tab-Separated Values/TAB|`tsv`, `tab`| A tab-delimited raw-data file used by spreadsheet programs.| ## Next steps |
cognitive-services | V3 0 Languages | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/reference/v3-0-languages.md | -Gets the set of languages currently supported by other operations of the Translator. +Gets the set of languages currently supported by other operations of the Translator. ## Request URL Request parameters passed on the query string are: </tr> <tr> <td>scope</td>- <td>*Optional parameter*.<br/>A comma-separated list of names defining the group of languages to return. Allowed group names are: `translation`, `transliteration` and `dictionary`. If no scope is given, then all groups are returned, which is equivalent to passing `scope=translation,transliteration,dictionary`. To decide which set of supported languages is appropriate for your scenario, see the description of the [response object](#response-body).</td> + <td>*Optional parameter*.<br/>A comma-separated list of names defining the group of languages to return. Allowed group names are: `translation`, `transliteration` and `dictionary`. If no scope is given, then all groups are returned, which is equivalent to passing `scope=translation,transliteration,dictionary`.</td> </tr>-</table> +</table> ++*See* [response body](#response-body). Request headers are: Request headers are: <th>Description</th> <tr> <td>Accept-Language</td>- <td>*Optional request header*.<br/>The language to use for user interface strings. Some of the fields in the response are names of languages or names of regions. Use this parameter to define the language in which these names are returned. The language is specified by providing a well-formed BCP 47 language tag. For instance, use the value `fr` to request names in French or use the value `zh-Hant` to request names in Chinese Traditional.<br/>Names are provided in the English language when a target language is not specified or when localization is not available. + <td>*Optional request header*.<br/>The language to use for user interface strings. Some of the fields in the response are names of languages or names of regions. Use this parameter to define the language in which these names are returned. The language is specified by providing a well-formed BCP 47 language tag. For instance, use the value `fr` to request names in French or use the value `zh-Hant` to request names in Chinese Traditional.<br/>Names are provided in the English language when a target language isn't specified or when localization isn't available. </td> </tr> <tr> <td>X-ClientTraceId</td> <td>*Optional request header*.<br/>A client-generated GUID to uniquely identify the request.</td> </tr>-</table> +</table> Authentication isn't required to get language resources. The value for each property is as follows. * `dir`: Directionality, which is `rtl` for right-to-left languages or `ltr` for left-to-right languages. An example is:- + ```json { "translation": { The value for each property is as follows. * `nativeName`: Display name of the target language in the locale native for the target language. * `dir`: Directionality, which is `rtl` for right-to-left languages or `ltr` for left-to-right languages.- + * `code`: Language code identifying the target language. An example is: The value for each property is as follows. }, ``` -The structure of the response object will not change without a change in the version of the API. 
For the same version of the API, the list of available languages may change over time because Microsoft Translator continually extends the list of languages supported by its services. +The structure of the response object doesn't change without a change in the version of the API. For the same version of the API, the list of available languages may change over time because Microsoft Translator continually extends the list of languages supported by its services. -The list of supported languages will not change frequently. To save network bandwidth and improve responsiveness, a client application should consider caching language resources and the corresponding entity tag (`ETag`). Then, the client application can periodically (for example, once every 24 hours) query the service to fetch the latest set of supported languages. Passing the current `ETag` value in an `If-None-Match` header field will allow the service to optimize the response. If the resource has not been modified, the service will return status code 304 and an empty response body. +The list of supported languages doesn't change frequently. To save network bandwidth and improve responsiveness, a client application should consider caching language resources and the corresponding entity tag (`ETag`). Then, the client application can periodically (for example, once every 24 hours) query the service to fetch the latest set of supported languages. Passing the current `ETag` value in an `If-None-Match` header field allows the service to optimize the response. If the resource hasn't been modified, the service returns status code 304 and an empty response body. ## Response headers The list of supported languages will not change frequently. To save network band </tr> <tr> <td>X-RequestId</td>- <td>Value generated by the service to identify the request. It is used for troubleshooting purposes.</td> + <td>Value generated by the service to identify the request. It's used for troubleshooting purposes.</td> </tr>-</table> +</table> ## Response status codes -The following are the possible HTTP status codes that a request returns. +The following are the possible HTTP status codes that a request returns. <table width="100%"> <th width="20%">Status Code</th> The following are the possible HTTP status codes that a request returns. </tr> <tr> <td>304</td>- <td>The resource has not been modified since the version specified by request headers `If-None-Match`.</td> + <td>The resource hasn't been modified since the version specified by request headers `If-None-Match`.</td> </tr> <tr> <td>400</td> The following are the possible HTTP status codes that a request returns. <td>503</td> <td>Server temporarily unavailable. Retry the request. If the error persists, report it with: date and time of the failure, request identifier from response header `X-RequestId`, and client identifier from request header `X-ClientTraceId`.</td> </tr>-</table> +</table> -If an error occurs, the request will also return a JSON error response. The error code is a 6-digit number combining the 3-digit HTTP status code followed by a 3-digit number to further categorize the error. Common error codes can be found on the [v3 Translator reference page](./v3-0-reference.md#errors). +If an error occurs, the request also returns a JSON error response. The error code is a 6-digit number combining the 3-digit HTTP status code followed by a 3-digit number to further categorize the error. Common error codes can be found on the [v3 Translator reference page](./v3-0-reference.md#errors). 
## Examples |
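As an illustration of the request pattern this reference describes, the following `curl` calls fetch the language list and then revalidate a cached copy with the `ETag`. The endpoint, query parameters, and header names come from the reference above; the ETag value is a placeholder.

```bash
# Initial request: no authentication is required for the languages resource.
# Accept-Language localizes the returned language/region display names (French here).
curl -i "https://api.cognitive.microsofttranslator.com/languages?api-version=3.0&scope=translation" \
  -H "Accept-Language: fr"

# Revalidate a cached copy: send the ETag from the earlier response.
# If the list hasn't changed, the service returns 304 with an empty body.
curl -i "https://api.cognitive.microsofttranslator.com/languages?api-version=3.0&scope=translation" \
  -H 'If-None-Match: "<etag-from-previous-response>"'
```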
cognitive-services | Language Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/bing-visual-search/language-support.md | Alternatively, you can specify the country/region using the `cc` query parameter |Sweden|SE| |Switzerland|CH| |Taiwan|TW|-|Turkey|TR| +|Türkiye|TR| |United Kingdom|GB| |United States|US| Alternatively, you can specify the country/region using the `cc` query parameter |Switzerland|French|fr-CH| |Switzerland|German|de-CH| |Taiwan|Traditional Chinese|zh-TW|-|Turkey|Turkish|tr-TR| +|Türkiye|Turkish|tr-TR| |United Kingdom|English|en-GB| |United States|English|en-US| |United States|Spanish|es-US| |
cognitive-services | Tutorial Visual Search Image Upload | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/bing-visual-search/tutorial-visual-search-image-upload.md | This application has an option to change these values. Add the following `<div>` <option value="fr-CH">Switzerland (French)</option> <option value="de-CH">Switzerland (German)</option> <option value="zh-TW">Taiwan (Traditional Chinese)</option>- <option value="tr-TR">Turkey (Turkish)</option> + <option value="tr-TR">Türkiye (Turkish)</option> <option value="en-GB">United Kingdom (English)</option> <option value="en-US" selected>United States (English)</option> <option value="es-US">United States (Spanish)</option> |
communication-services | European Union Data Boundary | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/european-union-data-boundary.md | -This boundary defines data residency and processing rules for resources based on the data location selected when creating a new communication resource. When a data location for a resource is one of the European countries in scope of EUDB, then all processing and storage of personal data remain within the European Union. The EU Data Boundary consists of the countries in the European Union (EU) and the European Free Trade Association (EFTA). The EU Countries are Austria, Belgium, Bulgaria, Croatia, Cyprus, Czechia, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Ireland, Italy, Latvia, Lithuania, Luxembourg, Malta, Netherlands, Poland, Portugal, Romania, Slovakia, Slovenia, Spain, and Sweden; and the EFTA countries are Liechtenstein, Iceland, Norway, and Switzerland. +This boundary defines data residency and processing rules for resources based on the data location selected when creating a new communication resource. When a data location for a resource is one of the European countries/regions in scope of EUDB, then all processing and storage of personal data remain within the European Union. The EU Data Boundary consists of the countries/regions in the European Union (EU) and the European Free Trade Association (EFTA). The EU countries/regions are Austria, Belgium, Bulgaria, Croatia, Cyprus, Czechia, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Ireland, Italy, Latvia, Lithuania, Luxembourg, Malta, Netherlands, Poland, Portugal, Romania, Slovakia, Slovenia, Spain, and Sweden; and the EFTA countries/regions are Liechtenstein, Iceland, Norway, and Switzerland. ## Calling |
communication-services | Sub Eligibility Number Capability | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/sub-eligibility-number-capability.md | More details on eligible subscription types are as follows: | Toll-Free and Local (Geographic) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement*, Pay-As-You-Go | | Short-Codes | Modern Customer Agreement (Field Led), Enterprise Agreement**, Pay-As-You-Go | -\* In some countries, number purchases are only allowed for own use. Reselling or suballcoating to another parties is not allowed. Due to this, purchases for CSP and LSP customers is not allowed. +\* In some countries/regions, number purchases are only allowed for own use. Reselling or suballocating to other parties is not allowed. Due to this, purchases for CSP and LSP customers are not allowed. \** Applications from all other subscription types will be reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application. |
communication-services | Pstn Pricing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/pstn-pricing.md | All prices shown below are in USD. ### Usage charges |Number type |To make calls* |To receive calls| |--|--||-|Geographic |Starting at USD 0165/min |USD 0.0072/min | -|Toll-free |Starting at USD 0165/min | USD 0.2200/min | +|Geographic |Starting at USD 0.165/min |USD 0.0072/min | +|Toll-free |Starting at USD 0.165/min | USD 0.2200/min | \* For destination-specific pricing for making outbound calls, please refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv) |
communication-services | Plan Solution | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/plan-solution.md | The table below summarizes these phone number types: | Local (Geographic) | +1 (local area code) XXX XX XX | US* | Calling (Outbound) | Assigning phone numbers to users in your applications | | Toll-Free | +1 (toll-free area *code*) XXX XX XX | US* | Calling (Outbound), SMS (Inbound/Outbound)| Assigning phone numbers to Interactive Voice Response (IVR) systems/Bots, SMS applications | -*To find all countries where telephone numbers are available, please refer to [subscription eligibility and number capabilities page](../numbers/sub-eligibility-number-capability.md). +*To find all countries/regions where telephone numbers are available, please refer to [subscription eligibility and number capabilities page](../numbers/sub-eligibility-number-capability.md). ### Phone number capabilities in Azure Communication Services |
communication-services | Mute Participants | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/call-automation/mute-participants.md | + + Title: Mute participants during a call ++description: Provides a how-to guide for muting participants during a call. ++++ Last updated : 03/19/2023++++zone_pivot_groups: acs-csharp-java +++# Mute participants during a call ++>[!IMPORTANT] +>Functionality described on this document is currently in private preview. Private preview includes access to SDKs and documentation for testing purposes that are not yet available publicly. +>Apply to become an early adopter by filling out the form for [preview access to Azure Communication Services](https://aka.ms/acs-tap-invite). ++With Azure Communication Services Call Automation SDK, developers can now mute participants through server based API requests. This feature can be useful when you want your application to mute participants after they've joined the meeting to avoid any interruptions or distractions to ongoing meetings. ++If you're interested in abilities to allow participants to mute/unmute themselves on the call when they've joined with ACS Client Libraries, you can use our [mute/unmute function](../../../communication-services/how-tos/calling-sdk/manage-calls.md) provided through our Calling Library. ++## Common use cases ++### Contact center supervisor call monitoring ++In a typical contact center, there may be times when a supervisor needs to join an on-going call to monitor the call to provide guidance to agents after the call on how they could improve their assistance. The supervisor would join muted as to not disturb the on-going call with any extra side noise. ++*This guide helps you learn how to mute participants by using the mute action provided through Azure Communication Services Call Automation SDK.* ++++++## Clean up resources ++If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../../quickstarts/create-communication-resource.md#clean-up-resources). ++## Next steps ++Learn more about [Call Automation](../../concepts/call-automation/call-automation.md). |
communication-services | Bring Your Own Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/call-automation/call-recording/bring-your-own-storage.md | This quickstart gets you started with BYOS (Bring your own storage) for Call Rec  1. Open your Azure Communication Services resource. Navigate to *Identity* on the left.-2. System Assigned Managed Identity is disabled by default. Enable it and click of *Save* +2. System Assigned Managed Identity is disabled by default. Enable it and click on *Save* 3. Once completed, you're able to see the Object principal ID of the newly created identity.  |
confidential-computing | Quick Create Confidential Vm Azure Cli Amd | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/quick-create-confidential-vm-azure-cli-amd.md | Make a note of the `publicIpAddress` to use later. Create a confidential [disk encryption set](../virtual-machines/linux/disks-enable-customer-managed-keys-cli.md) using [Azure Key Vault](../key-vault/general/quick-create-cli.md) or [Azure Key Vault managed Hardware Security Module (HSM)](../key-vault/managed-hsm/quick-create-cli.md). Based on your security and compliance needs you can choose either option. The following example uses Azure Key Vault Premium. -1. Create an Azure Key Vault using the [az keyvault create](/cli/azure/keyvault) command. For the pricing tier, select Premium (includes support for HSM backed keys). Make sure that you have an owner role in this key vault. +1. Grant confidential VM Service Principal `Confidential VM Orchestrator` to tenant +For this step you need to be a Global Admin or you need to have the User Access Administrator RBAC role. + ```azurecli + Connect-AzureAD -Tenant "your tenant ID" + New-AzureADServicePrincipal -AppId bf7b6499-ff71-4aa2-97a4-f372087be7f0 -DisplayName "Confidential VM Orchestrator" + ``` +2. Create an Azure Key Vault using the [az keyvault create](/cli/azure/keyvault) command. For the pricing tier, select Premium (includes support for HSM backed keys). Make sure that you have an owner role in this key vault. ```azurecli-interactive az keyvault create -n keyVaultName -g myResourceGroup --enabled-for-disk-encryption true --sku premium --enable-purge-protection true ```-2. Create a key in the key vault using [az keyvault key create](/cli/azure/keyvault). For the key type, use RSA-HSM. +3. Give `Confidential VM Orchestrator` permissions to `get` and `release` the key vault. + ```azurecli + $cvmAgent = az ad sp show --id "bf7b6499-ff71-4aa2-97a4-f372087be7f0" | Out-String | ConvertFrom-Json + az keyvault set-policy --name $KeyVault --object-id $cvmAgent.objectId --key-permissions get release + ``` +4. Create a key in the key vault using [az keyvault key create](/cli/azure/keyvault). For the key type, use RSA-HSM. ```azurecli-interactive az keyvault key create --name mykey --vault-name keyVaultName --default-cvm-policy --exportable --kty RSA-HSM ```-3. Create the disk encryption set using [az disk-encryption-set create](/cli/azure/disk-encryption-set). Set the encryption type to `ConfidentialVmEncryptedWithCustomerKey`. +5. Create the disk encryption set using [az disk-encryption-set create](/cli/azure/disk-encryption-set). Set the encryption type to `ConfidentialVmEncryptedWithCustomerKey`. ```azurecli-interactive $keyVaultKeyUrl=(az keyvault key show --vault-name keyVaultName --name mykey--query [key.kid] -o tsv) az disk-encryption-set create --resource-group myResourceGroup --name diskEncryptionSetName --key-url $keyVaultKeyUrl --encryption-type ConfidentialVmEncryptedWithCustomerKey ```-4. Grant the disk encryption set resource access to the key vault using [az key vault set-policy](/cli/azure/keyvault). +6. Grant the disk encryption set resource access to the key vault using [az key vault set-policy](/cli/azure/keyvault). ```azurecli-interactive $desIdentity=(az disk-encryption-set show -n diskEncryptionSetName -g myResourceGroup --query [identity.principalId] -o tsv) az keyvault set-policy -n keyVaultName -g myResourceGroup --object-id $desIdentity --key-permissions wrapkey unwrapkey get ```-5. 
Use the disk encryption set ID to create the VM. +7. Use the disk encryption set ID to create the VM. ```azurecli-interactive $diskEncryptionSetID=(az disk-encryption-set show -n diskEncryptionSetName -g myResourceGroup --query [id] -o tsv) ```-6. Create a VM with the [az vm create](/cli/azure/vm) command. Choose `DiskWithVMGuestState` for OS disk confidential encryption with a customer-managed key. Enabling secure boot is optional, but recommended. For more information, see [secure boot and vTPM](../virtual-machines/trusted-launch.md). For more information on disk encryption, see [confidential OS disk encryption](confidential-vm-overview.md). +8. Create a VM with the [az vm create](/cli/azure/vm) command. Choose `DiskWithVMGuestState` for OS disk confidential encryption with a customer-managed key. Enabling secure boot is optional, but recommended. For more information, see [secure boot and vTPM](../virtual-machines/trusted-launch.md). For more information on disk encryption, see [confidential OS disk encryption](confidential-vm-overview.md). ```azurecli-interactive az vm create \ echo -n $JWT | cut -d "." -f 2 | base64 -d 2> | jq . ## Next steps > [!div class="nextstepaction"]-> [Create a confidential VM on AMD with an ARM template](quick-create-confidential-vm-arm-amd.md) +> [Create a confidential VM on AMD with an ARM template](quick-create-confidential-vm-arm-amd.md) |
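Because the digest truncates the final `az vm create` invocation, here is a hedged sketch of how the disk encryption set ID from the earlier step is typically passed when the confidential VM is created. The image URN, VM size, and resource names are assumptions (any DCasv5/ECasv5-series size and CVM-capable image should work); verify the flags against your installed Azure CLI version.

```azurecli
# Sketch only: names, size, and image URN are placeholder assumptions.
az vm create \
  --resource-group myResourceGroup \
  --name myConfidentialVM \
  --size Standard_DC4as_v5 \
  --admin-username azureuser \
  --generate-ssh-keys \
  --image "Canonical:0001-com-ubuntu-confidential-vm-focal:20_04-lts-cvm:latest" \
  --security-type ConfidentialVM \
  --os-disk-security-encryption-type DiskWithVMGuestState \
  --os-disk-secure-vm-disk-encryption-set $diskEncryptionSetID \
  --enable-secure-boot true \
  --enable-vtpm true
```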
container-apps | Azure Arc Enable Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/azure-arc-enable-cluster.md | The [custom location](../azure-arc/kubernetes/custom-locations.md) is an Azure l + > [!NOTE] + > If you experience issues creating a custom location on your cluster, you may need to [enable the custom location feature on your cluster](../azure-arc/kubernetes/custom-locations.md#enable-custom-locations-on-your-cluster). This is required if logged into the CLI using a Service Principal or if you are logged in with an Azure Active Directory user with restricted permissions on the cluster resource. + > + 1. Validate that the custom location is successfully created with the following command. The output should show the `provisioningState` property as `Succeeded`. If not, rerun the command after a minute. ```azurecli |
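If the custom-locations feature does have to be enabled manually on the connected cluster, the `connectedk8s` CLI extension exposes it roughly as shown below. This is a sketch with placeholder names; depending on how you're signed in (for example, with a service principal), the command may also need the custom-locations application object ID, so check the linked guidance and your extension version.

```azurecli
# Sketch with placeholder cluster and resource group names; requires the connectedk8s CLI extension.
az connectedk8s enable-features \
  --name myArcCluster \
  --resource-group myResourceGroup \
  --features custom-locations
```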
container-apps | Log Options | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/log-options.md | Container Apps application logs consist of two different categories: - Container console output (`stdout`/`stderr`) messages. - System logs generated by Azure Container Apps.+- Spring App console logs. You can choose between these logs destinations: |
container-apps | Log Streaming | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/log-streaming.md | description: View your container app's log stream. - Previously updated : 08/30/2022 Last updated : 03/24/2023 # View log streams in Azure Container Apps -While developing and troubleshooting your container app, it's important to see a container's logs in real-time. Container Apps lets you view a stream of your container's `stdout` and `stderr` log messages through the Azure portal or the Azure CLI. +While developing and troubleshooting your container app, it's essential to see the [logs](logging.md) for your container app in real time. Azure Container Apps lets you stream: -## Azure portal +- [system logs](logging.md#system-logs) from the Container Apps environment and your container app. +- container [console logs](logging.md#container-console-logs) from your container app. -View a container app's log stream in the Azure portal with these steps. +Log streams are accessible through the Azure portal or the Azure CLI. -1. Navigate to your container app in the Azure portal. +## View log streams via the Azure portal ++You can view system logs and console logs in the Azure portal. System logs are generated by the container app's runtime. Console logs are generated by your container app. ++### Environment system log stream ++To troubleshoot issues in your container app environment, you can view the system log stream from your environment page. The log stream displays the system logs for the Container Apps service and the apps actively running in the environment: ++1. Go to your environment in the Azure portal. 1. Select **Log stream** under the *Monitoring* section on the sidebar menu.-1. If you have multiple revisions, replicas, or containers, you can select from the pull-down menus to choose a container. If your app has only one container, you can skip this step. -After a container is selected, the log stream is displayed in the viewing pane. + :::image type="content" source="media/observability/system-log-streaming-env.png" alt-text="Screenshot of Container Apps environment system log stream page."::: +### Container app log stream ++You can view a log stream of your container app's system or console logs from your container app page. ++1. Go to your container app in the Azure portal. +1. Select **Log stream** under the *Monitoring* section on the sidebar menu. +1. To view the console log stream, select **Console**. + 1. If you have multiple revisions, replicas, or containers, you can select from the drop-down menus to choose a container. If your app has only one container, you can skip this step. -## Azure CLI + :::image type="content" source="media/observability/screenshot-log-stream-console-app.png" alt-text="Screenshot of Container Apps console log stream from app page."::: -You can view a container's log stream from the Azure CLI with the `az containerapp logs show` command. You can use these arguments to: +1. To view the system log stream, select **System**. The system log stream displays the system logs for all running containers in your container app. -- View previous log entries with the `--tail` argument.-- View a live stream with the `--follow`argument. + :::image type="content" source="media/observability/screenshot-log-stream-system-app.png" alt-text="Screenshot of Container Apps system log stream from app page."::: -Use `Ctrl/Cmd-C` to stop the live stream. 
+## View log streams via the Azure CLI -For example, you can list the last 50 container log entries in a container app with a single container using the following command. +You can view your container app's log streams from the Azure CLI with the `az containerapp logs show` command or your container app's environment system log stream with the `az containerapp env logs show` command. -This example live streams a container's log entries. +Control the log stream with the following arguments: ++- `--tail` (Default) View the last n log messages. Values are 0-300 messages. The default is 20. +- `--follow` View a continuous live stream of the log messages. ++### Stream Container app logs ++You can stream the system or console logs for your container app. To stream the container app system logs, use the `--type` argument with the value `system`. To stream the container console logs, use the `--type` argument with the value `console`. The default is `console`. ++#### View container app system log stream ++This example uses the `--tail` argument to display the last 50 system log messages from the container app. Replace the \<placeholders\> with your container app's values. # [Bash](#tab/bash) This example live streams a container's log entries. az containerapp logs show \ --name <ContainerAppName> \ --resource-group <ResourceGroup> \+ --type system \ --tail 50 ``` az containerapp logs show \ az containerapp logs show ` --name <ContainerAppName> ` --resource-group <ResourceGroup> `+ --type system ` --tail 50 ``` -To connect to a container console in a container app with multiple revisions, replicas, and containers include the following parameters in the `az containerapp logs show` command. +This example displays a continuous live stream of system log messages from the container app using the `--follow` argument. Replace the \<placeholders\> with your container app's values. ++# [Bash](#tab/bash) ++```azurecli +az containerapp logs show \ + --name <ContainerAppName> \ + --resource-group <ResourceGroup> \ + --type system \ + --follow +``` ++# [PowerShell](#tab/powershell) ++```azurecli +az containerapp logs show ` + --name <ContainerAppName> ` + --resource-group <ResourceGroup> ` + --type system ` + --follow +``` ++++Use `Ctrl-C` or `Cmd-C` to stop the live stream. ++### View container console log stream ++To connect to a container's console log stream in a container app with multiple revisions, replicas, and containers, include the following parameters in the `az containerapp logs show` command. | Argument | Description | |-|-|-| `--revision` | The revision name of the container to connect to. | -| `--replica` | The replica name of the container to connect to. | -| `--container` | The container name of the container to connect to. | +| `--revision` | The revision name. | +| `--replica` | The replica name in the revision. | +| `--container` | The container name to connect to. | -You can get the revision names with the `az containerapp revision list` command. Replace the \<placeholders\> with your container app's values. +You can get the revision names with the `az containerapp revision list` command. Replace the \<placeholders\> with your container app's values. # [Bash](#tab/bash) az containerapp replica list ` -Stream the container logs with the `az container app show` command. Replace the \<placeholders\> with your container app's values. -+Live stream the container console using the `az container app show` command with the `--follow` argument. 
Replace the \<placeholders\> with your container app's values. # [Bash](#tab/bash) az containerapp logs show \ --revision <RevisionName> \ --replica <ReplicaName> \ --container <ContainerName> \+ --type console \ --follow ``` az containerapp logs show ` --revision <RevisionName> ` --replica <ReplicaName> ` --container <ContainerName> `+ --type console ` + --follow +``` ++++Use `Ctrl-C` or `Cmd-C` to stop the live stream. ++View the last 50 console log messages using the `az containerapp logs show` command with the `--tail` argument. Replace the \<placeholders\> with your container app's values. ++# [Bash](#tab/bash) ++```azurecli +az containerapp logs show \ + --name <ContainerAppName> \ + --resource-group <ResourceGroup> \ + --revision <RevisionName> \ + --replica <ReplicaName> \ + --container <ContainerName> \ + --type console \ + --tail 50 +``` ++# [PowerShell](#tab/powershell) ++```azurecli +az containerapp logs show ` + --name <ContainerAppName> ` + --resource-group <ResourceGroup> ` + --revision <RevisionName> ` + --replica <ReplicaName> ` + --container <ContainerName> ` + --type console ` + --tail 50 +``` ++++### View environment system log stream ++Use the following command with the `--follow` argument to view the live system log stream from the Container Apps environment. Replace the \<placeholders\> with your environment values. ++# [Bash](#tab/bash) ++```azurecli +az containerapp env logs show \ + --name <ContainerAppEnvironmentName> \ + --resource-group <ResourceGroup> \ + --follow +``` ++# [PowerShell](#tab/powershell) ++```azurecli +az containerapp env logs show ` + --name <ContainerAppEnvironmentName> ` + --resource-group <ResourceGroup> ` --follow ``` +Use `Ctrl-C` or `Cmd-C` to stop the live stream. -Enter **Ctrl-C** to stop the log stream. +This example uses the `--tail` argument to display the last 50 environment system log messages. Replace the \<placeholders\> with your environment values. ++# [Bash](#tab/bash) ++```azurecli +az containerapp env logs show \ + --name <ContainerAppName> \ + --resource-group <ResourceGroup> \ + --tail 50 +``` ++# [PowerShell](#tab/powershell) ++```azurecli +az containerapp env logs show ` + --name <ContainerAppName> ` + --resource-group <ResourceGroup> ` + --tail 50 +``` ++ > [!div class="nextstepaction"]-> [View log streams from the Azure portal](log-streaming.md) +> [Log storage and monitoring options in Azure Container Apps](log-monitoring.md) |
container-apps | Logging | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/logging.md | +- [Container console logs](#container-console-logs): Log streams from your container console. +- [System logs](#system-logs): Logs generated by the Azure Container Apps service. +You can view the [log streams](log-streaming.md) in near real-time in the Azure portal or CLI. For more options to store and monitor your logs, see [Logging options](log-options.md). ## Container console Logs -Container console logs are written by your application to the `stdout` and `stderr` output streams of the application's container. By implementing detailed logging in your application, you'll be able to troubleshoot issues and monitor the health of your application. --You can view your container console logs through [Logs streaming](log-streaming.md). For other options to store and monitoring your log data, see [Logging options](log-options.md). +Container Apps captures the `stdout` and `stderr` output streams from your application containers and displays them as console logs. When you implement logging in your application, you can troubleshoot problems and monitor the health of your app. ## System logs -System logs are generated by the Azure Container Apps to inform you for the status of service level events. Log messages include the following information: +Container Apps generates system logs to inform you of the status of service level events. Log messages include the following information: - Successfully created dapr component - Successfully updated dapr component System logs are generated by the Azure Container Apps to inform you for the stat - Successfully mounted volume - Error mounting volume - Successfully bound Domain-- Auth enabled on app. Creating authentication config+- Auth enabled on app +- Creating authentication config - Auth config created successfully-- Setting a traffic weight +- Setting a traffic weight - Creating a new revision: - Successfully provisioned revision - Deactivating Old revisions - Error provisioning revision -The system log data can be stored and monitored through the Container Apps logging options. For more information, see [Logging options](log-options.md). - ## Next steps > [!div class="nextstepaction"] |
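When console and system logs are routed to a Log Analytics workspace (one of the logging options linked above), they can also be queried from the CLI. This is a rough sketch only; the `ContainerAppConsoleLogs_CL` table and the `ContainerAppName_s`/`Log_s` column names are assumptions based on the default Log Analytics schema, so verify them against your workspace.

```azurecli
# Sketch: pull the 20 most recent console log lines for one container app
# from a Log Analytics workspace. Replace the workspace GUID and app name.
az monitor log-analytics query \
  --workspace <log-analytics-workspace-guid> \
  --analytics-query "ContainerAppConsoleLogs_CL | where ContainerAppName_s == '<ContainerAppName>' | project TimeGenerated, Log_s | order by TimeGenerated desc | take 20"
```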
container-apps | Observability | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/observability.md | These features include: |Feature |Description | |||-|[Log streaming](log-streaming.md) | View streaming console logs from a container in near real-time. | +|[Log streaming](log-streaming.md) | View streaming system and console logs from a container in near real-time. | |[Container console](container-console.md) | Connect to the Linux console in your containers to debug your application from inside the container. | |[Azure Monitor metrics](metrics.md)| View and analyze your application's compute and network usage through metric data. | |[Application logging](logging.md) | Monitor, analyze, and debug your app by using log data.| |
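As a quick illustration of the Azure Monitor metrics feature listed above, metric data can be pulled with a single CLI call. The resource ID shape and the `Requests` metric name are assumptions; list the metrics actually available for your app with `az monitor metrics list-definitions`.

```azurecli
# Sketch: retrieve the "Requests" metric for a container app at one-minute granularity.
# The resource ID and metric name are illustrative placeholders.
az monitor metrics list \
  --resource "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.App/containerApps/<ContainerAppName>" \
  --metric "Requests" \
  --interval PT1M
```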
cosmos-db | Analytical Store Change Data Capture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/analytical-store-change-data-capture.md | + + Title: Change data capture in analytical store ++description: Change data capture (CDC) in Azure Cosmos DB analytical store allows you to efficiently consume a continuous and incremental feed of changed data. +++++ Last updated : 03/23/2023+++# Change Data Capture in Azure Cosmos DB analytical store +++Change data capture (CDC) in [Azure Cosmos DB analytical store](analytical-store-introduction.md) allows you to efficiently consume a continuous and incremental feed of changed (inserted, updated, and deleted) data from analytical store. The change data capture feature of the analytical store is seamlessly integrated with Azure Synapse and Azure Data Factory, providing you with a scalable no-code experience for high data volume. As the change data capture feature is based on analytical store, it [doesn't consume provisioned RUs, doesn't affect your transactional workloads](analytical-store-introduction.md#decoupled-performance-for-analytical-workloads), provides lower latency, and has lower TCO. +++In addition to providing incremental data feed from analytical store to diverse targets, change data capture supports the following capabilities: ++- Supports applying filters, projections and transformations on the Change feed via source query +- Supports capturing deletes and intermediate updates +- Ability to filter the change feed for a specific type of operation (**Insert** | **Update** | **Delete** | **TTL**) +- Each change in Container appears exactly once in the change data capture feed, and the checkpoints are managed internally for you +- Changes can be synchronized from ΓÇ£the BeginningΓÇ¥ or ΓÇ£from a given timestampΓÇ¥ or ΓÇ£from nowΓÇ¥ +- There's no limitation around the fixed data retention period for which changes are available +- Multiple change feeds on the same container can be consumed simultaneously ++## Features ++Change data capture in Azure Cosmos DB analytical store supports the following key features. ++### Capturing deletes and intermediate updates ++The change data capture feature for the analytical store captures deleted records and the intermediate updates. The captured deletes and updates can be applied on Sinks that support delete and update operations. The {_rid} value uniquely identifies the records and so by specifying {_rid} as key column on the Sink side, the update and delete operations would be reflected on the Sink. ++### Filter the change feed for a specific type of operation ++You can filter the change data capture feed for a specific type of operation. For example, you can selectively capture the insert and update operations only, thereby ignoring the user-delete and TTL-delete operations. ++### Applying filters, projections, and transformations on the Change feed via source query ++You can optionally use a source query to specify filter(s), projection(s), and transformation(s), which would all be pushed down to the columnar analytical store. Here's a sample source-query that would only capture incremental records with the filter `Category = 'Urban'`. 
This sample query projects only five fields and applies a simple transformation: ++```sql +SELECT ProductId, Product, Segment, concat(Manufacturer, '-', Category) as ManufacturerCategory +FROM c +WHERE Category = 'Urban' +``` ++> [!NOTE] +> If you would like to enable source-query based change data capture on Azure Data Factory data flows during preview, please email [cosmosdbsynapselink@microsoft.com](mailto:cosmosdbsynapselink@microsoft.com) and share your **subscription Id** and **region**. This is not necessary to enable source-query based change data capture on an Azure Synapse data flow. ++### Throughput isolation, lower latency and lower TCO ++Operations on Cosmos DB analytical store don't consume the provisioned RUs and so don't affect your transactional workloads. change data capture with analytical store also has lower latency and lower TCO. The lower latency is attributed to analytical store enabling better parallelism for data processing and reduces the overall TCO enabling you to drive cost efficiencies in these rapidly shifting economic conditions. ++## Scenarios ++Here are common scenarios where you could use change data capture and the analytical store. ++### Consuming incremental data from Cosmos DB ++You can use analytical store change data capture, if you're currently using or planning to use: ++- Incremental data capture using Azure Data Factory Data Flows or Copy activity. +- One time batch processing using Azure Data Factory. +- Streaming Cosmos DB data + - The analytical store has up to 2-min latency to sync transactional store data. You can schedule Data Flows in Azure Data Factory every minute. + - If you need to stream without the above latency, we recommend using the change feed feature of the transactional store. +- Capturing deletes, incremental changes, applying filters on Cosmos DB Data. + - If you're using Azure Functions triggers or any other option with change feed and would like to capture deletes, incremental changes, apply transformations etc.; we recommend change data capture over analytical store. ++### Incremental feed to analytical platform of your choice ++change data capture capability enables end-to-end analytical story providing you with the flexibility to use Azure Cosmos DB data on analytical platform of your choice seamlessly. It also enables you to bring Cosmos DB data into a centralized data lake and join with data from diverse data sources. For more information, see [supported sink types](../data-factory/data-flow-sink.md#supported-sinks). You can flatten the data, apply more transformations either in Azure Synapse Analytics or Azure Data Factory. ++## Change data capture on Azure Cosmos DB for MongoDB containers ++The linked service interface for the API for MongoDB isn't available within Azure Data Factory data flows yet. You can use your API for MongoDB's account endpoint with the **Azure Cosmos DB for NoSQL** linked service interface as a work around until the Mongo linked service is directly supported. ++In the interface for a new NoSQL linked service, select **Enter Manually** to provide the Azure Cosmos DB account information. Here, use the account's NoSQL document endpoint (ex: `https://<account-name>.documents.azure.com:443/`) instead of the Mongo DB endpoint (ex: `mongodb://<account-name>.mongo.cosmos.azure.com:10255/`) ++## Next steps ++> [!div class="nextstepaction"] +> [Get started with change data capture in the analytical store](get-started-change-data-capture.md) |
cosmos-db | Get Started Change Data Capture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/get-started-change-data-capture.md | + + Title: Get started with change data capture in analytical store ++description: Enable change data capture in Azure Cosmos DB analytical store for an existing account to consume a continuous and incremental feed of changed data. +++++ Last updated : 03/23/2023+++# Get started with change data capture in the analytical store for Azure Cosmos DB +++Use Change data capture (CDC) in Azure Cosmos DB analytical store as a source to [Azure Data Factory](../data-factory/index.yml) or [Azure Synapse Analytics](../synapse-analytics/index.yml) to capture specific changes to your data. ++## Prerequisites ++- An existing Azure Cosmos DB account. + - If you have an Azure subscription, [create a new account](nosql/how-to-create-account.md?tabs=azure-portal). + - If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. + - Alternatively, you can [try Azure Cosmos DB free](try-free.md) before you commit. ++## Enable analytical store ++First, enable Azure Synapse Link at the account level and then enable analytical store for the containers that's appropriate for your workload. ++1. Enable Azure Synapse Link: [Enable Azure Synapse Link for an Azure Cosmos DB account](configure-synapse-link.md#enable-synapse-link) | ++1. Enable analytical store for your container\[s\]: ++ | Option | Guide | + | | | + | **Enable for a specific new container** | [Enable Azure Synapse Link for your new containers](configure-synapse-link.md#new-container) | + | **Enable for a specific existing container** | [Enable Azure Synapse Link for your existing containers](configure-synapse-link.md#existing-container) | ++## Create a target Azure resource using data flows ++The change data capture feature of the analytical store is available through the data flow feature of [Azure Data Factory](../data-factory/concepts-data-flow-overview.md) or [Azure Synapse Analytics](../synapse-analytics/concepts-data-flow-overview.md). For this guide, use Azure Data Factory. ++> [!IMPORTANT] +> You can alternatively use Azure Synapse Analytics. First, [create an Azure Synapse workspace](../synapse-analytics/quickstart-create-workspace.md), if you don't already have one. Within the newly created workspace, select the **Develop** tab, select **Add new resource**, and then select **Data flow**. ++1. [Create an Azure Data Factory](../data-factory/quickstart-create-data-factory.md), if you don't already have one. ++ > [!TIP] + > If possible, create the data factory in the same region where your Azure Cosmos DB account resides. ++1. Launch the newly created data factory. ++1. In the data factory, select the **Data flows** tab, and then select **New data flow**. ++1. Give the newly created data flow a unique name. In this example, the data flow is named `cosmoscdc`. ++ :::image type="content" source="media/get-started-change-data-capture/data-flow-name.png" lightbox="media/get-started-change-data-capture/data-flow-name.png" alt-text="Screnshot of a new data flow with the name cosmoscdc."::: ++## Configure source settings for the analytical store container ++Now create and configure a source to flow data from the Azure Cosmos DB account's analytical store. ++1. Select **Add Source**. 
++ :::image type="content" source="media/get-started-change-data-capture/add-source.png" alt-text="Screenshot of the add source menu option."::: ++1. In the **Output stream name** field, enter **cosmos**. ++ :::image type="content" source="media/get-started-change-data-capture/source-name.png" alt-text="Screenshot of naming the newly created source cosmos."::: ++1. In the **Source type** section, select **Inline**. ++ :::image type="content" source="media/get-started-change-data-capture/inline-source-type.png" alt-text="Screenshot of selecting the inline source type."::: ++1. In the **Dataset** field, select **Azure - Azure Cosmos DB for NoSQL**. ++ :::image type="content" source="media/get-started-change-data-capture/dataset-type-cosmos.png" alt-text="Screenshot of selecting Azure Cosmos DB for NoSQL as the dataset type."::: ++1. Create a new linked service for your account named **cosmoslinkedservice**. Select your existing Azure Cosmos DB for NoSQL account in the **New linked service** popup dialog and then select **Ok**. In this example, we select a pre-existing Azure Cosmos DB for NoSQL account named `msdocs-cosmos-source` and a database named `cosmicworks`. ++ :::image type="content" source="media/get-started-change-data-capture/new-linked-service.png" alt-text="Screenshot of the New linked service dialog with an Azure Cosmos DB account selected."::: ++1. Select **Analytical** for the store type. ++ :::image type="content" source="media/get-started-change-data-capture/linked-service-analytical.png" alt-text="Screenshot of the analytical option selected for a linked service."::: ++1. Select the **Source options** tab. ++1. Within **Source options**, select your target container and enable **Data flow debug**. In this example, the container is named `products`. ++ :::image type="content" source="media/get-started-change-data-capture/container-name.png" alt-text="Screenshot of a source container selected named products."::: ++1. Select **Data flow debug**. In the **Turn on data flow debug** popup dialog, retain the default options and then select **Ok**. ++ :::image type="content" source="media/get-started-change-data-capture/enable-data-flow-debug.png" alt-text="Screenshot of the toggle option to enable data flow debug."::: ++1. The **Source options** tab also contains other options you may wish to enable. This table describes those options: ++| Option | Description | +| | | +| Capture intermediate updates | Enable this option if you would like to capture the history of changes to items including the intermediate changes between change data capture reads. | +| Capture Deletes | Enable this option to capture user-deleted records and apply them on the Sink. Deletes can't be applied on Azure Data Explorer and Azure Cosmos DB Sinks. | +| Capture Transactional store TTLs | Enable this option to capture Azure Cosmos DB transactional store (time-to-live) TTL deleted records and apply on the Sink. TTL-deletes can't be applied on Azure Data Explorer and Azure Cosmos DB sinks. | +| Batchsize in bytes | Specify the size in bytes if you would like to batch the change data capture feeds | +| Extra Configs | Extra Azure Cosmos DB analytical store configs and their values. (ex: `spark.cosmos.allowWhiteSpaceInFieldNames -> true`) | ++## Create and configure sink settings for update and delete operations ++First, create a straightforward [Azure Blob Storage](../storage/blobs/index.yml) sink and then configure the sink to filter data to only specific operations. ++1. 
[Create an Azure Blob Storage](../data-factory/quickstart-create-data-factory.md) account and container, if you don't already have one. For the next examples, we'll use an account named `msdocsblobstorage` and a container named `output`. ++ > [!TIP] + > If possible, create the storage account in the same region where your Azure Cosmos DB account resides. ++1. Back in Azure Data Factory, create a new sink for the change data captured from your `cosmos` source. ++ :::image type="content" source="media/get-started-change-data-capture/add-sink.png" alt-text="Screenshot of adding a new sink that's connected to the existing source."::: ++1. Give the sink a unique name. In this example, the sink is named `storage`. ++ :::image type="content" source="media/get-started-change-data-capture/sink-name.png" alt-text="Screenshot of naming the newly created sink storage."::: ++1. In the **Sink type** section, select **Inline**. In the **Dataset** field, select **Delta**. ++ :::image type="content" source="media/get-started-change-data-capture/sink-dataset-type.png" alt-text="Screenshot of selecting and Inline Delta dataset type for the sink."::: ++1. Create a new linked service for your account using **Azure Blob Storage** named **storagelinkedservice**. Select your existing Azure Blob Storage account in the **New linked service** popup dialog and then select **Ok**. In this example, we select a pre-existing Azure Blob Storage account named `msdocsblobstorage`. ++ :::image type="content" source="media/get-started-change-data-capture/new-linked-service-sink-type.png" alt-text="Screenshot of the service type options for a new Delta linked service."::: ++ :::image type="content" source="media/get-started-change-data-capture/new-linked-service-sink-config.png" alt-text="Screenshot of the New linked service dialog with an Azure Blob Storage account selected."::: ++1. Select the **Settings** tab. ++1. Within **Settings**, set the **Folder path** to the name of the blob container. In this example, the container's name is `output`. ++ :::image type="content" source="media/get-started-change-data-capture/sink-container-name.png" alt-text="Screenshot of the blob container named output set as the sink target."::: ++1. Locate the **Update method** section and change the selections to only allow **delete** and **update** operations. Also, specify the **Key columns** as a **List of columns** using the field `_{rid}` as the unique identifier. ++ :::image type="content" source="media/get-started-change-data-capture/sink-methods-columns.png" alt-text="Screenshot of update methods and key columns being specified for the sink."::: ++1. Select **Validate** to ensure you haven't made any errors or omissions. Then, select **Publish** to publish the data flow. ++ :::image type="content" source="media/get-started-change-data-capture/validate-publish-data-flow.png" alt-text="Screenshot of the option to validate and then publish the current data flow."::: ++## Schedule change data capture execution ++After a data flow has been published, you can add a new pipeline to move and transform your data. ++1. Create a new pipeline. Give the pipeline a unique name. In this example, the pipeline is named `cosmoscdcpipeline`. ++ :::image type="content" source="media/get-started-change-data-capture/new-pipeline.png" alt-text="Screenshot of the new pipeline option within the resources section."::: ++1. In the **Activities** section, expand the **Move & transform** option and then select **Data flow**. 
++ :::image type="content" source="media/get-started-change-data-capture/data-flow-activity.png" alt-text="Screenshot of the data flow activity option within the activities section."::: ++1. Give the data flow activity a unique name. In this example, the activity is named `cosmoscdcactivity`. ++1. In the **Settings** tab, select the data flow named `cosmoscdc` you created earlier in this guide. Then, select a compute size based on the data volume and required latency for your workload. ++ :::image type="content" source="media/get-started-change-data-capture/data-flow-settings.png" alt-text="Screenshot of the configuration settings for both the data flow and compute size for the activity."::: ++ > [!TIP] + > For incremental data sizes greater than 100 GB, we recommend the **Custom** size with a core count of 32 (+16 driver cores). ++1. Select **Add trigger**. Schedule this pipeline to execute at a cadence that makes sense for your workload. In this example, the pipeline is configured to execute every five minutes. ++ :::image type="content" source="media/get-started-change-data-capture/add-trigger.png" alt-text="Screenshot of the add trigger button for a new pipeline."::: ++ :::image type="content" source="media/get-started-change-data-capture/trigger-configuration.png" alt-text="Screenshot of a trigger configuration based on a schedule, starting in the year 2023, that runs every five minutes."::: ++ > [!NOTE] + > The minimum recurrence window for change data capture executions is one minute. ++1. Select **Validate** to ensure you haven't made any errors or omissions. Then, select **Publish** to publish the pipeline. ++1. Observe the data placed into the Azure Blob Storage container as an output of the data flow using Azure Cosmos DB analytical store change data capture. ++ :::image type="content" source="media/get-started-change-data-capture/output-files.png" alt-text="Screnshot of the output files from the pipeline in the Azure Blob Storage container."::: ++ > [!NOTE] + > The initial cluster startup time may take up to three minutes. To avoid cluster startup time in the subsequent change data capture executions, configure the Dataflow cluster **Time to live** value. For more information about the itegration runtime and TTL, see [integration runtime in Azure Data Factory](../data-factory/concepts-integration-runtime.md). ++## Next steps ++- Review the [overview of Azure Cosmos DB analytical store](analytical-store-introduction.md) |
cosmos-db | Compatibility | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/compatibility.md | Azure Cosmos DB for MongoDB vCore supports the following aggregation pipeline fe | Command | Supported | ||| | `$mergeObjects` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |-| `$objectToArray` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No | +| `$objectToArray` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes | | `$setField` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No | ## Data types |
cosmos-db | Periodic Backup Modify Interval Retention | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/periodic-backup-modify-interval-retention.md | + + Title: Modify periodic backup interval and retention period ++description: Learn how to modify the interval and retention period for periodic backup in Azure Cosmos DB accounts. +++++ Last updated : 03/21/2023++++# Modify periodic backup interval and retention period in Azure Cosmos DB +++Azure Cosmos DB automatically takes a full backup of your data for every 4 hours and at any point of time, the latest two backups are stored. This configuration is the default option and itΓÇÖs offered without any extra cost. You can change the default backup interval and retention period during the Azure Cosmos DB account creation or after the account is created. The backup configuration is set at the Azure Cosmos DB account level and you need to configure it on each account. After you configure the backup options for an account, itΓÇÖs applied to all the containers within that account. You can modify these settings using the Azure portal, Azure PowerShell, or the Azure CLI. ++## Prerequisites ++- An existing Azure Cosmos DB account. + - If you have an Azure subscription, [create a new account](nosql/how-to-create-account.md?tabs=azure-portal). + - If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. + - Alternatively, you can [try Azure Cosmos DB free](try-free.md) before you commit. ++## Before you start ++If you've accidentally deleted or corrupted your data, **before you create a support request to restore the data, make sure to increase the backup retention for your account to at least seven days. ItΓÇÖs best to increase your retention within 8 hours of this event.** This way, the Azure Cosmos DB team has enough time to restore your account. ++## Modify backup options for an existing account ++Use the following steps to change the default backup options for an existing Azure Cosmos DB account. ++### [Azure portal](#tab/azure-portal) ++1. Sign into the [Azure portal](https://portal.azure.com/). ++1. Navigate to your Azure Cosmos DB account and open the **Backup & Restore** pane. Update the backup interval and the backup retention period as required. ++ - **Backup Interval** - ItΓÇÖs the interval at which Azure Cosmos DB attempts to take a backup of your data. Backup takes a nonzero amount of time and in some case it could potentially fail due to downstream dependencies. Azure Cosmos DB tries its best to take a backup at the configured interval, however, it doesnΓÇÖt guarantee that the backup completes within that time interval. You can configure this value in hours or minutes. Backup Interval can't be less than 1 hour and greater than 24 hours. When you change this interval, the new interval takes into effect starting from the time when the last backup was taken. ++ - **Backup Retention** - It represents the period where each backup is retained. You can configure it in hours or days. The minimum retention period canΓÇÖt be less than two times the backup interval (in hours) and it canΓÇÖt be greater than 720 hours. ++ - **Copies of data retained** - By default, two backup copies of your data are offered at free of charge. There's an extra charge if you need more than two copies. See the Consumed Storage section in the [pricing page](https://azure.microsoft.com/pricing/details/cosmos-db/) to know the exact price for extra copies. 
++ - **Backup storage redundancy** - Choose the required storage redundancy option. For more information, see [backup storage redundancy](periodic-backup-storage-redundancy.md). By default, your existing periodic backup mode accounts have geo-redundant storage if the region where the account is being provisioned supports it. Otherwise, the account fallback to the highest redundancy option available. You can choose other storage such as locally redundant to ensure the backup isn't replicated to another region. The changes made to an existing account are applied to only future backups. After the backup storage redundancy of an existing account is updated, it may take up to twice the backup interval time for the changes to take effect, and **you will lose access to restore the older backups immediately.** ++ > [!NOTE] + > You must have the Azure [Azure Cosmos DB Operator role](../role-based-access-control/built-in-roles.md#cosmos-db-operator) role assigned at the subscription level to configure backup storage redundancy. ++ :::image type="content" source="./media/periodic-backup-modify-interval-retention/configure-existing-account-portal.png" lightbox="./media/periodic-backup-modify-interval-retention/configure-existing-account-portal.png" alt-text="Screenshot of configuration options including backup interval, retention, and storage redundancy for an existing Azure Cosmos DB account."::: ++### [Azure CLI](#tab/azure-cli) ++Use the [`az cosmosdb update`](/cli/azure/cosmosdb#az-cosmosdb-update) command to update the periodic backup options for an existing account. ++```azurecli-interactive +az cosmosdb update \ + --resource-group <resource-group-name> \ + --name <account-name> \ + --backup-interval 480 \ + --backup-retention 24 +``` ++### [Azure PowerShell](#tab/azure-powershell) ++Use the [`Update-AzCosmosDBAccount`](/powershell/module/az.cosmosdb/update-azcosmosdbaccount) cmdlet to update the periodic backup options for an existing account. ++```azurepowershell-interactive +$parameters = @{ + ResourceGroupName = "<resource-group-name>" + Name = "<account-name>" + BackupIntervalInMinutes = 480 + BackupRetentionIntervalInHours = 24 +} +Update-AzCosmosDBAccount @parameters +``` ++### [Azure Resource Manager template](#tab/azure-resource-manager-template) ++Use the following Azure Resource Manager JSON template to update the periodic backup options for an existing account. ++```json +{ + "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", + "contentVersion": "1.0.0.0", + "parameters": { + "newAccountName": { + "type": "string", + "defaultValue": "[format('nosql-{0}', toLower(uniqueString(resourceGroup().id)))]", + "metadata": { + "description": "Name of the existing Azure Cosmos DB account." + } + }, + "location": { + "type": "string", + "defaultValue": "[resourceGroup().location]", + "metadata": { + "description": "Location for the Azure Cosmos DB account." 
+ } + } + }, + "resources": [ + { + "type": "Microsoft.DocumentDB/databaseAccounts", + "apiVersion": "2022-05-15", + "name": "[parameters('newAccountName')]", + "location": "[parameters('location')]", + "kind": "GlobalDocumentDB", + "properties": { + "databaseAccountOfferType": "Standard", + "locations": [ + { + "locationName": "[parameters('location')]" + } + ], + "backupPolicy": { + "type": "Periodic", + "periodicModeProperties": { + "backupIntervalInMinutes": 480, + "backupRetentionIntervalInHours": 24, + "backupStorageRedundancy": "Local" + } + } + } + } + ] +} +``` ++Alternatively, you can use the Bicep variant of the same template. ++```bicep +@description('Name of the existing Azure Cosmos DB account.') +param newAccountName string = 'nosql-${toLower(uniqueString(resourceGroup().id))}' ++@description('Location for the Azure Cosmos DB account.') +param location string = resourceGroup().location ++resource account 'Microsoft.DocumentDB/databaseAccounts@2022-05-15' = { + name: newAccountName + location: location + kind: 'GlobalDocumentDB' + properties: { + databaseAccountOfferType: 'Standard' + locations: [ + { + locationName: location + } + ] + backupPolicy: + type: 'Periodic' + periodicModeProperties: + backupIntervalInMinutes: 480, + backupRetentionIntervalInHours: 24, + backupStorageRedundancy: 'Local' + } +} +``` ++++## Configure backup options for a new account ++Use these steps to change the default backup options for a new Azure Cosmos DB account. ++> [!NOTE] +> For illustrative purposes, these examples assume that you are creating an [Azure Cosmos DB for NoSQL](nosql/index.yml) account. The steps are very similar for accounts using other APIs. ++### [Azure portal](#tab/azure-portal) ++When provisioning a new account, from the **Backup Policy** tab, select **Periodic*** backup policy. The periodic policy allows you to configure the backup interval, backup retention, and backup storage redundancy. For example, you can choose **locally redundant backup storage** or **Zone redundant backup storage** options to prevent backup data replication outside your region. +++### [Azure CLI](#tab/azure-cli) ++Use the [`az cosmosdb create`](/cli/azure/cosmosdb#az-cosmosdb-create) command to create a new account with the specified periodic backup options. ++```azurecli-interactive +az cosmosdb create \ + --resource-group <resource-group-name> \ + --name <account-name> \ + --locations regionName=<azure-region> \ + --backup-interval 360 \ + --backup-retention 12 +``` ++### [Azure PowerShell](#tab/azure-powershell) ++Use the [`New-AzCosmosDBAccount`](/powershell/module/az.cosmosdb/new-azcosmosdbaccount) cmdlet to create a new account with the specified periodic backup options. ++```azurepowershell-interactive +$parameters = @{ + ResourceGroupName = "<resource-group-name>" + Name = "<account-name>" + Location = "<azure-region>" + BackupPolicyType = "Periodic" + BackupIntervalInMinutes = 360 + BackupRetentionIntervalInHours = 12 +} +New-AzCosmosDBAccount @parameters +``` ++### [Azure Resource Manager template](#tab/azure-resource-manager-template) ++Use the following Azure Resource Manager JSON template to update the periodic backup options for an existing account. 
++```json +{ + "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", + "contentVersion": "1.0.0.0", + "parameters": { + "newAccountName": { + "type": "string", + "defaultValue": "[format('nosql-{0}', toLower(uniqueString(resourceGroup().id)))]", + "metadata": { + "description": "New Azure Cosmos DB account name. Max length is 44 characters." + } + }, + "location": { + "type": "string", + "defaultValue": "[resourceGroup().location]", + "metadata": { + "description": "Location for the new Azure Cosmos DB account." + } + } + }, + "resources": [ + { + "type": "Microsoft.DocumentDB/databaseAccounts", + "apiVersion": "2022-05-15", + "name": "[parameters('newAccountName')]", + "location": "[parameters('location')]", + "kind": "GlobalDocumentDB", + "properties": { + "databaseAccountOfferType": "Standard", + "locations": [ + { + "locationName": "[parameters('location')]" + } + ], + "backupPolicy": { + "type": "Periodic", + "periodicModeProperties": { + "backupIntervalInMinutes": 360, + "backupRetentionIntervalInHours": 12, + "backupStorageRedundancy": "Zone" + } + } + } + } + ] +} +``` ++Alternatively, you can use the Bicep variant of the same template. ++```bicep +@description('New Azure Cosmos DB account name. Max length is 44 characters.') +param newAccountName string = 'sql-${toLower(uniqueString(resourceGroup().id))}' ++@description('Location for the new Azure Cosmos DB account.') +param location string = resourceGroup().location ++resource account 'Microsoft.DocumentDB/databaseAccounts@2022-05-15' = { + name: newAccountName + location: location + kind: 'GlobalDocumentDB' + properties: { + databaseAccountOfferType: 'Standard' + locations: [ + { + locationName: location + } + ] + backupPolicy: + type: 'Periodic' + periodicModeProperties: + backupIntervalInMinutes: 360, + backupRetentionIntervalInHours: 12, + backupStorageRedundancy: 'Zone' + } +} +``` ++++## Next steps ++> [!div class="nextstepaction"] +> [Request data restoration from a backup](periodic-backup-request-data-restore.md) |
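After changing the interval or retention, it can help to confirm that the new values were applied. A minimal sketch with the Azure CLI follows; the `backupPolicy` property path is an assumption to check against the command's actual output.

```azurecli
# Sketch: display the periodic backup policy currently set on the account.
az cosmosdb show \
  --resource-group <resource-group-name> \
  --name <account-name> \
  --query "backupPolicy"
```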
cosmos-db | Periodic Backup Request Data Restore | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/periodic-backup-request-data-restore.md | + + Title: Request data restoration from a backup ++description: Request the restoration of your Azure Cosmos DB data from a backup if you've lost or accidentally deleted a database or container. +++++ Last updated : 03/21/2023++++# Request data restoration from an Azure Cosmos DB backup +++If you accidentally delete your database or a container, you can [file a support ticket](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade) or [call the Azure support](https://azure.microsoft.com/support/options/) to restore the data from automatic online backups. Azure support is available for selected plans only such as **Standard**, **Developer**, and plans higher than those tiers. Azure support isn't available with **Basic** plan. To learn about different support plans, see the [Azure support plans](https://azure.microsoft.com/support/plans/) page. ++To restore a specific snapshot of the backup, Azure Cosmos DB requires that the data is available during the backup cycle for that snapshot. +You should have the following details before requesting a restore: ++- Have your subscription ID ready. +- Based on how your data was accidentally deleted or modified, you should prepare to have additional information. It's advised that you have the information available ahead to minimize the back-and-forth that can be detrimental in some time sensitive cases. +- If the entire Azure Cosmos DB account is deleted, you need to provide the name of the deleted account. If you create another account with the same name as the deleted account, share that with the support team because it helps to determine the right account to choose. It's recommended to file different support tickets for each deleted account because it minimizes the confusion for the state of restore. +- If one or more databases are deleted, you should provide the Azure Cosmos DB account, and the Azure Cosmos DB database names and specify if a new database with the same name exists. +- If one or more containers are deleted, you should provide the Azure Cosmos DB account name, database names, and the container names. And specify if a container with the same name exists. +- If you've accidentally deleted or corrupted your data, you should contact [Azure support](https://azure.microsoft.com/support/options/) within 8 hours so that the Azure Cosmos DB team can help you restore the data from the backups. **Before you create a support request to restore the data, make sure to [increase the backup retention](periodic-backup-modify-interval-retention.md) for your account to at least seven days. ItΓÇÖs best to increase your retention within 8 hours of this event.** This way the Azure Cosmos DB support team has enough time to restore your account. ++In addition to Azure Cosmos DB account name, database names, container names, you should specify the point in time to use for data restoration. It's important to be as precise as possible to help us determine the best available backups at that time. **It is also important to specify the time in UTC.** ++The following screenshot illustrates how to create a support request for a container(collection/graph/table) to restore data by using Azure portal. Provide other details such as type of data, purpose of the restore, time when the data was deleted to help us prioritize the request. 
+++## Considerations for restoring the data from a backup ++You may accidentally delete or modify your data in one of the following scenarios: ++- Delete the entire Azure Cosmos DB account. ++- Delete one or more Azure Cosmos DB databases. ++- Delete one or more Azure Cosmos DB containers. ++- Delete or modify the Azure Cosmos DB items (for example, documents) within a container. This specific case is typically referred to as data corruption. ++- A shared offer database or containers within a shared offer database are deleted or corrupted. ++Azure Cosmos DB can restore data in all the above scenarios. A new Azure Cosmos DB account is created to hold the restored data when restoring from a backup. The name of the new account, if it's not specified, has the format `<Azure_Cosmos_account_original_name>-restored1`. The last digit is incremented when multiple restores are attempted. You can't restore data to a precreated Azure Cosmos DB account. ++When you accidentally delete an Azure Cosmos DB account, we can restore the data into a new account with the same name, if the account name isn't in use. So, we recommend that you don't re-create the account after deleting it. Because it not only prevents the restored data to use the same name, but also makes discovering the right account to restore from difficult. ++When you accidentally delete an Azure Cosmos DB database, we can restore the whole database or a subset of the containers within that database. It's also possible to select specific containers across databases and restore them to a new Azure Cosmos DB account. ++When you accidentally delete or modify one or more items within a container (the data corruption case), you need to specify the time to restore to. Time is important if there's data corruption. Because the container is live, the backup is still running, so if you wait beyond the retention period (the default is eight hours) the backups would be overwritten. In order to prevent the backup from being overwritten, increase the backup retention for your account to at least seven days. ItΓÇÖs best to increase your retention within 8 hours from the data corruption. ++If you've accidentally deleted or corrupted your data, you should contact [Azure support](https://azure.microsoft.com/support/options/) within 8 hours so that the Azure Cosmos DB team can help you restore the data from the backups. This way the Azure Cosmos DB support team has enough time to restore your account. ++> [!NOTE] +> After you restore the data, not all the source capabilities or settings are carried over to the restored account. The following settings are not carried over to the new account: +> +> - VNET access control lists +> - Stored procedures, triggers and user-defined functions +> - Multi-region settings +> - Managed identity settings +> ++If you assign throughput at the database level, the backup and restore process in this case happen at the entire database level, and not at the individual containers level. In such cases, you can't select a subset of containers to restore. ++## Get the restore details from the restored account ++After the restore operation completes, you may want to know the source account details from which you restored or the restore time. You can get these details from the Azure portal, PowerShell, or CLI. ++### [Azure portal](#tab/azure-portal) ++Use the following steps to get the restore details from Azure portal: ++1. Sign into the [Azure portal](https://portal.azure.com/) and navigate to the restored account. ++1. 
Open the **Tags** page. ++1. The **Tags** page should have the tags **restoredAtTimestamp** and **restoredSourceDatabaseAccountName**. These tags describe the timestamp and the source account name that were used for the periodic restore. ++### [Azure CLI](#tab/azure-cli) ++Run the following command to get the restore details. The `restoreSourceAccountName` and `restoreTimestamp` fields are within the `tags` field: ++```azurecli-interactive +az cosmosdb show \ + --resource-group <resource-group-name> \ + --name <account-name> +``` ++### [Azure PowerShell](#tab/azure-powershell) ++Import the Az.CosmosDB module and run the following command to get the restore details. The `restoreSourceAccountName` and `restoreTimestamp` are within the `tags` field: ++```powershell-interactive +$parameters = @{ + ResourceGroupName = "<resource-group-name>" + Name = "<account-name>" +} +Get-AzCosmosDBAccount @parameters +``` ++++## Post-restore actions ++The primary goal of the data restore is to recover the data that you've accidentally deleted or modified. So, we recommend that you first inspect the content of the recovered data to ensure it contains what you are expecting. If everything looks good, you can migrate the data back to the primary account. Although it's possible to use the restored account as your new active account, it's not a recommended option if you have production workloads. ++After you restore the data, you get a notification about the name of the new account (itΓÇÖs typically in the format `<original-name>-restored1`) and the time when the account was restored to. The restored account has the same provisioned throughput, indexing policies and it is in same region as the original account. A user who is the subscription admin or a coadmin can see the restored account. ++### Migrate data to the original account ++The following are different ways to migrate data back to the original account: ++- Use the [Azure Data Factory](../data-factory/connector-azure-cosmos-db.md). +- Use the [change feed](change-feed.md) in Azure Cosmos DB. +- You can write your own custom code. ++It's advised that you delete the container or database immediately after migrating the data. If you don't delete the restored databases or containers, they incur cost for request units, storage, and egress. ++## Next steps ++- Learn more about [periodic backup and restore](periodic-backup-restore-introduction.md) +- Learn more about [continuous backup](continuous-backup-restore-introduction.md) |
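As a small follow-on to the CLI example above, a JMESPath query can return just the restore metadata. The tag names come from this article; the query expression itself is an illustrative assumption.

```azurecli
# Sketch: show only the restore-related tags on the restored account.
az cosmosdb show \
  --resource-group <resource-group-name> \
  --name <account-name> \
  --query "tags.{restoredAt: restoredAtTimestamp, sourceAccount: restoredSourceDatabaseAccountName}"
```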
cosmos-db | Periodic Backup Restore Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/periodic-backup-restore-introduction.md | Title: Configure periodic backup + Title: Periodic backup/restore introduction -description: Configure Azure Cosmos DB accounts with periodic backup and retention at a specified interval through the portal or a support ticket. +description: Learn about Azure Cosmos DB accounts with periodic backup retention and restoration capabilities at a specified interval. ++ - Previously updated : 03/16/2023-- Last updated : 03/21/2023+ -# Configure Azure Cosmos DB account with periodic backup +# Periodic backup and restore in Azure Cosmos DB [!INCLUDE[NoSQL, MongoDB, Cassandra, Gremlin, Table](includes/appliesto-nosql-mongodb-cassandra-gremlin-table.md)] -Azure Cosmos DB automatically takes backups of your data at regular intervals. The automatic backups are taken without affecting the performance or availability of the database operations. All the backups are stored separately in a storage service, and those backups are globally replicated for resiliency against regional disasters. With Azure Cosmos DB, not only your data, but also the backups of your data are highly redundant and resilient to regional disasters. The following steps show how Azure Cosmos DB performs data backup: +Azure Cosmos DB automatically takes backups of your data at regular intervals. The automatic backups are taken without affecting the performance or availability of the database operations. All the backups are stored separately in a storage service, and those backups are globally replicated for resiliency against regional disasters. With Azure Cosmos DB, not only your data, but also the backups of your data are highly redundant and resilient to regional disasters. ++## How Azure Cosmos DB performs data backup ++The following steps show how Azure Cosmos DB performs data backup: - Azure Cosmos DB automatically takes a full backup of your database every 4 hours and at any point of time, only the latest two backups are stored by default. If the default intervals aren't sufficient for your workloads, you can change the backup interval and the retention period from the Azure portal. You can change the backup configuration during or after the Azure Cosmos DB account is created. If the container or database is deleted, Azure Cosmos DB retains the existing snapshots of a given provisioned throughput container or shared throughput database for 30 days. If throughput is provisioned at the database level, the backup and restore process happens across the entire database scope. Azure Cosmos DB automatically takes backups of your data at regular intervals. T The following image shows how an Azure Cosmos DB container with all the three primary physical partitions in West US. The container is backed up in a remote Azure Blob Storage account in West US and then replicated to East US: - :::image type="content" source="./media/configure-periodic-backup-restore/automatic-backup.png" alt-text="Diagram of periodic full backups taken of multiple Azure Cosmos DB entities in geo-redundant Azure Storage." lightbox="./media/configure-periodic-backup-restore/automatic-backup.png" border="false"::: + :::image type="content" source="./media/periodic-backup-restore-introduction/automatic-backup.png" alt-text="Diagram of periodic full backups taken of multiple Azure Cosmos DB entities in geo-redundant Azure Storage." 
lightbox="./media/periodic-backup-restore-introduction/automatic-backup.png" border="false"::: - The backups are taken without affecting the performance or availability of your application. Azure Cosmos DB performs data backup in the background without consuming any extra provisioned throughput (RUs) or affecting the performance and availability of your database. -> [!NOTE] -> For Azure Synapse Link enabled accounts, analytical store data isn't included in the backups and restores. When Synapse Link is enabled, Azure Cosmos DB will continue to automatically take backups of your data in the transactional store at a scheduled backup interval. Automatic backup and restore of your data in the analytical store is not supported at this time. --## Backup storage redundancy --By default, Azure Cosmos DB stores periodic mode backup data in geo-redundant [blob storage](../storage/common/storage-redundancy.md) that is replicated to a [paired region](../availability-zones/cross-region-replication-azure.md). You can update this default value using Azure PowerShell or CLI and define an Azure policy to enforce a specific storage redundancy option. To learn more, see [update backup storage redundancy](periodic-backup-update-storage-redundancy.md) article. --Change the default geo-redundant backup storage to ensure that your backup data stays within the same region where your Azure Cosmos DB account is provisioned. You can configure the geo-redundant backup to use either locally redundant or zone-redundant storage. Storage redundancy mechanisms store multiple copies of your backups so that it's protected from planned and unplanned events. These events can include transient hardware failure, network or power outages, or massive natural disasters. --You can configure storage redundancy for periodic backup mode at the time of account creation or update it for an existing account. You can use the following three data redundancy options in periodic backup mode: --- **Geo-redundant backup storage:** This option copies your data asynchronously across the paired region.--- **Zone-redundant backup storage:** This option copies your data synchronously across three Azure availability zones in the primary region. For more information, see [Zone-redundant storage.](../storage/common/storage-redundancy.md#redundancy-in-the-primary-region)--- **Locally-redundant backup storage:** This option copies your data synchronously three times within a single physical location in the primary region. For more information, see [locally redundant storage.](../storage/common/storage-redundancy.md#redundancy-in-the-primary-region)--> [!NOTE] -> Zone-redundant storage is currently available only in [specific regions](../availability-zones/az-region.md). Depending on the region you select for a new account or the region you have for an existing account; the zone-redundant option will not be available. -> -> Updating backup storage redundancy will not have any impact on backup storage pricing. --## Modify the backup interval and retention period --Azure Cosmos DB automatically takes a full backup of your data for every 4 hours and at any point of time, the latest two backups are stored. This configuration is the default option and itΓÇÖs offered without any extra cost. You can change the default backup interval and retention period during the Azure Cosmos DB account creation or after the account is created. The backup configuration is set at the Azure Cosmos DB account level and you need to configure it on each account. 
After you configure the backup options for an account, itΓÇÖs applied to all the containers within that account. You can modify these settings using the Azure portal as described later in this article, or via [PowerShell](periodic-backup-restore-introduction.md#modify-backup-options-using-azure-powershell) or the [Azure CLI](periodic-backup-restore-introduction.md#modify-backup-options-using-azure-cli). --If you've accidentally deleted or corrupted your data, **before you create a support request to restore the data, make sure to increase the backup retention for your account to at least seven days. ItΓÇÖs best to increase your retention within 8 hours of this event.** This way, the Azure Cosmos DB team has enough time to restore your account. --### Modify backup options using Azure portal - Existing account --Use the following steps to change the default backup options for an existing Azure Cosmos DB account: --1. Sign into the [Azure portal.](https://portal.azure.com/) --1. Navigate to your Azure Cosmos DB account and open the **Backup & Restore** pane. Update the backup interval and the backup retention period as required. -- - **Backup Interval** - ItΓÇÖs the interval at which Azure Cosmos DB attempts to take a backup of your data. Backup takes a nonzero amount of time and in some case it could potentially fail due to downstream dependencies. Azure Cosmos DB tries its best to take a backup at the configured interval, however, it doesnΓÇÖt guarantee that the backup completes within that time interval. You can configure this value in hours or minutes. Backup Interval can't be less than 1 hour and greater than 24 hours. When you change this interval, the new interval takes into effect starting from the time when the last backup was taken. -- - **Backup Retention** - It represents the period where each backup is retained. You can configure it in hours or days. The minimum retention period canΓÇÖt be less than two times the backup interval (in hours) and it canΓÇÖt be greater than 720 hours. -- - **Copies of data retained** - By default, two backup copies of your data are offered at free of charge. There's an extra charge if you need more than two copies. See the Consumed Storage section in the [Pricing page](https://azure.microsoft.com/pricing/details/cosmos-db/) to know the exact price for extra copies. -- - **Backup storage redundancy** - Choose the required storage redundancy option, see the [Backup storage redundancy](#backup-storage-redundancy) section for available options. By default, your existing periodic backup mode accounts have geo-redundant storage if the region where the account is being provisioned supports it. Otherwise, the account fallback to the highest redundancy option available. You can choose other storage such as locally redundant to ensure the backup isn't replicated to another region. The changes made to an existing account are applied to only future backups. After the backup storage redundancy of an existing account is updated, it may take up to twice the backup interval time for the changes to take effect, and **you will lose access to restore the older backups immediately.** -- > [!NOTE] - > You must have the Azure [Azure Cosmos DB Operator role](../role-based-access-control/built-in-roles.md#cosmos-db-operator) role assigned at the subscription level to configure backup storage redundancy. 
-- :::image type="content" source="./media/configure-periodic-backup-restore/configure-backup-options-existing-accounts.png" alt-text="Screenshot of configuration options including backup interval, retention, and storage redundancy for an existing Azure Cosmos DB account." border="true"::: --### Modify backup options using Azure portal - New account --When provisioning a new account, from the **Backup Policy** tab, select **Periodic*** backup policy. The periodic policy allows you to configure the backup interval, backup retention, and backup storage redundancy. For example, you can choose **locally redundant backup storage** or **Zone redundant backup storage** options to prevent backup data replication outside your region. ---### Modify backup options using Azure PowerShell --Use the following PowerShell cmdlet to update the periodic backup options: --```azurepowershell-interactive -Update-AzCosmosDBAccount -ResourceGroupName "resourceGroupName" ` - -Name "accountName" ` - -BackupIntervalInMinutes 480 ` - -BackupRetentionIntervalInHours 16 -``` --### Modify backup options using Azure CLI --Use the following CLI command to update the periodic backup options: --```azurecli-interactive -az cosmosdb update --resource-group "resourceGroupName" \ - --name "accountName" \ - --backup-interval 240 \ - --backup-retention 8 -``` --### Modify backup options using Resource Manager template --When deploying the Resource Manager template, change the periodic backup options within the `backupPolicy` object: +## Azure Cosmos DB Backup with Azure Synapse Link -```json - "backupPolicy": { - "type": "Periodic", - "periodicModeProperties": { - "backupIntervalInMinutes": 240, - "backupRetentionIntervalInHours": 8, - "backupStorageRedundancy": "Zone" - } -} -``` +For Azure Synapse Link enabled accounts, analytical store data isn't included in the backups and restores. When Azure Synapse Link is enabled, Azure Cosmos DB continues to automatically take backups of your data in the transactional store at a scheduled backup interval. Automatic backup and restore of your data in the analytical store isn't supported at this time. -## Request data restore from a backup +## Understanding the cost of backups -If you accidentally delete your database or a container, you can [file a support ticket](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade) or [call the Azure support](https://azure.microsoft.com/support/options/) to restore the data from automatic online backups. Azure support is available for selected plans only such as **Standard**, **Developer**, and plans higher than those tiers. Azure support isn't available with **Basic** plan. To learn about different support plans, see the [Azure support plans](https://azure.microsoft.com/support/plans/) page. +Two backups are provided free and extra backups are charged according to the region-based pricing for backup storage described in [backup storage pricing](https://azure.microsoft.com/pricing/details/cosmos-db/). -To restore a specific snapshot of the backup, Azure Cosmos DB requires that the data is available during the backup cycle for that snapshot. -You should have the following details before requesting a restore: +For example, consider a scenario where Backup Retention is configured to **240 hrs** (or **10 days**) and Backup Interval is configured to **24 hours**. This configuration implies that there are **10** copies of the backup data. 
-- Have your subscription ID ready.- Based on how your data was accidentally deleted or modified, you should be prepared to provide additional information. It's advised that you have this information available ahead of time to minimize back-and-forth, which can be detrimental in time-sensitive cases.- If the entire Azure Cosmos DB account is deleted, you need to provide the name of the deleted account. If you create another account with the same name as the deleted account, share that with the support team because it helps to determine the right account to choose. It's recommended to file a separate support ticket for each deleted account because doing so minimizes confusion about the state of the restore.- If one or more databases are deleted, you should provide the Azure Cosmos DB account name and the database names, and specify whether a new database with the same name exists.- If one or more containers are deleted, you should provide the Azure Cosmos DB account name, database names, and container names, and specify whether a container with the same name exists.- If you've accidentally deleted or corrupted your data, you should contact [Azure support](https://azure.microsoft.com/support/options/) within 8 hours so that the Azure Cosmos DB team can help you restore the data from the backups. **Before you create a support request to restore the data, make sure to [increase the backup retention](#modify-the-backup-interval-and-retention-period) for your account to at least seven days. It's best to increase your retention within 8 hours of this event.** This way, the Azure Cosmos DB support team has enough time to restore your account.--In addition to the Azure Cosmos DB account name, database names, and container names, you should specify the point in time to use for data restoration. It's important to be as precise as possible to help us determine the best available backups at that time. **It is also important to specify the time in UTC.** --The following screenshot illustrates how to create a support request for a container (collection/graph/table) to restore data by using the Azure portal. Provide other details, such as the type of data, the purpose of the restore, and the time when the data was deleted, to help us prioritize the request. ---## Considerations for restoring the data from a backup --You may accidentally delete or modify your data in one of the following scenarios: --- Delete the entire Azure Cosmos DB account.--- Delete one or more Azure Cosmos DB databases.--- Delete one or more Azure Cosmos DB containers.--- Delete or modify the Azure Cosmos DB items (for example, documents) within a container. This specific case is typically referred to as data corruption.--- A shared offer database or containers within a shared offer database are deleted or corrupted.--Azure Cosmos DB can restore data in all of the above scenarios. A new Azure Cosmos DB account is created to hold the restored data when restoring from a backup. The name of the new account, if it's not specified, has the format `<Azure_Cosmos_account_original_name>-restored1`. The last digit is incremented when multiple restores are attempted. You can't restore data to a precreated Azure Cosmos DB account. --When you accidentally delete an Azure Cosmos DB account, we can restore the data into a new account with the same name, if the account name isn't in use. So, we recommend that you don't re-create the account after deleting it. Re-creating it not only prevents the restored data from using the same name, but also makes it difficult to identify the right account to restore.
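If you want to confirm whether the original account name is still available before requesting the restore, one option is the Azure CLI's name check. This is a sketch and not part of the article's own examples; the account name is a placeholder:

```azurecli-interactive
# Returns true if an account with this name already exists, false if the name is still available
az cosmosdb check-name-exists --name "accountName"
```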
--When you accidentally delete an Azure Cosmos DB database, we can restore the whole database or a subset of the containers within that database. It's also possible to select specific containers across databases and restore them to a new Azure Cosmos DB account. --When you accidentally delete or modify one or more items within a container (the data corruption case), you need to specify the time to restore to. Time is important if there's data corruption: because the container is live, backups continue to run, so if you wait beyond the retention period (the default is eight hours), the backups are overwritten. To prevent the backups from being overwritten, increase the backup retention for your account to at least seven days. It's best to increase your retention within 8 hours of the data corruption. --If you've accidentally deleted or corrupted your data, you should contact [Azure support](https://azure.microsoft.com/support/options/) within 8 hours so that the Azure Cosmos DB team can help you restore the data from the backups. This way, the Azure Cosmos DB support team has enough time to restore your account. --> [!NOTE] -> After you restore the data, not all the source capabilities or settings are carried over to the restored account. The following settings are not carried over to the new account: -> -> - VNET access control lists -> - Stored procedures, triggers, and user-defined functions -> - Multi-region settings -> - Managed identity settings -> --If you assign throughput at the database level, the backup and restore process happens at the database level, not at the individual container level. In such cases, you can't select a subset of containers to restore. --## Required permissions to change retention or restore from the portal +## Required permissions to manage retention or restoration  Principals who are part of the role [CosmosdbBackupOperator](../role-based-access-control/built-in-roles.md#cosmosbackupoperator), owner, or contributor are allowed to request a restore or change the retention period. -## Understanding costs of extra backups --Two backups are provided free and extra backups are charged according to the region-based pricing for backup storage described in [backup storage pricing](https://azure.microsoft.com/pricing/details/cosmos-db/). For example, consider a scenario where Backup Retention is configured to **240 hrs** (or 10 days) and Backup Interval is configured to **24** hrs. This configuration implies that there are 10 copies of the backup data. If you have **1 TB** of data in the West US 2 region, the cost would be `0.12 * 1000 * 8` for backup storage in a given month. --## Get the restore details from the restored account --After the restore operation completes, you may want to know the source account details from which you restored, or the restore time. You can get these details from the Azure portal, PowerShell, or the CLI. --### Use Azure portal --Use the following steps to get the restore details from the Azure portal: --1. Sign in to the [Azure portal](https://portal.azure.com/) and navigate to the restored account. --1. Open the **Tags** page. This page should have the tags **restoredAtTimestamp** and **restoredSourceDatabaseAccountName**. These tags describe the timestamp and the source account name that were used for the periodic restore.
--### Use Azure CLI --Run the following command to get the restore details. The `restoreSourceAccountName` and `restoreTimestamp` fields are within the `tags` field: --```azurecli-interactive -az cosmosdb show --name MyCosmosDBDatabaseAccount --resource-group MyResourceGroup -``` --### Use PowerShell --Import the Az.CosmosDB module and run the following command to get the restore details. The `restoreSourceAccountName` and `restoreTimestamp` fields are within the `tags` field: --```powershell-interactive -Get-AzCosmosDBAccount -ResourceGroupName MyResourceGroup -Name MyCosmosDBDatabaseAccount -``` --## Options to manage your own backups +## Manually managing periodic backups in Azure Cosmos DB  With Azure Cosmos DB API for NoSQL accounts, you can also maintain your own backups by using one of the following approaches: -- Use [Azure Data Factory](../data-factory/connector-azure-cosmos-db.md) to move data periodically to a storage solution of your choice.--- Use Azure Cosmos DB [change feed](change-feed.md) to read data periodically for full backups or for incremental changes, and store it in your own storage.--## Post-restore actions --The primary goal of the data restore is to recover the data that you've accidentally deleted or modified. So, we recommend that you first inspect the content of the recovered data to ensure it contains what you're expecting. If everything looks good, you can migrate the data back to the primary account. Although it's possible to use the restored account as your new active account, it's not a recommended option if you have production workloads. --After you restore the data, you get a notification about the name of the new account (it's typically in the format `<original-name>-restored1`) and the time the account was restored to. The restored account has the same provisioned throughput and indexing policies, and it is in the same region as the original account. A user who is the subscription admin or a coadmin can see the restored account. --### Migrate data to the original account +### Azure Data Factory -The following are different ways to migrate data back to the original account: +Use [Azure Data Factory](../data-factory/connector-azure-cosmos-db.md) to move data periodically to a storage solution of your choice. -- Use the [Azure Data Factory](../data-factory/connector-azure-cosmos-db.md).-- Use the [change feed](change-feed.md) in Azure Cosmos DB.-- You can write your own custom code.+### Azure Cosmos DB change feed -It's advised that you delete the container or database immediately after migrating the data. If you don't delete the restored databases or containers, they incur costs for request units, storage, and egress. +Use Azure Cosmos DB [change feed](change-feed.md) to read data periodically for full backups or for incremental changes, and store it in your own storage. ## Next steps -- To make a restore request, contact Azure Support by [filing a ticket in the Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade).-- [Create account with continuous backup](provision-account-continuous-backup.md).-- [Restore continuous backup account](restore-account-continuous-backup.md).+> [!div class="nextstepaction"] +> [Periodic backup storage redundancy](periodic-backup-storage-redundancy.md) |
cosmos-db | Periodic Backup Storage Redundancy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/periodic-backup-storage-redundancy.md | + + Title: Periodic backup storage redundancy ++description: Learn how to configure Azure Storage-based data redundancy for periodic backup in Azure Cosmos DB accounts. +++++ Last updated : 03/21/2023++++# Periodic backup storage redundancy in Azure Cosmos DB +++By default, Azure Cosmos DB stores periodic mode backup data in geo-redundant [Azure Blob Storage](../storage/common/storage-redundancy.md). The blob storage is then, by default, replicated to a [paired region](../availability-zones/cross-region-replication-azure.md). You can update this default value using Azure PowerShell or Azure CLI and define an Azure policy to enforce a specific storage redundancy option. For more information, see [update backup storage redundancy](periodic-backup-update-storage-redundancy.md). ++## Best practices ++Change the default geo-redundant backup storage to ensure that your backup data stays within the same region where your Azure Cosmos DB account is provisioned. You can configure the backup to use either locally redundant or zone-redundant storage instead. Storage redundancy mechanisms store multiple copies of your backups so that your backup data is protected from planned and unplanned events. These events can include transient hardware failure, network or power outages, or massive natural disasters. ++## Redundancy options ++You can configure storage redundancy for periodic backup mode at the time of account creation or update it for an existing account. You can use the following three data redundancy options in periodic backup mode: ++- **Geo-redundant backup storage:** This option copies your data asynchronously across the paired region. ++- **Zone-redundant backup storage:** This option copies your data synchronously across three Azure availability zones in the primary region. For more information, see [Zone-redundant storage.](../storage/common/storage-redundancy.md#redundancy-in-the-primary-region) ++- **Locally redundant backup storage:** This option copies your data synchronously three times within a single physical location in the primary region. For more information, see [locally redundant storage.](../storage/common/storage-redundancy.md#redundancy-in-the-primary-region) ++> [!NOTE] +> Zone-redundant storage is currently available only in [specific regions](../availability-zones/az-region.md). Depending on the region you select for a new account, or the region of an existing account, the zone-redundant option might not be available. +> +> Updating backup storage redundancy doesn't have any impact on backup storage pricing. ++## Next steps ++> [!div class="nextstepaction"] +> [Update the redundancy of backup storage](periodic-backup-update-storage-redundancy.md) |
cosmos-db | Periodic Backup Update Storage Redundancy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/periodic-backup-update-storage-redundancy.md | Title: Update backup storage redundancy for Azure Cosmos DB periodic backup accounts -description: Learn how to update the backup storage redundancy using Azure CLI and PowerShell. You can also configure an Azure policy on your accounts to enforce the required storage redundancy. + Title: Update periodic backup storage redundancy ++description: Update the backup storage redundancy using Azure CLI or Azure PowerShell and enforce a minimum storage redundancy using Azure Policy. ++ - Previously updated : 12/03/2021-- Last updated : 03/21/2023+ -# Update backup storage redundancy for Azure Cosmos DB periodic backup accounts +# Update periodic backup storage redundancy for Azure Cosmos DB + [!INCLUDE[NoSQL, MongoDB, Cassandra, Gremlin, Table](includes/appliesto-nosql-mongodb-cassandra-gremlin-table.md)] By default, Azure Cosmos DB stores periodic mode backup data in geo-redundant [blob storage](../storage/common/storage-redundancy.md) that is replicated to a [paired region](../availability-zones/cross-region-replication-azure.md). You can override the default backup storage redundancy. This article explains how to update the backup storage redundancy using Azure CLI and PowerShell. It also shows how to configure an Azure policy on your accounts to enforce the required storage redundancy. -## Update using Azure portal -Use the following steps to update backup storage redundancy from the Azure portal: +## Prerequisites -- An existing Azure Cosmos DB account. + - If you have an Azure subscription, [create a new account](nosql/how-to-create-account.md?tabs=azure-portal). + - If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. + - Alternatively, you can [try Azure Cosmos DB free](try-free.md) before you commit. ++## Update storage redundancy ++Use the following steps to update backup storage redundancy. ++### [Azure portal](#tab/azure-portal) 1. Sign in to the [Azure portal](https://portal.azure.com/) and navigate to your Azure Cosmos DB account. -1. Open the **Backup & Restore** pane, update the backup storage redundancy and select **Submit**. It takes a few minutes for the operation to complete: +1. Open the **Backup & Restore** pane, update the backup storage redundancy and select **Submit**. It takes a few minutes for the operation to complete. - :::image type="content" source="./media/update-backup-storage-redundancy/update-backup-storage-redundancy-portal.png" alt-text="Update backup storage redundancy from the Azure portal"::: + :::image type="content" source="./media/periodic-backup-update-storage-redundancy/update-existing-account-portal.png" lightbox="./media/periodic-backup-update-storage-redundancy/update-existing-account-portal.png" alt-text="Screenshot of the update backup storage redundancy page from the Azure portal.":::