Updates from: 01/06/2023 02:07:04
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Claimsschema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/claimsschema.md
The following example configures an **email** claim with regular expression input validation:
<UserHelpText>Email address that can be used to contact you.</UserHelpText>
<UserInputType>TextBox</UserInputType>
<Restriction>
- <Pattern RegularExpression="^[a-zA-Z0-9.+!#$%&amp;'^_`{}~-]+@[a-zA-Z0-9-]+(?:\.[a-zA-Z0-9-]+)*$" HelpText="Please enter a valid email address." />
+ <Pattern RegularExpression="^[a-zA-Z0-9.+!#$%&amp;'+^_`{}~-]+(?:\.[a-zA-Z0-9!#$%&amp;'+^_`{}~-]+)*@(?:[a-zA-Z0-9](?:[a-zA-Z0-9-]*[a-zA-Z0-9])?\.)+[a-zA-Z0-9](?:[a-zA-Z0-9-]*[a-zA-Z0-9])?$" HelpText="Please enter a valid email address." />
</Restriction>
</ClaimType>
```
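As a quick, unofficial sanity check, the updated pattern can be exercised outside the policy file; note that the XML entity `&amp;` decodes to a plain `&` in the character class. A Python sketch:

```python
import re

# The updated pattern from the diff above, with "&amp;" decoded to "&"
# for use outside the XML policy file.
EMAIL_RE = re.compile(
    r"^[a-zA-Z0-9.+!#$%&'+^_`{}~-]+(?:\.[a-zA-Z0-9!#$%&'+^_`{}~-]+)*"
    r"@(?:[a-zA-Z0-9](?:[a-zA-Z0-9-]*[a-zA-Z0-9])?\.)+"
    r"[a-zA-Z0-9](?:[a-zA-Z0-9-]*[a-zA-Z0-9])?$"
)

print(bool(EMAIL_RE.match("jane.doe@contoso.com")))  # True
print(bool(EMAIL_RE.match("jane@contoso")))          # False: domain needs a dot-separated label
print(bool(EMAIL_RE.match("@contoso.com")))          # False: empty local part
```

Unlike the previous pattern, the new one rejects domain labels that start or end with a hyphen.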
active-directory-b2c Oauth2 Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/oauth2-technical-profile.md
For identity providers that support private key JWT authentication, configure the technical profile metadata:
```xml
<Item Key="AccessTokenEndpoint">https://contoso.com/oauth2/token</Item>
-<Item Key="token_endpoint_auth_method">client_secret_basic</Item>
+<Item Key="token_endpoint_auth_method">private_key_jwt</Item>
<Item Key="token_signing_algorithm">RS256</Item>
```
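For background, `private_key_jwt` means the client authenticates with a signed JWT assertion (RFC 7523) rather than a shared secret. The following Python sketch only illustrates the *shape* of such an assertion; the client ID is a placeholder, and a real implementation must sign the token with the client's private key using the configured `token_signing_algorithm` (RS256):

```python
import base64
import json
import time
import uuid

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWTs require."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# Header and standard claims of a private_key_jwt client assertion (RFC 7523).
# The signature segment is omitted in this sketch; in production it is
# computed over header.claims with the client's private key.
header = {"alg": "RS256", "typ": "JWT"}
claims = {
    "iss": "your-client-id",                    # placeholder client ID
    "sub": "your-client-id",
    "aud": "https://contoso.com/oauth2/token",  # the AccessTokenEndpoint above
    "jti": str(uuid.uuid4()),
    "exp": int(time.time()) + 600,
}
unsigned_assertion = b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(claims).encode())
print(unsigned_assertion[:40] + "...")
```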
active-directory Howto Mfa Nps Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-nps-extension.md
When you install the extension, you need the *Tenant ID* and admin credentials for your Azure AD tenant.
### Network requirements
-The NPS server must be able to communicate with the following URLs over ports 80 and 443:
+The NPS server must be able to communicate with the following URLs over TCP port 443:
-* *https:\//strongauthenticationservice.auth.microsoft.com*
-* *https:\//strongauthenticationservice.auth.microsoft.us*
-* *https:\//strongauthenticationservice.auth.microsoft.cn*
+* *https:\//strongauthenticationservice.auth.microsoft.com* (for Azure Public cloud customers).
+* *https:\//strongauthenticationservice.auth.microsoft.us* (for Azure Government customers).
+* *https:\//strongauthenticationservice.auth.microsoft.cn* (for Azure China 21Vianet customers).
* *https:\//adnotifications.windowsazure.com*
* *https:\//login.microsoftonline.com*
* *https:\//credentials.azure.com*
There are two factors that affect which authentication methods are available with the NPS extension:
* The password encryption algorithm used between the RADIUS client (VPN, Netscaler server, or other) and the NPS servers.
 - **PAP** supports all the authentication methods of Azure AD Multi-Factor Authentication in the cloud: phone call, one-way text message, mobile app notification, OATH hardware tokens, and mobile app verification code.
- - **CHAPV2** and **EAP** support phone call and mobile app notification.
-
- > [!NOTE]
- > When you deploy the NPS extension, use these factors to evaluate which methods are available for your users. If your RADIUS client supports PAP, but the client UX doesn't have input fields for a verification code, then phone call and mobile app notification are the two supported options.
- >
- > Also, regardless of the authentication protocol that's used (PAP, CHAP, or EAP), if your MFA method is text-based (SMS, mobile app verification code, or OATH hardware token) and requires the user to enter a code or text in the VPN client UI input field, the authentication might succeed. *But* any RADIUS attributes that are configured in the Network Access Policy are *not* forwarded to the RADIUS client (the Network Access Device, like the VPN gateway). As a result, the VPN client might have more access than you want it to have, or less access or no access.
- >
- > As a workaround, you can run the [CrpUsernameStuffing script](https://github.com/OneMoreNate/CrpUsernameStuffing) to forward RADIUS attributes that are configured in the Network Access Policy and allow MFA when the user's authentication method requires the use of a One-Time Passcode (OTP), such as SMS, a Microsoft Authenticator passcode, or a hardware FOB.
-
+ - **CHAPV2** and **EAP** support phone call and mobile app notification.
* The input methods that the client application (VPN, Netscaler server, or other) can handle. For example, does the VPN client have some means to allow the user to type in a verification code from a text or mobile app? You can [disable unsupported authentication methods](howto-mfa-mfasettings.md#verification-methods) in Azure.
+ > [!NOTE]
+ > Regardless of the authentication protocol that's used (PAP, CHAP, or EAP), if your MFA method is text-based (SMS, mobile app verification code, or OATH hardware token) and requires the user to enter a code or text in the VPN client UI input field, the authentication might succeed. *But* any RADIUS attributes that are configured in the Network Access Policy are *not* forwarded to the RADIUS client (the Network Access Device, like the VPN gateway). As a result, the VPN client might have more access than you want it to have, or less access or no access.
+ >
+ > As a workaround, you can run the [CrpUsernameStuffing script](https://github.com/OneMoreNate/CrpUsernameStuffing) to forward RADIUS attributes that are configured in the Network Access Policy and allow MFA when the user's authentication method requires the use of a One-Time Passcode (OTP), such as SMS, a Microsoft Authenticator passcode, or a hardware FOB.
+ ### Register users for MFA
Before you deploy and use the NPS extension, users that are required to perform Azure AD Multi-Factor Authentication need to be registered for MFA. To test the extension as you deploy it, you also need at least one test account that is fully registered for Azure AD Multi-Factor Authentication.
A VPN server may send repeated requests to the NPS server if the timeout value is too low.
For more information on why you see discarded packets in the NPS server logs, see [RADIUS protocol behavior and the NPS extension](#radius-protocol-behavior-and-the-nps-extension) at the start of this article.
+### How do I get Microsoft Authenticator number matching to work with NPS?
+Make sure you run the latest version of the NPS extension. NPS extension versions beginning with 1.0.1.40 support number matching.
+
+Because the NPS extension can't show a number, a user who is enabled for number matching will still be prompted to Approve/Deny. However, you can create a registry key that overrides push notifications to ask a user to enter a One-Time Passcode (OTP). The user must have an OTP authentication method registered to see this behavior. Common OTP authentication methods include the OTP available in the Authenticator app, other software tokens, and so on.
+
+If the user doesn't have an OTP method registered, they'll continue to get the Approve/Deny experience. A user with number matching disabled will always see the Approve/Deny experience.
+
+To create the registry key that overrides push notifications:
+1. On the NPS Server, open the Registry Editor.
+2. Navigate to `HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\AzureMfa`.
+3. Set the following key-value pair: Key: `OVERRIDE_NUMBER_MATCHING_WITH_OTP`, Value: `TRUE`.
+4. Restart the NPS Service.
+ ## Managing the TLS/SSL Protocols and Cipher Suites
It's recommended that older and weaker cipher suites be disabled or removed unless required by your organization. Information on how to complete this task can be found in the article [Managing SSL/TLS Protocols and Cipher Suites for AD FS](/windows-server/identity/ad-fs/operations/manage-ssl-protocols-in-ad-fs).
active-directory Active Directory Authentication Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/active-directory-authentication-libraries.md
Title: Azure Active Directory Authentication Libraries | Microsoft Docs
+ Title: Azure Active Directory Authentication Libraries
description: The Azure AD Authentication Library (ADAL) allows client application developers to easily authenticate users to cloud or on-premises Active Directory (AD) and then obtain access tokens for securing API calls.
- Previously updated : 12/01/2018
Last updated : 12/29/2022
The Azure Active Directory Authentication Library (ADAL) v1.0 enables applicatio
> [!WARNING]
-> Support for Active Directory Authentication Library (ADAL) will end in December, 2022. Apps using ADAL on existing OS versions will continue to work, but technical support and security updates will end. Without continued security updates, apps using ADAL will become increasingly vulnerable to the latest security attack patterns. For more information, see [Migrate apps to MSAL](..\develop\msal-migration.md).
+> Support for Active Directory Authentication Library (ADAL) [will end](https://aka.ms/adal-eos) in June 2023. Apps using ADAL on existing OS versions will continue to work, but technical support and security updates will end. Without continued security updates, apps using ADAL will become increasingly vulnerable to the latest security attack patterns. For more information, see [Migrate apps to MSAL](..\develop\msal-migration.md).
## Microsoft-supported Client Libraries
active-directory Onboard Enable Controller After Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-enable-controller-after-onboarding.md
This article also describes how to enable the controller in Amazon Web Services
The **Cloud Infrastructure Entitlement Management assignments** page appears, displaying the roles assigned to you.
 - If you have read-only permission, the **Role** column displays **Reader**.
- - If you have administrative permission, the **Role** column displays **User Access Administrative**.
+ - If you have administrative permission, the **Role** column displays **User Access Administrator**.
1. To add the administrative role assignment, return to the **Access control (IAM)** page, and then select **Add role assignment**.
1. Add or remove the role assignment for Cloud Infrastructure Entitlement Management.
active-directory Active Directory Claims Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-claims-mapping.md
In this example, you create a policy that emits a custom claim "JoinedData" to J
1. To create the policy, run the following command:

```powershell
- New-AzureADPolicy -Definition @('{"ClaimsMappingPolicy":{"Version":1,"IncludeBasicClaimSet":"true", "ClaimsSchema":[{"Source":"user","ID":"extensionattribute1"},{"Source":"transformation","ID":"DataJoin","TransformationId":"JoinTheData","JwtClaimType":"JoinedData"}],"ClaimsTransformation":[{"ID":"JoinTheData","TransformationMethod":"Join","InputClaims":[{"ClaimTypeReferenceId":"extensionattribute1","TransformationClaimType":"string1"}], "InputParameters": [{"ID":"string2","Value":"sandbox"},{"ID":"separator","Value":"."}],"OutputClaims":[{"ClaimTypeReferenceId":"DataJoin","TransformationClaimType":"outputClaim"}]}]}}') -DisplayName "TransformClaimsExample" -Type "ClaimsMappingPolicy"
+ -
```
2. To see your new policy, and to get the policy ObjectId, run the following command:
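To make the transformation concrete: the `JoinTheData` transformation concatenates the user's `extensionattribute1` with the static string `sandbox` using `.` as the separator. A tiny Python model of that behavior (not Azure AD code; the input value is hypothetical):

```python
def join_transform(input_claim: str, string2: str = "sandbox", separator: str = ".") -> str:
    """Models the ClaimsTransformation "JoinTheData" from the policy above:
    output = <extensionattribute1><separator><string2>."""
    return f"{input_claim}{separator}{string2}"

# If extensionattribute1 is "contoso-user", the emitted JoinedData claim is:
print(join_transform("contoso-user"))  # contoso-user.sandbox
```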
active-directory Active Directory Configurable Token Lifetimes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-configurable-token-lifetimes.md
Refresh and session token configuration are affected by the following properties
|Single-Factor Session Token Max Age |MaxAgeSessionSingleFactor |Session tokens (persistent and nonpersistent) |Until-revoked |
|Multi-Factor Session Token Max Age |MaxAgeSessionMultiFactor |Session tokens (persistent and nonpersistent) |Until-revoked |
-Non-persistent session tokens have a Max Inactive Time of 24 hours whereas persistent session tokens have a Max Inactive Time of 180 days. Any time the SSO session token is used within its validity period, the validity period is extended another 24 hours or 180 days. If the SSO session token is not used within its Max Inactive Time period, it is considered expired and will no longer be accepted. Any changes to this default periods should be change using [Conditional Access](../conditional-access/howto-conditional-access-session-lifetime.md).
+Non-persistent session tokens have a Max Inactive Time of 24 hours whereas persistent session tokens have a Max Inactive Time of 90 days. Any time the SSO session token is used within its validity period, the validity period is extended by another 24 hours or 90 days. If the SSO session token is not used within its Max Inactive Time period, it is considered expired and is no longer accepted. Any changes to these default periods should be made using [Conditional Access](../conditional-access/howto-conditional-access-session-lifetime.md).
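The sliding-window behavior described above can be modeled in a few lines of Python (illustrative only; the actual expiry is computed by Azure AD):

```python
from datetime import datetime, timedelta

def next_expiry(last_used: datetime, persistent: bool) -> datetime:
    """Each use of the SSO session token within its validity window
    extends expiry by 24 hours (non-persistent) or 90 days (persistent)."""
    window = timedelta(days=90) if persistent else timedelta(hours=24)
    return last_used + window

used = datetime(2023, 1, 1, 12, 0)
print(next_expiry(used, persistent=False))  # 2023-01-02 12:00:00
print(next_expiry(used, persistent=True))   # 2023-04-01 12:00:00
```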
You can use PowerShell to find the policies that will be affected by the retirement. Use the [PowerShell cmdlets](configure-token-lifetimes.md#get-started) to see all the policies created in your organization, or to find which apps and service principals are linked to a specific policy.
active-directory Msal Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-migration.md
description: Learn about the differences between the Microsoft Authentication Li
- Previously updated : 03/03/2022
Last updated : 12/29/2022
-# Customer intent: As an application developer, I want to learn about MSAL library so I can migrate my ADAL applications to MSAL.
+# Customer intent: As an application developer, I want to learn about MSAL so I can migrate my ADAL applications to MSAL.
# Migrate applications to the Microsoft Authentication Library (MSAL)
If any of your applications use the Azure Active Directory Authentication Library (ADAL) for authentication and authorization functionality, it's time to migrate them to the [Microsoft Authentication Library (MSAL)](msal-overview.md#languages-and-frameworks).
-- All Microsoft support and development for ADAL, including security fixes, ends in December, 2022.
-- There are no ADAL feature releases or new platform version releases planned prior to December, 2022.
+- All Microsoft support and development for ADAL, including security fixes, ends in June 2023.
+- There are no ADAL feature releases or new platform version releases planned prior to June 2023.
- No new features have been added to ADAL since June 30, 2020.
> [!WARNING]
-> If you choose not to migrate to MSAL before ADAL support ends in December, 2022, you put your app's security at risk. Existing apps that use ADAL will continue to work after the end-of-support date, but Microsoft will no longer release security fixes on ADAL.
+> If you choose not to migrate to MSAL before ADAL support ends in June 2023, you put your app's security at risk. Existing apps that use ADAL will continue to work after the end-of-support date, but Microsoft will no longer release security fixes on ADAL. Learn more in [the official announcement](https://aka.ms/adal-eos).
## Why switch to MSAL?
MSAL provides multiple benefits over ADAL, including the following features:
|Features|MSAL|ADAL|
|---|---|---|
|**Security**|||
-|Security fixes beyond December, 2022|![Security fixes beyond December, 2022 - MSAL provides the feature][y]|![Security fixes beyond December, 2022 - ADAL doesn't provide the feature][n]|
+|Security fixes beyond June 2023|![Security fixes beyond June 2023 - MSAL provides the feature][y]|![Security fixes beyond June 2023 - ADAL doesn't provide the feature][n]|
| Proactively refresh and revoke tokens based on policy or critical events for Microsoft Graph and other APIs that support [Continuous Access Evaluation (CAE)](app-resilience-continuous-access-evaluation.md).|![Proactively refresh and revoke tokens based on policy or critical events for Microsoft Graph and other APIs that support Continuous Access Evaluation (CAE) - MSAL provides the feature][y]|![Proactively refresh and revoke tokens based on policy or critical events for Microsoft Graph and other APIs that support Continuous Access Evaluation (CAE) - ADAL doesn't provide the feature][n]|
| Standards compliant with OAuth v2.0 and OpenID Connect (OIDC) |![Standards compliant with OAuth v2.0 and OpenID Connect (OIDC) - MSAL provides the feature][y]|![Standards compliant with OAuth v2.0 and OpenID Connect (OIDC) - ADAL doesn't provide the feature][n]|
|**User accounts and experiences**|||
active-directory Permissions Consent Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/permissions-consent-overview.md
There are other ways in which applications can be granted authorization for app-only access.
| Types of apps | Web / Mobile / single-page app (SPA) | Web / Daemon |
| Access context | Get access on behalf of a user | Get access without a user |
| Who can consent | - Users can consent for their data <br> - Admins can consent for all users | Only admin can consent |
+| Consent methods | - Static: configured list on app registration <br> - Dynamic: request individual permissions at login | - Static ONLY: configured list on app registration |
| Other names | - Scopes <br> - OAuth2 permission scopes | - App roles <br> - App-only permissions |
| Result of consent (specific to Microsoft Graph) | [oAuth2PermissionGrant](/graph/api/resources/oauth2permissiongrant) | [appRoleAssignment](/graph/api/resources/approleassignment) |
One way that applications are granted permissions is through consent. Consent is
- When previously granted consent is revoked.
- When the application is coded to specifically prompt for consent during every sign-in.
-- When the application uses incremental or dynamic consent to ask for some permissions upfront and more permission later as needed.
+- When the application uses dynamic consent to ask for new permissions as needed at run time.
The key details of a consent prompt are the list of permissions the application requires and the publisher information. For more information about the consent prompt and the consent experience for both admins and end-users, see [application consent experience](application-consent-experience.md).
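Dynamic consent in practice means requesting individual delegated permissions through the `scope` parameter of the authorize request at sign-in, rather than relying solely on the statically configured list. A sketch of such a request URL in Python (the client ID and redirect URI are placeholders):

```python
from urllib.parse import urlencode

# Dynamic consent: delegated permissions are requested at sign-in via the
# "scope" parameter rather than only via the app registration's static list.
params = {
    "client_id": "00000000-0000-0000-0000-000000000000",  # placeholder
    "response_type": "code",
    "redirect_uri": "http://localhost/myapp/",
    "scope": "openid profile User.Read Mail.Read",  # requested at run time
}
authorize_url = (
    "https://login.microsoftonline.com/common/oauth2/v2.0/authorize?" + urlencode(params)
)
print(authorize_url)
```

Adding a new scope here later (for example `Mail.Send`) triggers a fresh consent prompt for just that permission.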
active-directory Tutorial V2 Nodejs Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-nodejs-console.md
CLIENT_ID=Enter_the_Application_Id_Here
CLIENT_SECRET=Enter_the_Client_Secret_Here

# Endpoints
-AAD_ENDPOINT=Enter_the_Cloud_Instance_Id_Here
-GRAPH_ENDPOINT=Enter_the_Graph_Endpoint_Here
+AAD_ENDPOINT=Enter_the_Cloud_Instance_Id_Here/
+GRAPH_ENDPOINT=Enter_the_Graph_Endpoint_Here/
```
Fill in these details with the values you obtain from the Azure app registration portal:
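The trailing slash added in the diff matters when the sample builds URLs by simple string concatenation; without it, the host and the next path segment would fuse. A minimal illustration (in Python for brevity; the tutorial itself is Node.js, and the tenant value is a placeholder):

```python
# Why the trailing slash matters: endpoint values are concatenated with
# path segments, so a missing "/" would fuse host and path.
AAD_ENDPOINT = "https://login.microsoftonline.com/"  # trailing slash, per the diff
TENANT_ID = "common"                                 # placeholder

authority = AAD_ENDPOINT + TENANT_ID
print(authority)  # https://login.microsoftonline.com/common
```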
active-directory V2 Protocols Oidc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-protocols-oidc.md
post_logout_redirect_uri=http%3A%2F%2Flocalhost%2Fmyapp%2F
| `post_logout_redirect_uri` | Recommended | The URL that the user is redirected to after successfully signing out. If the parameter isn't included, the user is shown a generic message that's generated by the Microsoft identity platform. This URL must match one of the redirect URIs registered for your application in the app registration portal. |
| `logout_hint` | Optional | Enables sign-out to occur without prompting the user to select an account. To use `logout_hint`, enable the `login_hint` [optional claim](active-directory-optional-claims.md) in your client application and use the value of the `login_hint` optional claim as the `logout_hint` parameter. Don't use UPNs or phone numbers as the value of the `logout_hint` parameter. |
+> [!NOTE]
+> After successful sign-out, active sessions are set to inactive. If a valid Primary Refresh Token (PRT) exists for the signed-out user and a new sign-in is executed, SSO is interrupted and the user sees a prompt with an account picker. If the selected option is the account connected to the PRT, sign-in proceeds automatically without the user entering fresh credentials.
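Putting the parameters together, a sign-out request can be assembled like this (Python for brevity; the tenant, redirect URI, and hint values are placeholders):

```python
from urllib.parse import urlencode

# Building a sign-out request with the parameters described above.
params = {
    "post_logout_redirect_uri": "http://localhost/myapp/",
    "logout_hint": "login-hint-claim-value",  # value of the login_hint optional claim
}
logout_url = (
    "https://login.microsoftonline.com/common/oauth2/v2.0/logout?" + urlencode(params)
)
print(logout_url)
```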
+ ## Single sign-out
When you redirect the user to the `end_session_endpoint`, the Microsoft identity platform clears the user's session from the browser. However, the user may still be signed in to other applications that use Microsoft accounts for authentication. To enable those applications to sign the user out simultaneously, the Microsoft identity platform sends an HTTP GET request to the registered `LogoutUrl` of all the applications that the user is currently signed in to. Applications must respond to this request by clearing any session that identifies the user and returning a `200` response. If you wish to support single sign-out in your application, you must implement such a `LogoutUrl` in your application's code. You can set the `LogoutUrl` from the app registration portal.
When you redirect the user to the `end_session_endpoint`, the Microsoft identity
* Review the [UserInfo endpoint documentation](userinfo.md).
* [Populate claim values in a token](active-directory-claims-mapping.md) with data from on-premises systems.
-* [Include your own claims in tokens](active-directory-optional-claims.md).
+* [Include your own claims in tokens](active-directory-optional-claims.md).
active-directory Groups Dynamic Rule Member Of https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-dynamic-rule-member-of.md
Only administrators in the Global Administrator, Intune Administrator, or User Administrator
- MemberOf can't be used with other rules. For example, a rule that states dynamic group A should contain members of group B and also should contain only users located in Redmond will fail.
- Dynamic group rule builder and validate feature can't be used for memberOf at this time.
- MemberOf can't be used with other operators. For example, you can't create a rule that states "Members Of group A can't be in Dynamic group B."
-- The objects specified in the rule can't be administrative units.

## Getting started
active-directory How To Connect Health Agent Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-health-agent-install.md
To configure the Azure AD Connect Health agent to use an HTTP proxy, you can:
> [!NOTE]
> To update the proxy settings, you must restart all Azure AD Connect Health agent services. Run the following command:
>
-> `Restart-Service AzureADConnectHealth*`
+> `Restart-Service AdHealthAdfs*`
#### Import existing proxy settings
active-directory How To Connect Install Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-install-prerequisites.md
To read more about securing your Active Directory environment, see [Best practic
#### Installation prerequisites

-- Azure AD Connect must be installed on a domain-joined Windows Server 2016 or later - note that Windows Server 2022 is not yet supported. You can deploy Azure AD Connect on Windows Server 2016 but since WS2016 is in extended support, you may require [a paid support program](/lifecycle/policies/fixed#extended-support) if you require support for this configuration.
+- Azure AD Connect must be installed on a domain-joined Windows Server 2016 or later - **note that Windows Server 2022 is not yet supported**. You can deploy Azure AD Connect on Windows Server 2016, but since Windows Server 2016 is in extended support, you may need [a paid support program](/lifecycle/policies/fixed#extended-support) if you require support for this configuration. We recommend using domain-joined Windows Server 2019.
- The minimum .NET Framework version required is 4.6.2; newer versions of .NET are also supported.
- Azure AD Connect can't be installed on Small Business Server or Windows Server Essentials before 2019 (Windows Server Essentials 2019 is supported). The server must be using Windows Server Standard or better.
- The Azure AD Connect server must have a full GUI installed. Installing Azure AD Connect on Windows Server Core isn't supported.
active-directory How To Connect Pta Upgrade Preview Authentication Agents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-pta-upgrade-preview-authentication-agents.md
You need to upgrade Azure AD Connect before upgrading the Authentication Agent on t
1. **Upgrade Azure AD Connect**: Follow this [article](how-to-upgrade-previous-version.md) and upgrade to the latest Azure AD Connect version.
2. **Uninstall the preview version of the Authentication Agent**: Download [this PowerShell script](https://aka.ms/rmpreviewagent) and run it as an Administrator on the server.
-3. **Download the latest version of the Authentication Agent (versions 1.5.389.0 or later)**: Sign in to the [Azure Active Directory admin center](https://aad.portal.azure.com) with your tenant's Global Administrator credentials. Select **Azure Active Directory -> Azure AD Connect -> Pass-through Authentication -> Download agent**. Accept the [terms of service](https://aka.ms/authagenteula) and download the latest version of the Authentication Agent. You can also download the Authentication Agent from [here](https://aka.ms/getauthagent).
+3. **Download the latest version of the Authentication Agent (versions 1.5.2482.0 or later)**: Sign in to the [Azure Active Directory admin center](https://aad.portal.azure.com) with your tenant's Global Administrator credentials. Select **Azure Active Directory -> Azure AD Connect -> Pass-through Authentication -> Download agent**. Accept the [terms of service](https://aka.ms/authagenteula) and download the latest version of the Authentication Agent. You can also download the Authentication Agent from [here](https://aka.ms/getauthagent).
4. **Install the latest version of the Authentication Agent**: Run the executable downloaded in Step 3. Provide your tenant's Global Administrator credentials when prompted.
5. **Verify that the latest version has been installed**: As shown before, go to **Control Panel -> Programs -> Programs and Features** and verify that there is an entry for "**Microsoft Azure AD Connect Authentication Agent**".
You need to upgrade Azure AD Connect before upgrading the Authentication Agent on t
Follow these steps to upgrade Authentication Agents on other servers (where Azure AD Connect is not installed):
1. **Uninstall the preview version of the Authentication Agent**: Download [this PowerShell script](https://aka.ms/rmpreviewagent) and run it as an Administrator on the server.
-2. **Download the latest version of the Authentication Agent (versions 1.5.389.0 or later)**: Sign in to the [Azure Active Directory admin center](https://aad.portal.azure.com) with your tenant's Global Administrator credentials. Select **Azure Active Directory -> Azure AD Connect -> Pass-through Authentication -> Download agent**. Accept the terms of service and download the latest version.
+2. **Download the latest version of the Authentication Agent (versions 1.5.2482.0 or later)**: Sign in to the [Azure Active Directory admin center](https://aad.portal.azure.com) with your tenant's Global Administrator credentials. Select **Azure Active Directory -> Azure AD Connect -> Pass-through Authentication -> Download agent**. Accept the terms of service and download the latest version.
3. **Install the latest version of the Authentication Agent**: Run the executable downloaded in Step 2. Provide your tenant's Global Administrator credentials when prompted.
4. **Verify that the latest version has been installed**: As shown before, go to **Control Panel -> Programs -> Programs and Features** and verify that there is an entry called **Microsoft Azure AD Connect Authentication Agent**.
active-directory Prevent Domain Hints With Home Realm Discovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/prevent-domain-hints-with-home-realm-discovery.md
Admins of federated domains should set up this section of the HRD policy in a fo
::: zone pivot="powershell-hrd"

```powershell
-New-AzureADPolicy -Definition @("{`"DomainHintPolicy`": { `"IgnoreDomainHintForDomains`": [ `"testDomain.com`" ], `"RespectDomainHintForDomains`": [], `"IgnoreDomainHintForApps`": [], `"RespectDomainHintForApps`": [] } }") -DisplayName BasicBlockAccelerationPolicy -Type HomeRealmDiscoveryPolicy
+New-AzureADPolicy -Definition @("{`"HomeRealmDiscoveryPolicy`":{`"DomainHintPolicy`": { `"IgnoreDomainHintForDomains`": [ `"testDomain.com`" ], `"RespectDomainHintForDomains`": [], `"IgnoreDomainHintForApps`": [], `"RespectDomainHintForApps`": [] } } }") -DisplayName BasicBlockAccelerationPolicy -Type HomeRealmDiscoveryPolicy
```

::: zone-end
New-AzureADPolicy -Definition @("{`"DomainHintPolicy`": { `"IgnoreDomainHintForD
::: zone pivot="powershell-hrd"

```powershell
-New-AzureADPolicy -Definition @("{`"DomainHintPolicy`": { `"IgnoreDomainHintForDomains`": [ `"testDomain.com`" ], `"RespectDomainHintForDomains`": [], `"IgnoreDomainHintForApps`": [], `"RespectDomainHintForApps`": ["app1-clientID-Guid", "app2-clientID-Guid] } }") -DisplayName BasicBlockAccelerationPolicy -Type HomeRealmDiscoveryPolicy
+New-AzureADPolicy -Definition @("{`"HomeRealmDiscoveryPolicy`":{`"DomainHintPolicy`": { `"IgnoreDomainHintForDomains`": [ `"testDomain.com`" ], `"RespectDomainHintForDomains`": [], `"IgnoreDomainHintForApps`": [], `"RespectDomainHintForApps`": [`"app1-clientID-Guid`", `"app2-clientID-Guid`"] } } }") -DisplayName BasicBlockAccelerationPolicy -Type HomeRealmDiscoveryPolicy
```

::: zone-end
New-AzureADPolicy -Definition @("{`"DomainHintPolicy`": { `"IgnoreDomainHintForD
::: zone pivot="powershell-hrd"

```powershell
-New-AzureADPolicy -Definition @("{`"DomainHintPolicy`": { `"IgnoreDomainHintForDomains`": [ `"testDomain.com`", "otherDomain.com", "anotherDomain.com"], `"RespectDomainHintForDomains`": [], `"IgnoreDomainHintForApps`": [], `"RespectDomainHintForApps`": ["app1-clientID-Guid", "app2-clientID-Guid] } }") -DisplayName BasicBlockAccelerationPolicy -Type HomeRealmDiscoveryPolicy
+New-AzureADPolicy -Definition @("{`"HomeRealmDiscoveryPolicy`":{`"DomainHintPolicy`": { `"IgnoreDomainHintForDomains`": [ `"testDomain.com`", `"otherDomain.com`", `"anotherDomain.com`" ], `"RespectDomainHintForDomains`": [], `"IgnoreDomainHintForApps`": [], `"RespectDomainHintForApps`": [`"app1-clientID-Guid`", `"app2-clientID-Guid`"] } } }") -DisplayName BasicBlockAccelerationPolicy -Type HomeRealmDiscoveryPolicy
```

::: zone-end
New-AzureADPolicy -Definition @("{`"DomainHintPolicy`": { `"IgnoreDomainHintForD
::: zone pivot="powershell-hrd"

```powershell
-New-AzureADPolicy -Definition @("{`"DomainHintPolicy`": { `"IgnoreDomainHintForDomains`": [ `"*`" ], `"RespectDomainHintForDomains`": [guestHandlingDomain.com], `"IgnoreDomainHintForApps`": [], `"RespectDomainHintForApps`": ["app1-clientID-Guid", "app2-clientID-Guid] } }") -DisplayName BasicBlockAccelerationPolicy -Type HomeRealmDiscoveryPolicy
+New-AzureADPolicy -Definition @("{`"HomeRealmDiscoveryPolicy`":{`"DomainHintPolicy`": { `"IgnoreDomainHintForDomains`": [ `"*`" ], `"RespectDomainHintForDomains`": [ `"guestHandlingDomain.com`" ], `"IgnoreDomainHintForApps`": [], `"RespectDomainHintForApps`": [`"app1-clientID-Guid`", `"app2-clientID-Guid`"] } } }") -DisplayName BasicBlockAccelerationPolicy -Type HomeRealmDiscoveryPolicy
```

::: zone-end
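To summarize the semantics of the policy fields used in these snippets: a `domain_hint` is ignored when its domain appears in `IgnoreDomainHintForDomains` (or when that list contains `*`), unless the requesting app or the domain is carved out by a `Respect*` list. An illustrative Python model (not Microsoft code; the precedence is simplified):

```python
def honor_domain_hint(policy: dict, domain_hint: str, app_id: str) -> bool:
    """Respect* lists win for the entities they name; Ignore* lists suppress
    the hint; anything unlisted keeps the default (hint honored)."""
    if app_id in policy.get("RespectDomainHintForApps", []):
        return True
    if app_id in policy.get("IgnoreDomainHintForApps", []):
        return False
    if domain_hint in policy.get("RespectDomainHintForDomains", []):
        return True
    ignored = policy.get("IgnoreDomainHintForDomains", [])
    return not ("*" in ignored or domain_hint in ignored)

policy = {
    "IgnoreDomainHintForDomains": ["testDomain.com"],
    "RespectDomainHintForDomains": [],
    "IgnoreDomainHintForApps": [],
    "RespectDomainHintForApps": ["app1-clientID-Guid"],
}
print(honor_domain_hint(policy, "testDomain.com", "otherApp"))            # False
print(honor_domain_hint(policy, "testDomain.com", "app1-clientID-Guid"))  # True
```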
active-directory Groups Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/groups-concept.md
Role-assignable groups have the following restrictions:
- The `isAssignableToRole` property is **immutable**. Once a group is created with this property set, it can't be changed.
- You can't make an existing group a role-assignable group.
- A maximum of 500 role-assignable groups can be created in a single Azure AD organization (tenant).
+- You can't assign licenses to a role-assignable group.
## How are role-assignable groups protected?
active-directory My Staff Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/my-staff-configure.md
To manage a user's phone number, you must be assigned one of the following roles
You can search for administrative units and users in your organization using the search bar in My Staff. You can search across all administrative units and users in your organization, but you can only make changes to users who are in an administrative unit over which you have been given admin permissions.
-You can also search for a user within an administrative unit. To do this, use the search bar at the top of the user list.
-
## Audit logs

You can view audit logs for actions taken in My Staff in the Azure Active Directory portal. If an audit log was generated by an action taken in My Staff, you will see this indicated under ADDITIONAL DETAILS in the audit event.
aks Internal Lb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/internal-lb.md
kubectl apply -f internal-lb.yaml
This command creates an Azure load balancer in the node resource group that's connected to the same virtual network as your AKS cluster.
-When you view the service details, the IP address of the internal load balancer is shown in the *EXTERNAL-IP* column. In this context, *External* refers to the external interface of the load balancer. It doesn't mean that it receives a public, external IP address.
+When you view the service details, the IP address of the internal load balancer is shown in the *EXTERNAL-IP* column. In this context, *External* refers to the external interface of the load balancer. It doesn't mean that it receives a public, external IP address. This IP address is dynamically assigned from the same subnet as the AKS cluster.
It may take a minute or two for the IP address to change from *\<pending\>* to an actual internal IP address, as shown in the following example:
internal-app LoadBalancer 10.0.248.59 10.240.0.7 80:30555/TCP 2m
If you want to use a specific IP address with the internal load balancer, add the *loadBalancerIP* property to the load balancer YAML manifest. In this scenario, the specified IP address must reside in the same subnet as the AKS cluster, but it can't already be assigned to a resource. For example, you shouldn't use an IP address in the range designated for the Kubernetes subnet within the AKS cluster.
+> [!NOTE]
+> If you initially deploy the service without specifying an IP address and later you update its configuration to use a dynamically assigned IP address using the *loadBalancerIP* property, the IP address still shows as dynamically assigned.
+
For more information on subnets, see [Add a node pool with a unique subnet][unique-subnet].

```yaml
aks Operator Best Practices Scheduler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/operator-best-practices-scheduler.md
This best practices article focuses on basic Kubernetes scheduling features for
> > Plan and apply resource quotas at the namespace level. If pods don't define resource requests and limits, reject the deployment. Monitor resource usage and adjust quotas as needed.
-Resource requests and limits are placed in the pod specification. Limits are used by the Kubernetes scheduler at deployment time to find an available node in the cluster. Limits and requests work at the individual pod level. For more information about how to define these values, see [Define pod resource requests and limits][resource-limits]
+Resource requests and limits are placed in the pod specification. Limits are used by the Kubernetes scheduler at deployment time to find an available node in the cluster. Limits and requests work at the individual pod level. For more information about how to define these values, see [Define pod resource requests and limits][resource-limits].
To provide a way to reserve and limit resources across a development team or project, you should use *resource quotas*. These quotas are defined on a namespace, and can be used to set quotas on the following basis:
Involuntary disruptions can be mitigated by:
* Updated deployment template * Accidentally deleting a pod
-Kubernetes provides *pod disruption budgets* for voluntary disruptions,letting you plan for how deployments or replica sets respond when a voluntary disruption event occurs. Using pod disruption budgets, cluster operators can define a minimum available or maximum unavailable resource count.
+Kubernetes provides *pod disruption budgets* for voluntary disruptions, letting you plan for how deployments or replica sets respond when a voluntary disruption event occurs. Using pod disruption budgets, cluster operators can define a minimum available or maximum unavailable resource count.
If you upgrade a cluster or update a deployment template, the Kubernetes scheduler will schedule extra pods on other nodes before allowing voluntary disruption events to continue. The scheduler waits to reboot a node until the defined number of pods are successfully scheduled on other nodes in the cluster.
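The budget arithmetic behind pod disruption budgets can be sketched in a few lines. The helper below is an illustrative model with a hypothetical name, not the Kubernetes implementation:

```python
# Sketch of how a pod disruption budget gates voluntary evictions.
# "min_available" mirrors the PDB minAvailable field; illustrative only.

def eviction_allowed(healthy_pods: int, min_available: int) -> bool:
    """A voluntary eviction may proceed only if the number of healthy
    pods remaining afterwards still meets the budget."""
    return healthy_pods - 1 >= min_available

# With 3 healthy replicas and minAvailable: 2, one eviction is allowed...
print(eviction_allowed(3, 2))   # True
# ...but evicting when only 2 remain would violate the budget.
print(eviction_allowed(2, 2))   # False
```

This is why the scheduler waits for replacement pods to become ready on other nodes before continuing a node drain: each eviction must keep the budget satisfied.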
app-service Configure Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-custom-container.md
Title: Configure a custom container
description: Learn how to configure a custom container in Azure App Service. This article shows the most common configuration tasks. Previously updated : 10/22/2021 Last updated : 01/04/2023 zone_pivot_groups: app-service-containers-windows-linux
Group Managed Service Accounts (gMSAs) are currently not supported in Windows co
## Enable SSH
-SSH enables secure communication between a container and a client. In order for a custom container to support SSH, you must add it into your Docker image itself.
-
-> [!TIP]
-> All built-in Linux containers in App Service have added the SSH instructions in their image repositories. You can go through the following instructions with the [Node.js 10.14 repository](https://github.com/Azure-App-Service/node/blob/master/10.14) to see how it's enabled there. The configuration in the Node.js built-in image is slightly different, but the same in principle.
--- Add [an sshd_config file](https://man.openbsd.org/sshd_config) to your repository, like the following example.
+Secure Shell (SSH) is commonly used to execute administrative commands remotely from a command-line terminal. In order to enable the Azure portal SSH console feature with custom containers, the following steps are required:
+1. Create a standard [sshd_config](https://man.openbsd.org/sshd_config) file with the following example contents and place it in the application project root directory:
+
```
Port 2222
ListenAddress 0.0.0.0
SSH enables secure communication between a container and a client. In order for
PermitRootLogin yes
Subsystem sftp internal-sftp
```
-
+
> [!NOTE]
- > This file configures OpenSSH and must include the following items:
+ > This file configures OpenSSH and must include the following items in order to comply with the Azure portal SSH feature:
> - `Port` must be set to 2222.
> - `Ciphers` must include at least one item in this list: `aes128-cbc,3des-cbc,aes256-cbc`.
> - `MACs` must include at least one item in this list: `hmac-sha1,hmac-sha1-96`.
-
-- Add an ssh_setup script file to create the SSH keys [using ssh-keygen](https://man.openbsd.org/ssh-keygen.1) to your repository.
-
+
+2. Create an entrypoint script named `entrypoint.sh` (or modify any existing entrypoint file) and add the command that starts the SSH service, along with the application startup command. The following example starts a Python application; replace the last command to match your project's language and stack:
+
+ ### [Debian](#tab/debian)
+
+ ```Bash
+ #!/bin/sh
+ set -e
+ service ssh start
+ exec gunicorn -w 4 -b 0.0.0.0:8000 app:app
```
+
+ ### [Alpine](#tab/alpine)
+
+ ```Bash
#!/bin/sh
-
- ssh-keygen -A
-
- #prepare run dir
- if [ ! -d "/var/run/sshd" ]; then
- mkdir -p /var/run/sshd
- fi
+ set -e
+ /usr/sbin/sshd
+ exec gunicorn -w 4 -b 0.0.0.0:8000 app:app
```
-
-- In your Dockerfile, add the following commands:
-
+
+
+3. Add the following instructions to the Dockerfile, according to the base image distribution. They copy the new files, install the OpenSSH server, set the proper permissions, configure the custom entrypoint, and expose the ports required by the application and the SSH server, respectively:
+
+ ### [Debian](#tab/debian)
+
```Dockerfile
- # Install OpenSSH and set the password for root to "Docker!". In this example, "apk add" is the install instruction for an Alpine Linux-based image.
- RUN apk add openssh \
- && echo "root:Docker!" | chpasswd
-
- # Copy the sshd_config file to the /etc/ssh/ directory
+ COPY entrypoint.sh ./
+
+ # Start and enable SSH
+ RUN apt-get update \
+ && apt-get install -y --no-install-recommends dialog \
+ && apt-get install -y --no-install-recommends openssh-server \
+ && echo "root:Docker!" | chpasswd \
+ && chmod u+x ./entrypoint.sh
COPY sshd_config /etc/ssh/
-
- # Copy and configure the ssh_setup file
- RUN mkdir -p /tmp
- COPY ssh_setup.sh /tmp
- RUN chmod +x /tmp/ssh_setup.sh \
- && (sleep 1;/tmp/ssh_setup.sh 2>&1 > )
-
- # Open port 2222 for SSH access
- EXPOSE 80 2222
+
+ EXPOSE 8000 2222
+
+ ENTRYPOINT [ "./entrypoint.sh" ]
```
-
+
+ ### [Alpine](#tab/alpine)
+
+ ```Dockerfile
+ COPY sshd_config /etc/ssh/
+ COPY entrypoint.sh ./
+
+ # Start and enable SSH
+ RUN apk add openssh \
+ && echo "root:Docker!" | chpasswd \
+ && chmod +x ./entrypoint.sh \
+ && cd /etc/ssh/ \
+ && ssh-keygen -A
+
+ EXPOSE 8000 2222
+
+ ENTRYPOINT [ "./entrypoint.sh" ]
+ ```
+
+
> [!NOTE]
> The root password must be exactly `Docker!` as it is used by App Service to let you access the SSH session with the container. This configuration doesn't allow external connections to the container. Port 2222 of the container is accessible only within the bridge network of a private virtual network and is not accessible to an attacker on the internet.
-
-- In the start-up script for your container, start the SSH server.
+4. Rebuild and push the Docker image to the registry, and then test the Web App SSH feature in the Azure portal.
- ```bash
- /usr/sbin/sshd
- ```
+For further troubleshooting, additional information is available at the Azure App Service OSS blog: [Enabling SSH on Linux Web App for Containers](https://azureossd.github.io/2022/04/27/2022-Enabling-SSH-on-Linux-Web-App-for-Containers/index.html#troubleshooting)
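As a quick sanity check of the `sshd_config` requirements listed above (port 2222, at least one accepted cipher and MAC), a minimal sketch; the validator below is illustrative, not an App Service tool:

```python
# Illustrative check that an sshd_config meets the App Service SSH
# console requirements described above. Not an official validator.

REQUIRED_PORT = "2222"
ACCEPTED_CIPHERS = {"aes128-cbc", "3des-cbc", "aes256-cbc"}
ACCEPTED_MACS = {"hmac-sha1", "hmac-sha1-96"}

def check_sshd_config(text: str) -> bool:
    options = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            key, _, value = line.partition(" ")
            options[key] = value
    return (
        options.get("Port") == REQUIRED_PORT
        and bool(ACCEPTED_CIPHERS & set(options.get("Ciphers", "").split(",")))
        and bool(ACCEPTED_MACS & set(options.get("MACs", "").split(",")))
    )

sample = """Port 2222
Ciphers aes128-cbc,3des-cbc,aes256-cbc
MACs hmac-sha1,hmac-sha1-96"""
print(check_sshd_config(sample))  # True
```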
## Access diagnostic logs
application-gateway Ingress Controller Install New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-install-new.md
Kubernetes. We'll use it to install the `application-gateway-kubernetes-ingress`
1. Install the Application Gateway ingress controller package:

   ```bash
- helm install -f helm-config.yaml application-gateway-kubernetes-ingress/ingress-azure
+ helm install -f helm-config.yaml --generate-name application-gateway-kubernetes-ingress/ingress-azure
```

## Install a Sample App
spec:
-apiVersion: extensions/v1beta1
+apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: aspnetapp
spec:
paths:
- path: /
  backend:
- serviceName: aspnetapp
- servicePort: 80
+ service:
+ name: aspnetapp
+ port:
+ number: 80
+ pathType: Exact
EOF
```
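The backend change in this diff is a mechanical reshaping from the `extensions/v1beta1` fields to the `networking.k8s.io/v1` structure. A minimal sketch of that mapping, using a hypothetical helper (not part of any Kubernetes client library):

```python
# Illustrative conversion of an extensions/v1beta1 ingress backend to
# the networking.k8s.io/v1 shape shown in the diff above.

def convert_backend(old: dict) -> dict:
    # serviceName -> service.name, servicePort -> service.port.number
    return {
        "service": {
            "name": old["serviceName"],
            "port": {"number": old["servicePort"]},
        }
    }

print(convert_backend({"serviceName": "aspnetapp", "servicePort": 80}))
# {'service': {'name': 'aspnetapp', 'port': {'number': 80}}}
```

Remember that v1 also requires `pathType` (for example `Exact` or `Prefix`) on each path entry, as the diff shows.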
azure-arc Migrate To Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/migrate-to-managed-instance.md
Example:
Copy the backup file from the local storage to the sql pod in the cluster.

```console
-kubectl cp <source file location> <pod name>:var/opt/mssql/data/<file name> -n <namespace name>
+kubectl cp <source file location> <pod name>:var/opt/mssql/data/<file name> -n <namespace name> -c arc-sqlmi
#Example:
-kubectl cp C:\Backupfiles\test.bak sqlinstance1-0:var/opt/mssql/data/test.bak -n arc
+kubectl cp C:\Backupfiles\test.bak sqlinstance1-0:var/opt/mssql/data/test.bak -n arc -c arc-sqlmi
```

### Step 3: Restore the database
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md
description: Overview of the Azure Monitor Agent, which collects monitoring data
Previously updated : 11/22/2022 Last updated : 1/3/2023
Azure Monitor Agent uses [data collection rules](../essentials/data-collection-r
| Performance | Azure Monitor Metrics (Public preview)<sup>1</sup> - Insights.virtualmachine namespace<br>Log Analytics workspace - [Perf](/azure/azure-monitor/reference/tables/perf) table | Numerical values measuring performance of different aspects of operating system and workloads |
| Windows event logs (including sysmon events) | Log Analytics workspace - [Event](/azure/azure-monitor/reference/tables/Event) table | Information sent to the Windows event logging system |
| Syslog | Log Analytics workspace - [Syslog](/azure/azure-monitor/reference/tables/syslog)<sup>2</sup> table | Information sent to the Linux event logging system |
+ | Text logs and Windows IIS logs | Log Analytics workspace - custom tables | [Collect text logs with Azure Monitor Agent](data-collection-text-log.md) |
+
<sup>1</sup> On Linux, using Azure Monitor Metrics as the only destination is supported in v1.10.9.0 or higher.<br>
<sup>2</sup> Azure Monitor Linux Agent versions 1.15.2 and higher support syslog RFC formats including Cisco Meraki, Cisco ASA, Cisco FTD, Sophos XG, Juniper Networks, Corelight Zeek, CipherTrust, NXLog, McAfee, and Common Event Format (CEF).
In addition to the generally available data collection listed above, Azure Monit
| Azure Monitor feature | Current support | Other extensions installed | More information |
| : | : | : | : |
-| Text logs and Windows IIS logs | Public preview | None | [Collect text logs with Azure Monitor Agent (Public preview)](data-collection-text-log.md) |
| [VM insights](../vm/vminsights-overview.md) | Public preview | Dependency Agent extension, if you're using the Map Services feature | [Enable VM Insights overview](../vm/vminsights-enable-overview.md) |

In addition to the generally available data collection listed above, Azure Monitor Agent also supports these Azure services in preview:
backup Selective Disk Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/selective-disk-backup-restore.md
You need to pass the above obtained **$item** object to the **-Item** paramete
### Modify protection for already backed up VMs with PowerShell

```azurepowershell
-Enable-AzRecoveryServicesBackupProtection -Item $item -InclusionDisksList[Strings] -VaultId $targetVault.ID
-```
-
-```azurepowershell
-Enable-AzRecoveryServicesBackupProtection -Item $item -ExclusionDisksList[Strings] -VaultId $targetVault.ID
+Enable-AzRecoveryServicesBackupProtection -Item $item -InclusionDisksList[Strings] -VaultId $targetVault.ID -Policy $pol
```

### Backup only OS disk during modify protection with PowerShell

```azurepowershell
-Enable-AzRecoveryServicesBackupProtection -Item $item -ExcludeAllDataDisks -VaultId $targetVault.ID
+Enable-AzRecoveryServicesBackupProtection -Item $item -ExcludeAllDataDisks -VaultId $targetVault.ID -Policy $pol
```

### Reset disk exclusion setting with PowerShell

```azurepowershell
-Enable-AzRecoveryServicesBackupProtection -Item $item -ResetExclusionSettings -VaultId $targetVault.ID
+Enable-AzRecoveryServicesBackupProtection -Item $item -ResetExclusionSettings -VaultId $targetVault.ID -Policy $pol
```

> [!NOTE]
cognitive-services Speech Container Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-container-howto.md
After the container is on the [host computer](#host-computer-requirements-and-re
Use the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command to run the container. For more information on how to get the `{Endpoint_URI}` and `{API_Key}` values, see [Gather required parameters](#gather-required-parameters). More [examples](speech-container-configuration.md#example-docker-run-commands) of the `docker run` command are also available.
-## Run the container in disconnected environments
-
-You must request access to use containers disconnected from the internet. For more information, see [Request access to use containers in disconnected environments](../containers/disconnected-containers.md#request-access-to-use-containers-in-disconnected-environments).
-
> [!NOTE]
> For general container requirements, see [Container requirements and recommendations](#container-requirements-and-recommendations).
Increasing the number of concurrent calls can affect reliability and latency. Fo
> [!IMPORTANT]
> The `Eula`, `Billing`, and `ApiKey` options must be specified to run the container. Otherwise, the container won't start. For more information, see [Billing](#billing).
+## Run the container in disconnected environments
+
+You must request access to use containers disconnected from the internet. For more information, see [Request access to use containers in disconnected environments](../containers/disconnected-containers.md#request-access-to-use-containers-in-disconnected-environments).
+
## Query the container's prediction endpoint

> [!NOTE]
data-lake-analytics Data Lake Analytics Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-overview.md
Last updated 10/17/2022
Azure Data Lake Analytics is an on-demand analytics job service that simplifies big data. Instead of deploying, configuring, and tuning hardware, you write queries to transform your data and extract valuable insights. The analytics service can handle jobs of any scale instantly by setting the dial for how much power you need. You only pay for your job when it's running, making it cost-effective.
+ > [!NOTE]
+ > Azure Data Lake Analytics will be retired on 29 February 2024. Learn more [with this announcement](https://azure.microsoft.com/updates/migrate-to-azure-synapse-analytics/).
+
## Azure Data Lake analytics recent update information

The Azure Data Lake Analytics service is updated periodically. We continue to support this service with component updates, component beta previews, and so on.
defender-for-cloud Plan Defender For Servers Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-defender-for-servers-scale.md
This article is the fifth in the Defender for Servers planning guide series. Bef
1. [Start planning your deployment](plan-defender-for-servers.md)
1. [Understand where your data is stored, and Log Analytics workspace requirements](plan-defender-for-servers-data-workspace.md)
1. [Review access and role requirements](plan-defender-for-servers-roles.md)
+1. [Select a Defender for Servers plan](plan-defender-for-servers-select-plan.md)
1. [Review Azure Arc and agent/extension requirements](plan-defender-for-servers-agents.md)
firewall Rule Processing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/rule-processing.md
Here's an example policy:
|BaseRCG1 |Rule collection group |200 |8 |Parent policy|
|DNATRC1 |DNAT rule collection | 600 | 7 |Parent policy|
|DNATRC3|DNAT rule collection|610|3|Parent policy|
-|NetworkRc1 |Network rule collection | 800 | 1 |Parent policy|
+|NetworkRC1 |Network rule collection | 800 | 1 |Parent policy|
|BaseRCG2 |Rule collection group |300 | 3 |Parent policy|
-|AppRCG2 |Application rule collection | 1200 |2 |Parent policy
+|AppRC2 |Application rule collection | 1200 |2 |Parent policy
|NetworkRC2 |Network rule collection |1300 | 1 |Parent policy|
|ChildRCG1 | Rule collection group | 300 |5 |-|
-|ChAppRC1 |Application rule collection | 700 | 3 |-|
-|ChNetRC1 | Network rule collection | 900 | 2 |-|
+|ChNetRC1 |Network rule collection | 700 | 3 |-|
+|ChAppRC1 | Application rule collection | 900 | 2 |-|
|ChildRCG2 |Rule collection group | 650 | 9 |-|
|ChNetRC2 |Network rule collection | 1100 | 2 |-|
|ChAppRC2 | Application rule collection |2000 |7 |-|
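The processing order this article describes (DNAT rule collections before network, network before application; parent-policy collections before child-policy ones; then rule collection group priority and rule collection priority) can be modeled roughly as follows. This is an illustrative sketch with hypothetical data, not the firewall implementation:

```python
# Illustrative model of Azure Firewall Policy rule collection ordering.

TYPE_ORDER = {"DNAT": 0, "Network": 1, "Application": 2}

def processing_order(collections):
    """Each entry: (name, type, group_priority, collection_priority, is_parent).
    Sort by rule type, then parent-before-child, then group priority,
    then collection priority."""
    return sorted(
        collections,
        key=lambda c: (TYPE_ORDER[c[1]], not c[4], c[2], c[3]),
    )

example = [
    ("AppRC2", "Application", 300, 1200, True),
    ("NetworkRC1", "Network", 200, 800, True),
    ("DNATRC1", "DNAT", 200, 600, True),
    ("ChNetRC1", "Network", 300, 700, False),
]
print([c[0] for c in processing_order(example)])
# ['DNATRC1', 'NetworkRC1', 'ChNetRC1', 'AppRC2']
```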
key-vault Overview Storage Keys Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/overview-storage-keys-powershell.md
The commands in this section complete the following actions:
First, set the variables to be used by the PowerShell cmdlets in the following steps. Be sure to update the \<YourStorageAccountName\> and \<YourKeyVaultName\> placeholders.
-We will also use the Azure PowerShell [New-AzStorageContext](/powershell/module/az.storage/new-azstoragecontext) cmdlets to get the context of your Azure storage account.
```azurepowershell-interactive
$storageAccountName = <YourStorageAccountName>
$keyVaultName = <YourKeyVaultName>
-
-$storageContext = New-AzStorageContext -StorageAccountName $storageAccountName -Protocol Https -StorageAccountKey Key1 #(or "Primary" for Classic Storage Account)
```

### Define a shared access signature definition template
key-vault Quick Create Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/quick-create-net.md
var operation = await client.StartDeleteSecretAsync("mySecret");
// You only need to wait for completion if you want to purge or recover the secret.
await operation.WaitForCompletionAsync();
-await client.PurgeDeletedKeyAsync("mySecret");
+await client.PurgeDeletedSecretAsync("mySecret");
```

## Sample code
machine-learning How To Create Data Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-data-assets.md
paths:
- pattern: ./*.txt
transformations:
- read_delimited:
- delimiter: ,
+ delimiter: ','
encoding: ascii
header: all_files_same_headers
```
machine-learning How To Identity Based Service Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-identity-based-service-authentication.md
__System-assigned managed identity__
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]

```python
-from azure.ai.ml.entities import UserAssignedIdentity, IdentityConfiguration, AmlCompute
-from azure.ai.ml.constants import IdentityType
+from azure.ai.ml.entities import ManagedIdentityConfiguration, IdentityConfiguration, AmlCompute
+from azure.ai.ml.constants import ManagedServiceIdentityType
# Create an identity configuration from the user-assigned managed identity
-managed_identity = UserAssignedIdentity(resource_id="/subscriptions/<subscription_id>/resourcegroups/<resource_group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identity>")
-identity_config = IdentityConfiguration(type = IdentityType.USER_ASSIGNED, user_assigned_identities=[managed_identity])
+managed_identity = ManagedIdentityConfiguration(resource_id="/subscriptions/<subscription_id>/resourcegroups/<resource_group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identity>")
+identity_config = IdentityConfiguration(type = ManagedServiceIdentityType.USER_ASSIGNED, user_assigned_identities=[managed_identity])
# specify aml compute name.
cpu_compute_target = "cpu-cluster"
az ml compute create --name cpu-cluster --type <cluster name> --identity-type s
```python
from azure.ai.ml.entities import IdentityConfiguration, AmlCompute
-from azure.ai.ml.constants import IdentityType
+from azure.ai.ml.constants import ManagedServiceIdentityType
# Create an identity configuration for a system-assigned managed identity
-identity_config = IdentityConfiguration(type = IdentityType.SYSTEM_ASSIGNED)
+identity_config = IdentityConfiguration(type = ManagedServiceIdentityType.SYSTEM_ASSIGNED)
# specify aml compute name.
cpu_compute_target = "cpu-cluster"
machine-learning How To Interactive Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-interactive-jobs.md
If you don't see the above options, make sure you have enabled the "Debug & moni
# [Python SDK](#tab/python)

1. Define the interactive services you want to use for your job. Make sure to replace `your compute name` with your own value. If you want to use your own custom environment, follow the examples in [this tutorial](how-to-manage-environments-v2.md) to create a custom environment.
- Note that you have to import the `JobService` class from the `azure.ai.entities` package to configure interactive services via the SDKv2.
+ Note that you have to import the `JobService` class from the `azure.ai.ml.entities` package to configure interactive services via the SDKv2.
```python command_job = command(...
machine-learning How To Log Mlflow Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-log-mlflow-models.md
class ModelWrapper(PythonModel):
Then, a custom model can be logged in the run like this:

```python
-mport mlflow
+import mlflow
from xgboost import XGBClassifier
from sklearn.metrics import accuracy_score
from mlflow.models import infer_signature
machine-learning How To Read Write Data V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-read-write-data-v2.md
$schema: https://azuremlschemas.azureedge.net/latest/CommandJob.schema.json
# Possible Paths for Data:
# Blob: wasbs://<containername>@<accountname>.blob.core.windows.net/<folder>/<file>
-# Datastore: azureml://datastores/paths/<folder>/<file>
+# Datastore: azureml://datastores/<datastore_name>/paths/<folder>/<file>
# Data Asset: azureml:<my_data>:<version>
code: src
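The corrected datastore URI shape can be illustrated with a small formatting helper; the function name is hypothetical, not an Azure ML SDK API:

```python
# Illustrative builder for the azureml:// datastore URI format shown
# above: azureml://datastores/<datastore_name>/paths/<folder>/<file>.

def datastore_uri(datastore_name: str, folder: str, file: str) -> str:
    return f"azureml://datastores/{datastore_name}/paths/{folder}/{file}"

print(datastore_uri("workspaceblobstore", "raw", "iris.csv"))
# azureml://datastores/workspaceblobstore/paths/raw/iris.csv
```

The fix in this diff adds the `<datastore_name>` segment, which the earlier format omitted.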
openshift Howto Add Update Pull Secret https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-add-update-pull-secret.md
This section walks through updating that pull secret with additional values from
Run the following command to update your pull secret.

> [!NOTE]
-> Running this command will cause your cluster nodes to restart one by one as they're updated.
+> Running this command will cause your cluster nodes to restart one by one as they're updated.
```console
oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=./pull-secret.json
First, modify the Samples Operator configuration file. Then, you can run the fol
oc edit configs.samples.operator.openshift.io/cluster -o yaml
```
-Change the `spec.architectures.managementState` value from `Removed` to `Managed`.
+Change the `spec.managementState` value from `Removed` to `Managed`.
The following YAML snippet shows only the relevant sections of the edited YAML file:
spec:
managementState: Managed
```
-Second, run the following command to edit the Operator Hub configuration file:
+Second, run the following command to edit the Operator Hub configuration file:
```console
oc edit operatorhub cluster -o yaml
openshift Howto Byok https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-byok.md
You must use an Azure Key Vault instance to store your keys. Create a new Key Va
az keyvault create -n $KEYVAULT_NAME \
   -g $RESOURCEGROUP \
   -l $LOCATION \
- --enable-purge-protection true \
- --enable-soft-delete true
+ --enable-purge-protection true
az keyvault key create --vault-name $KEYVAULT_NAME \
   -n $KEYVAULT_KEY_NAME \
purview Concept Policies Data Owner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-policies-data-owner.md
The end-user identity from Azure Active Directory for whom this policy statement
### Example
-Deny Read on Data Asset:
+Allow Read on Data Asset:
*/subscription/finance/resourcegroups/prod/providers/Microsoft.Storage/storageAccounts/finDataLake/blobservice/default/containers/FinData to group Finance-analyst*
-In the above policy statement, the effect is *Deny*, the action is *Read*, the data resource is Azure Storage container *FinData*, and the subject is Azure Active Directory group *Finance-analyst*. If any user that belongs to this group attempts to read data from the storage container *FinData*, the request will be denied.
+In the above policy statement, the effect is *Allow*, the action is *Read*, the data resource is Azure Storage container *FinData*, and the subject is Azure Active Directory group *Finance-analyst*. If any user that belongs to this group attempts to read data from the storage container *FinData*, the request will be allowed.
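The evaluation described above can be sketched as follows. This is illustrative pseudologic with hypothetical names, not how Microsoft Purview implements policy decisions:

```python
# Illustrative evaluation of the policy statement above: a read is
# allowed when the requesting user belongs to the subject group of an
# Allow/Read policy scoped to the data resource. The ".../containers/..."
# path is an abbreviated stand-in for the full resource path.

policy = {
    "effect": "Allow",
    "action": "Read",
    "resource": ".../containers/FinData",
    "subject_group": "Finance-analyst",
}

def is_allowed(user_groups, action, resource, policy) -> bool:
    return (
        policy["effect"] == "Allow"
        and policy["action"] == action
        and policy["resource"] == resource
        and policy["subject_group"] in user_groups
    )

print(is_allowed({"Finance-analyst"}, "Read", ".../containers/FinData", policy))  # True
```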
### Hierarchical enforcement of policies
sentinel Data Connectors Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors-reference.md
Configure eNcore to stream data via TCP to the Log Analytics Agent. This configu
| **Data ingestion method** | [**Syslog**](connect-syslog.md)<br><br>[Configure the ESET SMC logs to be collected](#configure-the-eset-smc-logs-to-be-collected) <br>[Configure OMS agent to pass Eset SMC data in API format](#configure-oms-agent-to-pass-eset-smc-data-in-api-format)<br>[Change OMS agent configuration to catch tag oms.api.eset and parse structured data](#change-oms-agent-configuration-to-catch-tag-omsapieset-and-parse-structured-data)<br>[Disable automatic configuration and restart agent](#disable-automatic-configuration-and-restart-agent)|
| **Log Analytics table(s)** | eset_CL |
| **DCR support** | Not currently supported |
-| **Vendor documentation/<br>installation instructions** | [ESET Syslog server documentation](https://help.eset.com/esmc_admin/70/en-US/admin_server_settings_syslog.html) |
+| **Vendor documentation/<br>installation instructions** | [ESET Syslog server documentation](https://help.eset.com/protect_admin/10.0/en-US/admin_server_settings_syslog.html) |
| **Supported by** | [ESET](https://support.eset.com/en) |
sentinel Investigate Large Datasets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/investigate-large-datasets.md
One of the primary activities of a security team is to search logs for specific events. For example, you might search logs for the activities of a specific user within a given time-frame.
-In Microsoft Sentinel, you can search across long time periods in extremely large datasets by using a search job. While you can run a search job on any type of log, search jobs are ideally suited to search archived logs. If you need to do a full investigation on archived data, you can restore that data into the hot cache to run high performing queries and analytics.
+In Microsoft Sentinel, you can search across long time periods in extremely large datasets by using a search job. While you can run a search job on any type of log, search jobs are ideally suited to search archived logs. If you need to do a full investigation on archived data, you can restore that data into the hot cache to run high performing queries and deeper analysis.
## Search large datasets
virtual-machines Add Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/add-disk.md
Previously updated : 12/05/2022 Last updated : 12/08/2022
In this example, we are using the nano editor, so when you are done editing the
### TRIM/UNMAP support for Linux in Azure
-Some Linux kernels support TRIM/UNMAP operations to discard unused blocks on the disk. This feature is primarily useful to inform Azure that deleted pages are no longer valid and can be discarded. This feature can save money on disks that are billed based on the amount of consumed storage, such as unmanaged standard disks and disk snapshots. Managed disks are billed based on the size of the disk and hence don't benefit.
+Some Linux kernels support TRIM/UNMAP operations to discard unused blocks on the disk. This feature is primarily useful to inform Azure that deleted pages are no longer valid and can be discarded. This feature can save money on disks that are billed based on the amount of consumed storage, such as unmanaged standard disks and disk snapshots.
There are two ways to enable TRIM support in your Linux VM. As usual, consult your distribution for the recommended approach:
virtual-machines Premium Storage Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/premium-storage-performance.md
The best way to measure performance requirements of your application, is to use
The PerfMon counters are available for the processor, memory, and each logical disk and physical disk of your server. When you use premium storage disks with a VM, the physical disk counters apply to each premium storage disk, and the logical disk counters apply to each volume created on the premium storage disks. Capture the values for the disks that host your application workload. If there is a one-to-one mapping between logical and physical disks, you can refer to the physical disk counters; otherwise refer to the logical disk counters.

On Linux, the iostat command generates a CPU and disk utilization report. The disk utilization report provides statistics per physical device or partition. If you have a database server with its data and logs on separate disks, collect this data for both disks. The following table describes counters for disks, processors, and memory:
-| Counter | Description | PerfMon | Iostat |
+| Counter | Description | PerfMon | iostat |
| | | | |
| **IOPS or Transactions per second** |Number of I/O requests issued to the storage disk per second. |Disk Reads/sec <br> Disk Writes/sec |tps <br> r/s <br> w/s |
| **Disk Reads and Writes** |% of Reads and Write operations performed on the disk. |% Disk Read Time <br> % Disk Write Time |r/s <br> w/s |
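A quick way to cross-check these counters: throughput equals IOPS multiplied by I/O size. A minimal sketch (an illustrative calculation, not a measurement tool; real values come from PerfMon or iostat):

```python
# Throughput, IOPS, and I/O size are related by:
#   throughput (MB/s) = IOPS * I/O size (bytes) / 1,048,576

def throughput_mb_per_s(iops: float, io_size_bytes: int) -> float:
    return iops * io_size_bytes / (1024 * 1024)

# 5,000 IOPS at 64 KiB per I/O:
print(throughput_mb_per_s(5000, 64 * 1024))  # 312.5
```

This relationship is useful when sizing premium storage disks, since each disk size caps both IOPS and throughput independently.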