Updates from: 01/13/2023 02:17:17
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Partner Arkose Labs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-arkose-labs.md
Arkose Labs products integration includes the following components:
- Custom HTML, JavaScript, and API connectors integrate with the Arkose platform
- **Azure Functions** - Your hosted API endpoint that works with the API connectors feature. This API validates the Arkose Labs session token server side.
- - Learn more in the [Azure Functions Overview](/azure/azure-functions/functions-overview)
+ - Learn more in the [Azure Functions Overview](../azure-functions/functions-overview.md)
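To make the server-side validation concrete, here is a minimal Python sketch of the kind of check such an Azure Function could perform, using only the standard library. The verify URL, payload field names (`private_key`, `session_token`), and the `session_details.solved` response shape are assumptions for illustration; consult the Arkose Labs API documentation for the real contract.

```python
import json
import urllib.request

# Hypothetical endpoint; check the Arkose Labs docs for the real verify URL.
ARKOSE_VERIFY_URL = "https://verify.arkoselabs.com/api/v4/verify/"

def interpret_verify_response(body: dict) -> bool:
    """Return True when the Arkose session token was reported as solved."""
    session = body.get("session_details", {})
    return bool(session.get("solved"))

def validate_token(session_token: str, private_key: str) -> bool:
    """Server-side check of the Arkose Labs session token (sketch only)."""
    payload = json.dumps({"private_key": private_key,
                          "session_token": session_token}).encode()
    req = urllib.request.Request(ARKOSE_VERIFY_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return interpret_verify_response(json.load(resp))
```

The API connector would then allow or block the sign-up based on the boolean result.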
The following diagram illustrates how the Arkose Labs platform integrates with Azure AD B2C.
Username and password are stored as environment variables, not part of the repos
#### Deploy the application to the web
-1. Deploy your Azure Function to the cloud. Learn more with [Azure Functions documentation](/azure/azure-functions/).
+1. Deploy your Azure Function to the cloud. Learn more with [Azure Functions documentation](../azure-functions/index.yml).
2. Copy the endpoint web URL of your Azure Function.
3. After deployment, select the **Upload settings** option.
4. Your environment variables are uploaded to the Application settings of the app service. Learn more on [Application settings in Azure](../azure-functions/functions-develop-vs-code.md?tabs=csharp#application-settings-in-azure).
Username and password are stored as environment variables, not part of the repos
- [Azure-Samples/active-directory-b2c-node-sign-up-user-flow-arkose](https://github.com/Azure-Samples/active-directory-b2c-node-sign-up-user-flow-arkose) - Find the Azure AD B2C sign-up user flow
- [Azure AD B2C custom policy overview](./custom-policy-overview.md)
-- [Tutorial: Create user flows and custom policies in Azure Active Directory B2C](./tutorial-create-user-flows.md?pivots=b2c-custom-policy)
+- [Tutorial: Create user flows and custom policies in Azure Active Directory B2C](./tutorial-create-user-flows.md?pivots=b2c-custom-policy)
active-directory Sap Successfactors Integration Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/sap-successfactors-integration-reference.md
For every user in SuccessFactors, Azure AD provisioning service retrieves the fo
| 26 | Manager User | employmentNav/jobInfoNav/managerUserNav | Only if `managerUserNav` is mapped |

## How full sync works
-Based on the attribute-mapping, during full sync Azure AD provisioning service sends the following "GET" OData API query to fetch effective data of all active users.
+Based on the attribute-mapping, during full sync Azure AD provisioning service sends the following "GET" OData API query to fetch effective data of all active and terminated workers.
> [!div class="mx-tdCol2BreakAll"]
> | Parameter | Description |
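As a hedged illustration of how such a GET query might be assembled, the sketch below builds an OData URL with `$format` and `$expand` parameters. The host name, entity name (`PerPerson`), navigation property names, and `customPageSize` value are illustrative choices, not the exact query the provisioning service sends.

```python
from urllib.parse import urlencode

def build_full_sync_query(base_url: str, expand_fields: list[str]) -> str:
    """Sketch of an effective-dated OData GET of the kind used during full sync.

    Entity and parameter names are illustrative; the article's parameter
    table describes what the real query contains.
    """
    params = {
        "$format": "json",
        "$expand": ",".join(expand_fields),
        "customPageSize": "100",
    }
    return f"{base_url}/odata/v2/PerPerson?{urlencode(params)}"

# Example: expand termination info and job info navigation properties.
url = build_full_sync_query(
    "https://api4.successfactors.com",
    ["personEmpTerminationInfoNav", "employmentNav/jobInfoNav"])
```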
Extending this scenario:
### Mapping employment status to account status
-By default, the Azure AD SuccessFactors connector uses the `activeEmploymentsCount` field of the `PersonEmpTerminationInfo` object to set account status. There is a known SAP SuccessFactors issue documented in [knowledge base article 3047486](https://launchpad.support.sap.com/#/notes/3047486) that at times this may disable the account of a terminated worker one day prior to the termination on the last day of work.
+By default, the Azure AD SuccessFactors connector uses the `activeEmploymentsCount` field of the `PersonEmpTerminationInfo` object to set account status. You may encounter one of the following issues with this attribute.
+1. There is a known SAP SuccessFactors issue documented in [knowledge base article 3047486](https://launchpad.support.sap.com/#/notes/3047486) that at times this may disable the account of a terminated worker one day prior to the termination on the last day of work.
+1. If the `PersonEmpTerminationInfo` object gets set to null during termination, then AD account disabling won't work, because the provisioning engine filters out records where the `personEmpTerminationInfoNav` object is set to null.
-If you are running into this issue or prefer mapping employment status to account status, you can update the mapping to expand the `emplStatus` field and use the employment status code present in the field `emplStatus.externalCode`. Based on [SAP support note 2505526](https://launchpad.support.sap.com/#/notes/2505526), here is a list of employment status codes that you can retrieve in the provisioning app.
+If you are running into any of these issues or prefer mapping employment status to account status, you can update the mapping to expand the `emplStatus` field and use the employment status code present in the field `emplStatus.externalCode`. Based on [SAP support note 2505526](https://launchpad.support.sap.com/#/notes/2505526), here is a list of employment status codes that you can retrieve in the provisioning app.
* A = Active
* D = Dormant
* U = Unpaid Leave
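The mapping from `emplStatus.externalCode` to account status can be mirrored outside the provisioning app with a simple lookup. Whether each code maps to an enabled account is a policy decision, so the sketch below is illustrative only.

```python
# Status codes from SAP support note 2505526 as listed in the article:
# A = Active, D = Dormant, U = Unpaid Leave. The enabled/disabled choice
# per code below is an illustrative policy, not a documented default.
STATUS_TO_ENABLED = {"A": True, "D": False, "U": False}

def map_account_status(empl_status_code: str, default: bool = False) -> bool:
    """Mirror of a Switch()-style attribute mapping on emplStatus.externalCode."""
    return STATUS_TO_ENABLED.get(empl_status_code, default)
```

In the provisioning app itself, the equivalent logic would live in the attribute-mapping expression, not in external code.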
active-directory Concept Authentication Authenticator App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-authenticator-app.md
Authenticator leverages the native Apple cryptography to achieve FIPS 140, Secur
FIPS 140 compliance for Microsoft Authenticator on Android is in progress and will follow soon.
+## Determining Microsoft Authenticator registration type in My Security-Info
+Users can manage and add Microsoft Authenticator registrations by accessing https://aka.ms/mysecurityinfo or by selecting **Security info** from My Account. Specific icons are used to differentiate whether the Microsoft Authenticator registration is capable of passwordless phone sign-in or MFA.
+
+Authenticator registration type | Icon
+--- | ---
+Microsoft Authenticator: Passwordless phone sign-in | <img width="43" alt="Microsoft Authenticator passwordless sign-in Capable" src="https://user-images.githubusercontent.com/50213291/211923744-d025cd70-4b88-4603-8baf-db0fc5d28486.png">
+Microsoft Authenticator: MFA capable | <img width="43" alt="Microsoft Authenticator MFA Capable" src="https://user-images.githubusercontent.com/50213291/211921054-d11983ad-4e0d-4612-9a14-0fef625a9a2a.png">
+
## Next steps
- To get started with passwordless sign-in, see [Enable passwordless sign-in with the Microsoft Authenticator](howto-authentication-passwordless-phone.md).
active-directory Concept Authentication Strengths https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-strengths.md
An authentication strength Conditional Access policy works together with [MFA tr
- **Users who signed in by using certificate-based authentication aren't prompted to reauthenticate** - If a user first authenticated by using certificate-based authentication and the authentication strength requires another method, such as a FIDO2 security key, the user isn't prompted to use a FIDO2 security key and authentication fails. The user must restart their session to sign-in with a FIDO2 security key.
-- **Authentication methods that are currently not supported by authentication strength** - The Email one-time pass (Guest) authentication method is not included in the available combinations.
- **Using 'Require one of the selected controls' with 'require authentication strength' control** - After you select authentication strengths grant control and additional controls, all the selected controls must be satisfied in order to gain access to the resource. Using **Require one of the selected controls** isn't applicable, and will default to requiring all the controls in the policy.
-- **Multiple Conditional Access policies may be created when using "Require authentication strength" grant control**. These are two different policies and you can safely delete one of them.
-- **Windows Hello for Business** – If the user has used Windows Hello for Business as their primary authentication method it can be used to satisfy an authentication strength requirement that includes Windows Hello for Business. However, if the user has used another method as their primary authenticating method (for example, password) and the authentication strength requires them to use Windows Hello for Business they will not be prompted to use not register for Windows Hello for Business.
-- **Authentication loop** can happen in one of the following scenarios:
-1. **Microsoft Authenticator (Phone Sign-in)** - When the user is required to use Microsoft Authenticator (Phone Sign-in) but the user is not registered for this method, they will be given instructions on how to set up the Microsoft Authenticator, that does not include how to enable Passwordless sign-in. As a result, the user can get into an authentication loop. To avoid this issue, make sure the user is registered for the method before the Conditional Access policy is enforced. Phone Sign-in can be registered using the steps outlined here: [Add your work or school account to the Microsoft Authenticator app ("Sign in with your credentials")](https://support.microsoft.com/en-us/account-billing/add-your-work-or-school-account-to-the-microsoft-authenticator-app-43a73ab5-b4e8-446d-9e54-2a4cb8e4e93c)
-2. **Conditional Access Policy is targeting all apps** - When the Conditional Access policy is targeting "All apps" but the user is not registered for any of the methods required by the authentication strength, the user will get into an authentication loop. To avoid this issue, target specific applications in the Conditional Access policy or make sure the user is registered for at least one of the authentication methods required by the authentication strength Conditional Access policy.
+- **Authentication loop** - When the user is required to use Microsoft Authenticator (Phone Sign-in) but isn't registered for this method, they're given instructions on how to set up Microsoft Authenticator that don't include how to enable passwordless sign-in. As a result, the user can get into an authentication loop. To avoid this issue, make sure the user is registered for the method before the Conditional Access policy is enforced. Phone Sign-in can be registered by using the steps outlined here: [Add your work or school account to the Microsoft Authenticator app ("Sign in with your credentials")](https://support.microsoft.com/en-us/account-billing/add-your-work-or-school-account-to-the-microsoft-authenticator-app-43a73ab5-b4e8-446d-9e54-2a4cb8e4e93c)
## Limitations
An authentication strength Conditional Access policy works together with [MFA tr
- **Require multifactor authentication and Require authentication strength can't be used together in the same Conditional Access policy** - These two Conditional Access grant controls can't be used together because the built-in authentication strength **Multifactor authentication** is equivalent to the **Require multifactor authentication** grant control.
+- **Authentication methods that are currently not supported by authentication strength** - The Email one-time pass (Guest) authentication method is not included in the available combinations.
-<!place holder: Auth Strength with CCS - will be documented in resilience-defaults doc-->
+- **Windows Hello for Business** – If the user has used Windows Hello for Business as their primary authentication method, it can be used to satisfy an authentication strength requirement that includes Windows Hello for Business. However, if the user has used another method as their primary authentication method (for example, password) and the authentication strength requires them to use Windows Hello for Business, they will not be prompted to use nor register for Windows Hello for Business.
## FAQ
active-directory Troubleshoot Authentication Strengths https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/troubleshoot-authentication-strengths.md
Previously updated : 09/26/2022 Last updated : 01/11/2023
To verify if a method can be used:
If the user is registered for an enabled method that meets the authentication strength, they might need to use another method that isn't available after primary authentication, such as Windows Hello for Business or certificate-based authentication. For more information, see [How each authentication method works](concept-authentication-methods.md#how-each-authentication-method-works). The user will need to restart the session, choose **Sign-in options**, and select a method required by the authentication strength.
+
## A user can't access a resource
If an authentication strength requires a method that a user can't use, the user is blocked from sign-in. To check which method is required by an authentication strength, and which method the user is registered and enabled to use, follow the steps in the [previous section](#a-user-is-asked-to-sign-in-with-another-method-but-they-dont-see-a-method-they-expect).
active-directory Concept Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/concept-attributes.md
na Previously updated : 02/25/2021 Last updated : 01/11/2023
active-directory Concept How It Works https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/concept-how-it-works.md
Previously updated : 12/05/2019 Last updated : 01/11/2023
Cloud sync is built on top of the Azure AD services and has two key components:
- **Provisioning agent**: The Azure AD Connect cloud provisioning agent is the same agent as Workday inbound and built on the same server-side technology as app proxy and Pass Through Authentication. It requires an outbound connection only and agents are auto-updated.
-- **Provisioning service**: Same provisioning service as outbound provisioning and Workday inbound provisioning which uses a scheduler-based model. In case of cloud sync, the changes are provisioned every 2 mins.
+- **Provisioning service**: The same provisioning service as outbound provisioning and Workday inbound provisioning, which uses a scheduler-based model. With cloud sync, changes are provisioned every 2 minutes.
## Initial setup
-During initial setup, a few things are done that makes cloud sync happen. These are:
+During initial setup, a few things are done that make cloud sync happen.
- **During agent installation**: You configure the agent for the AD domains you want to provision from. This configuration registers the domains in the hybrid identity service and establishes an outbound connection to the service bus listening for requests.
-- **When you enable provisioning**: You select the AD domain and enable provisioning which runs every 2 mins. Optionally you may deselect password hash sync and define notification email. You can also manage attribute transformation using Microsoft Graph APIs.
+- **When you enable provisioning**: You select the AD domain and enable provisioning, which runs every 2 mins. Optionally you may deselect password hash sync and define notification email. You can also manage attribute transformation using Microsoft Graph APIs.
## Agent installation
-The following is a walk-through of what occurs when the cloud provisioning agent is installed.
+The following items occur when the cloud provisioning agent is installed.
-- First, the Installer installs the Agent binaries and the Agent Service running under the Virtual Service Account (NETWORK SERVICE\AADProvisioningAgent). A virtual service account is a special type of account that does not have a password and is managed by Windows.
+- First, the Installer installs the Agent binaries and the Agent Service running under the Virtual Service Account (NETWORK SERVICE\AADProvisioningAgent). A virtual service account is a special type of account that doesn't have a password and is managed by Windows.
- The Installer then starts the Wizard.
- The Wizard will prompt for Azure AD credentials, will then authenticate, and retrieve a token.
- The wizard then asks for the current machine Domain Administrator credentials.
- Using these credentials, the agent group managed service account (GMSA) for this domain is either created, or located and reused if it already exists.
- The agent service is now reconfigured to run under the GMSA.
- The wizard now asks for domain configuration along with the Enterprise Admin (EA)/Domain Admin (DA) account for each domain you want the agent to service.
-- The GMSA account is then updated with permissions that enable it access to each domain entered above.
+- The GMSA account is then updated with permissions that enable it access to each domain entered during setup.
- Next, the wizard triggers agent registration.
- The agent creates a certificate and, using the Azure AD token, registers itself and the certificate with the Hybrid Identity Service (HIS) Registration Service.
- The Wizard triggers an AgentResourceGrouping call. This call to the HIS Admin Service assigns the agent to one or more AD Domains in the HIS configuration.
- The wizard now restarts the agent service.
-- The agent calls a Bootstrap Service on restart (and every 10 mins afterwards) to check for configuration updates. The bootstrap service validates the agent identity. It also updates the last bootstrap time. This is important because if agents don't bootstrap, they are not getting updated Service Bus endpoints and may not be able to receive requests.
+- The agent calls a Bootstrap Service on restart (and every 10 mins afterwards) to check for configuration updates. The bootstrap service validates the agent identity. It also updates the last bootstrap time. This is important because if agents don't bootstrap, they aren't getting updated Service Bus endpoints and may not be able to receive requests.
## What is System for Cross-domain Identity Management (SCIM)?
-The [SCIM specification](https://tools.ietf.org/html/draft-scim-core-schema-01) is a standard that is used to automate the exchanging of user or group identity information between identity domains such as Azure AD. SCIM is becoming the de facto standard for provisioning and, when used in conjunction with federation standards like SAML or OpenID Connect, provides administrators an end-to-end standards-based solution for access management.
+The [SCIM specification](https://tools.ietf.org/html/draft-scim-core-schema-01) is a standard that is used to automate the exchanging of user or group identity information between identity domains such as Azure AD. SCIM is becoming the de facto standard for provisioning and, when used with federation standards like SAML or OpenID Connect, provides administrators an end-to-end standards-based solution for access management.
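A minimal SCIM 2.0 user resource of the kind exchanged during provisioning looks like the sketch below. The schema URN is the real SCIM core User schema; the attribute values are illustrative only.

```python
import json

# Minimal SCIM 2.0 User resource; attribute values are made up for
# illustration, while the schema URN comes from the SCIM core schema.
scim_user = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
    "userName": "jdoe@contoso.com",
    "active": True,
    "name": {"givenName": "Jane", "familyName": "Doe"},
}

# Serialized form of the payload a SCIM client would send.
payload = json.dumps(scim_user)
```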
The Azure AD Connect cloud provisioning agent uses SCIM with Azure AD to provision and deprovision users and groups.

## Synchronization flow
![provisioning](media/concept-how-it-works/provisioning-4.png)
-Once you have installed the agent and enabled provisioning, the following flow occurs.
+Once you've installed the agent and enabled provisioning, the following flow occurs.
1. Once configured, the Azure AD Provisioning service calls the Azure AD hybrid service to add a request to the Service Bus. The agent constantly maintains an outbound connection to the Service Bus listening for requests and picks up the System for Cross-domain Identity Management (SCIM) request immediately.
2. The agent breaks up the request into separate queries based on object type.
3. AD returns the result to the agent and the agent filters this data before sending it to Azure AD.
4. The agent returns the SCIM response to Azure AD. These responses are based on the filtering that happened within the agent. The agent uses scoping to filter the results.
5. The provisioning service writes the changes to Azure AD.
-6. If this is a delta Sync as opposed to a full sync, then cookie/watermark is used. New queries will get changes from that cookie/watermark onwards.
+6. If a delta sync occurs, as opposed to a full sync, then the cookie/watermark is used. New queries will get changes from that cookie/watermark onwards.
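The cookie/watermark idea behind delta sync can be sketched as follows. The `usn` field and change-record shape are invented for illustration and don't reflect the agent's internal format.

```python
def delta_query(changes: list[dict], watermark: int) -> tuple[list[dict], int]:
    """Return only changes newer than the watermark, plus the advanced watermark.

    A full sync would ignore the watermark and process every record; a delta
    sync picks up from where the previous cycle's cookie/watermark left off.
    """
    new = [c for c in changes if c["usn"] > watermark]
    next_watermark = max([watermark] + [c["usn"] for c in new])
    return new, next_watermark

# Example: with the watermark at 5, only the usn-9 change is picked up.
changes = [{"usn": 5, "op": "add"}, {"usn": 9, "op": "update"}]
picked, mark = delta_query(changes, 5)
```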
## Supported scenarios
The following scenarios are supported for cloud sync.

-- **Existing hybrid customer with a new forest**: Azure AD Connect sync is used for primary forests. Cloud sync is used for provisioning from an AD forest (including disconnected). For more information see the tutorial [here](tutorial-existing-forest.md).
+- **Existing hybrid customer with a new forest**: Azure AD Connect sync is used for primary forests. Cloud sync is used for provisioning from an AD forest (including disconnected). For more information, see the tutorial [here](tutorial-existing-forest.md).
![Existing hybrid](media/tutorial-existing-forest/existing-forest-new-forest-2.png)
-- **New hybrid customer**: Azure AD Connect sync is not used. Cloud sync is used for provisioning from an AD forest. For more information see the tutorial [here](tutorial-single-forest.md).
+- **New hybrid customer**: Azure AD Connect sync isn't used. Cloud sync is used for provisioning from an AD forest. For more information, see the tutorial [here](tutorial-single-forest.md).
![New customers](media/tutorial-single-forest/diagram-2.png)
active-directory How To Accidental Deletes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/how-to-accidental-deletes.md
Previously updated : 09/10/2021 Last updated : 01/11/2023
The following document describes the accidental deletion feature for Azure AD Co
To use this feature, you set a threshold for the number of deleted objects at which synchronization should stop. If this number is reached, synchronization stops and a notification is sent to the email address that you specify. The notification allows you to investigate what is going on.
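A rough sketch of the threshold behavior described above; the function name and return values are illustrative, not the service's implementation.

```python
def guard_deletions(pending_deletes: int, threshold: int) -> str:
    """Stop synchronization and notify when the delete count hits the threshold.

    Illustrative only: the real service also records a
    'Delete threshold exceeded' status on the agent configuration.
    """
    if pending_deletes >= threshold:
        return "stop-and-notify"  # sync halts, notification email is sent
    return "proceed"
```

An administrator then reviews the pending deletions and either allows them or fixes the cause and restarts sync, as described below.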
-For additional information and an example, see the following video.
+For more information and an example, see the following video.
> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RWK5mV]
To use the new feature, follow the steps below.
2. Select **Azure AD Connect**.
3. Select **Manage cloud sync**.
4. Under **Configuration**, select your configuration.
-5. Under **Settings** fill in the following:
+5. Under **Settings** fill in the following information.
- **Notification email** - email used for notifications
- **Prevent accidental deletions** - check this box to enable the feature
- **Accidental deletion threshold** - enter the number of objects to stop synchronization and send a notification
To use the new feature, follow the steps below.
![Accidental deletes](media/how-to-accidental-deletes/accident-1.png)

## Recovering from an accidental delete instance
-If you encounter an accidental delete you will see this on the status of your provisioning agent configuration. It will say **Delete threshold exceeded**.
+If you encounter an accidental delete, you'll see it in the status of your provisioning agent configuration. It will say **Delete threshold exceeded**.
![Accidental delete status](media/how-to-accidental-deletes/delete-1.png)
-By clicking on **Delete threshold exceeded**, you will see the sync status info. This will provide additional details.
+By clicking on **Delete threshold exceeded**, you'll see the sync status info. This action will provide more details.
![Sync status](media/how-to-accidental-deletes/delete-2.png)
-By right-clicking on the ellipses, you will get the following options:
+By right-clicking on the ellipsis, you'll get the following options:
- View provisioning log
- View agent
- Allow deletes
The **Allow deletes** action will delete the objects that triggered the accident
![Yes on confirmation](media/how-to-accidental-deletes/delete-4.png)
-3. You will see confirmation that the deletions were accepted and the status will return to healthy with the next cycle.
+3. You'll see confirmation that the deletions were accepted and the status will return to healthy with the next cycle.
![Accept deletes](media/how-to-accidental-deletes/delete-8.png)

### Rejecting deletions
-If you do not want to allow the deletions, you need to do the following:
+If you don't want to allow the deletions, you need to do the following:
- investigate the source of the deletions
-- fix the issue (example, OU was moved out of scope accidentally and you have now re-added it back to the scope)
+- fix the issue (example, OU was moved out of scope accidentally and you've now re-added it back to the scope)
- Run **Restart sync** on the agent configuration

## Next steps
active-directory How To Attribute Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/how-to-attribute-mapping.md
Previously updated : 04/30/2021 Last updated : 01/11/2023
active-directory How To Automatic Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/how-to-automatic-upgrade.md
na Previously updated : 12/02/2019 Last updated : 01/11/2023
active-directory How To Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/how-to-configure.md
Previously updated : 12/14/2021 Last updated : 01/11/2023
active-directory How To Expression Builder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/how-to-expression-builder.md
Previously updated : 04/19/2021 Last updated : 01/11/2023
active-directory How To Gmsa Cmdlets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/how-to-gmsa-cmdlets.md
Previously updated : 07/01/2022 Last updated : 01/11/2023
active-directory How To Inbound Synch Ms Graph https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/how-to-inbound-synch-ms-graph.md
Previously updated : 12/04/2020 Last updated : 01/11/2023
active-directory How To Install Pshell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/how-to-install-pshell.md
Previously updated : 01/31/2021 Last updated : 01/11/2023
active-directory How To Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/how-to-install.md
Previously updated : 11/11/2022 Last updated : 01/11/2023
active-directory How To Manage Registry Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/how-to-manage-registry-options.md
na Previously updated : 12/11/2020 Last updated : 01/11/2023
active-directory How To Map Usertype https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/how-to-map-usertype.md
Previously updated : 05/04/2021 Last updated : 01/11/2023
active-directory How To On Demand Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/how-to-on-demand-provision.md
Previously updated : 09/10/2021 Last updated : 01/11/2023
active-directory How To Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/how-to-prerequisites.md
Previously updated : 03/04/2022 Last updated : 01/11/2023
active-directory How To Transformation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/how-to-transformation.md
description: This article describes how to use transformations to alter the defa
Previously updated : 12/02/2019 Last updated : 01/11/2023 ms.prod: windows-server-threshold ms.technology: identity-adfs
active-directory Howto Configure Publisher Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-configure-publisher-domain.md
You're not required to maintain the resources that are used for verification aft
If your tenant has verified domains, in the **Select a verified domain** dropdown, select one of the domains. > [!NOTE]
-> The expected `Content-Type` header that should return is `application/json`. If you use any other header, like `application/json; charset=utf-8`, you might see this error message:
+> Content will be interpreted as UTF-8 JSON for deserialization. The supported `Content-Type` header values that can be returned are `application/json`, `application/json; charset=utf-8`, or ` `. If you use any other header value, you might see this error message:
> > `Verification of publisher domain failed. Error getting JSON file from https:///.well-known/microsoft-identity-association. The server returned an unexpected content type header value.` >
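The Content-Type acceptance rule from the note above can be sketched as a small check. The normalization (trim and lowercase) is an assumption about how the service compares header values, not documented behavior.

```python
# Accepted Content-Type values per the note: application/json, with or
# without a utf-8 charset, or an empty/blank header.
ACCEPTED = {"application/json", "application/json; charset=utf-8", ""}

def content_type_ok(header_value: str) -> bool:
    """Return True when the returned Content-Type header would be accepted."""
    return header_value.strip().lower() in ACCEPTED
```

Any other value (for example `text/html`) would trigger the "unexpected content type header value" verification error.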
active-directory Msal Logging Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-logging-java.md
This article shows how to enable MSAL4J logging using the logback framework in a
} ```
-In your tenant, you'll need separate app registrations for the web app and the web API. For app registration and exposing the web API scope, follow the steps in the scenario [A web app that authenticates users and calls web APIs](/azure/active-directory/develop/scenario-web-app-call-api-overview).
+In your tenant, you'll need separate app registrations for the web app and the web API. For app registration and exposing the web API scope, follow the steps in the scenario [A web app that authenticates users and calls web APIs](./scenario-web-app-call-api-overview.md).
For instructions on how to bind to other logging frameworks, see the [SLF4J manual](http://www.slf4j.org/manual.html).
PublicClientApplication app2 = PublicClientApplication.builder(PUBLIC_CLIENT_ID)
## Next steps
-For more code samples, refer to [Microsoft identity platform code samples](sample-v2-code.md).
+For more code samples, refer to [Microsoft identity platform code samples](sample-v2-code.md).
active-directory Permissions Consent Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/permissions-consent-overview.md
There are other ways in which applications can be granted authorization for app-
### Comparison of delegated and application permissions
-| <!-- No header--> | Delegated permissions | Application permissions |
+| | Delegated permissions | Application permissions |
|--|--|--|
| Types of apps | Web / Mobile / single-page app (SPA) | Web / Daemon |
| Access context | Get access on behalf of a user | Get access without a user |
active-directory Reference Aadsts Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-aadsts-error-codes.md
The `error` field has several possible values - review the protocol documentatio
| AADSTS50029 | Invalid URI - domain name contains invalid characters. Contact the tenant admin. |
| AADSTS50032 | WeakRsaKey - Indicates the erroneous user attempt to use a weak RSA key. |
| AADSTS50033 | RetryableError - Indicates a transient error not related to the database operations. |
-| AADSTS50034 | UserAccountNotFound - To sign into this application, the account must be added to the directory. This error can occur because the user mis-typed their username, or isn't in the tenant. An application may have chosen the wrong tenant to sign into, and the currently logged in user was prevented from doing so since they did not exist in your tenant. If this user should be able to log in, add them as a guest. See docs here: [Add B2B users](/azure/active-directory/external-identities/add-users-administrator). |
+| AADSTS50034 | UserAccountNotFound - To sign into this application, the account must be added to the directory. This error can occur because the user mis-typed their username, or isn't in the tenant. An application may have chosen the wrong tenant to sign into, and the currently logged in user was prevented from doing so since they did not exist in your tenant. If this user should be able to log in, add them as a guest. See docs here: [Add B2B users](../external-identities/add-users-administrator.md). |
| AADSTS50042 | UnableToGeneratePairwiseIdentifierWithMissingSalt - The salt required to generate a pairwise identifier is missing in principle. Contact the tenant admin. |
| AADSTS50043 | UnableToGeneratePairwiseIdentifierWithMultipleSalts |
| AADSTS50048 | SubjectMismatchesIssuer - Subject mismatches Issuer claim in the client assertion. Contact the tenant admin. |
The `error` field has several possible values - review the protocol documentatio
| AADSTS51005 | TemporaryRedirect - Equivalent to HTTP status 307, which indicates that the requested information is located at the URI specified in the location header. When you receive this status, follow the location header associated with the response. When the original request method was POST, the redirected request will also use the POST method. |
| AADSTS51006 | ForceReauthDueToInsufficientAuth - Integrated Windows authentication is needed. User logged in using a session token that is missing the integrated Windows authentication claim. Request the user to log in again. |
| AADSTS52004 | DelegationDoesNotExistForLinkedIn - The user has not provided consent for access to LinkedIn resources. |
-| AADSTS53000 | DeviceNotCompliant - Conditional Access policy requires a compliant device, and the device isn't compliant. The user must enroll their device with an approved MDM provider like Intune. For additional information, please visit [Conditional Access device remediation](/azure/active-directory/conditional-access/troubleshoot-conditional-access). |
+| AADSTS53000 | DeviceNotCompliant - Conditional Access policy requires a compliant device, and the device isn't compliant. The user must enroll their device with an approved MDM provider like Intune. For additional information, please visit [Conditional Access device remediation](../conditional-access/troubleshoot-conditional-access.md). |
| AADSTS53001 | DeviceNotDomainJoined - Conditional Access policy requires a domain joined device, and the device isn't domain joined. Have the user use a domain joined device. |
| AADSTS53002 | ApplicationUsedIsNotAnApprovedApp - The app used isn't an approved app for Conditional Access. User needs to use one of the apps from the list of approved apps to use in order to get access. |
-| AADSTS53003 | BlockedByConditionalAccess - Access has been blocked by Conditional Access policies. The access policy does not allow token issuance. If this is unexpected, see the conditional access policy that applied to this request in the Azure Portal or contact your administrator. For additional information, please visit [troubleshooting sign-in with Conditional Access](/azure/active-directory/conditional-access/troubleshoot-conditional-access). |
+| AADSTS53003 | BlockedByConditionalAccess - Access has been blocked by Conditional Access policies. The access policy does not allow token issuance. If this is unexpected, see the conditional access policy that applied to this request in the Azure Portal or contact your administrator. For additional information, please visit [troubleshooting sign-in with Conditional Access](../conditional-access/troubleshoot-conditional-access.md). |
| AADSTS53004 | ProofUpBlockedDueToRisk - User needs to complete the multi-factor authentication registration process before accessing this content. User should register for multi-factor authentication. |
| AADSTS53010 | ProofUpBlockedDueToSecurityInfoAcr - Cannot configure multi-factor authentication methods because the organization requires this information to be set from specific locations or devices. |
| AADSTS53011 | User blocked due to risk on home tenant. |
The `error` field has several possible values - review the protocol documentatio
| AADSTS90055 | TenantThrottlingError - There are too many incoming requests. This exception is thrown for blocked tenants. |
| AADSTS90056 | BadResourceRequest - To redeem the code for an access token, the app should send a POST request to the `/token` endpoint. Also, prior to this, you should provide an authorization code and send it in the POST request to the `/token` endpoint. Refer to this article for an overview of [OAuth 2.0 authorization code flow](v2-oauth2-auth-code-flow.md). Direct the user to the `/authorize` endpoint, which will return an authorization_code. By posting a request to the `/token` endpoint, the user gets the access token. Log in the Azure portal, and check **App registrations > Endpoints** to confirm that the two endpoints were configured correctly. |
| AADSTS900561 | BadResourceRequestInvalidRequest - The endpoint only accepts {valid_verbs} requests. Received a {invalid_verb} request. {valid_verbs} represents a list of HTTP verbs supported by the endpoint (for example, POST), {invalid_verb} is an HTTP verb used in the current request (for example, GET). This can be due to developer error, or due to users pressing the back button in their browser, triggering a bad request. It can be ignored. |
-| AADSTS90072 | PassThroughUserMfaError - The external account that the user signs in with doesn't exist on the tenant that they signed into; so the user can't satisfy the MFA requirements for the tenant. This error also might occur if the users are synced, but there is a mismatch in the ImmutableID (sourceAnchor) attribute between Active Directory and Azure AD. The account must be added as an external user in the tenant first. Sign out and sign in with a different Azure AD user account. For more information, please visit [configuring external identities](/azure/active-directory/external-identities/external-identities-overview). |
+| AADSTS90072 | PassThroughUserMfaError - The external account that the user signs in with doesn't exist on the tenant that they signed into; so the user can't satisfy the MFA requirements for the tenant. This error also might occur if the users are synced, but there is a mismatch in the ImmutableID (sourceAnchor) attribute between Active Directory and Azure AD. The account must be added as an external user in the tenant first. Sign out and sign in with a different Azure AD user account. For more information, please visit [configuring external identities](../external-identities/external-identities-overview.md). |
| AADSTS90081 | OrgIdWsFederationMessageInvalid - An error occurred when the service tried to process a WS-Federation message. The message isn't valid. |
| AADSTS90082 | OrgIdWsFederationNotSupported - The selected authentication policy for the request isn't currently supported. |
| AADSTS90084 | OrgIdWsFederationGuestNotAllowed - Guest accounts aren't allowed for this site. |
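The AADSTS90056 entry above describes redeeming an authorization code with a POST to the `/token` endpoint; a hedged sketch of that request body (the tenant, client ID, code, and redirect URI are placeholders; parameter names follow the OAuth 2.0 authorization code flow on the Microsoft identity platform):

```python
from urllib.parse import urlencode

tenant = "contoso.onmicrosoft.com"  # placeholder tenant
token_endpoint = f"https://login.microsoftonline.com/{tenant}/oauth2/v2.0/token"

# Form-encoded body of the POST request; sending these parameters via GET
# instead of POST is what triggers AADSTS90056.
body = urlencode({
    "client_id": "00000000-0000-0000-0000-000000000000",  # placeholder app ID
    "grant_type": "authorization_code",
    "code": "<code-returned-by-the-authorize-endpoint>",
    "redirect_uri": "http://localhost:3000/redirect",
    "scope": "https://graph.microsoft.com/User.Read",
})

print(token_endpoint)
print(body)
```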
The `error` field has several possible values - review the protocol documentatio
## Next steps
-* Have a question or can't find what you're looking for? Create a GitHub issue or see [Support and help options for developers](./developer-support-help-options.md) to learn about other ways you can get help and support.
+* Have a question or can't find what you're looking for? Create a GitHub issue or see [Support and help options for developers](./developer-support-help-options.md) to learn about other ways you can get help and support.
active-directory Tutorial V2 Javascript Spa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-javascript-spa.md
sampleApp/
In the next steps, you'll create a new folder for the JavaScript SPA and set up the user interface (UI).

> [!TIP]
-> When you set up an Azure Active Directory (Azure AD) account, you create a tenant. This is a digital representation of your organization. It's primarily associated with a domain, like Microsoft.com. If you want to learn how applications can work with multiple tenants, refer to the [application model](/azure/active-directory/develop/application-model).
+> When you set up an Azure Active Directory (Azure AD) account, you create a tenant. This is a digital representation of your organization. It's primarily associated with a domain, like Microsoft.com. If you want to learn how applications can work with multiple tenants, refer to the [application model](./application-model.md).
## Create the SPA UI
The Microsoft Graph API requires the `User.Read` scope to read a user's profile.
Delve deeper into SPA development on the Microsoft identity platform in the first part of a scenario series:

> [!div class="nextstepaction"]
-> [Scenario: Single-page application](scenario-spa-overview.md)
+> [Scenario: Single-page application](scenario-spa-overview.md)
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/whats-new-docs.md
Previously updated: 01/05/2023
Last updated: 01/11/2023
Welcome to what's new in the Microsoft identity platform documentation. This art
### Updated articles

-- [Quickstart: Protect an ASP.NET Core web API with the Microsoft identity platform](quickstart-v2-aspnet-core-web-api.md)
+- [A web API that calls web APIs: Code configuration](scenario-web-api-call-api-app-configuration.md)
- [Quickstart: Get a token and call the Microsoft Graph API by using a console app's identity](quickstart-v2-netcore-daemon.md)
+- [Quickstart: Protect an ASP.NET Core web API with the Microsoft identity platform](quickstart-v2-aspnet-core-web-api.md)
+- [Tutorial: Create a Blazor Server app that uses the Microsoft identity platform for authentication](tutorial-blazor-server.md)
- [Tutorial: Sign in users and call a protected API from a Blazor WebAssembly app](tutorial-blazor-webassembly.md)
-- [A web API that calls web APIs: Code configuration](scenario-web-api-call-api-app-configuration.md)
-- [Web app that signs in users: Code configuration](scenario-web-app-sign-user-app-configuration.md)
- [Web app that signs in users: App registration](scenario-web-app-sign-user-app-registration.md)
-- [Microsoft identity platform docs: What's new](whats-new-docs.md)
-- [Tutorial: Create a Blazor Server app that uses the Microsoft identity platform for authentication](tutorial-blazor-server.md)
+- [Web app that signs in users: Code configuration](scenario-web-app-sign-user-app-configuration.md)
+
## November 2022

### New articles
active-directory Workload Identity Federation Block Using Azure Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/workload-identity-federation-block-using-azure-policy.md
# Block workload identity federation on managed identities using a policy
-This article describes how to block the creation of federated identity credentials on user-assigned managed identities by using Azure Policy. By blocking the creation of federated identity credentials, you can block everyone from using [workload identity federation](workload-identity-federation.md) to access Azure AD protected resources. [Azure Policy](/azure/governance/policy/overview) helps enforce certain business rules on your Azure resources and assess compliance of those resources.
+This article describes how to block the creation of federated identity credentials on user-assigned managed identities by using Azure Policy. By blocking the creation of federated identity credentials, you can block everyone from using [workload identity federation](workload-identity-federation.md) to access Azure AD protected resources. [Azure Policy](../../governance/policy/overview.md) helps enforce certain business rules on your Azure resources and assess compliance of those resources.
The Not allowed resource types built-in policy can be used to block the creation of federated identity credentials on user-assigned managed identities.
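As a sketch of how that built-in policy is parameterized, the payload below lists the federated-credential resource type as not allowed. The parameter name `listOfResourceTypesNotAllowed` and the resource type string are assumptions to verify against the built-in definition in your tenant:

```python
import json

# Resource type representing federated identity credentials on user-assigned
# managed identities (assumed; verify in your subscription).
blocked_type = ("Microsoft.ManagedIdentity/userAssignedIdentities/"
                "federatedIdentityCredentials")

# Parameters payload for the "Not allowed resource types" built-in policy,
# e.g. the value handed to `az policy assignment create --params`.
params = {"listOfResourceTypesNotAllowed": {"value": [blocked_type]}}

print(json.dumps(params, indent=2))
```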
active-directory Workload Identity Federation Create Trust https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/workload-identity-federation-create-trust.md
In this article, you learn how to create, list, and delete federated identity cr
## Important considerations and restrictions
-To create, update, or delete a federated identity credential, the account performing the action must have the [Application Administrator](/azure/active-directory/roles/permissions-reference#application-administrator), [Application Developer](/azure/active-directory/roles/permissions-reference#application-developer), [Cloud Application Administrator](/azure/active-directory/roles/permissions-reference#cloud-application-administrator), or Application Owner role. The [microsoft.directory/applications/credentials/update permission](/azure/active-directory/roles/custom-available-permissions#microsoftdirectoryapplicationscredentialsupdate) is required to update a federated identity credential.
+To create, update, or delete a federated identity credential, the account performing the action must have the [Application Administrator](../roles/permissions-reference.md#application-administrator), [Application Developer](../roles/permissions-reference.md#application-developer), [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator), or Application Owner role. The [microsoft.directory/applications/credentials/update permission](../roles/custom-available-permissions.md#microsoftdirectoryapplicationscredentialsupdate) is required to update a federated identity credential.
[!INCLUDE [federated credential configuration](./includes/federated-credential-configuration-considerations.md)]
az rest -m DELETE -u 'https://graph.microsoft.com/applications/f6475511-fd81-49
- To learn how to use workload identity federation for GitHub Actions, see [Configure a GitHub Actions workflow to get an access token](/azure/developer/github/connect-from-azure).
- Read the [GitHub Actions documentation](https://docs.github.com/actions/deployment/security-hardening-your-deployments/configuring-openid-connect-in-azure) to learn more about configuring your GitHub Actions workflow to get an access token from Microsoft identity provider and access Azure resources.
- For more information, read about how Azure AD uses the [OAuth 2.0 client credentials grant](v2-oauth2-client-creds-grant-flow.md#third-case-access-token-request-with-a-federated-credential) and a client assertion issued by another IdP to get a token.
-- For information about the required format of JWTs created by external identity providers, read about the [assertion format](active-directory-certificate-credentials.md#assertion-format).
+- For information about the required format of JWTs created by external identity providers, read about the [assertion format](active-directory-certificate-credentials.md#assertion-format).
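The `az rest -m DELETE` call shown above has a matching create operation; a hedged sketch of its JSON body (field names per the Microsoft Graph `federatedIdentityCredential` resource; the organization, repository, and branch values are placeholders):

```python
import json

# Federated identity credential trusting a GitHub Actions workflow.
credential = {
    "name": "github-main-branch",
    "issuer": "https://token.actions.githubusercontent.com",
    "subject": "repo:contoso/example-repo:ref:refs/heads/main",
    "audiences": ["api://AzureADTokenExchange"],
    "description": "Trust tokens issued to the main branch of example-repo",
}

# The body would be POSTed to:
#   https://graph.microsoft.com/v1.0/applications/{object-id}/federatedIdentityCredentials
print(json.dumps(credential, indent=2))
```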
active-directory Directory Delete Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/directory-delete-howto.md
A few enterprise applications can't be deleted in the Azure portal and might blo
>
> Before you proceed, verify that you're connected to the tenant that you want to delete with the MSOnline module. We recommend that you run the `Get-MsolDomain` command to confirm that you're connected to the correct tenant ID and `onmicrosoft.com` domain.
-5. Run the following command to set the tenant context:
+5. Run the following commands to set the tenant context. Do not skip these steps, or you risk deleting enterprise apps from the wrong tenant.
+ `Clear-AzContext -Scope CurrentUser`
`Connect-AzAccount -Tenant <object id of the tenant you are attempting to delete>`
+ `Get-AzContext`
>[!WARNING]
- > Before you proceed, verify that you're connected to the tenant that you want to delete with the Az PowerShell module. We recommend that you run the `Get-AzContext` command to check the connected tenant ID and `onmicrosoft.com` domain.
+ > Before you proceed, verify that you're connected to the tenant that you want to delete with the Az PowerShell module. We recommend that you run the `Get-AzContext` command to check the connected tenant ID and `onmicrosoft.com` domain. Do not skip the preceding steps, or you risk deleting enterprise apps from the wrong tenant.
6. Run the following command to remove any enterprise apps that you can't delete:
active-directory 3 Secure Access Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/3-secure-access-plan.md
To group resources for access:
* Microsoft Teams groups files, conversation threads, and other resources. Formulate an external access strategy for Microsoft Teams.
  * See, [Secure external access to Microsoft Teams, SharePoint, and OneDrive for Business](9-secure-access-teams-sharepoint.md)
* Use entitlement management access packages to create and delegate management of packages of applications, groups, teams, SharePoint sites, etc.
- * [Create a new access package in entitlement management](/azure/active-directory/governance/entitlement-management-access-package-create)
+ * [Create a new access package in entitlement management](../governance/entitlement-management-access-package-create.md)
* Apply Conditional Access policies to up to 250 applications, with the same access requirements
- * [What is Conditional Access?](/azure/active-directory/conditional-access/overview)
+ * [What is Conditional Access?](../conditional-access/overview.md)
* Use Cross Tenant Access Settings Inbound Access to define access for application groups of external users
- * [Overview: Cross-tenant access with Azure AD External Identities](/azure/active-directory/external-identities/cross-tenant-access-overview)
+ * [Overview: Cross-tenant access with Azure AD External Identities](../external-identities/cross-tenant-access-overview.md)
Document the applications to be grouped. Considerations include:
Items in bold are recommended.
* [Manage external access with entitlement management](6-secure-access-entitlement-managment.md)
* [Secure access with Conditional Access policies](7-secure-access-conditional-access.md)
* [Control access with sensitivity labels](8-secure-access-sensitivity-labels.md)
-* [Secure external access to Microsoft Teams, SharePoint, and OneDrive for Business](9-secure-access-teams-sharepoint.md)
+* [Secure external access to Microsoft Teams, SharePoint, and OneDrive for Business](9-secure-access-teams-sharepoint.md)
active-directory Active Directory Access Create New Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-access-create-new-tenant.md
After you sign in to the Azure portal, you can create a new tenant for your orga
![Azure Active Directory - Create a tenant page - configuration tab ](media/active-directory-access-create-new-tenant/azure-ad-create-new-tenant.png)
- - Type _Contoso Organization_ into the **Organization name** box.
+ - Enter your organization name (for example, _Contoso Organization_) in the **Organization name** box.
- - Type _Contosoorg_ into the **Initial domain name** box.
+ - Enter your initial domain name (for example, _Contosoorg_) in the **Initial domain name** box.
- - Leave the _United States_ option in the **Country or region** box.
+ - Select your country or region in the **Country or region** box, or leave the default _United States_ option.
1. Select **Next: Review + Create**. Review the information you entered, and if it's correct, select **Create**.
active-directory Active Directory Deployment Plans https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-deployment-plans.md
Use the following list to plan for authentication deployment.
  * See the video, [How to configure and enforce multi-factor authentication in your tenant](https://www.youtube.com/watch?v=qNndxl7gqVM)
  * See, [Plan an Azure Active Directory Multi-Factor Authentication deployment](../authentication/howto-mfa-getstarted.md)
* **Conditional Access** - Implement automated access-control decisions for users to access cloud apps, based on conditions:
- * See, [What is Conditional Access?](/azure/active-directory/conditional-access/overview)
+ * See, [What is Conditional Access?](../conditional-access/overview.md)
  * See, [Plan a Conditional Access deployment](../conditional-access/plan-conditional-access.md)
* **Azure AD self-service password reset (SSPR)** - Help users reset a password without administrator intervention:
  * See, [Passwordless authentication options for Azure AD](../authentication/concept-authentication-passwordless.md)
  * See, [Plan an Azure Active Directory self-service password reset deployment](../authentication/howto-sspr-deployment.md)
* **Passwordless authentication** - Implement passwordless authentication using the Microsoft Authenticator app or FIDO2 Security keys:
- * See, [Enable passwordless sign-in with Microsoft Authenticator](/azure/active-directory/authentication/howto-authentication-passwordless-phone)
+ * See, [Enable passwordless sign-in with Microsoft Authenticator](../authentication/howto-authentication-passwordless-phone.md)
  * See, [Plan a passwordless authentication deployment in Azure Active Directory](../authentication/howto-authentication-passwordless-deployment.md)

## Applications and devices
Use the following list to help deploy applications and devices.
  * See, [What is SSO in Azure AD?](../manage-apps/what-is-single-sign-on.md)
  * See, [Plan a SSO deployment](../manage-apps/plan-sso-deployment.md)
* **My Apps portal** - A web-based portal to discover and access applications. Enable user productivity with self-service, for instance requesting access to groups, or managing access to resources on behalf of others.
- * See, [My Apps portal overview](/azure/active-directory/manage-apps/myapps-overview)
+ * See, [My Apps portal overview](../manage-apps/myapps-overview.md)
* **Devices** - Evaluate device integration methods with Azure AD, choose the implementation plan, and more.
  * See, [Plan your Azure Active Directory device deployment](../devices/plan-device-deployment.md)
The following list describes features and services for productivity gains in hyb
* **Identity governance** - Create identity governance and enhance business processes that rely on identity data. With HR products, such as Workday or SuccessFactors, manage employee and contingent-staff identity lifecycle with rules. These rules map Joiner-Mover-Leaver processes, such as New Hire, Terminate, Transfer, to IT actions such as Create, Enable, Disable.
  * See, [Plan cloud HR application to Azure Active Directory user provisioning](../app-provisioning/plan-cloud-hr-provision.md)
* **Azure AD B2B collaboration** - Improve external-user collaboration with secure access to applications:
- * See, [B2B collaboration overview](/azure/active-directory/external-identities/what-is-b2b)
+ * See, [B2B collaboration overview](../external-identities/what-is-b2b.md)
  * See, [Plan an Azure Active Directory B2B collaboration deployment](../fundamentals/secure-external-access-resources.md)

## Governance and reporting
Use the following list to learn about governance and reporting. Items in the lis
Learn more: [Secure access for a connected world - meet Microsoft Entra](https://www.microsoft.com/en-us/security/blog/?p=114039)

* **Privileged identity management (PIM)** - Manage privileged administrative roles across Azure AD, Azure resources, and other Microsoft Online Services. Use it for just-in-time access, request approval workflows, and fully integrated access reviews to help prevent malicious activities:
- * See, [Start using Privileged Identity Management](/azure/active-directory/privileged-identity-management/pim-getting-started)
+ * See, [Start using Privileged Identity Management](../privileged-identity-management/pim-getting-started.md)
  * See, [Plan a Privileged Identity Management deployment](../privileged-identity-management/pim-deployment-plan.md)
* **Reporting and monitoring** - Your Azure AD reporting and monitoring solution design has dependencies and constraints: legal, security, operations, environment, and processes.
  * See, [Azure Active Directory reporting and monitoring deployment dependencies](../reports-monitoring/plan-monitoring-and-reporting.md)
Learn more: [Secure access for a connected world - meet Microsoft Entra](https:/
* **Identity governance** - Meet your compliance and risk management objectives for access to critical applications. Learn how to enforce accurate access.
  * See, [Govern access for applications in your environment](../governance/identity-governance-applications-prepare.md)
-Learn more: [Azure governance documentation](/azure/governance/)
+Learn more: [Azure governance documentation](../../governance/index.yml)
## Best practices for a pilot
In your first phase, target IT, usability, and other users who can test and prov
Widen the pilot to larger groups of users by using dynamic membership, or by manually adding users to the targeted group(s).
-Learn more: [Dynamic membership rules for groups in Azure Active Directory](../enterprise-users/groups-dynamic-membership.md)]
+Learn more: [Dynamic membership rules for groups in Azure Active Directory](../enterprise-users/groups-dynamic-membership.md)
active-directory Azure Ad Data Residency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/azure-ad-data-residency.md
# Azure Active Directory and data residency
-Azure AD is an Identity as a Service (IDaaS) solution that stores and manages identity and access data in the cloud. You can use the data to enable and manage access to cloud services, achieve mobility scenarios, and secure your organization. An instance of the Azure AD service, called a [tenant](/azure/active-directory/develop/developer-glossary#tenant), is an isolated set of directory object data that the customer provisions and owns.
+Azure AD is an Identity as a Service (IDaaS) solution that stores and manages identity and access data in the cloud. You can use the data to enable and manage access to cloud services, achieve mobility scenarios, and secure your organization. An instance of the Azure AD service, called a [tenant](../develop/developer-glossary.md#tenant), is an isolated set of directory object data that the customer provisions and owns.
## Core Store
Use the following table to see Azure AD cloud solution models based on infrastru
Learn more:
-* [Customer data storage and processing for European customers in Azure AD](/azure/active-directory/fundamentals/active-directory-data-storage-eu)
+* [Customer data storage and processing for European customers in Azure AD](./active-directory-data-storage-eu.md)
* Power BI: [Azure Active Directory – Where is your data located?](https://aka.ms/aaddatamap)
* [What is the Azure Active Directory architecture?](https://aka.ms/aadarch)
* [Find the Azure geography that meets your needs](https://azure.microsoft.com/overview/datacenters/how-to-choose/)
Learn more: [Azure Active Directory, Product overview](https://www.microsoft.com
||||
|Azure AD Authentication Service|This service is stateless. The data for authentication is in the Azure AD Core Store. It has no directory data. Azure AD Authentication Service generates log data in Azure storage, and in the data center where the service instance runs. When users attempt to authenticate using Azure AD, they're routed to an instance in the geographically nearest data center that is part of its Azure AD logical region. |In geo location|
|Azure AD Identity and Access Management (IAM) Services|**User and management experiences**: The Azure AD management experience is stateless and has no directory data. It generates log and usage data stored in Azure Tables storage. The user experience is like the Azure portal. <br>**Identity management business logic and reporting services**: These services have locally cached data storage for groups and users. The services generate log and usage data that goes to Azure Tables storage, Azure SQL, and in Microsoft Elastic Search reporting services. |In geo location|
-|Azure AD Multi-Factor Authentication (MFA)|For details about MFA-operations data storage and retention, see [Data residency and customer data for Azure AD multifactor authentication](/azure/active-directory/authentication/concept-mfa-data-residency). Azure AD MFA logs the User Principal Name (UPN), voice-call telephone numbers, and SMS challenges. For challenges to mobile app modes, the service logs the UPN and a unique device token. Data centers in the North America region store Azure AD MFA, and the logs it creates.|North America|
+|Azure AD Multi-Factor Authentication (MFA)|For details about MFA-operations data storage and retention, see [Data residency and customer data for Azure AD multifactor authentication](../authentication/concept-mfa-data-residency.md). Azure AD MFA logs the User Principal Name (UPN), voice-call telephone numbers, and SMS challenges. For challenges to mobile app modes, the service logs the UPN and a unique device token. Data centers in the North America region store Azure AD MFA, and the logs it creates.|North America|
|Azure AD Domain Services|See regions where Azure AD Domain Services is published on [Products available by region](https://azure.microsoft.com/regions/services/). The service holds system metadata globally in Azure Tables, and it contains no personal data.|In geo location|
|Azure AD Connect Health|Azure AD Connect Health generates alerts and reports in Azure Tables storage and blob storage.|In geo location|
|Azure AD dynamic membership for groups, Azure AD self-service group management|Azure Tables storage holds dynamic membership rule definitions.|In geo location|
Learn more: [Azure Active Directory, Product overview](https://www.microsoft.com
|Azure AD provisioning|Azure AD provisioning creates, removes, and updates users in systems, such as software as service (SaaS) applications. It manages user creation in Azure AD and on-premises AD from cloud HR sources, like Workday. The service stores its configuration in an Azure Cosmos DB, which stores the group membership data for the user directory it keeps. Cosmos DB replicates the database to multiple datacenters in the same region as the tenant, which isolates the data, according to the Azure AD cloud solution model. Replication creates high availability and multiple reading and writing endpoints. Cosmos DB has encryption on the database information, and the encryption keys are stored in the secrets storage for Microsoft.|In geo location|
|Azure AD business-to-business (B2B) collaboration|Azure AD B2B collaboration has no directory data. Users and other directory objects in a B2B relationship, with another tenant, result in user data copied in other tenants, which might have data residency implications.|In geo location|
|Azure AD Identity Protection|Azure AD Identity Protection uses real-time user log-in data, with multiple signals from company and industry sources, to feed its machine-learning systems that detect anomalous logins. Personal data is scrubbed from real-time log-in data before it's passed to the machine learning system. The remaining log-in data identifies potentially risky usernames and logins. After analysis, the data goes to Microsoft reporting systems. Risky logins and usernames appear in reporting for Administrators.|In geo location|
-|Azure AD managed identities for Azure resources|Azure AD managed identities for Azure resources with managed identities systems can authenticate to Azure services, without storing credentials. Rather than use username and password, managed identities authenticate to Azure services with certificates. The service writes certificates it issues in Azure Cosmos DB in the East US region, which fail over to another region, as needed. Azure Cosmos DB geo-redundancy occurs by global data replication. Database replication puts a read-only copy in each region that Azure AD managed identities runs. To learn more, see [Azure services that can use managed identities to access other services](/azure/active-directory/managed-identities-azure-resources/managed-identities-status#azure-services-that-support-managed-identities-for-azure-resources). Microsoft isolates each Cosmos DB instance in an Azure AD cloud solution model. </br> The resource provider, such as the virtual machine (VM) host, stores the certificate for authentication, and identity flows, with other Azure services. The service stores its master key to access Azure Cosmos DB in a datacenter secrets management service. Azure Key Vault stores the master encryption keys.|In geo location|
-|Azure Active Directory B2C |[Azure AD B2C](/azure/active-directory-b2c/data-residency) is an identity management service to customize and manage how customers sign up, sign in, and manage their profiles when using applications. B2C uses the Core Store to keep user identity information. The Core Store database follows known storage, replication, deletion, and data-residency rules. B2C uses an Azure Cosmos DB system to store service policies and secrets. Cosmos DB has encryption and replication services on database information. Its encryption key is stored in the secrets storage for Microsoft. Microsoft isolates Cosmos DB instances in an Azure AD cloud solution model.|Customer-selectable geo location|
+|Azure AD managed identities for Azure resources|With Azure AD managed identities for Azure resources, systems can authenticate to Azure services without storing credentials. Rather than use username and password, managed identities authenticate to Azure services with certificates. The service writes certificates it issues to Azure Cosmos DB in the East US region, which fails over to another region, as needed. Azure Cosmos DB geo-redundancy occurs by global data replication. Database replication puts a read-only copy in each region that Azure AD managed identities runs in. To learn more, see [Azure services that can use managed identities to access other services](../managed-identities-azure-resources/managed-identities-status.md). Microsoft isolates each Cosmos DB instance in an Azure AD cloud solution model. </br> The resource provider, such as the virtual machine (VM) host, stores the certificate for authentication and identity flows with other Azure services. The service stores its master key to access Azure Cosmos DB in a datacenter secrets management service. Azure Key Vault stores the master encryption keys.|In geo location|
+|Azure Active Directory B2C |[Azure AD B2C](../../active-directory-b2c/data-residency.md) is an identity management service to customize and manage how customers sign up, sign in, and manage their profiles when using applications. B2C uses the Core Store to keep user identity information. The Core Store database follows known storage, replication, deletion, and data-residency rules. B2C uses an Azure Cosmos DB system to store service policies and secrets. Cosmos DB has encryption and replication services on database information. Its encryption key is stored in the secrets storage for Microsoft. Microsoft isolates Cosmos DB instances in an Azure AD cloud solution model.|Customer-selectable geo location|
## Related resources
active-directory Service Accounts Principal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/service-accounts-principal.md
Learn more about securing service accounts:
Conditional Access:
-Use Conditional Access to block service principals from untrusted locations. See, [Create a location-based Conditional Access policy](/azure/active-directory/conditional-access/workload-identity#create-a-location-based-conditional-access-policy).
-
+Use Conditional Access to block service principals from untrusted locations. See, [Create a location-based Conditional Access policy](../conditional-access/workload-identity.md#create-a-location-based-conditional-access-policy).
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new.md
In this Public Preview refresh, we have enhanced the user experience with an upd
For more information, see: [Conditional Access templates (Preview)](../conditional-access/concept-conditional-access-policy-common.md). ---
-### Public Preview - Enabling customization capabilities for the conditional error messages in Company Branding.
---
-**Type:** New feature
-**Service category:** Other
-**Product capability:** End User Experiences
-
-Updates to the Company Branding functionality on the Azure AD/Microsoft 365 login experience, to allow customizing conditional access (CA) error messages. For more information, see: [Company Branding](../fundamentals/customize-branding.md).
-- ### Public Preview - Admins can restrict their users from creating tenants
Azure AD supports provisioning users into applications hosted on-premises or in
In December 2022 we have added the following 44 new applications in our App gallery with Federation support
-[Bionexo IDM](https://login.bionexo.com/), [SMART Meeting Pro](https://www.smarttech.com/en/business/software/meeting-pro), [Venafi Control Plane ΓÇô Datacenter](/azure/active-directory/saas-apps/venafi-control-plane-tutorial), [HighQ](../saas-apps/highq-tutorial.md), [Drawboard PDF](https://pdf.drawboard.com/), [ETU Skillsims](../saas-apps/etu-skillsims-tutorial.md), [TencentCloud IDaaS](../saas-apps/tencent-cloud-idaas-tutorial.md), [TeamHeadquarters Email Agent OAuth](https://thq.entry.com/), [Verizon MDM](https://verizonmdm.vzw.com/), [QRadar SOAR](../saas-apps/qradar-soar-tutorial.md), [Tripwire Enterprise](../saas-apps/tripwire-enterprise-tutorial.md), [Cisco Unified Communications Manager](../saas-apps/cisco-unified-communications-manager-tutorial.md), [Howspace](https://login.in.howspace.com/), [Flipsnack SAML](../saas-apps/flipsnack-saml-tutorial.md), [Albert](http://www.albertinvent.com/), [Altinget.no](https://www.altinget.no/), [Coveo Hosted Services](../saas-apps/coveo-hosted-services-tutorial.md), [Cybozu(cybozu.com)](../saas-apps/cybozu-tutorial.md), [BombBomb](https://app.bombbomb.com/app), [VMware Identity Service](../saas-apps/vmware-identity-service-tutorial.md), [Cimmaron Exchange Sync - Delegated](https://cimmaronsoftware.com/Mortgage-CRM-Exchange-Sync.aspx), [HexaSync](https://app-az.hexasync.com/login), [Trifecta Teams](https://app.trifectateams.net/), [VerosoftDesign](https://verosoft-design.vercel.app/login), [Mazepay](https://app.mazepay.com/), [Wistia](../saas-apps/wistia-tutorial.md), [Begin.AI](https://app.begin.ai/), [WebCE](../saas-apps/webce-tutorial.md), [Dream Broker Studio](https://dreambroker.com/studio/login/), [PKSHA Chatbot](../saas-apps/pksha-chatbot-tutorial.md), [PGM-BCP](https://ups-pgm-bcp.4gfactor.com/azure/), [ChartDesk SSO](../saas-apps/chartdesk-sso-tutorial.md), [Elsevier SP](../saas-apps/elsevier-sp-tutorial.md), [GreenCommerce IdentityServer](https://identity.jem-id.nl/Account/Login), 
[Fullview](https://app.fullview.io/sign-in), [Aqua Platform](../saas-apps/aqua-platform-tutorial.md), [SpedTrack](../saas-apps/spedtrack-tutorial.md), [Pinpoint](https://pinpoint.ddiworld.com/psg2?sso=true), [Darzin Outlook Add-in](https://outlook.darzin.com/graph-login.html), [Simply Stakeholders Outlook Add-in](https://outlook.simplystakeholders.com/graph-login.html), [tesma](../saas-apps/tesma-tutorial.md), [Parkable](../saas-apps/parkable-tutorial.md), [Unite Us](../saas-apps/unite-us-tutorial.md)
+[Bionexo IDM](https://login.bionexo.com/), [SMART Meeting Pro](https://www.smarttech.com/en/business/software/meeting-pro), [Venafi Control Plane – Datacenter](../saas-apps/venafi-control-plane-tutorial.md), [HighQ](../saas-apps/highq-tutorial.md), [Drawboard PDF](https://pdf.drawboard.com/), [ETU Skillsims](../saas-apps/etu-skillsims-tutorial.md), [TencentCloud IDaaS](../saas-apps/tencent-cloud-idaas-tutorial.md), [TeamHeadquarters Email Agent OAuth](https://thq.entry.com/), [Verizon MDM](https://verizonmdm.vzw.com/), [QRadar SOAR](../saas-apps/qradar-soar-tutorial.md), [Tripwire Enterprise](../saas-apps/tripwire-enterprise-tutorial.md), [Cisco Unified Communications Manager](../saas-apps/cisco-unified-communications-manager-tutorial.md), [Howspace](https://login.in.howspace.com/), [Flipsnack SAML](../saas-apps/flipsnack-saml-tutorial.md), [Albert](http://www.albertinvent.com/), [Altinget.no](https://www.altinget.no/), [Coveo Hosted Services](../saas-apps/coveo-hosted-services-tutorial.md), [Cybozu(cybozu.com)](../saas-apps/cybozu-tutorial.md), [BombBomb](https://app.bombbomb.com/app), [VMware Identity Service](../saas-apps/vmware-identity-service-tutorial.md), [Cimmaron Exchange Sync - Delegated](https://cimmaronsoftware.com/Mortgage-CRM-Exchange-Sync.aspx), [HexaSync](https://app-az.hexasync.com/login), [Trifecta Teams](https://app.trifectateams.net/), [VerosoftDesign](https://verosoft-design.vercel.app/login), [Mazepay](https://app.mazepay.com/), [Wistia](../saas-apps/wistia-tutorial.md), [Begin.AI](https://app.begin.ai/), [WebCE](../saas-apps/webce-tutorial.md), [Dream Broker Studio](https://dreambroker.com/studio/login/), [PKSHA Chatbot](../saas-apps/pksha-chatbot-tutorial.md), [PGM-BCP](https://ups-pgm-bcp.4gfactor.com/azure/), [ChartDesk SSO](../saas-apps/chartdesk-sso-tutorial.md), [Elsevier SP](../saas-apps/elsevier-sp-tutorial.md), [GreenCommerce IdentityServer](https://identity.jem-id.nl/Account/Login), [Fullview](https://app.fullview.io/sign-in),
[Aqua Platform](../saas-apps/aqua-platform-tutorial.md), [SpedTrack](../saas-apps/spedtrack-tutorial.md), [Pinpoint](https://pinpoint.ddiworld.com/psg2?sso=true), [Darzin Outlook Add-in](https://outlook.darzin.com/graph-login.html), [Simply Stakeholders Outlook Add-in](https://outlook.simplystakeholders.com/graph-login.html), [tesma](../saas-apps/tesma-tutorial.md), [Parkable](../saas-apps/parkable-tutorial.md), [Unite Us](../saas-apps/unite-us-tutorial.md)
You can also find the documentation of all the applications from here https://aka.ms/AppsTutorial,
We recognize that changing libraries is not an easy task, and cannot be accompli
### How to find out which applications in my tenant are using ADAL?
-Refer to our post on [Microsoft Q&A](/answers/questions/360928/information-how-to-find-apps-using-adal-in-your-te.html) for details on identifying ADAL apps with the help of [Azure Workbooks](/azure/azure-monitor/visualize/workbooks-overview).
-### If I'm using ADAL, what can I expect after the deadline?
+Refer to our post on [Microsoft Q&A](/answers/questions/360928/information-how-to-find-apps-using-adal-in-your-te.html) for details on identifying ADAL apps with the help of [Azure Workbooks](../../azure-monitor/visualize/workbooks-overview.md).
+### If I'm using ADAL, what can I expect after the deadline?
- There will be no new releases (security or otherwise) to the library after June 2023. - We will not be accepting any incident reports or support requests for ADAL. ADAL to MSAL migration support would continue.
Developers can now use managed identities for their software workloads running a
For more information, see: - [Configure a user-assigned managed identity to trust an external identity provider (preview)](../develop/workload-identity-federation-create-trust-user-assigned-managed-identity.md) - [Workload identity federation](../develop/workload-identity-federation.md)-- [Use an Azure AD workload identity (preview) on Azure Kubernetes Service (AKS)](/azure/aks/workload-identity-overview)
+- [Use an Azure AD workload identity (preview) on Azure Kubernetes Service (AKS)](../../aks/workload-identity-overview.md)
Authenticator version 6.6.8 and higher on iOS will be FIPS 140 compliant for all
In November 2022, we've added the following 22 new applications in our App gallery with Federation support
-[Adstream](/azure/active-directory/saas-apps/adstream-tutorial), [Databook](/azure/active-directory/saas-apps/databook-tutorial), [Ecospend IAM](https://ecospend.com/), [Digital Pigeon](/azure/active-directory/saas-apps/digital-pigeon-tutorial), [Drawboard Projects](/azure/active-directory/saas-apps/drawboard-projects-tutorial), [Vellum](https://www.vellum.ink/request-demo), [Veracity](https://aie-veracity.com/connect/azure), [Microsoft OneNote to Bloomberg Note Sync](https://www.bloomberg.com/professional/support/software-updates/), [DX NetOps Portal](/azure/active-directory/saas-apps/dx-netops-portal-tutorial), [itslearning Outlook integration](https://itslearning.com/global/), [Tranxfer](/azure/active-directory/saas-apps/tranxfer-tutorial), [Occupop](https://app.occupop.com/), [Nialli Workspace](https://ws.nialli.com/), [Tideways](https://app.tideways.io/login), [SOWELL](https://manager.sowellapp.com/#/?sso=true), [Prewise Learning](https://prewiselearning.com/), [CAPTOR for Intune](https://www.inkscreen.com/microsoft), [wayCloud Platform](https://app.way-cloud.de/login), [Nura Space Meeting Room](https://play.google.com/store/apps/details?id=com.meetingroom.prod), [Flexopus Exchange Integration](https://help.flexopus.com/de/microsoft-graph-integration), [Ren Systems](https://app.rensystems.com/login), [Nudge Security](https://www.nudgesecurity.io/login)
+[Adstream](../saas-apps/adstream-tutorial.md), [Databook](../saas-apps/databook-tutorial.md), [Ecospend IAM](https://ecospend.com/), [Digital Pigeon](../saas-apps/digital-pigeon-tutorial.md), [Drawboard Projects](../saas-apps/drawboard-projects-tutorial.md), [Vellum](https://www.vellum.ink/request-demo), [Veracity](https://aie-veracity.com/connect/azure), [Microsoft OneNote to Bloomberg Note Sync](https://www.bloomberg.com/professional/support/software-updates/), [DX NetOps Portal](../saas-apps/dx-netops-portal-tutorial.md), [itslearning Outlook integration](https://itslearning.com/global/), [Tranxfer](../saas-apps/tranxfer-tutorial.md), [Occupop](https://app.occupop.com/), [Nialli Workspace](https://ws.nialli.com/), [Tideways](https://app.tideways.io/login), [SOWELL](https://manager.sowellapp.com/#/?sso=true), [Prewise Learning](https://prewiselearning.com/), [CAPTOR for Intune](https://www.inkscreen.com/microsoft), [wayCloud Platform](https://app.way-cloud.de/login), [Nura Space Meeting Room](https://play.google.com/store/apps/details?id=com.meetingroom.prod), [Flexopus Exchange Integration](https://help.flexopus.com/de/microsoft-graph-integration), [Ren Systems](https://app.rensystems.com/login), [Nudge Security](https://www.nudgesecurity.io/login)
You can also find the documentation of all the applications from here https://aka.ms/AppsTutorial,
Beginning September 30, 2024, Azure Multi-Factor Authentication Server deploymen
-### General Availability - Change of Default User Consent Settings
---
-**Type:** New feature
-**Service category:** Enterprise Apps
-**Product capability:** Developer Experience
-
-Starting Sept 30th, 2022, Microsoft will require all new tenants to follow a new user consent configuration. While this won't impact any existing tenants that were created before September 30, 2022, all new tenants created after September 30, 2022, will have the default setting of "Enable automatic updates (Recommendation)" under User consent settings. This change reduces the risk of malicious applications attempting to trick users into granting them access to your organization's data. For more information, see: [Configure how users consent to applications](../manage-apps/configure-user-consent.md).
--- ### Public Preview - Lifecycle Workflows is now available
With this new parity update, customers can now integrate non-gallery application
For more information, see [Claims mapping policy - Microsoft Entra | Microsoft Docs](../develop/reference-claims-mapping-policy-type.md#claim-schema-entry-elements). -+
active-directory How To Connect Group Writeback Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-group-writeback-enable.md
Group writeback requires enabling both the original and new versions of the feat
> >The enhanced group writeback feature is enabled on the tenant and not per Azure AD Connect client instance. Please be sure that all Azure AD Connect client instances are updated to build version 1.6.4.0 or later.
+> [!NOTE]
+> If you don't want to write back all existing Microsoft 365 groups to Active Directory, you need to change the group writeback default behavior before performing the steps in this article to enable the feature. See [Modify Azure AD Connect group writeback default behavior](how-to-connect-modify-group-writeback.md).
+> Also, the new and original versions of the feature need to be enabled in the order documented. If the original feature is enabled first, all existing Microsoft 365 groups will be written back to Active Directory.
+ ### Enable group writeback by using PowerShell 1. On your Azure AD Connect server, open a PowerShell prompt as an administrator.
active-directory How To Connect Group Writeback V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-group-writeback-v2.md
You can modify the default behavior as follows:
- Microsoft 365 groups with up to 250,000 members can be written back to on-premises. If you plan to make changes to the default behavior, we recommend that you do so before you enable group writeback. However, you can still modify the default behavior if group writeback is already enabled. For more information, see [Modify Azure AD Connect group writeback default behavior](how-to-connect-modify-group-writeback.md).
-
+
+> [!NOTE]
+> You need to make these changes before enabling group writeback; otherwise, all existing Microsoft 365 groups will be automatically written back to Active Directory. Also, the new and original versions of the feature need to be enabled in the order documented. If the original feature is enabled first, all existing Microsoft 365 groups will be written back to Active Directory.
+ ## Understand limitations of public preview Although this release has undergone extensive testing, you might still encounter issues. One of the goals of this public preview release is to find and fix any issues before the feature moves to general availability. Please also note that any public preview functionality can still receive breaking changes, which may require you to make changes to your configuration to continue using this feature. We may also decide to change or remove certain functionality without prior notice.
These limitations and known issues are specific to group writeback:
- [Modify Azure AD Connect group writeback default behavior](how-to-connect-modify-group-writeback.md) - [Enable Azure AD Connect group writeback](how-to-connect-group-writeback-enable.md)-- [Disable Azure AD Connect group writeback](how-to-connect-group-writeback-disable.md)
+- [Disable Azure AD Connect group writeback](how-to-connect-group-writeback-disable.md)
active-directory F5 Bigip Deployment Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-bigip-deployment-guide.md
In this tutorial, learn to deploy BIG-IP Virtual Edition (VE) in Azure infrastru
- A prepared BIG-IP virtual machine (VM) to model a secure hybrid access (SHA) proof-of-concept - A staging instance to test new BIG-IP system updates and hotfixes
-Learn more: [SHA: Secure legacy apps with Azure Active Directory](/azure/active-directory/manage-apps/secure-hybrid-access)
+Learn more: [SHA: Secure legacy apps with Azure Active Directory](./secure-hybrid-access.md)
## Prerequisites
Get-AzVmSnapshot -ResourceGroupName '<E.g.contoso-RG>' -VmName '<E.g.BIG-IP-VM>'
## Next steps
-Select a [deployment scenario](f5-aad-integration.md) and start your implementation.
+Select a [deployment scenario](f5-aad-integration.md) and start your implementation.
active-directory Secure Hybrid Access Integrations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/secure-hybrid-access-integrations.md
We recommend you become a verified publisher, so customers know you're the trust
## Enable single sign-on for IT admins
-There are several ways to enable SSO for IT administrators to your solution. See, [Plan a single sign-on deployment, SSO options](/azure/active-directory/manage-apps/plan-sso-deployment#single-sign-on-options).
+There are several ways to enable SSO for IT administrators to your solution. See, [Plan a single sign-on deployment, SSO options](./plan-sso-deployment.md#single-sign-on-options).
Microsoft Graph uses OIDC/OAuth. Customers use OIDC to sign in to your solution. Use the JSON Web Token (JWT) Azure AD issues to interact with Microsoft Graph. See, [OpenID Connect on the Microsoft identity platform](../develop/v2-protocols-oidc.md).
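To make the token structure above concrete, the following minimal Python sketch decodes the claims segment of a JWT. The token here is built locally and entirely hypothetical, and no signature validation is performed; real tokens issued by Azure AD must have their signatures validated before any claim is trusted.

```python
import base64
import json

def decode_jwt_claims(jwt: str) -> dict:
    """Decode the (unverified) claims segment of a JWT.

    A JWT has three base64url segments: header.payload.signature.
    This helper is for inspection only; it does not verify anything.
    """
    payload_b64 = jwt.split(".")[1]
    # base64url decoding requires padding to a multiple of 4 characters.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Hypothetical token assembled locally for illustration only.
header = base64.urlsafe_b64encode(b'{"alg":"RS256","typ":"JWT"}').rstrip(b"=").decode()
claims = base64.urlsafe_b64encode(
    b'{"aud":"https://graph.microsoft.com",'
    b'"tid":"00000000-0000-0000-0000-000000000000"}'
).rstrip(b"=").decode()
token = f"{header}.{claims}.signature"

print(decode_jwt_claims(token)["aud"])
```

The `aud` (audience) claim is what distinguishes a token usable against Microsoft Graph from one issued for another resource.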
-If your solution uses SAML for IT administrator SSO, the SAML token won't enable your solution to interact with Microsoft Graph. You can use SAML for IT administrator SSO, but your solution needs to support OIDC integration with Azure AD, so it can get a JWT from Azure AD to interact with Microsoft Graph. See, [How the Microsoft identity platform uses the SAML protocol](/azure/active-directory/develop/active-directory-saml-protocol-reference).
+If your solution uses SAML for IT administrator SSO, the SAML token won't enable your solution to interact with Microsoft Graph. You can use SAML for IT administrator SSO, but your solution needs to support OIDC integration with Azure AD, so it can get a JWT from Azure AD to interact with Microsoft Graph. See, [How the Microsoft identity platform uses the SAML protocol](../develop/active-directory-saml-protocol-reference.md).
You can use one of the following SAML approaches:
https://login.microsoftonline.com/{Tenant_ID}/federationmetadata/2007-06/federat
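The metadata URL pattern above can be sketched as a small helper. The function name is hypothetical; the path used is the documented Azure AD federation metadata location.

```python
def federation_metadata_url(tenant_id: str) -> str:
    # tenant_id may be the directory (tenant) GUID or a verified domain name.
    return (
        "https://login.microsoftonline.com/"
        f"{tenant_id}/federationmetadata/2007-06/federationmetadata.xml"
    )

print(federation_metadata_url("contoso.onmicrosoft.com"))
```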
### Assign users and groups
-After you publish the application to Azure AD, you can assign the app to users and groups to ensure it appears on the My Apps portal. This assignment is on the service principal object generated when you created the application. See, [My Apps portal overview](/azure/active-directory/manage-apps/myapps-overview).
+After you publish the application to Azure AD, you can assign the app to users and groups to ensure it appears on the My Apps portal. This assignment is on the service principal object generated when you created the application. See, [My Apps portal overview](./myapps-overview.md).
Get `AppRole` instances the application might have associated with it. It's common for SaaS applications to have various `AppRole` instances associated with them. Typically, for custom applications, there's one default `AppRole` instance. Get the `AppRole` instance ID you want to assign:
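One way to read those `AppRole` definitions is a Microsoft Graph request against the service principal object. This Python sketch only builds the request URL (the helper name and sample ID are hypothetical; `appRoles` is a documented property of the servicePrincipal resource):

```python
GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def app_roles_request_url(service_principal_id: str) -> str:
    # $select trims the response to the appRoles collection; each entry's
    # id is the appRoleId used when creating the role assignment.
    return f"{GRAPH_BASE}/servicePrincipals/{service_principal_id}?$select=appRoles"

print(app_roles_request_url("11111111-2222-3333-4444-555555555555"))
```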
The following software-defined perimeter (SDP) solutions providers connect with
* **Strata Maverics Identity Orchestrator** * [Integrate Azure AD SSO with Maverics Identity Orchestrator SAML Connector](../saas-apps/maverics-identity-orchestrator-saml-connector-tutorial.md) * **Zscaler Private Access**
- * [Tutorial: Integrate Zscaler Private Access with Azure AD](../saas-apps/zscalerprivateaccess-tutorial.md)
+ * [Tutorial: Integrate Zscaler Private Access with Azure AD](../saas-apps/zscalerprivateaccess-tutorial.md)
active-directory Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/overview.md
A common challenge for developers is the management of secrets, credentials, certificates, and keys used to secure communication between services. Managed identities eliminate the need for developers to manage these credentials.
-While developers can securely store the secrets in [Azure Key Vault](../../key-vault/general/overview.md), services need a way to access Azure Key Vault. Managed identities provide an automatically managed identity in Azure Active Directory for applications to use when connecting to resources that support Azure Active Directory (Azure AD) authentication. Applications can use managed identities to obtain Azure AD tokens without having to manage any credentials.
+While developers can securely store the secrets in [Azure Key Vault](../../key-vault/general/overview.md), services need a way to access Azure Key Vault. Managed identities provide an automatically managed identity in Azure Active Directory (Azure AD) for applications to use when connecting to resources that support Azure AD authentication. Applications can use managed identities to obtain Azure AD tokens without having to manage any credentials.
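On an Azure VM, for example, the token request a managed identity makes goes to the local Instance Metadata Service (IMDS) and carries no secret at all. A minimal sketch of building that request, assuming the documented IMDS endpoint and `api-version` (the helper name is hypothetical):

```python
from urllib.parse import urlencode

# Documented IMDS managed identity token endpoint (link-local, VM-only).
IMDS_ENDPOINT = "http://169.254.169.254/metadata/identity/oauth2/token"

def imds_token_request(resource):
    # No credential appears anywhere: the VM's identity is established by
    # the platform, and the Metadata header is required by IMDS.
    query = urlencode({"api-version": "2018-02-01", "resource": resource})
    return IMDS_ENDPOINT + "?" + query, {"Metadata": "true"}

url, headers = imds_token_request("https://vault.azure.net")
print(url)
```

Sending a GET to that URL with that header from inside the VM returns a JSON body containing the Azure AD access token for the requested resource.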
The following video shows how you can use managed identities:</br>
active-directory Groups Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/groups-concept.md
Previously updated : 09/26/2022 Last updated : 01/11/2023
The following are known issues with role-assignable groups:
- Use the new [Exchange admin center](/exchange/exchange-admin-center) for role assignments via group membership. The old Exchange admin center doesn't support this feature. If accessing the old Exchange admin center is required, assign the eligible role directly to the user (not via role-assignable groups). Exchange PowerShell cmdlets will work as expected. - If an administrator role is assigned to a role-assignable group instead of individual users, members of the group will not be able to access Rules, Organization, or Public Folders in the new [Exchange admin center](/exchange/exchange-admin-center). The workaround is to assign the role directly to users instead of the group. - Azure Information Protection Portal (the classic portal) doesn't recognize role membership via group yet. You can [migrate to the unified sensitivity labeling platform](/azure/information-protection/configure-policy-migrate-labels) and then use the Office 365 Security & Compliance center to use group assignments to manage roles.-- [Apps admin center](https://config.office.com/) doesn't support this feature yet. Assign the Office Apps Administrator role directly to users. ## License requirements
active-directory Netsuite Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/netsuite-provisioning-tutorial.md
- Title: 'Tutorial: Configure NetSuite OneWorld for automatic user provisioning with Azure Active Directory | Microsoft Docs'
-description: Learn how to configure single sign-on between Azure Active Directory and NetSuite OneWorld.
------- Previously updated : 11/21/2022--
-# Tutorial: Configuring NetSuite for automatic user provisioning
-
-The objective of this tutorial is to show you the steps you need to perform in NetSuite OneWorld and Azure AD to automatically provision and de-provision user accounts from Azure AD to NetSuite.
-
-> [!WARNING]
-> This provisioning integration will stop working with the release of NetSuite's Spring 2021 update due to a change to the NetSuite APIs that are used by Microsoft to provision users into NetSuite. This update will reach NetSuite customers between February and April of 2021. As a result of this, the provisioning functionality of the NetSuite application in the Azure Active Directory Enterprise App Gallery will be removed soon. The application's SSO functionality will remain intact. Microsoft is working with NetSuite to build a new modernized provisioning integration, but there is currently no ETA on when this will be completed.
-
-## Prerequisites
-
-The scenario outlined in this tutorial assumes that you already have the following items:
-
-* An Azure Active directory tenant.
-* A NetSuite OneWorld subscription. Note that automatic user provisioning is presently only supported with NetSuite OneWorld.
-* A user account in NetSuite with administrator permissions.
-* Integration with Azure AD requires a 2FA exemption. Please contact NetSuite's support team to request this exemption.
-
-## Assigning users to NetSuite OneWorld
-
-Azure Active Directory uses a concept called "assignments" to determine which users should receive access to selected apps. In the context of automatic user account provisioning, only the users and groups that have been "assigned" to an application in Azure AD are synchronized.
-
-Before configuring and enabling the provisioning service, you need to decide what users and/or groups in Azure AD represent the users who need access to your NetSuite app. Once decided, you can assign these users to your NetSuite app by following the instructions here:
-
-[Assign a user or group to an enterprise app](../manage-apps/assign-user-or-group-access-portal.md)
-
-### Important tips for assigning users to NetSuite OneWorld
-
-* It is recommended that a single Azure AD user is assigned to NetSuite to test the provisioning configuration. Additional users and/or groups may be assigned later.
-
-* When assigning a user to NetSuite, you must select a valid user role. The "Default Access" role does not work for provisioning.
-
-## Enable User Provisioning
-
-This section guides you through connecting your Azure AD to NetSuite's user account provisioning API, and configuring the provisioning service to create, update, and disable assigned user accounts in NetSuite based on user and group assignment in Azure AD.
-
-> [!TIP]
-> You may also choose to enable SAML-based Single Sign-On for NetSuite, following the instructions provided in the [Azure portal](https://portal.azure.com). Single sign-on can be configured independently of automatic provisioning, though these two features complement each other.
-
-### To configure user account provisioning:
-
-The objective of this section is to outline how to enable user provisioning of Active Directory user accounts to NetSuite.
-
-1. In the [Azure portal](https://portal.azure.com), browse to the **Azure Active Directory > Enterprise Apps > All applications** section.
-
-1. If you have already configured NetSuite for single sign-on, search for your instance of NetSuite using the search field. Otherwise, select **Add** and search for **NetSuite** in the application gallery. Select NetSuite from the search results, and add it to your list of applications.
-
-1. Select your instance of NetSuite, then select the **Provisioning** tab.
-
-1. Set the **Provisioning Mode** to **Automatic**.
-
- ![Screenshot shows the NetSuite Provisioning page, with Provisioning Mode set to Automatic and other values you can set.](./media/netsuite-provisioning-tutorial/provisioning.png)
-
-1. Under the **Admin Credentials** section, provide the following configuration settings:
-
- a. In the **Admin User Name** textbox, type a NetSuite account name that has the **System Administrator** profile in NetSuite.com assigned.
-
- b. In the **Admin Password** textbox, type the password for this account.
-
-1. In the Azure portal, click **Test Connection** to ensure Azure AD can connect to your NetSuite app.
-
-1. In the **Notification Email** field, enter the email address of a person or group who should receive provisioning error notifications, and check the checkbox.
-
-1. Click **Save.**
-
-1. Under the Mappings section, select **Synchronize Azure Active Directory Users to NetSuite.**
-
-1. In the **Attribute Mappings** section, review the user attributes that are synchronized from Azure AD to NetSuite. Note that the attributes selected as **Matching** properties are used to match the user accounts in NetSuite for update operations. Select the Save button to commit any changes.
-
-1. To enable the Azure AD provisioning service for NetSuite, change the **Provisioning Status** to **On** in the Settings section.
-
-1. Click **Save.**
-
-Saving starts the initial synchronization of any users and/or groups assigned to NetSuite in the Users and Groups section. Note that the initial sync takes longer to perform than subsequent syncs, which occur approximately every 40 minutes as long as the service is running. You can use the **Synchronization Details** section to monitor progress and follow links to provisioning activity logs, which describe all actions performed by the provisioning service on your NetSuite app.
-
-For more information on how to read the Azure AD provisioning logs, see [Reporting on automatic user account provisioning](../app-provisioning/check-status-user-account-provisioning.md).
-
-## Additional resources
-
-* [Managing user account provisioning for Enterprise Apps](tutorial-list.md)
-* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
-* [Configure Single Sign-on](netsuite-tutorial.md)
active-directory Otsuka Shokai Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/otsuka-shokai-tutorial.md
- Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Otsuka Shokai | Microsoft Docs'
-description: Learn how to configure single sign-on between Azure Active Directory and Otsuka Shokai.
-Previously updated: 11/21/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with Otsuka Shokai
-
-In this tutorial, you'll learn how to integrate Otsuka Shokai with Azure Active Directory (Azure AD). When you integrate Otsuka Shokai with Azure AD, you can:
-
-* Control in Azure AD who has access to Otsuka Shokai.
-* Enable your users to be automatically signed-in to Otsuka Shokai with their Azure AD accounts.
-* Manage your accounts in one central location - the Azure portal.
-
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-
-## Prerequisites
-
-To get started, you need the following items:
-
-* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-* Otsuka Shokai single sign-on (SSO) enabled subscription.
-
-## Scenario description
-
-In this tutorial, you configure and test Azure AD SSO in a test environment.
-
-* Otsuka Shokai supports **IDP** initiated SSO.
-
-> [!NOTE]
-> The identifier of this application is a fixed string value, so only one instance can be configured per tenant.
-
-## Adding Otsuka Shokai from the gallery
-
-To configure the integration of Otsuka Shokai into Azure AD, you need to add Otsuka Shokai from the gallery to your list of managed SaaS apps.
-
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
-1. On the left navigation pane, select the **Azure Active Directory** service.
-1. Navigate to **Enterprise Applications** and then select **All Applications**.
-1. To add a new application, select **New application**.
-1. In the **Add from the gallery** section, type **Otsuka Shokai** in the search box.
-1. Select **Otsuka Shokai** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-
- Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
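The gallery steps above can also be expressed as a Microsoft Graph call (`POST /applicationTemplates/{id}/instantiate`). A hedged sketch that only builds the request; the template ID is a placeholder you would look up first:

```python
# Sketch: request for instantiating a gallery app via Microsoft Graph.
# template_id is a placeholder; find the real one by filtering
# GET /applicationTemplates on displayName.
def instantiate_request(template_id: str, display_name: str) -> dict:
    """Return method, URL, and body for the instantiate call."""
    return {
        "method": "POST",
        "url": f"https://graph.microsoft.com/v1.0/applicationTemplates/{template_id}/instantiate",
        "body": {"displayName": display_name},
    }

req = instantiate_request("<template-id>", "Otsuka Shokai")
```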
-
-## Configure and test Azure AD single sign-on for Otsuka Shokai
-
-Configure and test Azure AD SSO with Otsuka Shokai using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Otsuka Shokai.
-
-To configure and test Azure AD SSO with Otsuka Shokai, complete the following building blocks:
-
-1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
- 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
- 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
-1. **[Configure Otsuka Shokai SSO](#configure-otsuka-shokai-sso)** - to configure the single sign-on settings on the application side.
- 1. **[Create Otsuka Shokai test user](#create-otsuka-shokai-test-user)** - to have a counterpart of B.Simon in Otsuka Shokai that is linked to the Azure AD representation of the user.
-1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-
-## Configure Azure AD SSO
-
-Follow these steps to enable Azure AD SSO in the Azure portal.
-
-1. In the [Azure portal](https://portal.azure.com/), on the **Otsuka Shokai** application integration page, find the **Manage** section and select **single sign-on**.
-1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
-
-1. In the **Basic SAML Configuration** section, the application is preconfigured in **IDP** initiated mode, and the necessary URLs are already prepopulated with Azure. Save the configuration by clicking the **Save** button.
-
-1. The Otsuka Shokai application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes, where **nameidentifier** is mapped to **user.userprincipalname**. Otsuka Shokai expects **nameidentifier** to be mapped to **user.objectid**, so edit the attribute mapping by clicking the **Edit** icon and change the mapping.
-
- ![image](common/default-attributes.png)
-
-1. In addition to the above, the Otsuka Shokai application expects a few more attributes to be passed back in the SAML response, which are shown below. These attributes are also prepopulated, but you can review them per your requirements.
-
- | Name | Source Attribute|
- | | |
- | Appid | `<Application ID>` |
-
- >[!NOTE]
- >`<Application ID>` is the value that you copied from the **Properties** tab in the Azure portal.
-
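The mapping described above, with **nameidentifier** sourced from **user.objectid** and the extra **Appid** claim, can be sketched locally as a claims-building step. The user record and helper below are hypothetical illustrations, not Azure AD APIs:

```python
# Sketch: build the SAML claim set Otsuka Shokai expects from an
# Azure AD user record. The user dict and build_claims helper are
# hypothetical; they mirror the attribute table above.
def build_claims(user: dict, app_id: str) -> dict:
    return {
        # nameidentifier must come from the immutable object ID,
        # not from user.userprincipalname (the default mapping).
        "nameidentifier": user["objectId"],
        # Appid carries the Application ID copied from the Properties tab.
        "Appid": app_id,
    }

user = {
    "objectId": "11111111-aaaa-bbbb-cccc-222222222222",
    "userPrincipalName": "B.Simon@contoso.com",
}
claims = build_claims(user, "<Application ID>")
```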
-### Create an Azure AD test user
-
-In this section, you'll create a test user in the Azure portal called B.Simon.
-
-1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
-1. Select **New user** at the top of the screen.
-1. In the **User** properties, follow these steps:
- 1. In the **Name** field, enter `B.Simon`.
- 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
- 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
- 1. Click **Create**.
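The portal steps above map to a Microsoft Graph user-creation body (`POST /users`). A minimal sketch; the password is a placeholder and the `mailNickname` derivation is an assumption for illustration:

```python
# Sketch: request body for creating the B.Simon test user via
# Microsoft Graph (POST /users). The password is a placeholder;
# deriving mailNickname from the UPN prefix is an assumption.
def new_user_body(display_name: str, upn: str, password: str) -> dict:
    return {
        "accountEnabled": True,
        "displayName": display_name,
        "mailNickname": upn.split("@")[0],
        "userPrincipalName": upn,
        "passwordProfile": {
            "password": password,
            "forceChangePasswordNextSignIn": True,
        },
    }

body = new_user_body("B.Simon", "B.Simon@contoso.com", "<password>")
```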
-
-### Assign the Azure AD test user
-
-In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Otsuka Shokai.
-
-1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
-1. In the applications list, select **Otsuka Shokai**.
-1. In the app's overview page, find the **Manage** section and select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add User link](common/add-assign-user.png)
-
-1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
-1. In the **Add Assignment** dialog, click the **Assign** button.
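The assignment steps above correspond to a Microsoft Graph app role assignment (`POST /servicePrincipals/{sp_id}/appRoleAssignedTo`). A minimal sketch with placeholder IDs; the all-zero `appRoleId` conventionally means default access:

```python
# Sketch: body for assigning B.Simon to the app's service principal
# via Microsoft Graph. All IDs are placeholders.
def app_role_assignment(user_id: str, sp_id: str, app_role_id: str) -> dict:
    return {
        "principalId": user_id,    # the Azure AD user being assigned
        "resourceId": sp_id,       # the app's service principal
        "appRoleId": app_role_id,  # role surfaced in the SAML assertion
    }

assignment = app_role_assignment(
    "<user-object-id>",
    "<service-principal-object-id>",
    "00000000-0000-0000-0000-000000000000",  # default access role
)
```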
-
-## Configure Otsuka Shokai SSO
-
-1. When you connect to Customer's My Page from the SSO app, the SSO setting wizard starts.
-
-2. If an Otsuka-ID isn't registered yet, proceed to the Otsuka-ID new registration. If you have already registered an Otsuka-ID, proceed to the linkage setting.
-
-3. Proceed to the end; when the top screen is displayed after logging in to Customer's My Page, the SSO settings are complete.
-
-4. The next time you connect to Customer's My Page from the SSO app, the guidance screen opens and then the top screen is displayed after you log in to Customer's My Page.
-
-### Create Otsuka Shokai test user
-
-A SaaS account is registered on your first access to Otsuka Shokai. The Azure AD account and the SaaS account are also linked at the time of creation.
-
-## Test SSO
-
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
-
-When you click the Otsuka Shokai tile in the Access Panel, you should be automatically signed in to the Otsuka Shokai for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
-
-## Additional resources
-
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
-
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
-
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
-
-- [Try Otsuka Shokai with Azure AD](https://aad.portal.azure.com/)
active-directory Configure Cmmc Level 1 Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/configure-cmmc-level-1-controls.md
The following table provides a list of practice statements and objectives, and Azure AD guidance and recommendations.
| CMMC practice statement and objectives | Azure AD guidance and recommendations |
| - | - |
| AC.L1-3.1.1<br><br>**Practice statement:** Limit information system access to authorized users, processes acting on behalf of authorized users, or devices (including other information systems).<br><br>**Objectives:**<br>Determine if:<br>[a.] authorized users are identified;<br>[b.] processes acting on behalf of authorized users are identified;<br>[c.] devices (and other systems) authorized to connect to the system are identified;<br>[d.] system access is limited to authorized users;<br>[e.] system access is limited to processes acting on behalf of authorized users; and<br>[f.] system access is limited to authorized devices (including other systems). | You're responsible for setting up Azure AD accounts, which is accomplished from external HR systems, on-premises Active Directory, or directly in the cloud. You configure Conditional Access to only grant access from a known (Registered/Managed) device. In addition, apply the concept of least privilege when granting application permissions. Where possible, use delegated permission.<br><br>Set up users<br><li>[Plan cloud HR application to Azure Active Directory user provisioning](../app-provisioning/plan-cloud-hr-provision.md)<li>[Azure AD Connect sync: Understand and customize synchronization](../hybrid/how-to-connect-sync-whatis.md)<li>[Add or delete users – Azure Active Directory](../fundamentals/add-users-azure-active-directory.md)<br><br>Set up devices<li>[What is device identity in Azure Active Directory](../devices/overview.md)<br><br>Configure applications<li>[QuickStart: Register an app in the Microsoft identity platform](../develop/quickstart-register-app.md)<li>[Microsoft identity platform scopes, permissions, & consent](../develop/v2-permissions-and-consent.md)<li>[Securing service principals in Azure Active Directory](../fundamentals/service-accounts-principal.md)<br><br>Conditional access<li>[What is Conditional Access in Azure Active Directory](../conditional-access/overview.md)<li>[Conditional Access require managed device](../conditional-access/require-managed-devices.md) |
-| AC.L1-3.1.2<br><br>**Practice statement:** Limit information system access to the types of transactions and functions that authorized users are permitted to execute.<br><br>**Objectives:**<br>Determine if:<br>[a.] the types of transactions and functions that authorized users are permitted to execute are defined; and<br>[b.] system access is limited to the defined types of transactions and functions for authorized users. | You're responsible for configuring access controls such as Role Based Access Controls (RBAC) with built-in or custom roles. Use role assignable groups to manage role assignments for multiple users requiring same access. Configure Attribute Based Access Controls (ABAC) with default or custom security attributes. The objective is to granularly control access to resources protected with Azure AD.<br><br>Set up RBAC<li>[Overview of role-based access control in Active Directory ](../roles/custom-overview.md)[Azure AD built-in roles](../roles/permissions-reference.md)<li>[Create and assign a custom role in Azure Active Directory](../roles/custom-create.md)<br><br>Set up ABAC<li>[What is Azure attribute-based access control (Azure ABAC)](/azure/role-based-access-control/conditions-overview)<li>[What are custom security attributes in Azure AD?](/azure/active-directory/fundamentals/custom-security-attributes-overview)<br><br>Configure groups for role assignment<li>[Use Azure AD groups to manage role assignments](../roles/groups-concept.md) |
-| AC.L1-3.1.20<br><br>**Practice statement:** Verify and control/limit connections to and use of external information systems.<br><br>**Objectives:**<br>Determine if:<br>[a.] connections to external systems are identified;<br>[b.] the use of external systems is identified;<br>[c.] connections to external systems are verified;<br>[d.] the use of external systems is verified;<br>[e.] connections to external systems are controlled and or limited; and<br>[f.] the use of external systems is controlled and or limited. | You're responsible for configuring conditional access policies using device controls and or network locations to control and or limit connections and use of external systems. Configure Terms of Use (TOU) for recorded user acknowledgment of terms and conditions for use of external systems for access.<br><br>Set up Conditional Access as required<li>[What is Conditional Access?](../conditional-access/overview.md)<li>[Require managed devices for cloud app access with Conditional Access](../conditional-access/require-managed-devices.md)<li>[Require device to be marked as compliant](../conditional-access/require-managed-devices.md)<li>[Conditional Access: Filter for devices](/azure/active-directory/conditional-access/concept-condition-filters-for-devices)<br><br>Use Conditional Access to block access<li>[Conditional Access - Block access by location](../conditional-access/howto-conditional-access-policy-location.md)<br><br>Configure terms of use<li>[Terms of use - Azure Active Directory](../conditional-access/terms-of-use.md)<li>[Conditional Access require terms of use ](../conditional-access/require-tou.md) |
+| AC.L1-3.1.2<br><br>**Practice statement:** Limit information system access to the types of transactions and functions that authorized users are permitted to execute.<br><br>**Objectives:**<br>Determine if:<br>[a.] the types of transactions and functions that authorized users are permitted to execute are defined; and<br>[b.] system access is limited to the defined types of transactions and functions for authorized users. | You're responsible for configuring access controls such as Role Based Access Controls (RBAC) with built-in or custom roles. Use role assignable groups to manage role assignments for multiple users requiring same access. Configure Attribute Based Access Controls (ABAC) with default or custom security attributes. The objective is to granularly control access to resources protected with Azure AD.<br><br>Set up RBAC<li>[Overview of role-based access control in Active Directory ](../roles/custom-overview.md)[Azure AD built-in roles](../roles/permissions-reference.md)<li>[Create and assign a custom role in Azure Active Directory](../roles/custom-create.md)<br><br>Set up ABAC<li>[What is Azure attribute-based access control (Azure ABAC)](../../role-based-access-control/conditions-overview.md)<li>[What are custom security attributes in Azure AD?](../fundamentals/custom-security-attributes-overview.md)<br><br>Configure groups for role assignment<li>[Use Azure AD groups to manage role assignments](../roles/groups-concept.md) |
+| AC.L1-3.1.20<br><br>**Practice statement:** Verify and control/limit connections to and use of external information systems.<br><br>**Objectives:**<br>Determine if:<br>[a.] connections to external systems are identified;<br>[b.] the use of external systems is identified;<br>[c.] connections to external systems are verified;<br>[d.] the use of external systems is verified;<br>[e.] connections to external systems are controlled and or limited; and<br>[f.] the use of external systems is controlled and or limited. | You're responsible for configuring conditional access policies using device controls and or network locations to control and or limit connections and use of external systems. Configure Terms of Use (TOU) for recorded user acknowledgment of terms and conditions for use of external systems for access.<br><br>Set up Conditional Access as required<li>[What is Conditional Access?](../conditional-access/overview.md)<li>[Require managed devices for cloud app access with Conditional Access](../conditional-access/require-managed-devices.md)<li>[Require device to be marked as compliant](../conditional-access/require-managed-devices.md)<li>[Conditional Access: Filter for devices](../conditional-access/concept-condition-filters-for-devices.md)<br><br>Use Conditional Access to block access<li>[Conditional Access - Block access by location](../conditional-access/howto-conditional-access-policy-location.md)<br><br>Configure terms of use<li>[Terms of use - Azure Active Directory](../conditional-access/terms-of-use.md)<li>[Conditional Access require terms of use ](../conditional-access/require-tou.md) |
| AC.L1-3.1.22<br><br>**Practice statement:** Control information posted or processed on publicly accessible information systems.<br><br>**Objectives:**<br>Determine if:<br>[a.] individuals authorized to post or process information on publicly accessible systems are identified;<br>[b.] procedures to ensure FCI isn't posted or processed on publicly accessible systems are identified;<br>[c.] a review process is in place prior to posting of any content to publicly accessible systems; and<br>[d.] content on publicly accessible systems is reviewed to ensure that it doesn't include federal contract information (FCI). | You're responsible for configuring Privileged Identity Management (PIM) to manage access to systems where posted information is publicly accessible. Require approvals with justification prior to role assignment in PIM. Configure Terms of Use (TOU) for systems where posted information is publicly accessible for recorded acknowledgment of terms and conditions for posting of publicly accessible information.<br><br>Plan PIM deployment<li>[What is Privileged Identity Management?](../privileged-identity-management/pim-configure.md)<li>[Plan a Privileged Identity Management deployment](../privileged-identity-management/pim-deployment-plan.md)<br><br>Configure terms of use<li>[Terms of use - Azure Active Directory](../conditional-access/terms-of-use.md)<li>[Conditional Access require terms of use](../conditional-access/require-tou.md)<li>[Configure Azure AD role settings in PIM - Require Justification](../privileged-identity-management/pim-how-to-change-default-settings.md) |

## Identification and Authentication (IA) domain
The following table provides a list of practice statement and objectives, and Az
* [Configure Azure Active Directory for CMMC compliance](configure-azure-active-directory-for-cmmc-compliance.md)
* [Configure CMMC Level 2 Access Control (AC) controls](configure-cmmc-level-2-access-control.md)
* [Configure CMMC Level 2 Identification and Authentication (IA) controls](configure-cmmc-level-2-identification-and-authentication.md)
-* [Configure CMMC Level 2 additional controls](configure-cmmc-level-2-additional-controls.md)
+* [Configure CMMC Level 2 additional controls](configure-cmmc-level-2-additional-controls.md)
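Much of the Conditional Access guidance in the table above, such as blocking access by location under AC.L1-3.1.20, can be automated through Microsoft Graph (`POST /identity/conditionalAccess/policies`). A minimal sketch of the payload, not a tested policy; the named-location ID is a placeholder:

```python
# Sketch: Conditional Access policy body that blocks access from a
# named location, per the AC.L1-3.1.20 guidance. location_id is a
# placeholder for a named location created beforehand.
def block_by_location_policy(location_id: str) -> dict:
    return {
        "displayName": "Block access from untrusted locations",
        # Start in report-only mode before enforcing.
        "state": "enabledForReportingButNotEnforced",
        "conditions": {
            "users": {"includeUsers": ["All"]},
            "applications": {"includeApplications": ["All"]},
            "locations": {"includeLocations": [location_id]},
        },
        "grantControls": {"operator": "OR", "builtInControls": ["block"]},
    }

policy = block_by_location_policy("<named-location-id>")
```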
active-directory Configure Cmmc Level 2 Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/configure-cmmc-level-2-access-control.md
The following table provides a list of practice statement and objectives, and Az
| AC.L2-3.1.9<br><br>**Practice statement:** Provide privacy and security notices consistent with applicable CUI rules.<br><br>**Objectives:**<br>Determine if:<br>[a.] privacy and security notices required by CUI-specified rules are identified, consistent, and associated with the specific CUI category; and<br>[b.] privacy and security notices are displayed. | With Azure AD, you can deliver notification or banner messages for all apps that require and record acknowledgment before granting access. You can granularly target these terms of use policies to specific users (Member or Guest). You can also customize them per application via conditional access policies.<br><br>**Conditional access**<br>[What is conditional access in Azure AD?](../conditional-access/overview.md)<br><br>**Terms of use**<br>[Azure Active Directory terms of use](../conditional-access/terms-of-use.md)<br>[View report of who has accepted and declined](../conditional-access/terms-of-use.md) |
| AC.L2-3.1.10<br><br>**Practice statement:** Use session lock with pattern-hiding displays to prevent access and viewing of data after a period of inactivity.<br><br>**Objectives:**<br>Determine if:<br>[a.] the period of inactivity after which the system initiates a session lock is defined;<br>[b.] access to the system and viewing of data is prevented by initiating a session lock after the defined period of inactivity; and<br>[c.] previously visible information is concealed via a pattern-hiding display after the defined period of inactivity. | Implement device lock by using a conditional access policy to restrict access to compliant or hybrid Azure AD joined devices. Configure policy settings on the device to enforce device lock at the OS level with MDM solutions such as Intune. Endpoint Manager or group policy objects can also be considered in hybrid deployments.<br>For unmanaged devices, configure the Sign-In Frequency setting to force users to reauthenticate.<br>[Require device to be marked as compliant](../conditional-access/require-managed-devices.md)<br>[Grant controls in Conditional Access policy - Require hybrid Azure AD joined device](../conditional-access/concept-conditional-access-grant.md)<br>[User sign-in frequency](../conditional-access/howto-conditional-access-session-lifetime.md)<br><br>Configure devices for maximum minutes of inactivity until the screen locks ([Android](/mem/intune/configuration/device-restrictions-android), [iOS](/mem/intune/configuration/device-restrictions-ios), [Windows 10](/mem/intune/configuration/device-restrictions-windows-10)). |
| AC.L2-3.1.11<br><br>**Practice statement:** Terminate (automatically) a user session after a defined condition.<br><br>**Objectives:**<br>Determine if:<br>[a.] conditions requiring a user session to terminate are defined; and<br>[b.] a user session is automatically terminated after any of the defined conditions occur. | Enable Continuous Access Evaluation (CAE) for all supported applications. For applications that don't support CAE, or for conditions not applicable to CAE, implement policies in Microsoft Defender for Cloud Apps to automatically terminate sessions when conditions occur. Additionally, configure Azure Active Directory Identity Protection to evaluate user and sign-in risk. Use conditional access with Identity Protection to allow users to automatically remediate risk.<br>[Continuous access evaluation in Azure AD](../conditional-access/concept-continuous-access-evaluation.md)<br>[Control cloud app usage by creating policies](/defender-cloud-apps/control-cloud-apps-with-policies)<br>[What is Azure Active Directory Identity Protection?](../identity-protection/overview-identity-protection.md) |
-|AC.L2-3.1.12<br><br>**Practice statement:** Monitor and control remote access sessions.<br><br>**Objectives:**<br>Determine if:<br>[a.] remote access sessions are permitted;<br>[b.] the types of permitted remote access are identified;<br>[c.] remote access sessions are controlled; and<br>[d.] remote access sessions are monitored. | In todayΓÇÖs world, users access cloud-based applications almost exclusively remotely from unknown or untrusted networks. It's critical to securing this pattern of access to adopt zero trust principals. To meet these controls requirements in a modern cloud world we must verify each access request explicitly, implement least privilege and assume breach.<br><br>Configure named locations to delineate internal vs external networks. Configure conditional access app control to route access via Microsoft Defender for Cloud Apps (MDCA). Configure MDCA to control and monitor all sessions.<br>[Zero Trust Deployment Guide for Microsoft Azure Active Directory](/security/blog/2020/04/30/zero-trust-deployment-guide-azure-active-directory/)<br>[Location condition in Azure Active Directory Conditional Access](/azure/active-directory/conditional-access/location-condition.md)<br>[Deploy Cloud App Security Conditional Access App Control for Azure AD apps](/cloud-app-security/proxy-deployment-aad.md)<br>[What is Microsoft Defender for Cloud Apps?](/cloud-app-security/what-is-cloud-app-security.md)<br>[Monitor alerts raised in Microsoft Defender for Cloud Apps](/cloud-app-security/monitor-alerts.md) |
+|AC.L2-3.1.12<br><br>**Practice statement:** Monitor and control remote access sessions.<br><br>**Objectives:**<br>Determine if:<br>[a.] remote access sessions are permitted;<br>[b.] the types of permitted remote access are identified;<br>[c.] remote access sessions are controlled; and<br>[d.] remote access sessions are monitored. | In today's world, users access cloud-based applications almost exclusively remotely from unknown or untrusted networks. Adopting Zero Trust principles is critical to securing this pattern of access. To meet these control requirements in a modern cloud world, we must verify each access request explicitly, implement least privilege, and assume breach.<br><br>Configure named locations to delineate internal vs external networks. Configure conditional access app control to route access via Microsoft Defender for Cloud Apps (MDCA). Configure MDCA to control and monitor all sessions.<br>[Zero Trust Deployment Guide for Microsoft Azure Active Directory](/security/blog/2020/04/30/zero-trust-deployment-guide-azure-active-directory/)<br>[Location condition in Azure Active Directory Conditional Access](../conditional-access/location-condition.md)<br>[Deploy Cloud App Security Conditional Access App Control for Azure AD apps](/cloud-app-security/proxy-deployment-aad.md)<br>[What is Microsoft Defender for Cloud Apps?](/cloud-app-security/what-is-cloud-app-security.md)<br>[Monitor alerts raised in Microsoft Defender for Cloud Apps](/cloud-app-security/monitor-alerts.md) |
| AC.L2-3.1.13<br><br>**Practice statement:** Employ cryptographic mechanisms to protect the confidentiality of remote access sessions.<br><br>**Objectives:**<br>Determine if:<br>[a.] cryptographic mechanisms to protect the confidentiality of remote access sessions are identified; and<br>[b.] cryptographic mechanisms to protect the confidentiality of remote access sessions are implemented. | All Azure AD customer-facing web services are secured with the Transport Layer Security (TLS) protocol and are implemented using FIPS-validated cryptography.<br>[Azure Active Directory Data Security Considerations (microsoft.com)](https://azure.microsoft.com/resources/azure-active-directory-data-security-considerations/) |
-| AC.L2-3.1.14<br><br>**Practice statement:** Route remote access via managed access control points.<br><br>**Objectives:**<br>Determine if:<br>[a.] managed access control points are identified and implemented; and<br>[b.] remote access is routed through managed network access control points. | Configure named locations to delineate internal vs external networks. Configure conditional access app control to route access via Microsoft Defender for Cloud Apps (MDCA). Configure MDCA to control and monitor all sessions. Secure devices used by privileged accounts as part of the privileged access story.<br>[Location condition in Azure Active Directory Conditional Access](/azure/active-directory/conditional-access/location-condition.md)<br>[Session controls in Conditional Access policy](/azure/active-directory/conditional-access/concept-conditional-access-session.md)<br>[Securing privileged access overview](/security/compass/overview.md) |
-| AC.L2-3.1.15<br><br>**Practice statement:** Authorize remote execution of privileged commands and remote access to security-relevant information.<br><br>**Objectives:**<br>Determine if:<br>[a.] privileged commands authorized for remote execution are identified;<br>[b.] security-relevant information authorized to be accessed remotely is identified;<br>[c.] the execution of the identified privileged commands via remote access is authorized; and<br>[d.] access to the identified security-relevant information via remote access is authorized. | Conditional Access is the Zero Trust control plane to target policies for access to your apps when combined with authentication context. You can apply different policies in those apps. Secure devices used by privileged accounts as part of the privileged access story. Configure conditional access policies to require the use of these secured devices by privileged users when performing privileged commands.<br>[Cloud apps, actions, and authentication context in Conditional Access policy](/azure/active-directory/conditional-access/concept-conditional-access-cloud-apps.md)<br>[Securing privileged access overview](/security/compass/overview.md)<br>[Filter for devices as a condition in Conditional Access policy](/azure/active-directory/conditional-access/concept-condition-filters-for-devices.md) |
-| AC.L2-3.1.18<br><br>**Practice statement:** Control connection of mobile devices.<br><br>**Objectives:**<br>Determine if:<br>[a.] mobile devices that process, store, or transmit CUI are identified;<br>[b.] mobile device connections are authorized; and<br>[c.] mobile device connections are monitored and logged. | Configure device management policies via MDM (such as Microsoft Intune), Microsoft Endpoint Manager (MEM) or group policy objects (GPO) to enforce mobile device configuration and connection profile. Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](/azure/active-directory/conditional-access/concept-conditional-access-grant.md)<br>[Require hybrid Azure AD joined device](/azure/active-directory/conditional-access/concept-conditional-access-grant.md)<br><br>**InTune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started.md)<br>[What is app management in Microsoft Intune?](/mem/intune/apps/app-management.md) |
+| AC.L2-3.1.14<br><br>**Practice statement:** Route remote access via managed access control points.<br><br>**Objectives:**<br>Determine if:<br>[a.] managed access control points are identified and implemented; and<br>[b.] remote access is routed through managed network access control points. | Configure named locations to delineate internal vs external networks. Configure conditional access app control to route access via Microsoft Defender for Cloud Apps (MDCA). Configure MDCA to control and monitor all sessions. Secure devices used by privileged accounts as part of the privileged access story.<br>[Location condition in Azure Active Directory Conditional Access](../conditional-access/location-condition.md)<br>[Session controls in Conditional Access policy](../conditional-access/concept-conditional-access-session.md)<br>[Securing privileged access overview](/security/compass/overview.md) |
+| AC.L2-3.1.15<br><br>**Practice statement:** Authorize remote execution of privileged commands and remote access to security-relevant information.<br><br>**Objectives:**<br>Determine if:<br>[a.] privileged commands authorized for remote execution are identified;<br>[b.] security-relevant information authorized to be accessed remotely is identified;<br>[c.] the execution of the identified privileged commands via remote access is authorized; and<br>[d.] access to the identified security-relevant information via remote access is authorized. | Conditional Access is the Zero Trust control plane to target policies for access to your apps when combined with authentication context. You can apply different policies in those apps. Secure devices used by privileged accounts as part of the privileged access story. Configure conditional access policies to require the use of these secured devices by privileged users when performing privileged commands.<br>[Cloud apps, actions, and authentication context in Conditional Access policy](../conditional-access/concept-conditional-access-cloud-apps.md)<br>[Securing privileged access overview](/security/compass/overview.md)<br>[Filter for devices as a condition in Conditional Access policy](../conditional-access/concept-condition-filters-for-devices.md) |
+| AC.L2-3.1.18<br><br>**Practice statement:** Control connection of mobile devices.<br><br>**Objectives:**<br>Determine if:<br>[a.] mobile devices that process, store, or transmit CUI are identified;<br>[b.] mobile device connections are authorized; and<br>[c.] mobile device connections are monitored and logged. | Configure device management policies via MDM (such as Microsoft Intune), Microsoft Endpoint Manager (MEM), or group policy objects (GPO) to enforce mobile device configuration and connection profiles. Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](../conditional-access/concept-conditional-access-grant.md)<br>[Require hybrid Azure AD joined device](../conditional-access/concept-conditional-access-grant.md)<br><br>**Intune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started.md)<br>[What is app management in Microsoft Intune?](/mem/intune/apps/app-management.md) |
| AC.L2-3.1.19<br><br>**Practice statement:** Encrypt CUI on mobile devices and mobile computing platforms.<br><br>**Objectives:**<br>Determine if:<br>[a.] mobile devices and mobile computing platforms that process, store, or transmit CUI are identified; and<br>[b.] encryption is employed to protect CUI on identified mobile devices and mobile computing platforms. | **Managed Device**<br>Configure Conditional Access policies to enforce a compliant or hybrid Azure AD joined (HAADJ) device, and ensure managed devices are configured appropriately via a device management solution to encrypt CUI.<br><br>**Unmanaged Device**<br>Configure Conditional Access policies to require app protection policies.<br>[Grant controls in Conditional Access policy - Require device to be marked as compliant](../conditional-access/concept-conditional-access-grant.md)<br>[Grant controls in Conditional Access policy - Require hybrid Azure AD joined device](../conditional-access/concept-conditional-access-grant.md)<br>[Grant controls in Conditional Access policy - Require app protection policy](../conditional-access/concept-conditional-access-grant.md) |
-| AC.L2-3.1.21<br><br>**Practice statement:** Limit use of portable storage devices on external systems.<br><br>**Objectives:**<br>Determine if:<br>[a.] the use of portable storage devices containing CUI on external systems is identified and documented;<br>[b.] limits on the use of portable storage devices containing CUI on external systems are defined; and<br>[c.] the use of portable storage devices containing CUI on external systems is limited as defined. | Configure device management policies via MDM (such as Microsoft Intune), Microsoft Endpoint Manager (MEM) or group policy objects (GPO) to control the use of portable storage devices on systems. Configure policy settings on the Windows device to completely prohibit or restrict use of portable storage at the OS level. For all other devices where you may be unable to granularly control access to portable storage block download entirely with Microsoft Defender for Cloud Apps. Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](/azure/active-directory/conditional-access/concept-conditional-access-grant.md)<br>[Require hybrid Azure AD joined device](/azure/active-directory/conditional-access/concept-conditional-access-grant.md)<br>[Configure authentication session management - Azure Active Directory](/azure/active-directory/conditional-access/howto-conditional-access-session-lifetime.md)<br><br>**InTune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started.md)<br>[Restrict USB devices using administrative templates in Microsoft Intune](/mem/intune/configuration/administrative-templates-restrict-usb.md)<br><br>**Microsoft Defender for Cloud Apps**<br>[Create session policies in Defender for Cloud Apps](/defender-cloud-apps/session-policy-aad.md)
+| AC.L2-3.1.21<br><br>**Practice statement:** Limit use of portable storage devices on external systems.<br><br>**Objectives:**<br>Determine if:<br>[a.] the use of portable storage devices containing CUI on external systems is identified and documented;<br>[b.] limits on the use of portable storage devices containing CUI on external systems are defined; and<br>[c.] the use of portable storage devices containing CUI on external systems is limited as defined. | Configure device management policies via MDM (such as Microsoft Intune), Microsoft Endpoint Manager (MEM), or group policy objects (GPO) to control the use of portable storage devices on systems. Configure policy settings on the Windows device to completely prohibit or restrict use of portable storage at the OS level. For all other devices, where you may be unable to granularly control access to portable storage, block downloads entirely with Microsoft Defender for Cloud Apps. Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](../conditional-access/concept-conditional-access-grant.md)<br>[Require hybrid Azure AD joined device](../conditional-access/concept-conditional-access-grant.md)<br>[Configure authentication session management - Azure Active Directory](../conditional-access/howto-conditional-access-session-lifetime.md)<br><br>**Intune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started.md)<br>[Restrict USB devices using administrative templates in Microsoft Intune](/mem/intune/configuration/administrative-templates-restrict-usb.md)<br><br>**Microsoft Defender for Cloud Apps**<br>[Create session policies in Defender for Cloud Apps](/defender-cloud-apps/session-policy-aad.md)
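The grant controls referenced throughout the rows above (require compliant device, require hybrid Azure AD joined device) can be expressed as a Microsoft Graph `conditionalAccessPolicy` request body. The sketch below only builds the JSON payload and does not call the Graph API; the pilot group ID is a hypothetical placeholder, and the policy starts in report-only state as a cautious default.

```python
# Sketch (assumptions noted in the lead-in): build the request body for
# POST /identity/conditionalAccess/policies (Microsoft Graph v1.0) that
# requires a compliant OR hybrid Azure AD joined device.
import json

def require_managed_device_policy(display_name, include_group_id):
    """Return a Conditional Access policy body in the Graph v1.0 schema."""
    return {
        "displayName": display_name,
        # Report-only first, so sign-in impact can be reviewed before enforcing.
        "state": "enabledForReportingButNotEnforced",
        "conditions": {
            "users": {"includeGroups": [include_group_id]},
            "applications": {"includeApplications": ["All"]},
        },
        "grantControls": {
            # "OR": satisfying either built-in control grants access.
            "operator": "OR",
            "builtInControls": ["compliantDevice", "domainJoinedDevice"],
        },
    }

policy = require_managed_device_policy(
    "CMMC AC.L2 - require managed device",
    "00000000-0000-0000-0000-000000000000",  # placeholder pilot group ID
)
print(json.dumps(policy, indent=2))
```

In practice you would post this body with an authenticated Graph client and scope it to a pilot group before broadening assignment.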
### Next steps

* [Configure Azure Active Directory for CMMC compliance](configure-azure-active-directory-for-cmmc-compliance.md)
* [Configure CMMC Level 1 controls](configure-cmmc-level-1-controls.md)
* [Configure CMMC Level 2 Identification and Authentication (IA) controls](configure-cmmc-level-2-identification-and-authentication.md)
-* [Configure CMMC Level 2 additional controls](configure-cmmc-level-2-additional-controls.md)
+* [Configure CMMC Level 2 additional controls](configure-cmmc-level-2-additional-controls.md)
active-directory Configure Cmmc Level 2 Additional Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/configure-cmmc-level-2-additional-controls.md
The following table provides a list of practice statements and objectives, and Azure AD guidance and recommendations.
| CMMC practice statement and objectives | Azure AD guidance and recommendations |
| - | - |
-| AU.L2-3.3.1<br><br>**Practice statement:** Create and retain system audit logs and records to enable monitoring, analysis, investigation, and reporting of unlawful or unauthorized system activity.<br><br>**Objectives:**<br>Determine if:<br>[a.] audit logs (for example, event types to be logged) to enable monitoring, analysis, investigation, and reporting of unlawful or unauthorized system activity are specified;<br>[b.] the content of audit records needed to support monitoring, analysis, investigation, and reporting of unlawful or unauthorized system activity is defined;<br>[c.] audit records are created (generated);<br>[d.] audit records, once created, contain the defined content;<br>[e.] retention requirements for audit records are defined; and<br>[f.] audit records are retained as defined.<br><br>AU.L2-3.3.2<br><br>**Practice statement:** Ensure that the actions of individual system users can be uniquely traced to those users so they can be held accountable for their actions.<br><br>**Objectives:**<br>Determine if:<br>[a.] the content of the audit records needed to support the ability to uniquely trace users to their actions is defined; and<br>[b.] audit records, once created, contain the defined content. | All operations are audited within the Azure AD audit logs. Each audit log entry contains a user's immutable objectID that can be used to uniquely trace an individual system user to each action. You can collect and analyze logs by using a Security Information and Event Management (SIEM) solution such as Microsoft Sentinel. Alternatively, you can use Azure Event Hubs to integrate logs with third-party SIEM solutions to enable monitoring and notification.<br>[Audit activity reports in the Azure Active Directory portal](/azure/active-directory/reports-monitoring/concept-audit-logs.md)<br>[Connect Azure Active Directory data to Microsoft Sentinel](/azure/sentinel/connect-azure-active-directory)<br>[Tutorial: Stream logs to an Azure event hub](/azure/active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md) |
-| AU.L2-3.3.4<br><br>**Practice statement:** Alert if an audit logging process fails.<br><br>**Objectives:**<br>Determine if:<br>[a.] personnel or roles to be alerted if an audit logging process failure is identified;<br>[b.] types of audit logging process failures for which alert will be generated are defined; and<br>[c] identified personnel or roles are alerted in the event of an audit logging process failure. | Azure Service Health notifies you about Azure service incidents so you can take action to mitigate downtime. Configure customizable cloud alerts for Azure Active Directory. <br>[What is Azure Service Health?](/azure/service-health/overview.md)<br>[Three ways to get notified about Azure service issues](https://azure.microsoft.com/blog/three-ways-to-get-notified-about-azure-service-issues/)<br>[Azure Service Health](https://azure.microsoft.com/get-started/azure-portal/service-health/) |
-| AU.L2-3.3.6<br><br>**Practice statement:** Provide audit record reduction and report generation to support on-demand analysis and reporting.<br><br>**Objectives:**<br>Determine if:<br>[a.] an audit record reduction capability that supports on-demand analysis is provided; and<br>[b.] a report generation capability that supports on-demand reporting is provided. | Ensure Azure AD events are included in event logging strategy. You can collect and analyze logs by using a Security Information and Event Management (SIEM) solution such as Microsoft Sentinel. Alternatively, you can use Azure Event Hubs to integrate logs with third-party SIEM solutions to enable monitoring and notification. Use Azure AD entitlement management with access reviews to ensure compliance status of accounts. <br>[Audit activity reports in the Azure Active Directory portal](/azure/active-directory/reports-monitoring/concept-audit-logs.md)<br>[Connect Azure Active Directory data to Microsoft Sentinel](/azure/sentinel/connect-azure-active-directory.md)<br>[Tutorial: Stream logs to an Azure event hub](/azure/active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md) |
-| AU.L2-3.3.8<br><br>**Practice statement:** Protect audit information and audit logging tools from unauthorized access, modification, and deletion.<br><br>**Objectives:**<br>Determine if:<br>[a.] audit information is protected from unauthorized access;<br>[b.] audit information is protected from unauthorized modification;<br>[c.] audit information is protected from unauthorized deletion;<br>[d.] audit logging tools are protected from unauthorized access;<br>[e.] audit logging tools are protected from unauthorized modification; and<br>[f.] audit logging tools are protected from unauthorized deletion.<br><br>AU.L2-3.3.9<br><br>**Practice statement:** Limit management of audit logging functionality to a subset of privileged users.<br><br>**Objectives:**<br>Determine if:<br>[a.] a subset of privileged users granted access to manage audit logging functionality is defined; and<br>[b.] management of audit logging functionality is limited to the defined subset of privileged users. | Azure AD logs are retained by default for 30 days. These logs are unable to modified or deleted and are only accessible to limited set of privileged roles.<br>[Sign-in logs in Azure Active Directory](/azure/active-directory/reports-monitoring/concept-sign-ins.md)<br>[Audit logs in Azure Active Directory](/azure/active-directory/reports-monitoring/concept-audit-logs.md)
+| AU.L2-3.3.1<br><br>**Practice statement:** Create and retain system audit logs and records to enable monitoring, analysis, investigation, and reporting of unlawful or unauthorized system activity.<br><br>**Objectives:**<br>Determine if:<br>[a.] audit logs (for example, event types to be logged) to enable monitoring, analysis, investigation, and reporting of unlawful or unauthorized system activity are specified;<br>[b.] the content of audit records needed to support monitoring, analysis, investigation, and reporting of unlawful or unauthorized system activity is defined;<br>[c.] audit records are created (generated);<br>[d.] audit records, once created, contain the defined content;<br>[e.] retention requirements for audit records are defined; and<br>[f.] audit records are retained as defined.<br><br>AU.L2-3.3.2<br><br>**Practice statement:** Ensure that the actions of individual system users can be uniquely traced to those users so they can be held accountable for their actions.<br><br>**Objectives:**<br>Determine if:<br>[a.] the content of the audit records needed to support the ability to uniquely trace users to their actions is defined; and<br>[b.] audit records, once created, contain the defined content. | All operations are audited within the Azure AD audit logs. Each audit log entry contains a user's immutable objectID that can be used to uniquely trace an individual system user to each action. You can collect and analyze logs by using a Security Information and Event Management (SIEM) solution such as Microsoft Sentinel. Alternatively, you can use Azure Event Hubs to integrate logs with third-party SIEM solutions to enable monitoring and notification.<br>[Audit activity reports in the Azure Active Directory portal](../reports-monitoring/concept-audit-logs.md)<br>[Connect Azure Active Directory data to Microsoft Sentinel](../../sentinel/connect-azure-active-directory.md)<br>[Tutorial: Stream logs to an Azure event hub](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md) |
+| AU.L2-3.3.4<br><br>**Practice statement:** Alert if an audit logging process fails.<br><br>**Objectives:**<br>Determine if:<br>[a.] personnel or roles to be alerted in the event of an audit logging process failure are identified;<br>[b.] types of audit logging process failures for which an alert will be generated are defined; and<br>[c.] identified personnel or roles are alerted in the event of an audit logging process failure. | Azure Service Health notifies you about Azure service incidents so you can take action to mitigate downtime. Configure customizable cloud alerts for Azure Active Directory.<br>[What is Azure Service Health?](../../service-health/overview.md)<br>[Three ways to get notified about Azure service issues](https://azure.microsoft.com/blog/three-ways-to-get-notified-about-azure-service-issues/)<br>[Azure Service Health](https://azure.microsoft.com/get-started/azure-portal/service-health/) |
+| AU.L2-3.3.6<br><br>**Practice statement:** Provide audit record reduction and report generation to support on-demand analysis and reporting.<br><br>**Objectives:**<br>Determine if:<br>[a.] an audit record reduction capability that supports on-demand analysis is provided; and<br>[b.] a report generation capability that supports on-demand reporting is provided. | Ensure Azure AD events are included in the event logging strategy. You can collect and analyze logs by using a Security Information and Event Management (SIEM) solution such as Microsoft Sentinel. Alternatively, you can use Azure Event Hubs to integrate logs with third-party SIEM solutions to enable monitoring and notification. Use Azure AD entitlement management with access reviews to ensure the compliance status of accounts.<br>[Audit activity reports in the Azure Active Directory portal](../reports-monitoring/concept-audit-logs.md)<br>[Connect Azure Active Directory data to Microsoft Sentinel](../../sentinel/connect-azure-active-directory.md)<br>[Tutorial: Stream logs to an Azure event hub](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md) |
+| AU.L2-3.3.8<br><br>**Practice statement:** Protect audit information and audit logging tools from unauthorized access, modification, and deletion.<br><br>**Objectives:**<br>Determine if:<br>[a.] audit information is protected from unauthorized access;<br>[b.] audit information is protected from unauthorized modification;<br>[c.] audit information is protected from unauthorized deletion;<br>[d.] audit logging tools are protected from unauthorized access;<br>[e.] audit logging tools are protected from unauthorized modification; and<br>[f.] audit logging tools are protected from unauthorized deletion.<br><br>AU.L2-3.3.9<br><br>**Practice statement:** Limit management of audit logging functionality to a subset of privileged users.<br><br>**Objectives:**<br>Determine if:<br>[a.] a subset of privileged users granted access to manage audit logging functionality is defined; and<br>[b.] management of audit logging functionality is limited to the defined subset of privileged users. | Azure AD logs are retained by default for 30 days. These logs can't be modified or deleted, and are accessible only to a limited set of privileged roles.<br>[Sign-in logs in Azure Active Directory](../reports-monitoring/concept-sign-ins.md)<br>[Audit logs in Azure Active Directory](../reports-monitoring/concept-audit-logs.md)
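As a minimal illustration of the AU.L2-3.3.2 objective above (uniquely tracing actions to users via the immutable objectID), the sketch below groups exported audit log entries by the actor's object ID. The record shape follows the `initiatedBy.user.id` field of Microsoft Graph `directoryAudit` entries; the sample records themselves are hypothetical.

```python
# Sketch: trace exported Azure AD audit records back to the actor's
# immutable objectID, the field the guidance relies on for AU.L2-3.3.2.
from collections import defaultdict

def actions_by_object_id(records):
    """Map each actor objectID to the list of operations it performed."""
    by_actor = defaultdict(list)
    for rec in records:
        # 'initiatedBy.user.id' carries the immutable objectID in a
        # directoryAudit entry; skip records without a user actor.
        actor = rec.get("initiatedBy", {}).get("user", {}).get("id")
        if actor:
            by_actor[actor].append(rec.get("activityDisplayName"))
    return dict(by_actor)

# Hypothetical sample records, shaped like directoryAudit entries.
sample = [
    {"initiatedBy": {"user": {"id": "aaaa-1111"}},
     "activityDisplayName": "Update user"},
    {"initiatedBy": {"user": {"id": "aaaa-1111"}},
     "activityDisplayName": "Reset password"},
]
print(actions_by_object_id(sample))
# {'aaaa-1111': ['Update user', 'Reset password']}
```

In a real deployment this grouping would typically be a SIEM query (for example in Microsoft Sentinel) rather than ad hoc Python, but the traceability property is the same.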
## Configuration Management (CM)
The following table provides a list of practice statements and objectives, and Azure AD guidance and recommendations.
| CMMC practice statement and objectives | Azure AD guidance and recommendations |
| - | - |
-| CM.L2-3.4.2<br><br>**Practice statement:** Establish and enforce security configuration settings for information technology products employed in organizational systems.<br><br>**Objectives:**<br>Determine if:<br>[a.] security configuration settings for information technology products employed in the system are established and included in the baseline configuration; and<br>[b.] security configuration settings for information technology products employed in the system are enforced. | Adopt a zero-trust security posture. Use conditional access policies to restrict access to compliant devices. Configure policy settings on the device to enforce security configuration settings on the device with MDM solutions such as Microsoft Intune. Microsoft Endpoint Configuration Manager(MECM) or group policy objects can also be considered in hybrid deployments and combined with conditional access require hybrid Azure AD joined device.<br><br>**Zero-trust**<br>[Securing identity with Zero Trust](/security/zero-trust/identity.md)<br><br>**Conditional access**<br>[What is conditional access in Azure AD?](/azure/active-directory/conditional-access/overview.md)<br>[Grant controls in Conditional Access policy](/azure/active-directory/conditional-access/concept-conditional-access-grant.md)<br><br>**Device policies**<br>[What is Microsoft Intune?](/mem/intune/fundamentals/what-is-intune.md)<br>[What is Defender for Cloud Apps?](/cloud-app-security/what-is-cloud-app-security.md)<br>[What is app management in Microsoft Intune?](/mem/intune/apps/app-management.md)<br>[Microsoft Endpoint Manager overview](/mem/endpoint-manager-overview.md) |
-| CM.L2-3.4.5<br><br>**Practice statement:** Define, document, approve, and enforce physical and logical access restrictions associated with changes to organizational systems.<br><br>**Objectives:**<br>Determine if:<br>[a.] physical access restrictions associated with changes to the system are defined;<br>[b.] physical access restrictions associated with changes to the system are documented;<br>[c.] physical access restrictions associated with changes to the system are approved;<br>[d.] physical access restrictions associated with changes to the system are enforced;<br>[e.] logical access restrictions associated with changes to the system are defined;<br>[f.] logical access restrictions associated with changes to the system are documented;<br>[g.] logical access restrictions associated with changes to the system are approved; and<br>[h.] logical access restrictions associated with changes to the system are enforced. | Azure Active Directory (Azure AD) is a cloud-based identity and access management service. Customers don't have physical access to the Azure AD datacenters. As such, each physical access restriction is satisfied by Microsoft and inherited by the customers of Azure AD. Implement Azure AD role based access controls. Eliminate standing privileged access, provide just in time access with approval workflows with Privileged Identity Management.<br>[Overview of Azure Active Directory role-based access control (RBAC)](/azure/active-directory/roles/custom-overview.md)<br>[What is Privileged Identity Management?](/azure/active-directory/privileged-identity-management/pim-configure.md)<br>[Approve or deny requests for Azure AD roles in PIM](/azure/active-directory/privileged-identity-management/azure-ad-pim-approval-workflow.md) |
+| CM.L2-3.4.2<br><br>**Practice statement:** Establish and enforce security configuration settings for information technology products employed in organizational systems.<br><br>**Objectives:**<br>Determine if:<br>[a.] security configuration settings for information technology products employed in the system are established and included in the baseline configuration; and<br>[b.] security configuration settings for information technology products employed in the system are enforced. | Adopt a zero-trust security posture. Use Conditional Access policies to restrict access to compliant devices. Configure policy settings on the device to enforce security configuration settings on the device with MDM solutions such as Microsoft Intune. Microsoft Endpoint Configuration Manager (MECM) or group policy objects can also be considered in hybrid deployments, combined with a Conditional Access policy that requires a hybrid Azure AD joined device.<br><br>**Zero-trust**<br>[Securing identity with Zero Trust](/security/zero-trust/identity.md)<br><br>**Conditional access**<br>[What is conditional access in Azure AD?](../conditional-access/overview.md)<br>[Grant controls in Conditional Access policy](../conditional-access/concept-conditional-access-grant.md)<br><br>**Device policies**<br>[What is Microsoft Intune?](/mem/intune/fundamentals/what-is-intune.md)<br>[What is Defender for Cloud Apps?](/cloud-app-security/what-is-cloud-app-security.md)<br>[What is app management in Microsoft Intune?](/mem/intune/apps/app-management.md)<br>[Microsoft Endpoint Manager overview](/mem/endpoint-manager-overview.md) |
+| CM.L2-3.4.5<br><br>**Practice statement:** Define, document, approve, and enforce physical and logical access restrictions associated with changes to organizational systems.<br><br>**Objectives:**<br>Determine if:<br>[a.] physical access restrictions associated with changes to the system are defined;<br>[b.] physical access restrictions associated with changes to the system are documented;<br>[c.] physical access restrictions associated with changes to the system are approved;<br>[d.] physical access restrictions associated with changes to the system are enforced;<br>[e.] logical access restrictions associated with changes to the system are defined;<br>[f.] logical access restrictions associated with changes to the system are documented;<br>[g.] logical access restrictions associated with changes to the system are approved; and<br>[h.] logical access restrictions associated with changes to the system are enforced. | Azure Active Directory (Azure AD) is a cloud-based identity and access management service. Customers don't have physical access to the Azure AD datacenters. As such, each physical access restriction is satisfied by Microsoft and inherited by the customers of Azure AD. Implement Azure AD role-based access control. Eliminate standing privileged access; provide just-in-time access with approval workflows using Privileged Identity Management.<br>[Overview of Azure Active Directory role-based access control (RBAC)](../roles/custom-overview.md)<br>[What is Privileged Identity Management?](../privileged-identity-management/pim-configure.md)<br>[Approve or deny requests for Azure AD roles in PIM](../privileged-identity-management/azure-ad-pim-approval-workflow.md) |
| CM.L2-3.4.6<br><br>**Practice statement:** Employ the principle of least functionality by configuring organizational systems to provide only essential capabilities.<br><br>**Objectives:**<br>Determine if:<br>[a.] essential system capabilities are defined based on the principle of least functionality; and<br>[b.] the system is configured to provide only the defined essential capabilities. | Configure device management solutions (such as Microsoft Intune) to implement a custom security baseline applied to organizational systems to remove non-essential applications and disable unnecessary services. Leave only the fewest capabilities necessary for the systems to operate effectively. Configure Conditional Access to restrict access to compliant or hybrid Azure AD joined devices.<br>[What is Microsoft Intune](/mem/intune/fundamentals/what-is-intune.md)<br>[Require device to be marked as compliant](../conditional-access/require-managed-devices.md)<br>[Grant controls in Conditional Access policy - Require hybrid Azure AD joined device](../conditional-access/concept-conditional-access-grant.md) |
-| CM.L2-3.4.7<br><br>**Practice statement:** Restrict, disable, or prevent the use of nonessential programs, functions, ports, protocols, and services.<br><br>**Objectives:**<br>Determine if:<br>[a.]essential programs are defined;<br>[b.] the use of nonessential programs is defined;<br>[c.] the use of nonessential programs is restricted, disabled, or prevented as defined;<br>[d.] essential functions are defined;<br>[e.] the use of nonessential functions is defined;<br>[f.] the use of nonessential functions is restricted, disabled, or prevented as defined;<br>[g.] essential ports are defined;<br>[h.] the use of nonessential ports is defined;<br>[i.] the use of nonessential ports is restricted, disabled, or prevented as defined;<br>[j.] essential protocols are defined;<br>[k.] the use of nonessential protocols is defined;<br>[l.] the use of nonessential protocols is restricted, disabled, or prevented as defined;<br>[m.] essential services are defined;<br>[n.] the use of nonessential services is defined; and<br>[o.] the use of nonessential services is restricted, disabled, or prevented as defined. | Use Application Administrator role to delegate authorized use of essential applications. Use App Roles or group claims to manage least privilege access within application. Configure user consent to require admin approval and don't allow group owner consent. Configure Admin consent request workflows to enable users to request access to applications that require admin consent. Use Microsoft Defender for Cloud Apps to identify unsanctioned/unknown application use. Use this telemetry to then determine essential/non-essential apps.<br>[Azure AD built-in roles - Application Administrator](/azure/active-directory/roles/permissions-reference.md)<br>[Azure AD App Roles - App Roles vs. Groups ](/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps.md)<br>[Configure how users consent to applications](/azure/active-directory/manage-apps/configure-user-consent?tabs=azure-portal.md)<br>[Configure group owner consent to apps accessing group data](/azure/active-directory/manage-apps/configure-user-consent-groups?tabs=azure-portal.md)<br>[Configure the admin consent workflow](/azure/active-directory/manage-apps/configure-admin-consent-workflow.md)<br>[What is Defender for Cloud Apps?](/defender-cloud-apps/what-is-defender-for-cloud-apps.d)<br>[Discover and manage Shadow IT tutorial](/defender-cloud-apps/tutorial-shadow-it.md) |
-| CM.L2-3.4.8<br><br>**Practice statement:** Apply deny-by-exception (blocklist) policy to prevent the use of unauthorized software or deny-all, permit-by-exception (allowlist) policy to allow the execution of authorized software.<br><br>**Objectives:**<br>Determine if:<br>[a.] a policy specifying whether allowlist or blocklist is to be implemented is specified;<br>[b.] the software allowed to execute under allowlist or denied use under blocklist is specified; and<br>[c.] allowlist to allow the execution of authorized software or blocklist to prevent the use of unauthorized software is implemented as specified.<br><br>CM.L2-3.4.9<br><br>**Practice statement:** Control and monitor user-installed software.<br><br>**Objectives:**<br>Determine if:<br>[a.] a policy for controlling the installation of software by users is established;<br>[b.] installation of software by users is controlled based on the established policy; and<br>[c.] installation of software by users is monitored. | Configure MDM/configuration management policy to prevent the use of unauthorized software. Configure conditional access grant controls to require compliant or hybrid joined device to incorporate device compliance with MDM/configuration management policy into the conditional access authorization decision.<br>[What is Microsoft Intune](/mem/intune/fundamentals/what-is-intune.md)<br>[Conditional Access - Require compliant or hybrid joined devices](/azure/active-directory/conditional-access/howto-conditional-access-policy-compliant-device.md) |
+| CM.L2-3.4.7<br><br>**Practice statement:** Restrict, disable, or prevent the use of nonessential programs, functions, ports, protocols, and services.<br><br>**Objectives:**<br>Determine if:<br>[a.] essential programs are defined;<br>[b.] the use of nonessential programs is defined;<br>[c.] the use of nonessential programs is restricted, disabled, or prevented as defined;<br>[d.] essential functions are defined;<br>[e.] the use of nonessential functions is defined;<br>[f.] the use of nonessential functions is restricted, disabled, or prevented as defined;<br>[g.] essential ports are defined;<br>[h.] the use of nonessential ports is defined;<br>[i.] the use of nonessential ports is restricted, disabled, or prevented as defined;<br>[j.] essential protocols are defined;<br>[k.] the use of nonessential protocols is defined;<br>[l.] the use of nonessential protocols is restricted, disabled, or prevented as defined;<br>[m.] essential services are defined;<br>[n.] the use of nonessential services is defined; and<br>[o.] the use of nonessential services is restricted, disabled, or prevented as defined. | Use the Application Administrator role to delegate authorized use of essential applications. Use App Roles or group claims to manage least privilege access within the application. Configure user consent to require admin approval and don't allow group owner consent. Configure admin consent request workflows to enable users to request access to applications that require admin consent. Use Microsoft Defender for Cloud Apps to identify unsanctioned/unknown application use. Use this telemetry to then determine essential/non-essential apps.<br>[Azure AD built-in roles - Application Administrator](../roles/permissions-reference.md)<br>[Azure AD App Roles - App Roles vs. Groups](../develop/howto-add-app-roles-in-azure-ad-apps.md)<br>[Configure how users consent to applications](../manage-apps/configure-user-consent.md?tabs=azure-portal)<br>[Configure group owner consent to apps accessing group data](../manage-apps/configure-user-consent-groups.md?tabs=azure-portal)<br>[Configure the admin consent workflow](../manage-apps/configure-admin-consent-workflow.md)<br>[What is Defender for Cloud Apps?](/defender-cloud-apps/what-is-defender-for-cloud-apps.md)<br>[Discover and manage Shadow IT tutorial](/defender-cloud-apps/tutorial-shadow-it.md) |
+| CM.L2-3.4.8<br><br>**Practice statement:** Apply deny-by-exception (blocklist) policy to prevent the use of unauthorized software or deny-all, permit-by-exception (allowlist) policy to allow the execution of authorized software.<br><br>**Objectives:**<br>Determine if:<br>[a.] a policy specifying whether allowlist or blocklist is to be implemented is specified;<br>[b.] the software allowed to execute under allowlist or denied use under blocklist is specified; and<br>[c.] allowlist to allow the execution of authorized software or blocklist to prevent the use of unauthorized software is implemented as specified.<br><br>CM.L2-3.4.9<br><br>**Practice statement:** Control and monitor user-installed software.<br><br>**Objectives:**<br>Determine if:<br>[a.] a policy for controlling the installation of software by users is established;<br>[b.] installation of software by users is controlled based on the established policy; and<br>[c.] installation of software by users is monitored. | Configure MDM/configuration management policy to prevent the use of unauthorized software. Configure conditional access grant controls to require compliant or hybrid joined device to incorporate device compliance with MDM/configuration management policy into the conditional access authorization decision.<br>[What is Microsoft Intune](/mem/intune/fundamentals/what-is-intune.md)<br>[Conditional Access - Require compliant or hybrid joined devices](../conditional-access/howto-conditional-access-policy-compliant-device.md) |
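The deny-by-exception vs. deny-all, permit-by-exception semantics in CM.L2-3.4.8 can be sketched in a few lines. This is an illustration only (the class and application names are hypothetical), not how Intune or GPO implement enforcement:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SoftwarePolicy:
    # mode "allowlist": deny-all, permit-by-exception;
    # mode "blocklist": deny-by-exception.
    mode: str
    entries: frozenset

    def permits(self, app: str) -> bool:
        if self.mode == "allowlist":
            return app in self.entries
        return app not in self.entries

allow = SoftwarePolicy("allowlist", frozenset({"winword.exe", "excel.exe"}))
block = SoftwarePolicy("blocklist", frozenset({"torrent.exe"}))

print(allow.permits("torrent.exe"), block.permits("torrent.exe"))  # False False
```

Note the default stance differs: software the policy never names is denied under an allowlist but permitted under a blocklist, which is why the practice asks you to specify which model is in force.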
## Incident Response (IR)
The following table provides a list of practice statements and objectives, and Azure AD guidance and recommendations.
| CMMC practice statement and objectives | Azure AD guidance and recommendations |
| - | - |
-| IR.L2-3.6.1<br><br>**Practice statement:** Establish an operational incident-handling capability for organizational systems that includes preparation, detection, analysis, containment, recovery, and user response activities.<br><br>**Objectives:**<br>Determine if:<br>[a.] an operational incident-handling capability is established;<br>[b.] the operational incident-handling capability includes preparation;<br>[c.] the operational incident-handling capability includes detection;<br>[d.] the operational incident-handling capability includes analysis;<br>[e.] the operational incident-handling capability includes containment;<br>[f.] the operational incident-handling capability includes recovery; and<br>[g.] the operational incident-handling capability includes user response activities. | Implement incident handling and monitoring capabilities. The audit logs record all configuration changes. Authentication and authorization events are audited within the sign-in logs, and any detected risks are audited in the Identity Protection logs. You can stream each of these logs directly into a SIEM solution, such as Microsoft Sentinel. Alternatively, use Azure Event Hubs to integrate logs with third-party SIEM solutions.<br><br>**Audit events**<br>[Audit activity reports in the Azure Active Directory portal](/azure/active-directory/reports-monitoring/concept-audit-logs.md)<br>[Sign-in activity reports in the Azure Active Directory portal](/azure/active-directory/reports-monitoring/concept-sign-ins.md)<br>[How To: Investigate risk](/azure/active-directory/identity-protection/howto-identity-protection-investigate-risk.md)<br><br>**SIEM integrations**<br>[Microsoft Sentinel : Connect data from Azure Active Directory (Azure AD)](/azure/sentinel/connect-azure-active-directory.md)[Stream to Azure event hub and other SIEMs](/azure/active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md) |
+| IR.L2-3.6.1<br><br>**Practice statement:** Establish an operational incident-handling capability for organizational systems that includes preparation, detection, analysis, containment, recovery, and user response activities.<br><br>**Objectives:**<br>Determine if:<br>[a.] an operational incident-handling capability is established;<br>[b.] the operational incident-handling capability includes preparation;<br>[c.] the operational incident-handling capability includes detection;<br>[d.] the operational incident-handling capability includes analysis;<br>[e.] the operational incident-handling capability includes containment;<br>[f.] the operational incident-handling capability includes recovery; and<br>[g.] the operational incident-handling capability includes user response activities. | Implement incident handling and monitoring capabilities. The audit logs record all configuration changes. Authentication and authorization events are audited within the sign-in logs, and any detected risks are audited in the Identity Protection logs. You can stream each of these logs directly into a SIEM solution, such as Microsoft Sentinel. Alternatively, use Azure Event Hubs to integrate logs with third-party SIEM solutions.<br><br>**Audit events**<br>[Audit activity reports in the Azure Active Directory portal](../reports-monitoring/concept-audit-logs.md)<br>[Sign-in activity reports in the Azure Active Directory portal](../reports-monitoring/concept-sign-ins.md)<br>[How To: Investigate risk](../identity-protection/howto-identity-protection-investigate-risk.md)<br><br>**SIEM integrations**<br>[Microsoft Sentinel: Connect data from Azure Active Directory (Azure AD)](../../sentinel/connect-azure-active-directory.md)<br>[Stream to Azure event hub and other SIEMs](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md) |
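The detection/analysis step in IR.L2-3.6.1 — filtering streamed sign-in telemetry for events that deserve analyst attention — can be sketched as below. The record shape loosely follows Azure AD sign-in log fields (`status.errorCode`, `riskLevelDuringSignIn`), but treat it as an assumed, simplified schema rather than the full Microsoft Graph `signIn` resource:

```python
# Hypothetical, simplified sign-in records; real exports carry many more fields.
def triage(events):
    """Return IDs of sign-in events worth analyst attention:
    failed sign-ins and sign-ins with elevated real-time risk."""
    suspicious = []
    for e in events:
        failed = e.get("status", {}).get("errorCode", 0) != 0
        risky = e.get("riskLevelDuringSignIn", "none") in {"medium", "high"}
        if failed or risky:
            suspicious.append(e["id"])
    return suspicious

events = [
    {"id": "a1", "status": {"errorCode": 0}, "riskLevelDuringSignIn": "none"},
    {"id": "b2", "status": {"errorCode": 50126}, "riskLevelDuringSignIn": "none"},
    {"id": "c3", "status": {"errorCode": 0}, "riskLevelDuringSignIn": "high"},
]
print(triage(events))  # -> ['b2', 'c3']
```

In practice this kind of rule lives in the SIEM (for example, a Microsoft Sentinel analytics rule), not in standalone scripts; the sketch only shows the shape of the detection logic.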
## Maintenance (MA)
The following table provides a list of practice statements and objectives, and Azure AD guidance and recommendations.
| CMMC practice statement and objectives | Azure AD guidance and recommendations |
| - | - |
| MA.L2-3.7.5<br><br>**Practice statement:** Require multifactor authentication to establish nonlocal maintenance sessions via external network connections and terminate such connections when nonlocal maintenance is complete.<br><br>**Objectives:**<br>Determine if:<br>[a.] multifactor authentication is used to establish nonlocal maintenance sessions via external network connections; and<br>[b.] nonlocal maintenance sessions established via external network connections are terminated when nonlocal maintenance is complete. | Accounts assigned administrative rights are targeted by attackers, including accounts used to establish non-local maintenance sessions. Requiring multifactor authentication (MFA) on those accounts is an easy way to reduce the risk of those accounts being compromised.<br>[Conditional Access - Require MFA for administrators](../conditional-access/howto-conditional-access-policy-admin-mfa.md) |
-| MP.L2-3.8.7<br><br>**Practice statement:** Control the use of removable media on system components.<br><br>**Objectives:**<br>Determine if:<br>[a.] the use of removable media on system components is controlled. | Configure device management policies via MDM (such as Microsoft Intune), Microsoft Endpoint Manager (MEM) or group policy objects (GPO) to control the use of removable media on systems. Deploy and manage Removable Storage Access Control using Intune or Group Policy. Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](/azure/active-directory/conditional-access/concept-conditional-access-grant#require-device-to-be-marked-as-compliant.md)<br>[Require hybrid Azure AD joined device](/conditional-access/concept-conditional-access-grant#require-hybrid-azure-ad-joined-device.md)<br><br>**Intune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started.md)<br><br>**Removable storage access control**<br>[Deploy and manage Removable Storage Access Control using Intune](/microsoft-365/security/defender-endpoint/deploy-manage-removable-storage-intune?view=o365-worldwide&preserve-view=true)<br>[Deploy and manage Removable Storage Access Control using group policy](/microsoft-365/security/defender-endpoint/deploy-manage-removable-storage-group-policy?view=o365-worldwide&preserve-view=true) |
+| MP.L2-3.8.7<br><br>**Practice statement:** Control the use of removable media on system components.<br><br>**Objectives:**<br>Determine if:<br>[a.] the use of removable media on system components is controlled. | Configure device management policies via MDM (such as Microsoft Intune), Microsoft Endpoint Manager (MEM) or group policy objects (GPO) to control the use of removable media on systems. Deploy and manage Removable Storage Access Control using Intune or Group Policy. Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](../conditional-access/concept-conditional-access-grant.md#require-device-to-be-marked-as-compliant)<br>[Require hybrid Azure AD joined device](../conditional-access/concept-conditional-access-grant.md#require-hybrid-azure-ad-joined-device)<br><br>**Intune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started.md)<br><br>**Removable storage access control**<br>[Deploy and manage Removable Storage Access Control using Intune](/microsoft-365/security/defender-endpoint/deploy-manage-removable-storage-intune?view=o365-worldwide&preserve-view=true)<br>[Deploy and manage Removable Storage Access Control using group policy](/microsoft-365/security/defender-endpoint/deploy-manage-removable-storage-group-policy?view=o365-worldwide&preserve-view=true) |
## Personnel Security (PS)
The following table provides a list of practice statements and objectives, and Azure AD guidance and recommendations.
| CMMC practice statement and objectives | Azure AD guidance and recommendations |
| - | - |
-| PS.L2-3.9.2<br><br>**Practice statement:** Ensure that organizational systems containing CUI are protected during and after personnel actions such as terminations and transfers.<br><br>**Objectives:**<br>Determine if:<br>[a.] a policy and/or process for terminating system access and any credentials coincident with personnel actions is established;<br>[b.] system access and credentials are terminated consistent with personnel actions such as termination or transfer; and<br>[c] the system is protected during and after personnel transfer actions. | Configure provisioning (including disablement upon termination) of accounts in Azure AD from external HR systems, on-premises Active Directory, or directly in the cloud. Terminate all system access by revoking existing sessions.<br><br>**Account provisioning**<br>[What is identity provisioning with Azure AD?](/azure/active-directory/cloud-sync/what-is-provisioning.md)<br>[Azure AD Connect sync: Understand and customize synchronization](/azure/active-directory/hybrid/how-to-connect-sync-whatis.md)<br>[What is Azure AD Connect cloud sync?](/azure/active-directory/cloud-sync/what-is-cloud-sync.md)<br><br>**Revoke all associated authenticators**<br>[Revoke user access in an emergency in Azure Active Directory](/azure/active-directory/enterprise-users/users-revoke-access.md) |
+| PS.L2-3.9.2<br><br>**Practice statement:** Ensure that organizational systems containing CUI are protected during and after personnel actions such as terminations and transfers.<br><br>**Objectives:**<br>Determine if:<br>[a.] a policy and/or process for terminating system access and any credentials coincident with personnel actions is established;<br>[b.] system access and credentials are terminated consistent with personnel actions such as termination or transfer; and<br>[c.] the system is protected during and after personnel transfer actions. | Configure provisioning (including disablement upon termination) of accounts in Azure AD from external HR systems, on-premises Active Directory, or directly in the cloud. Terminate all system access by revoking existing sessions.<br><br>**Account provisioning**<br>[What is identity provisioning with Azure AD?](../cloud-sync/what-is-provisioning.md)<br>[Azure AD Connect sync: Understand and customize synchronization](../hybrid/how-to-connect-sync-whatis.md)<br>[What is Azure AD Connect cloud sync?](../cloud-sync/what-is-cloud-sync.md)<br><br>**Revoke all associated authenticators**<br>[Revoke user access in an emergency in Azure Active Directory](../enterprise-users/users-revoke-access.md) |
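The PS.L2-3.9.2 process — different revocation sequences for termination vs. transfer — can be sketched as an ordered checklist. The step names here are illustrative labels, not Azure AD or Microsoft Graph API calls:

```python
def offboarding_steps(action):
    """Sketch of a personnel-action process (PS.L2-3.9.2); step names are
    illustrative, not API calls."""
    if action == "termination":
        # Block new sign-ins, kill existing sessions, then remove access.
        return ["disable_account", "revoke_sessions",
                "remove_group_memberships", "delete_authentication_methods"]
    if action == "transfer":
        # The account stays active; existing sessions are revoked so that
        # access tied to the old role ends immediately.
        return ["revoke_sessions", "remove_old_group_memberships",
                "assign_new_role_access"]
    raise ValueError(f"unknown personnel action: {action}")

print(offboarding_steps("transfer"))
# -> ['revoke_sessions', 'remove_old_group_memberships', 'assign_new_role_access']
```

Revoking sessions in both branches mirrors the guidance above: provisioning changes alone don't end access until existing tokens and sessions are invalidated.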
## System and Communications Protection (SC)
The following table provides a list of practice statements and objectives, and Azure AD guidance and recommendations.
| CMMC practice statement and objectives | Azure AD guidance and recommendations |
| - | - |
| SC.L2-3.13.3<br><br>**Practice statement:** Separate user functionality from system management functionality.<br><br>**Objectives:**<br>Determine if:<br>[a.] user functionality is identified;<br>[b.] system management functionality is identified; and<br>[c.] user functionality is separated from system management functionality. | Maintain separate user accounts in Azure Active Directory for everyday productivity use and administrative or system/privileged management. Privileged accounts should be cloud-only or managed accounts and not synchronized from on-premises to protect the cloud environment from on-premises compromise. System/privileged access should only be permitted from a security hardened privileged access workstation (PAW). Configure Conditional Access device filters to restrict access to administrative applications from PAWs that are enabled using Azure Virtual Desktop.<br>[Why are privileged access devices important](/security/compass/privileged-access-devices.md)<br>[Device Roles and Profiles](/security/compass/privileged-access-devices.md)<br>[Filter for devices as a condition in Conditional Access policy](../conditional-access/concept-condition-filters-for-devices.md)<br>[Azure Virtual Desktop](https://azure.microsoft.com/products/virtual-desktop/) |
-| SC.L2-3.13.4<br><br>**Practice statement:** Prevent unauthorized and unintended information transfer via shared system resources.<br><br>**Objectives:**<br>Determine if:<br>[a.] unauthorized and unintended information transfer via shared system resources is prevented. | Configure device management policies via MDM (such as Microsoft Intune), Microsoft Endpoint Manager (MEM) or group policy objects (GPO) to ensure devices are compliant with system hardening procedures. Include compliance with company policy regarding software patches to prevent attackers from exploiting flaws.<br><br>Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](/azure/active-directory/conditional-access/concept-conditional-access-grant.md)<br>[Require hybrid Azure AD joined device](/azure/active-directory/conditional-access/concept-conditional-access-grant.md)<br><br>**InTune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started.md) |
-| SC.L2-3.13.13<br><br>**Practice statement:** Control and monitor the use of mobile code.<br><br>**Objectives:**<br>Determine if:<br>[a.] use of mobile code is controlled; and<br>[b.] use of mobile code is monitored. | Configure device management policies via MDM (such as Microsoft Intune), Microsoft Endpoint Manager (MEM) or group policy objects (GPO) to disable the use of mobile code. Where use of mobile code is required monitor the use with endpoint security such as Microsoft Defender for Endpoint.<br><br>Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](/azure/active-directory/conditional-access/concept-conditional-access-grant.md)<br>[Require hybrid Azure AD joined device](/azure/active-directory/conditional-access/concept-conditional-access-grant.md)<br><br>**InTune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started.md)<br><br>**Defender for Endpoint**<br>[Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint?view=o365-worldwide&preserve-view=true) |
+| SC.L2-3.13.4<br><br>**Practice statement:** Prevent unauthorized and unintended information transfer via shared system resources.<br><br>**Objectives:**<br>Determine if:<br>[a.] unauthorized and unintended information transfer via shared system resources is prevented. | Configure device management policies via MDM (such as Microsoft Intune), Microsoft Endpoint Manager (MEM) or group policy objects (GPO) to ensure devices are compliant with system hardening procedures. Include compliance with company policy regarding software patches to prevent attackers from exploiting flaws.<br><br>Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](../conditional-access/concept-conditional-access-grant.md)<br>[Require hybrid Azure AD joined device](../conditional-access/concept-conditional-access-grant.md)<br><br>**Intune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started.md) |
+| SC.L2-3.13.13<br><br>**Practice statement:** Control and monitor the use of mobile code.<br><br>**Objectives:**<br>Determine if:<br>[a.] use of mobile code is controlled; and<br>[b.] use of mobile code is monitored. | Configure device management policies via MDM (such as Microsoft Intune), Microsoft Endpoint Manager (MEM) or group policy objects (GPO) to disable the use of mobile code. Where use of mobile code is required, monitor the use with endpoint security such as Microsoft Defender for Endpoint.<br><br>Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](../conditional-access/concept-conditional-access-grant.md)<br>[Require hybrid Azure AD joined device](../conditional-access/concept-conditional-access-grant.md)<br><br>**Intune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started.md)<br><br>**Defender for Endpoint**<br>[Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint?view=o365-worldwide&preserve-view=true) |
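The "require compliant or hybrid Azure AD joined device" grant used throughout this section behaves like a "require one of the selected controls" check. A minimal sketch of that semantics follows; the `trustType` value used for hybrid join is an assumption, and real Conditional Access evaluates many more signals:

```python
# Sketch of the Conditional Access "require one of the selected controls" grant.
def grant_access(device, require_one_of=("compliant", "hybridAzureADJoined")):
    satisfied = set()
    if device.get("isCompliant"):
        satisfied.add("compliant")
    if device.get("trustType") == "ServerAd":   # assumed marker for hybrid join
        satisfied.add("hybridAzureADJoined")
    # Access is granted when ANY selected control is satisfied.
    return any(control in satisfied for control in require_one_of)

print(grant_access({"isCompliant": True}))                           # True
print(grant_access({"isCompliant": False, "trustType": "AzureAd"}))  # False
```

The OR semantics matter operationally: a non-compliant device can still pass if it is hybrid joined, so tighten the grant to a single control when that is not acceptable.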
## System and Information Integrity (SI)
The following table provides a list of practice statements and objectives, and Azure AD guidance and recommendations.
| CMMC practice statement and objectives | Azure AD guidance and recommendations |
| - | - |
-| SI.L2-3.14.7<br><br>**Practice statement:**<br><br>**Objectives:** Identify unauthorized use of organizational systems.<br>Determine if:<br>[a.] authorized use of the system is defined; and<br>[b.] unauthorized use of the system is identified. | Consolidate telemetry: Azure AD logs to stream to SIEM, such as Azure Sentinel Configure device management policies via MDM (such as Microsoft Intune), Microsoft Endpoint Manager (MEM), or group policy objects (GPO) to require Intrusion Detection/Protection (IDS/IPS) such as Microsoft Defender for Endpoint is installed and in use. Use telemetry provided by the IDS/IPS to identify unusual activities or conditions related to inbound and outbound communications traffic or unauthorized use.<br><br>Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](/azure/active-directory/conditional-access/concept-conditional-access-grant.md)<br>[Require hybrid Azure AD joined device](/azure/active-directory/conditional-access/concept-conditional-access-grant.md)<br><br>**InTune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started.md)<br><br>**Defender for Endpoint**<br>[Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint?view=o365-worldwide&preserve-view=true) |
+| SI.L2-3.14.7<br><br>**Practice statement:** Identify unauthorized use of organizational systems.<br><br>**Objectives:**<br>Determine if:<br>[a.] authorized use of the system is defined; and<br>[b.] unauthorized use of the system is identified. | Consolidate telemetry: stream Azure AD logs to a SIEM, such as Microsoft Sentinel. Configure device management policies via MDM (such as Microsoft Intune), Microsoft Endpoint Manager (MEM), or group policy objects (GPO) to require that an Intrusion Detection/Prevention System (IDS/IPS), such as Microsoft Defender for Endpoint, is installed and in use. Use telemetry provided by the IDS/IPS to identify unusual activities or conditions related to inbound and outbound communications traffic or unauthorized use.<br><br>Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](../conditional-access/concept-conditional-access-grant.md)<br>[Require hybrid Azure AD joined device](../conditional-access/concept-conditional-access-grant.md)<br><br>**Intune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started.md)<br><br>**Defender for Endpoint**<br>[Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint?view=o365-worldwide&preserve-view=true) |
### Next steps
active-directory Configure Cmmc Level 2 Identification And Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/configure-cmmc-level-2-identification-and-authentication.md
The following table provides a list of practice statements and objectives, and Azure AD guidance and recommendations.
| CMMC practice statement and objectives | Azure AD guidance and recommendations |
| - | - |
-| IA.L2-3.5.3<br><br>**Practice statement:** Use multifactor authentication for local and network access to privileged accounts and for network access to non-privileged accounts. <br><br>**Objectives:**<br>Determine if:<br>[a.] privileged accounts are identified;<br>[b.] multifactor authentication is implemented for local access to privileged accounts;<br>[c.] multifactor authentication is implemented for network access to privileged accounts; and<br>[d.] multifactor authentication is implemented for network access to non-privileged accounts. | The following items are definitions for the terms used for this control area:<li>**Local Access** - Access to an organizational information system by a user (or process acting on behalf of a user) communicating through a direct connection without the use of a network.<li>**Network Access** - Access to an information system by a user (or a process acting on behalf of a user) communicating through a network (for example, local area network, wide area network, Internet).<li>**Privileged User** - A user that's authorized (and therefore, trusted) to perform security-relevant functions that ordinary users aren't authorized to perform.<br><br>Breaking down the previous requirement means:<li>All users are required MFA for network/remote access.<li>Only privileged users are required MFA for local access. If regular user accounts have administrative rights only on their computers, they're not a ΓÇ£privileged accountΓÇ¥ and don't require MFA for local access.<br><br> You're responsible for configuring Conditional Access to require multifactor authentication. 
Enable Azure AD Authentication methods that meet AAL2 and higher.<br>[Grant controls in Conditional Access policy - Azure Active Directory](/azure/active-directory/conditional-access/concept-conditional-access-grant.md)<br>[Achieve NIST authenticator assurance levels with Azure Active Directory](/azure/active-directory/standards/nist-overview.md)<br>[Authentication methods and features - Azure Active Directory](/azure/active-directory/authentication/concept-authentication-methods.md) |
-| IA.L2-3.5.4<br><br>**Practice statement:** Employ replay-resistant authentication mechanisms for network access to privileged and non-privileged accounts.<br><br>**Objectives:**<br>Determine if:<br>[a.] replay-resistant authentication mechanisms are implemented for network account access to privileged and non-privileged accounts. | All Azure AD Authentication methods at AAL2 and above are replay resistant.<br>[Achieve NIST authenticator assurance levels with Azure Active Directory](/azure/active-directory/standards/nist-overview.md) |
+| IA.L2-3.5.3<br><br>**Practice statement:** Use multifactor authentication for local and network access to privileged accounts and for network access to non-privileged accounts.<br><br>**Objectives:**<br>Determine if:<br>[a.] privileged accounts are identified;<br>[b.] multifactor authentication is implemented for local access to privileged accounts;<br>[c.] multifactor authentication is implemented for network access to privileged accounts; and<br>[d.] multifactor authentication is implemented for network access to non-privileged accounts. | The following items are definitions for the terms used for this control area:<li>**Local Access** - Access to an organizational information system by a user (or process acting on behalf of a user) communicating through a direct connection without the use of a network.<li>**Network Access** - Access to an information system by a user (or a process acting on behalf of a user) communicating through a network (for example, local area network, wide area network, Internet).<li>**Privileged User** - A user that's authorized (and therefore, trusted) to perform security-relevant functions that ordinary users aren't authorized to perform.<br><br>Breaking down the previous requirement means:<li>All users require MFA for network/remote access.<li>Only privileged users require MFA for local access. If regular user accounts have administrative rights only on their computers, they're not a "privileged account" and don't require MFA for local access.<br><br>You're responsible for configuring Conditional Access to require multifactor authentication.
Enable Azure AD Authentication methods that meet AAL2 and higher.<br>[Grant controls in Conditional Access policy - Azure Active Directory](../conditional-access/concept-conditional-access-grant.md)<br>[Achieve NIST authenticator assurance levels with Azure Active Directory](./nist-overview.md)<br>[Authentication methods and features - Azure Active Directory](../authentication/concept-authentication-methods.md) |
+| IA.L2-3.5.4<br><br>**Practice statement:** Employ replay-resistant authentication mechanisms for network access to privileged and non-privileged accounts.<br><br>**Objectives:**<br>Determine if:<br>[a.] replay-resistant authentication mechanisms are implemented for network account access to privileged and non-privileged accounts. | All Azure AD Authentication methods at AAL2 and above are replay resistant.<br>[Achieve NIST authenticator assurance levels with Azure Active Directory](./nist-overview.md) |
| IA.L2-3.5.5<br><br>**Practice statement:** Prevent reuse of identifiers for a defined period.<br><br>**Objectives:**<br>Determine if:<br>[a.] a period within which identifiers can't be reused is defined; and<br>[b.] reuse of identifiers is prevented within the defined period. | All user, group, and device object globally unique identifiers (GUIDs) are guaranteed unique and non-reusable for the lifetime of the Azure AD tenant.<br>[user resource type - Microsoft Graph v1.0](/graph/api/resources/user?view=graph-rest-1.0&preserve-view=true)<br>[group resource type - Microsoft Graph v1.0](/graph/api/resources/group?view=graph-rest-1.0&preserve-view=true)<br>[device resource type - Microsoft Graph v1.0](/graph/api/resources/device?view=graph-rest-1.0&preserve-view=true) |
-| IA.L2-3.5.6<br><br>**Practice statement:** Disable identifiers after a defined period of inactivity.<br><br>**Objectives:**<br>Determine if:<br>[a.] a period of inactivity after which an identifier is disabled is defined; and<br>[b.] identifiers are disabled after the defined period of inactivity. | Implement account management automation with Microsoft Graph and Azure AD PowerShell. Use Microsoft Graph to monitor sign-in activity and Azure AD PowerShell to take action on accounts within the required time frame.<br><br>**Determine inactivity**<br>[Manage inactive user accounts in Azure AD](/azure/active-directory/reports-monitoring/howto-manage-inactive-user-accounts.md)<br>[Manage stale devices in Azure AD](/azure/active-directory/devices/manage-stale-devices.md)<br><br>**Remove or disable accounts**<br>[Working with users in Microsoft Graph](/graph/api/resources/users.md)<br>[Get a user](/graph/api/user-get?tabs=http)<br>[Update user](/graph/api/user-update?tabs=http)<br>[Delete a user](/graph/api/user-delete?tabs=http)<br><br>**Work with devices in Microsoft Graph**<br>[Get device](/graph/api/device-get?tabs=http)<br>[Update device](/graph/api/device-update?tabs=http)<br>[Delete device](/graph/api/device-delete?tabs=http)<br><br>**[Use Azure AD PowerShell](/powershell/module/azuread/)**<br>[Get-AzureADUser](/powershell/module/azuread/get-azureaduser.md)<br>[Set-AzureADUser](/powershell/module/azuread/set-azureaduser.md)<br>[Get-AzureADDevice](/powershell/module/azuread/get-azureaddevice.md)<br>[Set-AzureADDevice](/powershell/module/azuread/set-azureaddevice.md) |
+| IA.L2-3.5.6<br><br>**Practice statement:** Disable identifiers after a defined period of inactivity.<br><br>**Objectives:**<br>Determine if:<br>[a.] a period of inactivity after which an identifier is disabled is defined; and<br>[b.] identifiers are disabled after the defined period of inactivity. | Implement account management automation with Microsoft Graph and Azure AD PowerShell. Use Microsoft Graph to monitor sign-in activity and Azure AD PowerShell to take action on accounts within the required time frame.<br><br>**Determine inactivity**<br>[Manage inactive user accounts in Azure AD](../reports-monitoring/howto-manage-inactive-user-accounts.md)<br>[Manage stale devices in Azure AD](../devices/manage-stale-devices.md)<br><br>**Remove or disable accounts**<br>[Working with users in Microsoft Graph](/graph/api/resources/users.md)<br>[Get a user](/graph/api/user-get?tabs=http)<br>[Update user](/graph/api/user-update?tabs=http)<br>[Delete a user](/graph/api/user-delete?tabs=http)<br><br>**Work with devices in Microsoft Graph**<br>[Get device](/graph/api/device-get?tabs=http)<br>[Update device](/graph/api/device-update?tabs=http)<br>[Delete device](/graph/api/device-delete?tabs=http)<br><br>**[Use Azure AD PowerShell](/powershell/module/azuread/)**<br>[Get-AzureADUser](/powershell/module/azuread/get-azureaduser.md)<br>[Set-AzureADUser](/powershell/module/azuread/set-azureaduser.md)<br>[Get-AzureADDevice](/powershell/module/azuread/get-azureaddevice.md)<br>[Set-AzureADDevice](/powershell/module/azuread/set-azureaddevice.md) |
| IA.L2-3.5.7<br><br>**Practice statement:** Enforce a minimum password complexity and change of characters when new passwords are created.<br><br>**Objectives:**<br>Determine if:<br>[a.] password complexity requirements are defined;<br>[b.] password change of character requirements are defined;<br>[c.] minimum password complexity requirements as defined are enforced when new passwords are created; and<br>[d.] minimum password change of character requirements as defined are enforced when new passwords are created.<br><br>IA.L2-3.5.8<br><br>**Practice statement:** Prohibit password reuse for a specified number of generations.<br><br>**Objectives:**<br>Determine if:<br>[a.] the number of generations during which a password cannot be reused is specified; and<br>[b.] reuse of passwords is prohibited during the specified number of generations. | We **strongly encourage** passwordless strategies. This control is only applicable to password authenticators, so removing passwords as an available authenticator renders this control not applicable.<br><br>Per NIST SP 800-63 B Section 5.1.1: Maintain a list of commonly used, expected, or compromised passwords.<br><br>With Azure AD password protection, default global banned password lists are automatically applied to all users in an Azure AD tenant. To support your business and security needs, you can define entries in a custom banned password list. When users change or reset their passwords, these banned password lists are checked to enforce the use of strong passwords.<br>For customers that require strict password character-change, password-reuse, and complexity requirements, use hybrid accounts configured with password hash sync. This action ensures the passwords synchronized to Azure AD inherit the restrictions configured in Active Directory password policies.
Further protect on-premises passwords by configuring on-premises Azure AD Password Protection for Active Directory Domain Services.<br>[NIST Special Publication 800-63 B](https://pages.nist.gov/800-63-3/sp800-63b.html)<br>[NIST Special Publication 800-53 Revision 5 (IA-5, Control enhancement 1)](https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-53r5.pdf)<br>[Eliminate bad passwords using Azure AD password protection](../authentication/concept-password-ban-bad.md)<br>[What is password hash synchronization with Azure AD?](../hybrid/whatis-phs.md) |
-| IA.L2-3.5.9<br><br>**Practice statement:** Allow temporary password use for system logons with an immediate change to a permanent password.<br><br>**Objectives:**<br>Determine if:<br>[a.] an immediate change to a permanent password is required when a temporary password is used for system sign-on. | An Azure AD user initial password is a temporary single use password that once successfully used is immediately required to be changed to a permanent password. Microsoft strongly encourages the adoption of passwordless authentication methods. Users can bootstrap Passwordless authentication methods using Temporary Access Pass (TAP). TAP is a time and use limited passcode issued by an admin that satisfies strong authentication requirements. Use of passwordless authentication along with the time and use limited TAP completely eliminates the use of passwords (and their reuse).<br>[Add or delete users - Azure Active Directory](/azure/active-directory/fundamentals/add-users-azure-active-directory.md)<br>[Configure a Temporary Access Pass in Azure AD to register Passwordless authentication methods](/azure/active-directory/authentication/howto-authentication-temporary-access-pass.md)<br>[Passwordless authentication](/security/business/solutions/passwordless-authentication?ef_id=369464fc2ba818d0bd6507de2cde3d58:G:s&OCID=AIDcmmdamuj0pc_SEM_369464fc2ba818d0bd6507de2cde3d58:G:s&msclkid=369464fc2ba818d0bd6507de2cde3d58) |
+| IA.L2-3.5.9<br><br>**Practice statement:** Allow temporary password use for system logons with an immediate change to a permanent password.<br><br>**Objectives:**<br>Determine if:<br>[a.] an immediate change to a permanent password is required when a temporary password is used for system sign-on. | An Azure AD user initial password is a temporary single use password that once successfully used is immediately required to be changed to a permanent password. Microsoft strongly encourages the adoption of passwordless authentication methods. Users can bootstrap Passwordless authentication methods using Temporary Access Pass (TAP). TAP is a time and use limited passcode issued by an admin that satisfies strong authentication requirements. Use of passwordless authentication along with the time and use limited TAP completely eliminates the use of passwords (and their reuse).<br>[Add or delete users - Azure Active Directory](../fundamentals/add-users-azure-active-directory.md)<br>[Configure a Temporary Access Pass in Azure AD to register Passwordless authentication methods](../authentication/howto-authentication-temporary-access-pass.md)<br>[Passwordless authentication](/security/business/solutions/passwordless-authentication?ef_id=369464fc2ba818d0bd6507de2cde3d58:G:s&OCID=AIDcmmdamuj0pc_SEM_369464fc2ba818d0bd6507de2cde3d58:G:s&msclkid=369464fc2ba818d0bd6507de2cde3d58) |
| IA.L2-3.5.10<br><br>**Practice statement:** Store and transmit only cryptographically protected passwords.<br><br>**Objectives:**<br>Determine if:<br>[a.] passwords are cryptographically protected in storage; and<br>[b.] passwords are cryptographically protected in transit. | **Secret Encryption at Rest**:<br>In addition to disk level encryption, when at rest, secrets stored in the directory are encrypted using the Distributed Key Manager (DKM). The encryption keys are stored in Azure AD core store and in turn are encrypted with a scale unit key. The key is stored in a container that is protected with directory ACLs, for highest privileged users and specific services. The symmetric key is typically rotated every six months. Access to the environment is further protected with operational controls and physical security.<br><br>**Encryption in Transit**:<br>To assure data security, Directory Data in Azure AD is signed and encrypted while in transit between data centers within a scale unit. The data is encrypted and unencrypted by the Azure AD core store tier, which resides inside secured server hosting areas of the associated Microsoft data centers.<br><br>Customer-facing web services are secured with the Transport Layer Security (TLS) protocol.<br>For more information, [download](https://azure.microsoft.com/resources/azure-active-directory-data-security-considerations/) *Data Protection Considerations - Data Security*. On page 15, there are more details.<br>[Demystifying Password Hash Sync (microsoft.com)](https://www.microsoft.com/security/blog/2019/05/30/demystifying-password-hash-sync/)<br>[Azure Active Directory Data Security Considerations](https://aka.ms/aaddatawhitepaper) |
|IA.L2-3.5.11<br><br>**Practice statement:** Obscure feedback of authentication information.<br><br>**Objectives:**<br>Determine if:<br>[a.] authentication information is obscured during the authentication process. | By default, Azure AD obscures all authenticator feedback. |
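The inactivity check behind IA.L2-3.5.6 (determine a defined period of inactivity, then act on accounts that exceed it) can be sketched in Python. The response shape and user names below are hypothetical samples, not real Microsoft Graph output; an actual query requires an authenticated call to `/users?$select=signInActivity`. The sketch only illustrates the cutoff arithmetic.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical sample shaped like a Graph users/signInActivity response;
# a real call needs an authenticated Microsoft Graph client.
users = [
    {"userPrincipalName": "alice@contoso.com",
     "signInActivity": {"lastSignInDateTime": "2022-12-20T08:00:00Z"}},
    {"userPrincipalName": "bob@contoso.com",
     "signInActivity": {"lastSignInDateTime": "2022-09-01T08:00:00Z"}},
]

def inactive_users(users, days, now=None):
    """Return UPNs whose last sign-in is older than the defined inactivity period."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=days)
    stale = []
    for u in users:
        last = datetime.fromisoformat(
            u["signInActivity"]["lastSignInDateTime"].replace("Z", "+00:00"))
        if last < cutoff:
            stale.append(u["userPrincipalName"])
    return stale

# With a fixed "now", only bob exceeds a 90-day inactivity window.
now = datetime(2023, 1, 13, tzinfo=timezone.utc)
print(inactive_users(users, days=90, now=now))  # → ['bob@contoso.com']
```

Accounts returned by such a filter would then be disabled or removed with the Graph update/delete calls or the Azure AD PowerShell cmdlets listed in the table.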
The following table provides a list of practice statement and objectives, and Az
* [Configure Azure Active Directory for CMMC compliance](configure-azure-active-directory-for-cmmc-compliance.md) * [Configure CMMC Level 1 controls](configure-cmmc-level-1-controls.md) * [Configure CMMC Level 2 Access Control (AC) controls](configure-cmmc-level-2-access-control.md)
-* [Configure CMMC Level 2 additional controls](configure-cmmc-level-2-additional-controls.md)
+* [Configure CMMC Level 2 additional controls](configure-cmmc-level-2-additional-controls.md)
active-directory Partner Vu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/partner-vu.md
To learn more about VU Security and its complete set of solutions, visit
To get started with the VU Identity Card, ensure the following prerequisites are met: -- A tenant [configured](/azure/active-directory/verifiable-credentials/verifiable-credentials-configure-tenant)
+- A tenant [configured](./verifiable-credentials-configure-tenant.md)
for Entra Verified ID service. - If you don't have an existing tenant, you can [create an Azure
User flow is specific to your application or website. However if you are using o
## Next steps - [Verifiable credentials admin API](admin-api.md)-- [Request Service REST API issuance specification](issuance-request-api.md)
+- [Request Service REST API issuance specification](issuance-request-api.md)
aks Azure Cni Overlay https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-cni-overlay.md
Use the traditional VNet option when:
The overlay solution has the following limitations today
-* Only available for Linux and not for Windows.
* You can't deploy multiple overlay clusters on the same subnet. * Overlay can be enabled only for new clusters. Existing (already deployed) clusters can't be configured to use overlay. * You can't use Application Gateway as an Ingress Controller (AGIC) for an overlay cluster.
aks Configure Kubenet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-kubenet.md
You can create an AKS cluster using a system-assigned managed identity by runnin
az aks create \ --resource-group myResourceGroup \ --name myAKSCluster \
- --node-count 3 \
--network-plugin kubenet \
- --vnet-subnet-id $SUBNET_ID
+ --service-cidr 10.0.0.0/16 \
+ --dns-service-ip 10.0.0.10 \
+ --pod-cidr 10.244.0.0/16 \
+ --docker-bridge-address 172.17.0.1/16 \
+ --vnet-subnet-id $SUBNET_ID
```
+* The *--service-cidr* is optional. This address is used to assign internal services in the AKS cluster an IP address. This IP address range should be an address space that isn't in use elsewhere in your network environment, including any on-premises network ranges if you connect, or plan to connect, your Azure virtual networks using Express Route or a Site-to-Site VPN connection.
+
+* The *--dns-service-ip* is optional. The address should be the *.10* address of your service IP address range.
+
+* The *--pod-cidr* is optional. This address should be a large address space that isn't in use elsewhere in your network environment. This range includes any on-premises network ranges if you connect, or plan to connect, your Azure virtual networks using Express Route or a Site-to-Site VPN connection.
+ * This address range must be large enough to accommodate the number of nodes that you expect to scale up to. You can't change this address range once the cluster is deployed if you need more addresses for additional nodes.
+ * The pod IP address range is used to assign a */24* address space to each node in the cluster. In the following example, the *--pod-cidr* of *10.244.0.0/16* assigns the first node *10.244.0.0/24*, the second node *10.244.1.0/24*, and the third node *10.244.2.0/24*.
+ * As the cluster scales or upgrades, the Azure platform continues to assign a pod IP address range to each new node.
+
+* The *--docker-bridge-address* is optional. The address lets the AKS nodes communicate with the underlying management platform. This IP address must not be within the virtual network IP address range of your cluster, and shouldn't overlap with other address ranges in use on your network.
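The address arithmetic behind these optional parameters can be illustrated with Python's `ipaddress` module. This is an illustrative sketch of the documented behavior, not how AKS itself computes the assignment:

```python
import ipaddress

# kubenet carves a /24 per node out of the --pod-cidr range.
pod_cidr = ipaddress.ip_network("10.244.0.0/16")
node_subnets = list(pod_cidr.subnets(new_prefix=24))
print(node_subnets[0], node_subnets[1], node_subnets[2])
# → 10.244.0.0/24 10.244.1.0/24 10.244.2.0/24

# A /16 pod CIDR yields 256 per-node /24 ranges, which caps node scale-out.
print(len(node_subnets))  # → 256

# The --dns-service-ip should be the .10 address of the --service-cidr range.
service_cidr = ipaddress.ip_network("10.0.0.0/16")
print(service_cidr.network_address + 10)  # → 10.0.0.10
```

The `len(node_subnets)` figure makes the sizing guidance concrete: the pod CIDR bounds the number of nodes the cluster can ever hold, and it can't be changed after deployment.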
> [!Note] > If you wish to enable an AKS cluster to include a [Calico network policy][calico-network-policies] you can use the following command.
aks Load Balancer Standard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/load-balancer-standard.md
description: Learn how to use a public load balancer with a Standard SKU to expose your services with Azure Kubernetes Service (AKS). ++ Last updated 12/19/2022-- #Customer intent: As a cluster operator or developer, I want to learn how to create a service in AKS that uses an Azure Load Balancer with a Standard SKU.
You can customize different settings for your standard public load balancer at c
> [!IMPORTANT] > Only one outbound IP option (managed IPs, bring your own IP, or IP prefix) can be used at a given time.
+### Change the inbound pool type (PREVIEW)
+
+AKS nodes can be referenced in the load balancer backend pools by either their IP configuration (VMSS-based membership) or by their IP address only. Using IP address-based backend pool membership improves efficiency when updating services and provisioning load balancers, especially at high node counts. Provisioning new clusters with IP-based backend pools and converting existing clusters is now supported. When combined with NAT Gateway or user-defined routing egress types, provisioning of new nodes and services is more performant.
+
+Two different pool membership types are available:
+
+- `nodeIPConfiguration` - legacy VMSS IP configuration based pool membership type
+- `nodeIP` - IP-based membership type
+
+#### Requirements
+
+* The `aks-preview` extension must be at least version 0.5.103.
+* The AKS cluster must be version 1.23 or newer.
+* The AKS cluster must be using standard load balancers and virtual machine scale sets.
+
+#### Limitations
+
+* Clusters using IP based backend pools are limited to 2500 nodes.
++
+#### Install the aks-preview CLI extension
+
+```azurecli-interactive
+# Install the aks-preview extension
+az extension add --name aks-preview
+
+# Update the extension to make sure you have the latest version installed
+az extension update --name aks-preview
+```
+
+#### Register the `IPBasedLoadBalancerPreview` preview feature
+
+To create an AKS cluster with IP based backend pools, you must enable the `IPBasedLoadBalancerPreview` feature flag on your subscription.
+
+Register the `IPBasedLoadBalancerPreview` feature flag by using the `az feature register` command, as shown in the following example:
+
+```azurecli-interactive
+az feature register --namespace "Microsoft.ContainerService" --name "IPBasedLoadBalancerPreview"
+```
+
+It takes a few minutes for the status to show *Registered*. Verify the registration status by using the `az feature list` command:
+
+```azurecli-interactive
+az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/IPBasedLoadBalancerPreview')].{Name:name,State:properties.state}"
+```
+
+When the feature has been registered, refresh the registration of the *Microsoft.ContainerService* resource provider by using the `az provider register` command:
+
+```azurecli-interactive
+az provider register --namespace Microsoft.ContainerService
+```
+
+#### Create a new AKS cluster with IP-based inbound pool membership
+
+```azurecli-interactive
+az aks create \
+ --resource-group myResourceGroup \
+ --name myAKSCluster \
+ --load-balancer-backend-pool-type=nodeIP
+```
+
+#### Update an existing AKS cluster to use IP-based inbound pool membership
+
+> [!WARNING]
+> This operation will cause a temporary disruption to incoming service traffic in the cluster. The impact time will increase with larger clusters that have many nodes.
+
+```azurecli-interactive
+az aks update \
+ --resource-group myResourceGroup \
+ --name myAKSCluster \
+ --load-balancer-backend-pool-type=nodeIP
+```
+ ### Scale the number of managed outbound public IPs Azure Load Balancer provides outbound and inbound connectivity from a virtual network. Outbound rules make it simple to configure network address translation for the public standard load balancer.
aks Node Updates Kured https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-updates-kured.md
helm repo update
kubectl create namespace kured # Install kured in that namespace with Helm 3 (only on Linux nodes, kured is not working on Windows nodes)
-helm install my-release kubereboot/kured --namespace kured --set nodeSelector."kubernetes.io/os"=linux
+helm install my-release kubereboot/kured --namespace kured --set nodeSelector."kubernetes\.io/os"=linux
``` You can also configure additional parameters for `kured`, such as integration with Prometheus or Slack. For more information about additional configuration parameters, see the [kured Helm chart][kured-install].
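The escaped dot added to the `helm install` line above matters because Helm's `--set` syntax treats unescaped dots as map nesting, so the label key `kubernetes.io/os` must be written `kubernetes\.io/os` to survive as a single key. A rough Python sketch of that splitting behavior (a simplification for illustration, not Helm's actual parser):

```python
import re

def set_key_path(key):
    """Split a Helm-style --set key on unescaped dots (simplified sketch)."""
    parts = re.split(r"(?<!\\)\.", key)
    return [p.replace("\\.", ".") for p in parts]

# Unescaped: the label key is wrongly split into nested map keys.
print(set_key_path("nodeSelector.kubernetes.io/os"))
# → ['nodeSelector', 'kubernetes', 'io/os']

# Escaped: the label key stays intact as one key under nodeSelector.
print(set_key_path(r"nodeSelector.kubernetes\.io/os"))
# → ['nodeSelector', 'kubernetes.io/os']
```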
aks Support Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/support-policies.md
Although you can sign in to and change agent nodes, doing this operation is disc
You may only customize the NSGs on custom subnets. You may not customize NSGs on managed subnets or at the NIC level of the agent nodes. AKS has egress requirements to specific endpoints; to control egress and ensure the necessary connectivity, see [limit egress traffic](limit-egress-traffic.md). For ingress, the requirements are based on the applications you have deployed to the cluster.
-## Stopped or de-allocated clusters
+## Stopped, de-allocated, and "Not Ready" nodes
-As stated earlier, manually de-allocating all cluster nodes via the IaaS APIs/CLI/portal renders the cluster out of support. The only supported way to stop/de-allocate all nodes is to [stop the AKS cluster](start-stop-cluster.md#stop-an-aks-cluster), which preserves the cluster state for up to 12 months.
+If you do not need your AKS workloads to run continuously, you can [stop the AKS cluster](start-stop-cluster.md#stop-an-aks-cluster), which stops all node pools and the control plane, and start it again when needed. When you stop a cluster using the `az aks stop` command, the cluster state will be preserved for up to 12 months. After 12 months, the cluster state and all of its resources will be deleted.
-Clusters that are stopped for more than 12 months will no longer preserve state.
+Manually de-allocating all cluster nodes via the IaaS APIs/CLI/portal is not a supported way to stop an AKS cluster or node pool. The cluster will be considered out of support and will be stopped by AKS after 30 days. The cluster will then be subject to the same 12-month preservation policy as a correctly stopped cluster.
-Clusters that are de-allocated outside of the AKS APIs have no state preservation guarantees. The control planes for clusters in this state will be archived after 30 days, and deleted after 12 months.
+Clusters with 0 "Ready" nodes (or all "Not Ready") and 0 Running VMs will be stopped after 30 days.
AKS reserves the right to archive control planes that have been configured out of support guidelines for extended periods equal to and beyond 30 days. AKS maintains backups of cluster etcd metadata and can readily reallocate the cluster. This reallocation can be initiated by any PUT operation bringing the cluster back into support, such as an upgrade or scale to active agent nodes.
-If your subscription is suspended or deleted, your cluster's control plane and state will be deleted after 90 days.
+All clusters in a suspended or deleted subscription will be stopped immediately and deleted after 30 days.
## Unsupported alpha and beta Kubernetes features
api-management Api Management Howto Configure Custom Domain Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-configure-custom-domain-gateway.md
When you provision a [self-hosted Azure API Management gateway](self-hosted-gateway-overview.md), it is not assigned a host name and has to be referenced by its IP address. This article shows how to map an existing custom DNS name (also referred to as hostname) to a self-hosted gateway. + ## Prerequisites To perform the steps described in this article, you must have:
api-management Api Management Howto Provision Self Hosted Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-provision-self-hosted-gateway.md
Provisioning a gateway resource in your Azure API Management instance is a prerequisite for deploying a self-hosted gateway. This article walks through the steps to provision a gateway resource in API Management. + ## Prerequisites Complete the following quickstart: [Create an Azure API Management instance](get-started-create-service-instance.md)
api-management Identity Provider Adal Retirement Sep 2025 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/breaking-changes/identity-provider-adal-retirement-sep-2025.md
Your service is impacted by this change if:
## What is the deadline for the change?
-On 30 September, 2025, these identity providers will stop functioning. To avoid disruption of your developer portal, you need to update your Azure AD applications and identity provider configuration in Azure API Management by that date. Your developer portal might be at a security risk after Microsoft ADAL support ends in June 1, 2023. Learn more in [the official announcement](/azure/active-directory/fundamentals/whats-new#adal-end-of-support-announcement).
+On 30 September, 2025, these identity providers will stop functioning. To avoid disruption of your developer portal, you need to update your Azure AD applications and identity provider configuration in Azure API Management by that date. Your developer portal might be at a security risk after Microsoft ADAL support ends in June 1, 2023. Learn more in [the official announcement](../../active-directory/fundamentals/whats-new.md#adal-end-of-support-announcement).
Developer portal sign-in and sign-up with Azure AD or Azure AD B2C will stop working past 30 September, 2025 if you don't update your ADAL-based Azure AD or Azure AD B2C identity providers. This new authentication method is more secure, as it relies on the OAuth 2.0 authorization code flow with PKCE and uses an up-to-date software library.
If you have questions, get answers from community experts in [Microsoft Q&A](htt
## Next steps
-See all [upcoming breaking changes and feature retirements](overview.md).
+See all [upcoming breaking changes and feature retirements](overview.md).
api-management Devops Api Development Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/devops-api-development-templates.md
Review [Automated API deployments with APIOps][28] in the Azure Architecture Cen
[26]: https://github.com/microsoft/api-guidelines/blob/vNext/azure/Guidelines.md [27]: https://github.com/Azure/azure-api-style-guide [28]: /azure/architecture/example-scenario/devops/automated-api-deployments-apiops
-[29]: /azure/api-management/api-management-howto-properties
-[30]: /azure/api-management/backends
+[29]: ./api-management-howto-properties.md
+[30]: ./backends.md
api-management How To Configure Cloud Metrics Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-configure-cloud-metrics-logs.md
This article provides details for configuring cloud metrics and logs for the [se
The self-hosted gateway has to be associated with an API management service and requires outbound TCP/IP connectivity to Azure on port 443. The gateway leverages the outbound connection to send telemetry to Azure, if configured to do so. + ## Metrics By default, the self-hosted gateway emits a number of metrics through [Azure Monitor](https://azure.microsoft.com/services/monitor/), same as the managed gateway [in the cloud](api-management-howto-use-azure-monitor.md).
api-management How To Configure Local Metrics Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-configure-local-metrics-logs.md
This article provides details for configuring local metrics and logs for the [self-hosted gateway](./self-hosted-gateway-overview.md) deployed on a Kubernetes cluster. For configuring cloud metrics and logs, see [this article](how-to-configure-cloud-metrics-logs.md). + ## Metrics The self-hosted gateway supports [StatsD](https://github.com/statsd/statsd), which has become a unifying protocol for metrics collection and aggregation. This section walks through the steps for deploying StatsD to Kubernetes, configuring the gateway to emit metrics via StatsD, and using [Prometheus](https://prometheus.io/) to monitor the metrics.
api-management How To Deploy Self Hosted Gateway Azure Arc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-deploy-self-hosted-gateway-azure-arc.md
Deploying the API Management gateway on an Azure Arc-enabled Kubernetes cluster
> [!NOTE] > You can also deploy the self-hosted gateway [directly to Kubernetes](./how-to-deploy-self-hosted-gateway-azure-kubernetes-service.md). + ## Prerequisites * [Connect your Kubernetes cluster](../azure-arc/kubernetes/quickstart-connect-cluster.md) within a supported Azure Arc region.
api-management How To Deploy Self Hosted Gateway Azure Kubernetes Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-deploy-self-hosted-gateway-azure-kubernetes-service.md
This article provides the steps for deploying self-hosted gateway component of A
> [!NOTE] > You can also deploy self-hosted gateway to an [Azure Arc-enabled Kubernetes cluster](how-to-deploy-self-hosted-gateway-azure-arc.md) as a [cluster extension](../azure-arc/kubernetes/extensions.md). + ## Prerequisites - [Create an Azure API Management instance](get-started-create-service-instance.md)
api-management How To Deploy Self Hosted Gateway Docker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-deploy-self-hosted-gateway-docker.md
This article provides the steps for deploying self-hosted gateway component of A
> [!NOTE] > Hosting self-hosted gateway in Docker is best suited for evaluation and development use cases. Kubernetes is recommended for production use. Learn how to [deploy with Helm](how-to-deploy-self-hosted-gateway-kubernetes-helm.md) or using [deployment YAML file](how-to-deploy-self-hosted-gateway-kubernetes.md) to learn how to deploy self-hosted gateway to Kubernetes. + ## Prerequisites - Complete the following quickstart: [Create an Azure API Management instance](get-started-create-service-instance.md)
api-management How To Deploy Self Hosted Gateway Kubernetes Helm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-deploy-self-hosted-gateway-kubernetes-helm.md
This article provides the steps for deploying self-hosted gateway component of A
> [!NOTE] > You can also deploy self-hosted gateway to an [Azure Arc-enabled Kubernetes cluster](how-to-deploy-self-hosted-gateway-azure-arc.md) as a [cluster extension](../azure-arc/kubernetes/extensions.md). + ## Prerequisites - Create a Kubernetes cluster, or have access to an existing one.
api-management How To Deploy Self Hosted Gateway Kubernetes Opentelemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-deploy-self-hosted-gateway-kubernetes-opentelemetry.md
You learn how to:
> * Generate metrics by consuming APIs on the self-hosted gateway. > * Use the metrics from the OpenTelemetry Collector. + ## Prerequisites - [Create an Azure API Management instance](get-started-create-service-instance.md)
api-management How To Deploy Self Hosted Gateway Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-deploy-self-hosted-gateway-kubernetes.md
This article describes the steps for deploying the self-hosted gateway component
> [!NOTE] > You can also deploy self-hosted gateway to an [Azure Arc-enabled Kubernetes cluster](how-to-deploy-self-hosted-gateway-azure-arc.md) as a [cluster extension](../azure-arc/kubernetes/extensions.md). + ## Prerequisites - Complete the following quickstart: [Create an Azure API Management instance](get-started-create-service-instance.md).
api-management How To Self Hosted Gateway On Kubernetes In Production https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-self-hosted-gateway-on-kubernetes-in-production.md
This article provides guidance on how to run [self-hosted gateway](./self-hosted
[!INCLUDE [preview](./includes/preview/preview-callout-self-hosted-gateway-deprecation.md)] + ## Access token Without a valid access token, a self-hosted gateway can't access and download configuration data from the endpoint of the associated API Management service. The access token can be valid for a maximum of 30 days. It must be regenerated, and the cluster configured with a fresh token, either manually or via automation before it expires.
api-management Self Hosted Gateway Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/self-hosted-gateway-migration-guide.md
This article explains how to migrate existing self-hosted gateway deployments to
> [!IMPORTANT] > Support for Azure API Management self-hosted gateway version 0 and version 1 container images is ending on 1 October 2023, along with its corresponding Configuration API v1. [Learn more in our deprecation documentation](./breaking-changes/self-hosted-gateway-v0-v1-retirement-oct-2023.md) + ## What's new? As we strive to make it easier for customers to deploy our self-hosted gateway, we've **introduced a new configuration API** that removes the dependency on Azure Storage, unless you're using [API inspector](api-management-howto-api-inspector.md) or quotas.
api-management Self Hosted Gateway Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/self-hosted-gateway-overview.md
This article explains how the self-hosted gateway feature of Azure API Managemen
For an overview of the features across the various gateway offerings, see [API gateway in API Management](api-management-gateways-overview.md#feature-comparison-managed-versus-self-hosted-gateways). + ## Hybrid and multi-cloud API management The self-hosted gateway feature expands API Management support for hybrid and multi-cloud environments and enables organizations to efficiently and securely manage APIs hosted on-premises and across clouds from a single API Management service in Azure.
app-service Configure Language Php https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-php.md
az webapp list-runtimes --os linux | grep PHP
::: zone pivot="platform-windows"
-Run the following command in the [Cloud Shell](https://shell.azure.com) to set the PHP version to 8.0:
+Run the following command in the [Cloud Shell](https://shell.azure.com) to set the PHP version to 7.4:
```azurecli-interactive
-az webapp config set --resource-group <resource-group-name> --name <app-name> --php-version 8.0
+az webapp config set --resource-group <resource-group-name> --name <app-name> --php-version 7.4
``` ::: zone-end
application-gateway Tutorial Protect Application Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-protect-application-gateway.md
This article helps you create an Azure Application Gateway with a DDoS protected virtual network. Azure DDoS Protection Standard enables enhanced DDoS mitigation capabilities such as adaptive tuning, attack alert notifications, and monitoring to protect your application gateways from large scale DDoS attacks. > [!IMPORTANT]
-> Azure DDoS protection Standard incurs a cost per public IP address in the virtual network where you enable the service. Ensure you delete the resources in this tutorial if you aren't using the resources in the future. For information about pricing, see [Azure DDoS Protection Pricing](https://azure.microsoft.com/pricing/details/ddos-protection/). For more information about Azure DDoS protection, see [What is Azure DDoS Protection?](/azure/ddos-protection/ddos-protection-overview).
+> Azure DDoS protection Standard incurs a cost per public IP address in the virtual network where you enable the service. Ensure you delete the resources in this tutorial if you aren't using the resources in the future. For information about pricing, see [Azure DDoS Protection Pricing](https://azure.microsoft.com/pricing/details/ddos-protection/). For more information about Azure DDoS protection, see [What is Azure DDoS Protection?](../ddos-protection/ddos-protection-overview.md).
In this tutorial, you learn how to:
To delete the resource group:
Advance to the next article to learn how to: > [!div class="nextstepaction"]
-> [Configure an application gateway with TLS termination using the Azure portal](create-ssl-portal.md)
+> [Configure an application gateway with TLS termination using the Azure portal](create-ssl-portal.md)
attestation Custom Tcb Baseline Enforcement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/custom-tcb-baseline-enforcement.md
# Custom TCB baseline enforcement for SGX attestation
-Microsoft Azure Attestation is a unified solution for attesting different types of Trusted Execution Environments (TEEs) such as [Intel® Software Guard Extensions](https://www.intel.com/content/www/us/en/architecture-and-technology/software-guard-extensions.html) (SGX) enclaves. While attesting SGX enclaves, Azure Attestation validates the evidence against Azure default Trusted Computing Base (TCB) baseline. The default TCB baseline is provided by an Azure service named [Trusted Hardware Identity Management](/azure/security/fundamentals/trusted-hardware-identity-management) (THIM) and includes collateral fetched from Intel like certificate revocation lists (CRLs), Intel certificates, Trusted Computing Base (TCB) information and Quoting Enclave identity (QEID). The default TCB baseline from THIM lags the latest baseline offered by Intel and is expected to remain at tcbEvaluationDataNumber 10.
+Microsoft Azure Attestation is a unified solution for attesting different types of Trusted Execution Environments (TEEs) such as [Intel® Software Guard Extensions](https://www.intel.com/content/www/us/en/architecture-and-technology/software-guard-extensions.html) (SGX) enclaves. While attesting SGX enclaves, Azure Attestation validates the evidence against Azure default Trusted Computing Base (TCB) baseline. The default TCB baseline is provided by an Azure service named [Trusted Hardware Identity Management](../security/fundamentals/trusted-hardware-identity-management.md) (THIM) and includes collateral fetched from Intel like certificate revocation lists (CRLs), Intel certificates, Trusted Computing Base (TCB) information and Quoting Enclave identity (QEID). The default TCB baseline from THIM lags the latest baseline offered by Intel and is expected to remain at tcbEvaluationDataNumber 10.
-The custom TCB baseline enforcement feature in Azure Attestation will enable you to perform SGX attestation against a desired TCB baseline, as opposed to the Azure default TCB baseline which is applied across [Azure Confidential Computing](/azure/confidential-computing/) (ACC) fleet today.
+The custom TCB baseline enforcement feature in Azure Attestation will enable you to perform SGX attestation against a desired TCB baseline, as opposed to the Azure default TCB baseline which is applied across [Azure Confidential Computing](../confidential-computing/index.yml) (ACC) fleet today.
## Why use custom TCB baseline enforcement feature?
Minimum PSW Windows version: "2.7.101.2"
### New users
-1. Create an attestation provider using Azure portal experience. [Details here](/azure/attestation/quickstart-portal#create-and-configure-the-provider-with-unsigned-policies)
+1. Create an attestation provider using Azure portal experience. [Details here](./quickstart-portal.md#create-and-configure-the-provider-with-unsigned-policies)
-2. Go to overview page and view the current default policy of the attestation provider. [Details here](/azure/attestation/quickstart-portal#view-an-attestation-policy)
+2. Go to overview page and view the current default policy of the attestation provider. [Details here](./quickstart-portal.md#view-an-attestation-policy)
3. Click on **View current and available TCB baselines for attestation**, view **Available TCB baselines**, identify the desired TCB identifier and click Cancel
Minimum PSW Windows version: "2.7.101.2"
Shared provider users need to migrate to custom providers to be able to perform attestation against custom TCB baseline
-1. Create an attestation provider using Azure portal experience. [Details here](/azure/attestation/quickstart-portal#create-and-configure-the-provider-with-unsigned-policies)
+1. Create an attestation provider using Azure portal experience. [Details here](./quickstart-portal.md#create-and-configure-the-provider-with-unsigned-policies)
-2. Go to overview page and view the current default policy of the attestation provider. [Details here](/azure/attestation/quickstart-portal#view-an-attestation-policy)
+2. Go to overview page and view the current default policy of the attestation provider. [Details here](./quickstart-portal.md#view-an-attestation-policy)
3. Click on **View current and available TCB baselines for attestation**, view **Available TCB baselines**, identify the desired TCB identifier and click Cancel
Shared provider users need to migrate to custom providers to be able to perform
### Existing custom provider users
-1. Go to overview page and view the current default policy of the attestation provider. [Details here](/azure/attestation/quickstart-portal#view-an-attestation-policy)
+1. Go to overview page and view the current default policy of the attestation provider. [Details here](./quickstart-portal.md#view-an-attestation-policy)
2. Click on **View current and available TCB baselines for attestation**, view **Available TCB baselines**, identify the desired TCB identifier and click Cancel
c:[type=="x-ms-attestation-type"] => issue(type="tee", value=c.value);
- If the PSW version of the ACC node is lower than the minimum PSW version of the TCB baseline configured in the SGX attestation policy, attestation scenarios will fail
- If the PSW version of the ACC node is greater than or equal to the minimum PSW version of the TCB baseline configured in the SGX attestation policy, attestation scenarios will pass
- For customers who do not configure a custom TCB baseline in the attestation policy, attestation will be performed against the Azure default TCB baseline
- For customers using an attestation policy without a configurationrules section, attestation will be performed against the Azure default TCB baseline
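To make that last point concrete, the following is a hedged sketch of where a custom TCB baseline is pinned inside an attestation policy. The `tcbidentifier` value and the exact rule syntax are illustrative assumptions (the issuancerules line mirrors the rule shown earlier in this article); consult the attestation policy reference for the authoritative grammar.

```
version = 1.1;
configurationrules
{
    // Illustrative: pin attestation to the TCB baseline with identifier "10"
    => issueproperty(type="x-ms-sgx-tcbidentifier", value="10");
};
authorizationrules
{
    => permit();
};
issuancerules
{
    c:[type=="x-ms-attestation-type"] => issue(type="tee", value=c.value);
};
```

Omitting the configurationrules section falls back to the Azure default TCB baseline, as described above.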
automation Extension Based Hybrid Runbook Worker Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/extension-based-hybrid-runbook-worker-install.md
You can delete an empty Hybrid Runbook Worker group from the portal.
## Automatic upgrade of extension
-Hybrid Worker extension supports [Automatic upgrade](/azure/virtual-machines/automatic-extension-upgrade) of minor versions by default. We recommend that you enable Automatic upgrades to take advantage of any security or feature updates without manual overhead. However, to prevent the extension from automatically upgrading (for example, if there is a strict change windows and can only be updated at specific time), you can opt out of this feature by setting the `enableAutomaticUpgrade`property in ARM, Bicep template, PowerShell cmdlets to *false*. Set the same property to *true* whenever you want to re-enable the Automatic upgrade.
+Hybrid Worker extension supports [Automatic upgrade](../virtual-machines/automatic-extension-upgrade.md) of minor versions by default. We recommend that you enable automatic upgrades to take advantage of any security or feature updates without manual overhead. However, to prevent the extension from automatically upgrading (for example, if there is a strict change window and the extension can only be updated at a specific time), you can opt out of this feature by setting the `enableAutomaticUpgrade` property in the ARM template, Bicep template, or PowerShell cmdlets to *false*. Set the same property to *true* whenever you want to re-enable automatic upgrade.
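As a hedged illustration of where that flag sits in a Bicep template (the resource name, parent symbol, and API version here are placeholders for this sketch, not the exact template from this article):

```Bicep
// Sketch only: disable automatic minor-version upgrade for the Hybrid Worker extension
// on an Arc-enabled machine. 'arcMachine' and 'location' are assumed to be declared elsewhere.
resource hybridWorkerExtension 'Microsoft.HybridCompute/machines/extensions@2022-03-10' = {
  name: 'HybridWorkerExtension'
  parent: arcMachine
  location: location
  properties: {
    publisher: 'Microsoft.Azure.Automation.HybridWorker'
    type: 'HybridWorkerForWindows'
    enableAutomaticUpgrade: false // set to true to re-enable automatic upgrade
  }
}
```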
```powershell
$extensionType = "HybridWorkerForLinux/HybridWorkerForWindows"
New-AzConnectedMachineExtension -ResourceGroupName <VMResourceGroupName> -Locati
#### [Bicep template](#tab/bicep-template)
-You can use the Bicep template to create a new Hybrid Worker group, create a new Azure Windows VM and add it to an existing Hybrid Worker Group. Learn more about [Bicep](/azure/azure-resource-manager/bicep/overview)
+You can use the Bicep template to create a new Hybrid Worker group, create a new Azure Windows VM and add it to an existing Hybrid Worker Group. Learn more about [Bicep](../azure-resource-manager/bicep/overview.md)
```Bicep
param automationAccount string
Using [VM insights](../azure-monitor/vm/vminsights-overview.md), you can monitor
- To learn about Azure VM extensions, see [Azure VM extensions and features for Windows](../virtual-machines/extensions/features-windows.md) and [Azure VM extensions and features for Linux](../virtual-machines/extensions/features-linux.md).
- To learn about VM extensions for Arc-enabled servers, see [VM extension management with Azure Arc-enabled servers](../azure-arc/servers/manage-vm-extensions.md).
- To learn about VM extensions for Arc-enabled VMware vSphere VMs, see [Manage VMware VMs in Azure through Arc-enabled VMware vSphere (preview)](../azure-arc/vmware-vsphere/manage-vmware-vms-in-azure.md).
automation Start Stop Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/start-stop-vm.md
# Troubleshoot Start/Stop VMs during off-hours issues

> [!NOTE]
-> Start/Stop VM during off-hours, version 1 is deprecated and unavailable in the marketplace now. We recommend that you start using [version 2](/azure/azure-functions/start-stop-vms/overview), which is now generally available. The new version offers all existing capabilities and provides new features, such as multi-subscription support from a single Start/Stop instance. If you have the version 1 solution already deployed, you can still use the feature, and we will provide support until further announcement.
+> Start/Stop VM during off-hours, version 1 is deprecated and unavailable in the marketplace now. We recommend that you start using [version 2](../../azure-functions/start-stop-vms/overview.md), which is now generally available. The new version offers all existing capabilities and provides new features, such as multi-subscription support from a single Start/Stop instance. If you have the version 1 solution already deployed, you can still use the feature, and we will provide support until further announcement.
This article provides information on troubleshooting and resolving issues that arise when you deploy the Azure Automation Start/Stop VMs during off-hours feature on your VMs.
If you don't see your problem here or you can't resolve your issue, try one of t
* Get answers from Azure experts through [Azure Forums](https://azure.microsoft.com/support/forums/). * Connect with [@AzureSupport](https://twitter.com/azuresupport), the official Microsoft Azure account for improving customer experience. Azure Support connects the Azure community to answers, support, and experts.
* File an Azure support incident. Go to the [Azure support site](https://azure.microsoft.com/support/options/), and select **Get Support**.
azure-app-configuration Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api.md
Last updated 11/28/2022
# Azure App Configuration Data Plane REST API
-The documentation on the [control plane](/azure/azure-resource-manager/management/control-plane-and-data-plane#control-plane) REST API for Azure App Configuration is available in the [Azure REST documentation](/rest/api/appconfiguration/). The following reference pages describe the [data plane](/azure/azure-resource-manager/management/control-plane-and-data-plane#data-plane) REST API for Azure App Configuration. The data plane REST API is available at the endpoint of an App Configuration store, for example, `https://{store-name}.azconfig.io`.
+The documentation on the [control plane](../azure-resource-manager/management/control-plane-and-data-plane.md#control-plane) REST API for Azure App Configuration is available in the [Azure REST documentation](/rest/api/appconfiguration/). The following reference pages describe the [data plane](../azure-resource-manager/management/control-plane-and-data-plane.md#data-plane) REST API for Azure App Configuration. The data plane REST API is available at the endpoint of an App Configuration store, for example, `https://{store-name}.azconfig.io`.
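For example, a minimal data plane request against that endpoint looks like the following sketch; the `api-version` and `Accept` header shown are illustrative, so see the reference pages below for the exact parameters.

```http
GET https://{store-name}.azconfig.io/kv?api-version=1.0 HTTP/1.1
Accept: application/vnd.microsoft.appconfig.kvset+json;charset=utf-8
```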
## Resources
The documentation on the [control plane](/azure/azure-resource-manager/managemen
## Development

- [Fiddler](./rest-api-fiddler.md)
- [Postman](./rest-api-postman.md)
azure-arc Validation Program https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/validation-program.md
To see how all Azure Arc-enabled components are validated, see [Validation progr
|Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL server version |
|--|--|--|--|--|
|HPE Superdome Flex 280|1.20.0|1.8.0_2022-06-14|16.0.41.7339|12.3 (Ubuntu 12.3-1)|
-|HPE Apollo 4200 Gen10 Plus (directly connected mode) |1.7.18 <sup>*</sup>|1.11.0_2022-09-13 |16.0.312.4243|12.3 (Ubuntu 12.3-1)|
-|HPE Apollo 4200 Gen10 Plus (indirectly connected mode) |1.22.6 <sup>*</sup>|v1.10.0_2022-08-09 |16.0.312.4243|12.3 (Ubuntu 12.3-1)|
-
-<sup>*</sup>Azure Kubernetes Service (AKS) on Azure Stack HCI
+|HPE Apollo 4200 Gen10 Plus | 1.22.6 | v1.11.0_2022-09-13 |16.0.312.4243|12.3 (Ubuntu 12.3-1)|
### Kublr
azure-arc Azure Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/azure-rbac.md
Follow the instructions to sign in again. An error message states that you're su
## Configure just-in-time cluster access with Azure AD
-Another option for cluster access control is to use [Privileged Identity Management (PIM)](/azure/active-directory/privileged-identity-management/pim-configure) for just-in-time requests.
+Another option for cluster access control is to use [Privileged Identity Management (PIM)](../../active-directory/privileged-identity-management/pim-configure.md) for just-in-time requests.
>[!NOTE]
-> [Azure AD PIM](/azure/active-directory/privileged-identity-management/pim-configure) is an Azure AD Premium capability that requires a Premium P2 SKU. For more on Azure AD SKUs, see the [pricing guide](https://azure.microsoft.com/pricing/details/active-directory/).
+> [Azure AD PIM](../../active-directory/privileged-identity-management/pim-configure.md) is an Azure AD Premium capability that requires a Premium P2 SKU. For more on Azure AD SKUs, see the [pricing guide](https://azure.microsoft.com/pricing/details/active-directory/).
To configure just-in-time access requests for your cluster, complete the following steps:
az connectedk8s enable-features -n <clusterName> -g <resourceGroupName> --featur
## Next steps

- Securely connect to the cluster by using [Cluster Connect](cluster-connect.md).
- Read about the [architecture of Azure RBAC on Arc-enabled Kubernetes](conceptual-azure-rbac.md).
azure-arc Tutorial Use Gitops Flux2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-use-gitops-flux2.md
False whl k8s-extension C:\Users\somename\.azure\c
* `Microsoft.KubernetesConfiguration/extensions`
* `Microsoft.KubernetesConfiguration/fluxConfigurations`
-* [Registration](/azure/azure-resource-manager/management/resource-providers-and-types#azure-portal) of the following Azure resource providers:
+* [Registration](../../azure-resource-manager/management/resource-providers-and-types.md#azure-portal) of the following Azure resource providers:
  * Microsoft.ContainerService
  * Microsoft.Kubernetes
az k8s-extension create -g <cluster_resource_group> -c <cluster_name> -t <connec
#### Using Kubelet identity as authentication method for AKS clusters
-When working with AKS clusters, one of the authentication options to use is kubelet identity. By default, AKS creates its own kubelet identity in the managed resource group. If you prefer, you can use a [pre-created kubelet managed identity](/azure/aks/use-managed-identity#use-a-pre-created-kubelet-managed-identity). To do so, add the parameter `--config useKubeletIdentity=true` at the time of Flux extension installation.
+When working with AKS clusters, one of the authentication options to use is kubelet identity. By default, AKS creates its own kubelet identity in the managed resource group. If you prefer, you can use a [pre-created kubelet managed identity](../../aks/use-managed-identity.md#use-a-pre-created-kubelet-managed-identity). To do so, add the parameter `--config useKubeletIdentity=true` at the time of Flux extension installation.
```azurecli
az k8s-extension create --resource-group <resource-group> --cluster-name <cluster-name> --cluster-type managedClusters --name flux --extension-type microsoft.flux --config useKubeletIdentity=true
```
az k8s-extension delete -g <resource-group> -c <cluster-name> -n flux -t managed
## Next steps

* Read more about [configurations and GitOps](conceptual-gitops-flux2.md).
* Learn how to [use Azure Policy to enforce GitOps at scale](./use-azure-policy-flux-2.md).
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/overview.md
# What is Azure Arc resource bridge (preview)?
-Azure Arc resource bridge (preview) is part of the core Azure Arc platform, and is designed to host other Azure Arc services. In this release, the resource bridge supports VM self-servicing and management from Azure, for virtualized Windows and Linux virtual machines hosted in an on-premises environment on [Azure Stack HCI](/azure-stack/hci/manage/azure-arc-vm-management-overview), VMware ([Arc-enabled VMware vSphere](/azure/azure-arc/vmware-vsphere/) preview), and System Center Virtual Machine Manager (SCVMM) ([Arc-enabled SCVMM](/azure/azure-arc/system-center-virtual-machine-manager/) preview).
+Azure Arc resource bridge (preview) is part of the core Azure Arc platform, and is designed to host other Azure Arc services. In this release, the resource bridge supports VM self-servicing and management from Azure, for virtualized Windows and Linux virtual machines hosted in an on-premises environment on [Azure Stack HCI](/azure-stack/hci/manage/azure-arc-vm-management-overview), VMware ([Arc-enabled VMware vSphere](../vmware-vsphere/index.yml) preview), and System Center Virtual Machine Manager (SCVMM) ([Arc-enabled SCVMM](../system-center-virtual-machine-manager/index.yml) preview).
Arc resource bridge is a packaged virtual machine that hosts a *management* Kubernetes cluster and requires no user management. The virtual machine is deployed on the on-premises infrastructure, and an ARM resource of Arc resource bridge is created in Azure. The two resources are then connected, allowing VM self-service and management from Azure. The on-premises resource bridge uses guest management to tag local resources, making them available in Azure.
You may need to allow specific URLs to [ensure outbound connectivity is not bloc
## Next steps

* Learn more about [how Azure Arc-enabled VMware vSphere extends Azure's governance and management capabilities to VMware vSphere infrastructure](../vmware-vsphere/overview.md).
* Learn more about [provisioning and managing on-premises Windows and Linux VMs running on Azure Stack HCI clusters](/azure-stack/hci/manage/azure-arc-enabled-virtual-machines).
azure-arc Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/prerequisites.md
Azure Arc-enabled servers support the installation of the Connected Machine agen
Azure Arc-enabled servers do not support installing the agent on virtual machines running in Azure, or on virtual machines running on Azure Stack Hub or Azure Stack Edge, as they are already modeled as Azure VMs and able to be managed directly in Azure.
+> [!NOTE]
+> For additional information on using Arc-enabled servers in VMware environments, see the [VMware FAQ](vmware-faq.md).
## Supported operating systems

The following versions of the Windows and Linux operating systems are officially supported for the Azure Connected Machine agent. Only x86-64 (64-bit) architectures are supported. x86 (32-bit) and ARM-based architectures, including x86-64 emulation on arm64, are not supported operating environments.
azure-arc Vmware Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/vmware-faq.md
+Title: Azure Arc-enabled servers VMware Frequently Asked Questions
+description: Learn how to use Azure Arc-enabled servers on virtual machines running in VMware environments.
+Last updated: 12/21/2022
+# Azure Arc-enabled servers VMware Frequently Asked Questions
+
+This article addresses frequently asked questions about Arc-enabled servers on virtual machines running in VMware environments.
+
+## What is Azure Arc?
+
+Azure Arc is the overarching brand for a suite of Azure hybrid products that extend specific Azure public cloud services and/or management capabilities beyond Azure to on-premises environments and third-party clouds. Azure Arc-enabled server, for example, allows you to use the same Azure management tools with a VM running on-premises in a VMware cluster that you would use with a VM running in Azure.
+
+## What's the difference between Arc-enabled server and Arc-enabled \<hypervisor\>?
+
+> [!NOTE]
+> Arc-enabled \<hypervisor\> refers to Arc-enabled VMware environments such as Arc-enabled VMware vSphere. **Arc-enabled VMware vSphere is currently in Public Preview**.
+
+The easiest way to think of this is as follows:
+
+- Arc-enabled server is responsible for the guest operating system and knows nothing of the virtualization platform that it's running on. Since Arc-enabled server also supports bare-metal machines, there may, in fact, not even be a host hypervisor.
+
+- Arc-enabled VMware vSphere is a superset of Arc-enabled server that extends management capabilities beyond the guest operating system to the VM itself. This provides lifecycle management such as VM start, stop, resize, create, and delete. These lifecycle management capabilities are exposed in the Azure portal and look and feel just like a regular Azure VM. See [What is Azure Arc-enabled VMware vSphere](../vmware-vsphere/overview.md) to learn more.
+
+> [!NOTE]
+> Arc-enabled VMware vSphere also provides guest operating system management; in fact, it uses the same components as Arc-enabled server. However, during Public Preview, not all Azure services supported by Arc-enabled server are available for Arc-enabled VMware vSphere; currently, Azure Monitor, Update Management, and Microsoft Defender for Cloud are not supported. Arc-enabled VMware vSphere is not supported by Azure VMware Solution (AVS).
+
+## Can I use Azure Arc-enabled server on VMs running in VMware environments?
+
+Yes. Azure Arc-enabled server works with VMs running on VMware vSphere as well as Azure VMware Solution (AVS) and supports the full breadth of guest management capabilities across security, monitoring, and governance.
+
+## Which operating systems does Azure Arc work with?
+
+Arc-enabled server and/or Arc-enabled \<hypervisor\> works with all supported versions of Windows Server and major distributions of Linux.
+
+
+## Should I use Arc-enabled server or Arc-enabled \<hypervisor\>, and can I use both?
+
+While Arc-enabled server and Arc-enabled VMware vSphere can be used in conjunction with one another, note that this produces dual representations of the same underlying virtual machine. This scenario may result in duplicate guest management and is not advisable.
+
azure-cache-for-redis Cache Retired Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-retired-features.md
If you don't upgrade your Redis 4 cache by June 30, 2023, the cache is automatic
Cloud Service version 4 caches can't be upgraded to version 6 until they're migrated to a cache based on Azure Virtual Machine Scale Set.
-For more information, see [Caches with a dependency on Cloud Services (classic)](/azure/azure-cache-for-redis/cache-faq).
+For more information, see [Caches with a dependency on Cloud Services (classic)](./cache-faq.yml).
Starting on April 30, 2023, Cloud Service caches receive only critical security updates and critical bug fixes. Cloud Service caches won't support any new features released after April 30, 2023. We highly recommend migrating your caches to Azure Virtual Machine Scale Set.
No, the upgrade can't be rolled back.
## Next steps

- [What's new](cache-whats-new.md)
- [Azure Cache for Redis FAQ](cache-faq.yml)
azure-functions Dotnet Isolated In Process Differences https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/dotnet-isolated-in-process-differences.md
Use the following table to compare feature and functional differences between th
| Feature/behavior | In-process<sup>3</sup> | Isolated worker process |
| - | - | - |
-| [Supported .NET versions](./dotnet-isolated-process-guide.md#supported-versions) | Long Term Support (LTS) versions | All supported versions + .NET Framework |
+| [Supported .NET versions](dotnet-isolated-process-guide.md#supported-versions) | Long Term Support (LTS) versions | [All supported versions](dotnet-isolated-process-guide.md#supported-versions) + .NET Framework |
| Core packages | [Microsoft.NET.Sdk.Functions](https://www.nuget.org/packages/Microsoft.NET.Sdk.Functions/) | [Microsoft.Azure.Functions.Worker](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker/)<br/>[Microsoft.Azure.Functions.Worker.Sdk](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Sdk) |
| Binding extension packages | [Microsoft.Azure.WebJobs.Extensions.*](https://www.nuget.org/packages?q=Microsoft.Azure.WebJobs.Extensions) | [Microsoft.Azure.Functions.Worker.Extensions.*](https://www.nuget.org/packages?q=Microsoft.Azure.Functions.Worker.Extensions) |
| Durable Functions | [Supported](durable/durable-functions-overview.md) | [Supported (public preview)](https://github.com/microsoft/durabletask-dotnet#usage-with-azure-functions) |
-| Model types exposed by bindings | Simple types<br/>JSON serializable types<br/>Arrays/enumerations<br/>Service SDK types such as [BlobClient]<br/>`IAsyncCollector` (for output bindings) | Simple types<br/>JSON serializable types<br/>Arrays/enumerations |
-| HTTP trigger model types| [HttpRequest]/[ObjectResult] | [HttpRequestData]/[HttpResponseData] |
+| Model types exposed by bindings | Simple types<br/>[JSON serializable](/dotnet/api/system.text.json.jsonserializeroptions) types<br/>Arrays/enumerations<br/>Service SDK types such as [BlobClient](/dotnet/api/azure.storage.blobs.blobclient)<br/>`IAsyncCollector` (for output bindings) | Simple types<br/>JSON serializable types<br/>Arrays/enumerations |
+| HTTP trigger model types| [HttpRequest](/dotnet/api/system.net.http.httpclient) / [ObjectResult](/dotnet/api/microsoft.aspnetcore.mvc.objectresult) | [HttpRequestData](/dotnet/api/microsoft.azure.functions.worker.http.httprequestdata?view=azure-dotnet&preserve-view=true) / [HttpResponseData](/dotnet/api/microsoft.azure.functions.worker.http.httpresponsedata?view=azure-dotnet&preserve-view=true) |
| Output binding interaction | Return values (single output only)<br/>`out` parameters<br/>`IAsyncCollector` | Return values (expanded model with single or [multiple outputs](dotnet-isolated-process-guide.md#multiple-output-bindings)) |
| Imperative bindings<sup>1</sup> | [Supported](functions-dotnet-class-library.md#binding-at-runtime) | Not supported |
| Dependency injection | [Supported](functions-dotnet-dependency-injection.md) | [Supported](dotnet-isolated-process-guide.md#dependency-injection) |
| Middleware | Not supported | [Supported](dotnet-isolated-process-guide.md#middleware) |
-| Logging | [ILogger] passed to the function<br/>[ILogger&lt;T&gt;] via dependency injection | [ILogger]/[ILogger&lt;T&gt;] obtained from [FunctionContext] or via [dependency injection](dotnet-isolated-process-guide.md#dependency-injection)|
+| Logging | [ILogger] passed to the function<br/>[ILogger&lt;T&gt;] via [dependency injection](functions-dotnet-dependency-injection.md) | [ILogger]/[ILogger&lt;T&gt;] obtained from [FunctionContext](/dotnet/api/microsoft.azure.functions.worker.functioncontext) or via [dependency injection](dotnet-isolated-process-guide.md#dependency-injection)|
| Application Insights dependencies | [Supported](functions-monitoring.md#dependencies) | [Supported (public preview)](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.ApplicationInsights) |
| Cancellation tokens | [Supported](functions-dotnet-class-library.md#cancellation-tokens) | [Supported](dotnet-isolated-process-guide.md#cancellation-tokens) |
| Cold start times<sup>2</sup> | (Baseline) | Additionally includes process launch |
To learn more, see:
+ [Develop .NET class library functions](functions-dotnet-class-library.md)
+ [Develop .NET isolated worker process functions](dotnet-isolated-process-guide.md)
+[ILogger]: /dotnet/api/microsoft.extensions.logging.ilogger
+[ILogger&lt;T&gt;]: /dotnet/api/microsoft.extensions.logging.logger-1
azure-functions Migrate Version 3 Version 4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/migrate-version-3-version-4.md
Azure Functions version 4.x is highly backwards compatible to version 3.x. Most
>
> After the deadline, function apps can be created and deployed from your CI/CD DevOps pipeline, and all existing apps continue to run without breaking changes. However, your apps are not eligible for new features, security patches, and performance optimizations. You'll get related service support once you've upgraded them to version 4.x.
>
->End of support for these runtime versions is due to the ending of support for .NET Core 3.1, which is required by these older runtime versions. This requirement affects all Azure Functions runtime languages (e.g .NET, Python, node.js, PowerShell etc).
+>End of support for these runtime versions is due to the ending of support for .NET Core 3.1, which is required by these older runtime versions. This requirement affects all [languages supported by Azure Functions](supported-languages.md).
>
> We highly recommend migrating your function apps to version 4.x of the Functions runtime by following this article.
>
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md
Azure Monitor Agent replaces the Azure Monitor legacy monitoring agents:
Azure Monitor Agent uses [data collection rules](../essentials/data-collection-rule-overview.md), where you define which data you want each agent to collect. Data collection rules let you manage data collection settings at scale and define unique, scoped configurations for subsets of machines. You can define a rule to send data from multiple machines to multiple destinations across regions and tenants. > [!NOTE]
-> To send data across tenants, you must first enable [Azure Lighthouse](/azure/lighthouse/overview).
+> To send data across tenants, you must first enable [Azure Lighthouse](../../lighthouse/overview.md).
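As an illustrative sketch of that model (the workspace resource ID, stream name, and performance counter below are placeholder assumptions, not values from this article), a data collection rule pairs data sources with destinations through `dataFlows`:

```json
{
  "properties": {
    "dataSources": {
      "performanceCounters": [
        {
          "name": "cpuCounter",
          "streams": [ "Microsoft-Perf" ],
          "samplingFrequencyInSeconds": 60,
          "counterSpecifiers": [ "\\Processor(_Total)\\% Processor Time" ]
        }
      ]
    },
    "destinations": {
      "logAnalytics": [
        {
          "name": "centralWorkspace",
          "workspaceResourceId": "/subscriptions/{sub-id}/resourceGroups/{rg}/providers/Microsoft.OperationalInsights/workspaces/{workspace}"
        }
      ]
    },
    "dataFlows": [
      {
        "streams": [ "Microsoft-Perf" ],
        "destinations": [ "centralWorkspace" ]
      }
    ]
  }
}
```

Multiple destinations can be listed in a single flow, which is how one rule can fan data out across regions and tenants.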
**To collect data using Azure Monitor Agent:**
In addition to the generally available data collection listed above, Azure Monit
| [Microsoft Sentinel](../../sentinel/overview.md) | <ul><li>Windows Security Events: [Generally available](../../sentinel/connect-windows-security-events.md?tabs=AMA)</li><li>Windows Forwarding Event (WEF): [Public preview](../../sentinel/data-connectors-reference.md#windows-forwarded-events-preview)</li><li>Windows DNS logs: [Public preview](../../sentinel/connect-dns-ama.md)</li><li>Linux Syslog CEF: [Public preview](../../sentinel/connect-cef-ama.md#set-up-the-common-event-format-cef-via-ama-connector)</li></ul> | Sentinel DNS extension, if you're collecting DNS logs. For all other data types, you just need the Azure Monitor Agent extension. | - |
| [Change Tracking](../../automation/change-tracking/overview.md) | Change Tracking: Preview. | Change Tracking extension | [Sign-up link](https://aka.ms/amadcr-privatepreviews) |
| [Update Management](../../automation/update-management/overview.md) (available without Azure Monitor Agent) | Use Update Management v2 - Public preview | None | [Update management center (Public preview) documentation](../../update-center/index.yml) |
-| [Network Watcher](../../network-watcher/network-watcher-monitoring-overview.md) | Connection Monitor: Public preview | Azure NetworkWatcher extension | [Monitor network connectivity by using Azure Monitor Agent](/azure/network-watcher/azure-monitor-agent-with-connection-monitor) |
+| [Network Watcher](../../network-watcher/network-watcher-monitoring-overview.md) | Connection Monitor: Public preview | Azure NetworkWatcher extension | [Monitor network connectivity by using Azure Monitor Agent](../../network-watcher/azure-monitor-agent-with-connection-monitor.md) |
## Supported regions
View [supported operating systems for Azure Arc Connected Machine agent](../../a
## Next steps
- [Install the Azure Monitor Agent](azure-monitor-agent-manage.md) on Windows and Linux virtual machines.
-- [Create a data collection rule](data-collection-rule-azure-monitor-agent.md) to collect data from the agent and send it to Azure Monitor.
+- [Create a data collection rule](data-collection-rule-azure-monitor-agent.md) to collect data from the agent and send it to Azure Monitor.
azure-monitor Azure Monitor Agent Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-migration.md
Previously updated : 9/14/2022 Last updated : 1/10/2023 # Customer intent: As an IT manager, I want to understand how I should move from using legacy agents to Azure Monitor Agent.
Azure Monitor Agent provides the following benefits over legacy agents:
Your migration plan to the Azure Monitor Agent should take into account: -- **Current and new feature requirements:** Review [Azure Monitor Agent's supported services and features](agents-overview.md#supported-services-and-features) to ensure that Azure Monitor Agent has the features you require. If you currently use unsupported features you can temporarily do without, consider migrating to the new agent to benefit from added security and reduced cost immediately. Use the [AMA Migration Helper](./azure-monitor-agent-migration-tools.md#using-ama-migration-helper) to *discover what solutions and features you're using today that depend on the legacy agent*.-
- If you use Microsoft Sentinel, see [Gap analysis for Microsoft Sentinel](../../sentinel/ama-migrate.md#gap-analysis-between-agents) for a comparison of the extra data collected by Microsoft Sentinel.
--- **Installing Azure Monitor Agent alongside a legacy agent:** If you're setting up a *new environment* with resources, such as deployment scripts and onboarding templates, and you still need a legacy agent, assess the effort of migrating to Azure Monitor Agent later. If the setup will take a significant amount of rework, install Azure Monitor Agent together with a legacy agent in your new environment to decrease the migration effort.-
- Azure Monitor Agent can run alongside the legacy Log Analytics agents on the same machine so that you can continue to use existing functionality during evaluation or migration. You can begin the transition, but ensure you understand the limitations:
- - Be careful when you collect duplicate data from the same machine. Duplicate data could skew query results and affect downstream features like alerts, dashboards, or workbooks. For example, VM Insights uses the Log Analytics agent to send performance data to a Log Analytics workspace. You might also have configured the workspace to collect Windows events and Syslog events from agents.
- If you install Azure Monitor Agent and create a data collection rule for these events and performance data, you'll collect duplicate data. If you're using both agents to collect the same type of data, make sure the agents are *collecting data from different machines* or *sending the data to different destinations*. Collecting duplicate data also generates more charges for data ingestion and retention.
-
+- **Service (legacy Solutions) requirements:**
+ - Review [Azure Monitor Agent's supported services list](agents-overview.md#supported-services-and-features) to ensure that Azure Monitor Agent supports the services you require. If you currently use services in preview, start testing your scenarios during the preview phase. This saves time and ensures you're ready to deploy to production as soon as the service becomes generally available. Moreover, you benefit from added security and reduced cost immediately.
+ - Use the [AMA Migration Helper](./azure-monitor-agent-migration-tools.md#using-ama-migration-helper) to *discover what solutions and features you're using today that depend on the legacy agents*.
+ - If you use Microsoft Sentinel, see [Gap analysis for Microsoft Sentinel](../../sentinel/ama-migrate.md#gap-analysis-between-agents) for a comparison of the extra data collected by Microsoft Sentinel.
+
+- **Installing Azure Monitor Agent alongside a legacy agent:**
+ - If you're setting up a *new environment* with resources, such as deployment scripts and onboarding templates, assess the effort of migrating to Azure Monitor Agent later. If the setup will take a significant amount of rework, install Azure Monitor Agent together with a legacy agent in your new environment to decrease the migration effort later.
+ - Azure Monitor Agent **can run alongside the legacy Log Analytics agents on the same machine** so that you can continue to use existing functionality during evaluation or migration. You can begin the transition, but ensure you understand the **limitations below**:
+ - Be careful when you collect duplicate data from the same machine, as this could skew query results, affect downstream features like alerts, dashboards, and workbooks, and generate more charges for data ingestion and retention. To avoid data duplication, ensure the agents are *collecting data from different machines* or *sending the data to different destinations*. Additionally:
+   - For **Defender for Cloud**, you will only be [billed once per machine](/azure/defender-for-cloud/auto-deploy-azure-monitoring-agent#impact-of-running-with-both-the-log-analytics-and-azure-monitor-agents) when running both agents.
+   - For **Sentinel**, you can easily [disable the legacy connector](../../sentinel/ama-migrate.md#recommended-migration-plan) to stop ingestion of logs from legacy agents.
  - Running two telemetry agents on the same machine consumes double the resources, including but not limited to CPU, memory, storage space, and network bandwidth.

## Prerequisites
azure-monitor Data Collection Iis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-iis.md
The [data collection rule](../essentials/data-collection-rule-overview.md) defin
You can define a data collection rule to send data from multiple machines to multiple Log Analytics workspaces, including workspaces in a different region or tenant. Create the data collection rule in the *same region* as your Log Analytics workspace.

> [!NOTE]
-> To send data across tenants, you must first enable [Azure Lighthouse](/azure/lighthouse/overview).
+> To send data across tenants, you must first enable [Azure Lighthouse](../../lighthouse/overview.md).
To create the data collection rule in the Azure portal:
Learn more about:
- [Azure Monitor Agent](azure-monitor-agent-overview.md).
- [Data collection rules](../essentials/data-collection-rule-overview.md).
-- [Best practices for cost management in Azure Monitor](../best-practices-cost.md).
+- [Best practices for cost management in Azure Monitor](../best-practices-cost.md).
azure-monitor Data Collection Rule Azure Monitor Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-rule-azure-monitor-agent.md
To complete this procedure, you need:
You can define a data collection rule to send data from multiple machines to multiple Log Analytics workspaces, including workspaces in a different region or tenant. Create the data collection rule in the *same region* as your Log Analytics workspace.

> [!NOTE]
-> To send data across tenants, you must first enable [Azure Lighthouse](/azure/lighthouse/overview).
+> To send data across tenants, you must first enable [Azure Lighthouse](../../lighthouse/overview.md).
### [Portal](#tab/portal) 1. On the **Monitor** menu, select **Data Collection Rules**.
Examples of using a custom XPath to filter events:
- [Collect text logs by using Azure Monitor Agent](data-collection-text-log.md).
- Learn more about [Azure Monitor Agent](azure-monitor-agent-overview.md).
-- Learn more about [data collection rules](../essentials/data-collection-rule-overview.md).
+- Learn more about [data collection rules](../essentials/data-collection-rule-overview.md).
azure-monitor Data Collection Text Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-text-log.md
The data collection rule defines:
You can define a data collection rule to send data from multiple machines to multiple Log Analytics workspaces, including workspaces in a different region or tenant. Create the data collection rule in the *same region* as your Log Analytics workspace.

> [!NOTE]
-> To send data across tenants, you must first enable [Azure Lighthouse](/azure/lighthouse/overview).
+> To send data across tenants, you must first enable [Azure Lighthouse](../../lighthouse/overview.md).
### [Portal](#tab/portal)
Learn more about:
- [Azure Monitor Agent](azure-monitor-agent-overview.md).
- [Data collection rules](../essentials/data-collection-rule-overview.md).
-- [Best practices for cost management in Azure Monitor](../best-practices-cost.md).
+- [Best practices for cost management in Azure Monitor](../best-practices-cost.md).
azure-monitor Alerts Troubleshoot Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-troubleshoot-log.md
When a log alert rule is created, the query is validated for correct syntax. But
- Rules were created via the API, and validation was skipped by the user.
- The query [runs on multiple resources](../logs/cross-workspace-query.md), and one or more of the resources was deleted or moved.
-- The [query fails](/azure/azure-monitor/logs/api/errors) because:
+- The [query fails](../logs/api/errors.md) because:
- The logging solution wasn't [deployed to the workspace](../insights/solutions.md#install-a-monitoring-solution), so tables aren't created. - Data stopped flowing to a table in the query for more than 30 days. - [Custom logs tables](../agents/data-sources-custom-logs.md) aren't yet created, because the data flow hasn't started.
Try the following steps to resolve the problem:
- Learn about [log alerts in Azure](./alerts-unified-log.md).
- Learn more about [configuring log alerts](../logs/log-query-overview.md).
-- Learn more about [log queries](../logs/log-query-overview.md).
+- Learn more about [log queries](../logs/log-query-overview.md).
azure-monitor Azure Vm Vmss Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-vm-vmss-apps.md
Title: Monitor performance on Azure VMs - Application Insights
-description: Application performance monitoring for Azure Virtual Machines and Azure Virtual Machine Scale Sets. Chart load and response time, dependency information, and set alerts on performance.
+ Title: Monitor performance on Azure VMs - Azure Application Insights
+description: Application performance monitoring for Azure Virtual Machines and Azure Virtual Machine Scale Sets.
Previously updated : 11/15/2022 Last updated : 01/11/2023 ms.devlang: csharp, java, javascript, python
-# Deploy Application Insights Agent on virtual machines and Virtual Machine Scale Sets
+# Application Insights for Azure VMs and Virtual Machine Scale Sets
-Enabling monitoring for your .NET or Java-based web applications running on [Azure Virtual Machines](https://azure.microsoft.com/services/virtual-machines/) and [Azure Virtual Machine Scale Sets](../../virtual-machine-scale-sets/index.yml) is now easier than ever. Get all the benefits of using Application Insights without modifying your code.
+Enabling monitoring for your ASP.NET and ASP.NET Core IIS-hosted applications running on [Azure virtual machines](https://azure.microsoft.com/services/virtual-machines/) or [Azure Virtual Machine Scale Sets](../../virtual-machine-scale-sets/index.yml) is now easier than ever. Get all the benefits of using Application Insights without modifying your code.
-This article walks you through enabling Application Insights monitoring by using Application Insights Agent. It also provides preliminary guidance for automating the process for large-scale deployments.
-
-Java-based applications running on Azure Virtual Machines and Azure Virtual Machine Scale Sets are monitored with the [Application Insights Java 3.0 agent](./java-in-process-agent.md), which is generally available.
-
-> [!IMPORTANT]
-> Application Insights Agent for ASP.NET and ASP.NET Core applications running on Azure Virtual Machines and Azure Virtual Machine Scale Sets is currently in public preview. For monitoring your ASP.NET applications running on-premises, use [Application Insights Agent for on-premises servers](./status-monitor-v2-overview.md), which is generally available and fully supported.
->
-> The preview version for Azure Virtual Machines and Azure Virtual Machine Scale Sets is provided without a service-level agreement. We don't recommend it for production workloads. Some features might not be supported, and some might have constrained capabilities.
->
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+This article walks you through enabling Application Insights monitoring using the Application Insights Agent and provides preliminary guidance for automating the process for large-scale deployments.
## Enable Application Insights
Auto-instrumentation is easy to enable. Advanced configuration isn't required.
For a complete list of supported auto-instrumentation scenarios, see [Supported environments, languages, and resource providers](codeless-overview.md#supported-environments-languages-and-resource-providers). > [!NOTE]
-> Auto-instrumentation is available for ASP.NET, ASP.NET Core IIS-hosted applications, and Java. Use an SDK to instrument Node.js and Python applications hosted on Azure Virtual Machines and Azure Virtual Machine Scale Sets.
-
+> Auto-instrumentation is available for ASP.NET, ASP.NET Core IIS-hosted applications, and Java. Use an SDK to instrument Node.js and Python applications hosted on Azure virtual machines and Virtual Machine Scale Sets.
### [.NET Framework](#tab/net)
-The Application Insights Agent autocollects the same dependency signals out-of-the-box as the SDK. To learn more, see [Dependency autocollection](asp-net-dependencies.md#net).
+The Application Insights Agent auto-collects the same dependency signals out-of-the-box as the SDK. See [Dependency auto-collection](./auto-collect-dependencies.md#net) to learn more.
-### [.NET Core/.NET](#tab/core)
+### [.NET Core / .NET](#tab/core)
-The Application Insights Agent autocollects the same dependency signals out-of-the-box as the SDK. To learn more, see [Dependency autocollection](asp-net-dependencies.md#net).
+The Application Insights Agent auto-collects the same dependency signals out-of-the-box as the SDK. See [Dependency auto-collection](./auto-collect-dependencies.md#net) to learn more.
### [Java](#tab/Java)
-We recommend [Application Insights Java 3.0 agent](./java-in-process-agent.md) for Java. The most popular libraries, frameworks, logs, and dependencies are [autocollected](./java-in-process-agent.md#autocollected-requests) along with many [other configurations](./java-standalone-config.md).
+We recommend the [Application Insights Java 3.0 agent](./java-in-process-agent.md) for Java. The most popular libraries, frameworks, logs, and dependencies are [auto-collected](./java-in-process-agent.md#autocollected-requests), along with a multitude of [other configurations](./java-standalone-config.md).
### [Node.js](#tab/nodejs)
To monitor Python apps, use the [SDK](./opencensus-python.md).
-## Manage Application Insights Agent for .NET applications on virtual machines by using PowerShell
+Before installing the Application Insights Agent, you'll need a connection string. [Create a new Application Insights resource](./create-workspace-resource.md) or copy the connection string from an existing Application Insights resource.
+
+## Enable Monitoring for Virtual Machines
+
+### Method 1 - Azure portal / GUI
+1. Go to the Azure portal, navigate to your Application Insights resource, and copy the connection string to the clipboard.
+
   :::image type="content" source="./media/azure-vm-vmss-apps/connect-string.png" alt-text="Screenshot of the connection string." lightbox="./media/azure-vm-vmss-apps/connect-string.png":::
+
+2. Navigate to your virtual machine, open the "Extensions + applications" pane under the "Settings" section in the left navigation menu, and select "+ Add".
-Before you install Application Insights Agent, you'll need a connection string. [Create a new Application Insights resource](./create-new-resource.md) or copy the connection string from an existing Application Insights resource.
+   :::image type="content" source="./media/azure-vm-vmss-apps/add-extension.png" alt-text="Screenshot of the extensions pane with an add button." lightbox="media/azure-vm-vmss-apps/add-extension.png":::
+
+3. Select the "Application Insights Agent" card, and select "Next".
+
+   :::image type="content" source="./media/azure-vm-vmss-apps/select-extension.png" alt-text="Screenshot of the install an extension pane with a next button." lightbox="media/azure-vm-vmss-apps/select-extension.png":::
+
+4. Paste the connection string you copied in step 1 and select "Review + Create".
+
+   :::image type="content" source="./media/azure-vm-vmss-apps/install-extension.png" alt-text="Screenshot of the create pane with a review and create button." lightbox="media/azure-vm-vmss-apps/install-extension.png":::
+
+### Method 2 - PowerShell
> [!NOTE]
-> If you're new to PowerShell, see the [Get Started Guide](/powershell/azure/get-started-azureps).
+> New to PowerShell? Check out the [Get Started Guide](/powershell/azure/get-started-azureps).
-Install or update Application Insights Agent as an extension for virtual machines:
+Install or update the Application Insights Agent as an extension for Azure virtual machines:
```powershell
-$publicCfgJsonString = '
+# define variables to match your environment before running
+$ResourceGroup = "<myVmResourceGroup>"
+$VMName = "<myVmName>"
+$Location = "<myVmLocation>"
+$ConnectionString = "<myAppInsightsResourceConnectionString>"
+
+$publicCfgJsonString = @"
{
- "redfieldConfiguration": {
- "instrumentationKeyMap": {
- "filters": [
- {
- "appFilter": ".*",
- "machineFilter": ".*",
- "virtualPathFilter": ".*",
- "instrumentationSettings" : {
- "connectionString": "InstrumentationKey=00000000-0000-0000-0000-000000000000;IngestionEndpoint=https://xxxx.applicationinsights.azure.com/" # Application Insights connection string, create new Application Insights resource if you don't have one. https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/microsoft.insights%2Fcomponents
- }
+ "redfieldConfiguration": {
+ "instrumentationKeyMap": {
+ "filters": [
+ {
+ "appFilter": ".*",
+ "machineFilter": ".*",
+ "virtualPathFilter": ".*",
+ "instrumentationSettings" : {
+ "connectionString": "$ConnectionString"
+ }
+ }
+ ]
}
- ]
}
- }
-}
-';
-$privateCfgJsonString = '{}';
+ }
+"@
-Set-AzVMExtension -ResourceGroupName "<myVmResourceGroup>" -VMName "<myVmName>" -Location "<myVmLocation>" -Name "ApplicationMonitoring" -Publisher "Microsoft.Azure.Diagnostics" -Type "ApplicationMonitoringWindows" -Version "2.8" -SettingString $publicCfgJsonString -ProtectedSettingString $privateCfgJsonString
-```
+$privateCfgJsonString = '{}'
+
+Set-AzVMExtension -ResourceGroupName $ResourceGroup -VMName $VMName -Location $Location -Name "ApplicationMonitoringWindows" -Publisher "Microsoft.Azure.Diagnostics" -Type "ApplicationMonitoringWindows" -Version "2.8" -SettingString $publicCfgJsonString -ProtectedSettingString $privateCfgJsonString
+```
> [!NOTE]
-> You can install or update Application Insights Agent as an extension across multiple virtual machines at scale by using a PowerShell loop.
-
-Uninstall Application Insights Agent extension from a virtual machine:
+> For more complicated at-scale deployments, you can use a PowerShell loop to install or update the Application Insights Agent extension across multiple VMs.
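
Such an at-scale loop might look like the following sketch, which assumes `$publicCfgJsonString` and `$privateCfgJsonString` are already defined as in the preceding example (the resource group name is a placeholder):

```powershell
# Sketch only: applies the same monitoring settings to every VM in one resource group.
# Assumes $publicCfgJsonString and $privateCfgJsonString are defined as shown above.
$ResourceGroup = "<myVmResourceGroup>"
Get-AzVM -ResourceGroupName $ResourceGroup | ForEach-Object {
    Set-AzVMExtension -ResourceGroupName $ResourceGroup -VMName $_.Name -Location $_.Location `
        -Name "ApplicationMonitoringWindows" -Publisher "Microsoft.Azure.Diagnostics" `
        -Type "ApplicationMonitoringWindows" -Version "2.8" `
        -SettingString $publicCfgJsonString -ProtectedSettingString $privateCfgJsonString
}
```

Filter the `Get-AzVM` output (for example, by tag) if only a subset of machines should be instrumented.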
+Query the Application Insights Agent extension status for an Azure virtual machine:
```powershell
-Remove-AzVMExtension -ResourceGroupName "<myVmResourceGroup>" -VMName "<myVmName>" -Name "ApplicationMonitoring"
+Get-AzVMExtension -ResourceGroupName "<myVmResourceGroup>" -VMName "<myVmName>" -Name ApplicationMonitoringWindows -Status
```
-Query Application Insights Agent extension status for a virtual machine:
-
+Get the list of installed extensions for an Azure virtual machine:
```powershell
-Get-AzVMExtension -ResourceGroupName "<myVmResourceGroup>" -VMName "<myVmName>" -Name ApplicationMonitoring -Status
+Get-AzResource -ResourceId "/subscriptions/<mySubscriptionId>/resourceGroups/<myVmResourceGroup>/providers/Microsoft.Compute/virtualMachines/<myVmName>/extensions"
```-
-Get a list of installed extensions for a virtual machine:
-
+Uninstall the Application Insights Agent extension from an Azure virtual machine:
```powershell
-Get-AzResource -ResourceId "/subscriptions/<mySubscriptionId>/resourceGroups/<myVmResourceGroup>/providers/Microsoft.Compute/virtualMachines/<myVmName>/extensions"
-
-# Name : ApplicationMonitoring
-# ResourceGroupName : <myVmResourceGroup>
-# ResourceType : Microsoft.Compute/virtualMachines/extensions
-# Location : southcentralus
-# ResourceId : /subscriptions/<mySubscriptionId>/resourceGroups/<myVmResourceGroup>/providers/Microsoft.Compute/virtualMachines/<myVmName>/extensions/ApplicationMonitoring
+Remove-AzVMExtension -ResourceGroupName "<myVmResourceGroup>" -VMName "<myVmName>" -Name "ApplicationMonitoring"
```
-You can also view installed extensions in the [Azure Virtual Machine section](../../virtual-machines/extensions/overview.md) of the Azure portal.
- > [!NOTE]
-> Verify installation by selecting **Live Metrics Stream** within the Application Insights resource associated with the connection string you used to deploy the Application Insights Agent extension. If you're sending data from multiple virtual machines, select the target virtual machines under **Server Name**. It might take up to a minute for data to begin flowing.
+> Verify installation by selecting **Live Metrics Stream** within the Application Insights resource associated with the connection string you used to deploy the Application Insights Agent extension. If you're sending data from multiple Virtual Machines, select the target Azure virtual machines under **Server Name**. It might take up to a minute for data to begin flowing.
-## Manage Application Insights Agent for .NET applications on Virtual Machine Scale Sets by using PowerShell
+## Enable Monitoring for Virtual Machine Scale Sets
-Install or update Application Insights Agent as an extension for a Virtual Machine Scale Set:
+### Method 1 - Azure portal / GUI
+Follow the prior steps for VMs, but navigate to your Virtual Machine Scale Set instead of your VM.
+### Method 2 - PowerShell
+Install or update the Application Insights Agent as an extension for an Azure Virtual Machine Scale Set:
```powershell
+# Set resource group, VMSS name, and connection string to reflect your environment
+$ResourceGroup = "<myVmResourceGroup>"
+$VMSSName = "<myVmssName>"
+$ConnectionString = "<myAppInsightsResourceConnectionString>"
$publicCfgHashtable = @{ "redfieldConfiguration"= @{
$publicCfgHashtable =
"machineFilter"= ".*"; "virtualPathFilter"= ".*"; "instrumentationSettings" = @{
- "connectionString"= "InstrumentationKey=00000000-0000-0000-0000-000000000000;IngestionEndpoint=https://xxxx.applicationinsights.azure.com/" # Application Insights connection string, create new Application Insights resource if you don't have one. https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/microsoft.insights%2Fcomponents
+ "connectionString"= "$ConnectionString"
} } )
$publicCfgHashtable =
} }; $privateCfgHashtable = @{};-
-$vmss = Get-AzVmss -ResourceGroupName "<myResourceGroup>" -VMScaleSetName "<myVmssName>"
-
+$vmss = Get-AzVmss -ResourceGroupName $ResourceGroup -VMScaleSetName $VMSSName
Add-AzVmssExtension -VirtualMachineScaleSet $vmss -Name "ApplicationMonitoringWindows" -Publisher "Microsoft.Azure.Diagnostics" -Type "ApplicationMonitoringWindows" -TypeHandlerVersion "2.8" -Setting $publicCfgHashtable -ProtectedSetting $privateCfgHashtable-
-Update-AzVmss -ResourceGroupName $vmss.ResourceGroupName -Name $vmss.Name -VirtualMachineScaleSet $vmss
-
-# Note: Depending on your update policy, you might need to run Update-AzVmssInstance for each instance.
-```
-
-Uninstall the application monitoring extension from Virtual Machine Scale Sets:
-
-```powershell
-$vmss = Get-AzVmss -ResourceGroupName "<myResourceGroup>" -VMScaleSetName "<myVmssName>"
-
-Remove-AzVmssExtension -VirtualMachineScaleSet $vmss -Name "ApplicationMonitoring"
-
-Update-AzVmss -ResourceGroupName $vmss.ResourceGroupName -Name $vmss.Name -VirtualMachineScaleSet $vmss
-
-# Note: Depending on your update policy, you might need to run Update-AzVmssInstance for each instance.
+Update-AzVmss -ResourceGroupName $vmss.ResourceGroupName -Name $vmss.Name -VirtualMachineScaleSet $vmss
+# Note: Depending on your update policy, you might need to run Update-AzVmssInstance for each instance
```
-Query the application monitoring extension status for Virtual Machine Scale Sets:
-
+Get the list of installed extensions for an Azure Virtual Machine Scale Set:
```powershell
-# Not supported by extensions framework
+Get-AzResource -ResourceId "/subscriptions/<mySubscriptionId>/resourceGroups/<myResourceGroup>/providers/Microsoft.Compute/virtualMachineScaleSets/<myVmssName>/extensions"
```
-Get a list of installed extensions for Virtual Machine Scale Sets:
-
+Uninstall the application monitoring extension from an Azure Virtual Machine Scale Set:
```powershell
-Get-AzResource -ResourceId /subscriptions/<mySubscriptionId>/resourceGroups/<myResourceGroup>/providers/Microsoft.Compute/virtualMachineScaleSets/<myVmssName>/extensions
-
-# Name : ApplicationMonitoringWindows
-# ResourceGroupName : <myResourceGroup>
-# ResourceType : Microsoft.Compute/virtualMachineScaleSets/extensions
-# Location :
-# ResourceId : /subscriptions/<mySubscriptionId>/resourceGroups/<myResourceGroup>/providers/Microsoft.Compute/virtualMachineScaleSets/<myVmssName>/extensions/ApplicationMonitoringWindows
+# set resource group and vmss name to reflect your environment
+$vmss = Get-AzVmss -ResourceGroupName "<myResourceGroup>" -VMScaleSetName "<myVmssName>"
+Remove-AzVmssExtension -VirtualMachineScaleSet $vmss -Name "ApplicationMonitoringWindows"
+Update-AzVmss -ResourceGroupName $vmss.ResourceGroupName -Name $vmss.Name -VirtualMachineScaleSet $vmss
+# Note: Depending on your update policy, you might need to run Update-AzVmssInstance for each instance
```

## Troubleshooting

Find troubleshooting tips for the Application Insights Monitoring Agent extension for .NET applications running on Azure virtual machines and Virtual Machine Scale Sets.
-> [!NOTE]
-> The following steps don't apply to Node.js and Python applications, which require SDK instrumentation.
-
-Extension execution output is logged to files found in the following directories:
-
+If you're having trouble deploying the extension, review the execution output, which is logged to files in the following directories:
```Windows
C:\WindowsAzure\Logs\Plugins\Microsoft.Azure.Diagnostics.ApplicationMonitoringWindows\<version>\
```
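
To inspect the newest logs on the machine itself, a quick sketch (run in PowerShell on the VM; the path matches the directory above):

```powershell
# List the five most recently written extension log files.
Get-ChildItem "C:\WindowsAzure\Logs\Plugins\Microsoft.Azure.Diagnostics.ApplicationMonitoringWindows" -Recurse -File |
    Sort-Object LastWriteTime -Descending |
    Select-Object -First 5 FullName, LastWriteTime
```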
+If your extension has deployed successfully but you're unable to see telemetry, it could be due to one of the following issues, covered in [Agent Troubleshooting](https://learn.microsoft.com/troubleshoot/azure/azure-monitor/app-insights/status-monitor-v2-troubleshoot#known-issues):
+- Conflicting DLLs in an app's bin directory
+- Conflict with IIS shared configuration
[!INCLUDE [azure-monitor-app-insights-test-connectivity](../../../includes/azure-monitor-app-insights-test-connectivity.md)]
C:\WindowsAzure\Logs\Plugins\Microsoft.Azure.Diagnostics.ApplicationMonitoringWi
### 2.8.44
-- Updated Application Insights .NET/.NET Core SDK to 2.20.1 - red field
-- Enabled SQL query collection
-- Enabled support for Azure Active Directory authentication
+- Updated Application Insights .NET/.NET Core SDK to 2.20.1 - red field.
+- Enabled SQL query collection.
+- Enabled support for Azure Active Directory authentication.
### 2.8.42
-Updated Application Insights .NET/.NET Core SDK to 2.18.1 - red field
+- Updated Application Insights .NET/.NET Core SDK to 2.18.1 - red field.
### 2.8.41
-Added ASP.NET Core auto-instrumentation feature
+- Added ASP.NET Core auto-instrumentation feature.
## Next steps
-* Learn how to [deploy an application to a Virtual Machine Scale Set](../../virtual-machine-scale-sets/virtual-machine-scale-sets-deploy-app.md).
-* [Set up availability web tests](monitor-web-app-availability.md) to be alerted if your endpoint is down.
+* Learn how to [deploy an application to an Azure Virtual Machine Scale Set](../../virtual-machine-scale-sets/virtual-machine-scale-sets-deploy-app.md).
+* [Set up Availability web tests](monitor-web-app-availability.md) to be alerted if your endpoint is down.
azure-monitor Java Standalone Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-config.md
If your application is behind a firewall and can't connect directly to Applicati
} ```
+You can also set the HTTP proxy using the environment variable `APPLICATIONINSIGHTS_PROXY`, which takes the format `https://<host>:<port>`. When set, it takes precedence over the proxy specified in the JSON configuration.
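
As a sketch of how you might set this on a Windows host (the proxy address and agent jar name here are illustrative, not from this article):

```powershell
# Illustrative values; substitute your own proxy host, port, and agent jar path.
$env:APPLICATIONINSIGHTS_PROXY = "https://myproxy.example.com:8080"
# The environment variable takes precedence over any proxy block in applicationinsights.json.
java "-javaagent:applicationinsights-agent.jar" -jar app.jar
```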
+ Application Insights Java 3.x also respects the global `https.proxyHost` and `https.proxyPort` system properties if they're set, and `http.nonProxyHosts`, if needed.

## Recovery from ingestion failures
azure-monitor Autoscale Multiprofile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-multiprofile.md
Previously updated : 09/30/2022 Last updated : 01/10/2023
The example below shows an autoscale setting with a default profile and recurrin
In the above example, on Monday after 6 AM, the recurring profile will be used. If the instance count is less than 3, autoscale scales to the new minimum of three. Autoscale continues to use this profile and scales based on CPU% until Monday at 6 PM. At all other times, scaling will be done according to the default profile, based on the number of requests. After 6 PM on Monday, autoscale switches to the default profile. If, for example, the number of instances at the time is 12, autoscale scales in to 10, which is the maximum allowed for the default profile.
+## Multiple contiguous profiles
+Autoscale transitions between profiles based on their start times. The end time for a given profile is determined by the start time of the following profile.
+
+In the portal, the end time field becomes the next start time for the default profile. You can't specify the same time for the end of one profile and the start of the next. The portal will force the end time to be one minute before the start time of the following profile. During this minute, the default profile will become active. If you don't want the default profile to become active between recurring profiles, leave the end time field empty.
+
+> [!TIP]
+> To set up multiple contiguous profiles using the portal, leave the end time empty. The current profile will stop being used when the next profile becomes active. Only specify an end time when you want to revert to the default profile.
+ ## Multiple profiles using templates, CLI, and PowerShell

When creating multiple profiles using templates, the CLI, and PowerShell, follow the guidelines below.

## [ARM templates](#tab/templates)
-Follow the rules below when using ARM templates to create autoscale settings with multiple profiles:
- See the autoscale section of the [ARM template resource definition](https://learn.microsoft.com/azure/templates/microsoft.insights/autoscalesettings) for a full template reference.
-* Create a default profile for each recurring profile. If you have two recurring profiles, create two matching default profiles.
-* The default profile must contain a `recurrence` section that is the same as the recurring profile, with the `hours` and `minutes` elements set for the end time of the recurring profile. If you don't specify a recurrence with a start time for the default profile, the last recurrence rule will remain in effect.
-* The `name` element for the default profile is an object with the following format: `"name": "{\"name\":\"Auto created default scale condition\",\"for\":\"Recurring profile name\"}"` where the recurring profile name is the value of the `name` element for the recurring profile. If the name isn't specified correctly, the default profile will appear as another recurring profile.
- *The rules above don't apply for non-recurring scheduled profiles.
+The template has no setting for an end time. A profile remains active until the next profile's start time.
+ ## Add a recurring profile using ARM templates
-The example below shows how to create two recurring profiles. One profile for weekends between 06:00 and 19:00, Saturday and Sunday, and a second for Mondays between 04:00 and 15:00. Note the two default profiles, one for each recurring profile.
+The example below shows how to create two recurring profiles: a Weekend profile starting at 00:01 on Saturday morning, and a Weekday profile starting at 04:00 on Monday. That means the Weekend profile starts on Saturday morning at one minute past midnight and ends on Monday morning at 04:00, while the Weekday profile starts at 04:00 on Monday and ends just after midnight on Saturday morning.
Use the following command to deploy the template: ` az deployment group create --name VMSS1-Autoscale-607 --resource-group rg-vmss1 --template-file VMSS1-autoscale.json`
where *VMSS1-autoscale.json* is the file containing the JSON object below.
"targetResourceUri": "/subscriptions/abc123456-987-f6e5-d43c-9a8d8e7f6541/resourceGroups/rg-vmss1/providers/Microsoft.Compute/virtualMachineScaleSets/VMSS1", "profiles": [ {
- "name": "Monday profile",
+ "name": "Weekday profile",
"capacity": { "minimum": "3", "maximum": "20",
where *VMSS1-autoscale.json* is the file containing the JSON object below.
"schedule": { "timeZone": "E. Europe Standard Time", "days": [
- "Saturday",
- "Sunday"
+ "Saturday"
], "hours": [
- 6
- ],
- "minutes": [
0
- ]
- }
- }
- },
- {
- "name": "{\"name\":\"Auto created default scale condition\",\"for\":\"Weekend profile\"}",
- "capacity": {
- "minimum": "2",
- "maximum": "10",
- "default": "2"
- },
- "recurrence": {
- "frequency": "Week",
- "schedule": {
- "timeZone": "E. Europe Standard Time",
- "days": [
- "Saturday",
- "Sunday"
- ],
- "hours": [
- 19
- ],
- "minutes": [
- 0
- ]
- }
- },
- "rules": [
- {
- "scaleAction": {
- "direction": "Increase",
- "type": "ChangeCount",
- "value": "1",
- "cooldown": "PT3M"
- },
- "metricTrigger": {
- "metricName": "Percentage CPU",
- "metricNamespace": "microsoft.compute/virtualmachinescalesets",
- "metricResourceUri": "/subscriptions/abc123456-987-f6e5-d43c-9a8d8e7f6541/resourceGroups/rg-vmss1/providers/Microsoft.Compute/virtualMachineScaleSets/VMSS1",
- "operator": "GreaterThan",
- "statistic": "Average",
- "threshold": 50,
- "timeAggregation": "Average",
- "timeGrain": "PT1M",
- "timeWindow": "PT1M",
- "Dimensions": [],
- "dividePerInstance": false
- }
- },
- {
- "scaleAction": {
- "direction": "Decrease",
- "type": "ChangeCount",
- "value": "1",
- "cooldown": "PT3M"
- },
- "metricTrigger": {
- "metricName": "Percentage CPU",
- "metricNamespace": "microsoft.compute/virtualmachinescalesets",
- "metricResourceUri": "/subscriptions/abc123456-987-f6e5-d43c-9a8d8e7f6541/resourceGroups/rg-vmss1/providers/Microsoft.Compute/virtualMachineScaleSets/VMSS1",
- "operator": "LessThan",
- "statistic": "Average",
- "threshold": 39,
- "timeAggregation": "Average",
- "timeGrain": "PT1M",
- "timeWindow": "PT3M",
- "Dimensions": [],
- "dividePerInstance": false
- }
- }
- ]
- },
- {
- "name": "{\"name\":\"Auto created default scale condition\",\"for\":\"Monday profile\"}",
- "capacity": {
- "minimum": "2",
- "maximum": "10",
- "default": "2"
- },
- "recurrence": {
- "frequency": "Week",
- "schedule": {
- "timeZone": "E. Europe Standard Time",
- "days": [
- "Monday"
- ],
- "hours": [
- 15
], "minutes": [
- 0
+ 1
] }
- },
- "rules": [
- {
- "scaleAction": {
- "direction": "Increase",
- "type": "ChangeCount",
- "value": "1",
- "cooldown": "PT3M"
- },
- "metricTrigger": {
- "metricName": "Percentage CPU",
- "metricNamespace": "microsoft.compute/virtualmachinescalesets",
- "metricResourceUri": "/subscriptions/abc123456-987-f6e5-d43c-9a8d8e7f6541/resourceGroups/rg-vmss1/providers/Microsoft.Compute/virtualMachineScaleSets/VMSS1",
- "operator": "GreaterThan",
- "statistic": "Average",
- "threshold": 50,
- "timeAggregation": "Average",
- "timeGrain": "PT1M",
- "timeWindow": "PT1M",
- "Dimensions": [],
- "dividePerInstance": false
- }
- },
- {
- "scaleAction": {
- "direction": "Decrease",
- "type": "ChangeCount",
- "value": "1",
- "cooldown": "PT3M"
- },
- "metricTrigger": {
- "metricName": "Percentage CPU",
- "metricNamespace": "microsoft.compute/virtualmachinescalesets",
- "metricResourceUri": "/subscriptions/abc123456-987-f6e5-d43c-9a8d8e7f6541/resourceGroups/rg-vmss1/providers/Microsoft.Compute/virtualMachineScaleSets/VMSS1",
- "operator": "LessThan",
- "statistic": "Average",
- "threshold": 39,
- "timeAggregation": "Average",
- "timeGrain": "PT1M",
- "timeWindow": "PT3M",
- "Dimensions": [],
- "dividePerInstance": false
- }
- }
- ]
+ }
} ], "notifications": [],
where *VMSS1-autoscale.json* is the file containing the JSON object below.
} ]
-}
-
+}
``` ## [CLI](#tab/cli)
$DefaultProfileThursdayProfile = New-AzAutoscaleProfile -DefaultCapacity "1" -Ma
* [Autoscale CLI reference](https://learn.microsoft.com/cli/azure/monitor/autoscale?view=azure-cli-latest) * [ARM template resource definition](https://learn.microsoft.com/azure/templates/microsoft.insights/autoscalesettings)
-* [PowerShell Az.Monitor Reference](https://learn.microsoft.com/powershell/module/az.monitor/#monitor)
+* [PowerShell Az.Monitor reference](https://learn.microsoft.com/powershell/module/az.monitor/#monitor)
* [REST API reference. Autoscale Settings](https://learn.microsoft.com/rest/api/monitor/autoscale-settings).
-* [Tutorial: Automatically scale a virtual machine scale set with an Azure template](https://learn.microsoft.com/azure/virtual-machine-scale-sets/tutorial-autoscale-template)
-* [Tutorial: Automatically scale a virtual machine scale set with the Azure CLI](https://learn.microsoft.com/azure/virtual-machine-scale-sets/tutorial-autoscale-cli)
-* [Tutorial: Automatically scale a virtual machine scale set with an Azure template](https://learn.microsoft.com/azure/virtual-machine-scale-sets/tutorial-autoscale-powershell)
+* [Tutorial: Automatically scale a Virtual Machine Scale Set with an Azure template](https://learn.microsoft.com/azure/virtual-machine-scale-sets/tutorial-autoscale-template)
+* [Tutorial: Automatically scale a Virtual Machine Scale Set with the Azure CLI](https://learn.microsoft.com/azure/virtual-machine-scale-sets/tutorial-autoscale-cli)
+* [Tutorial: Automatically scale a Virtual Machine Scale Set with Azure PowerShell](https://learn.microsoft.com/azure/virtual-machine-scale-sets/tutorial-autoscale-powershell)
azure-monitor Daily Cap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/daily-cap.md
A daily cap on a Log Analytics workspace allows you to avoid unexpected increase
> [!IMPORTANT] > Use care when setting a daily cap because when data collection stops, your ability to observe and receive alerts about the health conditions of your resources will be impacted. It can also impact other Azure services and solutions whose functionality may depend on up-to-date data being available in the workspace. Your goal shouldn't be to regularly hit the daily limit, but rather to use it as an infrequent method to avoid unplanned charges resulting from an unexpected increase in the volume of data collected. >
-> For strategies to reduce your Azure Monitor costs, see [Cost optimization and Azure Monitor](/azure/azure-monitor/best-practices-cost).
+> For strategies to reduce your Azure Monitor costs, see [Cost optimization and Azure Monitor](../best-practices-cost.md).
## How the daily cap works Each workspace has a daily cap that defines its own data volume limit. When the daily cap is reached, a warning banner appears across the top of the page for the selected Log Analytics workspace in the Azure portal, and an operation event is sent to the *Operation* table under the **LogManagement** category. You can optionally create an alert rule to send an alert when this event is created.
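The operation event mentioned above can drive a log alert rule. As a sketch, a query over the *Operation* table might look like the following; the `OverQuota` detail text is an assumption that should be verified against the actual event recorded in your workspace:

```kusto
// Hypothetical filter: look for daily-cap events in the Operation table.
Operation
| where Detail contains "OverQuota"
```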
To help you determine an appropriate daily cap for your workspace, see [Azure M
## Workspaces with Microsoft Defender for Cloud
-Some data security-related data types collected [Microsoft Defender for Cloud](../../security-center/index.yml) or Microsoft Sentinel are collected despite any daily cap. The data types listed below will not be capped except for workspaces in which Microsoft Defender for Cloud was installed before June 19, 2017:
+Some security-related data types collected by [Microsoft Defender for Cloud](../../security-center/index.yml) or Microsoft Sentinel are collected despite any daily cap, when the [Microsoft Defender for Servers](../../defender-for-cloud/plan-defender-for-servers-select-plan.md) solution was enabled on a workspace after June 19, 2017. The following data types are subject to this exception from the daily cap:
- WindowsEvent - SecurityAlert
azure-monitor Logs Ingestion Api Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-ingestion-api-overview.md
The endpoint URI uses the following format, where the `Data Collection Endpoint`
### Body
-The body of the call includes the custom data to be sent to Azure Monitor. The shape of the data must be a JSON object or array with a structure that matches the format expected by the stream in the DCR.
+The body of the call includes the custom data to be sent to Azure Monitor. The shape of the data must be a JSON object or array with a structure that matches the format expected by the stream in the DCR. The request body must also be encoded in UTF-8 to prevent issues with data transmission.
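A minimal sketch of constructing such a body, assuming hypothetical column names for the DCR stream; only the body construction is shown, not the call to the ingestion endpoint:

```python
import json

# Hypothetical records matching the columns a DCR stream might expect.
records = [
    {"Time": "2023-01-12T10:00:00Z", "Computer": "vm-01", "Message": "heartbeat"},
]

# The request body is a JSON array, encoded as UTF-8 bytes.
body = json.dumps(records).encode("utf-8")

assert isinstance(body, bytes)
# Round-trip check: decoding and parsing returns the original structure.
assert json.loads(body.decode("utf-8")) == records
```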
## Sample call
azure-monitor Profiler Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-troubleshooting.md
If your web app is an ASP.NET Core application, it must be running on the [lates
Profiler isn't currently supported on free or shared app service plans. Upgrade to one of the basic plans for Profiler to start working.
+> [!NOTE]
+> The Azure Functions consumption plan isn't supported. See [Profile live Azure Functions app with Application Insights](./profiler-azure-functions.md).
+ ## Make sure you're searching for Profiler data within the right timeframe If the data you're trying to view is older than a couple of weeks, try limiting your time filter and try again. Traces are deleted after seven days.
azure-monitor Vminsights Enable Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-enable-policy.md
Last updated 12/13/2022
# Enable VM insights by using Azure Policy
-[Azure Policy](/azure/governance/policy/overview) lets you set and enforce requirements for all new resources you create and resources you modify. VM insights policy initiatives, which are predefined sets of policies created for VM insights, install the agents required for VM insights and enable monitoring on all new virtual machines in your Azure environment. This article explains how to enable VM insights for Azure virtual machines, Virtual Machine Scale Sets, and hybrid virtual machines connected with Azure Arc using predefined VM insights policy initiates.
+[Azure Policy](../../governance/policy/overview.md) lets you set and enforce requirements for all new resources you create and resources you modify. VM insights policy initiatives, which are predefined sets of policies created for VM insights, install the agents required for VM insights and enable monitoring on all new virtual machines in your Azure environment. This article explains how to enable VM insights for Azure virtual machines, Virtual Machine Scale Sets, and hybrid virtual machines connected with Azure Arc using predefined VM insights policy initiatives.
> [!NOTE] > For information about how to use Azure Policy with Azure virtual machine scale sets and how to work with Azure Policy directly to enable Azure virtual machines, see [Deploy Azure Monitor at scale using Azure Policy](../best-practices.md).
To track the progress of remediation tasks, select **Remediate** from the **Poli
Learn how to: - [View VM insights Map](vminsights-maps.md) to see application dependencies. -- [View Azure VM performance](vminsights-performance.md) to identify bottlenecks and overall utilization of your VM's performance.
+- [View Azure VM performance](vminsights-performance.md) to identify bottlenecks and overall utilization of your VM's performance.
azure-netapp-files Azure Government https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-government.md
na Previously updated : 10/14/2021 Last updated : 01/12/2023
This article describes Azure NetApp Files feature availability in Azure Governme
## Feature availability
-For Azure Government regions supported by Azure NetApp Files, see the *[Products Available by Region page](https://azure.microsoft.com/global-infrastructure/services/?products=netapp&regions=usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia)*.
+For Azure Government regions supported by Azure NetApp Files, see the *[Products Available by Region page](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=netapp&regions=usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true)*.
All [Azure NetApp Files features](whats-new.md) available on Azure public cloud are also available on supported Azure Government regions ***except for the features listed in the following table***:
All [Azure NetApp Files features](whats-new.md) available on Azure public cloud
|: |: |: | | Azure NetApp Files cross-region replication | Generally available (GA) | [Limited](cross-region-replication-introduction.md#supported-region-pairs) | | Azure NetApp Files backup | Public preview | No |
+| Standard network features | Generally available (GA) | No |
## Portal access
azure-resource-manager Bicep Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-cli.md
Title: Bicep CLI commands and overview description: Describes the commands that you can use in the Bicep CLI. These commands include building Azure Resource Manager templates from Bicep. Previously updated : 09/14/2022 Last updated : 01/10/2023 # Bicep CLI commands
The command returns an array of available versions.
## publish
-The `publish` command adds a module to a registry. The Azure container registry must exist and the account publishing to the registry must have the correct permissions. For more information about setting up a module registry, see [Use private registry for Bicep modules](private-module-registry.md).
+The `publish` command adds a module to a registry. The Azure container registry must exist, and the account publishing to the registry must have the correct profile and permissions to access it. You can configure the profile and credential precedence for authenticating to the registry in the [Bicep config file](./bicep-config-modules.md#configure-profiles-and-credentials). For more information about setting up a module registry, see [Use private registry for Bicep modules](private-module-registry.md).
After publishing the file to the registry, you can [reference it in a module](modules.md#file-in-registry).
The `publish` command doesn't recognize aliases that you've defined in a [bicepc
When your Bicep file uses modules that are published to a registry, the `restore` command gets copies of all the required modules from the registry. It stores those copies in a local cache. A Bicep file can only be built when the external files are available in the local cache. Typically, you don't need to run `restore` because it's called automatically by `build`.
-To restore external modules to the local cache, the account must have the correct permissions to access the registry. You can configure the credential precedence for authenticating to the registry in the [Bicep config file](./bicep-config-modules.md#credentials-for-publishingrestoring-modules).
+To restore external modules to the local cache, the account must have the correct profile and permissions to access the registry. You can configure the profile and credential precedence for authenticating to the registry in the [Bicep config file](./bicep-config-modules.md#configure-profiles-and-credentials).
To use the restore command, you must have Bicep CLI version **0.4.1008 or later**. This command is currently only available when calling the Bicep CLI directly. It's not currently available through the Azure CLI command.
azure-resource-manager Bicep Config Modules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-config-modules.md
Title: Module setting for Bicep config description: Describes how to customize configuration values for modules in Bicep deployments. Previously updated : 04/08/2022 Last updated : 01/11/2023 # Add module settings in the Bicep config file
-In a **bicepconfig.json** file, you can create aliases for module paths and configure credential precedence for restoring a module.
+In a **bicepconfig.json** file, you can create aliases for module paths and configure profile and credential precedence for publishing and restoring modules.
-This article describes the settings that are available for working with [modules](modules.md).
+This article describes the settings that are available for working with [Bicep modules](modules.md).
## Aliases for modules
You can override the public module registry alias definition in the bicepconfig.
} ```
-## Credentials for publishing/restoring modules
+## Configure profiles and credentials
-To [publish](bicep-cli.md#publish) modules to a private module registry or to [restore](bicep-cli.md#restore) external modules to the local cache, the account must have the correct permissions to access the registry. You can configure the credential precedence for authenticating to the registry. By default, Bicep uses the credentials from the user authenticated in Azure CLI or Azure PowerShell. To customize the credential precedence, see [Add credential precedence to Bicep config](bicep-config.md#credential-precedence).
+To [publish](bicep-cli.md#publish) modules to a private module registry or to [restore](bicep-cli.md#restore) external modules to the local cache, the account must have the correct permissions to access the registry. You can configure the profile and the credential precedence for authenticating to the registry. By default, Bicep uses the `AzureCloud` profile and the credentials from the user authenticated in Azure CLI or Azure PowerShell. You can customize `currentProfile` and `credentialPrecedence` in the config file.
+
+```json
+{
+ "cloud": {
+ "currentProfile": "AzureCloud",
+ "profiles": {
+ "AzureCloud": {
+ "resourceManagerEndpoint": "https://management.azure.com",
+ "activeDirectoryAuthority": "https://login.microsoftonline.com"
+ },
+ "AzureChinaCloud": {
+ "resourceManagerEndpoint": "https://management.chinacloudapi.cn",
+ "activeDirectoryAuthority": "https://login.chinacloudapi.cn"
+ },
+ "AzureUSGovernment": {
+ "resourceManagerEndpoint": "https://management.usgovcloudapi.net",
+ "activeDirectoryAuthority": "https://login.microsoftonline.us"
+ }
+ },
+ "credentialPrecedence": [
+ "AzureCLI",
+ "AzurePowerShell"
+ ]
+ }
+}
+```
+
+The available profiles are:
+
+- AzureCloud
+- AzureChinaCloud
+- AzureUSGovernment
+
+You can customize these profiles, or add new profiles for your on-premises environments.
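As a sketch of such a customization, a bicepconfig.json entry for a hypothetical internal cloud might look like the following; the profile name and endpoint URLs are placeholders, not real endpoints:

```json
{
  "cloud": {
    "currentProfile": "MyInternalCloud",
    "profiles": {
      "MyInternalCloud": {
        "resourceManagerEndpoint": "https://management.contoso.example",
        "activeDirectoryAuthority": "https://login.contoso.example"
      }
    }
  }
}
```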
+
+The available credential types are:
+
+- AzureCLI
+- AzurePowerShell
+- Environment
+- ManagedIdentity
+- VisualStudio
+- VisualStudioCode
+ ## Next steps
azure-resource-manager Bicep Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-config.md
Title: Bicep config file description: Describes the configuration file for your Bicep deployments Previously updated : 12/06/2022 Last updated : 01/09/2023 # Configure your Bicep environment
To create a `bicepconfig.json` file in Visual Studio Code, open the Command Pale
## Available settings
-When working with [modules](modules.md), you can add aliases for module paths. These aliases simplify your Bicep file because you don't have to repeat complicated paths. For more information, see [Add module settings to Bicep config](bicep-config-modules.md).
+When working with [modules](modules.md), you can add aliases for module paths. These aliases simplify your Bicep file because you don't have to repeat complicated paths. You can also configure cloud profile and credential precedence for authenticating to Azure from Bicep CLI and Visual Studio Code. The credentials are used to publish modules to registries and to restore external modules to the local cache when using the insert resource function. For more information, see [Add module settings to Bicep config](bicep-config-modules.md).
The [Bicep linter](linter.md) checks Bicep files for syntax errors and best practice violations. You can override the default settings for the Bicep file validation by modifying `bicepconfig.json`. For more information, see [Add linter settings to Bicep config](bicep-config-linter.md).
-You can also configure the credential precedence for authenticating to Azure from Bicep CLI and Visual Studio Code. The credentials are used to publish modules to registries and to restore external modules to the local cache when using the insert resource function.
-
-## Credential precedence
-
-You can configure the credential precedence for authenticating to the registry. By default, Bicep uses the credentials from the user authenticated in Azure CLI or Azure PowerShell. To customize the credential precedence, add `cloud` and `credentialPrecedence` elements to the config file.
-
-```json
-{
- "cloud": {
- "credentialPrecedence": [
- "AzureCLI",
- "AzurePowerShell"
- ]
- }
-}
-```
-
-The available credential types are:
--- AzureCLI-- AzurePowerShell-- Environment-- ManagedIdentity-- VisualStudio-- VisualStudioCode-- ## Intellisense The Bicep extension for Visual Studio Code supports intellisense for your `bicepconfig.json` file. Use the intellisense to discover available properties and values.
azure-resource-manager Private Module Registry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/private-module-registry.md
Title: Create private registry for Bicep module description: Learn how to set up an Azure container registry for private Bicep modules Previously updated : 04/01/2022 Last updated : 01/10/2023 # Create private registry for Bicep modules
A Bicep registry is hosted on [Azure Container Registry (ACR)](../../container-r
1. To publish modules to a registry, you must have permission to **push** an image. To deploy a module from a registry, you must have permission to **pull** the image. For more information about the roles that grant adequate access, see [Azure Container Registry roles and permissions](../../container-registry/container-registry-roles.md).
-1. Depending on the type of account you use to deploy the module, you may need to customize which credentials are used. These credentials are needed to get the modules from the registry. By default, credentials are obtained from Azure CLI or Azure PowerShell. You can customize the precedence for getting the credentials in the **bicepconfig.json** file. For more information, see [Credentials for restoring modules](bicep-config-modules.md#credentials-for-publishingrestoring-modules).
+1. Depending on the type of account you use to deploy the module, you may need to customize which credentials are used. These credentials are needed to get the modules from the registry. By default, credentials are obtained from Azure CLI or Azure PowerShell. You can customize the precedence for getting the credentials in the **bicepconfig.json** file. For more information, see [Credentials for restoring modules](bicep-config-modules.md#configure-profiles-and-credentials).
> [!IMPORTANT] > The private container registry is only available to users with the required access. However, it's accessed through the public internet. For more security, you can require access through a private endpoint. See [Connect privately to an Azure container registry using Azure Private Link](../../container-registry/container-registry-private-link.md).
azure-resource-manager Microsoft Common Dropdown https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/microsoft-common-dropdown.md
When filtering is enabled, the control includes a text box for adding the filter
"type": "Microsoft.Common.DropDown", "label": "Example drop down", "placeholder": "",
- "defaultValue": "Value two",
+ "defaultValue": ["Value two"],
"toolTip": "", "multiselect": true, "selectAll": true,
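Pieced together, a complete multiselect drop-down element with an array `defaultValue` might look like the sketch below; the element name and allowed values are illustrative, not from the source article:

```json
{
  "name": "exampleDropDown",
  "type": "Microsoft.Common.DropDown",
  "label": "Example drop down",
  "placeholder": "",
  "defaultValue": ["Value two"],
  "toolTip": "",
  "multiselect": true,
  "selectAll": true,
  "constraints": {
    "allowedValues": [
      { "label": "Value one", "value": "one" },
      { "label": "Value two", "value": "two" }
    ],
    "required": true
  },
  "visible": true
}
```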
azure-video-indexer Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/network-security.md
# NSG service tags for Azure Video Indexer
-Azure Video Indexer is a service hosted on Azure. In some cases the service needs to interact with other services in order to index video files (for example, a Storage account) or when you orchestrate indexing jobs against Azure Video Indexer API endpoint using your own service hosted on Azure (for example, AKS, Web Apps, Logic Apps, Functions).
+Azure Video Indexer is a service hosted on Azure. In some cases, the service needs to interact with other services to index video files (for example, a Storage account), or you may orchestrate indexing jobs against the Azure Video Indexer API endpoint using your own service hosted on Azure (for example, AKS, Web Apps, Logic Apps, Functions).
+
+> [!NOTE]
+> If you're already using the "AzureVideoAnalyzerForMedia" network service tag, you may experience issues with your network security group starting 9 January 2023. This is because we're moving to a new service tag label, "VideoIndexer", which was unfortunately not launched to GA in the UI before the preceding "AzureVideoAnalyzerForMedia" tag was removed. The mitigation is to run the following command from PowerShell:
+
+`$nsg | Add-AzNetworkSecurityRuleConfig -Name $rulename -Description "Testing our Service Tag" -Access Allow -Protocol * -Direction Inbound -Priority 100 -SourceAddressPrefix "YourTagDisplayName" -SourcePortRange * -DestinationAddressPrefix * -DestinationPortRange $port`
+
+Where `YourTagDisplayName` needs to be replaced with `VideoIndexer`.
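The command above references `$nsg`, `$rulename`, and `$port` without defining them. A hedged setup sketch, with placeholder resource names, might look like this:

```powershell
# Placeholder names; replace with your NSG and resource group.
$nsg = Get-AzNetworkSecurityGroup -Name "my-nsg" -ResourceGroupName "my-rg"
$rulename = "Allow-VideoIndexer"
$port = 443

$nsg | Add-AzNetworkSecurityRuleConfig -Name $rulename -Description "Testing our Service Tag" `
  -Access Allow -Protocol "*" -Direction Inbound -Priority 100 `
  -SourceAddressPrefix "VideoIndexer" -SourcePortRange "*" `
  -DestinationAddressPrefix "*" -DestinationPortRange $port

# Persist the new rule to the network security group.
$nsg | Set-AzNetworkSecurityGroup
```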
+ Use [Network Security Groups with Service Tags](../virtual-network/service-tags-overview.md) to limit access to your resources on a network level. A service tag represents a group of IP address prefixes from a given Azure service, in this case Azure Video Indexer. Microsoft manages the address prefixes grouped by the service tag and automatically updates the service tag as addresses change in our backend, minimizing the complexity of frequent updates to network security rules by the customer.
azure-video-indexer Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/release-notes.md
To stay up-to-date with the most recent Azure Video Indexer developments, this a
For more information, see [supported languages](language-support.md).
+### Face grouping
+
+The number of low-quality face detection occurrences in the UI and [insights.json](video-indexer-output-json-v2.md#insights) has been significantly reduced, enhancing quality and usability through an improved grouping algorithm.
+ ## November 2022 ### Speakers' names can now be edited from the Azure Video Indexer website
backup Backup Azure Backup Import Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-backup-import-export.md
The amount of time it takes to process an Azure import job varies. Process time
To monitor the status of your import job from the Azure portal, go to the **Azure Data Box** pane and select the job.
-For more information on the status of the import jobs, see [Monitor Azure Import/Export Jobs](/azure/import-export/storage-import-export-view-drive-status?tabs=azure-portal-preview).
+For more information on the status of the import jobs, see [Monitor Azure Import/Export Jobs](../import-export/storage-import-export-view-drive-status.md?tabs=azure-portal-preview).
### Finish the workflow
After the initial backup is finished, you can safely delete the data imported to
## Next steps
-* For any questions about the Azure Import/Export service workflow, see [Use the Microsoft Azure Import/Export service to transfer data to Blob storage](../import-export/storage-import-export-service.md).
+* For any questions about the Azure Import/Export service workflow, see [Use the Microsoft Azure Import/Export service to transfer data to Blob storage](../import-export/storage-import-export-service.md).
backup Backup Azure Mars Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-mars-troubleshoot.md
Unable to find changes in a file. This could be due to various reasons. Please r
## MARS offline seeding using customer-owned disks (Import/Export) is not working
-Azure Import/Export now uses Azure Data Box APIs for offline seeding on customer-owned disks. The Azure portal also list the Import/Export jobs created using the new API under [Azure Data Box jobs](/azure/import-export/storage-import-export-view-drive-status?tabs=azure-portal-preview) with the Model column as Import/Export.
+Azure Import/Export now uses Azure Data Box APIs for offline seeding on customer-owned disks. The Azure portal also lists the Import/Export jobs created using the new API under [Azure Data Box jobs](../import-export/storage-import-export-view-drive-status.md?tabs=azure-portal-preview), with the Model column set to Import/Export.
MARS agent versions lower than *2.0.9250.0* used the [old Azure Import/Export APIs](/rest/api/storageimportexport/), which will be discontinued after February 28, 2023; after that date, MARS agents lower than version 2.0.9250.0 can't do offline seeding using your own disks. We therefore recommend using MARS agent 2.0.9250 or higher, which uses the new Azure Data Box APIs for offline seeding on your own disks.
If you've ongoing Import/Export jobs created from older MARS agents, you can sti
## Next steps - Get more details on [how to back up Windows Server with the Azure Backup agent](tutorial-backup-windows-server-to-azure.md).-- If you need to restore a backup, see [restore files to a Windows machine](backup-azure-restore-windows-server.md).
+- If you need to restore a backup, see [restore files to a Windows machine](backup-azure-restore-windows-server.md).
backup Backup Azure Vms Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-vms-encryption.md
To enable backups for ADE encrypted VMs using Azure RBAC enabled key vaults, you
:::image type="content" source="./media/backup-azure-vms-encryption/enable-key-vault-encryption-inline.png" alt-text="Screenshot shows the checkbox to enable ADE encrypted key vault." lightbox="./media/backup-azure-vms-encryption/enable-key-vault-encryption-expanded.png":::
-Learn about the [different available roles](/azure/key-vault/general/rbac-guide?tabs=azure-cli#azure-built-in-roles-for-key-vault-data-plane-operations). The **Key Vault Administrator** role can allow permissions to *get*, *list*, and *back up* both secret and key.
+Learn about the [different available roles](../key-vault/general/rbac-guide.md?tabs=azure-cli#azure-built-in-roles-for-key-vault-data-plane-operations). The **Key Vault Administrator** role can allow permissions to *get*, *list*, and *back up* both secret and key.
-For Azure RBAC enabled key vaults, you can create custom role with the following set of permissions. Learn [how to create custom role](/azure/active-directory/roles/custom-create).
+For Azure RBAC enabled key vaults, you can create a custom role with the following set of permissions. Learn [how to create a custom role](../active-directory/roles/custom-create.md).
| Action | Description |
| --- | --- |
You can also set the access policy using [PowerShell](./backup-azure-vms-automat
If you run into any issues, review these articles: - [Common errors](backup-azure-vms-troubleshoot.md) when backing up and restoring encrypted Azure VMs.-- [Azure VM agent/backup extension](backup-azure-troubleshoot-vm-backup-fails-snapshot-timeout.md) issues.
+- [Azure VM agent/backup extension](backup-azure-troubleshoot-vm-backup-fails-snapshot-timeout.md) issues.
bastion Bastion Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-overview.md
This figure shows the architecture of an Azure Bastion deployment. In this diagr
## <a name="host-scaling"></a>Host scaling
-Azure Bastion supports manual host scaling. You can configure the number of host instances (scale units) in order to manage the number of concurrent RDP/SSH connections that Azure Bastion can support. Increasing the number of host instances lets Azure Bastion manage more concurrent sessions. Decreasing the number of instances decreases the number of concurrent supported sessions. Azure Bastion supports up to 50 host instances. This feature is available for the Azure Bastion Standard SKU only.
+Azure Bastion supports manual host scaling. You can configure the number of host **instances** (scale units) in order to manage the number of concurrent RDP/SSH connections that Azure Bastion can support. Increasing the number of host instances lets Azure Bastion manage more concurrent sessions. Decreasing the number of instances decreases the number of concurrent supported sessions. Azure Bastion supports up to 50 host instances. This feature is available for the Azure Bastion Standard SKU only.
For more information, see the [Configuration settings](configuration-settings.md#instance) article.
bastion Tutorial Protect Bastion Host https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/tutorial-protect-bastion-host.md
In this tutorial, you deploy Bastion using the Standard SKU tier and adjust host
Azure Bastion is a PaaS service that's maintained for you, not a bastion host that you install on one of your VMs and maintain yourself. For more information about Azure Bastion, see [What is Azure Bastion?](bastion-overview.md) > [!IMPORTANT]
-> Azure DDoS protection Standard incurs a cost per public IP address in the virtual network where you enable the service. Ensure you delete the resources in this tutorial if you aren't using the resources in the future. For information about pricing, see [Azure DDoS Protection Pricing](https://azure.microsoft.com/pricing/details/ddos-protection/). For more information about Azure DDoS protection, see [What is Azure DDoS Protection?](/azure/ddos-protection/ddos-protection-overview).
+> Azure DDoS protection Standard incurs a cost per public IP address in the virtual network where you enable the service. Ensure you delete the resources in this tutorial if you aren't using the resources in the future. For information about pricing, see [Azure DDoS Protection Pricing](https://azure.microsoft.com/pricing/details/ddos-protection/). For more information about Azure DDoS protection, see [What is Azure DDoS Protection?](../ddos-protection/ddos-protection-overview.md).
In this tutorial, you'll learn how to:
In this tutorial, you deployed Bastion to a virtual network and connected to a V
> [Bastion features and configuration settings](configuration-settings.md) > [!div class="nextstepaction"]
-> [Bastion - VM connections and features](vm-about.md)
+> [Bastion - VM connections and features](vm-about.md)
cloud-shell Cloud Shell Predictive Intellisense https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/cloud-shell-predictive-intellisense.md
For more information on PowerShell profiles, see [About_Profiles][06].
[01]: /powershell/module/psreadline/about/about_psreadline [02]: /powershell/azure/az-predictor [03]: /powershell/module/psreadline/set-psreadlineoption
-[04]: /azure/cloud-shell/using-cloud-shell-editor
+[04]: ./using-cloud-shell-editor.md
[05]: /powershell/scripting/learn/shell/using-predictors
-[06]: /powershell/module/microsoft.powershell.core/about/about_profiles
-
+[06]: /powershell/module/microsoft.powershell.core/about/about_profiles
cloud-shell Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/features.md
Other
[02]: ../key-vault/general/manage-with-cli2.md#prerequisites [03]: ../service-fabric/service-fabric-cli.md [04]: ../storage/common/storage-use-azcopy-v10.md
-[05]: /azure/azure-functions/functions-run-local
+[05]: ../azure-functions/functions-run-local.md
[06]: /cli/azure/ [07]: /powershell/azure [08]: /powershell/scripting/whats-new/what-s-new-in-powershell-72
Other
[28]: medilets.png [29]: persisting-shell-storage.md [30]: quickstart-powershell.md
-[31]: quickstart.md
+[31]: quickstart.md
cognitive-services Azure Data Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/tutorials/azure-data-explorer.md
The [Anomaly Detector API](/azure/cognitive-services/anomaly-detector/overview-m
### Function 1: series_uv_anomalies_fl()
-The function **[series_uv_anomalies_fl()](/azure/data-explorer/kusto/functions-library/series-uv-anomalies-fl?tabs=adhoc)** detects anomalies in time series by calling the [Univariate Anomaly Detection API](/azure/cognitive-services/anomaly-detector/overview). The function accepts a limited set of time series as numerical dynamic arrays and the required anomaly detection sensitivity level. Each time series is converted into the required JSON (JavaScript Object Notation) format and posts it to the Anomaly Detector service endpoint. The service response has dynamic arrays of high/low/all anomalies, the modeled baseline time series, its normal high/low boundaries (a value above or below the high/low boundary is an anomaly) and the detected seasonality.
+The function **[series_uv_anomalies_fl()](/azure/data-explorer/kusto/functions-library/series-uv-anomalies-fl?tabs=adhoc)** detects anomalies in time series by calling the [Univariate Anomaly Detection API](../overview.md). The function accepts a limited set of time series as numerical dynamic arrays and the required anomaly detection sensitivity level. Each time series is converted into the required JSON (JavaScript Object Notation) format and posted to the Anomaly Detector service endpoint. The service response has dynamic arrays of high/low/all anomalies, the modeled baseline time series, its normal high/low boundaries (a value above or below the high/low boundary is an anomaly), and the detected seasonality.
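As a rough sketch of the conversion step described above, the body posted to the univariate detection endpoint carries a `series` array of timestamp/value pairs plus granularity and sensitivity. The daily timestamps and default values below are illustrative assumptions, not the function's exact output:

```python
from datetime import datetime, timedelta

def build_detect_request(values, start, granularity="daily", sensitivity=95):
    """Convert a numeric series (such as a KQL dynamic array) into the JSON
    body shape posted to the Anomaly Detector detection endpoint."""
    series = [
        {"timestamp": (start + timedelta(days=i)).strftime("%Y-%m-%dT%H:%M:%SZ"),
         "value": float(v)}
        for i, v in enumerate(values)
    ]
    return {"series": series, "granularity": granularity, "sensitivity": sensitivity}

# Example payload for a four-point daily series with one obvious spike.
body = build_detect_request([1.0, 1.1, 1.0, 9.9], datetime(2023, 1, 1))
```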
### Function 2: series_uv_change_points_fl()
cognitive-services Use Display Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Web-Search/use-display-requirements.md
Previously updated : 03/02/2022- Last updated : 01/12/2023+ # Bing Search API use and display requirements
These use and display requirements apply to any implementation of the content an
|Term |Description |
|--|--|
-|Answer | A category of results returned in a response. For example, a response from the Bing Web Search API can include answers in the categories of webpage results, image, video, visual, and news. |
+|Answer | A category of results returned in a response. For example, a response from the Bing Web Search API can include answers in the categories of webpage results, image, video, and news. |
|Response | Any and all answers and associated data received in response to a single call to a Search API. |
|Result | An item of information in an answer. For example, the set of data connected with a single news article is a result in a news answer. |
|Search APIs | collectively, the Bing Custom Search, Entity Search, Image Search, News Search, Video Search, Visual Search, Local Business Search, and Web Search APIs. |
Do not:
- Copy, store, or cache any data you receive from the Bing Spell Check or Bing Autosuggest APIs. - Use data you receive from Bing Spell Check or Bing Autosuggest APIs as part of any machine learning or similar algorithmic activity. Do not use this data to train, evaluate, or improve new or existing services that you or third parties might offer.
+- Display data received from the Bing Spell Check or Bing Autosuggest APIs on the same page as content from any general web search engine, large language model, or advertising network.
## Bing Search APIs
Do not:
- Use data received from the Search APIs as part of any machine learning or similar algorithmic activity. Do not use this data to train, evaluate, or improve new or existing services that you or third parties might offer.
+- Display data received from the Search APIs on the same page as content from any general web search engine, large language model, or advertising network.
+ - Modify the content of results (other than to reformat them in a way that does not violate any other requirement), unless required by law or agreed to by Microsoft.
- Omit attribution information and URLs associated with result content.
cognitive-services Build Enrollment App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/Tutorials/build-enrollment-app.md
The sample app is written using JavaScript and the React Native framework. It ca
1. Retrieve your FaceAPI endpoint and key in the Azure portal under the **Overview** tab of your resource. Don't check in your Face API key to your remote repository. > [!WARNING]
- > For local development and testing only, you can enter the API key and endpoint as environment variables. For final deployment, store the API key in a secure location and never in the code or environment variables. See the [Cognitive Services Authentication guide](/azure/cognitive-services/authentication) for other ways to authenticate the service.
+ > For local development and testing only, you can enter the API key and endpoint as environment variables. For final deployment, store the API key in a secure location and never in the code or environment variables. See the [Cognitive Services Authentication guide](../../authentication.md) for other ways to authenticate the service.
1. Run the app using either the Android Virtual Device emulator from Android Studio, or your own Android device. To test your app on a physical device, follow the relevant <a href="https://reactnative.dev/docs/running-on-device" title="React Native documentation" target="_blank">React Native documentation <span class="docon docon-navigate-external x-hidden-focus"></span></a>.
The sample app is written using JavaScript and the React Native framework. It ca
1. Retrieve your FaceAPI endpoint and key in the Azure portal under the **Overview** tab of your resource. > [!WARNING]
- > For local development and testing only, you can enter the API key and endpoint as environment variables. For final deployment, store the API key in a secure location and never in the code or environment variables. See the [Cognitive Services Authentication guide](/azure/cognitive-services/authentication) for other ways to authenticate the service.
+ > For local development and testing only, you can enter the API key and endpoint as environment variables. For final deployment, store the API key in a secure location and never in the code or environment variables. See the [Cognitive Services Authentication guide](../../authentication.md) for other ways to authenticate the service.
1. Run the app using either a simulated device from Xcode, or your own iOS device. To test your app on a physical device, follow the relevant <a href="https://reactnative.dev/docs/running-on-device" title="React Native documentation" target="_blank">React Native documentation <span class="docon docon-navigate-external x-hidden-focus"></span></a>.
When you're ready to release your app for production, you'll build an archive of
## Next steps
-In this guide, you learned how to set up your development environment and get started with the sample app. If you're new to React Native, you can read their [getting started docs](https://reactnative.dev/docs/getting-started) to learn more background information. It also may be helpful to familiarize yourself with [Face API](../overview-identity.md). Read the other sections on adding users before you begin development.
+In this guide, you learned how to set up your development environment and get started with the sample app. If you're new to React Native, you can read their [getting started docs](https://reactnative.dev/docs/getting-started) to learn more background information. It also may be helpful to familiarize yourself with [Face API](../overview-identity.md). Read the other sections on adding users before you begin development.
cognitive-services Batch Synthesis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/batch-synthesis.md
This diagram provides a high-level overview of the workflow.
![Diagram of the Batch Synthesis API workflow.](media/long-audio-api/long-audio-api-workflow.png) > [!TIP]
-> You can also use the [Speech SDK](speech-sdk.md) to create synthesized audio longer than 10 minutes by iterating over the text and synthesizing it in chunks.
+> You can also use the [Speech SDK](speech-sdk.md) to create synthesized audio longer than 10 minutes by iterating over the text and synthesizing it in chunks. For a C# example, see [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/speech_synthesis_samples.cs).
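The chunking approach in the tip above can be sketched as follows. The sentence splitting and size limit are illustrative assumptions, and the actual Speech SDK synthesis call on each chunk is omitted:

```python
# Minimal sketch: split long input text at sentence boundaries so each chunk
# stays under a size limit, then synthesize the chunks one at a time.

def chunk_text(text: str, max_chars: int = 1000) -> list:
    """Greedily pack sentences into chunks no longer than max_chars."""
    sentences = [s.strip() + "." for s in text.split(".") if s.strip()]
    chunks, current = [], ""
    for sentence in sentences:
        candidate = (current + " " + sentence).strip()
        if current and len(candidate) > max_chars:
            chunks.append(current)
            current = sentence
        else:
            current = candidate
    if current:
        chunks.append(current)
    return chunks
```

Each chunk can then be passed to a speech synthesizer in sequence and the resulting audio segments concatenated.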
You can use the following REST API operations for batch synthesis:
cognitive-services Ingestion Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/ingestion-client.md
Internally, the tool uses Speech and Language services, and follows best practic
:::image type="content" source="media/ingestion-client/architecture-1.png" alt-text="Diagram that shows the Ingestion Client Architecture.":::
-The following Speech service features are used by the Ingestion Client:
+The following Speech service feature is used by the Ingestion Client:
- [Batch speech-to-text](./batch-transcription.md): Transcribe large amounts of audio files asynchronously including speaker diarization and is typically used in post-call analytics scenarios. Diarization is the process of recognizing and separating speakers in mono channel audio data.-- [Speaker identification](./speaker-recognition-overview.md): Helps you determine an unknown speakerΓÇÖs identity within a group of enrolled speakers and is typically used for call center customer verification scenarios or fraud detection.
-Language service features used by the Ingestion Client:
+Here are some Language service features that are used by the Ingestion Client:
- [Personally Identifiable Information (PII) extraction and redaction](../language-service/personally-identifiable-information/how-to-call-for-conversations.md): Identify, categorize, and redact sensitive information in conversation transcription. - [Sentiment analysis and opinion mining](../language-service/sentiment-opinion-mining/overview.md): Analyze transcriptions and associate positive, neutral, or negative sentiment at the utterance and conversation-level.
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/language-support.md
Previously updated : 09/16/2022 Last updated : 01/12/2023
The tables in this section summarizes the locales and voices supported for Text-
Additional remarks for Text-to-speech locales are included in the [Voice styles and roles](#voice-styles-and-roles), [Prebuilt neural voices](#prebuilt-neural-voices), and [Custom Neural Voice](#custom-neural-voice) sections below.
+> [!TIP]
+> Check the [voice samples](https://azure.microsoft.com/services/cognitive-services/text-to-speech/#overview) and determine the right voice for your business needs.
+ [!INCLUDE [Language support include](includes/language-support/tts.md)] ### Voice styles and roles
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/overview.md
Speech feature summaries are provided below with links for more information.
Use [speech-to-text](speech-to-text.md) to transcribe audio into text, either in real time or asynchronously.
+> [!TIP]
+> You can try speech-to-text in [Speech Studio](https://aka.ms/speechstudio/speechtotexttool) without signing up or writing any code.
+ Convert audio to text from a range of sources, including microphones, audio files, and blob storage. Use speaker diarisation to determine who said what and when. Get readable transcripts with automatic formatting and punctuation. The base model may not be sufficient if the audio contains ambient noise or includes a lot of industry and domain-specific jargon. In these cases, you can create and train [custom speech models](custom-speech-overview.md) with acoustic, language, and pronunciation data. Custom speech models are private and can offer a competitive advantage.
-You can try speech to text with [this demo web app](https://azure.microsoft.com/services/cognitive-services/speech-to-text/#features) or in the [Speech Studio](https://aka.ms/speechstudio/speechtotexttool).
- ### Text-to-speech With [text to speech](text-to-speech.md), you can convert input text into humanlike synthesized speech. Use neural voices, which are humanlike voices powered by deep neural networks. Use the [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md) to fine-tune the pitch, pronunciation, speaking rate, volume, and more.
We offer quickstarts in many popular programming languages. Each quickstart is d
* [Speech-to-text quickstart](get-started-speech-to-text.md) * [Text-to-speech quickstart](get-started-text-to-speech.md) * [Speech translation quickstart](./get-started-speech-translation.md)
-* [Intent recognition quickstart](./get-started-intent-recognition.md)
-* [Speaker recognition quickstart](./get-started-speaker-recognition.md)
## Code samples
cognitive-services Speech To Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-to-text.md
In this overview, you learn about the benefits and capabilities of the speech-to
Speech-to-text, also known as speech recognition, enables real-time or offline transcription of audio streams into text. For a full list of available speech-to-text languages, see [Language and voice support for the Speech service](language-support.md?tabs=stt). > [!NOTE]
-> Microsoft uses the same recognition technology for Cortana and Office products.
+> Microsoft uses the same recognition technology for Windows and Office products.
## Get started
To get started, try the [speech-to-text quickstart](get-started-speech-to-text.m
In depth samples are available in the [Azure-Samples/cognitive-services-speech-sdk](https://aka.ms/csspeech/samples) repository on GitHub. There are samples for C# (including UWP, Unity, and Xamarin), C++, Java, JavaScript (including Browser and Node.js), Objective-C, Python, and Swift. Code samples for Go are available in the [Microsoft/cognitive-services-speech-sdk-go](https://github.com/Microsoft/cognitive-services-speech-sdk-go) repository on GitHub. - ## Batch transcription Batch transcription is a set of [Speech-to-text REST API](rest-speech-to-text.md) operations that enable you to transcribe a large amount of audio in storage. You can point to audio files with a shared access signature (SAS) URI and asynchronously receive transcription results. For more information on how to use the batch transcription API, see [How to use batch transcription](batch-transcription.md) and [Batch transcription samples (REST)](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/batch).
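As a hedged sketch of the batch transcription flow just described, a job is created by posting a body like the one below to the v3.0 transcriptions endpoint. The SAS URI and the property values are placeholder assumptions to adapt to your scenario:

```python
def build_transcription_request(sas_uris, locale="en-US", name="batch-job"):
    """Sketch of a create-transcription request body for the Speech-to-text
    REST API (v3.0): point the service at audio blobs via SAS URIs and
    receive transcription results asynchronously."""
    return {
        "contentUrls": list(sas_uris),
        "locale": locale,
        "displayName": name,
        "properties": {"diarizationEnabled": False},
    }

# The SAS URI below is a placeholder, not a working link.
body = build_transcription_request(
    ["https://example.blob.core.windows.net/audio/call1.wav?sv=placeholder"]
)
```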
cognitive-services Container Image Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/containers/container-image-tags.md
Release notes for `3.0.015490002-onprem-amd64`:
The [Translator][tr-containers] container image can be found on the `mcr.microsoft.com` container registry syndicate. It resides within the `azure-cognitive-services/translator` repository and is named `text-translation`. The fully qualified container image name is `mcr.microsoft.com/azure-cognitive-services/translator/text-translation:1.0.019410001-amd64-preview`.
-This container image has the following tags available.
+This container image has the following tags available. You can also find a full list of [tags on the MCR](https://mcr.microsoft.com/v2/azure-cognitive-services/translator/text-translation/tags/list).
| Image Tags | Notes |
|-|:|
cognitive-services Encrypt Data At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/encrypt-data-at-rest.md
az keyvault key delete \
### Delete training, validation, and training results data
- The Files API allows customers to upload their training data for the purpose of fine-tuning a model. This data is stored in Azure Storage, within the same region as the resource and logically isolated with their Azure subscription and API Credentials. Uploaded files can be deleted by the user via the [DELETE API operation](/azure/cognitive-services/openai/how-to/fine-tuning?pivots=programming-language-python#delete-your-training-files).
+ The Files API allows customers to upload their training data for the purpose of fine-tuning a model. This data is stored in Azure Storage, within the same region as the resource and logically isolated with their Azure subscription and API Credentials. Uploaded files can be deleted by the user via the [DELETE API operation](./how-to/fine-tuning.md?pivots=programming-language-python#delete-your-training-files).
### Delete fine-tuned models and deployments
-The Fine-tunes API allows customers to create their own fine-tuned version of the OpenAI models based on the training data that you've uploaded to the service via the Files APIs. The trained fine-tuned models are stored in Azure Storage in the same region, encrypted at rest and logically isolated with their Azure subscription and API credentials. Fine-tuned models and deployments can be deleted by the user by calling the [DELETE API operation](/azure/cognitive-services/openai/how-to/fine-tuning?pivots=programming-language-python#delete-your-model-deployment).
+The Fine-tunes API allows customers to create their own fine-tuned version of the OpenAI models based on the training data that you've uploaded to the service via the Files APIs. The trained fine-tuned models are stored in Azure Storage in the same region, encrypted at rest and logically isolated with their Azure subscription and API credentials. Fine-tuned models and deployments can be deleted by the user by calling the [DELETE API operation](./how-to/fine-tuning.md?pivots=programming-language-python#delete-your-model-deployment).
## Disable customer-managed keys
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/overview.md
The number of examples typically ranges from 0 to 100 depending on how many can f
### Models
-The service provides users access to several different models. Each model provides a different capability and price point. The GPT-3 base models are known as Davinci, Curie, Babbage, and Ada in decreasing order of capability and speed.
+The service provides users access to several different models. Each model provides a different capability and price point. The GPT-3 base models are known as Davinci, Curie, Babbage, and Ada in decreasing order of capability and increasing order of speed.
The Codex series of models is a descendant of GPT-3 and has been trained on both natural language and code to power natural language to code use cases. Learn more about each model on our [models concept page](./concepts/models.md). ## Next steps
-Learn more about the [underlying models that power Azure OpenAI](./concepts/models.md).
+Learn more about the [underlying models that power Azure OpenAI](./concepts/models.md).
cognitive-services Embeddings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/tutorials/embeddings.md
In this tutorial, you learn how to:
Currently, access to this service is granted only by application. You can apply for access to the Azure OpenAI service by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. Open an issue on this repo to contact us if you have an issue. * <a href="https://www.python.org/" target="_blank">Python 3.7.1 or later version</a> * The following Python libraries: openai, num2words, matplotlib, plotly, scipy, scikit-learn, transformers.
-* An Azure OpenAI Service resource with **text-search-curie-doc-001** and **text-search-curie-query-001** models deployed. These models are currently only available in [certain regions](/azure/cognitive-services/openai/concepts/models#model-summary-table-and-region-availability). If you don't have a resource the process is documented in our [resource deployment guide](../how-to/create-resource.md).
+* An Azure OpenAI Service resource with **text-search-curie-doc-001** and **text-search-curie-query-001** models deployed. These models are currently only available in [certain regions](../concepts/models.md#model-summary-table-and-region-availability). If you don't have a resource, the process is documented in our [resource deployment guide](../how-to/create-resource.md).
> [!NOTE] > If you have never worked with the Hugging Face transformers library it has its own specific [prerequisites](https://huggingface.co/docs/transformers/installation) that are required before you can successfully run `pip install transformers`.
res["summary"][9]
Using this approach, you can use embeddings as a search mechanism across documents in a knowledge base. The user can then take the top search result and use it for their downstream task, which prompted their initial query.
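The search mechanism described above can be sketched with plain cosine similarity. The toy two-dimensional vectors below stand in for real model embeddings:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def top_result(query_vec, doc_vecs):
    """Return the index of the document embedding most similar to the query."""
    scores = [cosine_similarity(query_vec, d) for d in doc_vecs]
    return max(range(len(scores)), key=scores.__getitem__)
```

In practice, you would embed the query with the query model, embed each document with the doc model, and rank the documents by this score.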
+## Video
+
+There is video walkthrough of this tutorial including the pre-requisite steps which can viewed on this [community YouTube post](https://www.youtube.com/watch?v=PSLO-yM6eFY).
+ ## Clean up resources If you created an OpenAI resource solely for completing this tutorial and want to clean up and remove an OpenAI resource, you'll need to delete your deployed models, and then delete the resource or associated resource group if it's dedicated to your test resource. Deleting the resource group also deletes any other resources associated with it.
If you created an OpenAI resource solely for completing this tutorial and want t
Learn more about Azure OpenAI's models: > [!div class="nextstepaction"]
-> [Next steps button](../concepts/models.md)
+> [Next steps button](../concepts/models.md)
cognitive-services Responsible Guidance Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/responsible-guidance-integration.md
When you get ready to integrate and responsibly use AI-powered products or featu
- **User Study**: Any consent or disclosure recommendations should be framed in a user study. Evaluate the first and continuous-use experience with a representative sample of the community to validate that the design choices lead to effective disclosure. Conduct user research with 10-20 community members (affected stakeholders) to evaluate their comprehension of the information and to determine if their expectations are met. -- **Transparency & Explainability:** Consider enabling and using Personalizer's [inference explainability](/azure/cognitive-services/personalizer/concepts-features?branch=main#inference-explainability) capability to better understand which features play a significant role in Personalizer's decision choice in each Rank call. This capability empowers you to provide your users with transparency regarding how their data played a role in producing the recommended best action. For example, you can give your users a button labeled "Why These Suggestions?" that shows which top features played a role in producing the Personalizer results. This information can also be used to better understand what data attributes about your users, contexts, and actions are working in favor of Personalizer's choice of best action, which are working against it, and which may have little or no effect. This capability can also provide insights about your user segments and help you identify and address potential biases.
+- **Transparency & Explainability:** Consider enabling and using Personalizer's [inference explainability](./concepts-features.md?branch=main) capability to better understand which features play a significant role in Personalizer's decision choice in each Rank call. This capability empowers you to provide your users with transparency regarding how their data played a role in producing the recommended best action. For example, you can give your users a button labeled "Why These Suggestions?" that shows which top features played a role in producing the Personalizer results. This information can also be used to better understand what data attributes about your users, contexts, and actions are working in favor of Personalizer's choice of best action, which are working against it, and which may have little or no effect. This capability can also provide insights about your user segments and help you identify and address potential biases.
- **Adversarial use**: consider establishing a process to detect and act on malicious manipulation. There are actors that will take advantage of machine learning and AI systems' ability to learn from their environment. With coordinated attacks, they can artificially fake patterns of behavior that shift the data and AI models toward their goals. If your use of Personalizer could influence important choices, make sure you have the appropriate means to detect and mitigate these types of attacks in place.
cognitive-services Responsible Use Of Ai Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/responsible-use-of-ai-overview.md
Azure Cognitive Services provides information and guidelines on how to responsib
## Personalizer
-* [Transparency note and use cases](/azure/cognitive-services/personalizer/responsible-use-cases)
-* [Characteristics and limitations](/azure/cognitive-services/personalizer/responsible-characteristics-and-limitations)
-* [Integration and responsible use](/azure/cognitive-services/personalizer/responsible-guidance-integration)
-* [Data and privacy](/azure/cognitive-services/personalizer/responsible-data-and-privacy)
+* [Transparency note and use cases](./personalizer/responsible-use-cases.md)
+* [Characteristics and limitations](./personalizer/responsible-characteristics-and-limitations.md)
+* [Integration and responsible use](./personalizer/responsible-guidance-integration.md)
+* [Data and privacy](./personalizer/responsible-data-and-privacy.md)
## QnA Maker
Azure Cognitive Services provides information and guidelines on how to responsib
* [Transparency note and use cases](/legal/cognitive-services/speech-service/speech-to-text/transparency-note?context=/azure/cognitive-services/speech-service/context/context) * [Characteristics and limitations](/legal/cognitive-services/speech-service/speech-to-text/characteristics-and-limitations?context=/azure/cognitive-services/speech-service/context/context) * [Integration and responsible use](/legal/cognitive-services/speech-service/speech-to-text/guidance-integration-responsible-use?context=/azure/cognitive-services/speech-service/context/context)
-* [Data, privacy, and security](/legal/cognitive-services/speech-service/speech-to-text/data-privacy-security?context=/azure/cognitive-services/speech-service/context/context)
-
+* [Data, privacy, and security](/legal/cognitive-services/speech-service/speech-to-text/data-privacy-security?context=/azure/cognitive-services/speech-service/context/context)
cognitive-services Security Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/security-features.md
For a comprehensive list of Azure service security recommendations see the [Cogn
|:|:| | [Transport Layer Security (TLS)](/dotnet/framework/network-programming/tls) | All of the Cognitive Services endpoints exposed over HTTP enforce the TLS 1.2 protocol. With an enforced security protocol, consumers attempting to call a Cognitive Services endpoint should follow these guidelines: </br>- The client operating system (OS) needs to support TLS 1.2.</br>- The language (and platform) used to make the HTTP call need to specify TLS 1.2 as part of the request. Depending on the language and platform, specifying TLS is done either implicitly or explicitly.</br>- For .NET users, consider the [Transport Layer Security best practices](/dotnet/framework/network-programming/tls). | | [Authentication options](./authentication.md)| Authentication is the act of verifying a user's identity. Authorization, by contrast, is the specification of access rights and privileges to resources for a given identity. An identity is a collection of information about a <a href="https://en.wikipedia.org/wiki/Principal_(computer_security)" target="_blank">principal</a>, and a principal can be either an individual user or a service.</br></br>By default, you authenticate your own calls to Cognitive Services using the subscription keys provided; this is the simplest method but not the most secure. The most secure authentication method is to use managed roles in Azure Active Directory. To learn about this and other authentication options, see [Authenticate requests to Cognitive Services](./authentication.md). |
-| [Key rotation](./authentication.md)| Each Cognitive Services resource has two API keys to enable secret rotation. This is a security precaution that lets you regularly change the keys that can access your service, protecting the privacy of your service in the event that a key gets leaked. To learn about this and other authentication options, see [Rotate keys](/azure/cognitive-services/rotate-keys). |
+| [Key rotation](./authentication.md)| Each Cognitive Services resource has two API keys to enable secret rotation. This is a security precaution that lets you regularly change the keys that can access your service, protecting the privacy of your service in the event that a key gets leaked. To learn about this and other authentication options, see [Rotate keys](./rotate-keys.md). |
| [Environment variables](cognitive-services-environment-variables.md) | Environment variables are name-value pairs that are stored within a specific development environment. You can store your credentials in this way as a more secure alternative to using hardcoded values in your code. However, if your environment is compromised, the environment variables are compromised as well, so this is not the most secure approach.</br></br> For instructions on how to use environment variables in your code, see the [Environment variables guide](cognitive-services-environment-variables.md). | | [Customer-managed keys (CMK)](./encryption/cognitive-services-encryption-keys-portal.md) | This feature is for services that store customer data at rest (longer than 48 hours). While this data is already double-encrypted on Azure servers, users can get extra security by adding another layer of encryption, with keys they manage themselves. You can link your service to Azure Key Vault and manage your data encryption keys there. </br></br>You need special approval to get the E0 SKU for your service, which enables CMK. Within 3-5 business days after you submit the [request form](https://aka.ms/cogsvc-cmk), you'll get an update on the status of your request. Depending on demand, you may be placed in a queue and approved as space becomes available. Once you're approved for using the E0 SKU, you'll need to create a new resource from the Azure portal and select E0 as the Pricing Tier. You won't be able to upgrade from F0 to the new E0 SKU. </br></br>Only some services can use CMK; look for your service on the [Customer-managed keys](./encryption/cognitive-services-encryption-keys-portal.md) page.| | [Virtual networks](./cognitive-services-virtual-networks.md) | Virtual networks allow you to specify which endpoints can make API calls to your resource. The Azure service will reject API calls from devices outside of your network. 
You can set a formula-based definition of the allowed network, or you can define an exhaustive list of endpoints to allow. This is another layer of security that can be used in combination with others. |
communication-services Identifiers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/identifiers.md
There are user identities that you create yourself and there are external identi
* For an introduction to communication identities, see [Identity model](./identity-model.md). * To learn how to quickly create identities for testing, see the [quick-create identity quickstart](../quickstarts/identity/quick-create-identity.md). * To learn how to use Communication Services together with Microsoft Teams, see [Teams interoperability](./teams-interop.md).
+* To learn how to use a Raw ID, see [Use cases for string identifiers in Communication SDKs](./raw-id-use-cases.md).
communication-services Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/guest/capabilities.md
In this article, you will learn which capabilities are supported for Teams exter
When Teams external users leave the meeting, or the meeting ends, they can no longer send or receive new chat messages and no longer have access to messages sent and received during the meeting.
+*Azure Communication Services provides developers with tools to integrate with Microsoft Teams Data Loss Prevention. For more information, see [how to implement Data Loss Prevention (DLP)](../../../../how-to/chat-sdk/data-loss-prevention.md).*
+ ## Server capabilities The following table shows supported server-side capabilities available in Azure Communication
communication-services Raw Id Use Cases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/raw-id-use-cases.md
+
+ Title: Azure Communication Services - Use cases for string identifiers
+description: Learn how to use Raw ID in SDKs
+++++ Last updated : 12/23/2022++
+#Customer intent: As a developer, I want to learn how to correctly use Raw ID so that I can build applications that run efficiently.
++
+# Use cases for string identifiers in Communication SDKs
+
+This article provides use cases for choosing a string (Raw ID) as a representation type of the [CommunicationIdentifier type](./identifiers.md#the-communicationidentifier-type) in Azure Communication Services SDKs. Following this guidance will help you understand some use cases when you might want to choose a Raw ID over the CommunicationIdentifier derived types.
+
+## Use cases for choosing an identifier
+A common task when implementing communication scenarios is to identify participants of conversations. When you're using Communication Services SDKs, *CommunicationIdentifier* provides the capability of uniquely identifying these participants.
+
+CommunicationIdentifier has the following advantages:
+- Provides good auto-complete in IDEs.
+- Allows using a switch case by type to address different application flows.
+- Allows restricting communication to specific types.
+- Allows access to identifier details, which you can use to call other APIs (such as the Microsoft Graph API) to provide a rich experience for communication participants.
+
+On top of this, the *CommunicationIdentifier* and the derived types (`MicrosoftTeamsUserIdentifier`, `PhoneNumberIdentifier`, and so on) can be converted to their string representation (Raw ID) and restored from it, making the following scenarios easier to implement:
+- Store identifiers in a database and use them as keys.
+- Use identifiers as keys in dictionaries.
+- Implement intuitive REST CRUD APIs by using identifiers as keys in REST API paths, instead of having to rely on POST payloads.
+- Use identifiers as keys in declarative UI frameworks such as React to avoid unnecessary re-rendering.
+
+### Creating CommunicationIdentifier and retrieving Raw ID
+*CommunicationIdentifier* can be created from a Raw ID, and a Raw ID can be retrieved from a type derived from *CommunicationIdentifier*. This removes the need for custom serialization methods that might take in only certain object properties and omit others. For example, `MicrosoftTeamsUserIdentifier` has multiple properties, such as `IsAnonymous` or `Cloud`, or methods to retrieve these values (depending on the platform). Using the methods provided by the Communication Identity SDK guarantees that the way of serializing identifiers stays canonical and consistent, even if more properties are added.
+
+Get Raw ID from CommunicationUserIdentifier:
+
+```csharp
+public async Task GetRawId()
+{
+ ChatMessage message = await chatThreadClient.GetMessageAsync("678f26ef0c");
+ CommunicationIdentifier communicationIdentifier = message.Sender;
+ String rawId = communicationIdentifier.RawId;
+}
+```
+
+Instantiate CommunicationUserIdentifier from a Raw ID:
+
+```csharp
+public void CommunicationIdentifierFromGetRawId()
+{
+ String rawId = "8:acs:bbbcbc1e-9f06-482a-b5d8-20e3f26ef0cd_45ab2481-1c1c-4005-be24-0ffb879b1130";
+ CommunicationIdentifier communicationIdentifier = CommunicationIdentifier.FromRawId(rawId);
+}
+```
+
+You can find more platform-specific examples in the following article: [Understand identifier types](./identifiers.md).
+
+## Storing CommunicationIdentifier in a database
+One of the typical jobs you may need to handle is mapping Azure Communication Services users to users coming from the Contoso user database or identity provider. This is usually achieved by adding an extra column or field in the Contoso user DB or identity provider. However, given the characteristics of the Raw ID (stable, globally unique, and deterministic), you may as well choose it as a primary key for the user storage.
+
+Assume `ContosoUser` is a class that represents a user of your application, and you want to save it along with a corresponding CommunicationIdentifier to the database. The original value for a `CommunicationIdentifier` can come from the Communication Identity, Calling, or Chat APIs or from a custom Contoso API, but it can be represented as a `string` data type in your programming language no matter what the underlying type is:
+
+```csharp
+public class ContosoUser
+{
+ public string Name { get; set; }
+ public string Email { get; set; }
+ public string CommunicationId { get; set; }
+}
+```
+
+You can access the `RawId` property of the identifier to get a string that can be stored in the database:
+
+```csharp
+public void StoreToDatabase()
+{
+ // In a real app, the identifier comes from the Communication Identity, Calling, or Chat APIs.
+ CommunicationIdentifier communicationIdentifier = CommunicationIdentifier.FromRawId("8:acs:bbbcbc1e-9f06-482a-b5d8-20e3f26ef0cd_45ab2481-1c1c-4005-be24-0ffb879b1130");
+
+ ContosoUser user = new ContosoUser()
+ {
+ Name = "John",
+ Email = "john@doe.com",
+ CommunicationId = communicationIdentifier.RawId
+ };
+ SaveToDb(user);
+}
+```
+
+If you want to get `CommunicationIdentifier` from the stored Raw ID, you need to pass the raw string to `FromRawId()` method:
+
+```csharp
+public void GetFromDatabase()
+{
+ ContosoUser user = GetFromDb("john@doe.com");
+ CommunicationIdentifier communicationIdentifier = CommunicationIdentifier.FromRawId(user.CommunicationId);
+}
+```
+It will return `CommunicationUserIdentifier`, `PhoneNumberIdentifier`, `MicrosoftTeamsUserIdentifier` or `UnknownIdentifier` based on the identifier type.
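Which concrete type `FromRawId()` returns is determined by the identifier's prefix. The following sketch is illustrative only; the prefix set shown here is a simplified assumption, not the SDK's actual implementation, which handles more prefixes and cloud environments:

```javascript
// Illustrative sketch: approximates how a FromRawId-style helper could
// dispatch on a Raw ID prefix. Simplified assumption, not the SDK's code.
function identifierKindFromRawId(rawId) {
  if (rawId.startsWith('4:')) {
    return 'phoneNumber';       // e.g. "4:+14255550123"
  }
  if (rawId.startsWith('8:acs:')) {
    return 'communicationUser'; // e.g. "8:acs:<resourceId>_<guid>"
  }
  if (rawId.startsWith('8:orgid:')) {
    return 'microsoftTeamsUser'; // e.g. "8:orgid:<object id>"
  }
  return 'unknown';             // anything else, e.g. "28:<guid>"
}
```

The switch-by-type flows shown later in this article build on this kind of dispatch, so your application logic never has to parse Raw ID strings itself.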
+
+## Storing CommunicationIdentifier in collections
+If your scenario requires working with several *CommunicationIdentifier* objects in memory, you may want to store them in a collection (dictionary, list, hash set, etc.). A collection is useful, for example, for maintaining a list of call or chat participants. As the hashing logic relies on the value of a Raw ID, you can use *CommunicationIdentifier* in collections that require elements to have a reliable hashing behavior. The following examples demonstrate adding *CommunicationIdentifier* objects to different types of collections and checking if they're contained in a collection by instantiating new identifiers from a Raw ID value.
+
+The following example shows how a Raw ID can be used as a key in a dictionary that stores users' messages:
+
+```csharp
+public void StoreMessagesForContosoUsers()
+{
+ var communicationUser = new CommunicationUserIdentifier("8:acs:bbbcbc1e-9f06-482a-b5d8-20e3f26ef0cd_45ab2481-1c1c-4005-be24-0ffb879b1130");
+ var teamsUser = new MicrosoftTeamsUserIdentifier("45ab2481-1c1c-4005-be24-0ffb879b1130");
+
+ // A dictionary with a Raw ID as key might be used to store messages of a user.
+ var userMessages = new Dictionary<string, List<Message>>
+ {
+ { communicationUser.RawId, new List<Message>() },
+ { teamsUser.RawId, new List<Message>() },
+ };
+
+ // Retrieve messages for a user based on their Raw ID.
+ var messages = userMessages[communicationUser.RawId];
+}
+```
+
+As the hashing logic relies on the value of a Raw ID, you can use `CommunicationIdentifier` itself as a key in a dictionary directly:
+
+```csharp
+public void StoreMessagesForContosoUsers()
+{
+ // A dictionary with a CommunicationIdentifier as key might be used to store messages of a user.
+ var userMessages = new Dictionary<CommunicationIdentifier, List<Message>>
+ {
+ { new CommunicationUserIdentifier("8:acs:bbbcbc1e-9f06-482a-b5d8-20e3f26ef0cd_45ab2481-1c1c-4005-be24-0ffb879b1130"), new List<Message>() },
+ { new MicrosoftTeamsUserIdentifier("45ab2481-1c1c-4005-be24-0ffb879b1130"), new List<Message>() },
+ };
+
+ // Retrieve messages for a user based on their Raw ID.
+ var messages = userMessages[CommunicationIdentifier.FromRawId("8:acs:bbbcbc1e-9f06-482a-b5d8-20e3f26ef0cd_45ab2481-1c1c-4005-be24-0ffb879b1130")];
+}
+```
+
+Hashing logic that relies on the value of a Raw ID also allows you to add `CommunicationIdentifier` objects to hash sets:
+```csharp
+public void StoreUniqueContosoUsers()
+{
+ // A hash set of unique users of a Contoso application.
+ var users = new HashSet<CommunicationIdentifier>
+ {
+ new PhoneNumberIdentifier("+14255550123"),
+ new UnknownIdentifier("28:45ab2481-1c1c-4005-be24-0ffb879b1130")
+ };
+
+ // Implement custom flow for a new communication user.
+ if (users.Contains(CommunicationIdentifier.FromRawId("4:+14255550123"))){
+ //...
+ }
+}
+```
+
+Another use case is using Raw IDs in mobile applications to identify participants. You can inject the participant view data for a remote participant if you want to handle this information locally in the UI library without sending it to Azure Communication Services.
+This view data can contain a UIImage that represents the avatar to render and a display name to optionally display instead.
+Both the participant's CommunicationIdentifier and the Raw ID retrieved from it can be used to uniquely identify a remote participant.
+
+```swift
+callComposite.events.onRemoteParticipantJoined = { identifiers in
+ for identifier in identifiers {
+ // map identifier to displayName
+ let participantViewData = ParticipantViewData(displayName: "<DISPLAY_NAME>")
+ callComposite.set(remoteParticipantViewData: participantViewData,
+ for: identifier) { result in
+ switch result {
+ case .success:
+ print("Set participant view data succeeded")
+ case .failure(let error):
+ print("Set participant view data failed with \(error)")
+ }
+ }
+ }
+}
+```
+
+## Using Raw ID as key in REST API paths
+When designing a REST API, you can have endpoints that accept either a `CommunicationIdentifier` or a Raw ID string. If the identifier consists of several parts (like the object ID and cloud name for a `MicrosoftTeamsUserIdentifier`), you might need to pass it in the request body. However, using a Raw ID allows you to address the entity in the URL path instead of passing the whole composite object as JSON in the body, which gives you a more intuitive REST CRUD API.
+
+```csharp
+public async Task UseIdentifierInPath()
+{
+ CommunicationIdentifier user = GetFromDb("john@doe.com");
+
+ using HttpResponseMessage response = await client.GetAsync($"https://contoso.com/v1.0/users/{user.RawId}/profile");
+ response.EnsureSuccessStatusCode();
+}
+```
+
+## Extracting identifier details from Raw IDs
+A consistent underlying Raw ID allows you to do the following:
+- Deserializing to the right identifier type (based on which you can adjust the flow of your app).
+- Extracting details of identifiers (such as an oid for `MicrosoftTeamsUserIdentifier`).
+
+The following example shows both benefits:
+- The type allows you to decide where to take the avatar from.
+- The decomposed details allow you to query the API in the right way.
+
+```csharp
+public void ExtractIdentifierDetails()
+{
+ ContosoUser user = GetFromDb("john@doe.com");
+
+ string rawId = user.CommunicationId;
+ CommunicationIdentifier communicationIdentifier = CommunicationIdentifier.FromRawId(rawId);
+ switch (communicationIdentifier)
+ {
+ case MicrosoftTeamsUserIdentifier teamsUser:
+ string graphPhotoUri = $"https://graph.microsoft.com/v1.0/users/{teamsUser.UserId}/photo/$value";
+ // ...
+ break;
+ case CommunicationUserIdentifier communicationUser:
+ string avatarUri = GetAvatarFromDB(communicationUser.Id);
+ // ...
+ break;
+ }
+}
+```
+
+You can access properties or methods for a specific *CommunicationIdentifier* type that is stored in a Contoso database in the form of a string (Raw ID).
+
+## Using Raw IDs as key in UI frameworks
+It's possible to use the Raw ID of an identifier as a key in UI components to track a certain user and avoid unnecessary re-rendering and API calls. In the following example, we're changing the order in which users are rendered in a list. In the real world, we might want to show new users first or reorder users based on some condition (for example, hand raised). For the sake of simplicity, the following example just reverses the order in which the users are rendered.
+
+```javascript
+import { getIdentifierRawId } from '@azure/communication-common';
+
+function CommunicationParticipants() {
+ const [users, setUsers] = React.useState([{ id: getIdentifierRawId(userA), name: "John" }, { id: getIdentifierRawId(userB), name: "Jane" }]);
+ return (
+ <div>
+ {users.map((user) => (
+ // React uses keys as hints while rendering elements. Each list item should have a key that's unique among its siblings.
+ // Raw ID can be utilized as such a key.
+ <ListUser item={user} key={user.id} />
+ ))}
+ <button onClick={() => setUsers(users.slice().reverse())}>Reverse</button>
+ </div>
+ );
+}
+
+const ListUser = React.memo(function ListUser({ user }) {
+ console.log(`Render ${user.name}`);
+ return <div>{user.name}</div>;
+});
+```
+
+## Next steps
+In this article, you learned how to:
+
+> [!div class="checklist"]
+> * Correctly identify use cases for choosing a Raw ID
+> * Convert between Raw ID and different types of a *CommunicationIdentifier*
+
+To learn more, you may want to explore the following quickstart guides:
+
+* [Understand identifier types](./identifiers.md)
+* [Reference documentation](reference.md)
communication-services Direct Routing Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/direct-routing-provisioning.md
If you created one voice route with a pattern `^\+1(425|206)(\d{7})$` and added
Give your voice route a name, specify the number pattern using regular expressions, and select SBC for that pattern. Here are some examples of basic regular expressions: - `^\+\d+$` - matches a telephone number with one or more digits that start with a plus-- `^+1(\d[10])$` - matches a telephone number with a ten digits after a `+1`
+- `^\+1(\d{10})$` - matches a telephone number with ten digits after a `+1`
- `^\+1(425|206)(\d{7})$` - matches a telephone number that starts with `+1425` or with `+1206` followed by seven digits - `^\+0?1234$` - matches both `+01234` and `+1234` telephone numbers.
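Before saving a voice route, you may want to sanity-check a pattern against sample numbers. Here's a quick illustrative check in JavaScript using the patterns from the list above (the variable names are just for this sketch):

```javascript
// Voice route number patterns from the list above.
const patterns = {
  anyE164: /^\+\d+$/,            // one or more digits after a plus
  northAmerica: /^\+1(\d{10})$/, // ten digits after +1
  seattleArea: /^\+1(425|206)(\d{7})$/,
  optionalZero: /^\+0?1234$/     // matches both +01234 and +1234
};

console.log(patterns.northAmerica.test('+14255550123')); // true
console.log(patterns.seattleArea.test('+14255550123'));  // true
console.log(patterns.seattleArea.test('+15555550123'));  // false
console.log(patterns.optionalZero.test('+01234'));       // true
```

Remember that dialed numbers that match more than one route are sent to the SBC of the highest-priority matching route.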
communication-services Video Effects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/video-effects.md
++
+ Title: Azure Communication Services Calling video WebJS video effects
+
+description: In this document, you'll learn how to create video effects on an Azure Communication Services call.
+++ Last updated : 1/9/2023+++++
+# Adding visual effects to a video call
++
+>[!IMPORTANT]
+> The Calling Video effects are available starting on the public preview version [1.9.1-beta.1](https://www.npmjs.com/package/@azure/communication-calling/v/1.9.1-beta.1) of the Calling SDK. Please ensure that you use this or a newer SDK when using video effects.
+
+> [!NOTE]
+> This API is provided as a preview ('beta') for developers and may change based on feedback that we receive.
+
+> [!NOTE]
+> This library cannot be used standalone and can only work when used with the Azure Communication Calling client library for WebJS (https://www.npmjs.com/package/@azure/communication-calling).
+
+The Azure Communication Calling SDK allows you to create video effects that other users on a call can see. For example, for a user doing Azure Communication Services calling with the WebJS SDK, you can now enable background blur. When background blur is enabled, a user can feel more comfortable on a video call because the output video shows only the user, with all other content blurred.
+
+## Prerequisites
+### Install the Azure Communication Services Calling SDK
+- An Azure account with an active subscription is required. See [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) on how to create an Azure account.
+- [Node.js](https://nodejs.org/) active Long Term Support (LTS) versions are recommended.
+- An active Communication Services resource. [Create a Communication Services resource](../../quickstarts/create-communication-resource.md).
+- A User Access Token to instantiate a call client. Learn how to [create and manage user access tokens](../../quickstarts/access-tokens.md). You can also use the Azure CLI and run the command below with your connection string to create a user and an access token. (You can get the connection string from your resource in the Azure portal.)
+- Azure Communication Calling client library is properly set up and configured (https://www.npmjs.com/package/@azure/communication-calling).
+
+An example using the Azure CLI:
+```azurecli-interactive
+az communication identity token issue --scope voip --connection-string "yourConnectionString"
+```
+For details on using the CLI, see [Use Azure CLI to Create and Manage Access Tokens](../../quickstarts/access-tokens.md?pivots=platform-azcli).
+
+## Install the Calling effects SDK
+Use the `npm install` command to install the Azure Communication Calling Effects SDK for JavaScript:
+
+`npm install @azure/communication-calling-effects --save`
+
+## Supported video effects
+Currently, the video effects support the following abilities:
+- Background blur
+- Replace the background with a custom image
+
+## Browser support
+
+Currently, creating video effects is supported only on Chrome (desktop) and Safari (macOS desktop).
+
+## Class model
+
+| Name | Description |
+|||
+| BackgroundBlurEffect | The background blur effect class. |
+| BackgroundReplacementEffect | The background replacement with image effect class. |
+
+To use video effects with the Azure Communication Calling client library, once you've created a LocalVideoStream, you need to get the VideoEffects feature API from the LocalVideoStream.
+
+### Code examples
+```js
+import * as AzureCommunicationCallingSDK from '@azure/communication-calling';
+import { BackgroundBlur, BackgroundReplacement } from '@azure/communication-calling-effects';
+
+/** Assuming you have initialized the Azure Communication Calling client library and have created a LocalVideoStream
+(reference <link to main SDK npm>)
+*/
+
+// Get the video effects feature api on the LocalVideoStream
+const videoEffectsFeatureApi = localVideoStream.features(AzureCommunicationCallingSDK.Features.VideoEffects);
+
+// Subscribe to useful events
+videoEffectsFeatureApi.on('effectsStarted', () => {
+ // Effects started
+});
+
+
+videoEffectsFeatureApi.on('effectsStopped', () => {
+ // Effects stopped
+});
+
+videoEffectsFeatureApi.on('effectsError', (error) => {
+ // Effects error
+});
+
+// Create the effect instance
+const backgroundBlurEffect = new BackgroundBlur();
+
+// Recommended: Check if backgroundBlur is supported
+const backgroundBlurSupported = await backgroundBlurEffect.isSupported();
+
+if (backgroundBlurSupported) {
+ // Use the video effects feature api we created to start/stop effects
+
+ await videoEffectsFeatureApi.startEffects(backgroundBlurEffect);
+
+}
+
+
+/**
+To create a background replacement with a custom image you need to provide the URL of the image you want as the background to this effect. The 'startEffects' method will fail if the URL is not of an image or is unreachable/unreadable.
+
+Supported image formats are: png, jpg, jpeg, tiff, bmp.
+*/
+
+const backgroundImage = 'https://linkToImageFile';
+
+// Create the effect instance
+const backgroundReplacementEffect = new BackgroundReplacement({
+
+ backgroundImageUrl: backgroundImage
+
+});
+
+// Recommended: Check if background replacement is supported:
+const backgroundReplacementSupported = await backgroundReplacementEffect.isSupported();
+
+if (backgroundReplacementSupported) {
+ // Use the video effects feature api as before to start/stop effects
+ await videoEffectsFeatureApi.startEffects(backgroundReplacementEffect);
+}
+
+// You can change the image used for this effect by passing it to the configure method:
+
+const newBackgroundImage = 'https://linkToNewImageFile';
+await backgroundReplacementEffect.configure({
+
+ backgroundImageUrl: newBackgroundImage
+
+});
+
+// You can switch the effects using the same method on the video effects feature API:
+
+// Switch to background blur
+await videoEffectsFeatureApi.startEffects(backgroundBlurEffect);
+
+// Switch to background replacement
+await videoEffectsFeatureApi.startEffects(backgroundReplacementEffect);
+
+//To stop effects:
+await videoEffectsFeatureApi.stopEffects();
+
+```
communication-services Data Loss Prevention https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/chat-sdk/data-loss-prevention.md
+
+ Title: Integrate Azure Communication Services with Microsoft Teams Data Loss Prevention
+
+description: Learn how to integrate with Microsoft Teams Data Loss Prevention policies by subscribing to Real-time Chat Notifications
++ Last updated : 01/10/2023+++++
+# How to integrate with Microsoft Teams Data Loss Prevention policies by subscribing to real-time chat notifications
+
+A Microsoft Teams administrator can configure policies for data loss prevention (DLP) to prevent leakage of sensitive information from Teams users in Teams meetings. Developers can integrate chat in Teams meetings with Azure Communication Services for Communication Services users via the Communication Services UI library or a custom integration. This article describes how to incorporate data loss prevention without the UI library.
+
+You need to subscribe to real-time notifications and listen for message updates. If a chat message from a Teams user contains sensitive content, the message content is updated to blank. The Azure Communication Services user interface has to be updated to indicate that the message can't be displayed, for example, "Message was blocked as it contains sensitive information." There can be a delay of a couple of seconds before a policy violation is detected and the message content is updated. You can find an example of such code below.
+
+Data Loss Prevention policies only apply to messages sent by Teams users and aren't meant to protect Azure Communication Services users from sending out sensitive information.
+
+```javascript
+let endpointUrl = '<replace with your resource endpoint>';
+
+// The user access token generated as part of the pre-requisites
+let userAccessToken = '<USER_ACCESS_TOKEN>';
+
+let chatClient = new ChatClient(endpointUrl, new AzureCommunicationTokenCredential(userAccessToken));
+
+await chatClient.startRealtimeNotifications();
+chatClient.on("chatMessageEdited", (e) => {
+ if (e.messageBody === "" && e.sender.kind === "microsoftTeamsUser") {
+ // Show UI message blocked
+ }
+});
+```
+
+## Next steps
+- [Learn how to enable Microsoft Teams Data Loss Prevention](/microsoft-365/compliance/dlp-microsoft-teams?view=o365-worldwide)
communication-services Quickstart Botframework Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/chat/quickstart-botframework-integration.md
Sometimes the bot wouldn't be able to understand or answer a question or a custo
## Handling bot to bot communication
- There may be certain use cases where two bots need to be added to the same chat thread to provide different services. In such use cases, you may need to ensure that bots don't start sending automated replies to each other's messages. If not handled properly, the bots' automated interaction between themselves may result in an infinite loop of messages. You can verify the Azure Communication Services user identity of the sender of a message from the activity's `From.Id` field to see if it belongs to another bot and take required action to prevent such a communication flow. If such a scenario results in high call volumes, then Azure Communication Services Chat channel will start throttling the requests, which will result in the bot not being able to send and receive the messages. You can learn more about the [throttle limits](/azure/communication-services/concepts/service-limits#chat).
+ There may be certain use cases where two bots need to be added to the same chat thread to provide different services. In such use cases, you may need to ensure that bots don't start sending automated replies to each other's messages. If not handled properly, the bots' automated interaction between themselves may result in an infinite loop of messages. You can verify the Azure Communication Services user identity of the sender of a message from the activity's `From.Id` field to see if it belongs to another bot and take required action to prevent such a communication flow. If such a scenario results in high call volumes, then Azure Communication Services Chat channel will start throttling the requests, which will result in the bot not being able to send and receive the messages. You can learn more about the [throttle limits](../../concepts/service-limits.md#chat).
## Troubleshooting
Sometimes the bot wouldn't be able to understand or answer a question or a custo
## Next steps
-Try the [Sample App](https://github.com/Azure/communication-preview/tree/master/samples/AzureBotService-Sample-App), which showcases a 1:1 chat between the end user and chat bot, and uses BotFramework's WebChat UI component.
+Try the [Sample App](https://github.com/Azure/communication-preview/tree/master/samples/AzureBotService-Sample-App), which showcases a 1:1 chat between the end user and chat bot, and uses BotFramework's WebChat UI component.
confidential-computing Quick Create Confidential Vm Azure Cli Amd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/quick-create-confidential-vm-azure-cli-amd.md
Create a VM with the [az vm create](/cli/azure/vm) command.
The following example creates a VM named *myVM* and adds a user account named *azureuser*. The `--generate-ssh-keys` parameter is used to automatically generate an SSH key, and put it in the default key location(*~/.ssh*). To use a specific set of keys instead, use the `--ssh-key-values` option. For `size`, select a confidential VM size. For more information, see [supported confidential VM families](virtual-machine-solutions-amd.md).
-Choose `VMGuestStateOnly` for no OS disk confidential encryption. Or, choose `DiskWithVMGuestState` for OS disk confidential encryption with a platform-managed key. Enabling secure boot is optional, but recommended. For more information, see [secure boot and vTPM](/azure/virtual-machines/trusted-launch). For more information on disk encryption, see [confidential OS disk encryption](confidential-vm-overview.md).
+Choose `VMGuestStateOnly` for no OS disk confidential encryption. Or, choose `DiskWithVMGuestState` for OS disk confidential encryption with a platform-managed key. Enabling secure boot is optional, but recommended. For more information, see [secure boot and vTPM](../virtual-machines/trusted-launch.md). For more information on disk encryption, see [confidential OS disk encryption](confidential-vm-overview.md).
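As a sketch of how that choice maps onto the create command, the following composes (but only echoes, since running it needs a live subscription) an `az vm create` invocation; the resource names, image placeholder, and DCasv5-series size are assumptions, not values from this article:

```shell
# VMGuestStateOnly = no OS disk confidential encryption;
# DiskWithVMGuestState = OS disk confidential encryption with a platform-managed key.
enc_type=VMGuestStateOnly
sec_opts="--security-type ConfidentialVM --os-disk-security-encryption-type $enc_type --enable-secure-boot true --enable-vtpm true"
# Echo the command rather than run it; a real run also needs --image and a resource group.
echo "az vm create -g myResourceGroup -n myVM --size Standard_DC4as_v5 --image <confidential-vm-image> --generate-ssh-keys $sec_opts"
```

Swapping `enc_type` to `DiskWithVMGuestState` switches the same command to platform-managed-key OS disk encryption.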
```azurecli-interactive az vm create \
az keyvault set-policy -n keyVaultName -g myResourceGroup --object-id $desIdenti
```azurecli-interactive $diskEncryptionSetID=(az disk-encryption-set show -n diskEncryptionSetName -g myResourceGroup --query [id] -o tsv) ```
-6. Create a VM with the [az vm create](/cli/azure/vm) command. Choose `DiskWithVMGuestState` for OS disk confidential encryption with a customer-managed key. Enabling secure boot is optional, but recommended. For more information, see [secure boot and vTPM](/azure/virtual-machines/trusted-launch). For more information on disk encryption, see [confidential OS disk encryption](confidential-vm-overview.md).
+6. Create a VM with the [az vm create](/cli/azure/vm) command. Choose `DiskWithVMGuestState` for OS disk confidential encryption with a customer-managed key. Enabling secure boot is optional, but recommended. For more information, see [secure boot and vTPM](../virtual-machines/trusted-launch.md). For more information on disk encryption, see [confidential OS disk encryption](confidential-vm-overview.md).
```azurecli-interactive az vm create \
echo -n $JWT | cut -d "." -f 2 | base64 -d 2> /dev/null | jq .
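That pipeline cuts out the JWT's second (payload) segment and base64-decodes it. JWT segments are unpadded base64url, so `base64 -d` can complain about truncated input; a self-contained sketch that re-pads before decoding, using a made-up token whose payload is `{"a":1}`:

```shell
# Made-up JWT: header "header", payload {"a":1}, signature "sig" (all dummy values).
JWT='aGVhZGVy.eyJhIjoxfQ.c2ln'
payload=$(echo -n "$JWT" | cut -d '.' -f 2)
# Re-pad the base64url segment to a multiple of 4 so base64 -d accepts it.
case $(( ${#payload} % 4 )) in
  2) payload="${payload}==" ;;
  3) payload="${payload}=" ;;
esac
# Map base64url characters back to standard base64 before decoding.
decoded=$(echo -n "$payload" | tr '_-' '/+' | base64 -d 2> /dev/null)
echo "$decoded"
```

With a real attestation token, piping `$decoded` into `jq .` pretty-prints the claims as in the command above.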
## Next steps > [!div class="nextstepaction"]
-> [Create a confidential VM on AMD with an ARM template](quick-create-confidential-vm-arm-amd.md)
+> [Create a confidential VM on AMD with an ARM template](quick-create-confidential-vm-arm-amd.md)
connectors Connectors Sftp Ssh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-sftp-ssh.md
Title: Connect to SFTP using SSH from workflows
-description: Connect to your SFTP file server over SSH from workflows in Azure Logic Apps.
+ Title: Connect to an SFTP server from workflows
+description: Connect to your SFTP file server from workflows using Azure Logic Apps.
ms.suite: integration Previously updated : 08/19/2022 Last updated : 01/12/2023 tags: connectors
-# Connect to an SFTP file server using SSH from workflows in Azure Logic Apps
+# Connect to an SFTP file server from workflows in Azure Logic Apps
-To automate tasks that create and manage files on a [Secure File Transfer Protocol (SFTP)](https://www.ssh.com/ssh/sftp/) server using the [Secure Shell (SSH)](https://www.ssh.com/ssh/protocol/) protocol, you can create automated integration workflows by using Azure Logic Apps and the SFTP-SSH connector. SFTP is a network protocol that provides file access, file transfer, and file management over any reliable data stream.
-Here are some example tasks you can automate:
+This how-to guide shows how to access your [SSH File Transfer Protocol (SFTP)](https://www.ssh.com/ssh/sftp/) server from a workflow in Azure Logic Apps. SFTP is a network protocol that provides file access, file transfer, and file management over any reliable data stream and uses the [Secure Shell (SSH)](https://www.ssh.com/ssh/protocol/) protocol.
+
+In Consumption logic app workflows, you can use the **SFTP-SSH** *managed* connector, while in Standard logic app workflows, you can use the **SFTP** built-in connector or the **SFTP-SSH** managed connector. You can use these connector operations to create automated workflows that are triggered by events on your SFTP server or in other systems, and that run actions to manage files on your SFTP server. Both the managed and built-in connectors use the SSH protocol.
+
+For example, your workflow can start with an SFTP trigger that monitors and responds to events on your SFTP server. The trigger makes the outputs available to subsequent actions in your workflow. Your workflow can run SFTP actions that get, create, and manage files through your SFTP server account. The following list includes more example tasks:
* Monitor when files are added or changed. * Get, create, copy, rename, update, list, and delete files.
Here are some example tasks you can automate:
* Get file content and metadata. * Extract archives to folders.
-In your workflow, you can use a trigger that monitors events on your SFTP server and makes output available to other actions. You can then use actions to perform various tasks on your SFTP server. You can also include other actions that use the output from SFTP-SSH actions. For example, if you regularly retrieve files from your SFTP server, you can send email alerts about those files and their content using the Office 365 Outlook connector or Outlook.com connector. If you're new to logic apps, review [What is Azure Logic Apps?](../logic-apps/logic-apps-overview.md)
-
-For differences between the SFTP-SSH connector and the SFTP connector, review the [Compare SFTP-SSH versus SFTP](#comparison) section later in this topic.
-
-## Limitations
-
-* The SFTP-SSH connector currently doesn't support these SFTP servers:
-
- * IBM DataPower
- * MessageWay
- * OpenText Secure MFT
- * OpenText GXS
- * Globalscape
- * SFTP for Azure Blob Storage
- * FileMage Gateway
- * VShell Secure File Transfer Server
-
-* The following SFTP-SSH actions support [chunking](../logic-apps/logic-apps-handle-large-messages.md):
-
- | Action | Chunking support | Override chunk size support |
- |--||--|
- | **Copy file** | No | Not applicable |
- | **Create file** | Yes | Yes |
- | **Create folder** | Not applicable | Not applicable |
- | **Delete file** | Not applicable | Not applicable |
- | **Extract archive to folder** | Not applicable | Not applicable |
- | **Get file content** | Yes | Yes |
- | **Get file content using path** | Yes | Yes |
- | **Get file metadata** | Not applicable | Not applicable |
- | **Get file metadata using path** | Not applicable | Not applicable |
- | **List files in folder** | Not applicable | Not applicable |
- | **Rename file** | Not applicable | Not applicable |
- | **Update file** | No | Not applicable |
- ||||
-
- SFTP-SSH actions that support chunking can handle files up to 1 GB, while SFTP-SSH actions that don't support chunking can handle files up to 50 MB. The default chunk size is 15 MB. However, this size can dynamically change, starting from 5 MB and gradually increasing to the 50-MB maximum. Dynamic sizing is based on factors such as network latency, server response time, and so on.
-
- > [!NOTE]
- > For logic apps in an [integration service environment (ISE)](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md),
- > this connector's ISE-labeled version requires chunking to use the [ISE message limits](../logic-apps/logic-apps-limits-and-config.md#message-size-limits) instead.
-
- You can override this adaptive behavior when you [specify a constant chunk size](#change-chunk-size) to use instead. This size can range from 5 MB to 50 MB. For example, suppose you have a 45-MB file and a network that can support that file size without latency. Adaptive chunking results in several calls, rather than one call. To reduce the number of calls, you can try setting a 50-MB chunk size. In a different scenario, if your logic app workflow is timing out, for example, when using 15-MB chunks, you can try reducing the size to 5 MB.
-
- Chunk size is associated with a connection. This attribute means you can use the same connection for both actions that support chunking and actions that don't support chunking. In this case, the chunk size for actions that support chunking ranges from 5 MB to 50 MB.
-
-* SFTP-SSH triggers don't support message chunking. When triggers request file content, they select only files that are 15 MB or smaller. To get files larger than 15 MB, follow this pattern instead:
-
- 1. Use an SFTP-SSH trigger that returns only file properties. These triggers have names that include the description, **(properties only)**.
+The following steps use the Azure portal, but with the appropriate Azure Logic Apps extension, you can also use the following tools to create and edit logic app workflows:
- 1. Follow the trigger with the SFTP-SSH **Get file content** action. This action reads the complete file and implicitly uses message chunking.
+* Consumption logic app workflows: [Visual Studio](../logic-apps/quickstart-create-logic-apps-with-visual-studio.md) or [Visual Studio Code](../logic-apps/quickstart-create-logic-apps-visual-studio-code.md)
-* The SFTP-SSH managed or Azure-hosted connector can create a limited number of connections to the SFTP server, based on the connection capacity in the Azure region where your logic app resource exists. If this limit poses a problem in a Consumption logic app workflow, consider creating a Standard logic app workflow and use the SFTP-SSH built-in connector instead.
+* Standard logic app workflows: [Visual Studio Code](../logic-apps/create-single-tenant-workflows-visual-studio-code.md)
-<a name="comparison"></a>
+## Connector technical reference
-## Compare SFTP-SSH versus SFTP
+The SFTP connector has different versions, based on [logic app type and host environment](../logic-apps/logic-apps-overview.md#resource-environment-differences).
-The following list describes key SFTP-SSH capabilities that differ from the SFTP connector:
+| Logic app type (plan) | Environment | Connector version |
+||-|-|
+| **Consumption** | Multi-tenant Azure Logic Apps | Managed connector, which appears in the designer under the **Standard** label. For more information, review the following documentation: <br><br>- [SFTP-SSH managed connector reference](/connectors/sftpwithssh/) <br>- [Managed connectors in Azure Logic Apps](managed.md) |
+| **Consumption** | Integration service environment (ISE) | Managed connector, which appears in the designer under the **Standard** label, and the ISE version, which appears in the designer with the **ISE** label and has different message limits than the managed connector. For more information, review the following documentation: <br><br>- [SFTP-SSH managed connector reference](/connectors/sftpwithssh/) <br>- [ISE message limits](../logic-apps/logic-apps-limits-and-config.md#message-size-limits) <br>- [Managed connectors in Azure Logic Apps](managed.md) |
+| **Standard** | Single-tenant Azure Logic Apps and App Service Environment v3 (Windows plans only) | Managed connector, which appears in the designer under the **Azure** label, and the built-in connector, which appears in the designer under the **Built-in** label and is [service provider based](../logic-apps/custom-connector-overview.md#service-provider-interface-implementation). The built-in connector can directly access Azure virtual networks with a connection string. For more information, review the following documentation: <br><br>- [SFTP-SSH managed connector reference](/connectors/sftpwithssh/) <br>- [SFTP built-in connector reference](/azure/logic-apps/connectors/built-in/reference/sftp/) <br><br>- [Managed connectors in Azure Logic Apps](managed.md) <br>- [Built-in connectors in Azure Logic Apps](built-in.md) |
-* Uses the [SSH.NET library](https://github.com/sshnet/SSH.NET), which is an open-source Secure Shell (SSH) library that supports .NET.
+## General limitations
-* Provides the **Create folder** action, which creates a folder at the specified path on the SFTP server.
+* Before you use the SFTP-SSH managed connector, review the known issues and limitations in the [SFTP-SSH managed connector reference](/connectors/sftpwithssh/).
-* Provides the **Rename file** action, which renames a file on the SFTP server.
+* Before you use the SFTP built-in connector, review the known issues and limitations in the [SFTP built-in connector reference](/azure/logic-apps/connectors/built-in/reference/sftp/).
-* Caches the connection to SFTP server *for up to 1 hour*. This capability improves performance and reduces how often the connector tries connecting to the server. To set the duration for this caching behavior, edit the [**ClientAliveInterval** property](https://man.openbsd.org/sshd_config#ClientAliveInterval) in the SSH configuration on your SFTP server.
+<a name="known-issues"></a>
-## How SFTP-SSH triggers work
+## Known issues
-<a name="polling-behavior"></a>
-### Polling behavior
+## Chunking
-SFTP-SSH triggers poll the SFTP file system and look for any file that changed since the last poll. Some tools let you preserve the timestamp when the files change. In these cases, you have to disable this feature so your trigger can work. Here are some common settings:
-
-| SFTP client | Action |
-|-|--|
| WinSCP | Go to **Options** > **Preferences** > **Transfer** > **Edit** > **Preserve timestamp** > **Disable** |
-| FileZilla | Go to **Transfer** > **Preserve timestamps of transferred files** > **Disable** |
-|||
-
-When a trigger finds a new file, the trigger checks that the new file is complete, and not partially written. For example, a file might have changes in progress when the trigger checks the file server. To avoid returning a partially written file, the trigger notes the timestamp for the file that has recent changes, but doesn't immediately return that file. The trigger returns the file only when polling the server again. Sometimes, this behavior might cause a delay that is up to twice the trigger's polling interval.
-
-<a name="trigger-recurrence-shift-drift"></a>
-
-## Trigger recurrence shift and drift (daylight saving time)
-
-Recurring connection-based triggers where you need to create a connection first, such as the managed SFTP-SSH trigger, differ from built-in triggers that run natively in Azure Logic Apps, such as the [Recurrence trigger](../connectors/connectors-native-recurrence.md). In recurring connection-based triggers, the recurrence schedule isn't the only driver that controls execution, and the time zone only determines the initial start time. Subsequent runs depend on the recurrence schedule, the last trigger execution, *and* other factors that might cause run times to drift or produce unexpected behavior. For example, unexpected behavior can include failure to maintain the specified schedule when daylight saving time (DST) starts and ends.
-
-To make sure that the recurrence time doesn't shift when DST takes effect, manually adjust the recurrence. That way, your workflow continues to run at the expected time or specified start time. Otherwise, the start time shifts one hour forward when DST starts and one hour backward when DST ends. For more information, see [Recurrence for connection-based triggers](../logic-apps/concepts-schedule-automated-recurring-tasks-workflows.md#recurrence-for-connection-based-triggers).
+For more information about how the SFTP-SSH managed connector can handle large files exceeding default size limits, see [SFTP-SSH managed connector reference - Chunking](/connectors/sftpwithssh/#chunking).
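As a quick sense of the tradeoff that the chunk-size setting controls, the number of transfer calls scales with ceil(file size / chunk size). Using the 15-MB default and 50-MB maximum chunk sizes that the chunking reference documents, a hypothetical 45-MB file gives:

```shell
# Number of upload calls = ceil(file size / chunk size), all sizes in MB.
file_mb=45
calls_at_15=$(( (file_mb + 15 - 1) / 15 ))   # default chunk size
calls_at_50=$(( (file_mb + 50 - 1) / 50 ))   # maximum chunk size
echo "15 MB chunks -> $calls_at_15 calls; 50 MB chunks -> $calls_at_50 call"
```

So raising the chunk size toward 50 MB reduces call volume, while lowering it toward 5 MB can help when individual calls time out.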
## Prerequisites * An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* Your SFTP server address and account credentials, so your workflow can access your SFTP account. You also need access to an SSH private key and the SSH private key password. To upload large files using chunking, you need both read and write access for the root folder on your SFTP server. Otherwise, you get a "401 Unauthorized" error.
-
- The SFTP-SSH connector supports both private key authentication and password authentication. However, the SFTP-SSH connector supports *only* the following private key formats, key exchange algorithms, encryption algorithms, and fingerprints:
-
- * **Private key formats**: RSA (Rivest Shamir Adleman) and DSA (Digital Signature Algorithm) keys in both OpenSSH and ssh.com formats. If your private key is in PuTTY (.ppk) file format, first [convert the key to the OpenSSH (.pem) file format](#convert-to-openssh).
- * **Key exchange algorithms**: Review [Key Exchange Method - SSH.NET](https://github.com/sshnet/SSH.NET#key-exchange-method).
- * **Encryption algorithms**: Review [Encryption Method - SSH.NET](https://github.com/sshnet/SSH.NET#encryption-method).
- * **Fingerprint**: MD5
-
- After you add an SFTP-SSH trigger or action to your workflow, you have to provide connection information for your SFTP server. When you provide your SSH private key for this connection, ***don't manually enter or edit the key***, which might cause the connection to fail. Instead, make sure that you ***copy the key*** from your SSH private key file, and ***paste*** that key into the connection details. For more information, see the [Connect to SFTP with SSH](#connect) section later this article.
-
-* Basic knowledge about [how to create logic apps](../logic-apps/quickstart-create-first-logic-app-workflow.md)
-
-* The logic app workflow where you want to access your SFTP account. To start with an SFTP-SSH trigger, [create a blank logic app workflow](../logic-apps/quickstart-create-first-logic-app-workflow.md). To use an SFTP-SSH action, start your workflow with another trigger, for example, the **Recurrence** trigger.
-
-## Considerations
-
-The following section describes considerations to review when you use this connector's triggers and actions.
-
-<a name="different-folders-trigger-processing-file-storage"></a>
+* Connection and authentication information to access your SFTP server, such as the server address, account credentials, access to an SSH private key, and the SSH private key password. For more information, see [SFTP-SSH managed connector reference - Authentication and permissions](/connectors/sftpwithssh/#authentication-and-permissions).
-### Use different SFTP folders for file upload and processing
+ > [!IMPORTANT]
+ >
+ > When you create your connection and enter your SSH private key in the **SSH private key** property, make sure to
+ > [follow the steps for providing the complete and correct value for this property](/connectors/sftpwithssh/#authentication-and-permissions).
+ > Otherwise, a non-valid key causes the connection to fail.
-On your SFTP server, use separate folders for storing uploaded files and for the trigger to monitor those files for processing. Otherwise, the trigger won't fire reliably and behaves unpredictably, for example, by skipping a random number of files that the trigger processes. However, this requirement means that you need a way to move files between those folders.
+* The logic app workflow where you want to access your SFTP account. To start with an SFTP-SSH trigger, you have to start with a blank workflow. To use an SFTP-SSH action, start your workflow with another trigger, such as the **Recurrence** trigger.
-If this trigger problem happens, remove the files from the folder that the trigger monitors, and use a different folder to store the uploaded files.
+<a name="add-sftp-trigger"></a>
-<a name="create-file"></a>
+## Add an SFTP trigger
-### Create file
+### [Consumption](#tab/consumption)
-To create a file on your SFTP server, you can use the SFTP-SSH **Create file** action. When this action creates the file, the Logic Apps service also automatically calls your SFTP server to get the file's metadata. However, if you move the newly created file before the Logic Apps service can make the call to get the metadata, you get a `404` error message, `'A reference was made to a file or folder which does not exist'`. To skip reading the file's metadata after file creation, follow the steps to [add and set the **Get all file metadata** property to **No**](#file-does-not-exist).
+1. In the [Azure portal](https://portal.azure.com), open your blank logic app workflow in the designer.
-> [!IMPORTANT]
-> If you use chunking with SFTP-SSH operations that create files on your SFTP server,
-> these operations create temporary `.partial` and `.lock` files. These files help
-> the operations use chunking. Don't remove or change these files. Otherwise,
-> the file operations fail. When the operations finish, they delete the temporary files.
+1. On the designer, under the search box, select **Standard**. In the search box, enter **sftp**.
-<a name="convert-to-openssh"></a>
+1. From the triggers list, select the [SFTP-SSH trigger](/connectors/sftpwithssh/#triggers) that you want to use.
-## Convert PuTTY-based key to OpenSSH
+1. If prompted, provide the necessary [connection information](/connectors/sftpwithssh/#creating-a-connection). When you're done, select **Create**.
-The PuTTY format and OpenSSH format use different file name extensions. The PuTTY format uses the .ppk, or PuTTY Private Key, file name extension. The OpenSSH format uses the .pem, or Privacy Enhanced Mail, file name extension. If your private key is in PuTTY format, and you have to use OpenSSH format, first convert the key to the OpenSSH format by following these steps:
+1. After the trigger information box appears, provide the necessary details for your selected trigger. For more information, see [SFTP-SSH managed connector triggers reference](/connectors/sftpwithssh/#triggers).
-### Unix-based OS
+1. When you're done, save your workflow. On the designer toolbar, select **Save**.
-1. If you don't have the PuTTY tools installed on your system, do that now, for example:
+### [Standard](#tab/standard)
- `sudo apt-get install -y putty`
+<a name="built-in-connector-trigger"></a>
-1. Run this command, which creates a file that you can use with the SFTP-SSH connector:
+#### Built-in connector trigger
- `puttygen <path-to-private-key-file-in-PuTTY-format> -O private-openssh -o <path-to-private-key-file-in-OpenSSH-format>`
+1. In the [Azure portal](https://portal.azure.com), open your blank logic app workflow in the designer.
- For example:
+1. On the designer, select **Choose an operation**. Under the search box, select **Built-in**.
- `puttygen /tmp/sftp/my-private-key-putty.ppk -O private-openssh -o /tmp/sftp/my-private-key-openssh.pem`
+1. In the search box, enter **sftp**. From the triggers list, select the [SFTP trigger](/azure/logic-apps/connectors/built-in/reference/sftp/#triggers) that you want to use.
-### Windows OS
+1. If prompted, provide the necessary [connection information](/azure/logic-apps/connectors/built-in/reference/sftp/#authentication). When you're done, select **Create**.
-1. If you haven't done so already, [download the latest PuTTY Generator (puttygen.exe) tool](https://www.puttygen.com), and then open the tool.
+1. After the trigger information box appears, provide the necessary details for your selected trigger. For more information, see [SFTP built-in connector triggers reference](/azure/logic-apps/connectors/built-in/reference/sftp/#triggers).
-1. In the PuTTY Key Generator tool (puttygen.exe), under **Actions**, select **Load**.
+1. When you're done, save your workflow. On the designer toolbar, select **Save**.
- ![Screenshot showing the PuTTY Key Generator tool and the "Actions" section with "Load" selected.](./media/connectors-sftp-ssh/puttygen-load.png)
+<a name="managed-connector-trigger"></a>
-1. Browse to your private key file in PuTTY format, and select **Open**.
+#### Managed connector trigger
-1. From the **Conversions** menu, select **Export OpenSSH key**.
+1. In the [Azure portal](https://portal.azure.com), open your blank logic app workflow in the designer.
- ![Screenshot showing the PuTTY Generator tool with the "Conversions" menu open and "Export OpenSSH key" selected.](./media/connectors-sftp-ssh/export-openssh-key.png)
+1. On the designer, select **Choose an operation**. Under the search box, select **Azure**.
-1. Save the private key file with the **.pem** file name extension.
+1. In the search box, enter **sftp**. From the triggers list, select the [SFTP-SSH trigger](/connectors/sftpwithssh/#triggers) that you want to use.
-## Find the MD5 fingerprint
+1. If prompted, provide the necessary [connection information](/connectors/sftpwithssh/#creating-a-connection). When you're done, select **Create**.
-The SFTP-SSH connector rejects a connection if the SFTP server's fingerprint doesn't match the expected fingerprint. To get the MD5 fingerprint, which is a sequence with 16 pairs of hex digits delimited by colons, try the following options.
+1. After the trigger information box appears, provide the necessary details for your selected trigger. For more information, see [SFTP-SSH managed connector triggers reference](/connectors/sftpwithssh/#triggers).
-### You have the key
+1. When you're done, save your workflow. On the designer toolbar, select **Save**.
-The MD5 key is a 47-character string delimited by colons. To get the MD5 fingerprint when you have the key, you can use tools such as `ssh-keygen`, for example:
-
-```bash
-ssh-keygen -l -f id_rsa.pub -E md5
-```
-
-### You don't have the key
-
-To get an MD5 fingerprint when you don't have a key, you can use the latest [Server and Protocol Information Dialog tool by WinSCP](https://winscp.net/eng/docs/ui_fsinfo), or you can use the PuTTY Configuration tool instead:
-
-1. In the PuTTY Configuration tool (putty.exe), in the **Category** window, open **Connection** > **SSH** > **Host keys**.
-
-1. Under **Host key algorithm preference**, in the **Algorithm selection policy** list, check that **RSA** appears at the top.
-
-1. If **RSA** doesn't appear at the top, select **RSA**, and then select **Up** until **RSA** moves to the top.
-
- ![Screenshot showing the PuTTY Configuration tool, "Connection" category expanded to show "Host keys" selected. On right pane, "RSA" and "Up" button appear selected.](media/connectors-sftp-ssh/putty-select-rsa-key.png)
-
-1. Connect to your SFTP server with PuTTY. After the connection is created, when the PUTTY security alert appears, select **More info**.
-
- ![Screenshot showing the PuTTY terminal and security alert with "More info" selected.](media/connectors-sftp-ssh/putty-security-alert-more-info.png)
-
- > [!TIP]
- >
- > If the security alert doesn't appear, try clearing the **SshHostKeys** entry. Open the Windows registry editor,
- > and browse to the following entry:
- >
- > **Computer\HKEY_CURRENT_USER\Software\SimonTatham\PuTTY\SshHostKeys**
-
-1. After the **PuTTY: information about the server's host key** box appears, find the **MD5 fingerprint** property, and copy the *47-character string value*, for example.
-
- ![Screenshot showing the more information box with the "MD5 fingerprint" property and the string with the last 47 characters selected for copying.](medi5-fingerprint-key.png)
-
-<a name="connect"></a>
-
-## Connect to SFTP with SSH
--
-1. Sign in to the [Azure portal](https://portal.azure.com), and open your logic app in Logic App Designer, if not open already.
-
-1. For blank logic apps, in the search box, enter `sftp ssh` as your filter. Under the triggers list, select the trigger you want.
-
- -or-
-
- For existing logic apps, under the last step where you want to add an action, select **New step**. In the search box, enter `sftp ssh` as your filter. Under the actions list, select the action you want.
-
- To add an action between steps, move your pointer over the arrow between steps. Select the plus sign (**+**) that appears, and then select **Add an action**.
-
-1. Provide the necessary details for your connection.
-
- > [!IMPORTANT]
- >
- > When you enter your SSH private key in the **SSH private key** property, follow these additional steps, which help
- > make sure you provide the complete and correct value for this property. An invalid key causes the connection to fail.
-
- Although you can use any text editor, here are sample steps that show how to correctly copy and paste your key by using Notepad.exe as an example.
-
- 1. Open your SSH private key file in a text editor. These steps use Notepad as the example.
-
- 1. On the Notepad **Edit** menu, select **Select All**.
+
- 1. Select **Edit** > **Copy**.
+When you save your workflow, this step automatically publishes your updates to your deployed logic app, which is live in Azure. With only a trigger, your workflow just checks the SFTP server based on your specified schedule. You have to [add an action](#add-sftp-action) that responds to the trigger and does something with the trigger outputs.
- 1. In the SFTP-SSH trigger or action, *paste the complete* copied key in the **SSH private key** property, which supports multiple lines. ***Don't manually enter or edit the key***.
+For example, the trigger named **When a file is added or modified** starts a workflow when a file is added or changed on an SFTP server. As a subsequent action, you can add a condition that checks whether the file content meets your specified criteria. If the content meets the condition, use the action named **Get file content** to get the file content, and then use another action to put that file content into a different folder on the SFTP server.
-1. After you finish entering the connection details, select **Create**.
+<a name="add-sftp-action"></a>
-1. Now provide the necessary details for your selected trigger or action and continue building your logic app's workflow.
+## Add an SFTP action
-<a name="change-chunk-size"></a>
+Before you can use an SFTP action, your workflow must already start with a trigger, which can be any kind that you choose. For example, you can use the generic **Recurrence** built-in trigger to start your workflow on a specific schedule.
-## Override chunk size
+### [Consumption](#tab/consumption)
-To override the default adaptive behavior that chunking uses, you can specify a constant chunk size from 5 MB to 50 MB.
+1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer.
-1. In the action's upper-right corner, select the ellipses button (**...**), and then select **Settings**.
+1. Under the trigger or action where you want to add the action, select **New step**.
- ![Open SFTP-SSH settings](./media/connectors-sftp-ssh/sftp-ssh-connector-setttings.png)
+ Or, to add the action between steps, move your pointer over the connecting arrow. Select the plus sign (**+**) that appears, and then select **Add an action**.
-1. Under **Content Transfer**, in the **Chunk size** property, enter an integer value from `5` to `50`, for example:
+1. Under the **Choose an operation** search box, select **Standard**. In the search box, enter **sftp**.
- ![Specify chunk size to use instead](./media/connectors-sftp-ssh/specify-chunk-size-override-default.png)
+1. From the actions list, select the [SFTP-SSH action](/connectors/sftpwithssh/) that you want to use.
-1. After you finish, select **Done**.
+1. If prompted, provide the necessary [connection information](/connectors/sftpwithssh/#creating-a-connection). When you're done, select **Create**.
-## Examples
+1. After the action information box appears, provide the necessary details for your selected action. For more information, see [SFTP-SSH managed connector actions reference](/connectors/sftpwithssh/#actions).
-<a name="file-added-modified"></a>
+1. When you're done, save your workflow. On the designer toolbar, select **Save**.
-### SFTP - SSH trigger: When a file is added or modified
+### [Standard](#tab/standard)
-This trigger starts a workflow when a file is added or changed on an SFTP server. As example follow-up actions, the workflow can use a condition to check whether the file content meets specified criteria. If the content meets the condition, the **Get file content** SFTP-SSH action can get the content, and then another SFTP-SSH action can put that file in a different folder on the SFTP server.
+<a name="built-in-connector-action"></a>
-**Enterprise example**: You can use this trigger to monitor an SFTP folder for new files that represent customer orders. You can then use an SFTP-SSH action such as **Get file content** so you get the order's contents for further processing and store that order in an orders database.
+#### Built-in connector action
-<a name="get-content"></a>
+1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer.
-### SFTP - SSH action: Get file content using path
+1. Under the trigger or action where you want to add the action, select the plus sign (**+**), and then select **Add an action**.
-This action gets the content from a file on an SFTP server by specifying the file path. So for example, you can add the trigger from the previous example and a condition that the file's content must meet. If the condition is true, the action that gets the content can run.
+ Or, to add an action between steps, select the plus sign (**+**) on the connecting arrow, and then select **Add an action**.
-<a name="troubleshooting-errors"></a>
+1. On the **Add an action** pane, under the search box, select **Built-in**. In the search box, enter **sftp**.
-## Troubleshoot problems
+1. From the actions list, select the [SFTP action](/azure/logic-apps/connectors/built-in/reference/sftp/#actions) that you want to use.
-This section describes possible solutions to common errors or problems.
+1. If prompted, provide the necessary [connection information](/connectors/sftpwithssh/#creating-a-connection). When you're done, select **Create**.
-<a name="connection-attempt-failed"></a>
+1. After the action information box appears, provide the necessary details for your selected action. For more information, see [SFTP built-in connector actions reference](/azure/logic-apps/connectors/built-in/reference/sftp/#actions).
-### 504 error: "A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond" or "Request to the SFTP server has taken more than '00:00:30' seconds"
+1. When you're done, save your workflow. On the designer toolbar, select **Save**.
-This error can happen when your logic app can't successfully establish a connection with the SFTP server. There might be different reasons for this problem, so try these troubleshooting options:
+<a name="managed-connector-action"></a>
-* The connection timeout is 20 seconds. Check that your SFTP server has good performance and intermediate devices, such as firewalls, aren't adding overhead.
+#### Managed connector action
-* If you have a firewall set up, make sure that you add the **Managed connector IP** addresses for your region to the approved list. To find the IP addresses for your logic app's region, see [Managed connector outbound IPs - Azure Logic Apps](/connectors/common/outbound-ip-addresses).
+1. In the [Azure portal](https://portal.azure.com), open your workflow in the designer.
-* If this error happens intermittently, change the **Retry policy** setting on the SFTP-SSH action to a retry count higher than the default four retries.
+1. Under the trigger or action where you want to add the action, select **New step**.
-* Check whether your SFTP server puts a limit on the number of connections from each IP address. Any such limit hinders communication between the connector and the SFTP server. Make sure to remove this limit.
+ Or, to add an action between steps, move your pointer over the connecting arrow. Select the plus sign (**+**) that appears, and then select **Add an action**.
-* To reduce connection establishment cost, in the SSH configuration for your SFTP server, increase the [**ClientAliveInterval**](https://man.openbsd.org/sshd_config#ClientAliveInterval) property to around one hour.
+1. Under the **Choose an operation** search box, select **Azure**. In the search box, enter **sftp**.
-* Review the SFTP server log to check whether the request from logic app reached the SFTP server. To get more information about the connectivity problem, you can also run a network trace on your firewall and your SFTP server.
+1. From the actions list, select the [SFTP-SSH action](/connectors/sftpwithssh/) that you want to use.
-<a name="file-does-not-exist"></a>
+1. If prompted, provide the necessary [connection information](/connectors/sftpwithssh/#creating-a-connection). When you're done, select **Create**.
-### 404 error: "A reference was made to a file or folder which does not exist"
+1. After the action information box appears, provide the necessary details for your selected action. For more information, see [SFTP-SSH managed connector actions reference](/connectors/sftpwithssh/#actions).
-This error can happen when your workflow creates a file on your SFTP server with the SFTP-SSH **Create file** action, but immediately moves that file before the Logic Apps service can get the file's metadata. When your workflow runs the **Create file** action, the Logic Apps service automatically calls your SFTP server to get the file's metadata. However, if your logic app moves the file, the Logic Apps service can no longer find the file so you get the `404` error message.
+1. When you're done, save your workflow. On the designer toolbar, select **Save**.
-If you can't avoid or delay moving the file, you can skip reading the file's metadata after file creation instead by following these steps:
+
-1. In the **Create file** action, open the **Add new parameter** list, select the **Get all file metadata** property, and set the value to **No**.
+For example, the action named **Get file content using path** gets the content from a file on an SFTP server by specifying the file path. You can use the trigger from the previous example and a condition that the file content must meet. If the condition is true, a subsequent action can get the content.
-1. If you need this file metadata later, you can use the **Get file metadata** action.
+
-## Connector reference
+## Troubleshooting
-For more technical details about this connector, such as triggers, actions, and limits as described by the connector's Swagger file, see the [connector's reference page](/connectors/sftpwithssh/).
+For more information, see the following documentation:
-> [!NOTE]
-> For logic apps in an [integration service environment (ISE)](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md),
-> this connector's ISE-labeled version require chunking to use the [ISE message limits](../logic-apps/logic-apps-limits-and-config.md#message-size-limits) instead.
+- [SFTP-SSH managed connector reference - Troubleshooting](/connectors/sftpwithssh/#troubleshooting)
+- [SFTP built-in connector reference - Troubleshooting](/azure/logic-apps/connectors/built-in/reference/sftp#troubleshooting)
## Next steps
-* Learn about other [Logic Apps connectors](../connectors/apis-list.md)
+* [Managed connectors for Azure Logic Apps](/connectors/connector-reference/connector-reference-logicapps-connectors)
+* [Built-in connectors for Azure Logic Apps](../connectors/built-in.md)
cosmos-db How To Python Manage Databases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-python-manage-databases.md
The preceding code snippet displays output similar to the following example cons
## Does database exist?
-The PyMongo driver for Python creates a database if it doesn't exist when you access it. However, we recommend that instead you use the [MongoDB extension commands](/azure/cosmos-db/mongodb/custom-commands) to manage data stored in Azure Cosmos DB's API for MongoDB. To create a new database if it doesn't exist, use the [create database extension](/azure/cosmos-db/mongodb/custom-commands#create-database) as shown in the following code snippet.
+The PyMongo driver for Python creates a database if it doesn't exist when you access it. However, we recommend that instead you use the [MongoDB extension commands](./custom-commands.md) to manage data stored in Azure Cosmos DB's API for MongoDB. To create a new database if it doesn't exist, use the [create database extension](./custom-commands.md#create-database) as shown in the following code snippet.
To see if the database already exists before using it, get the list of current databases with the [list_database_names](https://pymongo.readthedocs.io/en/stable/api/pymongo/mongo_client.html#pymongo.mongo_client.MongoClient.list_database_names) method.
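The check-then-create pattern above can be sketched in plain Python. This is a minimal sketch, assuming `client` is a PyMongo `MongoClient`; the helper names, database name, and RU/s throughput value are illustrative, not from the article:

```python
def create_database_command(throughput=400):
    # Cosmos DB for MongoDB extension command that creates a database with
    # dedicated throughput; 400 RU/s is an illustrative value.
    return {"customAction": "CreateDatabase", "offerThroughput": throughput}

def ensure_database(client, db_name):
    # Only issue the create command when the database doesn't exist yet,
    # instead of relying on PyMongo's implicit create-on-first-write.
    if db_name not in client.list_database_names():
        client[db_name].command(create_database_command())
    return client[db_name]
```

In practice you would call something like `ensure_database(MongoClient("<connection-string>"), "adventureworks")`; the two helpers are hypothetical conveniences, not part of PyMongo itself.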
The preceding code snippet displays output similar to the following example cons
## Get database object instance
-If a database doesn't exist, the PyMongo driver for Python creates it when you access it. However, we recommend that instead you use the [MongoDB extension commands](/azure/cosmos-db/mongodb/custom-commands) to manage data stored in Azure Cosmos DB's API for MongoDB. The pattern is shown above in the section [Does database exist?](#does-database-exist).
+If a database doesn't exist, the PyMongo driver for Python creates it when you access it. However, we recommend that instead you use the [MongoDB extension commands](./custom-commands.md) to manage data stored in Azure Cosmos DB's API for MongoDB. The pattern is shown above in the section [Does database exist?](#does-database-exist).
When working with PyMongo, you access databases using attribute style access on MongoClient instances. Once you have a database instance, you can use database level operations as shown below.
The preceding code snippet displays output similar to the following example cons
## See also

-- [Get started with Azure Cosmos DB for MongoDB and Python](how-to-python-get-started.md)
+- [Get started with Azure Cosmos DB for MongoDB and Python](how-to-python-get-started.md)
cosmos-db How To Python Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-python-get-started.md
In your *app.py*:
:::code language="python" source="~/cosmos-db-nosql-python-samples/003-how-to/app_aad_default.py" id="credential":::

> [!IMPORTANT]
-> For details on how to add the correct role to enable `DefaultAzureCredential` to work, see [Configure role-based access control with Azure Active Directory for your Azure Cosmos DB account](/azure/cosmos-db/how-to-setup-rbac). In particular, see the section on creating roles and assigning them to a principal ID.
+> For details on how to add the correct role to enable `DefaultAzureCredential` to work, see [Configure role-based access control with Azure Active Directory for your Azure Cosmos DB account](../how-to-setup-rbac.md). In particular, see the section on creating roles and assigning them to a principal ID.
#### Create CosmosClient with a custom credential implementation
The following guides show you how to use each of these classes to build your app
|--|--|
| [Create a database](how-to-python-create-database.md) | Create databases |
| [Create container](how-to-python-create-container.md) | Create containers |
-| [Item examples](/azure/cosmos-db/nosql/samples-python#item-examples) | Point read a specific item |
+| [Item examples](./samples-python.md#item-examples) | Point read a specific item |
## See also
The following guides show you how to use each of these classes to build your app
Now that you've connected to an API for NoSQL account, use the next guide to create and manage databases.

> [!div class="nextstepaction"]
-> [Create a database in Azure Cosmos DB for NoSQL using Python](how-to-python-create-database.md)
+> [Create a database in Azure Cosmos DB for NoSQL using Python](how-to-python-create-database.md)
cosmos-db Kafka Connector Sink https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/kafka-connector-sink.md
You can learn more about change feed in Azure Cosmos DB with the following docs:
* [Reading from change feed](read-change-feed.md)

You can learn more about bulk operations in the V4 Java SDK with the following docs:
-* [Perform bulk operations on Azure Cosmos DB data](/azure/cosmos-db/nosql/bulk-executor-java)
+* [Perform bulk operations on Azure Cosmos DB data](./bulk-executor-java.md)
cosmos-db Throughput Control Spark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/throughput-control-spark.md
The [Spark Connector](quickstart-spark.md) allows you to communicate with Azure Cosmos DB using [Apache Spark](https://spark.apache.org/). This article describes how the throughput control feature works. Check out our [Spark samples in GitHub](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/cosmos/azure-cosmos-spark_3_2-12/Samples) to get started using throughput control.

> [!TIP]
-> This article documents the use of global throughput control groups in the Azure Cosmos DB Spark Connector, but the functionality is also available in the [Java SDK](/azure/cosmos-db/nosql/sdk-java-v4). In the SDK, you can also use Local Throughput Control groups to limit the RU consumption in the context of a single client connection instance. For example, you can apply this to different operations within a single microservice, or maybe to a single data loading program. Take a look at a code snippet [here](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/cosmos/azure-cosmos/src/samples/java/com/azure/cosmos/ThroughputControlCodeSnippet.java) for how to build a CosmosAsyncClient with both local and global control groups.
+> This article documents the use of global throughput control groups in the Azure Cosmos DB Spark Connector, but the functionality is also available in the [Java SDK](./sdk-java-v4.md). In the SDK, you can also use Local Throughput Control groups to limit the RU consumption in the context of a single client connection instance. For example, you can apply this to different operations within a single microservice, or maybe to a single data loading program. Take a look at a code snippet [here](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/cosmos/azure-cosmos/src/samples/java/com/azure/cosmos/ThroughputControlCodeSnippet.java) for how to build a CosmosAsyncClient with both local and global control groups.
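As a companion to the tip above, a global throughput control group in the Spark Connector is configured through the connector's `spark.cosmos.throughputControl.*` write options. The sketch below is an illustration under stated assumptions: the group name, the 0.95 threshold, and the metadata database/container names are placeholders, not values from this article:

```python
# Options enabling a global throughput control group for the Spark Connector.
# All values below (group name, 0.95 threshold, metadata database/container)
# are illustrative assumptions.
throughput_control_config = {
    "spark.cosmos.throughputControl.enabled": "true",
    "spark.cosmos.throughputControl.name": "SourceContainerThroughputControl",
    "spark.cosmos.throughputControl.targetThroughputThreshold": "0.95",
    "spark.cosmos.throughputControl.globalControl.database": "database-v4",
    "spark.cosmos.throughputControl.globalControl.container": "ThroughputControl",
}

def with_throughput_control(cosmos_config, control_config=throughput_control_config):
    # Merge the control-group options into an existing connection config
    # before passing the result to df.write.format("cosmos.oltp").options(...).
    return {**cosmos_config, **control_config}
```

In a notebook, you would combine these options with your usual account endpoint, key, database, and container settings when writing the DataFrame.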
## Why is throughput control important?
In each client record, the `loadFactor` attribute represents the load on the giv
* [Spark samples in GitHub](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/cosmos/azure-cosmos-spark_3_2-12/Samples).
* [Manage data with Azure Cosmos DB Spark 3 OLTP Connector for API for NoSQL](quickstart-spark.md).
-* Learn more about [Apache Spark](https://spark.apache.org/).
+* Learn more about [Apache Spark](https://spark.apache.org/).
data-factory Choose The Right Integration Runtime Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/choose-the-right-integration-runtime-configuration.md
Previously updated : 01/10/2023 Last updated : 01/12/2023 # Choose the right integration runtime configuration for your scenario - The integration runtime is a very important part of the infrastructure for the data integration solution provided by Azure Data Factory. This requires you to fully consider how to adapt to the existing network structure and data source at the beginning of designing the solution, as well as consider performance, security and cost. ## Comparison of different types of integration runtimes
data-factory Concepts Data Flow Column Pattern https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-data-flow-column-pattern.md
Previously updated : 11/23/2021 Last updated : 01/11/2023 # Using column patterns in mapping data flow
Use the [expression builder](concepts-data-flow-expression-builder.md) to enter
:::image type="content" source="media/data-flow/edit-column-pattern.png" alt-text="Screenshot shows the Derived column's settings tab.":::
-The above column pattern matches every column of type double and creates one derived column per match. By stating `$$` as the column name field, each matched column is updated with the same name. The value of the each column is the existing value rounded to two decimal points.
+The above column pattern matches every column of type double and creates one derived column per match. By stating `$$` as the column name field, each matched column is updated with the same name. The value of each column is the existing value rounded to two decimal points.
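Concretely, the pattern described above pairs a matching condition with a column-name field and a value expression in the data flow expression language. A sketch of those three fields, as they would appear in the derived column's pattern editor:

```
Matching condition:  type == 'double'
Column name:         $$
Value expression:    round($$, 2)
```

Here `$$` refers to the matched column, so each double column keeps its name and has its value rounded to two decimal places.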
To verify your matching condition is correct, you can validate the output schema of defined columns in the **Inspect** tab or get a snapshot of the data in the **Data preview** tab.
data-factory Concepts Data Flow Flowlet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-data-flow-flowlet.md
Previously updated : 11/11/2021 Last updated : 01/11/2023 # Flowlets in mapping data flow
data-factory Concepts Data Flow Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-data-flow-overview.md
Previously updated : 08/26/2021 Last updated : 01/11/2023 # Mapping data flows in Azure Data Factory
data-factory Concepts Data Flow Performance Sinks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-data-flow-performance-sinks.md
Previously updated : 10/06/2021 Last updated : 01/11/2023 # Optimizing sinks
data-factory Concepts Data Flow Performance Transformations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-data-flow-performance-transformations.md
Previously updated : 09/29/2021 Last updated : 01/11/2023 # Optimizing transformations
Unlike merge join in tools like SSIS, the join transformation isn't a mandatory
## Window transformation performance
-The [Window transformation in mapping data flow](data-flow-window.md) partitions your data by value in columns that you select as part of the ```over()``` clause in the transformation settings. There are a number of very popular aggregate and analytical functions that are exposed in the Windows transformation. However, if your use case is to generate a window over your entire dataset for the purpose of ranking ```rank()``` or row number ```rowNumber()```, it is recommended that you instead use the [Rank transformation](data-flow-rank.md) and the [Surrogate Key transformation](data-flow-surrogate-key.md). Those transformation will perform better again full dataset operations using those functions.
+The [Window transformation in mapping data flow](data-flow-window.md) partitions your data by value in columns that you select as part of the ```over()``` clause in the transformation settings. There are a number of very popular aggregate and analytical functions that are exposed in the Window transformation. However, if your use case is to generate a window over your entire dataset for the purpose of ranking ```rank()``` or row number ```rowNumber()```, it is recommended that you instead use the [Rank transformation](data-flow-rank.md) and the [Surrogate Key transformation](data-flow-surrogate-key.md). Those transformations will perform better against full dataset operations using those functions.
## Repartitioning skewed data
data-factory Connect Data Factory To Azure Purview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connect-data-factory-to-azure-purview.md
Previously updated : 10/25/2021 Last updated : 01/11/2023 # Connect Data Factory to Microsoft Purview [!INCLUDE[appliesto-adf-xxx-md](includes/appliesto-adf-xxx-md.md)]
-[Microsoft Purview](../purview/overview.md) is a unified data governance service that helps you manage and govern your on-premises, multi-cloud, and software-as-a-service (SaaS) data. You can connect your data factory to Microsoft Purview. That connection allows you to use Microsoft Purview for capturing lineage data, and to discover and explore Microsoft Purview assets.
+[Microsoft Purview](../purview/overview.md) is a unified data governance service that helps you manage and govern your on-premises, multicloud, and software-as-a-service (SaaS) data. You can connect your data factory to Microsoft Purview. That connection allows you to use Microsoft Purview for capturing lineage data, and to discover and explore Microsoft Purview assets.
## Connect Data Factory to Microsoft Purview
data-factory Connector Amazon S3 Compatible Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-amazon-s3-compatible-storage.md
Previously updated : 12/13/2021 Last updated : 01/11/2023 # Copy data from Amazon S3 Compatible Storage by using Azure Data Factory or Synapse Analytics
data-factory Connector Ftp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-ftp.md
Previously updated : 11/29/2021 Last updated : 01/11/2023
data-factory Connector Google Cloud Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-google-cloud-storage.md
Previously updated : 12/13/2021 Last updated : 01/11/2023
data-factory Connector Hdfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-hdfs.md
Previously updated : 12/13/2021 Last updated : 01/11/2023
data-factory Connector Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-mysql.md
Previously updated : 09/09/2021 Last updated : 01/11/2023
data-factory Connector Servicenow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-servicenow.md
Previously updated : 09/09/2021 Last updated : 01/11/2023 # Copy data from ServiceNow using Azure Data Factory or Synapse Analytics
data-factory Connector Troubleshoot Parquet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-parquet.md
Previously updated : 10/13/2021 Last updated : 01/11/2023
data-factory Continuous Integration Delivery Hotfix Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/continuous-integration-delivery-hotfix-environment.md
Previously updated : 09/24/2021 Last updated : 01/11/2023
data-factory Continuous Integration Delivery Linked Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/continuous-integration-delivery-linked-templates.md
Previously updated : 09/24/2021 Last updated : 01/11/2023
data-factory Copy Activity Preserve Metadata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/copy-activity-preserve-metadata.md
Previously updated : 09/09/2021 Last updated : 01/11/2023
data-factory Industry Sap Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/industry-sap-overview.md
Previously updated : 08/11/0222 Last updated : 01/11/2023 # SAP knowledge center overview
data-factory Tutorial Incremental Copy Change Data Capture Feature Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-incremental-copy-change-data-capture-feature-portal.md
Previously updated : 12/15/2022 Last updated : 01/11/2023 # Incrementally load data from Azure SQL Managed Instance to Azure Storage using change data capture (CDC)
If you don't have an Azure subscription, create a [free](https://azure.microsoft
## Create a data factory
-1. Launch **Microsoft Edge** or **Google Chrome** web browser. Currently, Data Factory UI is supported only in Microsoft Edge and Google Chrome web browsers.
-1. On the left menu, select **Create a resource** > **Data + Analytics** > **Data Factory**:
-
- :::image type="content" source="./media/tutorial-incremental-copy-change-data-capture-feature-portal/new-azure-data-factory-menu.png" alt-text="Data Factory selection in the &quot;New&quot; pane":::
-
-2. In the **New data factory** page, enter **ADFTutorialDataFactory** for the **name**.
-
- :::image type="content" source="./media/tutorial-incremental-copy-change-data-capture-feature-portal/new-azure-data-factory.png" alt-text="New data factory page":::
-
- The name of the Azure data factory must be **globally unique**. If you receive the following error, change the name of the data factory (for example, yournameADFTutorialDataFactory) and try creating again. See [Data Factory - Naming Rules](naming-rules.md) article for naming rules for Data Factory artifacts.
-
- *Data factory name "ADFTutorialDataFactory" is not available.*
-3. Select **V2** for the **version**.
-4. Select your Azure **subscription** in which you want to create the data factory.
-5. For the **Resource Group**, do one of the following steps:
-
- 1. Select **Use existing**, and select an existing resource group from the drop-down list.
- 2. Select **Create new**, and enter the name of a resource group.
-
- To learn about resource groups, see [Using resource groups to manage your Azure resources](../azure-resource-manager/management/overview.md).
-5. Select the **location** for the data factory. Only locations that are supported are displayed in the drop-down list. The data stores (Azure Storage, Azure SQL Database, etc.) and computes (HDInsight, etc.) used by data factory can be in other regions.
-6. De-select **Enable GIT**.
-7. Click **Create**.
-8. Once the deployment is complete, click on **Go to resource**
-
- :::image type="content" source="./media/tutorial-incremental-copy-change-data-capture-feature-portal/data-factory-deploy-complete.png" alt-text="Screenshot shows a message that your deployment is complete and an option to go to resource.":::
-9. After the creation is complete, you see the **Data Factory** page as shown in the image.
-
- :::image type="content" source="./media/doc-common-process/data-factory-home-page.png" alt-text="Home page for the Azure Data Factory, with the Open Azure Data Factory Studio tile.":::
-
-10. Select **Open** on the **Open Azure Data Factory Studio** tile to launch the Azure Data Factory user interface (UI) in a separate tab.
-11. In the home page, switch to the **Manage** tab in the left panel as shown in the following image:
-
- :::image type="content" source="media/doc-common-process/get-started-page-manage-button.png" alt-text="Screenshot that shows the Manage button.":::
+Follow the steps in the article [Quickstart: Create a data factory by using the Azure portal](quickstart-create-data-factory.md) to create a data factory if you don't already have one to work with.
## Create linked services

You create linked services in a data factory to link your data stores and compute services to the data factory. In this section, you create linked services to your Azure Storage account and Azure SQL MI.
data-factory Tutorial Incremental Copy Change Tracking Feature Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-incremental-copy-change-tracking-feature-powershell.md
Previously updated : 02/18/2021 Last updated : 01/11/2023 # Incrementally load data from Azure SQL Database to Azure Blob Storage using change tracking information using PowerShell
data-factory Tutorial Transform Data Spark Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-transform-data-spark-portal.md
Previously updated : 06/07/2021 Last updated : 01/11/2023 # Transform data in the cloud by using a Spark activity in Azure Data Factory
If you don't have an Azure subscription, create a [free account](https://azure.m
## Create a data factory
-1. Launch **Microsoft Edge** or **Google Chrome** web browser. Currently, Data Factory UI is supported only in Microsoft Edge and Google Chrome web browsers.
-1. Select **New** on the left menu, select **Data + Analytics**, and then select **Data Factory**.
-
- :::image type="content" source="./media/tutorial-transform-data-spark-portal/new-azure-data-factory-menu.png" alt-text="Data Factory selection in the &quot;New&quot; pane":::
-1. In the **New data factory** pane, enter **ADFTutorialDataFactory** under **Name**.
-
- :::image type="content" source="./media/tutorial-transform-data-spark-portal/new-azure-data-factory.png" alt-text="&quot;New data factory&quot; pane":::
-
- The name of the Azure data factory must be *globally unique*. If you see the following error, change the name of the data factory. (For example, use **&lt;yourname&gt;ADFTutorialDataFactory**). For naming rules for Data Factory artifacts, see the [Data Factory - naming rules](naming-rules.md) article.
-
- :::image type="content" source="./media/tutorial-transform-data-spark-portal/name-not-available-error.png" alt-text="Error when a name is not available":::
-1. For **Subscription**, select your Azure subscription in which you want to create the data factory.
-1. For **Resource Group**, take one of the following steps:
-
- - Select **Use existing**, and select an existing resource group from the drop-down list.
- - Select **Create new**, and enter the name of a resource group.
-
- Some of the steps in this quickstart assume that you use the name **ADFTutorialResourceGroup** for the resource group. To learn about resource groups, see [Using resource groups to manage your Azure resources](../azure-resource-manager/management/overview.md).
-1. For **Version**, select **V2**.
-1. For **Location**, select the location for the data factory.
-
- For a list of Azure regions in which Data Factory is currently available, select the regions that interest you on the following page, and then expand **Analytics** to locate **Data Factory**: [Products available by region](https://azure.microsoft.com/global-infrastructure/services/). The data stores (like Azure Storage and Azure SQL Database) and computes (like HDInsight) that Data Factory uses can be in other regions.
-
-1. Select **Create**.
-
-1. After the creation is complete, you see the **Data factory** page. Select the **Author & Monitor** tile to start the Data Factory UI application on a separate tab.
-
- :::image type="content" source="./media/tutorial-transform-data-spark-portal/data-factory-home-page.png" alt-text="Home page for the data factory, with the &quot;Author & Monitor&quot; tile":::
+Follow the steps in the article [Quickstart: Create a data factory by using the Azure portal](quickstart-create-data-factory.md) to create a data factory if you don't already have one to work with.
## Create linked services

You author two linked services in this section:
data-lake-analytics Data Lake Analytics Cicd Manage Assemblies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-cicd-manage-assemblies.md
Title: Manage U-SQL assemblies in a CI/CD pipeline - Azure Data Lake description: 'Learn the best practices for managing U-SQL C# assemblies in a CI/CD pipeline with Azure DevOps.'-- Last updated 10/30/2018
data-lake-analytics Data Lake Analytics Cicd Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-cicd-overview.md
Title: How to set up a CI/CD pipeline for Azure Data Lake Analytics description: Learn how to set up continuous integration and continuous deployment for Azure Data Lake Analytics.--- Last updated 09/14/2018
data-lake-analytics Data Lake Analytics Cicd Test https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-cicd-test.md
Title: How to test your Azure Data Lake Analytics code description: 'Learn how to add test cases for U-SQL and extended C# code for Azure Data Lake Analytics.'--- Last updated 08/30/2019
data-lake-analytics Data Lake Analytics Data Lake Tools Local Run https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-data-lake-tools-local-run.md
Title: Run Azure Data Lake U-SQL scripts on your local machine description: Learn how to use Azure Data Lake Tools for Visual Studio to run U-SQL jobs on your local machine.--- Last updated 07/03/2018
data-lake-analytics Data Lake Analytics Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-overview.md
Title: Overview of Azure Data Lake Analytics description: Data Lake Analytics lets you drive your business using insights gained in your cloud data at any scale.---
data-lake-analytics Data Lake Analytics U Sql Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-u-sql-sdk.md
Title: Run U-SQL jobs locally - Azure Data Lake U-SQL SDK description: Learn how to run and test U-SQL jobs locally using the command line and programming interfaces on your local workstation. --- Last updated 03/01/2017
data-lake-analytics Data Lake Analytics Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-whats-new.md
Title: Data Lake Analytics recent changes description: This article provides an ongoing list of recent changes that are made to Data Lake Analytics. - - Last updated 11/16/2022
data-lake-analytics Migrate Azure Data Lake Analytics To Synapse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/migrate-azure-data-lake-analytics-to-synapse.md
Title: Migrate Azure Data Lake Analytics to Azure Synapse Analytics. description: This article describes how to migrate from Azure Data Lake Analytics to Azure Synapse Analytics.--
data-lake-analytics Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/policy-reference.md
Title: Built-in policy definitions for Azure Data Lake Analytics
description: Lists Azure Policy built-in policy definitions for Azure Data Lake Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Last updated 01/05/2023 --
databox-online Azure Stack Edge Gpu Manage Virtual Machine Network Interfaces Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-manage-virtual-machine-network-interfaces-portal.md
Previously updated : 12/07/2022 Last updated : 01/12/2023 # Customer intent: As an IT admin, I need to understand how to manage network interfaces on an Azure Stack Edge Pro device so that I can use the device to run applications with Edge compute before sending data to Azure.
Follow these steps to add a network interface to a virtual machine deployed on y
||-| |Name | A unique name within the edge resource group. The name cannot be changed after the network interface is created. To manage multiple network interfaces easily, use the suggestions provided in the [Naming conventions](/azure/cloud-adoption-framework/ready/azure-best-practices/naming-and-tagging#resource-naming). | |Select an edge resource group |Select the edge resource group to add the network interface to.|
- |Virtual network| The virtual network associated with the virtual switch created on your device when you enabled compute on the network interface. There is only one virtual network associated with your device. |
+ |Virtual network| The virtual network associated with the virtual switch created on your device when you enabled compute on the network interface. |
|Subnet | A subnet within the selected virtual network. This field is automatically populated with the subnet associated with the network interface on which you enabled compute. | |IP address assignment | A static or a dynamic IP for your network interface. The static IP should be an available, free IP from the specified subnet range. Choose dynamic if a DHCP server exists in the environment. |
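The static-versus-dynamic IP choice in the table above can be illustrated with the general Azure CLI. This is a sketch only: Azure Stack Edge manages virtual machine network interfaces through its local Azure portal rather than the public `az` CLI, and all resource names here are placeholders.

```shell
# Sketch, not the Azure Stack Edge workflow: az network nic create assigns a
# dynamic private IP by default; supplying --private-ip-address requests a
# static address, which must be a free IP within the subnet's range.
az network nic create \
  --resource-group myEdgeResourceGroup \
  --name myNic \
  --vnet-name myVnet \
  --subnet mySubnet \
  --private-ip-address 10.0.0.5   # drop this flag for dynamic (DHCP) assignment
```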
ddos-protection Ddos Protection Sku Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-protection-sku-comparison.md
Azure DDoS Network Protection, combined with application design best practices,
DDoS IP Protection is a pay-per-protected IP model. DDoS IP Protection contains the same core engineering features as DDoS Network Protection, but differs in the following value-added services. > [!NOTE]
-> DDoS IP Protection is currently only available in the Azure Preview Portal.
+> DDoS IP Protection is currently only available in Azure Preview PowerShell.
DDoS IP Protection is currently available in the following regions.
defender-for-cloud Alerts Suppression Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-suppression-rules.md
Some examples of how to use suppression rules are:
You can apply suppression rules to management groups or to subscriptions. -- To suppress alerts for a management group, use [Azure Policy](/azure/governance/policy/overview).
+- To suppress alerts for a management group, use [Azure Policy](../governance/policy/overview.md).
- To suppress alerts for subscriptions, use the Azure portal or the [REST API](#create-and-manage-suppression-rules-with-the-api). Alert types that were never triggered on a subscription or management group before the rule was created won't be suppressed.
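The REST API route mentioned above can be sketched with `az rest`. This is an assumption-laden illustration, not the documented procedure: the subscription ID, rule name, and property values are placeholders, and the API version and body schema should be verified against the `Microsoft.Security` alertsSuppressionRules REST reference.

```shell
# Sketch only: create a suppression rule at subscription scope.
# <subscription-id>, <rule-name>, and <alert-type> are placeholders.
az rest --method put \
  --url "https://management.azure.com/subscriptions/<subscription-id>/providers/Microsoft.Security/alertsSuppressionRules/<rule-name>?api-version=2019-01-01-preview" \
  --body '{
    "properties": {
      "alertType": "<alert-type>",
      "state": "Enabled",
      "reason": "FalseAlert",
      "comment": "Suppressed for a known test environment"
    }
  }'
```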
This article described the suppression rules in Microsoft Defender for Cloud tha
Learn more about security alerts: -- [Security alerts generated by Defender for Cloud](alerts-reference.md)
+- [Security alerts generated by Defender for Cloud](alerts-reference.md)
defender-for-cloud Azure Devops Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/azure-devops-extension.md
# Configure the Microsoft Security DevOps Azure DevOps extension
+> [!Note]
+> Effective December 31, 2022, the Microsoft Security Code Analysis (MSCA) extension is retired. MSCA is replaced by the Microsoft Security DevOps Azure DevOps extension. MSCA customers should follow the instructions in this article to install and configure the extension.
+ Microsoft Security DevOps is a command line application that integrates static analysis tools into the development lifecycle. Microsoft Security DevOps installs, configures, and runs the latest versions of static analysis tools (including, but not limited to, SDL/security and compliance tools). Microsoft Security DevOps is data-driven with portable configurations that enable deterministic execution across multiple environments. Microsoft Security DevOps uses the following open-source tools:
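In an Azure DevOps pipeline, the extension runs as a pipeline task. A minimal sketch of the step is shown below, assuming the extension is already installed in the organization; verify the task name and version against the extension's own documentation before relying on it:

```yaml
# Sketch only: run the Microsoft Security DevOps task in an Azure Pipelines job.
steps:
- task: MicrosoftSecurityDevOps@1
  displayName: 'Run Microsoft Security DevOps'
```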
defender-for-cloud Concept Easm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-easm.md
Defender EASM applies Microsoft's crawling technology to discover assets that
EASM collects data for publicly exposed assets ("outside-in"). That data can be used by MDC CSPM ("inside-out") to assist with internet-exposure validation and discovery capabilities to provide better visibility to customers. + ## Learn more You can learn more about [Defender EASM](../external-attack-surface-management/index.md), and learn about the [pricing](https://azure.microsoft.com/pricing/details/defender-external-attack-surface-management/) options available.
defender-for-cloud Concept Regulatory Compliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-regulatory-compliance.md
description: Learn about the Microsoft cloud security benchmark and the benefits
Previously updated : 09/21/2022 Last updated : 01/10/2023 # Microsoft cloud security benchmark in Defender for Cloud
defender-for-cloud Defender For Containers Vulnerability Assessment Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-vulnerability-assessment-azure.md
Some images may reuse tags from an image that was already scanned. For example,
Currently, Defender for Containers can scan images in Azure Container Registry (ACR) and AWS Elastic Container Registry (ECR) only. Docker Registry, Microsoft Artifact Registry/Microsoft Container Registry, and Microsoft Azure Red Hat OpenShift (ARO) built-in container image registry aren't supported.
-Images should first be imported to ACR. Learn more about [importing container images to an Azure container registry](/azure/container-registry/container-registry-import-images?tabs=azure-cli).
+Images should first be imported to ACR. Learn more about [importing container images to an Azure container registry](../container-registry/container-registry-import-images.md?tabs=azure-cli).
## Next steps
-Learn more about the [advanced protection plans of Microsoft Defender for Cloud](enhanced-security-features-overview.md).
+Learn more about the [advanced protection plans of Microsoft Defender for Cloud](enhanced-security-features-overview.md).
defender-for-cloud Defender For Devops Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-devops-introduction.md
Defender for DevOps helps unify, strengthen, and manage multi-pipeline DevOps security
## Availability > [!Note]
- > During the preview, the maximum number of repositories that can be onboarded to Microsoft Defender for Cloud is 2,000. If you try to connect more than 2,000 repositories, only the first 2,000 repositories, sorted alphabetically, will be onboarded.
+ > During the preview, the maximum number of GitHub repositories that can be onboarded to Microsoft Defender for Cloud is 2,000. If you try to connect more than 2,000 GitHub repositories, only the first 2,000 repositories, sorted alphabetically, will be onboarded.
>
- > If your organization is interested in onboarding more than 2,000 repositories, please complete [this survey](https://aka.ms/dfd-forms/onboarding).
+ > If your organization is interested in onboarding more than 2,000 GitHub repositories, please complete [this survey](https://aka.ms/dfd-forms/onboarding).
| Aspect | Details | |--|--|
defender-for-cloud Defender For Sql Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-sql-introduction.md
When you enable **Microsoft Defender for Azure SQL**, all supported resources th
A vulnerability assessment service discovers, tracks, and helps you remediate potential database vulnerabilities. Assessment scans provide an overview of your SQL machines' security state, and details of any security findings. Defender for Azure SQL helps you identify and mitigate potential database vulnerabilities and detect anomalous activities that could indicate threats to your databases.
-Learn more about [vulnerability assessment for Azure SQL Database](/azure/defender-for-cloud/sql-azure-vulnerability-assessment-overview).
+Learn more about [vulnerability assessment for Azure SQL Database](./sql-azure-vulnerability-assessment-overview.md).
### Advanced threat protection
In this article, you learned about Microsoft Defender for Azure SQL. Now you can
- [Enable Microsoft Defender for Azure SQL](quickstart-enable-database-protections.md) - [How Microsoft Defender for Azure SQL can protect SQL servers anywhere](https://www.youtube.com/watch?v=V7RdB6RSVpc).-- [Set up email notifications for security alerts](configure-email-notifications.md)
+- [Set up email notifications for security alerts](configure-email-notifications.md)
defender-for-cloud Defender For Sql On Machines Vulnerability Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-sql-on-machines-vulnerability-assessment.md
Last updated 11/09/2021
- [SQL Server on Azure Arc-enabled servers](/sql/sql-server/azure-arc/overview) - [SQL Server running on Windows machines without Azure Arc](../azure-monitor/agents/agent-windows.md)
-The integrated [vulnerability assessment scanner](/azure/defender-for-cloud/sql-azure-vulnerability-assessment-overview) discovers, tracks, and helps you remediate potential database vulnerabilities. Assessment scans findings provide an overview of your SQL machines' security state, and details of any security findings.
+The integrated [vulnerability assessment scanner](./sql-azure-vulnerability-assessment-overview.md) discovers, tracks, and helps you remediate potential database vulnerabilities. Assessment scan findings provide an overview of your SQL machines' security state, and details of any security findings.
> [!NOTE] > The scan is lightweight and safe. It takes only a few seconds per database to run, is entirely read-only, and doesn't make any changes to your database.
You can specify the region where your SQL Vulnerability Assessment data will be
## Next steps
-Learn more about Defender for Cloud's protections for SQL resources in [Overview of Microsoft Defender for SQL](defender-for-sql-introduction.md).
+Learn more about Defender for Cloud's protections for SQL resources in [Overview of Microsoft Defender for SQL](defender-for-sql-introduction.md).
defender-for-cloud Episode Eighteen https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-eighteen.md
Last updated 11/03/2022
## Recommended resources
-Learn more about [Enable Microsoft Defender for Azure Cosmos DB](/azure/defender-for-cloud/defender-for-databases-enable-cosmos-protections)
+Learn more about [Enable Microsoft Defender for Azure Cosmos DB](./defender-for-databases-enable-cosmos-protections.md)
- Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqa0ZoTml2Qm9kZ2pjRzNMUXFqVUwyNl80YVNtd3xBQ3Jtc0trVm9QM2Z0NlpOeC1KSUE2UEd1cVJ5aHQ0MTN6WjJEYmNlOG9rWC1KZ1ZqaTNmcHdOOHMtWXRLSGhUTVBhQlhhYzlUc2xmTHZtaUpkd1c4LUQzLWt1YmRTbkVQVE5EcTJIM0Foc042SGdQZU5acVRJbw&q=https%3A%2F%2Faka.ms%2FSubscribeMicrosoftSecurity)
Learn more about [Enable Microsoft Defender for Azure Cosmos DB](/azure/defender
## Next steps > [!div class="nextstepaction"]
-> [Defender for DevOps | Defender for Cloud in the field](episode-nineteen.md)
+> [Defender for DevOps | Defender for Cloud in the field](episode-nineteen.md)
defender-for-cloud Episode Nineteen https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-nineteen.md
Last updated 11/08/2022
- [08:22](/shows/mdc-in-the-field/defender-for-devops#time=08m22s) - Demonstration ## Recommended resources
- - [Learn more](/azure/defender-for-cloud/defender-for-devops-introduction) about Defender for DevOps.
+ - [Learn more](./defender-for-devops-introduction.md) about Defender for DevOps.
- Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/playlist?list=PL3ZTgFEc7LysiX4PfHhdJPR7S8mGO14YS) - Join our [Tech Community](https://aka.ms/SecurityTechCommunity) - For more about [Microsoft Security](https://msft.it/6002T9HQY)
Last updated 11/08/2022
## Next steps > [!div class="nextstepaction"]
-> [Cloud security explorer and attack path analysis](episode-twenty.md)
+> [Cloud security explorer and attack path analysis](episode-twenty.md)
defender-for-cloud Episode Twenty One https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-twenty-one.md
Last updated 11/24/2022
## Recommended resources
- - [Learn more](/azure/defender-for-cloud/regulatory-compliance-dashboard) about improving your regulatory compliance.
+ - [Learn more](./regulatory-compliance-dashboard.md) about improving your regulatory compliance.
- Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/playlist?list=PL3ZTgFEc7LysiX4PfHhdJPR7S8mGO14YS) - Join our [Tech Community](https://aka.ms/SecurityTechCommunity) - For more about [Microsoft Security](https://msft.it/6002T9HQY)
Last updated 11/24/2022
## Next steps > [!div class="nextstepaction"]
-> [Defender External Attack Surface Management (Defender EASM)](episode-twenty-two.md)
+> [Defender External Attack Surface Management (Defender EASM)](episode-twenty-two.md)
defender-for-cloud Episode Twenty https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-twenty.md
Last updated 11/24/2022
## Recommended resources
- - [Learn more](/azure/defender-for-cloud/concept-attack-path) about Attack path.
+ - [Learn more](./concept-attack-path.md) about Attack path.
- Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/playlist?list=PL3ZTgFEc7LysiX4PfHhdJPR7S8mGO14YS) - Join our [Tech Community](https://aka.ms/SecurityTechCommunity) - For more about [Microsoft Security](https://msft.it/6002T9HQY)
Last updated 11/24/2022
## Next steps > [!div class="nextstepaction"]
-> [Latest updates in the regulatory compliance dashboard](episode-twenty-one.md)
+> [Latest updates in the regulatory compliance dashboard](episode-twenty-one.md)
defender-for-cloud Overview Page https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/overview-page.md
Title: Microsoft Defender for Cloud's main dashboard or 'overview' page description: Learn about the features of the Defender for Cloud overview page Previously updated : 09/20/2022- Last updated : 01/10/2023+
Microsoft Defender for Cloud's overview page is an interactive dashboard that pr
You can select any element on the page to get more detailed information. ## Features of the overview page ### Metrics
The **top menu bar** offers:
- **Subscriptions** - You can view and filter the list of subscriptions by selecting this button. Defender for Cloud will adjust the display to reflect the security posture of the selected subscriptions. - **What's new** - Opens the [release notes](release-notes.md) so you can keep up to date with new features, bug fixes, and deprecated functionality.-- **High-level numbers** for the connected cloud accounts, to show the context of the information in the main tiles below. As well as the number of assessed resources, active recommendations, and security alerts. Select the assessed resources number to access [Asset inventory](asset-inventory.md). Learn more about connecting your [AWS accounts](quickstart-onboard-aws.md) and your [GCP projects](quickstart-onboard-gcp.md).
+- **High-level numbers** for the connected cloud accounts, showing the context of the information in the main tiles, and the number of assessed resources, active recommendations, and security alerts. Select the assessed resources number to access [Asset inventory](asset-inventory.md). Learn more about connecting your [AWS accounts](quickstart-onboard-aws.md) and your [GCP projects](quickstart-onboard-gcp.md).
### Feature tiles
-In the center of the page are the **feature tiles**, each linking to a high profile feature or dedicated dashboard:
+The center of the page displays the **feature tiles**, each linking to a high profile feature or dedicated dashboard:
-- **Security posture** - Defender for Cloud continually assesses your resources, subscriptions, and organization for security issues. It then aggregates all the findings into a single score so that you can tell, at a glance, your current security situation: the higher the score, the lower the identified risk level. [Learn more](secure-score-security-controls.md).-- **Workload protections** - This is the cloud workload protection platform (CWPP) integrated within Defender for Cloud for advanced, intelligent protection of your workloads running on Azure, on-premises machines, or other cloud providers. For each resource type, there's a corresponding Microsoft Defender plan. The tile shows the coverage of your connected resources (for the currently selected subscriptions) and the recent alerts, color-coded by severity. Learn more about [the enhanced security features](enhanced-security-features-overview.md).
+- **Security posture** - Defender for Cloud continually assesses your resources, subscriptions, and organization for security issues. It then aggregates all the findings into a single score so that you can understand, at a glance, your current security situation: the higher the score, the lower the identified risk level. [Learn more](secure-score-security-controls.md).
+- **Workload protections** - This tile is the cloud workload protection platform (CWPP) integrated within Defender for Cloud for advanced, intelligent protection of your workloads running on Azure, on-premises machines, or other cloud providers. For each resource type, there's a corresponding Microsoft Defender plan. The tile shows the coverage of your connected resources (for the currently selected subscriptions) and the recent alerts, color-coded by severity. Learn more about [the enhanced security features](enhanced-security-features-overview.md).
- **Regulatory compliance** - Defender for Cloud provides insights into your compliance posture based on continuous assessments of your Azure environment. Defender for Cloud analyzes risk factors in your environment according to security best practices. These assessments are mapped to compliance controls from a supported set of standards. [Learn more](regulatory-compliance-dashboard.md). - **Inventory** - The asset inventory page of Microsoft Defender for Cloud provides a single page for viewing the security posture of the resources you've connected to Microsoft Defender for Cloud. All resources with unresolved security recommendations are shown in the inventory. If you've enabled the integration with Microsoft Defender for Endpoint and enabled Microsoft Defender for Servers, you'll also have access to a software inventory. The tile on the overview page shows you at a glance the total healthy and unhealthy resources (for the currently selected subscriptions). [Learn more](asset-inventory.md).
defender-for-cloud Partner Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/partner-integration.md
Title: Integrate security solutions in Microsoft Defender for Cloud
description: Learn about how Microsoft Defender for Cloud integrates with partners to enhance the overall security of your Azure resources. Previously updated : 07/14/2022 Last updated : 01/10/2023 # Integrate security solutions in Microsoft Defender for Cloud
This document helps you to manage security solutions already connected to Micros
Defender for Cloud makes it easy to enable integrated security solutions in Azure. Benefits include: - **Simplified deployment**: Defender for Cloud offers streamlined provisioning of integrated partner solutions. For solutions like antimalware and vulnerability assessment, Defender for Cloud can provision the agent on your virtual machines. For firewall appliances, Defender for Cloud can take care of much of the network configuration required.-- **Integrated detections**: Security events from partner solutions are automatically collected, aggregated, and displayed as part of Defender for Cloud alerts and incidents. These events also are fused with detections from other sources to provide advanced threat-detection capabilities.
+- **Integrated detections**: Security events from partner solutions are automatically collected, aggregated, and displayed as part of Defender for Cloud alerts and incidents. These events are also fused with detections from other sources to provide advanced threat-detection capabilities.
- **Unified health monitoring and management**: Customers can use integrated health events to monitor all partner solutions at a glance. Basic management is available, with easy access to advanced setup by using the partner solution. Currently, integrated security solutions include vulnerability assessment by [Qualys](https://www.qualys.com/public-cloud/#azure) and [Rapid7](https://www.rapid7.com/products/insightvm/).
Defender for Cloud also offers vulnerability analysis for your:
## How security solutions are integrated Azure security solutions that are deployed from Defender for Cloud are automatically connected. You can also connect other security data sources, including computers running on-premises or in other clouds. ## Manage integrated Azure security solutions and other data sources
The **Connected solutions** section includes security solutions that are current
![Connected solutions.](./media/partner-integration/connected-solutions.png)
-The status of a partner solution can be:
+The status of a security solution can be:
* **Healthy** (green) - no health issues. * **Unhealthy** (red) - there's a health issue that requires immediate attention.
Select **CONNECT** under a solution to integrate with Defender for Cloud and be
### Add data sources
-The **Add data sources** section includes other available data sources that can be connected. For instructions on adding data from any of these sources, click **ADD**.
+The **Add data sources** section includes other available data sources that can be connected. For instructions on adding data from any of these sources, select **ADD**.
![Data sources.](./media/partner-integration/add-data-sources.png)
defender-for-cloud Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/permissions.md
The specific role required to deploy monitoring components depends on the extens
## Roles used to automatically provision agents and extensions
-To allow the Security Admin role to automatically provision agents and extensions used in Defender for Cloud plans, Defender for Cloud uses policy remediation in a similar way to [Azure Policy](/azure/governance/policy/how-to/remediate-resources). To use remediation, Defender for Cloud needs to create service principals, also called managed identities, that assign roles at the subscription level. For example, the service principals for the Defender for Containers plan are:
+To allow the Security Admin role to automatically provision agents and extensions used in Defender for Cloud plans, Defender for Cloud uses policy remediation in a similar way to [Azure Policy](../governance/policy/how-to/remediate-resources.md). To use remediation, Defender for Cloud needs to create service principals, also called managed identities, that assign roles at the subscription level. For example, the service principals for the Defender for Containers plan are:
| Service Principal | Roles | |:-|:-|
This article explained how Defender for Cloud uses Azure RBAC to assign permissi
- [Set security policies in Defender for Cloud](tutorial-security-policy.md) - [Manage security recommendations in Defender for Cloud](review-security-recommendations.md) - [Manage and respond to security alerts in Defender for Cloud](managing-and-responding-alerts.md)-- [Monitor partner security solutions](./partner-integration.md)
+- [Monitor partner security solutions](./partner-integration.md)
defender-for-cloud Quickstart Onboard Github https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-github.md
By connecting your GitHub repositories to Defender for Cloud, you'll extend Defe
- To use all advanced security capabilities provided by the GitHub connector in Defender for DevOps, you need GitHub Enterprise with GitHub Advanced Security (GHAS) enabled. ## Availability
+ > [!Note]
+ > During the preview, the maximum number of GitHub repositories that can be onboarded to Microsoft Defender for Cloud is 2,000. If you try to connect more than 2,000 GitHub repositories, only the first 2,000 repositories, sorted alphabetically, will be onboarded.
+ >
+ > If your organization is interested in onboarding more than 2,000 GitHub repositories, please complete [this survey](https://aka.ms/dfd-forms/onboarding).
| Aspect | Details | |--|--|
defender-for-cloud Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes-archive.md
Applications that are installed in virtual machines could often have vulnerabili
Azure Security Center's support for threat protection and vulnerability assessment for SQL DBs running on IaaS VMs is now in preview.
-[Vulnerability assessment](/azure/defender-for-cloud/sql-azure-vulnerability-assessment-overview) is an easy to configure service that can discover, track, and help you remediate potential database vulnerabilities. It provides visibility into your security posture as part of secure score and includes the steps to resolve security issues and enhance your database fortifications.
+[Vulnerability assessment](./sql-azure-vulnerability-assessment-overview.md) is an easy-to-configure service that can discover, track, and help you remediate potential database vulnerabilities. It provides visibility into your security posture as part of secure score and includes the steps to resolve security issues and enhance your database fortifications.
[Advanced threat protection](/azure/azure-sql/database/threat-detection-overview) detects anomalous activities indicating unusual and potentially harmful attempts to access or exploit your SQL server. It continuously monitors your database for suspicious activities and provides action-oriented security alerts on anomalous database access patterns. These alerts provide the suspicious activity details and recommended actions to investigate and mitigate the threat.
Azure Security Center (ASC) has launched new networking recommendations and impr
One of the biggest attack surfaces for workloads running in the public cloud is connections to and from the public internet. Our customers find it hard to know which network security group (NSG) rules should be in place to make sure that Azure workloads are only available to required source ranges. With this feature, Security Center learns the network traffic and connectivity patterns of Azure workloads and provides NSG rule recommendations for internet-facing virtual machines. This helps our customers better configure their network access policies and limit their exposure to attacks.
-[Learn more about adaptive network hardening](adaptive-network-hardening.md).
+[Learn more about adaptive network hardening](adaptive-network-hardening.md).
defender-for-cloud Review Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/review-security-recommendations.md
To get to the list of recommendations:
1. Sign in to the [Azure portal](https://portal.azure.com). 1. Either:
- - In the Defender for Cloud overview, select **Security posture** and then select **View recommendations** for the environment that you want to improve.
+ - In the Defender for Cloud overview, select **Security posture** and then select **View recommendations** for the environment you want to improve.
- Go to **Recommendations** in the Defender for Cloud menu.
-You can search for specific recommendations by name. Use the search box and filters above the list of recommendations to find specific recommendations, and look at the [details of the recommendation](security-policy-concept.md#security-recommendation-details) to decide whether to [remediate it](implement-security-recommendations.md), [exempt resources](exempt-resource.md), or [disable the recommendation](tutorial-security-policy.md#disable-security-policies-and-disable-recommendations).
+You can search for specific recommendations by name. Use the search box and filters above the list of recommendations to find specific recommendations. Look at the [details of the recommendation](security-policy-concept.md#security-recommendation-details) to decide whether to [remediate it](implement-security-recommendations.md), [exempt resources](exempt-resource.md), or [disable the recommendation](tutorial-security-policy.md#disable-security-policies-and-disable-recommendations).
You can learn more by watching this video from the Defender for Cloud in the Field video series:

- [Security posture management improvements](episode-four.md)
When you [remediate](implement-security-recommendations.md) all of the recommend
[Security teams can assign a recommendation](governance-rules.md) to a specific person and assign a due date to drive your organization towards increased security. If you have recommendations assigned to you, you're accountable to remediate the resources affected by the recommendations to help your organization be compliant with the security policy.
-Recommendations are listed as **On time** until their due date is passed, when they're changed to **Overdue**. Before the recommendation is overdue, the recommendation doesn't impact the secure score. The security team can also apply a grace period during which overdue recommendations continue to not impact the secure score.
+Recommendations are listed as **On time** until their due date is passed, when they're changed to **Overdue**. Before the recommendation is overdue, the recommendation doesn't affect the secure score. The security team can also apply a grace period during which overdue recommendations continue to not affect the secure score.
To help you plan your work and report on progress, you can set an ETA for the specific resources to show when you plan to have the recommendation resolved by for those resources. You can also change the owner of the recommendation for specific resources so that the person responsible for remediation is assigned to the resource.
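The due-date, grace-period, and secure-score behavior described above can be summarized as a small classifier. This is a sketch of the stated rules, not Defender for Cloud's implementation; the function name and signature are illustrative:

```python
from datetime import date

def recommendation_status(due_date, today, grace_days=0):
    """Classify a governance-assigned recommendation as On time or Overdue,
    and report whether it currently counts against the secure score."""
    if today <= due_date:
        return "On time", False          # not yet overdue: no score impact
    overdue_days = (today - due_date).days
    in_grace = overdue_days <= grace_days
    # Overdue recommendations affect the score once any grace period ends.
    return "Overdue", not in_grace

status, affects_score = recommendation_status(date(2023, 1, 10), date(2023, 1, 12), grace_days=7)
# status == "Overdue"; still inside the 7-day grace period, so affects_score is False
```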
To change the owner of resources and set the ETA for remediation of recommendati
1. In the filters for the list of recommendations, select **Show my items only**.
    - The status column indicates the recommendations that are on time, overdue, or completed.
- - The insights column indicates the recommendations that are in a grace period, so they currently don't impact your secure score until they become overdue.
+ - The insights column indicates the recommendations that are in a grace period, so they currently don't affect your secure score until they become overdue.
1. Select an on time or overdue recommendation.
1. For the resources that are assigned to you, set the owner of the resource:
    1. Select the resources that are owned by another person, and select **Change owner and set ETA**.
    1. Select **Change owner**, enter the email address of the owner of the resource, and select **Save**.
- The owner of the resource gets a weekly email listing the recommendations that they're assigned to.
+
+ The owner of the resource gets a weekly email listing the recommendations assigned to them.
+ 1. For resources that you own, set an ETA for remediation:
+     1. Select resources that you plan to remediate by the same date, and select **Change owner and set ETA**.
+     1. Select **Change ETA** and set the date by which you plan to remediate the recommendation for those resources.
The due date for the recommendation doesn't change, but the security team can se
## Review recommendation data in Azure Resource Graph Explorer (ARG)
-You can review recommendations in ARG both on the recommendations page or on an individual recommendation.
+You can review recommendations in ARG both on the Recommendations page or on an individual recommendation.
-The toolbar on the recommendation details page includes an **Open query** button to explore the details in [Azure Resource Graph (ARG)](../governance/resource-graph/index.yml), an Azure service that gives you the ability to query - across multiple subscriptions - Defender for Cloud's security posture data.
+The toolbar on the Recommendations page includes an **Open query** button to explore the details in [Azure Resource Graph (ARG)](../governance/resource-graph/index.yml), an Azure service that gives you the ability to query - across multiple subscriptions - Defender for Cloud's security posture data.
ARG is designed to provide efficient resource exploration with the ability to query at scale across your cloud environments with robust filtering, grouping, and sorting capabilities. It's a quick and efficient way to query information across Azure subscriptions programmatically or from within the Azure portal.
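Recommendation data in ARG lives in the `securityresources` table under the `microsoft.security/assessments` type. As a sketch, a Kusto query for that data could be composed programmatically; the exact projection and the `severity` property path here are illustrative assumptions:

```python
def build_recommendations_query(severity=None):
    """Compose a Kusto query string for Azure Resource Graph that lists
    Defender for Cloud assessment (recommendation) results."""
    query = (
        'securityresources\n'
        '| where type == "microsoft.security/assessments"\n'
        '| extend status = tostring(properties.status.code)\n'
    )
    if severity:
        # Filter on severity; the property path is an assumption.
        query += f'| where properties.metadata.severity == "{severity}"\n'
    query += '| project name, status'
    return query

print(build_recommendations_query("High"))
```

The resulting string can be pasted into Azure Resource Graph Explorer, or submitted via the Resource Graph API across multiple subscriptions.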
The Insights column of the page gives you more details for each recommendation.
Recommendations that aren't included in the calculations of your secure score should still be remediated wherever possible, so that when the period ends, they'll contribute towards your score instead of against it.
-## Download recommendations in a CSV report
+## Download recommendations to a CSV report
Recommendations can be downloaded to a CSV report from the Recommendations page.
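Once exported, the CSV can be sliced with a few lines of Python, for example to count unhealthy recommendations by severity. The column names in this sample are illustrative assumptions, not the exact export schema:

```python
import csv
import io
from collections import Counter

# A tiny sample in the shape of an exported recommendations report.
# Column names are illustrative assumptions, not the exact schema.
sample = """severity,recommendationName,state
High,Enable MFA,Unhealthy
Low,Tag resources,Healthy
High,Encrypt disks,Unhealthy
"""

rows = list(csv.DictReader(io.StringIO(sample)))
unhealthy_by_severity = Counter(
    row["severity"] for row in rows if row["state"] == "Unhealthy"
)
# unhealthy_by_severity == Counter({'High': 2})
```

For a real report, replace the in-memory sample with `open("recommendations.csv", newline="")` and adjust the column names to match the downloaded file.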
To download a CSV report of your recommendations:
:::image type="content" source="media/review-security-recommendations/download-csv.png" alt-text="Screenshot showing you where to select the Download C S V report from.":::
-You'll know the report is being prepared by the pop-up.
+You'll know the report is being prepared when the pop-up appears.
When the report is ready, you'll be notified by a second pop-up.

## Learn more
You can check out the following blogs:
In this document, you were introduced to security recommendations in Defender for Cloud. For related information: -- [Remediate recommendations](implement-security-recommendations.md)--Learn how to configure security policies for your Azure subscriptions and resource groups.
+- [Remediate recommendations](implement-security-recommendations.md)-Learn how to configure security policies for your Azure subscriptions and resource groups.
- [Prevent misconfigurations with Enforce/Deny recommendations](prevent-misconfigurations.md).-- [Automate responses to Defender for Cloud triggers](workflow-automation.md)--Automate responses to recommendations
+- [Automate responses to Defender for Cloud triggers](workflow-automation.md)-Automate responses to recommendations
- [Exempt a resource from a recommendation](exempt-resource.md) - [Security recommendations - a reference guide](recommendations-reference.md)
defender-for-cloud Sql Azure Vulnerability Assessment Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/sql-azure-vulnerability-assessment-enable.md
To enable vulnerability assessment with a storage account, use the classic confi
:::image type="content" source="media/defender-for-sql-azure-vulnerability-assessment/sql-vulnerability-scan-settings.png" alt-text="Screenshot of configuring the SQL vulnerability assessment scans.":::
- 1. Configure a storage account where your scan results for all databases on the server or managed instance will be stored. For information about storage accounts, see [About Azure storage accounts](/azure/storage/common/storage-account-create).
+ 1. Configure a storage account where your scan results for all databases on the server or managed instance will be stored. For information about storage accounts, see [About Azure storage accounts](../storage/common/storage-account-create.md).
1. To configure vulnerability assessments to automatically run weekly scans to detect security misconfigurations, set **Periodic recurring scans** to **On**. The results are sent to the email addresses you provide in **Send scan reports to**. You can also send email notification to admins and subscription owners by enabling **Also send email notification to admins and subscription owners**.
Learn more about:
- [Microsoft Defender for Azure SQL](defender-for-sql-introduction.md) - [Data discovery and classification](/azure/azure-sql/database/data-discovery-and-classification-overview)-- [Storing scan results in a storage account behind firewalls and VNets](/azure/azure-sql/database/sql-database-vulnerability-assessment-storage)
+- [Storing scan results in a storage account behind firewalls and VNets](/azure/azure-sql/database/sql-database-vulnerability-assessment-storage)
defender-for-cloud Sql Azure Vulnerability Assessment Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/sql-azure-vulnerability-assessment-manage.md
Typical scenarios may include:
- Disable findings from benchmarks that aren't of interest for a defined scope

> [!IMPORTANT]
-> - To disable specific findings, you need permissions to edit a policy in Azure Policy. Learn more in [Azure RBAC permissions in Azure Policy](/azure/governance/policy/overview#azure-rbac-permissions-in-azure-policy).
+> - To disable specific findings, you need permissions to edit a policy in Azure Policy. Learn more in [Azure RBAC permissions in Azure Policy](../governance/policy/overview.md#azure-rbac-permissions-in-azure-policy).
> - Disabled findings will still be included in the weekly SQL vulnerability assessment email report.
> - Disabled rules are shown in the "Not applicable" section of the scan results.
To handle Boolean types as true/false, set the baseline result with binary input
- Learn more about [Microsoft Defender for Azure SQL](defender-for-sql-introduction.md). - Learn more about [data discovery and classification](/azure/azure-sql/database/data-discovery-and-classification-overview).-- Learn more about [storing vulnerability assessment scan results in a storage account accessible behind firewalls and VNets](/azure/azure-sql/database/sql-database-vulnerability-assessment-storage).
+- Learn more about [storing vulnerability assessment scan results in a storage account accessible behind firewalls and VNets](/azure/azure-sql/database/sql-database-vulnerability-assessment-storage).
defender-for-cloud Supported Machines Endpoint Solutions Clouds Servers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/supported-machines-endpoint-solutions-clouds-servers.md
The **tabs** below show the features of Microsoft Defender for Cloud that are av
### [**Windows machines**](#tab/features-windows)
-| **Feature** | **Azure Virtual Machines and [Virtual Machine Scale Sets with Flexible orchestration](/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes#scale-sets-with-flexible-orchestration)** | **Azure Arc-enabled machines** | **Defender for Servers required** |
+| **Feature** | **Azure Virtual Machines and [Virtual Machine Scale Sets with Flexible orchestration](../virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md#scale-sets-with-flexible-orchestration)** | **Azure Arc-enabled machines** | **Defender for Servers required** |
| | :--: | :-: | :-: |
| [Microsoft Defender for Endpoint integration](integration-defender-for-endpoint.md) | ✔</br>(on supported versions) | ✔ | Yes |
| [Virtual machine behavioral analytics (and security alerts)](alerts-reference.md) | ✔ | ✔ | Yes |
The **tabs** below show the features of Microsoft Defender for Cloud that are av
### [**Linux machines**](#tab/features-linux)
-| **Feature** | **Azure Virtual Machines and [Virtual Machine Scale Sets with Flexible orchestration](/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes#scale-sets-with-flexible-orchestration)** | **Azure Arc-enabled machines** | **Defender for Servers required** |
+| **Feature** | **Azure Virtual Machines and [Virtual Machine Scale Sets with Flexible orchestration](../virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md#scale-sets-with-flexible-orchestration)** | **Azure Arc-enabled machines** | **Defender for Servers required** |
| | :--: | :-: | :-: |
| [Microsoft Defender for Endpoint integration](integration-defender-for-endpoint.md) | ✔ | ✔ | Yes |
| [Virtual machine behavioral analytics (and security alerts)](./azure-defender.md) | ✔</br>(on supported versions) | ✔ | Yes |
For information about when recommendations are generated for each of these solut
- Learn how [Defender for Cloud collects data using the Log Analytics Agent](monitoring-components.md#log-analytics-agent). - Learn how [Defender for Cloud manages and safeguards data](data-security.md).-- Review the [platforms that support Defender for Cloud](security-center-os-coverage.md).
+- Review the [platforms that support Defender for Cloud](security-center-os-coverage.md).
defender-for-iot Concept Data Processing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/concept-data-processing.md
Title: Data processing and residency description: Microsoft Defender for IoT data processing, and residency can occur in regions that are different than the IoT Hub's region. Previously updated : 12/19/2021 Last updated : 01/12/2023
defender-for-iot Concept Standalone Micro Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/concept-standalone-micro-agent-overview.md
Title: Standalone micro agent overview description: The Microsoft Defender for IoT security agents allow you to build security directly into your new IoT devices and Azure IoT projects. Previously updated : 12/13/2021 Last updated : 01/12/2023
defender-for-iot How To Provision Micro Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/how-to-provision-micro-agent.md
This article explains how to provision the standalone Microsoft Defender for IoT micro agent using [Azure IoT Hub Device Provisioning Service](../../iot-dps/about-iot-dps.md) with [X.509 certificate attestation](../../iot-dps/concepts-x509-attestation.md).
-To learn how to configure the Microsoft Defender for IoT micro agent for Edge devices see [Create and provision IoT Edge devices at scale](/azure/iot-edge/how-to-provision-devices-at-scale-linux-tpm)
+To learn how to configure the Microsoft Defender for IoT micro agent for Edge devices, see [Create and provision IoT Edge devices at scale](../../iot-edge/how-to-provision-devices-at-scale-linux-tpm.md).
## Prerequisites
To learn how to configure the Microsoft Defender for IoT micro agent for Edge de
1. [Configure the micro agent to use the created module](tutorial-standalone-agent-binary-installation.md#authenticate-using-a-module-identity-connection-string) (note that the device does not have to exist yet).
-1. Navigate back to DPS and [provision the device through DPS](/azure/iot-dps/quick-create-simulated-device-x509).
+1. Navigate back to DPS and [provision the device through DPS](../../iot-dps/quick-create-simulated-device-x509.md).
1. Navigate to the configured device in the destination IoT Hub.
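The module identity connection string referenced in the configuration step has the standard IoT Hub `HostName`/`DeviceId`/`ModuleId`/`SharedAccessKey` shape. As a sketch (the hub name, device ID, and key below are placeholders, and the module ID is an assumption), it can be assembled like this:

```python
def module_connection_string(hub_name, device_id, module_id, shared_access_key):
    """Assemble an IoT Hub module identity connection string in the
    standard HostName/DeviceId/ModuleId/SharedAccessKey form."""
    return (
        f"HostName={hub_name}.azure-devices.net;"
        f"DeviceId={device_id};"
        f"ModuleId={module_id};"
        f"SharedAccessKey={shared_access_key}"
    )

# All values here are placeholders for illustration only.
cs = module_connection_string(
    "contoso-hub", "edge-device-01", "DefenderIotMicroAgent", "bXlLZXk="
)
```

The micro agent reads this value from its configuration; only its shape is shown here, never hard-code real keys in source.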
To learn how to configure the Microsoft Defender for IoT micro agent for Edge de
[Configure Microsoft Defender for IoT agent-based solution](tutorial-configure-agent-based-solution.md)
-[Configure pluggable Authentication Modules (PAM) to audit sign-in events (Preview)](configure-pam-to-audit-sign-in-events.md)
+[Configure pluggable Authentication Modules (PAM) to audit sign-in events (Preview)](configure-pam-to-audit-sign-in-events.md)
defender-for-iot Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/overview.md
Title: What is Microsoft Defender for IoT for device builders? description: Learn about how Microsoft Defender for IoT helps device builders to embed security into new IoT/OT devices. Previously updated : 12/19/2021 Last updated : 01/12/2023 #Customer intent: As a device builder, I want to understand how Defender for IoT can help secure my new IoT/OT initiatives.
defender-for-iot Dell Poweredge R340 Xl Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/dell-poweredge-r340-xl-legacy.md
This article describes the Dell PowerEdge R340 XL appliance, supported for OT sensors and on-premises management consoles.
-Legacy appliances are certified but aren't currently offered as preconfigured appliances.
-
+> [!NOTE]
+> Legacy appliances are certified but aren't currently offered as preconfigured appliances.
|Appliance characteristic | Description|
|---|---|
defender-for-iot Hpe Edgeline El300 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/hpe-edgeline-el300.md
This article describes the HPE Edgeline EL300 appliance for OT sensors or on-premises management consoles.
-Legacy appliances are certified but aren't currently offered as preconfigured appliances.
+> [!NOTE]
+> Legacy appliances are certified but aren't currently offered as preconfigured appliances.
| Appliance characteristic |Details |
defender-for-iot Hpe Proliant Dl20 Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/hpe-proliant-dl20-legacy.md
This article describes the **HPE ProLiant DL20 Gen10** appliance for OT sensors in an enterprise deployment.
-Legacy appliances are certified but aren't currently offered as pre-configured appliances.
+> [!NOTE]
+> Legacy appliances are certified but aren't currently offered as preconfigured appliances.
| Appliance characteristic |Details |
|---|---|
defender-for-iot Hpe Proliant Dl20 Smb Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/hpe-proliant-dl20-smb-legacy.md
This article describes the **HPE ProLiant DL20 Gen10** appliance for OT sensors for monitoring production lines.
-Legacy appliances are certified but are not currently offered as pre-configured appliances.
+> [!NOTE]
+> Legacy appliances are certified but aren't currently offered as preconfigured appliances.
| Appliance characteristic |Details |
|---|---|
defender-for-iot Neousys Nuvo 5006Lp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/neousys-nuvo-5006lp.md
This article describes the Neousys Nuvo-5006LP appliance for OT sensors.
-Legacy appliances are certified but aren't currently offered as pre-configured appliances.
+> [!NOTE]
+> Legacy appliances are certified but aren't currently offered as preconfigured appliances.
| Appliance characteristic |Details |
|---|---|
defender-for-iot Concept Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/concept-enterprise.md
The number of IoT devices continues to grow exponentially across enterprise netw
While the number of IoT devices continues to grow, they often lack the security safeguards that are common on managed endpoints like laptops and mobile phones. Bad actors can use these unmanaged devices as a point of entry for lateral movement or evasion, and too often, such tactics lead to the exfiltration of sensitive information.
-[Microsoft Defender for IoT](/azure/defender-for-iot/organizations/) seamlessly integrates with [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/) to provide both IoT device discovery and security value for IoT devices, including purpose-built alerts, recommendations, and vulnerability data.
+[Microsoft Defender for IoT](./index.yml) seamlessly integrates with [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/) to provide both IoT device discovery and security value for IoT devices, including purpose-built alerts, recommendations, and vulnerability data.
> [!IMPORTANT] > The Enterprise IoT Network sensor is currently in PREVIEW. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
Defender for IoT provides IoT security functionality across both the Microsoft 3
|Method |Description and requirements | Configure in ... |
|---|---|---|
-|**[An Enterprise IoT plan](#security-value-in-microsoft-365-defender) only** | Add an Enterprise IoT plan in Microsoft 365 Defender to view IoT-specific alerts, recommendations, and vulnerability data in Microsoft 365 Defender. <br><br>The extra security value is provided for IoT devices detected by Defender for Endpoint. <br><br>**Requires**: <br> - A Microsoft Defender for Endpoint P2 license<br> - Microsoft 365 Defender access as a [Global administrator](/azure/active-directory/roles/permissions-reference#global-administrator)<br>- Azure access as a [Security admin](/azure/role-based-access-control/built-in-roles#security-admin), [Contributor](/azure/role-based-access-control/built-in-roles#contributor), or [Owner](/azure/role-based-access-control/built-in-roles#owner) | Add your Enterprise IoT plan in the **Settings** \> **Device discovery** \> **Enterprise IoT** page in Microsoft 365 Defender. |
-|**[An Enterprise IoT plan](#security-value-in-microsoft-365-defender) plus an [Enterprise IoT sensor](#device-visibility-with-enterprise-iot-sensors-public-preview)** | Add an Enterprise IoT plan in Microsoft 365 Defender to add IoT-specific alerts, recommendations, and vulnerability data Microsoft 365 Defender, for IoT devices detected by Defender for Endpoint. <br><br>Register an Enterprise IoT sensor in Defender for IoT for more device visibility in both Microsoft 365 Defender and the Azure portal. An Enterprise IoT sensor also adds alerts and recommendations triggered by the sensor in the Azure portal.<br><br>**Requires**: <br>- A Microsoft Defender for Endpoint P2 license<br> - Microsoft 365 Defender access as a [Global administrator](/azure/active-directory/roles/permissions-reference#global-administrator)<br>- Azure access as a [Security admin](/azure/role-based-access-control/built-in-roles#security-admin), [Contributor](/azure/role-based-access-control/built-in-roles#contributor), or [Owner](/azure/role-based-access-control/built-in-roles#owner)<br>- A physical or VM appliance to use as a sensor |Add your Enterprise IoT plan in the **Settings** \> **Device discovery** \> **Enterprise IoT** page in Microsoft 365 Defender. <br><br>Register an Enterprise IoT sensor in the **Getting started** > **Set up Enterprise IoT Security** page in Defender for IoT in the Azure portal. |
-|**[An Enterprise IoT sensor only](#device-visibility-with-enterprise-iot-sensors-only)** | Register an Enterprise IoT sensor in Defender for IoT for Enterprise IoT device visibility, alerts, and recommendations in the Azure portal only. <br><br>Vulnerability data isn't currently available. <br><br>**Requires**: <br>- Azure access as a [Security admin](/azure/role-based-access-control/built-in-roles#security-admin), [Contributor](/azure/role-based-access-control/built-in-roles#contributor), or [Owner](/azure/role-based-access-control/built-in-roles#owner) <br>- A physical or VM appliance to use as a sensor | Register an Enterprise IoT sensor in the **Getting started** > **Set up Enterprise IoT Security** page in Defender for IoT in the Azure portal. |
+|**[An Enterprise IoT plan](#security-value-in-microsoft-365-defender) only** | Add an Enterprise IoT plan in Microsoft 365 Defender to view IoT-specific alerts, recommendations, and vulnerability data in Microsoft 365 Defender. <br><br>The extra security value is provided for IoT devices detected by Defender for Endpoint. <br><br>**Requires**: <br> - A Microsoft Defender for Endpoint P2 license<br> - Microsoft 365 Defender access as a [Global administrator](../../active-directory/roles/permissions-reference.md#global-administrator)<br>- Azure access as a [Security admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner) | Add your Enterprise IoT plan in the **Settings** \> **Device discovery** \> **Enterprise IoT** page in Microsoft 365 Defender. |
+|**[An Enterprise IoT plan](#security-value-in-microsoft-365-defender) plus an [Enterprise IoT sensor](#device-visibility-with-enterprise-iot-sensors-public-preview)** | Add an Enterprise IoT plan in Microsoft 365 Defender to add IoT-specific alerts, recommendations, and vulnerability data in Microsoft 365 Defender, for IoT devices detected by Defender for Endpoint. <br><br>Register an Enterprise IoT sensor in Defender for IoT for more device visibility in both Microsoft 365 Defender and the Azure portal. An Enterprise IoT sensor also adds alerts and recommendations triggered by the sensor in the Azure portal.<br><br>**Requires**: <br>- A Microsoft Defender for Endpoint P2 license<br> - Microsoft 365 Defender access as a [Global administrator](../../active-directory/roles/permissions-reference.md#global-administrator)<br>- Azure access as a [Security admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner)<br>- A physical or VM appliance to use as a sensor |Add your Enterprise IoT plan in the **Settings** \> **Device discovery** \> **Enterprise IoT** page in Microsoft 365 Defender. <br><br>Register an Enterprise IoT sensor in the **Getting started** > **Set up Enterprise IoT Security** page in Defender for IoT in the Azure portal. |
+|**[An Enterprise IoT sensor only](#device-visibility-with-enterprise-iot-sensors-only)** | Register an Enterprise IoT sensor in Defender for IoT for Enterprise IoT device visibility, alerts, and recommendations in the Azure portal only. <br><br>Vulnerability data isn't currently available. <br><br>**Requires**: <br>- Azure access as a [Security admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner) <br>- A physical or VM appliance to use as a sensor | Register an Enterprise IoT sensor in the **Getting started** > **Set up Enterprise IoT Security** page in Defender for IoT in the Azure portal. |
## Security value in Microsoft 365 Defender
The following image shows the architecture of an Enterprise IoT network sensor c
Start securing your Enterprise IoT network resources by [onboarding to Defender for IoT from Microsoft 365 Defender](eiot-defender-for-endpoint.md). Then, add even more device visibility by [adding an Enterprise IoT network sensor](eiot-sensor.md) to Defender for IoT.
-For more information, see [Enterprise IoT networks frequently asked questions](faqs-eiot.md).
+For more information, see [Enterprise IoT networks frequently asked questions](faqs-eiot.md).
defender-for-iot Concept Sentinel Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/concept-sentinel-integration.md
SecurityAlert
After you've installed the Microsoft Defender for IoT solution and deployed the [AD4IoT-AutoAlertStatusSync](iot-advanced-threat-monitoring.md#update-alert-statuses-in-defender-for-iot) playbook, alert status changes are synchronized from Microsoft Sentinel to Defender for IoT. Alert status changes are *not* synchronized from Defender for IoT to Microsoft Sentinel. > [!IMPORTANT]
-> We recommend that you manage your alert statuses together with the related incidents in Microsoft Sentinel. For more information, see [Work with incident tasks in Microsoft Sentinel](/azure/sentinel/work-with-tasks).
+> We recommend that you manage your alert statuses together with the related incidents in Microsoft Sentinel. For more information, see [Work with incident tasks in Microsoft Sentinel](../../sentinel/work-with-tasks.md).
> ### Defender for IoT incidents in Microsoft Sentinel
defender-for-iot Eiot Defender For Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/eiot-defender-for-endpoint.md
Make sure that you have:
|Identity management |Roles required | |||
- |**In Azure Active Directory** | [Global administrator](/azure/active-directory/roles/permissions-reference#global-administrator) for your Microsoft 365 tenant |
- |**In Azure RBAC** | [Security admin](/azure/role-based-access-control/built-in-roles#security-admin), [Contributor](/azure/role-based-access-control/built-in-roles#contributor), or [Owner](/azure/role-based-access-control/built-in-roles#owner) for the Azure subscription that you'll be using for the integration |
+ |**In Azure Active Directory** | [Global administrator](../../active-directory/roles/permissions-reference.md#global-administrator) for your Microsoft 365 tenant |
+ |**In Azure RBAC** | [Security admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner) for the Azure subscription that you'll be using for the integration |
## Onboard a Defender for IoT plan
Make sure that you have:
1. Select the following options for your plan:
- - **Select an Azure subscription**: Select the Azure subscription that you want to use for the integration. You'll need a [Security admin](/azure/role-based-access-control/built-in-roles#security-admin), [Contributor](/azure/role-based-access-control/built-in-roles#contributor), or [Owner](/azure/role-based-access-control/built-in-roles#owner) role for the subscription.
+ - **Select an Azure subscription**: Select the Azure subscription that you want to use for the integration. You'll need a [Security admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner) role for the subscription.
- **Price plan**: For the sake of this tutorial, select a **Trial** pricing plan. Microsoft Defender for IoT provides a [30-day free trial](billing.md#free-trial) for evaluation purposes. For more information, see the [Microsoft Defender for IoT pricing page](https://azure.microsoft.com/pricing/details/iot-defender/).
Learn how to set up an Enterprise IoT network sensor (Public preview) and gain m
Customers that have set up an Enterprise IoT network sensor will be able to see all discovered devices in the **Device inventory** in either Microsoft 365 Defender, or Defender for IoT in the Azure portal. > [!div class="nextstepaction"]
-> [Enhance device discovery with an Enterprise IoT network sensor](eiot-sensor.md)
+> [Enhance device discovery with an Enterprise IoT network sensor](eiot-sensor.md)
defender-for-iot Eiot Sensor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/eiot-sensor.md
Before you start registering an Enterprise IoT sensor:
If you only want to view data in the Azure portal, an Enterprise IoT plan isn't required. You can also onboard your Enterprise IoT plan from Microsoft 365 Defender after registering your network sensor to bring [extra device visibility and security value](concept-enterprise.md#security-value-in-microsoft-365-defender) to your organization. -- Make sure you can access the Azure portal as a [Security admin](/azure/role-based-access-control/built-in-roles#security-admin), [Contributor](/azure/role-based-access-control/built-in-roles#contributor), or [Owner](/azure/role-based-access-control/built-in-roles#owner) user. If you don't already have an Azure account, you can [create your free Azure account today](https://azure.microsoft.com/free/).
+- Make sure you can access the Azure portal as a [Security admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner) user. If you don't already have an Azure account, you can [create your free Azure account today](https://azure.microsoft.com/free/).
- Allocate a physical appliance or a virtual machine (VM) to use as your network sensor. Make sure that your machine has the following specifications:
Billing changes will take effect one hour after cancellation of the previous sub
- [View and manage alerts from the Azure portal](how-to-manage-cloud-alerts.md). For more information, see [Malware engine alerts](alert-engine-messages.md#malware-engine-alerts). -- [Enhance security posture with security recommendations](recommendations.md)
+- [Enhance security posture with security recommendations](recommendations.md)
defender-for-iot Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/getting-started.md
This procedure describes how to add a trial Defender for IoT plan for OT network
1. In the **Plan settings** pane, define the following settings:
- - **Subscription**: Select the Azure subscription where you want to add a plan. You'll need a [Security admin](/azure/role-based-access-control/built-in-roles#security-admin), [Contributor](/azure/role-based-access-control/built-in-roles#contributor), or [Owner](/azure/role-based-access-control/built-in-roles#owner) role for the selected subscription.
+ - **Subscription**: Select the Azure subscription where you want to add a plan. You'll need a [Security admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner) role for the selected subscription.
> [!TIP] > If your subscription isn't listed, check your account details and confirm your permissions with the subscription owner. Also make sure that you have the right subscriptions selected in your Azure settings > **Directories + subscriptions** page.
Your new plan is listed under the relevant subscription on the **Plans and prici
> [Understand Defender for IoT subscription billing](billing.md) > [!div class="nextstepaction"]
-> [Defender for IoT pricing](https://azure.microsoft.com/pricing/details/iot-defender/)
-
+> [Defender for IoT pricing](https://azure.microsoft.com/pricing/details/iot-defender/)
defender-for-iot How To Forward Alert Information To Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-forward-alert-information-to-partners.md
If your forwarding alert rules aren't working as expected, check the following d
## Next steps > [!div class="nextstepaction"]
-> [View and manage alerts from the Azure portal](how-to-manage-cloud-alerts.md)
+> [Microsoft Defender for IoT alerts](alerts.md)
> [!div class="nextstepaction"] > [View and manage alerts on your OT sensor](how-to-view-alerts.md) > [!div class="nextstepaction"]
-> [Accelerate alert workflows on an OT network sensor](how-to-accelerate-alert-incident-response.md)
+> [View and manage alerts from the Azure portal](how-to-manage-cloud-alerts.md)
> [!div class="nextstepaction"] > [OT monitoring alert types and descriptions](alert-engine-messages.md)
-> [!div class="nextstepaction"]
-> [Forward alert information](how-to-forward-alert-information-to-partners.md)
-
-> [!div class="nextstepaction"]
-> [Microsoft Defender for IoT alerts](alerts.md)
defender-for-iot How To Manage Cloud Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-cloud-alerts.md
For more information, see [Azure user roles and permissions for Defender for IoT
| **Severity**| A predefined alert severity assigned by the sensor that you can [modify as needed](#manage-alert-severity-and-status). | | **Name** | The alert title. | | **Site** | The site associated with the sensor that detected the alert, as listed on the [Sites and sensors](how-to-manage-sensors-on-the-cloud.md#sensor-management-options-from-the-azure-portal) page.|
- | **Engine** | The [Defender for IoT detection engine](architecture.md#defender-for-iot-analytics-engines) that detected the activity and triggered the alert. <br><br>**Note**: A value of **Micro-agent** indicates that the event was triggered by the Defender for IoT [Device Builder](/azure/defender-for-iot/device-builders/) platform. |
+ | **Engine** | The [Defender for IoT detection engine](architecture.md#defender-for-iot-analytics-engines) that detected the activity and triggered the alert. <br><br>**Note**: A value of **Micro-agent** indicates that the event was triggered by the Defender for IoT [Device Builder](../device-builders/index.yml) platform. |
| **Last detection** | The last time the alert was detected. <br><br>- If an alert's status is **New**, and the same traffic is seen again, the **Last detection** time is updated for the same alert. <br>- If the alert's status is **Closed** and traffic is seen again, the **Last detection** time is *not* updated, and a new alert is triggered.| | **Status** | The alert status: *New*, *Active*, *Closed* <br><br>For more information, see [Alert statuses and triaging options](alerts.md#alert-statuses-and-triaging-options).| | **Source device** |The IP address, MAC address, or the name of the device where the traffic that triggered the alert originated. |
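The **Last detection** rule in the table above can be sketched as state logic. This is a minimal illustration of the documented behavior only (the `Alert` type, field names, and the handling of statuses other than **New** and **Closed** are assumptions, not the service's implementation):

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class Alert:
    status: str                 # "New", "Active", or "Closed"
    last_detection: datetime

def on_traffic_seen(existing: Optional[Alert], seen_at: datetime) -> Alert:
    """Apply the documented rule: traffic matching a New alert refreshes its
    Last detection time on the same alert; traffic matching a Closed alert
    leaves it untouched and triggers a fresh alert."""
    if existing is not None and existing.status == "New":
        existing.last_detection = seen_at  # same alert, timestamp updated
        return existing
    # Closed alerts (and cases not covered by the table) open a new alert
    # in this sketch.
    return Alert(status="New", last_detection=seen_at)
```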
The file is generated, and you're prompted to save it locally.
> [OT monitoring alert types and descriptions](alert-engine-messages.md) > [!div class="nextstepaction"]
-> [Microsoft Defender for IoT alerts](alerts.md)
+> [Microsoft Defender for IoT alerts](alerts.md)
defender-for-iot How To Manage Individual Sensors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-individual-sensors.md
You'll need an SMTP mail server configured to enable email alerts about disconne
**Prerequisites**:
-Make sure you can reach the SMTP server from the [sensor's management port](/azure/defender-for-iot/organizations/best-practices/understand-network-architecture).
+Make sure you can reach the SMTP server from the [sensor's management port](./best-practices/understand-network-architecture.md).
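The reachability prerequisite above can be smoke-tested with a quick TCP probe. This is a sketch to run from a machine on the management network, not a Defender for IoT tool; the function name and default port are assumptions:

```python
import socket

def smtp_reachable(host: str, port: int = 25, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to the SMTP server succeeds within
    the timeout; a stand-in for checking reachability from the sensor's
    management interface."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```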
**To configure an SMTP server on your sensor**:
For more information, see:
- [Manage sensors with Defender for IoT in the Azure portal](how-to-manage-sensors-on-the-cloud.md) - [Threat intelligence research and packages](how-to-work-with-threat-intelligence-packages.md) - [Manage sensors from the management console](how-to-manage-sensors-from-the-on-premises-management-console.md)-- [Troubleshoot the sensor and on-premises management console](how-to-troubleshoot-the-sensor-and-on-premises-management-console.md)
+- [Troubleshoot the sensor and on-premises management console](how-to-troubleshoot-the-sensor-and-on-premises-management-console.md)
defender-for-iot How To Manage Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-subscriptions.md
Before performing the procedures in this article, make sure that you have:
- An Azure subscription. If you need to, [sign up for a free account](https://azure.microsoft.com/free/). -- A [Security admin](/azure/role-based-access-control/built-in-roles#security-admin), [Contributor](/azure/role-based-access-control/built-in-roles#contributor), or [Owner](/azure/role-based-access-control/built-in-roles#owner) user role for the Azure subscription that you'll be using for the integration
+- A [Security admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner) user role for the Azure subscription that you'll be using for the integration
## Calculate committed devices for OT monitoring
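The calculation amounts to rounding your discovered device count up to the next billing block. A minimal sketch, assuming devices are committed in increments of 100 (verify the actual block size against your plan; the helper name is hypothetical):

```python
import math

def committed_devices(discovered: int, increment: int = 100) -> int:
    """Round the discovered device count up to the nearest billing
    increment. Assumes a hypothetical block size of 100 devices."""
    if discovered <= 0:
        return 0
    return math.ceil(discovered / increment) * increment

print(committed_devices(1432))  # → 1500
```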
This procedure describes how to add a Defender for IoT plan for OT networks to a
- **Subscription**. Select the subscription where you would like to add a plan.
- You'll need a [Security admin](/azure/role-based-access-control/built-in-roles#security-admin), [Contributor](/azure/role-based-access-control/built-in-roles#contributor), or [Owner](/azure/role-based-access-control/built-in-roles#owner) role for the subscription.
+ You'll need a [Security admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner) role for the subscription.
> [!TIP] > If your subscription isn't listed, check your account details and confirm your permissions with the subscription owner.
defender-for-iot How To Troubleshoot The Sensor And On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-troubleshoot-the-sensor-and-on-premises-management-console.md
You can configure a standalone sensor and a management console, with the sensors
To connect a standalone sensor to NTP: -- [See the CLI documentation](/azure/defender-for-iot/organizations/references-work-with-defender-for-iot-cli-commands#sync-time-from-the-ntp-server).
+- [See the CLI documentation](./references-work-with-defender-for-iot-cli-commands.md).
To connect a sensor controlled by the management console to NTP:
defender-for-iot On Premises Sentinel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/integrations/on-premises-sentinel.md
You can [stream Microsoft Defender for IoT data into Microsoft Sentinel](../iot-
However, if you're working in a hybrid environment or completely on-premises, you might want to stream data from your locally managed sensors to Microsoft Sentinel. To do this, create forwarding rules on your OT network sensor or, for multiple sensors, from an on-premises management console.
-Stream data into Microsoft Sentinel whenever you want to use Microsoft Sentinel's advanced threat hunting, security analytics, and automation features when responding to security incidents and threats across your network. For more information, see [Microsoft Sentinel documentation](/azure/sentinel/).
+Stream data into Microsoft Sentinel whenever you want to use Microsoft Sentinel's advanced threat hunting, security analytics, and automation features when responding to security incidents and threats across your network. For more information, see [Microsoft Sentinel documentation](../../../sentinel/index.yml).
## Prerequisites
Before you start, make sure that you have the following prerequisites as needed:
- Access to the OT network sensor or on-premises management console as an **Admin** user. For more information, see [On-premises users and roles for OT monitoring with Defender for IoT](../roles-on-premises.md). -- A proxy machine prepared to send data to Microsoft Sentinel. For more information, see [Get CEF-formatted logs from your device or appliance into Microsoft Sentinel](/azure/sentinel/connect-common-event-format).
+- A proxy machine prepared to send data to Microsoft Sentinel. For more information, see [Get CEF-formatted logs from your device or appliance into Microsoft Sentinel](../../../sentinel/connect-common-event-format.md).
- If you want to encrypt the data you send to Microsoft Sentinel using TLS, make sure to generate a valid TLS certificate from the proxy server to use in your forwarding alert rule.
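Data forwarded through the proxy travels as CEF lines. The framing can be sketched as follows; the vendor/product values and field names are illustrative, not the sensor's exact output:

```python
def to_cef(signature_id: str, name: str, severity: int, extensions: dict) -> str:
    """Build a CEF:0 line: seven pipe-delimited header fields followed by
    space-separated key=value extension pairs."""
    def esc(value: str) -> str:
        # Backslashes and pipes must be escaped in CEF header fields
        return value.replace("\\", "\\\\").replace("|", "\\|")

    header = "|".join([
        "CEF:0", esc("CyberX"), esc("Defender for IoT"), esc("1.0"),
        esc(signature_id), esc(name), str(severity),
    ])
    ext = " ".join(f"{k}={v}" for k, v in extensions.items())
    return f"{header}|{ext}"

line = to_cef("1", "Unauthorized PLC write", 8,
              {"src": "10.1.0.5", "dst": "10.1.0.9"})
```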
Select **Save** when you're done. Make sure to test the rule to make sure that i
> [Stream data from cloud-connected sensors](../iot-solution.md) > [!div class="nextstepaction"]
-> [Investigate in Microsoft Sentinel](/azure/sentinel/investigate-cases)
+> [Investigate in Microsoft Sentinel](../../../sentinel/investigate-cases.md)
defender-for-iot Send Cloud Data To Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/integrations/send-cloud-data-to-partners.md
As more businesses convert OT systems to digital IT infrastructures, security op
We recommend using Microsoft Defender for IoT's out-of-the-box [data connector](../iot-solution.md) and [solution](../iot-advanced-threat-monitoring.md) to integrate with Microsoft Sentinel and bridge the gap between IT and OT security challenges.
-However, if you have other security information and event management (SIEM) systems, you can also use Microsoft Sentinel to forward Defender for IoT cloud alerts on to that partner SIEM, via [Microsoft Sentinel](/azure/sentinel/) and [Azure Event Hubs](/azure/event-hubs/).
+However, if you have other security information and event management (SIEM) systems, you can also use Microsoft Sentinel to forward Defender for IoT cloud alerts on to that partner SIEM, via [Microsoft Sentinel](../../../sentinel/index.yml) and [Azure Event Hubs](../../../event-hubs/index.yml).
While this article uses Splunk as an example, you can use the process described below with any SIEM that supports Event Hub ingestion, such as IBM QRadar.
You'll need Azure Active Directory (Azure AD) defined as a service principal for
**To register an Azure AD application and define permissions**:
-1. In [Azure AD](/azure/active-directory/), register a new application. On the **Certificates & secrets** page, add a new client secret for the service principal.
+1. In [Azure AD](../../../active-directory/index.yml), register a new application. On the **Certificates & secrets** page, add a new client secret for the service principal.
- For more information, see [Register an application with the Microsoft identity platform](/azure/active-directory/develop/quickstart-register-app)
+ For more information, see [Register an application with the Microsoft identity platform](../../../active-directory/develop/quickstart-register-app.md)
1. In your app's **API permissions** page, grant API permissions to read data from your app.
You'll need Azure Active Directory (Azure AD) defined as a service principal for
1. Make sure that admin consent is required for your permission.
- For more information, see [Configure a client application to access a web API](/azure/active-directory/develop/quickstart-configure-app-access-web-apis#add-permissions-to-access-your-web-api)
+ For more information, see [Configure a client application to access a web API](../../../active-directory/develop/quickstart-configure-app-access-web-apis.md#add-permissions-to-access-your-web-api)
1. From your app's **Overview** page, note the following values for your app:
Create an Azure event hub to use as a bridge between Microsoft Sentinel and your
In your event hub, make sure to define the **Partition Count** and **Message Retention** settings.
- For more information, see [Create an event hub using the Azure portal](/azure/event-hubs/event-hubs-create).
+ For more information, see [Create an event hub using the Azure portal](../../../event-hubs/event-hubs-create.md).
1. In your event hub namespace, select the **Access control (IAM)** page and add a new role assignment. Select the **Azure Event Hubs Data Receiver** role, and add the Azure AD service principal app that you'd created [earlier](#register-an-application-in-azure-active-directory) as a member.
- For more information, see: [Assign Azure roles using the Azure portal](/azure/role-based-access-control/role-assignments-portal).
+ For more information, see: [Assign Azure roles using the Azure portal](../../../role-based-access-control/role-assignments-portal.md).
1. In your event hub namespace's **Overview** page, make a note of the namespace's **Host name** value.
In your rule, make sure to define the following settings:
- Configure the **Source** as **SecurityIncident** - Configure the **Destination** as **Event Type**, using the event hub namespace and event hub name you'd recorded earlier.
-For more information, see [Log Analytics workspace data export in Azure Monitor](/azure/azure-monitor/logs/logs-data-export?tabs=portal#create-or-update-a-data-export-rule).
+For more information, see [Log Analytics workspace data export in Azure Monitor](../../../azure-monitor/logs/logs-data-export.md?tabs=portal#create-or-update-a-data-export-rule).
## Configure Splunk to consume Microsoft Sentinel incidents
Once data starts getting ingested into Splunk from your event hub, query the dat
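Rows exported from the **SecurityIncident** table arrive as JSON records. A minimal sketch of pulling out triage fields on the consuming side (the field names follow the Log Analytics `SecurityIncident` schema as an assumption; verify them against your workspace):

```python
import json

def summarize_incident(record: str) -> dict:
    """Extract a few triage fields from an exported SecurityIncident row."""
    row = json.loads(record)
    return {
        "number": row.get("IncidentNumber"),
        "title": row.get("Title"),
        "severity": row.get("Severity"),
        "status": row.get("Status"),
    }

sample = ('{"IncidentNumber": 42, "Title": "Excessive login attempts", '
          '"Severity": "High", "Status": "New"}')
print(summarize_incident(sample)["severity"])  # → High
```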
This article describes how to forward alerts generated by cloud-connected sensors only. If you're working on-premises, such as in air-gapped environments, you may be able to create a forwarding alert rule to forward alert data directly from an OT sensor or on-premises management console.
-For more information, see [Integrations with Microsoft and partner services](../integrate-overview.md).
+For more information, see [Integrations with Microsoft and partner services](../integrate-overview.md).
defender-for-iot Iot Advanced Threat Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/iot-advanced-threat-monitoring.md
Last updated 09/18/2022
# Tutorial: Investigate and detect threats for IoT devices
-The integration between Microsoft Defender for IoT and [Microsoft Sentinel](/azure/sentinel/) enable SOC teams to efficiently and effectively detect and respond to security threats across your network. Enhance your security capabilities with the [Microsoft Defender for IoT solution](/azure/sentinel/sentinel-solutions-catalog#domain-solutions), a set of bundled content configured specifically for Defender for IoT data that includes analytics rules, workbooks, and playbooks.
+The integration between Microsoft Defender for IoT and [Microsoft Sentinel](../../sentinel/index.yml) enables SOC teams to efficiently and effectively detect and respond to security threats across your network. Enhance your security capabilities with the [Microsoft Defender for IoT solution](../../sentinel/sentinel-solutions-catalog.md#domain-solutions), a set of bundled content configured specifically for Defender for IoT data that includes analytics rules, workbooks, and playbooks.
In this tutorial, you:
In this tutorial, you:
Before you start, make sure you have: -- **Read** and **Write** permissions on your Microsoft Sentinel workspace. For more information, see [Permissions in Microsoft Sentinel](/azure/sentinel/roles).
+- **Read** and **Write** permissions on your Microsoft Sentinel workspace. For more information, see [Permissions in Microsoft Sentinel](../../sentinel/roles.md).
- Completed [Tutorial: Connect Microsoft Defender for IoT with Microsoft Sentinel](iot-solution.md). ## Install the Defender for IoT solution
-Microsoft Sentinel [solutions](/azure/sentinel/sentinel-solutions) can help you onboard Microsoft Sentinel security content for a specific data connector using a single process.
+Microsoft Sentinel [solutions](../../sentinel/sentinel-solutions.md) can help you onboard Microsoft Sentinel security content for a specific data connector using a single process.
The **Microsoft Defender for IoT** solution integrates Defender for IoT data with Microsoft Sentinel's security orchestration, automation, and response (SOAR) capabilities by providing out-of-the-box and optimized playbooks for automated response and prevention capabilities.
The **Microsoft Defender for IoT** solution integrates Defender for IoT data wit
1. When you're done, select **Review + Create** to install the solution.
-For more information, see [About Microsoft Sentinel content and solutions](/azure/sentinel/sentinel-solutions) and [Centrally discover and deploy out-of-the-box content and solutions](/azure/sentinel/sentinel-solutions-deploy).
+For more information, see [About Microsoft Sentinel content and solutions](../../sentinel/sentinel-solutions.md) and [Centrally discover and deploy out-of-the-box content and solutions](../../sentinel/sentinel-solutions-deploy.md).
## Detect threats out-of-the-box with Defender for IoT data
After you've [configured your Defender for IoT data to trigger new incidents i
> [!TIP] > To investigate the incident in Defender for IoT, select the **Investigate in Microsoft Defender for IoT** link at the top of the incident details pane.
-For more information on how to investigate incidents and use the investigation graph, see [Investigate incidents with Microsoft Sentinel](/azure/sentinel/investigate-cases).
+For more information on how to investigate incidents and use the investigation graph, see [Investigate incidents with Microsoft Sentinel](../../sentinel/investigate-cases.md).
### Investigate further with IoT device entities
The IoT device entity page provides contextual device information, with basic de
:::image type="content" source="media/iot-solution/iot-device-entity-page.png" alt-text="Screenshot of the IoT device entity page.":::
-For more information on entity pages, see [Investigate entities with entity pages in Microsoft Sentinel](/azure/sentinel/entity-pages).
+For more information on entity pages, see [Investigate entities with entity pages in Microsoft Sentinel](../../sentinel/entity-pages.md).
You can also hunt for vulnerable devices on the Microsoft Sentinel **Entity behavior** page. For example, view the top five IoT devices with the highest number of alerts, or search for a device by IP address or device name: :::image type="content" source="media/iot-solution/entity-behavior-iot-devices-alerts.png" alt-text="Screenshot of IoT devices by number of alerts on entity behavior page.":::
-For more information on how to investigate incidents and use the investigation graph, see [Investigate incidents with Microsoft Sentinel](/azure/sentinel/investigate-cases).
+For more information on how to investigate incidents and use the investigation graph, see [Investigate incidents with Microsoft Sentinel](../../sentinel/investigate-cases.md).
## Visualize and monitor Defender for IoT data
To visualize and monitor your Defender for IoT data, use the workbooks deployed
The Defender for IoT workbooks provide guided investigations for OT entities based on open incidents, alert notifications, and activities for OT assets. They also provide a hunting experience across the MITRE ATT&CK® framework for ICS, and are designed to enable analysts, security engineers, and MSSPs to gain situational awareness of OT security posture.
-View workbooks in Microsoft Sentinel on the **Threat management > Workbooks > My workbooks** tab. For more information, see [Visualize collected data](/azure/sentinel/get-visibility).
+View workbooks in Microsoft Sentinel on the **Threat management > Workbooks > My workbooks** tab. For more information, see [Visualize collected data](../../sentinel/get-visibility.md).
The following table describes the workbooks included in the **Microsoft Defender for IoT** solution:
Before using the out-of-the-box playbooks, make sure to perform the prerequisite
For more information, see: -- [Tutorial: Use playbooks with automation rules in Microsoft Sentinel](/azure/sentinel/tutorial-respond-threats-playbook)-- [Automate threat response with playbooks in Microsoft Sentinel](/azure/sentinel/automate-responses-with-playbooks)
+- [Tutorial: Use playbooks with automation rules in Microsoft Sentinel](../../sentinel/tutorial-respond-threats-playbook.md)
+- [Automate threat response with playbooks in Microsoft Sentinel](../../sentinel/automate-responses-with-playbooks.md)
### Playbook prerequisites
This procedure describes how to configure a Microsoft Sentinel analytics rule to
1. Select **Run**. > [!TIP]
-> You can also manually run a playbook on demand. This can be useful in situations where you want more control over orchestration and response processes. For more information, see [Run a playbook on demand](/azure/sentinel/tutorial-respond-threats-playbook#run-a-playbook-on-demand).
+> You can also manually run a playbook on demand. This can be useful in situations where you want more control over orchestration and response processes. For more information, see [Run a playbook on demand](../../sentinel/tutorial-respond-threats-playbook.md#run-a-playbook-on-demand).
### Automatically close incidents
This playbook updates the incident severity according to the importance level of
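The playbook's adjustment can be sketched as a simple lookup from device importance to incident severity. The mapping table below is hypothetical and only illustrates the shape of the logic, not the playbook's actual values:

```python
# Hypothetical mapping from device importance level to incident severity
SEVERITY_BY_IMPORTANCE = {
    "High": "High",
    "Medium": "Medium",
    "Low": "Informational",
}

def adjusted_severity(device_importance: str, current: str) -> str:
    """Return the incident severity implied by the device's importance,
    keeping the current severity for unknown importance levels."""
    return SEVERITY_BY_IMPORTANCE.get(device_importance, current)
```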
## Next steps > [!div class="nextstepaction"]
-> [Visualize data](/azure/sentinel/get-visibility)
+> [Visualize data](../../sentinel/get-visibility.md)
> [!div class="nextstepaction"]
-> [Create custom analytics rules](/azure/sentinel/detect-threats-custom)
+> [Create custom analytics rules](../../sentinel/detect-threats-custom.md)
> [!div class="nextstepaction"]
-> [Investigate incidents](/azure/sentinel/investigate-cases)
+> [Investigate incidents](../../sentinel/investigate-cases.md)
> [!div class="nextstepaction"]
-> [Investigate entities](/azure/sentinel/entity-pages)
+> [Investigate entities](../../sentinel/entity-pages.md)
> [!div class="nextstepaction"]
-> [Use playbooks with automation rules](/azure/sentinel/tutorial-respond-threats-playbook)
-
-For more information, see our blog: [Defending Critical Infrastructure with the Microsoft Sentinel: IT/OT Threat Monitoring Solution](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/defending-critical-infrastructure-with-the-microsoft-sentinel-it/ba-p/3061184)
+> [Use playbooks with automation rules](../../sentinel/tutorial-respond-threats-playbook.md)
+For more information, see our blog: [Defending Critical Infrastructure with the Microsoft Sentinel: IT/OT Threat Monitoring Solution](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/defending-critical-infrastructure-with-the-microsoft-sentinel-it/ba-p/3061184)
defender-for-iot Iot Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/iot-solution.md
In this tutorial, you will learn how to:
Before you start, make sure you have the following requirements on your workspace: -- **Read** and **Write** permissions on your Microsoft Sentinel workspace. For more information, see [Permissions in Microsoft Sentinel](/azure/sentinel/roles).
+- **Read** and **Write** permissions on your Microsoft Sentinel workspace. For more information, see [Permissions in Microsoft Sentinel](../../sentinel/roles.md).
- **Contributor** or **Owner** permissions on the subscription you want to connect to Microsoft Sentinel. - A Defender for IoT plan on your Azure subscription with data streaming into Defender for IoT. For more information, see [Quickstart: Get started with Defender for IoT](getting-started.md). > [!IMPORTANT]
-> Currently, having both the Microsoft Defender for IoT and the [Microsoft Defender for Cloud](/azure/sentinel/data-connectors-reference#microsoft-defender-for-cloud) data connectors enabled on the same Microsoft Sentinel workspace simultaneously may result in duplicate alerts in Microsoft Sentinel. We recommend that you disconnect the Microsoft Defender for Cloud data connector before connecting to Microsoft Defender for IoT.
+> Currently, having both the Microsoft Defender for IoT and the [Microsoft Defender for Cloud](../../sentinel/data-connectors-reference.md#microsoft-defender-for-cloud) data connectors enabled on the same Microsoft Sentinel workspace simultaneously may result in duplicate alerts in Microsoft Sentinel. We recommend that you disconnect the Microsoft Defender for Cloud data connector before connecting to Microsoft Defender for IoT.
> ## Connect your data from Defender for IoT to Microsoft Sentinel
Start by enabling the **Defender for IoT** data connector to stream all your Def
If you've made any connection changes, it can take 10 seconds or more for the **Subscription** list to update.
-For more information, see [Connect Microsoft Sentinel to Azure, Windows, Microsoft, and Amazon services](/azure/sentinel/connect-azure-windows-microsoft-services).
+For more information, see [Connect Microsoft Sentinel to Azure, Windows, Microsoft, and Amazon services](../../sentinel/connect-azure-windows-microsoft-services.md).
## View Defender for IoT alerts
After you've connected a subscription to Microsoft Sentinel, you'll be able to v
> [!NOTE] > The **Logs** page in Microsoft Sentinel is based on Azure Monitor's Log Analytics. >
-> For more information, see [Log queries overview](/azure/azure-monitor/logs/log-query-overview) in the Azure Monitor documentation and the [Write your first KQL query](/training/modules/write-first-query-kusto-query-language/) Learn module.
+> For more information, see [Log queries overview](../../azure-monitor/logs/log-query-overview.md) in the Azure Monitor documentation and the [Write your first KQL query](/training/modules/write-first-query-kusto-query-language/) Learn module.
> ### Understand alert timestamps
For more information, see:
- [Tutorial: Investigate and detect threats for IoT devices](iot-advanced-threat-monitoring.md) - [Defending Critical Infrastructure with the Microsoft Sentinel: IT/OT Threat Monitoring Solution](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/defending-critical-infrastructure-with-the-microsoft-sentinel-it/ba-p/3061184) - [Microsoft Defender for IoT solution](https://azuremarketplace.microsoft.com/marketplace/apps/azuresentinel.azure-sentinel-solution-unifiedmicrosoftsocforot?tab=Overview)-- [Microsoft Defender for IoT data connector](/azure/sentinel/data-connectors-reference#microsoft-defender-for-iot)-
+- [Microsoft Defender for IoT data connector](../../sentinel/data-connectors-reference.md#microsoft-defender-for-iot)
defender-for-iot Manage Subscriptions Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/manage-subscriptions-enterprise.md
Before performing the procedures in this article, make sure that you have:
- The following user roles:
- - **In Azure Active Directory**: [Global administrator](/azure/active-directory/roles/permissions-reference#global-administrator) for your Microsoft 365 tenant
+ - **In Azure Active Directory**: [Global administrator](../../active-directory/roles/permissions-reference.md#global-administrator) for your Microsoft 365 tenant
- - **In Azure RBAC**: [Security admin](/azure/role-based-access-control/built-in-roles#security-admin), [Contributor](/azure/role-based-access-control/built-in-roles#contributor), or [Owner](/azure/role-based-access-control/built-in-roles#owner) for the Azure subscription that you'll be using for the integration
+ - **In Azure RBAC**: [Security admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner) for the Azure subscription that you'll be using for the integration
## Calculate committed devices for Enterprise IoT monitoring
This procedure describes how to add an Enterprise IoT plan to your Azure subscri
1. Select the following options for your plan:
- - **Select an Azure subscription**: Select the Azure subscription that you want to use for the integration. You'll need a [Security admin](/azure/role-based-access-control/built-in-roles#security-admin), [Contributor](/azure/role-based-access-control/built-in-roles#contributor), or [Owner](/azure/role-based-access-control/built-in-roles#owner) role for the subscription.
+ - **Select an Azure subscription**: Select the Azure subscription that you want to use for the integration. You'll need a [Security admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner) role for the subscription.
> [!TIP] > If your subscription isn't listed, check your account details and confirm your permissions with the subscription owner.
For more information, see:
- [Create an additional Azure subscription](../../cost-management-billing/manage/create-subscription.md) -- [Upgrade your Azure subscription](../../cost-management-billing/manage/upgrade-azure-subscription.md)
+- [Upgrade your Azure subscription](../../cost-management-billing/manage/upgrade-azure-subscription.md)
defender-for-iot Manage Users Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/manage-users-overview.md
Microsoft Defender for IoT provides tools both in the Azure portal and on-premises for managing user access across Defender for IoT resources.
## Azure users for Defender for IoT
-In the Azure portal, users are managed at the subscription level with [Azure Active Directory](/azure/active-directory/) and [Azure role-based access control (RBAC)](/azure/role-based-access-control/overview). Azure subscription users can have one or more user roles, which determine the data and actions they can access from the Azure portal, including in Defender for IoT.
+In the Azure portal, users are managed at the subscription level with [Azure Active Directory](../../active-directory/index.yml) and [Azure role-based access control (RBAC)](../../role-based-access-control/overview.md). Azure subscription users can have one or more user roles, which determine the data and actions they can access from the Azure portal, including in Defender for IoT.
-Use the [portal](/azure/role-based-access-control/quickstart-assign-role-user-portal) or [PowerShell](/azure/role-based-access-control/tutorial-role-assignments-group-powershell) to assign your Azure subscription users with the specific roles they'll need to view data and take action, such as whether they'll be viewing alert or device data, or managing pricing plans and sensors.
+Use the [portal](../../role-based-access-control/quickstart-assign-role-user-portal.md) or [PowerShell](../../role-based-access-control/tutorial-role-assignments-group-powershell.md) to assign your Azure subscription users with the specific roles they'll need to view data and take action, such as whether they'll be viewing alert or device data, or managing pricing plans and sensors.
For more information, see [Manage users on the Azure portal](manage-users-portal.md) and [Azure user roles for OT and Enterprise IoT monitoring](roles-azure.md).
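Beyond the portal and PowerShell options above, these role assignments can also be scripted with the Azure CLI. The sketch below uses `az role assignment create`; the subscription ID and user email are placeholders, and the `az` command is commented out because it assumes an authenticated session (`az login`).

```shell
# Placeholder subscription ID; substitute your own.
SUB_ID="00000000-0000-0000-0000-000000000000"
# Role assignments are made against a scope; for Defender for IoT,
# that's typically the whole subscription.
SCOPE="/subscriptions/${SUB_ID}"

# Hypothetical example: grant a user the Security Admin role at
# subscription scope (requires an authenticated Azure CLI session):
# az role assignment create --assignee "user@contoso.com" \
#   --role "Security Admin" --scope "$SCOPE"

echo "$SCOPE"
```

The same pattern works for the Contributor or Owner roles by changing the `--role` value.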
For more information, see [Define global access permission for on-premises users
## Next steps
-- [Manage Azure subscription users](/azure/role-based-access-control/quickstart-assign-role-user-portal)
+- [Manage Azure subscription users](../../role-based-access-control/quickstart-assign-role-user-portal.md)
- [Create and manage users on an OT network sensor](manage-users-sensor.md)
- [Create and manage users on an on-premises management console](manage-users-on-premises-management-console.md)
For more information, see:
- [Azure user roles and permissions for Defender for IoT](roles-azure.md)
-- [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md)
+- [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md)
defender-for-iot Manage Users Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/manage-users-portal.md
Microsoft Defender for IoT provides tools both in the Azure portal and on-premises for managing user access across Defender for IoT resources.
-In the Azure portal, user management is managed at the *subscription* level with [Azure Active Directory](/azure/active-directory/) and [Azure role-based access control (RBAC)](/azure/role-based-access-control/overview). Assign Azure Active Directory users with Azure roles at the subscription level so that they can add or update Defender for IoT pricing plans and access device data, manage sensors, and access device data across Defender for IoT.
+In the Azure portal, user management is managed at the *subscription* level with [Azure Active Directory](../../active-directory/index.yml) and [Azure role-based access control (RBAC)](../../role-based-access-control/overview.md). Assign Azure Active Directory users with Azure roles at the subscription level so that they can add or update Defender for IoT pricing plans and access device data, manage sensors, and access device data across Defender for IoT.
For OT network monitoring, Defender for IoT has the extra *site* level, which you can use to add granularity to your user management. For example, assign roles at the site level to apply different permissions for the same users across different sites.
For OT network monitoring, Defender for IoT has the extra *site* level, which you can use to add granularity to your user management.
Manage user access for Defender for IoT using Azure RBAC, applying the roles to users or user groups as needed to access required functionality.
-- [Grant a user access to Azure resources using the Azure portal](/azure/role-based-access-control/quickstart-assign-role-user-portal)
-- [Grant a group access to Azure resources using Azure PowerShell](/azure/role-based-access-control/tutorial-role-assignments-group-powershell)
+- [Grant a user access to Azure resources using the Azure portal](../../role-based-access-control/quickstart-assign-role-user-portal.md)
+- [Grant a group access to Azure resources using Azure PowerShell](../../role-based-access-control/tutorial-role-assignments-group-powershell.md)
- [Azure user roles for OT and Enterprise IoT monitoring](roles-azure.md)

## Manage site-based access control (Public preview)
Sites and site-based access control is relevant only for OT monitoring sites, an
For more information, see:
-- [Grant a user access to Azure resources using the Azure portal](/azure/role-based-access-control/quickstart-assign-role-user-portal)
-- [List Azure role assignments using the Azure portal](/azure/role-based-access-control/role-assignments-list-portal)
-- [Check access for a user to Azure resources](/azure/role-based-access-control/check-access)
+- [Grant a user access to Azure resources using the Azure portal](../../role-based-access-control/quickstart-assign-role-user-portal.md)
+- [List Azure role assignments using the Azure portal](../../role-based-access-control/role-assignments-list-portal.md)
+- [Check access for a user to Azure resources](../../role-based-access-control/check-access.md)
## Next steps
defender-for-iot Ot Pre Configured Appliances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/ot-pre-configured-appliances.md
You can purchase any of the following appliances for your OT on-premises management console.
|||||
|**E1800** | [HPE ProLiant DL20 Gen10 Plus](appliance-catalog/hpe-proliant-dl20-plus-enterprise.md) (4SFF) <br><br> [Dell PowerEdge R350](appliance-catalog/dell-poweredge-r350-e1800.md) | 300 | **Mounting**: 1U <br>**Ports**: 8x RJ45 or 6x SFP (OPT) |
-For information about previously supported legacy appliances, see the [appliance catalog](/azure/defender-for-iot/organizations/appliance-catalog/).
+For information about previously supported legacy appliances, see the [appliance catalog](./appliance-catalog/index.yml).
## Next steps
Then, use any of the following procedures to continue:
- [Download software for an on-premises management console](how-to-manage-the-on-premises-management-console.md#download-software-for-the-on-premises-management-console)
- [Install software](how-to-install-software.md)
-Our OT monitoring appliance reference articles also include extra installation procedures in case you need to install software on your own appliances, or reinstall software on preconfigured appliances. For more information, see [OT monitoring appliance reference](appliance-catalog/index.yml).
+Our OT monitoring appliance reference articles also include extra installation procedures in case you need to install software on your own appliances, or reinstall software on preconfigured appliances. For more information, see [OT monitoring appliance reference](appliance-catalog/index.yml).
defender-for-iot Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/recommendations.md
The following recommendations are displayed for devices detected by OT and Enterprise IoT network sensors:
| **Enterprise IoT network sensors** | |
| **Disable insecure administration protocol**| Devices with this recommendation are exposed to malicious threats because they use Telnet, which isn't a secured and encrypted communication protocol. <br><br>We recommend that you switch to a more secure protocol, such as SSH, disable the server altogether, or apply network access restrictions.|
-Other recommendations you may see in the **Recommendations** page are relevant for the [Defender for IoT micro agent](/azure/defender-for-iot/device-builders/).
+Other recommendations you may see in the **Recommendations** page are relevant for the [Defender for IoT micro agent](../device-builders/index.yml).
## Next steps

> [!div class="nextstepaction"]
-> [View the device inventory](how-to-manage-device-inventory-for-organizations.md#view-the-device-inventory)
-
+> [View the device inventory](how-to-manage-device-inventory-for-organizations.md#view-the-device-inventory)
defender-for-iot Roles Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/roles-azure.md
# Azure user roles and permissions for Defender for IoT
-Microsoft Defender for IoT uses [Azure Role-Based Access Control (RBAC)](/azure/role-based-access-control/) to provide access to Enterprise IoT monitoring services and data on the Azure portal.
+Microsoft Defender for IoT uses [Azure Role-Based Access Control (RBAC)](../../role-based-access-control/index.yml) to provide access to Enterprise IoT monitoring services and data on the Azure portal.
The built-in Azure [Security Reader](../../role-based-access-control/built-in-roles.md#security-reader), [Security Admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), and [Owner](../../role-based-access-control/built-in-roles.md#owner) roles are relevant for use in Defender for IoT.
-This article provides a reference of Defender for IoT actions available for each role in the Azure portal. For more information, see [Azure built-in roles](/azure/role-based-access-control/built-in-roles).
+This article provides a reference of Defender for IoT actions available for each role in the Azure portal. For more information, see [Azure built-in roles](../../role-based-access-control/built-in-roles.md).
## Roles and permissions reference
For more information, see:
- [Manage OT monitoring users on the Azure portal](manage-users-portal.md)
- [On-premises user roles for OT monitoring with Defender for IoT](roles-on-premises.md)
- [Create and manage users on an OT network sensor](manage-users-sensor.md)
-- [Create and manage users on an on-premises management console](manage-users-on-premises-management-console.md)
+- [Create and manage users on an on-premises management console](manage-users-on-premises-management-console.md)
defender-for-iot Track User Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/track-user-activity.md
After you've set up your user access for the [Azure portal](manage-users-portal.
Use Azure Active Directory user auditing resources to audit Azure user activity across Defender for IoT. For more information, see:
-- [Audit logs in Azure Active directory](/azure/active-directory/reports-monitoring/concept-audit-logs)
-- [Azure AD audit activity reference](/azure/active-directory/reports-monitoring/reference-audit-activities)
+- [Audit logs in Azure Active directory](../../active-directory/reports-monitoring/concept-audit-logs.md)
+- [Azure AD audit activity reference](../../active-directory/reports-monitoring/reference-audit-activities.md)
## Audit user activity on an OT network sensor
For more information, see:
- [Azure user roles and permissions for Defender for IoT](roles-azure.md)
- [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md)
- [Create and manage users on an OT network sensor](manage-users-sensor.md)
-- [Create and manage users on an on-premises management console](manage-users-on-premises-management-console.md)
+- [Create and manage users on an on-premises management console](manage-users-on-premises-management-console.md)
deployment-environments How To Configure Catalog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-configure-catalog.md
To store the personal access token you generated as a [key vault secret](../key-
| **Name** | Enter a name for the catalog. |
| **Git clone URI** | Enter or paste the [clone URL](#get-the-clone-url-for-your-repository) for either your GitHub repository or your Azure DevOps repository.<br/>*Sample Catalog Example:* https://github.com/Azure/deployment-environments.git |
| **Branch** | Enter the repository branch to connect to.<br/>*Sample Catalog Example:* main|
- | **Folder path** | Enter the folder path relative to the clone URI that contains subfolders with your catalog items. This folder path should be the path to the folder that contains the subfolders with the catalog item manifests, and not the path to the folder with the catalog item manifest itself.<br/>*Sample Catalog Example:* /Environments|
+ | **Folder path** | Enter the folder path relative to the clone URI that contains subfolders with your catalog items. </br> This folder path should be the path to the folder that contains the subfolders with the catalog item manifests, and not the path to the folder with the catalog item manifest itself.<br/>*Sample Catalog Example:* /Environments|
| **Secret identifier**| Enter the [secret identifier](#create-a-personal-access-token) that contains your personal access token for the repository.|
- :::image type="content" source="media/how-to-configure-catalog/add-new-catalog-form.png" alt-text="Screenshot that shows how to add a catalog to a dev center.":::
+ :::image type="content" source="media/how-to-configure-catalog/add-catalog-form-inline.png" alt-text="Screenshot that shows how to add a catalog to a dev center." lightbox="media/how-to-configure-catalog/add-catalog-form-expanded.png":::
1. In **Catalogs** for the dev center, verify that your catalog appears. If the connection is successful, **Status** is **Connected**.
deployment-environments Quickstart Create And Configure Devcenter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/quickstart-create-and-configure-devcenter.md
Previously updated : 10/26/2022
Last updated : 12/20/2022

# Quickstart: Create and configure a dev center

This quickstart shows you how to create and configure a dev center in Azure Deployment Environments Preview.
-An enterprise development infrastructure team typically sets up a dev center, configures different entities within the dev center, creates projects, and provides access to development teams. Development teams create [environments](concept-environments-key-concepts.md#environments) by using [catalog items](concept-environments-key-concepts.md#catalog-items), connect to individual resources, and deploy applications.
-
-In this quickstart, you learn how to:
-
-> [!div class="checklist"]
->
-> - Create a dev center
-> - Attach an identity to your dev center
-> - Attach a catalog to your dev center
-> - Create environment types
+An enterprise development infrastructure team typically sets up a dev center, attaches external catalogs to the dev center, creates projects, and provides access to development teams. Development teams create [environments](concept-environments-key-concepts.md#environments) by using [catalog items](concept-environments-key-concepts.md#catalog-items), connect to individual resources, and deploy applications.
> [!IMPORTANT]
> Azure Deployment Environments currently is in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, review the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
In this quickstart, you learn how to:
- Azure role-based access control role with permissions to create and manage resources in the subscription, such as [Contributor](../role-based-access-control/built-in-roles.md#contributor) or [Owner](../role-based-access-control/built-in-roles.md#owner).

## Create a dev center
-
To create and configure a Dev center in Azure Deployment Environments by using the Azure portal:

1. Sign in to the [Azure portal](https://portal.azure.com).
To create and configure a Dev center in Azure Deployment Environments by using the Azure portal:
|**Name**|Enter a name for the dev center.|
|**Location**|Select the location or region where you want to create the dev center.|
-1. (Optional) Select the **Tags** tab and enter a **Name**:**Value** pair.
1. Select **Review + Create**.
1. On the **Review** tab, wait for deployment validation, and then select **Create**.
To create and configure a Dev center in Azure Deployment Environments by using the Azure portal:
:::image type="content" source="media/quickstart-create-and-configure-devcenter/deployment-environments-devcenter-created.png" alt-text="Screenshot that shows the Dev centers overview, to confirm that the dev center is created.":::
-## Attach an identity to the dev center
+## Create a Key Vault
+You'll need an Azure Key Vault to store the GitHub personal access token (PAT) that is used to grant Azure access to your GitHub repository.
+If you don't have an existing key vault, use the following steps to create one:
-After you create a dev center, attach an [identity](concept-environments-key-concepts.md#identities) to the dev center. Learn about the two [types of identities](how-to-configure-managed-identity.md#add-a-managed-identity) you can attach:
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. In the Search box, enter *Key Vault*.
+1. From the results list, select **Key Vault**.
+1. On the Key Vault page, select **Create**.
+1. On the Create key vault page, provide the following information:
-- System-assigned managed identity
-- User-assigned managed identity
+ |Name |Value |
+ |-|--|
+ |**Name**|Enter a name for the key vault.|
+ |**Subscription**|Select the subscription in which you want to create the key vault.|
+ |**Resource group**|Either use an existing resource group or select **Create new** and enter a name for the resource group.|
+ |**Location**|Select the location or region where you want to create the key vault.|
+
+ Leave the other options at their defaults.
-You can use a system-assigned managed identity or a user-assigned managed identity. You don't have to use both. For more information, review [Configure a managed identity](how-to-configure-managed-identity.md).
+1. Select **Create**.
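The portal steps above can also be reproduced with the Azure CLI's `az keyvault create` command. This is a sketch only: the resource group, vault name, and region below are placeholder assumptions, and the `az` commands are commented out because they need an authenticated session.

```shell
# Placeholder names; key vault names must be globally unique and
# 3-24 characters of letters, digits, and hyphens.
RG_NAME="ade-quickstart-rg"
VAULT_NAME="contoso-ade-kv"
LOCATION="eastus"

# Commented out so the sketch runs without an Azure session:
# az group create --name "$RG_NAME" --location "$LOCATION"
# az keyvault create --name "$VAULT_NAME" \
#   --resource-group "$RG_NAME" --location "$LOCATION"

echo "${#VAULT_NAME}"
```

The `echo` simply confirms the placeholder name's length falls inside the allowed 3-24 character range.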
-> [!NOTE]
-> In Azure Deployment Environments Preview, if you add both a system-assigned identity and a user-assigned identity, only the user-assigned identity is used.
+## Create a personal access token
+Using an authentication token like a GitHub personal access token (PAT) enables you to share your repository securely.
+> [!TIP]
+> If you are attaching an Azure DevOps repository, use these steps: [Create a personal access token in Azure DevOps](how-to-configure-catalog.md#create-a-personal-access-token-in-azure-devops).
-### Attach a system-assigned managed identity
+1. In a new browser tab, sign into your [GitHub](https://github.com) account.
+1. On your profile menu, select **Settings**.
+1. On your account page, on the left menu, select **< >Developer Settings**.
+1. On the Developer settings page, select **Tokens (classic)**.
+
+ :::image type="content" source="media/quickstart-create-and-configure-devcenter/github-pat.png" alt-text="Screenshot that shows the GitHub Tokens (classic) option.":::
+
+   Fine-grained and classic tokens work with Azure Deployment Environments.
-To attach a system-assigned managed identity to your dev center:
+1. On the New personal access token (classic) page:
+ - In the **Note** box, add a note describing the token's intended use.
+ - In **Select scopes**, select **repo**.
-1. Complete the steps to create a [system-assigned managed identity](how-to-configure-managed-identity.md#add-a-system-assigned-managed-identity-to-a-dev-center).
+ :::image type="content" source="media/quickstart-create-and-configure-devcenter/generate-git-hub-token.png" alt-text="Screenshot that shows the GitHub Tokens (classic) configuration page.":::
+
+1. Select **Generate token**.
+1. On the Personal access tokens (classic) page, copy the new token.
+
+ :::image type="content" source="media/quickstart-create-and-configure-devcenter/copy-new-token.png" alt-text="Screenshot that shows the new GitHub token with the copy button highlighted.":::
+
+ > [!WARNING]
+ > You must copy the token now. You will not be able to access it again.
+
+1. Switch back to the **Key Vault - Microsoft Azure** browser tab.
+1. In the Key Vault, on the left menu, select **Secrets**.
+1. On the Secrets page, select **Generate/Import**.
+
+ :::image type="content" source="media/quickstart-create-and-configure-devcenter/import-secret.png" alt-text="Screenshot that shows the key vault Secrets page with the generate/import button highlighted.":::
+
+1. On the Create a secret page:
+ - In the **Name** box, enter a descriptive name for your secret.
+ - In the **Secret value** box, paste the GitHub secret you copied in step 7.
+
+ :::image type="content" source="media/quickstart-create-and-configure-devcenter/create-secret-in-key-vault.png" alt-text="Screenshot that shows the Create a secret page with the Name and Secret value text boxes highlighted.":::
+
+ - Select **Create**.
+1. Leave this tab open; you'll need to come back to the Key Vault later.
+
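The secret created above can also be stored from the command line with `az keyvault secret set`. In this hedged sketch the vault and secret names are placeholders, and the token is read from an environment variable (assumed to be named `GITHUB_PAT`) rather than pasted inline:

```shell
# Placeholder vault and secret names matching the steps above.
VAULT_NAME="contoso-ade-kv"
SECRET_NAME="github-pat"

# Store the PAT as a key vault secret (requires `az login`); reading
# the value from an environment variable keeps it out of the script:
# az keyvault secret set --vault-name "$VAULT_NAME" \
#   --name "$SECRET_NAME" --value "$GITHUB_PAT"

echo "$SECRET_NAME"
```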
+## Attach an identity to the dev center
- :::image type="content" source="media/quickstart-create-and-configure-devcenter/system-assigned-managed-identity.png" alt-text="Screenshot that shows a system-assigned managed identity.":::
+After you create a dev center, attach an [identity](concept-environments-key-concepts.md#identities) to the dev center. You can attach either a system-assigned managed identity or a user-assigned managed identity. Learn about the two [types of identities](how-to-configure-managed-identity.md#add-a-managed-identity).
-1. After you create a system-assigned managed identity, assign the Owner role to give the [identity access](how-to-configure-managed-identity.md#assign-a-subscription-role-assignment-to-the-managed-identity) on the subscriptions that will be used to configure [project environment types](concept-environments-key-concepts.md#project-environment-types).
+In this quickstart, you'll configure a system-assigned managed identity for your dev center.
- Make sure that the identity has [access to the key vault secret](how-to-configure-managed-identity.md#grant-the-managed-identity-access-to-the-key-vault-secret) that contains the personal access token to access your repository.
+## Attach a system-assigned managed identity
-### Attach an existing user-assigned managed identity
+To attach a system-assigned managed identity to your dev center:
-To attach a user-assigned managed identity to your dev center:
+1. In Dev centers, select your dev center.
+1. In the left menu under Settings, select **Identity**.
+1. Under **System assigned**, set **Status** to **On**, and then select **Save**.
-1. Complete the steps to attach a [user-assigned managed identity](how-to-configure-managed-identity.md#add-a-user-assigned-managed-identity-to-a-dev-center).
+ :::image type="content" source="media/quickstart-create-and-configure-devcenter/system-assigned-managed-identity-on.png" alt-text="Screenshot that shows a system-assigned managed identity.":::
- :::image type="content" source="media/quickstart-create-and-configure-devcenter/user-assigned-managed-identity.png" alt-text="Screenshot that shows a user-assigned managed identity.":::
+1. In the **Enable system assigned managed identity** dialog, select **Yes**.
-1. After you attach the identity, assign the Owner role to give the [identity access](how-to-configure-managed-identity.md#assign-a-subscription-role-assignment-to-the-managed-identity) on the subscriptions that will be used to configure [project environment types](how-to-configure-project-environment-types.md). Give the identity Reader access to all subscriptions that a project lives in.
+### Assign the system-assigned managed identity access to the key vault secret
+Make sure that the identity has access to the key vault secret that contains the personal access token to access your repository.
- Make sure that the identity has [access to the key vault secret](how-to-configure-managed-identity.md#grant-the-managed-identity-access-to-the-key-vault-secret) that contains the personal access token to access the repository.
+Configure a key vault access policy:
+1. In the Azure portal, go to the key vault that contains the secret with the personal access token.
+2. In the left menu, select **Access policies**, and then select **Create**.
+3. In Create an access policy, enter or select the following information:
+ - On the Permissions tab, under **Secret permissions**, select **Select all**, and then select **Next**.
+ - On the Principal tab, select the identity that's attached to the dev center, and then select **Next**.
+ - Select **Review + create**, and then select **Create**.
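The access policy steps above have a CLI equivalent in `az keyvault set-policy`. The vault name and principal ID below are placeholders (copy the real object ID from the dev center's Identity page), and the command is commented out because it requires an authenticated session. Note it grants only Get and List on secrets, which is narrower than the portal's **Select all** shortcut:

```shell
# Placeholder values: copy the identity's object (principal) ID from
# the dev center's Identity page in the portal.
VAULT_NAME="contoso-ade-kv"
PRINCIPAL_ID="11111111-1111-1111-1111-111111111111"

# Grant the identity Get and List permissions on secrets only:
# az keyvault set-policy --name "$VAULT_NAME" \
#   --object-id "$PRINCIPAL_ID" --secret-permissions get list

echo "$PRINCIPAL_ID"
```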
-> [!NOTE]
-> The [identity](concept-environments-key-concepts.md#identities) that's attached to the dev center should be assigned the Owner role for access to the deployment subscription for each environment type.
## Add a catalog to the dev center
+Azure Deployment Environments Preview supports attaching Azure DevOps repositories and GitHub repositories. You can store a set of curated IaC templates in a repository. Attaching the repository to a dev center as a catalog gives your development teams access to the templates and enables them to quickly create consistent environments.
-> [!NOTE]
-> Before you add a [catalog](concept-environments-key-concepts.md#catalogs), store the personal access token as a [key vault secret](../key-vault/secrets/quick-create-portal.md) in Azure Key Vault and copy the secret identifier. Ensure that the [identity](concept-environments-key-concepts.md#identities) that's attached to the dev center has [GET access to the secret](../key-vault/general/assign-access-policy.md).
+In this quickstart, you'll attach a GitHub repository that contains samples created and maintained by the Azure Deployment Environments team.
-To add a catalog to your dev center:
+To add a catalog to your dev center, you'll first need to gather some information.
-1. In the Azure portal, go to Azure Deployment Environments.
-1. In **Dev centers**, select your dev center.
+### Gather GitHub repo information
+To add a catalog, you must specify the GitHub repo URL, the branch, and the folder that contains your catalog items. You can gather this information before you begin the process of adding the catalog to the dev center, and paste it somewhere accessible, like notepad.
+
+> [!TIP]
+> If you are attaching an Azure DevOps repository, use these steps: [Get the clone URL of an Azure DevOps repository](how-to-configure-catalog.md#get-the-clone-url-of-an-azure-devops-repository).
+
+1. On your [GitHub](https://github.com) account page, select **<> Code**, and then select copy.
+1. Take a note of the branch that you're working in.
+1. Take a note of the folder that contains your catalog items.
+
+ :::image type="content" source="media/quickstart-create-and-configure-devcenter/github-info.png" alt-text="Screenshot that shows the GitHub repo with Code, branch, and folder highlighted.":::
+
+### Gather the secret identifier
+You'll also need the path to the secret you created in the key vault.
+
+1. In the Azure portal, navigate to your key vault.
+1. On the key vault page, from the left menu, select **Secrets**.
+1. On the Secrets page, select the secret you created earlier.
+
+ :::image type="content" source="media/quickstart-create-and-configure-devcenter/key-vault-secrets-page.png" alt-text="Screenshot that shows the list of secrets in the key vault with one highlighted.":::
+
+1. On the versions page, select the **CURRENT VERSION**.
+
+   :::image type="content" source="media/quickstart-create-and-configure-devcenter/key-vault-versions-page.png" alt-text="Screenshot that shows the current version of the selected secret.":::
+
+1. On the current version page, for the **Secret identifier**, select copy.
+
+   :::image type="content" source="media/quickstart-create-and-configure-devcenter/key-vault-current-version-page.png" alt-text="Screenshot that shows the details of the current version of the selected secret, with the secret identifier copy button highlighted.":::
+
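The secret identifier you just copied is a URL with a predictable shape. The sketch below assembles one from placeholder parts (the vault name, secret name, and version hash are all illustrative assumptions, not real values):

```shell
# Placeholder values illustrating the parts of a secret identifier.
VAULT_NAME="contoso-ade-kv"
SECRET_NAME="github-pat"
VERSION="0123456789abcdef0123456789abcdef"

# Key vault secret identifiers follow this URL pattern:
SECRET_ID="https://${VAULT_NAME}.vault.azure.net/secrets/${SECRET_NAME}/${VERSION}"

echo "$SECRET_ID"
```

Knowing the pattern helps you spot a truncated or mis-pasted identifier before the catalog sync fails.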
+### Add a catalog to your dev center
+1. Navigate to your dev center.
1. In the left menu under **Environment configuration**, select **Catalogs**, and then select **Add**.

   :::image type="content" source="media/quickstart-create-and-configure-devcenter/catalogs-page.png" alt-text="Screenshot that shows the Catalogs pane.":::

1. In the **Add catalog** pane, enter the following information, and then select **Add**.
- |Name |Value |
- ||-|
- |**Name**|Enter a name for your catalog.|
- |**Git clone URI**|Enter the URI to your GitHub or Azure DevOps repository.|
- |**Branch**|Enter the repository branch that you want to connect.|
- |**Folder path**|Enter the repository relative path where the [catalog item](concept-environments-key-concepts.md#catalog-items) exists.|
- |**Secret identifier**|Enter the secret identifier that contains your personal access token for the repository.|
+ | Name | Value |
+ | -- | -- |
+ | **Name** | Enter a name for the catalog. |
+ | **Git clone URI** | Enter or paste the clone URL for either your GitHub repository or your Azure DevOps repository.<br/>*Sample Catalog Example:* https://github.com/Azure/deployment-environments.git |
+ | **Branch** | Enter the repository branch to connect to.<br/>*Sample Catalog Example:* main|
+ | **Folder path** | Enter the folder path relative to the clone URI that contains subfolders with your catalog items. </br>This folder path should be the path to the folder that contains the subfolders with the catalog item manifests, and not the path to the folder with the catalog item manifest itself.<br/>*Sample Catalog Example:* /Environments|
+ | **Secret identifier**| Enter the secret identifier that contains your personal access token for the repository.|
- :::image type="content" source="media/how-to-configure-catalog/add-new-catalog-form.png" alt-text="Screenshot that shows how to add a catalog to a dev center.":::
+ :::image type="content" source="media/how-to-configure-catalog/add-catalog-form-inline.png" alt-text="Screenshot that shows how to add a catalog to a dev center." lightbox="media/how-to-configure-catalog/add-catalog-form-expanded.png":::
1. Confirm that the catalog is successfully added by checking your Azure portal notifications.
-1. Select the specific repository, and then select **Sync**.
-
- :::image type="content" source="media/quickstart-create-and-configure-devcenter/sync-catalog.png" alt-text="Screenshot that shows how to sync the catalog." :::
-
## Create an environment type

Use an environment type to help you define the different types of environments your development teams can deploy. You can apply different settings for each environment type.
deployment-environments Quickstart Create And Configure Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/quickstart-create-and-configure-projects.md
This quickstart shows you how to create a project in Azure Deployment Environments.
An enterprise development infrastructure team typically creates projects and provides project access to development teams. Development teams then create [environments](concept-environments-key-concepts.md#environments) by using [catalog items](concept-environments-key-concepts.md#catalog-items), connect to individual resources, and deploy applications.
-In this quickstart, you learn how to:
-
-> [!div class="checklist"]
->
-> - Create a project
-> - Configure a project
-> - Provide project access to the development team
-
> [!IMPORTANT]
> Azure Deployment Environments is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
To create a project in your dev center:
|**Name**|Enter a name for the project. |
|**Description** (Optional) |Enter any project-related details. |
-1. Select the **Tags** tab and enter a **Name**:**Value** pair.
-
1. On the **Review + Create** tab, wait for deployment validation, and then select **Create**.

   :::image type="content" source="media/quickstart-create-configure-projects/create-project-page-review-create.png" alt-text="Screenshot that shows selecting the Review + Create button to validate and create a project.":::
To create a project in your dev center:
:::image type="content" source="media/quickstart-create-configure-projects/created-project.png" alt-text="Screenshot that shows the project overview pane.":::
+### Assign the managed identity the Owner role on the subscription
+Before you can create environment types, you must give the managed identity that represents your dev center access to the subscriptions where you'll configure the [project environment types](concept-environments-key-concepts.md#project-environment-types).
+
+In this quickstart, you'll assign the Owner role to the system-assigned managed identity that you configured previously: [Attach a system-assigned managed identity](quickstart-create-and-configure-devcenter.md#attach-a-system-assigned-managed-identity).
+
+1. Navigate to your dev center.
+1. On the left menu under Settings, select **Identity**.
+1. Under System assigned > Permissions, select **Azure role assignments**.
+
+ :::image type="content" source="media/quickstart-create-configure-projects/system-assigned-managed-identity.png" alt-text="Screenshot that shows a system-assigned managed identity with Role assignments highlighted.":::
+
+1. In Azure role assignments, select **Add role assignment (Preview)**, and then enter or select the following information:
+ - In **Scope**, select **Subscription**.
+ - In **Subscription**, select the subscription in which to use the managed identity.
+ - In **Role**, select **Owner**.
+ - Select **Save**.
## Configure a project

To configure a project, add a [project environment type](how-to-configure-project-environment-types.md):
dev-box Quickstart Configure Dev Box Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/quickstart-configure-dev-box-service.md
To create a network connection, you must have:
- An existing virtual network (vnet) and subnet. If you don't have a vnet and subnet available, follow the instructions here: [Quickstart: Create a virtual network using the Azure portal](../virtual-network/quick-create-portal.md) to create them.
- A configured and working Hybrid AD join or Azure AD join.
- - **Hybrid AD join:** To learn how to join your AD DS domain-joined computers to Azure AD from an on-premises Active Directory Domain Services (AD DS) environment, see [Plan your hybrid Azure Active Directory join deployment](/azure/active-directory/devices/hybrid-azuread-join-plan).
- - **Azure AD join:** To learn how to join devices directly to Azure Active Directory (Azure AD), see [Plan your Azure Active Directory join deployment](/azure/active-directory/devices/azureadjoin-plan).
+ - **Hybrid AD join:** To learn how to join your AD DS domain-joined computers to Azure AD from an on-premises Active Directory Domain Services (AD DS) environment, see [Plan your hybrid Azure Active Directory join deployment](../active-directory/devices/hybrid-azuread-join-plan.md).
+ - **Azure AD join:** To learn how to join devices directly to Azure Active Directory (Azure AD), see [Plan your Azure Active Directory join deployment](../active-directory/devices/azureadjoin-plan.md).
- If your organization routes egress traffic through a firewall, you need to open certain ports to allow the Dev Box service to function. For more information, see [Network requirements](/windows-365/enterprise/requirements-network).

Follow these steps to create a network connection:
In this quickstart, you created a dev box project and the resources necessary to
To learn about how to manage dev box projects, advance to the next quickstart:

> [!div class="nextstepaction"]
-> [Configure a dev box project](./quickstart-configure-dev-box-project.md)
-
+> [Configure a dev box project](./quickstart-configure-dev-box-project.md)
devtest-labs Test App Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/test-app-azure.md
Azure DevTest Labs creates an Azure storage account when you create a lab. To cr
1. In the [Azure portal](https://portal.azure.com), go to the resource group that contains your lab.
1. Follow these steps to [select the storage account linked to your lab](./encrypt-storage.md#view-storage-account-contents).
-1. Follow these steps to [create a file share](/azure/storage/files/storage-how-to-create-file-share#create-a-file-share).
+1. Follow these steps to [create a file share](../storage/files/storage-how-to-create-file-share.md#create-a-file-share).
## Publish your app from Visual Studio
When the publish operation finishes, the application files are available on the
To access the application files in the Azure file share, you need to first mount the share to your lab VM.
-Follow these steps to [mount the Azure file share to your lab VM](/azure/storage/files/storage-how-to-use-files-windows#mount-the-azure-file-share).
+Follow these steps to [mount the Azure file share to your lab VM](../storage/files/storage-how-to-use-files-windows.md#mount-the-azure-file-share).
## Access the app on your lab VM
You can now run and test your app on your lab VM.
You've published an application directly from Visual Studio on your developer workstation into your lab VM.

- Learn how you can [integrate the lab creation and application deployment into your CI/CD pipeline](./use-devtest-labs-build-release-pipelines.md).
-- Learn more about [deploying an application to a folder with Visual Studio](/visualstudio/deployment/deploying-applications-services-and-components-resources#folder).
+- Learn more about [deploying an application to a folder with Visual Studio](/visualstudio/deployment/deploying-applications-services-and-components-resources#folder).
digital-twins How To Create App Registration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-create-app-registration.md
description: Create an Azure Active Directory app registration that can access Azure Digital Twins resources. Previously updated : 5/25/2022 Last updated : 01/11/2023
Use these steps to create the role assignment for your registration.
| Setting | Value |
| --- | --- |
| Role | Select as appropriate |
- | Assign access to | User, group, or service principal |
- | Members | Search for the name or [client ID](#collect-client-id-and-tenant-id) of the app registration |
+ | Members > Assign access to | User, group, or service principal |
+ | Members > Members | **+ Select members**, then search for the name or [client ID](#collect-client-id-and-tenant-id) of the app registration |
- :::image type="content" source="../../includes/role-based-access-control/media/add-role-assignment-page.png" alt-text="Screenshot of the 'Add role assignment' page." lightbox="../../includes/role-based-access-control/media/add-role-assignment-page.png":::
+ :::image type="content" source="../../includes/role-based-access-control/media/add-role-assignment-page.png" alt-text="Screenshot of the Roles tab in the Add role assignment page." lightbox="../../includes/role-based-access-control/media/add-role-assignment-page.png":::
+
+ :::image type="content" source="media/how-to-create-app-registration/add-role.png" alt-text="Screenshot of the Members tab in the Add role assignment page." lightbox="media/how-to-create-app-registration/add-role.png":::
+
+   After you select the role, select **Review + assign** to finish the assignment.
#### Verify role assignment
dms Dms Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/dms-overview.md
Previously updated : 09/28/2021 Last updated : 01/05/2023 # What is Azure Database Migration Service?
For up-to-date info about the regional availability of Azure Database Migration
For up-to-date info about Azure Database Migration Service pricing, see [Azure Database Migration Service pricing](https://azure.microsoft.com/pricing/details/database-migration/).

++
## Next steps

* [Status of migration scenarios supported by Azure Database Migration Service](./resource-scenario-status.md)
dms Known Issues Azure Sql Migration Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-azure-sql-migration-azure-data-studio.md
Previously updated : 10/19/2022 Last updated : 01/05/2023 # Known issues, limitations, and troubleshooting
-Known issues and limitations associated with the Azure SQL Migration extension for Azure Data Studio.
+Known issues and troubleshooting steps associated with the Azure SQL Migration extension for Azure Data Studio.
> [!NOTE]
> When checking migration details using the Azure portal, Azure Data Studio, or PowerShell / Azure CLI, you might see the following error: *Operation Id {your operation id} was not found*. This can occur either because the operationId you provided as an API parameter in your GET call doesn't exist, or because the migration details of your migration were deleted as part of a cleanup operation.
-### Error code: 2007 - CutoverFailedOrCancelled
+## Error code: 2007 - CutoverFailedOrCancelled
+ - **Message**: `Cutover failed or cancelled for database <DatabaseName>. Error details: The restore plan is broken because firstLsn <First LSN> of log backup <URL of backup in Azure Storage container>' is not <= lastLsn <last LSN> of Full backup <URL of backup in Azure Storage container>'. Restore to point in time.`
- **Cause**: The error might occur because the backups were placed incorrectly in the Azure Storage container. If the backups are placed in a network file share, this error could also occur due to network connectivity issues.
- **Recommendation**: Ensure the database backups in your Azure Storage container are correct. If you're using a network file share, there might be network-related issues and lags that cause this error. Wait for the process to complete.
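  The restore-plan message for this error is about the LSN chain of the backups. As a rough diagnostic sketch (the database name is a placeholder), the backup history recorded in `msdb` on the source server can be inspected to confirm that each log backup follows on from the full backup:

  ```sql
  -- Inspect the backup chain recorded in msdb on the source server.
  -- Each log backup's first_lsn must be <= the last_lsn of the full backup,
  -- and each log backup must pick up where the previous one left off.
  SELECT database_name,
         type,              -- D = full, I = differential, L = log
         first_lsn,
         last_lsn,
         backup_start_date,
         backup_finish_date
  FROM msdb.dbo.backupset
  WHERE database_name = N'<DatabaseName>'
  ORDER BY backup_finish_date;
  ```

  If the chain shown here is intact but the error persists, the mismatch is likely in which backup files were copied to the storage container.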
-### Error code: 2009 - MigrationRestoreFailed
+## Error code: 2009 - MigrationRestoreFailed
+ - **Message**: `Migration for Database 'DatabaseName' failed with error cannot find server certificate with thumbprint.`
- **Cause**: The source SQL Server instance certificate from a database protected by Transparent Data Encryption (TDE) hasn't been migrated to the target Azure SQL Managed Instance or SQL Server on Azure Virtual Machine before migrating data.
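  One way to address this, sketched below with placeholder names, paths, and password, is to export the TDE certificate and its private key from the source server so it can be imported on the target before the migration is retried:

  ```sql
  USE master;
  GO
  -- Export the TDE certificate and its private key from the source server.
  -- 'MyTDECert', the file paths, and the password are placeholder examples.
  BACKUP CERTIFICATE MyTDECert
      TO FILE = 'C:\certs\MyTDECert.cer'
      WITH PRIVATE KEY (
          FILE = 'C:\certs\MyTDECert.pvk',
          ENCRYPTION BY PASSWORD = '<StrongPassword>'
      );
  ```

  On a SQL Server on Azure Virtual Machine target, the certificate can then be restored in `master` with `CREATE CERTIFICATE ... FROM FILE` before restoring the TDE-protected backups.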
Known issues and limitations associated with the Azure SQL Migration extension f
> [!NOTE]
> For more information on general troubleshooting steps for Azure SQL Managed Instance errors, see [Known issues with Azure SQL Managed Instance](/azure/azure-sql/managed-instance/doc-changes-updates-known-issues)
-### Error code: 2012 - TestConnectionFailed
+## Error code: 2012 - TestConnectionFailed
+ - **Message**: `Failed to test connections using provided Integration Runtime. Error details: 'Remote name could not be resolved.'`
- **Cause**: The Self-Hosted Integration Runtime can't connect to the service back end. This issue is caused by network settings in the firewall.
Known issues and limitations associated with the Azure SQL Migration extension f
- **Recommendation**: See [Troubleshoot Self-Hosted Integration Runtime](../data-factory/self-hosted-integration-runtime-troubleshoot-guide.md) for general troubleshooting steps for Integration Runtime connectivity errors.
-### Error code: 2014 - IntegrationRuntimeIsNotOnline
+## Error code: 2014 - IntegrationRuntimeIsNotOnline
+ - **Message**: `Integration Runtime <IR Name> in resource group <Resource Group Name> Subscription <SubscriptionID> isn't online.`
- **Cause**: The Self-Hosted Integration Runtime isn't online.
Known issues and limitations associated with the Azure SQL Migration extension f
- **Recommendation**: Make sure the Self-hosted Integration Runtime is registered and online. To perform the registration, you can use scripts from [Automating self-hosted integration runtime installation using local PowerShell scripts](../data-factory/self-hosted-integration-runtime-automation-scripts.md). Also, see [Troubleshoot self-hosted integration runtime](../data-factory/self-hosted-integration-runtime-troubleshoot-guide.md) for general troubleshooting steps for Integration Runtime connectivity errors.
-### Error code: 2030 - AzureSQLManagedInstanceNotReady
+## Error code: 2030 - AzureSQLManagedInstanceNotReady
+ - **Message**: `Azure SQL Managed Instance <Instance Name> isn't ready.`
- **Cause**: The Azure SQL Managed Instance isn't in a ready state.
Known issues and limitations associated with the Azure SQL Migration extension f
- **Recommendation**: Wait until the Azure SQL Managed Instance has finished deploying and is ready, then retry the process.
-### Error code: 2033 - SqlDataCopyFailed
+## Error code: 2033 - SqlDataCopyFailed
+ - **Message**: `Migration for Database <Database> failed in state <state>.`
- **Cause**: The ADF pipeline for data movement failed.
Known issues and limitations associated with the Azure SQL Migration extension f
- **Recommendation**: Check the MigrationStatusDetails page for more detailed error information.
-### Error code: 2038 - MigrationCompletedDuringCancel
+## Error code: 2038 - MigrationCompletedDuringCancel
+ - **Message**: `Migration cannot be canceled as Migration was completed during the cancel process. Target server: <Target server> Target database: <Target database>.`
- **Cause**: A cancellation request was received, but the migration was completed successfully before the cancellation was completed.
Known issues and limitations associated with the Azure SQL Migration extension f
- **Recommendation**: No action required; the migration succeeded.
-### Error code: 2039 - MigrationRetryNotAllowed
+## Error code: 2039 - MigrationRetryNotAllowed
+ - **Message**: `Migration isn't in a retriable state. Migration must be in state WaitForRetry. Current state: <State>, Target server: <Target Server>, Target database: <Target database>.`
- **Cause**: A retry request was received when the migration wasn't in a state allowing retrying.
Known issues and limitations associated with the Azure SQL Migration extension f
- **Recommendation**: No action required; the migration is ongoing or completed.
-### Error code: 2040 - MigrationTimeoutWaitingForRetry
+## Error code: 2040 - MigrationTimeoutWaitingForRetry
+ - **Message**: `Migration retry timeout limit of 8 hours reached. Target server: <Target Server>, Target database: <Target Database>.`
- **Cause**: The migration was idle in a failed but retriable state for 8 hours and was automatically canceled.
Known issues and limitations associated with the Azure SQL Migration extension f
- **Recommendation**: No action is required; the migration was canceled.
-### Error code: 2041 - DataCopyCompletedDuringCancel
+## Error code: 2041 - DataCopyCompletedDuringCancel
+ - **Message**: `Data copy finished successfully before canceling completed. Target schema is in bad state. Target server: <Target Server>, Target database: <Target Database>.`
- **Cause**: Cancel request was received, and the data copy was completed successfully, but the target database schema hasn't been returned to its original state.
WHERE STEP in (5,7,8) ORDER BY STEP DESC;
```
-### Error code: 2042 - PreCopyStepsCompletedDuringCancel
+## Error code: 2042 - PreCopyStepsCompletedDuringCancel
+ - **Message**: `Pre Copy steps finished successfully before canceling completed. Target database Foreign keys and temporal tables have been altered. Schema migration may be required again for future migrations. Target server: <Target Server>, Target database: <Target Database>.`
- **Cause**: Cancel request was received and the steps to prepare the target database for copy were completed successfully. The target database schema hasn't been returned to its original state.
WHERE STEP in (3,4,6);
```
-### Error code: 2043 - CreateContainerFailed
+## Error code: 2043 - CreateContainerFailed
+ - **Message**: `Create container <ContainerName> failed with error Error calling the endpoint '<URL>'. Response status code: 'NA - Unknown'. More details: Exception message: 'NA - Unknown [ClientSideException] Invalid Url:<URL>.`
- **Cause**: The request failed due to an underlying issue such as network connectivity, a DNS failure, a server certificate validation, or a timeout.
- **Recommendation**: For more troubleshooting steps, see [Troubleshoot Azure Data Factory and Synapse pipelines](../data-factory/data-factory-troubleshoot-guide.md#error-code-2108).
+## Azure SQL Database limitations
+
+Migrating to Azure SQL Database by using the Azure SQL extension for Azure Data Studio has the following limitations:
++
+## Azure SQL Managed Instance limitations
+
+Migrating to Azure SQL Managed Instance by using the Azure SQL extension for Azure Data Studio has the following limitations:
++
+## SQL Server on Azure VMs limitations
+
+Migrating to SQL Server on Azure VMs by using the Azure SQL extension for Azure Data Studio has the following limitations:
-## Database Migration Service issues
-Migrations that were completed before early December 2022 may be missing migration details. This action doesn't have a negative effect on new or ongoing migrations.
-
-## Azure SQL Database Migration limitations
-
-The Azure SQL Database offline migration (Preview) utilizes Azure Data Factory (ADF) pipelines for data movement and thus abides by ADF limitations. A corresponding ADF is created when a database migration service is also created. Thus factory limits apply per service.
- The machine where the SHIR is installed acts as the compute for migration. Make sure this machine can handle the CPU and memory load of the data copy. To learn more, review [SHIR recommendations](/azure/data-factory/create-self-hosted-integration-runtime).
-- 100,000 table per database limit.
-- 10,000 concurrent database migrations per service.
-- Migration speed heavily depends on the target Azure SQL Database SKU and the self-hosted Integration Runtime host.
-- Azure SQL Database migration scales poorly with table numbers due to ADF overhead in starting activities. If a database has thousands of tables, there will be a couple of seconds of startup time for each, even if they're composed of one row with 1 bit of data.
-- Azure SQL Database table names with double-byte characters currently aren't supported for migration. Mitigation is to rename tables before migration; they can be changed back to their original names after successful migration.
-- Tables with large blob columns may fail to migrate due to timeout.
-- Database names with SQL Server reserved words are currently not supported.
-- Database names that include semicolons are currently not supported.
-- Computed columns don't get migrated.
-
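The rename mitigation mentioned in the double-byte table-name limitation above can be done with `sp_rename`; the table names below are hypothetical examples:

```sql
-- Temporarily rename a table whose name contains double-byte characters
-- before migration; the names below are hypothetical examples.
EXEC sp_rename N'dbo.顧客データ', N'CustomerData';

-- After the migration succeeds, restore the original name on the target:
EXEC sp_rename N'dbo.CustomerData', N'顧客データ';
```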
-## Azure SQL Managed Instance known issues and limitations
-
-- If migrating a single database, the database backups must be placed in a flat-file structure inside a database folder (including the container root folder), and the folders can't be nested, as it's not supported.
-- If migrating multiple databases using the same Azure Blob Storage container, you must place backup files for different databases in separate folders inside the container.
-- Overwriting existing databases using DMS in your target Azure SQL Managed Instance isn't supported.
-- Configuring high availability and disaster recovery on your target to match source topology isn't supported by DMS.
-- The following server objects aren't supported:
- - Logins
- - SQL Server Agent jobs
- - Credentials
- - SSIS packages
- - Server roles
- - Server audit
-- You can't use an existing self-hosted integration runtime created from Azure Data Factory for database migrations with DMS. Initially, the self-hosted integration runtime should be created using the Azure SQL migration extension in Azure Data Studio and can be reused for further database migrations.-
-## SQL Server on Azure Virtual Machine known issues and limitations
-
-- If migrating a single database, the database backups must be placed in a flat-file structure inside a database folder (including the container root folder), and the folders can't be nested, as it's not supported.
-- If migrating multiple databases using the same Azure Blob Storage container, you must place backup files for different databases in separate folders inside the container.
-- Overwriting existing databases using DMS in your target SQL Server on Azure Virtual Machine isn't supported.
-- Configuring high availability and disaster recovery on your target to match source topology isn't supported by DMS.
-- The following server objects aren't supported:
- - Logins
- - SQL Server Agent jobs
- - Credentials
- - SSIS packages
- - Server roles
- - Server audit
-
-- You can't use an existing self-hosted integration runtime created from Azure Data Factory for database migrations with DMS. Initially, the self-hosted integration runtime should be created using the Azure SQL migration extension in Azure Data Studio and can be reused for further database migrations.
-- VM with SQL Server 2008 and below as target versions aren't supported when migrating to SQL Server on Azure Virtual Machines.
-- If you're using VM with SQL Server 2012 or SQL Server 2014, you need to store your source database backup files on an Azure Storage Blob Container instead of using the network share option. Store the backup files as page blobs since block blobs are only supported in SQL 2016 and after.
-- You must make sure the [SQL IaaS Agent Extension](/azure/azure-sql/virtual-machines/windows/sql-server-iaas-agent-extension-automate-management) in the target Azure Virtual Machine is in **Full** mode instead of **Lightweight** mode.
-- The [SQL IaaS Agent Extension](/azure/azure-sql/virtual-machines/windows/sql-server-iaas-agent-extension-automate-management) only supports management of the **Default Server Instance** or a **Single Named Instance**.
-- There's a temporary limit of 80 databases per target Azure Virtual Machine. A workaround to break the limit (reset the counter) is to **Uninstall** and **Reinstall** the [SQL IaaS Agent Extension](/azure/azure-sql/virtual-machines/windows/sql-server-iaas-agent-extension-automate-management) in the target Azure Virtual Machine.
-- Apart from configuring the Networking/Firewall of your Storage Account to allow your VM to access backup files, you also need to configure the Networking/Firewall of your VM to allow outbound connection to your storage account.

## Next steps
dms Tutorial Sql Server Azure Sql Database Offline Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-azure-sql-database-offline-ads.md
Previously updated : 09/28/2022 Last updated : 01/12/2023 # Tutorial: Migrate SQL Server to Azure SQL Database (preview) offline in Azure Data Studio
You've completed the migration to Azure SQL Database. We encourage you to go t
> [!IMPORTANT]
> Be sure to take advantage of the advanced cloud-based features of Azure SQL Database. The features include [built-in high availability](/azure/azure-sql/database/high-availability-sla), [threat detection](/azure/azure-sql/database/azure-defender-for-sql), and [monitoring and tuning your workload](/azure/azure-sql/database/monitor-tune-overview).
+## Limitations
++
## Next steps

- Complete a quickstart to [create an Azure SQL Database instance](/azure/azure-sql/database/single-database-create-quickstart).
- Learn more about [Azure SQL Database](/azure/azure-sql/database/sql-database-paas-overview).
- Learn how to [connect apps to Azure SQL Database](/azure/azure-sql/database/connect-query-content-reference-guide).
+- To troubleshoot, review [Known issues](known-issues-azure-sql-migration-azure-data-studio.md).
dms Tutorial Sql Server Managed Instance Offline Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-managed-instance-offline-ads.md
Previously updated : 10/05/2021 Last updated : 01/12/2023 # Tutorial: Migrate SQL Server to Azure SQL Managed Instance offline in Azure Data Studio
After all database backups are restored on the instance of Azure SQL Managed Ins
> [!IMPORTANT]
> After the migration, the availability of SQL Managed Instance with Business Critical service tier might take significantly longer than the General Purpose tier because three secondary replicas have to be seeded for an Always On High Availability group. The duration of this operation depends on the size of the data. For more information, see [Management operations duration](/azure/azure-sql/managed-instance/management-operations-overview#duration).
+## Limitations
+
+Migrating to Azure SQL Managed Instance by using the Azure SQL extension for Azure Data Studio has the following limitations:
+++
## Next steps

- Complete a quickstart to [migrate a database to SQL Managed Instance by using the T-SQL RESTORE command](/azure/azure-sql/managed-instance/restore-sample-database-quickstart).
- Learn more about [SQL Managed Instance](/azure/azure-sql/managed-instance/sql-managed-instance-paas-overview).
- Learn how to [connect apps to SQL Managed Instance](/azure/azure-sql/managed-instance/connect-application-instance).
+- To troubleshoot, review [Known issues](known-issues-azure-sql-migration-azure-data-studio.md).
dms Tutorial Sql Server Managed Instance Online Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-managed-instance-online-ads.md
Previously updated : 10/05/2021 Last updated : 01/12/2023 # Tutorial: Migrate SQL Server to an Azure SQL Managed Instance online by using Azure Data Studio with DMS
During the cutover process, the migration status changes from *in progress* to *
> [!IMPORTANT]
> After the cutover, the availability of SQL Managed Instance with the Business Critical service tier can take significantly longer than General Purpose because three secondary replicas have to be seeded for the Always On High Availability group. The duration of this operation depends on the size of the data. For more information, see [Management operations duration](/azure/azure-sql/managed-instance/management-operations-overview#duration).
+## Limitations
+
+Migrating to Azure SQL Managed Instance by using the Azure SQL extension for Azure Data Studio has the following limitations:
++
## Next steps

* For a tutorial showing you how to migrate a database to SQL Managed Instance using the T-SQL RESTORE command, see [Restore a backup to SQL Managed Instance using the restore command](/azure/azure-sql/managed-instance/restore-sample-database-quickstart).
* For information about SQL Managed Instance, see [What is SQL Managed Instance](/azure/azure-sql/managed-instance/sql-managed-instance-paas-overview).
* For information about connecting apps to SQL Managed Instance, see [Connect applications](/azure/azure-sql/managed-instance/connect-application-instance).
+* To troubleshoot, review [Known issues](known-issues-azure-sql-migration-azure-data-studio.md).
dms Tutorial Sql Server Managed Instance Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-managed-instance-online.md
Previously updated : 08/20/2021 Last updated : 01/12/2023 # Tutorial: Migrate SQL Server to an Azure SQL Managed Instance online using DMS
+> [!NOTE]
+> This tutorial uses an older version of the Azure Database Migration Service. For improved functionality and supportability, consider migrating to Azure SQL Managed Instance by using the [Azure SQL migration extension for Azure Data Studio](tutorial-sql-server-managed-instance-online-ads.md).
+
You can use Azure Database Migration Service to migrate the databases from a SQL Server instance to an [Azure SQL Managed Instance](/azure/azure-sql/managed-instance/sql-managed-instance-paas-overview) with minimal downtime. For additional methods that may require some manual effort, see the article [SQL Server instance migration to Azure SQL Managed Instance](/azure/azure-sql/migration-guides/managed-instance/sql-server-to-managed-instance-guide).

In this tutorial, you migrate the [AdventureWorks2016](/sql/samples/adventureworks-install-configure#download-backup-files) database from an on-premises instance of SQL Server to a SQL Managed Instance with minimal downtime by using Azure Database Migration Service.
dms Tutorial Sql Server To Azure Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-to-azure-sql.md
Previously updated : 01/03/2021 Last updated : 01/12/2023 # Tutorial: Migrate SQL Server to Azure SQL Database using DMS
+> [!NOTE]
+> This tutorial uses an older version of the Azure Database Migration Service. For improved functionality and supportability, consider migrating to Azure SQL Database by using the [Azure SQL migration extension for Azure Data Studio](tutorial-sql-server-azure-sql-database-offline-ads.md).
++
You can use Azure Database Migration Service to migrate the databases from a SQL Server instance to [Azure SQL Database](/azure/sql-database/). In this tutorial, you migrate the [AdventureWorks2016](/sql/samples/adventureworks-install-configure#download-backup-files) database restored to an on-premises instance of SQL Server 2016 (or later) to a single database or pooled database in Azure SQL Database by using Azure Database Migration Service.

You will learn how to:
dms Tutorial Sql Server To Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-to-managed-instance.md
Previously updated : 08/16/2021 Last updated : 01/12/2023 # Tutorial: Migrate SQL Server to an Azure SQL Managed Instance offline using DMS
+> [!NOTE]
+> This tutorial uses an older version of the Azure Database Migration Service. For improved functionality and supportability, consider migrating to Azure SQL Managed Instance by using the [Azure SQL migration extension for Azure Data Studio](tutorial-sql-server-managed-instance-offline-ads.md).
++
You can use Azure Database Migration Service to migrate the databases from a SQL Server instance to an [Azure SQL Managed Instance](/azure/azure-sql/managed-instance/sql-managed-instance-paas-overview). For additional methods that may require some manual effort, see the article [SQL Server to Azure SQL Managed Instance](/azure/azure-sql/migration-guides/managed-instance/sql-server-to-managed-instance-guide).

In this tutorial, you migrate the [AdventureWorks2016](/sql/samples/adventureworks-install-configure#download-backup-files) database from an on-premises instance of SQL Server to a SQL Managed Instance by using Azure Database Migration Service.
dms Tutorial Sql Server To Virtual Machine Offline Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-to-virtual-machine-offline-ads.md
Previously updated : 10/05/2021 Last updated : 01/12/2023 # Tutorial: Migrate SQL Server to SQL Server on Azure Virtual Machines offline in Azure Data Studio
In **Step 7: Summary** in the Migrate to Azure SQL wizard, review the configurat
After all database backups are restored on the instance of SQL Server on Azure Virtual Machines, an automatic migration cutover is initiated by Database Migration Service to ensure that the migrated database is ready to use. The migration status changes from **In progress** to **Succeeded**.
+## Limitations
+
+Migrating to SQL Server on Azure VMs by using the Azure SQL extension for Azure Data Studio has the following limitations:
## Next steps

- Complete a quickstart to [migrate a database to SQL Server on Azure Virtual Machines by using the T-SQL RESTORE command](/azure/azure-sql/migration-guides/virtual-machines/sql-server-to-sql-on-azure-vm-individual-databases-guide).
- Learn more about [SQL Server on Azure Windows Virtual Machines](/azure/azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview).
- Learn how to [connect apps to SQL Server on Azure Virtual Machines](/azure/azure-sql/virtual-machines/windows/ways-to-connect-to-sql).
- To troubleshoot, review [Known issues](known-issues-azure-sql-migration-azure-data-studio.md).
dms Tutorial Sql Server To Virtual Machine Online Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-to-virtual-machine-online-ads.md
Previously updated : 10/05/2021 Last updated : 01/12/2023

# Tutorial: Migrate SQL Server to SQL Server on Azure Virtual Machine online using Azure Data Studio with DMS
To complete the cutover:
During the cutover process, the migration status changes from *in progress* to *completing*. The migration status changes to *succeeded* when the cutover process is completed. At that point, the database migration is successful and the migrated database is ready for use.
+## Limitations
+
+Migrating to SQL Server on Azure VMs by using the Azure SQL extension for Azure Data Studio has the following limitations:
## Next steps

* To migrate a database to SQL Server on Azure Virtual Machines by using the T-SQL RESTORE command, see [Migrate a SQL Server database to SQL Server on a virtual machine](/azure/azure-sql/virtual-machines/windows/migrate-to-vm-from-sql-server).
* For information about SQL Server on Azure Virtual Machines, see [Overview of SQL Server on Azure Windows Virtual Machines](/azure/azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview).
* For information about connecting apps to SQL Server on Azure Virtual Machines, see [Connect applications](/azure/azure-sql/virtual-machines/windows/ways-to-connect-to-sql).
* To troubleshoot, review [Known issues](known-issues-azure-sql-migration-azure-data-studio.md).
energy-data-services How To Convert Segy To Zgy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-convert-segy-to-zgy.md
In this article, you will learn how to convert SEG-Y formatted data to the ZGY f
empty: none ```
-8. Run the following commands using **sdutil** to see its working fine. Follow the directions in [Setup and Usage for Azure env](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-sdutil/-/tree/azure/stable#setup-and-usage-for-azure-env). Understand that depending on your OS and Python version, you may have to run `python3` command as opposed to `python`. If you run into errors with these commands, refer to the [SDUTIL tutorial](/azure/energy-data-services/tutorial-seismic-ddms-sdutil). See [How to generate a refresh token](how-to-generate-refresh-token.md). Once you've generated the token, store it in a place where you'll be able to access it in the future.
+8. Run the following commands using **sdutil** to verify that it's working correctly. Follow the directions in [Setup and Usage for Azure env](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-sdutil/-/tree/azure/stable#setup-and-usage-for-azure-env). Depending on your OS and Python version, you may have to run the `python3` command instead of `python`. If you run into errors with these commands, refer to the [SDUTIL tutorial](./tutorial-seismic-ddms-sdutil.md). See [How to generate a refresh token](how-to-generate-refresh-token.md). Once you've generated the token, store it in a place where you'll be able to access it in the future.
> [!NOTE]
> When running `python sdutil config init`, you don't need to enter anything when prompted with `Insert the azure (azureGlabEnv) application key:`.
OSDU&trade; is a trademark of The Open Group.
## Next steps
<!-- Add a context sentence for the following links -->
> [!div class="nextstepaction"]
-> [How to convert segy to ovds](./how-to-convert-segy-to-ovds.md)
+> [How to convert segy to ovds](./how-to-convert-segy-to-ovds.md)
energy-data-services Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/release-notes.md
Microsoft Energy Data Services is updated on an ongoing basis. To stay up to dat
- Deprecated functionality
- Plans for changes
- <hr width=100%>
+## December 2022
+
+### Lockbox
+
+Most operations, support, and troubleshooting performed by Microsoft personnel do not require access to customer data. In those rare circumstances where such access is required, Customer Lockbox for Microsoft Energy Data Services provides an interface for you to review and approve or reject data access requests. Microsoft Energy Data Services now supports Lockbox. [Learn more](../security/fundamentals/customer-lockbox-overview.md).
+++
+<hr width=100%>
## October 20, 2022
event-hubs Event Hubs Dotnet Standard Getstarted Send https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-dotnet-standard-getstarted-send.md
Title: Send or receive events from Azure Event Hubs using .NET (latest)
-description: This article provides a walkthrough to create a .NET Core application that sends/receives events to/from Azure Event Hubs by using the latest Azure.Messaging.EventHubs package.
+ Title: 'Quickstart: Send or receive events using .NET'
+description: A quickstart to create a .NET Core application that sends/receives events to/from Azure Event Hubs by using the Azure.Messaging.EventHubs package.
Last updated 02/28/2022 ms.devlang: csharp
-# Send events to and receive events from Azure Event Hubs - .NET (Azure.Messaging.EventHubs)
-This quickstart shows how to send events to and receive events from an event hub using the **Azure.Messaging.EventHubs** .NET library.
+# Quickstart: Send events to and receive events from Azure Event Hubs - .NET (Azure.Messaging.EventHubs)
+In this quickstart, you will learn how to send events to and receive events from an event hub using the **Azure.Messaging.EventHubs** .NET library.
> [!NOTE]
> You can find all .NET samples for Event Hubs in our [.NET SDK repository on GitHub](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/eventhub/).
Task ProcessErrorHandler(ProcessErrorEventArgs eventArgs)
:::image type="content" source="./media/getstarted-dotnet-standard-send-v2/verify-messages-portal-2.png" alt-text="Image of the Azure portal page to verify that the event hub sent events to the receiving app" lightbox="./media/getstarted-dotnet-standard-send-v2/verify-messages-portal-2.png":::
+## Clean up resources
+Delete the resource group that has the Event Hubs namespace or delete only the namespace if you want to keep the resource group.
-## Next steps
-This quick start provides step-by-step instructions to implement a simple scenario of sending a batch of events to an event hub and then receiving them. For more samples on other and advanced scenarios, check out the following samples on GitHub.
+## Samples
+This quickstart provides step-by-step instructions to implement a simple scenario of sending a batch of events to an event hub and then receiving them. For more samples, select the following links.
- [Event Hubs samples on GitHub](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/eventhub/Azure.Messaging.EventHubs/samples)
- [Event processor samples on GitHub](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/eventhub/Azure.Messaging.EventHubs.Processor/samples)
- [Azure role-based access control (Azure RBAC) sample](https://github.com/Azure/azure-event-hubs/tree/master/samples/DotNet/Azure.Messaging.EventHubs/ManagedIdentityWebApp)
+## Next steps
+See the following tutorial:
+
+> [!div class="nextstepaction"]
+> [Tutorial: Visualize data anomalies in real-time events sent to Azure Event Hubs](event-hubs-tutorial-visualize-anomalies.md)
event-hubs Event Hubs Python Get Started Send https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-python-get-started-send.md
Title: Send or receive events from Azure Event Hubs using Python (latest)
-description: This article provides a walkthrough for creating a Python application that sends/receives events to/from Azure Event Hubs using the latest azure-eventhub package.
+ Title: Send or receive events from Azure Event Hubs using Python
+description: This article provides a walkthrough for creating a Python application that sends/receives events to/from Azure Event Hubs.
Previously updated : 10/10/2022 Last updated : 01/08/2023 ms.devlang: python-+
-# Send events to or receive events from event hubs by using Python (azure-eventhub)
+# Send events to or receive events from event hubs by using Python
This quickstart shows how to send events to and receive events from an event hub using the **azure-eventhub** Python package.

## Prerequisites
If you're new to Azure Event Hubs, see [Event Hubs overview](event-hubs-about.md
To complete this quickstart, you need the following prerequisites:
- - **Microsoft Azure subscription**. To use Azure services, including Azure Event Hubs, you need a subscription. If you don't have an existing Azure account, you can sign up for a [free trial](https://azure.microsoft.com/free/) or use your MSDN subscriber benefits when you [create an account](https://azure.microsoft.com).
- - Python 2.7 or 3.6 or later, with PIP installed and updated.
- - The Python package for Event Hubs.
+- **Microsoft Azure subscription**. To use Azure services, including Azure Event Hubs, you need a subscription. If you don't have an existing Azure account, sign up for a [free trial](https://azure.microsoft.com/free/).
+- Python 3.7 or later, with pip installed and updated.
+- Visual Studio Code (recommended) or any other integrated development environment (IDE).
+- **Create an Event Hubs namespace and an event hub**. The first step is to use the [Azure portal](https://portal.azure.com) to create an Event Hubs namespace, and obtain the management credentials that your application needs to communicate with the event hub. To create a namespace and an event hub, follow the procedure in [this article](event-hubs-create.md).
- To install the package, run this command in a command prompt that has Python in its path:
+### Install the packages to send events
- ```cmd
- pip install azure-eventhub
- ```
+To install the Python packages for Event Hubs, open a command prompt that has Python in its path. Change the directory to the folder where you want to keep your samples.
- Install the following package for receiving the events using Azure Blob storage as the checkpoint store:
+## [Passwordless (Recommended)](#tab/passwordless)
+
+```shell
+pip install azure-eventhub
+pip install azure-identity
+pip install aiohttp
+```
+
+## [Connection String](#tab/connection-string)
+
+```shell
+pip install azure-eventhub
+```
+++
+### Authenticate the app to Azure
+
- ```cmd
- pip install azure-eventhub-checkpointstoreblob-aio
- ```
- - **Create an Event Hubs namespace and an event hub**. The first step is to use the [Azure portal](https://portal.azure.com) to create an Event Hubs namespace, and obtain the management credentials that your application needs to communicate with the event hub. To create a namespace and an event hub, follow the procedure in [this article](event-hubs-create.md). Then, get the **connection string for the Event Hubs namespace** by following instructions from the article: [Get connection string](event-hubs-get-connection-string.md#azure-portal). You'll use the connection string later in this quickstart.

## Send events

In this section, create a Python script to send events to the event hub that you created earlier.

1. Open your favorite Python editor, such as [Visual Studio Code](https://code.visualstudio.com/).
-2. Create a script called *send.py*. This script sends a batch of events to the event hub that you created earlier.
-3. Paste the following code into *send.py*:
+1. Create a script called *send.py*. This script sends a batch of events to the event hub that you created earlier.
+1. Paste the following code into *send.py*:
+
+ ## [Passwordless (Recommended)](#tab/passwordless)
+
+ In the code, use real values to replace the following placeholders:
+
+ * `EVENT_HUB_FULLY_QUALIFIED_NAMESPACE`
+ * `EVENT_HUB_NAME`
    ```python
    import asyncio
- from azure.eventhub.aio import EventHubProducerClient
+
from azure.eventhub import EventData
+ from azure.eventhub.aio import EventHubProducerClient
+ from azure.identity import DefaultAzureCredential
+
+ EVENT_HUB_FULLY_QUALIFIED_NAMESPACE = "EVENT_HUB_FULLY_QUALIFIED_NAMESPACE"
+ EVENT_HUB_NAME = "EVENT_HUB_NAME"
+
+ credential = DefaultAzureCredential()
+
+ async def run():
+ # Create a producer client to send messages to the event hub.
+ # Specify a credential that has correct role assigned to access
+ # event hubs namespace and the event hub name.
+ producer = EventHubProducerClient(
+ fully_qualified_namespace=EVENT_HUB_FULLY_QUALIFIED_NAMESPACE,
+ eventhub_name=EVENT_HUB_NAME,
+ credential=credential,
+ )
+ async with producer:
+ # Create a batch.
+ event_data_batch = await producer.create_batch()
+
+ # Add events to the batch.
+ event_data_batch.add(EventData("First event "))
+ event_data_batch.add(EventData("Second event"))
+ event_data_batch.add(EventData("Third event"))
+
+ # Send the batch of events to the event hub.
+ await producer.send_batch(event_data_batch)
+
+ # Close credential when no longer needed.
+ await credential.close()
+
+ asyncio.run(run())
+ ```
+ ## [Connection String](#tab/connection-string)
+
+ In the code, use real values to replace the following placeholders:
+
+ * `EVENT_HUB_CONNECTION_STR`
+ * `EVENT_HUB_NAME`
+
+ ```python
+ import asyncio
+
+ from azure.eventhub import EventData
+ from azure.eventhub.aio import EventHubProducerClient
+
+ EVENT_HUB_CONNECTION_STR = "EVENT_HUB_CONNECTION_STR"
+ EVENT_HUB_NAME = "EVENT_HUB_NAME"
+
    async def run():
        # Create a producer client to send messages to the event hub.
        # Specify a connection string to your event hubs namespace and
        # the event hub name.
- producer = EventHubProducerClient.from_connection_string(conn_str="EVENT HUBS NAMESPACE - CONNECTION STRING", eventhub_name="EVENT HUB NAME")
+ producer = EventHubProducerClient.from_connection_string(
+ conn_str=EVENT_HUB_CONNECTION_STR, eventhub_name=EVENT_HUB_NAME
+ )
        async with producer:
            # Create a batch.
            event_data_batch = await producer.create_batch()
+
# Add events to the batch.
- event_data_batch.add(EventData('First event '))
- event_data_batch.add(EventData('Second event'))
- event_data_batch.add(EventData('Third event'))
-
+ event_data_batch.add(EventData("First event "))
+ event_data_batch.add(EventData("Second event"))
+ event_data_batch.add(EventData("Third event"))
+
            # Send the batch of events to the event hub.
            await producer.send_batch(event_data_batch)
- loop = asyncio.get_event_loop()
- loop.run_until_complete(run())
-
+
+ asyncio.run(run())
    ```
+
> [!NOTE]
- > For the complete source code, including informational comments, go to the [GitHub send_async.py page](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/eventhub/azure-eventhub/samples/async_samples/send_async.py).
+ > For examples of other options for sending events to Event Hub asynchronously using a connection string, see the [GitHub send_async.py page](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/eventhub/azure-eventhub/samples/async_samples/send_async.py). The patterns shown there are also applicable to sending events passwordless.
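The batch-send pattern above (create a batch, add events until they fit, send, repeat) can be sketched without the Azure SDK. The `EventBatch` and `BatchFullError` names below are hypothetical stand-ins, only illustrating the idea that a batch enforces a byte budget and rejects events that don't fit; they are not the azure-eventhub API:

```python
class BatchFullError(Exception):
    """Raised when an event does not fit in the current batch."""

class EventBatch:
    # Simplified stand-in for the SDK's EventDataBatch: events are
    # accepted until the byte budget is exhausted.
    def __init__(self, max_bytes=1024):
        self.max_bytes = max_bytes
        self.events = []
        self.size = 0

    def add(self, body: str):
        encoded = body.encode()
        if self.size + len(encoded) > self.max_bytes:
            raise BatchFullError("event does not fit in this batch")
        self.events.append(body)
        self.size += len(encoded)

def send_batches(events, max_bytes=1024):
    # Group events into size-limited batches, like repeated
    # create_batch/send_batch calls in the quickstart script.
    batches = [EventBatch(max_bytes)]
    for body in events:
        try:
            batches[-1].add(body)
        except BatchFullError:
            batches.append(EventBatch(max_bytes))
            batches[-1].add(body)
    return batches

batches = send_batches(["x" * 40 for _ in range(10)], max_bytes=100)
print(len(batches))  # → 5, since each 100-byte batch holds two 40-byte events
```

The real SDK behaves analogously at a high level: when a batch can't take another event, you send it and start a new one.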
## Receive events

This quickstart uses Azure Blob storage as a checkpoint store. The checkpoint store is used to persist checkpoints (that is, the last read positions).
This quickstart uses Azure Blob storage as a checkpoint store. The checkpoint st
> For example, if you are running on Azure Stack Hub version 2005, the highest available version for the Storage service is version 2019-02-02. By default, the Event Hubs SDK client library uses the highest available version on Azure (2019-07-07 at the time of the release of the SDK). In this case, besides following steps in this section, you will also need to add code to target the Storage service API version 2019-02-02. For an example on how to target a specific Storage API version, see the [synchronous](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/eventhub/azure-eventhub-checkpointstoreblob/samples/receive_events_using_checkpoint_store_storage_api_version.py) and [asynchronous](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/eventhub/azure-eventhub-checkpointstoreblob-aio/samples/receive_events_using_checkpoint_store_storage_api_version_async.py) samples on GitHub.

### Create an Azure storage account and a blob container

Create an Azure storage account and a blob container in it by doing the following steps:

1. [Create an Azure Storage account](../storage/common/storage-account-create.md?tabs=azure-portal)
-2. [Create a blob container](../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container)
-3. [Get the connection string to the storage account](../storage/common/storage-configure-connection-string.md)
+2. [Create a blob container](../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container).
+3. Authenticate to the blob container.
Be sure to record the connection string and container name for later use in the receive code.
+## [Passwordless (Recommended)](#tab/passwordless)
++
+## [Connection String](#tab/connection-string)
+
+[Get the connection string to the storage account](../storage/common/storage-configure-connection-string.md)
+++
+### Install the packages to receive events
+
+For the receiving side, you need to install one or more packages. In this quickstart, you use Azure Blob storage to persist checkpoints so that the program doesn't read the events that it has already read. It performs metadata checkpoints on received messages at regular intervals in a blob. This approach makes it easy to continue receiving messages later from where you left off.
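The checkpointing idea described above can be sketched with plain Python: a checkpoint store records, per partition, the position of the last event a reader processed, so a restarted reader resumes just after it. This in-memory `InMemoryCheckpointStore` is a hypothetical stand-in, only illustrating what `BlobCheckpointStore` persists in blobs:

```python
class InMemoryCheckpointStore:
    # Maps partition ID -> offset of the last event a consumer processed.
    def __init__(self):
        self._checkpoints = {}

    def update_checkpoint(self, partition_id, offset):
        # Record that everything up to and including `offset` was handled.
        self._checkpoints[partition_id] = offset

    def starting_position(self, partition_id):
        # Resume one past the last checkpoint; start at 0 if none exists.
        return self._checkpoints.get(partition_id, -1) + 1

store = InMemoryCheckpointStore()
events = ["e0", "e1", "e2", "e3", "e4"]

# First run: process three events on partition "0", checkpointing each.
for offset in range(3):
    store.update_checkpoint("0", offset)

# A restarted reader picks up where the first run left off.
resume = store.starting_position("0")
print(events[resume:])  # → ['e3', 'e4']
```

This is why the receive script calls `update_checkpoint` inside its event handler: without it, every run would start from the beginning of the partition again.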
+
+## [Passwordless (Recommended)](#tab/passwordless)
+
+```shell
+pip install azure-eventhub-checkpointstoreblob-aio
+pip install azure-identity
+```
+
+## [Connection String](#tab/connection-string)
+
+```shell
+pip install azure-eventhub-checkpointstoreblob-aio
+```
### Create a Python script to receive events

In this section, you create a Python script to receive events from your event hub:

1. Open your favorite Python editor, such as [Visual Studio Code](https://code.visualstudio.com/).
-2. Create a script called *recv.py*.
-3. Paste the following code into *recv.py*:
+1. Create a script called *recv.py*.
+1. Paste the following code into *recv.py*:
+
+ ## [Passwordless (Recommended)](#tab/passwordless)
+
+ In the code, use real values to replace the following placeholders:
+
+ * `BLOB_STORAGE_ACCOUNT_URL`
+ * `BLOB_CONTAINER_NAME`
+ * `EVENT_HUB_FULLY_QUALIFIED_NAMESPACE`
+ * `EVENT_HUB_NAME`
    ```python
    import asyncio
+
from azure.eventhub.aio import EventHubConsumerClient
- from azure.eventhub.extensions.checkpointstoreblobaio import BlobCheckpointStore
+ from azure.eventhub.extensions.checkpointstoreblobaio import (
+ BlobCheckpointStore,
+ )
+ from azure.identity.aio import DefaultAzureCredential
+
+ BLOB_STORAGE_ACCOUNT_URL = "BLOB_STORAGE_ACCOUNT_URL"
+ BLOB_CONTAINER_NAME = "BLOB_CONTAINER_NAME"
+ EVENT_HUB_FULLY_QUALIFIED_NAMESPACE = "EVENT_HUB_FULLY_QUALIFIED_NAMESPACE"
+ EVENT_HUB_NAME = "EVENT_HUB_NAME"
+
+ credential = DefaultAzureCredential()
+
+ async def on_event(partition_context, event):
+ # Print the event data.
+ print(
+ 'Received the event: "{}" from the partition with ID: "{}"'.format(
+ event.body_as_str(encoding="UTF-8"), partition_context.partition_id
+ )
+ )
+
+ # Update the checkpoint so that the program doesn't read the events
+ # that it has already read when you run it next time.
+ await partition_context.update_checkpoint(event)
+
+
+ async def main():
+ # Create an Azure blob checkpoint store to store the checkpoints.
+ checkpoint_store = BlobCheckpointStore(
+ blob_account_url=BLOB_STORAGE_ACCOUNT_URL,
+ container_name=BLOB_CONTAINER_NAME,
+ credential=credential,
+ )
+
+ # Create a consumer client for the event hub.
+ client = EventHubConsumerClient(
+ fully_qualified_namespace=EVENT_HUB_FULLY_QUALIFIED_NAMESPACE,
+ eventhub_name=EVENT_HUB_NAME,
+ consumer_group="$Default",
+ checkpoint_store=checkpoint_store,
+ credential=credential,
+ )
+ async with client:
+ # Call the receive method. Read from the beginning of the partition
+ # (starting_position: "-1")
+ await client.receive(on_event=on_event, starting_position="-1")
+
+ # Close credential when no longer needed.
+ await credential.close()
+
+ if __name__ == "__main__":
+ # Run the main method.
+ asyncio.run(main())
+ ```
+ ## [Connection String](#tab/connection-string)
+ In the code, use real values to replace the following placeholders:
+
+ * `BLOB_STORAGE_CONNECTION_STRING`
+ * `BLOB_CONTAINER_NAME`
+ * `EVENT_HUB_CONNECTION_STR`
+ * `EVENT_HUB_NAME`
+
+ ```python
+ import asyncio
+
+ from azure.eventhub.aio import EventHubConsumerClient
+ from azure.eventhub.extensions.checkpointstoreblobaio import (
+ BlobCheckpointStore,
+ )
+
+ BLOB_STORAGE_CONNECTION_STRING = "BLOB_STORAGE_CONNECTION_STRING"
+ BLOB_CONTAINER_NAME = "BLOB_CONTAINER_NAME"
+ EVENT_HUB_CONNECTION_STR = "EVENT_HUB_CONNECTION_STR"
+ EVENT_HUB_NAME = "EVENT_HUB_NAME"
+
+
    async def on_event(partition_context, event):
        # Print the event data.
- print("Received the event: \"{}\" from the partition with ID: \"{}\"".format(event.body_as_str(encoding='UTF-8'), partition_context.partition_id))
-
+ print(
+ 'Received the event: "{}" from the partition with ID: "{}"'.format(
+ event.body_as_str(encoding="UTF-8"), partition_context.partition_id
+ )
+ )
+
        # Update the checkpoint so that the program doesn't read the events
        # that it has already read when you run it next time.
        await partition_context.update_checkpoint(event)
+
    async def main():
        # Create an Azure blob checkpoint store to store the checkpoints.
- checkpoint_store = BlobCheckpointStore.from_connection_string("AZURE STORAGE CONNECTION STRING", "BLOB CONTAINER NAME")
-
+ checkpoint_store = BlobCheckpointStore.from_connection_string(
+ BLOB_STORAGE_CONNECTION_STRING, BLOB_CONTAINER_NAME
+ )
+
# Create a consumer client for the event hub.
- client = EventHubConsumerClient.from_connection_string("EVENT HUBS NAMESPACE CONNECTION STRING", consumer_group="$Default", eventhub_name="EVENT HUB NAME", checkpoint_store=checkpoint_store)
+ client = EventHubConsumerClient.from_connection_string(
+ EVENT_HUB_CONNECTION_STR,
+ consumer_group="$Default",
+ eventhub_name=EVENT_HUB_NAME,
+ checkpoint_store=checkpoint_store,
+ )
async with client:
- # Call the receive method. Read from the beginning of the partition (starting_position: "-1")
- await client.receive(on_event=on_event, starting_position="-1")
-
- if __name__ == '__main__':
+ # Call the receive method. Read from the beginning of the
+ # partition (starting_position: "-1")
+ await client.receive(on_event=on_event, starting_position="-1")
+
+ if __name__ == "__main__":
loop = asyncio.get_event_loop() # Run the main method.
- loop.run_until_complete(main())
+ loop.run_until_complete(main())
```
+
+ > [!NOTE]
- > For the complete source code, including additional informational comments, go to the [GitHub recv_with_checkpoint_store_async.py
-page](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/eventhub/azure-eventhub/samples/async_samples/recv_with_checkpoint_store_async.py).
+ > For examples of other options for receiving events from Event Hub asynchronously using a connection string, see the [GitHub recv_with_checkpoint_store_async.py
+page](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/eventhub/azure-eventhub/samples/async_samples/recv_with_checkpoint_store_async.py). The patterns shown there are also applicable to receiving events passwordless.
### Run the receiver app
To run the script, open a command prompt that has Python in its path, and then r
python send.py
```
-The receiver window should display the messages that were sent to the event hub.
+The receiver window should display the messages that were sent to the event hub.
+### Troubleshooting
+
+If you don't see events in the receiver window or the code reports an error, try the following troubleshooting tips:
+
+* If you don't see results from *recv.py*, run *send.py* several times.
+
+* If you see errors about "coroutine" when using the passwordless code (with credentials), make sure you're importing from `azure.identity.aio`.
+
+* If you see "Unclosed client session" with passwordless code (with credentials), make sure you close the credential when finished. For more information, see [Async credentials](/python/api/overview/azure/identity-readme?view=azure-python&preserve-view=true#async-credentials).
+
+* If you see authorization errors with *recv.py* when accessing storage, make sure you followed the steps in [Create an Azure storage account and a blob container](#create-an-azure-storage-account-and-a-blob-container) and assigned the **Storage Blob Data Contributor** role to the service principal.
+
+* If you receive events with different partition IDs, this result is expected. Partitions are a data organization mechanism that relates to the downstream parallelism required in consuming applications. The number of partitions in an event hub directly relates to the number of concurrent readers you expect to have. For more information, see [Learn more about partitions](/azure/event-hubs/event-hubs-features#partitions).
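As a rough illustration of why events from one logical stream stay together: senders can hash a partition key so that the same key always lands on the same partition. This is a generic stable-hash sketch, not Event Hubs' actual assignment algorithm, and `assign_partition` is a hypothetical helper:

```python
import hashlib

def assign_partition(partition_key: str, partition_count: int) -> int:
    # Stable hash so the same key always maps to the same partition
    # (hashlib, unlike the built-in hash(), is consistent across runs).
    digest = hashlib.sha256(partition_key.encode()).digest()
    return int.from_bytes(digest[:8], "big") % partition_count

# Events sharing a key land on one partition, preserving their relative
# order; events with different keys may be spread across partitions.
keys = ["device-1", "device-2", "device-1", "device-3"]
partitions = [assign_partition(k, 4) for k in keys]
print(partitions)
```

Because the hash is deterministic, the first and third events (both keyed `device-1`) always go to the same partition, which is why a single consumer of that partition sees them in order.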
## Next steps

In this quickstart, you've sent and received events asynchronously. To learn how to send and receive events synchronously, go to the [GitHub sync_samples page](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/eventhub/azure-eventhub/samples/sync_samples). For all the samples (both synchronous and asynchronous) on GitHub, go to [Azure Event Hubs client library for Python samples](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/eventhub/azure-eventhub/samples).
expressroute Expressroute Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-faqs.md
See [ExpressRoute partners and locations](expressroute-locations.md) for informa
Yes. Microsoft 365 service endpoints are reachable through the Internet, even though ExpressRoute has been configured for your network. Check with your organization's networking team if the network at your location is configured to connect to Microsoft 365 services through ExpressRoute.

### How can I plan for high availability for Microsoft 365 network traffic on Azure ExpressRoute?
-See the recommendation for [High availability and failover with Azure ExpressRoute](/azure/expressroute/designing-for-high-availability-with-expressroute)
+See the recommendation for [High availability and failover with Azure ExpressRoute](./designing-for-high-availability-with-expressroute.md)
### Can I access Office 365 US Government Community (GCC) services over an Azure US Government ExpressRoute circuit?
You can associate a single ExpressRoute Direct circuit with multiple ExpressRout
### Does the ExpressRoute service store customer data?
-No.
+No.
expressroute Expressroute Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-prerequisites.md
If you plan to enable Microsoft 365 on ExpressRoute, review the following docume
* [Azure ExpressRoute for Microsoft 365](/microsoft-365/enterprise/azure-expressroute)
* [Routing with ExpressRoute for Microsoft 365](/microsoft-365/enterprise/azure-expressroute)
-* [High availability and failover with ExpressRoute](/azure/expressroute/designing-for-high-availability-with-expressroute)
+* [High availability and failover with ExpressRoute](./designing-for-high-availability-with-expressroute.md)
* [Microsoft 365 URLs and IP address ranges](/microsoft-365/enterprise/urls-and-ip-address-ranges) * [Network planning and performance tuning for Microsoft 365](/microsoft-365/enterprise/network-planning-and-performance) * [Network and migration planning for Microsoft 365](/microsoft-365/enterprise/network-and-migration-planning)
If you plan to enable Microsoft 365 on ExpressRoute, review the following docume
* Configure your ExpressRoute connection.
  * [Create an ExpressRoute circuit](expressroute-howto-circuit-arm.md)
  * [Configure routing](expressroute-howto-routing-arm.md)
- * [Link a VNet to an ExpressRoute circuit](expressroute-howto-linkvnet-arm.md)
+ * [Link a VNet to an ExpressRoute circuit](expressroute-howto-linkvnet-arm.md)
external-attack-surface-management Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/index.md
Microsoft Defender External Attack Surface Management contains both global data
For security purposes, Microsoft collects users' IP addresses when they log in. This data is stored for up to 30 days but may be stored longer if needed to investigate potential fraudulent or malicious use of the product.
-In the case of a region down scenario, customers should see no downtime as Defender EASM uses technologies that replicate data to a backup region.
+In the case of a region down scenario, customers should see no downtime as Defender EASM uses technologies that replicate data to a backup region. Defender EASM processes customer data. By default, customer data is replicated to the paired region.
-
-Defender EASM processes customer data. By default, customer data is replicated to the paired region.
+The Microsoft compliance framework requires that all customer data be deleted within 180 days, in accordance with [Azure subscription states](https://learn.microsoft.com/azure/cost-management-billing/manage/subscription-states) handling. This also includes customer data stored in offline locations, such as database backups.
## Next Steps
external-attack-surface-management Using And Managing Discovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/using-and-managing-discovery.md
Microsoft has preemptively configured the attack surfaces of many organizations,
When first accessing your Defender EASM instance, select **Getting Started** in the **General** section to search for your organization in the list of automated attack surfaces. Then select your organization from the list and click **Build my Attack Surface**.
-![Screenshot of pre-configured attack surface selection screen](media/Discovery_1.png)
At this point, the discovery will be running in the background. If you selected a pre-configured Attack Surface from the list of available organizations, you will be redirected to the Dashboard Overview screen where you can view insights into your organization's infrastructure in Preview Mode. Review these dashboard insights to become familiar with your Attack Surface as you wait for additional assets to be discovered and populated in your inventory. See the [Understanding dashboards](understanding-dashboards.md) article for more information on how to derive insights from these dashboards.
Custom discoveries are organized into Discovery Groups. They are independent see
1. Select the **Discovery** panel under the **Manage** section in the left-hand navigation column.
- ![Screenshot of EASM instance from overview page with manage section highlighted](media/Discovery_2.png)
+ :::image type="content" source="media/Discovery_2.png" alt-text="Screenshot of EASM instance from overview page with manage section highlighted.":::
2. This Discovery page shows your list of Discovery Groups by default. This list will be empty when you first access the platform. To run your first discovery, click **Add Discovery Group**.
- ![Screenshot of Discovery screen with 'add disco group' highlighted](media/Discovery_3.png)
+ :::image type="content" source="media/Discovery_3.png" alt-text="Screenshot of Discovery screen with 'add disco group' highlighted.":::
3. First, name your new discovery group and add a description. The **Recurring Frequency** field allows you to schedule discovery runs for this group, scanning for new assets related to the designated seeds on a continuous basis. The default recurrence selection is **Weekly**; Microsoft recommends this cadence to ensure that your organization's assets are routinely monitored and updated. For a single, one-time discovery run, select **Never**. However, we recommend that users keep the **Weekly** default cadence and instead turn off historical monitoring within their Discovery Group settings if they later decide to discontinue recurrent discovery runs. Select **Next: Seeds >**
- ![Screenshot of first page of disco group setup](media/Discovery_4.png)
+ :::image type="content" source="media/Discovery_4.png" alt-text="Screenshot of first page of disco group setup.":::
4. Next, select the seeds that you'd like to use for this Discovery Group. Seeds are known assets that belong to your organization; the Defender EASM platform scans these entities, mapping their connections to other online infrastructure to create your Attack Surface.
- ![Screenshot of seed selection page of disco group setup](media/Discovery_5.png)
+ :::image type="content" source="media/Discovery_5.png" alt-text="Screenshot of seed selection page of disco group setup.":::
The **Quick Start** option lets you search for your organization in a list of pre-populated Attack Surfaces. You can quickly create a Discovery Group based on the known assets belonging to your organization.
- ![Screenshot of pre-baked attack surface selection page, then output in seed list](media/Discovery_6.png)
+ :::image type="content" source="media/Discovery_6.png" alt-text="Screenshot of pre-baked attack surface selection page, then output in seed list.":::
- ![Screenshot of pre-baked attack surface selection page.](media/Discovery_7.png)
+ :::image type="content" source="media/Discovery_7.png" alt-text="Screenshot of pre-baked attack surface selection page.":::
Alternatively, users can manually input their seeds. Defender EASM accepts organization names, domains, IP blocks, hosts, email contacts, ASNs, and WhoIs organizations as seed values. You can also specify entities to exclude from asset discovery to ensure they aren't added to your inventory if detected. For example, exclusions are useful for organizations with subsidiaries that are likely connected to their central infrastructure but don't belong to the organization itself.
Custom discoveries are organized into Discovery Groups. They are independent see
5. Review your group information and seed list, then select **Create & Run**.
- ![Screenshot of review + create screen](media/Discovery_8.png)
+ :::image type="content" source="media/Discovery_8.png" alt-text="Screenshot of review + create screen.":::
You will then be taken back to the main Discovery page that displays your Discovery Groups. Once your discovery run is complete, you will see new assets added to your Confirmed Inventory.
Custom discoveries are organized into Discovery Groups. They are independent see
Users can manage their discovery groups from the main "Discovery" page. The default view displays a list of all your discovery groups and some key data about each one. From the list view, you can see the number of seeds, recurrence schedule, last run date and created date for each group.
-![Screenshot of discovery groups screen](media/Discovery_9.png)
Click on any discovery group to view more information, edit the group, or immediately kickstart a new discovery process.
The discovery group details page contains the run history for the group. Once ex
Run history is organized by the seed assets scanned during the discovery run. To see a list of the applicable seeds, click "Details". This opens a right-hand pane that lists all the seeds and exclusions by kind and name.
-![Screenshot of run history for disco group screen](media/Discovery_10.png)
### Viewing seeds and exclusions
The Discovery page defaults to a list view of Discovery Groups, but users can al
The seed list view displays seed values with three columns: type, source name, and discovery group. The "type" field displays the category of the seed asset; the most common seeds are domains, hosts and IP blocks, but you can also use email contacts, ASNs, certificate common names or WhoIs organizations. The source name is simply the value that was entered in the appropriate type box when creating the discovery group. The final column shows a list of discovery groups that use the seed; each value is clickable, taking you to the details page for that discovery group.
-![Screenshot of seeds view of discovery page](media/Discovery_11.png)
### Exclusions
firewall-manager Secure Cloud Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/secure-cloud-network.md
Previously updated : 01/26/2022 Last updated : 01/12/2023
In this tutorial, you learn how to:
> The procedure in this tutorial uses Azure Firewall Manager to create a new Azure Virtual WAN secured hub.
> You can use Firewall Manager to upgrade an existing hub, but you can't configure Azure **Availability Zones** for Azure Firewall.
> It is also possible to convert an existing hub to a secured hub using the Azure portal, as described in [Configure Azure Firewall in a Virtual WAN hub](../virtual-wan/howto-firewall.md). But like Azure Firewall Manager, you can't configure **Availability Zones**.
-> To upgrade an existing hub and specify **Availability Zones** for Azure Firewall (recommended) you must follow the upgrade procedure in [Tutorial: Secure your virtual hub using Azure PowerShell](secure-cloud-network-powershell.md). secure-cloud-network-powershell).
+> To upgrade an existing hub and specify **Availability Zones** for Azure Firewall (recommended) you must follow the upgrade procedure in [Tutorial: Secure your virtual hub using Azure PowerShell](secure-cloud-network-powershell.md).
## Prerequisites
The two virtual networks will each have a workload server in them and will be pr
3. For **Subscription**, select your subscription.
4. For **Resource group**, select **Create new**, and type **fw-manager-rg** for the name and select **OK**.
5. For **Name**, type **Spoke-01**.
-6. For **Region**, select **(US) East US**.
+6. For **Region**, select **East US**.
7. Select **Next: IP Addresses**.
-8. For **Address space**, type **10.0.0.0/16**.
+8. For **Address space**, accept the default **10.0.0.0/16**.
9. Select **Add subnet**.
10. For **Subnet name**, type **Workload-01-SN**.
11. For **Subnet address range**, type **10.0.1.0/24**.
The two virtual networks will each have a workload server in them and will be pr
13. Select **Review + create**.
14. Select **Create**.
-Repeat this procedure to create another similar virtual network:
+Repeat this procedure to create another similar virtual network in the **fw-manager-rg** resource group:
Name: **Spoke-02**<br> Address space: **10.1.0.0/16**<br>
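Peered virtual networks must not have overlapping address spaces. A quick sanity check of the example prefixes used in this tutorial (a sketch using Python's standard `ipaddress` module; the prefixes are the tutorial's example values, not output from a real deployment):

```python
import ipaddress

# Example prefixes from this tutorial. The hub prefix (10.2.0.0/16)
# is the one used later when creating the secured virtual hub.
spoke_01 = ipaddress.ip_network("10.0.0.0/16")
spoke_02 = ipaddress.ip_network("10.1.0.0/16")
hub = ipaddress.ip_network("10.2.0.0/16")

# Every pair must be disjoint, or peering/routing will fail.
prefixes = [spoke_01, spoke_02, hub]
for a in prefixes:
    for b in prefixes:
        if a is not b:
            assert not a.overlaps(b), f"{a} overlaps {b}"
```

If you substitute your own address spaces, run the same check before creating the networks.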
Create your secured virtual hub using Firewall Manager.
:::image type="content" source="./media/secure-cloud-network/1-create-new-secured-virtual-hub.jpg" alt-text="Screenshot of creating a new secured virtual hub." lightbox="./media/secure-cloud-network/1-create-new-secured-virtual-hub.jpg":::
+1. Select your **Subscription**.
5. For **Resource group**, select **fw-manager-rg**.
6. For **Region**, select **East US**.
7. For the **Secured virtual hub name**, type **Hub-01**.
8. For **Hub address space**, type **10.2.0.0/16**.
+10. Select **New vWAN**.
9. For the new virtual WAN name, type **Vwan-01**.
-10. Select **New vWAN** and select **Standard** for "Type"
-11. Leave the **Include VPN gateway to enable Trusted Security Partners** check box cleared.
+1. For **Type** Select **Standard**.
+1. Leave the **Include VPN gateway to enable Trusted Security Partners** check box cleared.
:::image type="content" source="./media/secure-cloud-network/2-create-new-secured-virtual-hub.png" alt-text="Screenshot of creating a new virtual hub with properties." lightbox="./media/secure-cloud-network/2-create-new-secured-virtual-hub.png":::
Create your secured virtual hub using Firewall Manager.
:::image type="content" source="./media/secure-cloud-network/3-azure-firewall-parameters-with-zones.png" alt-text="Screenshot of configuring Azure Firewall parameters." lightbox="./media/secure-cloud-network/3-azure-firewall-parameters-with-zones.png":::
-16. Select the **Firewall Policy** to apply at the new Azure Firewall instance. Select **Default Deny Policy**, you will refine your settings later in this article.
-17. Select **Next: Trusted Security Partner**.
+16. Select the **Firewall Policy** to apply at the new Azure Firewall instance. Select **Default Deny Policy**; you'll refine your settings later in this article.
+17. Select **Next: Security Partner Provider**.
:::image type="content" source="./media/secure-cloud-network/4-trusted-security-partner.png" alt-text="Screenshot of configuring Trusted Partners parameters." lightbox="./media/secure-cloud-network/4-trusted-security-partner.png":::
You can get the firewall public IP address after the deployment completes.
1. Open **Firewall Manager**.
2. Select **Virtual hubs**.
3. Select **hub-01**.
-4. Select **Public IP configuration**.
+4. Under **Azure Firewall**, select **Public IP configuration**.
5. Note the public IP address to use later.

### Connect the hub and spoke virtual networks
Now you can peer the hub and spoke virtual networks.
7. Select **Spoke-01** for the virtual network and select **Workload-01-SN** for the subnet.
8. For **Public IP**, select **None**.
9. Accept the other defaults and select **Next: Management**.
-10. Select **Disable** to disable boot diagnostics. Accept the other defaults and select **Review + create**.
-11. Review the settings on the summary page, and then select **Create**.
+1. Select **Next: Monitoring**.
+1. Select **Disable** to disable boot diagnostics. Accept the other defaults and select **Review + create**.
+1. Review the settings on the summary page, and then select **Create**.
Use the information in the following table to configure another virtual machine named **Srv-Workload-02**. The rest of the configuration is the same as the **Srv-workload-01** virtual machine.
A firewall policy defines collections of rules to direct traffic on one or more
9. For **Destination Ports**, type **3389**.
10. For **Destination Type**, select **IP Address**.
11. For **Destination**, type the firewall public IP address that you noted previously.
- 12. For **Translated address**, type the private IP address for **Srv-Workload-01** that you noted previously.
- 13. For **Translated port**, type **3389**.
- 14. Select **Add**.
+ 1. For **Translated type**, select **IP Address**.
+ 1. For **Translated address**, type the private IP address for **Srv-Workload-01** that you noted previously.
+ 1. For **Translated port**, type **3389**.
+ 1. Select **Add**.
22. Add a **Network rule** so you can connect a remote desktop from **Srv-Workload-01** to **Srv-Workload-02**.
A firewall policy defines collections of rules to direct traffic on one or more
11. For **Destination Type**, select **IP Address**.
12. For **Destination**, type the **Srv-Workload-02** private IP address that you noted previously.
13. Select **Add**.
- 14. Select **Review + create**.
- 15. Select **Create**.
-23. In the **IDPS** page, click on **Next: Threat Intelligence**
+
+1. Select **Next: IDPS**.
+23. On the **IDPS** page, select **Next: Threat Intelligence**.
:::image type="content" source="./media/secure-cloud-network/6-create-azure-firewall-policy-idps7.png" alt-text="Screenshot of configuring IDPS settings." lightbox="./media/secure-cloud-network/6-create-azure-firewall-policy-idps7.png":::
-24. In the **Threat Intelligence** page, accept defaults and click on **Review and Create**:
+24. On the **Threat Intelligence** page, accept the defaults and select **Review and Create**:
:::image type="content" source="./media/secure-cloud-network/7a-create-azure-firewall-policy-threat-intelligence7.png" alt-text="Screenshot of configuring Threat Intelligence settings." lightbox="./media/secure-cloud-network/7a-create-azure-firewall-policy-threat-intelligence7.png":::
-25. Review and confirm your selection clicking on **Create** button.
+25. Review to confirm your selection and then select **Create**.
## Associate policy
firewall-manager Vhubs And Vnets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/vhubs-and-vnets.md
Previously updated : 09/14/2020 Last updated : 01/11/2023
firewall Tutorial Firewall Deploy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/tutorial-firewall-deploy-portal.md
Previously updated : 08/01/2022 Last updated : 01/11/2023 #Customer intent: As an administrator new to this service, I want to control outbound network access from resources located in an Azure subnet.
The resource group contains all the resources used in this procedure.
1. Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
2. On the Azure portal menu, select **Resource groups** or search for and select *Resource groups* from any page. Then select **Create**.
4. For **Subscription**, select your subscription.
-1. For **Resource group name**, type **Test-FW-RG**.
-1. For **Resource group location**, select a location. All other resources that you create must be in the same location.
+1. For **Resource group** name, type **Test-FW-RG**.
+1. For **Region**, select a region. All other resources that you create must be in the same region.
1. Select **Review + create**.
1. Select **Create**.
This VNet will have two subnets.
> [!NOTE]
> The size of the AzureFirewallSubnet subnet is /26. For more information about the subnet size, see [Azure Firewall FAQ](firewall-faq.yml#why-does-azure-firewall-need-a--26-subnet-size).
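The /26 requirement can be checked numerically; a small sketch with Python's standard `ipaddress` module, using this tutorial's example prefixes (a /26 provides 64 addresses in total, and Azure reserves 5 addresses per subnet, which is why the firewall needs at least this much room to scale):

```python
import ipaddress

# Tutorial example values: the firewall subnet must sit inside the VNet
# address space and be exactly /26 or larger.
vnet = ipaddress.ip_network("10.0.0.0/16")
firewall_subnet = ipaddress.ip_network("10.0.1.0/26")

assert firewall_subnet.subnet_of(vnet)
assert firewall_subnet.prefixlen <= 26   # /26 or larger (smaller prefix length)
assert firewall_subnet.num_addresses == 64
# Azure reserves 5 IPs per subnet, leaving 59 usable in a /26.
usable = firewall_subnet.num_addresses - 5
assert usable == 59
```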
-1. On the Azure portal menu or from the **Home** page, select **Create a resource**.
-1. Select **Networking** > **Virtual network**.
+1. On the Azure portal menu or from the **Home** page, search for **Virtual networks**.
+1. Select **Virtual networks** in the result pane.
+1. Select **Create**.
1. For **Subscription**, select your subscription.
1. For **Resource group**, select **Test-FW-RG**.
1. For **Name**, type **Test-FW-VN**.
-1. For **Region**, select the same location that you used previously.
1. Select **Next: IP addresses**.
-1. For **IPv4 Address space**, accept the default **10.0.0.0/16**.
-1. Under **Subnet name**, select **default**.
-1. For **Subnet name** change it to **AzureFirewallSubnet**. The firewall will be in this subnet, and the subnet name **must** be AzureFirewallSubnet.
-1. For **Address range**, change it to **10.0.1.0/26**.
+1. For **Address space**, accept the default **10.0.0.0/16**.
+1. Under **Subnet name**, select **default** and change it to **AzureFirewallSubnet**. The firewall will be in this subnet, and the subnet name **must** be AzureFirewallSubnet.
+1. For **Subnet address range**, change it to **10.0.1.0/26**.
1. Select **Save**.
+
Next, create a subnet for the workload server.
1. Select **Add subnet**.
-4. For **Subnet name**, type **Workload-SN**.
-5. For **Subnet address range**, type **10.0.2.0/24**.
-6. Select **Add**.
-7. Select **Review + create**.
-8. Select **Create**.
+1. For **Subnet name**, type **Workload-SN**.
+1. For **Subnet address range**, type **10.0.2.0/24**.
+1. Select **Add**.
+1. Select **Review + create**.
+1. Select **Create**.
### Create a virtual machine
Now create the workload virtual machine, and place it in the **Workload-SN** sub
8. Make sure that **Test-FW-VN** is selected for the virtual network and the subnet is **Workload-SN**.
9. For **Public IP**, select **None**.
11. Accept the other defaults and select **Next: Management**.
-12. For **Boot diagnostics**, select **Disable** to disable boot diagnostics. Accept the other defaults and select **Review + create**.
-13. Review the settings on the summary page, and then select **Create**.
-1. After the deployment is complete, select **Srv-Work** and note the private IP address that you'll need to use later.
+1. Accept the defaults and select **Next: Monitoring**.
+1. For **Boot diagnostics**, select **Disable** to disable boot diagnostics. Accept the other defaults and select **Review + create**.
+1. Review the settings on the summary page, and then select **Create**.
+1. After the deployment is complete, select **Go to resource** and note the **Srv-Work** private IP address that you'll need to use later.
[!INCLUDE [ephemeral-ip-note.md](../../includes/ephemeral-ip-note.md)]
Deploy the firewall into the VNet.
|Resource group |**Test-FW-RG** |
|Name |**Test-FW01**|
|Region |Select the same location that you used previously|
- |Firewall tier|**Standard**|
+ |Firewall SKU|**Standard**|
|Firewall management|**Use Firewall rules (classic) to manage this firewall**|
|Choose a virtual network |**Use existing**: **Test-FW-VN**|
|Public IP address |**Add new**<br>**Name**: **fw-pip**|
Deploy the firewall into the VNet.
6. Review the summary, and then select **Create** to create the firewall. This will take a few minutes to deploy.
-7. After deployment completes, go to the **Test-FW-RG** resource group, and select the **Test-FW01** firewall.
+7. After deployment completes, select **Go to resource**.
8. Note the firewall private and public IP addresses. You'll use these addresses later.

## Create a default route

When creating a route for outbound and inbound connectivity through the firewall, a default route to 0.0.0.0/0 with the virtual appliance private IP as a next hop is sufficient. This ensures that all outgoing and incoming connections go through the firewall. As an example, if the firewall is fulfilling a TCP handshake and responding to an incoming request, then the response is directed to the IP address that sent the traffic. This is by design.
-As a result, there is no need create an additional UDR to include the AzureFirewallSubnet IP range. This may result in dropped connections. The original default route is sufficient.
+As a result, there's no need to create an additional user-defined route to include the AzureFirewallSubnet IP range. Doing so may result in dropped connections. The original default route is sufficient.
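The reasoning above — a single 0.0.0.0/0 route covers every destination — can be illustrated with a small longest-prefix-match sketch (the route table and next-hop label below are illustrative placeholders, not values from a real deployment):

```python
import ipaddress

# Illustrative route table: one default route pointing at the firewall's
# private IP. Because 0.0.0.0/0 matches every destination, no other
# user-defined route is needed.
routes = {
    ipaddress.ip_network("0.0.0.0/0"): "firewall-private-ip",
}

def next_hop(destination: str) -> str:
    """Return the next hop for a destination using longest-prefix match."""
    dest = ipaddress.ip_address(destination)
    matches = [net for net in routes if dest in net]
    return routes[max(matches, key=lambda net: net.prefixlen)]

# Internet-bound and intra-VNet destinations both resolve to the firewall.
assert next_hop("8.8.8.8") == "firewall-private-ip"
assert next_hop("10.0.2.4") == "firewall-private-ip"
```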
For the **Workload-SN** subnet, configure the outbound default route to go through the firewall.
-1. On the Azure portal menu, select **Create a resource**.
-2. Under **Networking**, select **Route table**.
-5. For **Subscription**, select your subscription.
-6. For **Resource group**, select **Test-FW-RG**.
-7. For **Region**, select the same location that you used previously.
-4. For **Name**, type **Firewall-route**.
+1. In the Azure portal, search for **Route tables**.
+1. Select **Route tables** in the results pane.
+1. Select **Create**.
+1. For **Subscription**, select your subscription.
+1. For **Resource group**, select **Test-FW-RG**.
+1. For **Region**, select the same location that you used previously.
+1. For **Name**, type **Firewall-route**.
1. Select **Review + create**.
1. Select **Create**. After deployment completes, select **Go to resource**.
1. On the **Firewall-route** page, select **Subnets** and then select **Associate**.
-1. Select **Virtual network** > **Test-FW-VN**.
+1. For **Virtual network**, select **Test-FW-VN**.
1. For **Subnet**, select **Workload-SN**. Make sure that you select only the **Workload-SN** subnet for this route, otherwise your firewall won't work correctly.
13. Select **OK**.
For testing purposes, configure the server's primary and secondary DNS addresses
2. Select the network interface for the **Srv-Work** virtual machine.
3. Under **Settings**, select **DNS servers**.
4. Under **DNS servers**, select **Custom**.
-5. Type **209.244.0.3** in the **Add DNS server** text box, and **209.244.0.4** in the next text box.
+5. In the **Add DNS server** text box, type **209.244.0.3** and press Enter, then type **209.244.0.4** in the next text box.
6. Select **Save**.
7. Restart the **Srv-Work** virtual machine.
frontdoor Front Door Url Rewrite https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-url-rewrite.md
You can define the destination path to use in the rewrite. The destination path
Preserve unmatched path allows you to append the remaining path after the source pattern to the new path. For example, if you set **Preserve unmatched path** to **Yes**:
-* If the incoming request is `www.contoso.com/sub/1.jpg`, the source pattern gets set to `/`, the destination get set to `/foo/`, and the content get served from `/foo/sub/1`.jpg from the origin.
+* If the incoming request is `www.contoso.com/sub/1.jpg`, the source pattern gets set to `/`, the destination gets set to `/foo/`, and the content gets served from `/foo/sub/1.jpg` from the origin.
-* If the incoming request is `www.contoso.com/sub/image/1.jpg`, the source pattern gets set to `/sub/`, the destination get set to `/foo/`, the content get served from `/foo/image/1.jpg` from the origin.
+* If the incoming request is `www.contoso.com/sub/image/1.jpg`, the source pattern gets set to `/sub/`, the destination gets set to `/foo/`, and the content gets served from `/foo/image/1.jpg` from the origin.
For example, if you set **Preserve unmatched path** to **No**:
-* If the incoming request is `www.contoso.com/sub/image/1.jpg`, the source pattern gets set to `/sub/`, the destination get set to `/foo/2.jpg`, the content will always be served from `/foo/2.jpg` from the origin no matter what paths followed in `wwww.contoso.com/sub/`.
+* If the incoming request is `www.contoso.com/sub/image/1.jpg`, the source pattern gets set to `/sub/`, the destination gets set to `/foo/2.jpg`, and the content will always be served from `/foo/2.jpg` from the origin no matter what paths followed in `wwww.contoso.com/sub/`.
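The rewrite behavior described above can be sketched as a small function. This is an illustration of the documented rule, not Front Door's actual implementation:

```python
def rewrite(path: str, source_pattern: str, destination: str,
            preserve_unmatched_path: bool) -> str:
    """Rewrite a request path per the rule described above."""
    if not path.startswith(source_pattern):
        return path  # source pattern doesn't match; no rewrite
    if preserve_unmatched_path:
        # Append the remainder after the source pattern to the destination.
        return destination + path[len(source_pattern):]
    # Otherwise serve the destination path as-is.
    return destination

# Preserve unmatched path = Yes (the article's two examples)
assert rewrite("/sub/1.jpg", "/", "/foo/", True) == "/foo/sub/1.jpg"
assert rewrite("/sub/image/1.jpg", "/sub/", "/foo/", True) == "/foo/image/1.jpg"
# Preserve unmatched path = No
assert rewrite("/sub/image/1.jpg", "/sub/", "/foo/2.jpg", False) == "/foo/2.jpg"
```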
::: zone-end
governance Control Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/blueprints/samples/iso27001-ase-sql-workload/control-mapping.md
appropriate separation of duties.
## A.8.2.1 Classification of information
-Azure's [SQL Vulnerability Assessment service](/azure/defender-for-cloud/sql-azure-vulnerability-assessment-overview)
+Azure's [SQL Vulnerability Assessment service](../../../../defender-for-cloud/sql-azure-vulnerability-assessment-overview.md)
can help you discover sensitive data stored in your databases and includes recommendations to classify that data. This blueprint assigns an [Azure Policy](../../../policy/overview.md) definition to audit that vulnerabilities identified during SQL Vulnerability Assessment scan are remediated.
Additional articles about blueprints and how to use them:
- Understand how to use [static and dynamic parameters](../../concepts/parameters.md).
- Learn to customize the [blueprint sequencing order](../../concepts/sequencing-order.md).
- Find out how to make use of [blueprint resource locking](../../concepts/resource-locking.md).
-- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).
+- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).
governance Control Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/blueprints/samples/iso27001-shared/control-mapping.md
appropriate separation of duties.
## A.8.2.1 Classification of information Azure's
-[SQL Vulnerability Assessment service](/azure/defender-for-cloud/sql-azure-vulnerability-assessment-overview)
+[SQL Vulnerability Assessment service](../../../../defender-for-cloud/sql-azure-vulnerability-assessment-overview.md)
can help you discover sensitive data stored in your databases and includes recommendations to classify that data. This blueprint assigns an [Azure Policy](../../../policy/overview.md) definition to audit that vulnerabilities identified during SQL Vulnerability Assessment scan are remediated.
Additional articles about blueprints and how to use them:
- Understand how to use [static and dynamic parameters](../../concepts/parameters.md).
- Learn to customize the [blueprint sequencing order](../../concepts/sequencing-order.md).
- Find out how to make use of [blueprint resource locking](../../concepts/resource-locking.md).
-- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).
+- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).
governance Machine Configuration Assignments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/machine-configuration-assignments.md
Title: Understand machine configuration assignment resources description: Machine configuration creates extension resources named machine configuration assignments that map configurations to machines. Previously updated : 07/15/2022 Last updated : 01/12/2023
governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/overview.md
Title: Understand Azure Automanage Machine Configuration
+ Title: Understand Azure Automanage Machine Configuration
description: Learn how Azure Policy uses the machine configuration feature to audit or configure settings inside virtual machines. Last updated 01/03/2023
servers because it's included in the Arc Connected Machine agent.
> manage Azure virtual machines. To deploy the extension at scale across many machines, assign the policy initiative
-`Deploy prerequisites to enable machine configuration policies on virtual machines`
+`Deploy prerequisites to enable guest configuration policies on virtual machines`
to a management group, subscription, or resource group containing the machines that you plan to manage.
scope of the policy assignment are automatically included.
## Managed identity requirements
-Policy definitions in the initiative _Deploy prerequisites to enable guest
-configuration policies on virtual machines_ enable a system-assigned managed
+Policy definitions in the initiative `Deploy prerequisites to enable guest configuration policies on virtual machines` enable a system-assigned managed
identity, if one doesn't exist. There are two policy definitions in the initiative that manage identity creation. The IF conditions in the policy definitions ensure the correct behavior based on the current state of the
hdinsight Ambari Web Ui Auto Logout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/ambari-web-ui-auto-logout.md
To disable the auto logout feature,
**Next steps**
-* [Optimize clusters with Apache Ambari in Azure HDInsight](/azure/hdinsight/hdinsight-changing-configs-via-ambari)
-
+* [Optimize clusters with Apache Ambari in Azure HDInsight](./hdinsight-changing-configs-via-ambari.md)
hdinsight Hdinsight Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-release-notes.md
HDInsight uses safe deployment practices, which involve gradual region deploymen
* HDInsight 4.0: Ubuntu 18.04.5 LTS Linux Kernel 5.4
* HDInsight 5.0: Ubuntu 18.04.5 LTS Linux Kernel 5.4
-For workload specific versions, see [here.](/azure/hdinsight/hdinsight-40-component-versioning)
+For workload specific versions, see [here.](./hdinsight-40-component-versioning.md)
![Icon showing new features with text.](media/hdinsight-release-notes/new-icon-for-new-feature.png) * **Log Analytics** - Customers can enable classic monitoring to get the latest OMS version 14.19. To remove old versions, disable and enable classic monitoring.
-* **Ambari** user auto UI logout due to inactivity. For more information, see [here](/azure/hdinsight/ambari-web-ui-auto-logout)
+* **Ambari** user auto UI logout due to inactivity. For more information, see [here](./ambari-web-ui-auto-logout.md)
* **Spark** - A new and optimized version of Spark 3.1.3 is included in this release. We tested Apache Spark 3.1.2 (previous version) and Apache Spark 3.1.3 (current version) using the TPC-DS benchmark. The test was carried out using the E8 V3 SKU, for Apache Spark on a 1-TB workload. Apache Spark 3.1.3 (current version) outperformed Apache Spark 3.1.2 (previous version) by over 40% in total query runtime for TPC-DS queries using the same hardware specs. The Microsoft Spark team added optimizations available in Azure Synapse with Azure HDInsight. For more information, see [Speed up your data workloads with performance updates to Apache Spark 3.1.2 in Azure Synapse](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/speed-up-your-data-workloads-with-performance-updates-to-apache/ba-p/2769467) ![Icon showing new regions added with text.](media/hdinsight-release-notes/new-icon-for-new-regions-added.png)
For workload specific versions, see [here.](/azure/hdinsight/hdinsight-40-compon
HDInsight will implement TLS1.2 going forward, and earlier versions will be updated on the platform. If you're running any applications on top of HDInsight and they use TLS 1.0 and 1.1, upgrade to TLS 1.2 to avoid any disruption in services.
-For more information, see [How to enable Transport Layer Security (TLS)](https://learn.microsoft.com/mem/configmgr/core/plan-design/security/enable-tls-1-2-client)
+For more information, see [How to enable Transport Layer Security (TLS)](/mem/configmgr/core/plan-design/security/enable-tls-1-2-client)
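For applications you control, enforcing a TLS 1.2 floor on the client side is usually a one-line change. A minimal Python sketch (illustrative only; the equivalent setting exists in most HTTP stacks):

```python
import ssl

# Require TLS 1.2 or later for outbound connections, e.g. from an
# application calling into an HDInsight endpoint. Connections that can
# only negotiate TLS 1.0/1.1 will fail the handshake.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

assert context.minimum_version == ssl.TLSVersion.TLSv1_2
```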
![Icon showing end of support with text.](media/hdinsight-release-notes/new-icon-for-end-of-support.png)
healthcare-apis Deploy New Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-new-arm.md
When deployment is completed, the following resources and access roles are creat
- An event hub consumer group. In this deployment, the consumer group is named *$Default*.
- - An Azure Event Hubs Data Sender role. In this deployment, the sender role is named *devicedatasender* and can be used to provide access to the device event hub using a shared access signature (SAS). To learn more about authorizing access using a SAS, see [Authorizing access to Event Hubs resources using Shared Access Signatures](/azure/event-hubs/authorize-access-shared-access-signature).
+ - An Azure Event Hubs Data Sender role. In this deployment, the sender role is named *devicedatasender* and can be used to provide access to the device event hub using a shared access signature (SAS). To learn more about authorizing access using a SAS, see [Authorizing access to Event Hubs resources using Shared Access Signatures](../../event-hubs/authorize-access-shared-access-signature.md).
- A Health Data Services workspace.
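The SAS authorization mentioned above follows the standard Event Hubs token format (HMAC-SHA256 over the URL-encoded resource URI and expiry). A hedged sketch of generating such a token — the resource URI, policy name, and key below are placeholders, and in practice you would use the Azure SDK rather than hand-rolling tokens:

```python
import base64
import hashlib
import hmac
import time
import urllib.parse

def generate_sas_token(resource_uri: str, key_name: str, key: str,
                       ttl_seconds: int = 3600) -> str:
    """Build an Event Hubs SAS token for the given resource URI."""
    expiry = int(time.time()) + ttl_seconds          # Unix epoch seconds
    encoded_uri = urllib.parse.quote_plus(resource_uri)
    string_to_sign = f"{encoded_uri}\n{expiry}".encode("utf-8")
    signature = base64.b64encode(
        hmac.new(key.encode("utf-8"), string_to_sign, hashlib.sha256).digest()
    )
    return (
        f"SharedAccessSignature sr={encoded_uri}"
        f"&sig={urllib.parse.quote_plus(signature)}"
        f"&se={expiry}&skn={key_name}"
    )

# Placeholder values for illustration only.
token = generate_sas_token(
    "sb://example.servicebus.windows.net/devicedata",
    "devicedatasender",
    "not-a-real-key",
)
assert token.startswith("SharedAccessSignature sr=")
```

The resulting token goes in the `Authorization` header (or connection properties) of requests that send events to the device event hub.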
To learn about other methods for deploying the MedTech service, see
> [!div class="nextstepaction"] > [Choose a deployment method for the MedTech service](deploy-new-choose.md)
-FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
+FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Deploy New Bicep Powershell Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-new-bicep-powershell-cli.md
In this quickstart, you'll learn how to:
> - Use Azure PowerShell or the Azure CLI to deploy an instance of the MedTech service using a Bicep file.

> [!TIP]
-> To learn more about Bicep, see [What is Bicep?](/azure/azure-resource-manager/bicep/overview?tabs=bicep)
+> To learn more about Bicep, see [What is Bicep?](../../azure-resource-manager/bicep/overview.md?tabs=bicep)
## Prerequisites
To begin your deployment and complete the quickstart, you must have the following:
- The Microsoft.HealthcareApis and Microsoft.EventHub resource providers registered with your Azure subscription. To learn more about registering resource providers, see [Azure resource providers and types](../../azure-resource-manager/management/resource-providers-and-types.md).
- [Azure PowerShell](/powershell/azure/install-az-ps) and/or the [Azure CLI](/cli/azure/install-azure-cli) installed locally.
- - For Azure PowerShell, you'll also need to install [Bicep CLI](/azure/azure-resource-manager/bicep/install#windows) to deploy the Bicep file used in this quickstart.
+ - For Azure PowerShell, you'll also need to install [Bicep CLI](../../azure-resource-manager/bicep/install.md#windows) to deploy the Bicep file used in this quickstart.
When you have these prerequisites, you're ready to deploy the Bicep file.
Complete the following five steps to deploy the MedTech service using Azure PowerShell:
Connect-AzAccount ```
-2. Set your Azure subscription deployment context using your subscription ID. To learn how to get your subscription ID, see [Get subscription and tenant IDs in the Azure portal](/azure/azure-portal/get-subscription-tenant-id).
+2. Set your Azure subscription deployment context using your subscription ID. To learn how to get your subscription ID, see [Get subscription and tenant IDs in the Azure portal](../../azure-portal/get-subscription-tenant-id.md).
```azurepowershell Set-AzContext <AzureSubscriptionId>
Complete the following five steps to deploy the MedTech service using the Azure CLI:
az login ```
-2. Set your Azure subscription deployment context using your subscription ID. To learn how to get your subscription ID, see [Get subscription and tenant IDs in the Azure portal](/azure/azure-portal/get-subscription-tenant-id).
+2. Set your Azure subscription deployment context using your subscription ID. To learn how to get your subscription ID, see [Get subscription and tenant IDs in the Azure portal](../../azure-portal/get-subscription-tenant-id.md).
```azurecli az account set <AzureSubscriptionId>
When deployment is completed, the following resources and access roles are created:
- An event hub consumer group. In this deployment, the consumer group is named *$Default*.
- - An Azure Event Hubs Data Sender role. In this deployment, the sender role is named *devicedatasender* and can be used to provide access to the device event hub using a shared access signature (SAS). To learn more about authorizing access using a SAS, see [Authorizing access to Event Hubs resources using Shared Access Signatures](/azure/event-hubs/authorize-access-shared-access-signature).
+ - An Azure Event Hubs Data Sender role. In this deployment, the sender role is named *devicedatasender* and can be used to provide access to the device event hub using a shared access signature (SAS). To learn more about authorizing access using a SAS, see [Authorizing access to Event Hubs resources using Shared Access Signatures](../../event-hubs/authorize-access-shared-access-signature.md).
- A Health Data Services workspace.
To learn about other methods for deploying the MedTech service, see
> [!div class="nextstepaction"]
> [Choose a deployment method for the MedTech service](deploy-new-choose.md)
-FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
+FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Deploy New Powershell Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-new-powershell-cli.md
Complete the following five steps to deploy the MedTech service using Azure PowerShell:
Connect-AzAccount ```
-2. Set your Azure subscription deployment context using your subscription ID. To learn how to get your subscription ID, see [Get subscription and tenant IDs in the Azure portal](/azure/azure-portal/get-subscription-tenant-id).
+2. Set your Azure subscription deployment context using your subscription ID. To learn how to get your subscription ID, see [Get subscription and tenant IDs in the Azure portal](../../azure-portal/get-subscription-tenant-id.md).
```azurepowershell Set-AzContext <AzureSubscriptionId>
Complete the following five steps to deploy the MedTech service using the Azure CLI:
az login ```
-2. Set your Azure subscription deployment context using your subscription ID. To learn how to get your subscription ID, see [Get subscription and tenant IDs in the Azure portal](/azure/azure-portal/get-subscription-tenant-id).
+2. Set your Azure subscription deployment context using your subscription ID. To learn how to get your subscription ID, see [Get subscription and tenant IDs in the Azure portal](../../azure-portal/get-subscription-tenant-id.md).
```azurecli az account set <AzureSubscriptionId>
When deployment is completed, the following resources and access roles are created:
- An event hub consumer group. In this deployment, the consumer group is named *$Default*.
- - An Azure Event Hubs Data Sender role. In this deployment, the sender role is named *devicedatasender* and can be used to provide access to the device event hub using a shared access signature (SAS). To learn more about authorizing access using a SAS, see [Authorizing access to Event Hubs resources using Shared Access Signatures](/azure/event-hubs/authorize-access-shared-access-signature).
+ - An Azure Event Hubs Data Sender role. In this deployment, the sender role is named *devicedatasender* and can be used to provide access to the device event hub using a shared access signature (SAS). To learn more about authorizing access using a SAS, see [Authorizing access to Event Hubs resources using Shared Access Signatures](../../event-hubs/authorize-access-shared-access-signature.md).
- A Health Data Services workspace.
To learn about other methods for deploying the MedTech service, see
> [!div class="nextstepaction"]
> [Choose a deployment method for the MedTech service](deploy-new-choose.md)
-FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
+FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Device Data Through Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/device-data-through-iot-hub.md
When deployment is completed, the following resources and access roles are created:
- An event hub consumer group. In this deployment, the consumer group is named *$Default*.
- - An Azure Event Hubs Data Sender role. In this deployment, the sender role is named *devicedatasender* and can be used to provide access to the device event hub using a shared access signature (SAS). To learn more about authorizing access using a SAS, see [Authorizing access to Event Hubs resources using Shared Access Signatures](/azure/event-hubs/authorize-access-shared-access-signature). The Azure Event Hubs Data Sender role isn't used in this tutorial.
+ - An Azure Event Hubs Data Sender role. In this deployment, the sender role is named *devicedatasender* and can be used to provide access to the device event hub using a shared access signature (SAS). To learn more about authorizing access using a SAS, see [Authorizing access to Event Hubs resources using Shared Access Signatures](../../event-hubs/authorize-access-shared-access-signature.md). The Azure Event Hubs Data Sender role isn't used in this tutorial.
- An Azure IoT Hub with [message routing](../../iot-hub/iot-hub-devguide-messages-d2c.md) configured to send device messages to the device message event hub.
To learn about other methods for deploying the MedTech service, see
> [!div class="nextstepaction"]
> [Choose a deployment method for the MedTech service](deploy-new-choose.md)
-FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
+FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis How To Enable Diagnostic Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-enable-diagnostic-settings.md
# How to enable diagnostic settings for the MedTech service
-In this article, you'll learn how to enable the diagnostic settings for the MedTech service to export logs and metrics to different destinations (for example: to an [Azure Log Analytics workspace](/azure/azure-monitor/logs/log-analytics-workspace-overview) or an [Azure storage account](../../storage/index.yml) or an [Azure event hub](../../event-hubs/index.yml)) for audit, analysis, backup, or troubleshooting of your MedTech service.
+In this article, you'll learn how to enable the diagnostic settings for the MedTech service to export logs and metrics to different destinations (for example: to an [Azure Log Analytics workspace](../../azure-monitor/logs/log-analytics-workspace-overview.md) or an [Azure storage account](../../storage/index.yml) or an [Azure event hub](../../event-hubs/index.yml)) for audit, analysis, backup, or troubleshooting of your MedTech service.
## Create a diagnostic setting for the MedTech service
If you choose to include your Log Analytics workspace as a destination option fo
:::image type="content" source="media/how-to-enable-diagnostic-settings/clean-query-result-post-error-fix.png" alt-text="Screenshot of query after fixing error." lightbox="media/how-to-enable-diagnostic-settings/clean-query-result-post-error-fix.png":::

> [!TIP]
-> To learn about how to use the Log Analytics workspace, see [Azure Log Analytics workspace](/azure/azure-monitor/logs/log-analytics-workspace-overview).
+> To learn about how to use the Log Analytics workspace, see [Azure Log Analytics workspace](../../azure-monitor/logs/log-analytics-workspace-overview.md).
>
> To learn about how to troubleshoot the MedTech service error messages and conditions, see [Troubleshoot the MedTech service error messages and conditions](troubleshoot-error-messages-and-conditions.md).
In this article, you learned how to enable the diagnostics settings for the MedTech service.
To learn about the MedTech service frequently asked questions (FAQs), see

> [!div class="nextstepaction"]
-> [Frequently asked questions about the MedTech service](frequently-asked-questions.md)
+> [Frequently asked questions about the MedTech service](frequently-asked-questions.md)
healthcare-apis How To Use Monitoring Tab https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-monitoring-tab.md
In this article, you'll learn how to use the MedTech service monitoring tab in the Azure portal.
:::image type="content" source="media\how-to-use-monitoring-tab\pin-metrics-to-dashboard.png" alt-text="Screenshot of the MedTech service monitoring tile with red box around the pin icon." lightbox="media\how-to-use-monitoring-tab\pin-metrics-to-dashboard.png":::

> [!TIP]
- > To learn more about advanced metrics display and sharing options, see [Getting started with Azure Metrics Explorer](/azure/azure-monitor/essentials/metrics-getting-started)
+ > To learn more about advanced metrics display and sharing options, see [Getting started with Azure Metrics Explorer](../../azure-monitor/essentials/metrics-getting-started.md)
## Available metrics for the MedTech service
To learn how to enable the MedTech service diagnostic settings, see
> [!div class="nextstepaction"]
> [How to enable diagnostic settings for the MedTech service](how-to-enable-diagnostic-settings.md)
-(FHIR&#174;) is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
+(FHIR&#174;) is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
iot-develop Quickstart Devkit Nxp Mimxrt1060 Evk Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-nxp-mimxrt1060-evk-iot-hub.md
+---
+title: Connect an NXP MIMXRT1060-EVK to Azure IoT Hub quickstart
+description: Use Azure RTOS embedded software to connect an NXP MIMXRT1060-EVK device to Azure IoT Hub and send telemetry.
+ms.devlang: c
+ms.date: 01/11/2022
+---
+# Quickstart: Connect an NXP MIMXRT1060-EVK Evaluation kit to IoT Hub
+
+**Applies to**: [Embedded device development](about-iot-develop.md#embedded-device-development)<br>
+**Total completion time**: 45 minutes
+
+[![Browse code](media/common/browse-code.svg)](https://github.com/azure-rtos/getting-started/tree/master/NXP/MIMXRT1060-EVK)
+
+In this quickstart, you use Azure RTOS to connect the NXP MIMXRT1060-EVK evaluation kit (from now on, the NXP EVK) to Azure IoT.
+
+You'll complete the following tasks:
+
+* Install a set of embedded development tools for programming the NXP EVK in C
+* Build an image and flash it onto the NXP EVK
+* Use Azure CLI to create and manage an Azure IoT hub that the NXP EVK will securely connect to
+* Use Azure IoT Explorer to register a device with your IoT hub, view device properties, view device telemetry, and call direct commands on the device
+
+## Prerequisites
+
+* A PC running Windows 10 or Windows 11
+* [Git](https://git-scm.com/downloads) for cloning the repository
+* Hardware
+ * The [NXP MIMXRT1060-EVK](https://www.nxp.com/design/development-boards/i-mx-evaluation-and-development-boards/mimxrt1060-evk-i-mx-rt1060-evaluation-kit:MIMXRT1060-EVK) (NXP EVK)
+ * USB 2.0 A male to Micro USB male cable
+ * Wired Ethernet access
+ * Ethernet cable
+* An active Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+## Prepare the development environment
+
+To set up your development environment, first you clone a GitHub repo that contains all the assets you need for the quickstart. Then you install a set of programming tools.
+
+### Clone the repo for the quickstart
+
+Clone the following repo to download all sample device code, setup scripts, and offline versions of the documentation. If you previously cloned this repo in another quickstart, you don't need to do it again.
+
+To clone the repo, run the following command:
+
+```shell
+git clone --recursive https://github.com/azure-rtos/getting-started.git
+```
+
+### Install the tools
+
+The cloned repo contains a setup script that installs and configures the required tools. If you installed these tools in another embedded device quickstart, you don't need to do it again.
+
+> [!NOTE]
+> The setup script installs the following tools:
+> * [CMake](https://cmake.org): Build
+> * [ARM GCC](https://developer.arm.com/tools-and-software/open-source-software/developer-tools/gnu-toolchain/gnu-rm): Compile
+> * [Termite](https://www.compuphase.com/software_termite.htm): Monitor serial port output for connected devices
+
+To install the tools:
+
+1. From File Explorer, navigate to the following path in the repo and run the setup script named *get-toolchain.bat*:
+
+ *getting-started\tools\get-toolchain.bat*
+
+1. After the installation, open a new console window to recognize the configuration changes made by the setup script. Use this console to complete the remaining programming tasks in the quickstart. You can use Windows CMD, PowerShell, or Git Bash for Windows.
+1. Run the following code to confirm that CMake version 3.14 or later is installed.
+
+ ```shell
+ cmake --version
+ ```
+
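If you want the version check itself to be scriptable, here's a small sketch that parses the usual first line of `cmake --version` output (for example, `cmake version 3.24.2`) and compares numerically. This is an illustrative helper, not part of the repo's tooling:

```python
import re

def meets_minimum(version_output, minimum=(3, 14)):
    """Parse a 'cmake version X.Y.Z' line and compare numerically."""
    match = re.search(r"cmake version (\d+)\.(\d+)", version_output)
    if not match:
        raise ValueError("unrecognized cmake --version output")
    return (int(match.group(1)), int(match.group(2))) >= minimum

print(meets_minimum("cmake version 3.24.2"))  # True
```

Note that a plain string comparison would wrongly rank "3.9" above "3.14"; comparing integer tuples avoids that pitfall.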
+## Create the cloud components
+
+### Create an IoT hub
+
+You can use Azure CLI to create an IoT hub that handles events and messaging for your device.
+
+To create an IoT hub:
+
+1. Launch your CLI app. To run the CLI commands in the rest of this quickstart, copy the command syntax, paste it into your CLI app, edit variable values, and press Enter.
+ - If you're using Cloud Shell, right-click the link for [Cloud Shell](https://shell.azure.com/bash), and select the option to open in a new tab.
+ - If you're using Azure CLI locally, start your CLI console app and sign in to Azure CLI.
+
+1. Run [az extension add](/cli/azure/extension#az-extension-add) to install or upgrade the *azure-iot* extension to the current version.
+
+ ```azurecli-interactive
+ az extension add --upgrade --name azure-iot
+ ```
+
+1. Run the [az group create](/cli/azure/group#az-group-create) command to create a resource group. The following command creates a resource group named *MyResourceGroup* in the *centralus* region.
+
+ > [!NOTE]
+ > You can optionally set an alternate `location`. To see available locations, run [az account list-locations](/cli/azure/account#az-account-list-locations).
+
+ ```azurecli
+ az group create --name MyResourceGroup --location centralus
+ ```
+
+1. Run the [az iot hub create](/cli/azure/iot/hub#az-iot-hub-create) command to create an IoT hub. It might take a few minutes to create an IoT hub.
+
+ *YourIotHubName*. Replace this placeholder in the code with the name you chose for your IoT hub. An IoT hub name must be globally unique in Azure. This placeholder is used in the rest of this quickstart to represent your unique IoT hub name.
+
+ The `--sku F1` parameter creates the IoT hub in the Free tier. Free tier hubs have a limited feature set and are used for proof of concept applications. For more information on IoT Hub tiers, features, and pricing, see [Azure IoT Hub pricing](https://azure.microsoft.com/pricing/details/iot-hub).
+
+ ```azurecli
+ az iot hub create --resource-group MyResourceGroup --name {YourIoTHubName} --sku F1 --partition-count 2
+ ```
+
+1. After the IoT hub is created, view the JSON output in the console, and copy the `hostName` value to use in a later step. The `hostName` value looks like the following example:
+
+ `{Your IoT hub name}.azure-devices.net`
+
+### Configure IoT Explorer
+
+In the rest of this quickstart, you'll use IoT Explorer to register a device to your IoT hub, to view the device properties and telemetry, and to send commands to your device. In this section, you configure IoT Explorer to connect to the IoT hub you created, and to read plug and play models from the public model repository.
+
+To add a connection to your IoT hub:
+
+1. In your CLI app, run the [az iot hub connection-string show](/cli/azure/iot/hub/connection-string#az-iot-hub-connection-string-show) command to get the connection string for your IoT hub.
+
+ ```azurecli
+ az iot hub connection-string show --hub-name {YourIoTHubName}
+ ```
+
+1. Copy the connection string without the surrounding quotation characters.
+1. In Azure IoT Explorer, select **IoT hubs** on the left menu.
+1. Select **+ Add connection**.
+1. Paste the connection string into the **Connection string** box.
+1. Select **Save**.
+
+ :::image type="content" source="media/quickstart-devkit-nxp-mimxrt1060-evk-iot-hub/iot-explorer-add-connection.png" alt-text="Screenshot of adding a connection in IoT Explorer.":::
+
+If the connection succeeds, IoT Explorer switches to the **Devices** view.
+
+To add the public model repository:
+
+1. In IoT Explorer, select **Home** to return to the home view.
+1. On the left menu, select **IoT Plug and Play Settings**, then select **+Add** and select **Public repository** from the drop-down menu.
+1. An entry appears for the public model repository at `https://devicemodels.azure.com`.
+
+ :::image type="content" source="media/quickstart-devkit-nxp-mimxrt1060-evk-iot-hub/iot-explorer-add-public-repository.png" alt-text="Screenshot of adding the public model repository in IoT Explorer.":::
+
+1. Select **Save**.
+
+### Register a device
+
+In this section, you create a new device instance and register it with the IoT hub you created. You'll use the connection information for the newly registered device to securely connect your physical device in a later section.
+
+To register a device:
+
+1. From the home view in IoT Explorer, select **IoT hubs**.
+1. The connection you previously added should appear. Select **View devices in this hub** below the connection properties.
+1. Select **+ New** and enter a device ID for your device; for example, `mydevice`. Leave all other properties the same.
+1. Select **Create**.
+
+ :::image type="content" source="media/quickstart-devkit-nxp-mimxrt1060-evk-iot-hub/iot-explorer-device-created.png" alt-text="Screenshot of Azure IoT Explorer device identity.":::
+
+1. Use the copy buttons to copy the **Device ID** and **Primary key** fields.
+
+Before continuing to the next section, save each of the following values retrieved from earlier steps, to a safe location. You use these values in the next section to configure your device.
+
+* `hostName`
+* `deviceId`
+* `primaryKey`
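This quickstart embeds the three values as firmware constants, but the same triple is exactly what forms a standard IoT Hub device connection string used by SDKs and tools. A sketch with placeholder values (the hub name and key below are hypothetical):

```python
def device_connection_string(host_name, device_id, primary_key):
    """Assemble the standard IoT Hub device connection-string format."""
    return f"HostName={host_name};DeviceId={device_id};SharedAccessKey={primary_key}"

# Placeholder values purely for illustration.
cs = device_connection_string("contoso-hub.azure-devices.net", "mydevice", "bXlrZXk=")
# → "HostName=contoso-hub.azure-devices.net;DeviceId=mydevice;SharedAccessKey=bXlrZXk="
```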
+
+## Prepare the device
+
+To connect the NXP EVK to Azure, you'll modify a configuration file for Azure IoT settings, rebuild the image, and flash the image to the device.
+
+### Add configuration
+
+1. Open the following file in a text editor:
+
+ *getting-started\NXP\MIMXRT1060-EVK\app\azure_config.h*
+
+1. Comment out the following line near the top of the file as shown:
+
+ ```c
+ // #define ENABLE_DPS
+ ```
+
+1. Set the Azure IoT device information constants to the values that you saved after you created Azure resources.
+
+ |Constant name|Value|
+ |-|--|
+ | `IOT_HUB_HOSTNAME` | {*Your host name value*} |
+ | `IOT_HUB_DEVICE_ID` | {*Your Device ID value*} |
+ | `IOT_DEVICE_SAS_KEY` | {*Your Primary key value*} |
+
+1. Save and close the file.
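If you prefer to script the edit above rather than use a text editor, a minimal sketch that rewrites `#define NAME "value"` constants, shown here operating on a string for illustration (it assumes the constants follow that exact quoted form; the hub name is a placeholder):

```python
import re

def set_config(source, values):
    """Replace #define NAME "value" entries for the given constant names."""
    for name, value in values.items():
        source = re.sub(
            rf'(#define\s+{name}\s+)"[^"]*"',
            lambda m, v=value: m.group(1) + f'"{v}"',
            source,
        )
    return source

sample = '#define IOT_HUB_HOSTNAME ""\n#define IOT_HUB_DEVICE_ID ""\n'
patched = set_config(sample, {"IOT_HUB_HOSTNAME": "contoso-hub.azure-devices.net"})
print(patched.splitlines()[0])  # #define IOT_HUB_HOSTNAME "contoso-hub.azure-devices.net"
```

Applied to *azure_config.h*, the same function would fill in all three constants in one pass.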
+
+### Build the image
+
+1. In your console or in File Explorer, run the script *rebuild.bat* at the following path to build the image:
+
+ *getting-started\NXP\MIMXRT1060-EVK\tools\rebuild.bat*
+
+2. After the build completes, confirm that the binary file was created in the following path:
+
+ *getting-started\NXP\MIMXRT1060-EVK\build\app\mimxrt1060_azure_iot.bin*
+
+### Flash the image
+
+1. On the NXP EVK, locate the **Reset** button, the Micro USB port, and the Ethernet port. You use these components in the following steps. All three are highlighted in the following picture:
+
+ :::image type="content" source="media/quickstart-devkit-nxp-mimxrt1060-evk-iot-hub/nxp-evk-board.png" alt-text="Photo showing the NXP EVK board.":::
+
+1. Connect the Micro USB cable to the Micro USB port on the NXP EVK, and then connect it to your computer. After the device powers up, a solid green LED shows the power status.
+1. Use the Ethernet cable to connect the NXP EVK to an Ethernet port.
+1. In File Explorer, find the binary file that you created in the previous section.
+1. Copy the binary file *mimxrt1060_azure_iot.bin*.
+1. In File Explorer, find the NXP EVK device connected to your computer. The device appears as a drive on your system with the drive label **RT1060-EVK**.
+1. Paste the binary file into the root folder of the NXP EVK. Flashing starts automatically and completes in a few seconds.
+
+ > [!NOTE]
+ > During the flashing process, a red LED blinks rapidly on the NXP EVK.
+
+### Confirm device connection details
+
+You can use the **Termite** app to monitor communication and confirm that your device is set up correctly.
+
+1. Start **Termite**.
+ > [!TIP]
+ > If you have issues getting your device to initialize or connect after flashing, see [Troubleshooting](troubleshoot-embedded-device-quickstarts.md).
+1. Select **Settings**.
+1. In the **Serial port settings** dialog, check the following settings and update if needed:
+ * **Baud rate**: 115,200
+    * **Port**: The port that your NXP EVK is connected to. If there are multiple port options in the dropdown, open Windows **Device Manager** and view **Ports** to identify which port to use.
+
+ :::image type="content" source="media/quickstart-devkit-nxp-mimxrt1060-evk-iot-hub/termite-settings.png" alt-text="Screenshot of serial port settings in the Termite app.":::
+
+1. Select **OK**.
+1. Press the **Reset** button on the device. The button is labeled on the device and located near the Micro USB connector.
+1. In the **Termite** app, check the following checkpoint values to confirm that the device is initialized and connected to Azure IoT.
+
+ ```output
+ Initializing DHCP
+ MAC: **************
+ IP address: 192.168.0.56
+ Mask: 255.255.255.0
+ Gateway: 192.168.0.1
+ SUCCESS: DHCP initialized
+
+ Initializing DNS client
+ DNS address: 192.168.0.1
+ SUCCESS: DNS client initialized
+
+ Initializing SNTP time sync
+ SNTP server 0.pool.ntp.org
+ SNTP time update: Jan 11, 2023 20:37:37.90 UTC
+ SUCCESS: SNTP initialized
+
+ Initializing Azure IoT Hub client
+ Hub hostname: **************.azure-devices.net
+ Device id: mydevice
+ Model id: dtmi:azurertos:devkit:gsg;2
+ SUCCESS: Connected to IoT Hub
+
+ Receive properties: {"desired":{"$version":1},"reported":{"$version":1}}
+ Sending property: $iothub/twin/PATCH/properties/reported/?$rid=3{"deviceInformation":{"__t":"c","manufacturer":"NXP","model":"MIMXRT1060-EVK","swVersion":"1.0.0","osName":"Azure RTOS","processorArchitecture":"Arm Cortex M7","processorManufacturer":"NXP","totalStorage":8192,"totalMemory":768}}
+ Sending property: $iothub/twin/PATCH/properties/reported/?$rid=5{"ledState":false}
+ Sending property: $iothub/twin/PATCH/properties/reported/?$rid=7{"telemetryInterval":{"ac":200,"av":1,"value":10}}
+
+ Starting Main loop
+ Telemetry message sent: {"temperature":40.61}.
+ ```
+
+Keep Termite open to monitor device output in the following steps.
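The checkpoints above can also be verified mechanically if you capture the serial output to a file. A sketch that scans a captured log for the expected `SUCCESS` lines (the checkpoint strings mirror the sample output; the short log below is a stand-in for a real capture):

```python
EXPECTED = [
    "SUCCESS: DHCP initialized",
    "SUCCESS: DNS client initialized",
    "SUCCESS: SNTP initialized",
    "SUCCESS: Connected to IoT Hub",
]

def missing_checkpoints(log_text):
    """Return expected checkpoint lines that never appeared in the log."""
    return [line for line in EXPECTED if line not in log_text]

log = "SUCCESS: DHCP initialized\nSUCCESS: DNS client initialized\n"
print(missing_checkpoints(log))  # the SNTP and IoT Hub checkpoints are still missing
```

An empty result means the device reached the hub; anything left in the list tells you where initialization stopped.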
+
+## View device properties
+
+You can use Azure IoT Explorer to view and manage the properties of your devices. In the following sections, you'll use the Plug and Play capabilities that are visible in IoT Explorer to manage and interact with the NXP EVK. These capabilities rely on the device model published for the NXP EVK in the public model repository. You configured IoT Explorer to search this repository for device models earlier in this quickstart. In many cases, you can perform the same action without using plug and play by selecting IoT Explorer menu options. However, using plug and play often provides an enhanced experience. IoT Explorer can read the device model specified by a plug and play device and present information specific to that device.
+
+To access IoT Plug and Play components for the device in IoT Explorer:
+
+1. From the home view in IoT Explorer, select **IoT hubs**, then select **View devices in this hub**.
+1. Select your device.
+1. Select **IoT Plug and Play components**.
+1. Select **Default component**. IoT Explorer displays the IoT Plug and Play components that are implemented on your device.
+
+ :::image type="content" source="media/quickstart-devkit-nxp-mimxrt1060-evk-iot-hub/iot-explorer-default-component-view.png" alt-text="Screenshot of the device's default component in IoT Explorer.":::
+
+1. On the **Interface** tab, view the JSON content in the device model **Description**. The JSON contains configuration details for each of the IoT Plug and Play components in the device model.
+
+ Each tab in IoT Explorer corresponds to one of the IoT Plug and Play components in the device model.
+
+ | Tab | Type | Name | Description |
+ |||||
+ | **Interface** | Interface | `Getting Started Guide` | Example model for the Azure RTOS Getting Started Guides |
+    | **Properties (read-only)** | Property | `ledState` | Whether the LED is on or off |
+ | **Properties (writable)** | Property | `telemetryInterval` | The interval that the device sends telemetry |
+ | **Commands** | Command | `setLedState` | Turn the LED on or off |
+
+To view device properties using Azure IoT Explorer:
+
+1. Select the **Properties (read-only)** tab. There's a single read-only property to indicate whether the LED is on or off.
+1. Select the **Properties (writable)** tab. It displays the interval that telemetry is sent.
+1. Change the `telemetryInterval` to *5*, and then select **Update desired value**. Your device now uses this interval to send telemetry.
+
+ :::image type="content" source="media/quickstart-devkit-nxp-mimxrt1060-evk-iot-hub/iot-explorer-set-telemetry-interval.png" alt-text="Screenshot of setting telemetry interval on the device in IoT Explorer.":::
+
+1. IoT Explorer responds with a notification. You can also observe the update in Termite.
+1. Set the telemetry interval back to 10.
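The `{"ac":200,"av":1,"value":10}` shape in the earlier serial output is the IoT Plug and Play writable-property acknowledgement: the device echoes the accepted value (`value`), a status code (`ac`), and the desired-property version it acted on (`av`). A sketch of building that ack from a desired-twin patch (the field names follow the convention; this isn't the device firmware itself):

```python
import json

def ack_writable_property(desired_patch, name, status=200):
    """Build the reported-property ack for one writable property."""
    version = desired_patch["$version"]   # version of the desired patch being acked
    value = desired_patch[name]           # the value the device accepted
    return json.dumps({name: {"ac": status, "av": version, "value": value}})

patch = {"telemetryInterval": 5, "$version": 2}
print(ack_writable_property(patch, "telemetryInterval"))
```

When you select **Update desired value**, the hub sends a patch like `patch` above, and the device replies with this ack as a reported property, which is why IoT Explorer can show the update as accepted.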
+
+To use Azure CLI to view device properties:
+
+1. Run the [az iot hub device-twin show](/cli/azure/iot/hub/device-twin#az-iot-hub-device-twin-show) command.
+
+ ```azurecli
+ az iot hub device-twin show --device-id mydevice --hub-name {YourIoTHubName}
+ ```
+
+1. Inspect the properties for your device in the console output.
+
+## View telemetry
+
+With Azure IoT Explorer, you can view the flow of telemetry from your device to the cloud. Optionally, you can do the same task using Azure CLI.
+
+To view telemetry in Azure IoT Explorer:
+
+1. From the **IoT Plug and Play components** (Default Component) pane for your device in IoT Explorer, select the **Telemetry** tab. Confirm that **Use built-in event hub** is set to *Yes*.
+1. Select **Start**.
+1. View the telemetry as the device sends messages to the cloud.
+
+ :::image type="content" source="media/quickstart-devkit-nxp-mimxrt1060-evk-iot-hub/iot-explorer-device-telemetry.png" alt-text="Screenshot of device telemetry in IoT Explorer.":::
+
+ > [!NOTE]
+ > You can also monitor telemetry from the device by using the Termite app.
+
+1. Select the **Show modeled events** checkbox to view the events in the data format specified by the device model.
+
+ :::image type="content" source="media/quickstart-devkit-nxp-mimxrt1060-evk-iot-hub/iot-explorer-show-modeled-events.png" alt-text="Screenshot of modeled telemetry events in IoT Explorer.":::
+
+1. Select **Stop** to end receiving events.
+
+To use Azure CLI to view device telemetry:
+
+1. Run the [az iot hub monitor-events](/cli/azure/iot/hub#az-iot-hub-monitor-events) command. Use the names that you created previously in Azure IoT for your device and IoT hub.
+
+ ```azurecli
+ az iot hub monitor-events --device-id mydevice --hub-name {YourIoTHubName}
+ ```
+
+1. View the JSON output in the console.
+
+ ```json
+ {
+ "event": {
+ "origin": "mydevice",
+ "module": "",
+ "interface": "dtmi:azurertos:devkit:gsg;2",
+ "component": "",
+ "payload": {
+ "temperature": 41.77
+ }
+ }
+ }
+ ```
+
+1. Select CTRL+C to end monitoring.
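When scripting against `az iot hub monitor-events`, each event arrives as a JSON object like the one above; a sketch that pulls the temperature reading out of such an event (the event shape mirrors the sample output):

```python
event = {
    "event": {
        "origin": "mydevice",
        "interface": "dtmi:azurertos:devkit:gsg;2",
        "payload": {"temperature": 41.77},
    }
}

def temperature_of(evt):
    """Return the temperature reading from a monitored event, if present."""
    return evt.get("event", {}).get("payload", {}).get("temperature")

print(temperature_of(event))  # 41.77
```

Using `.get()` at each level returns `None` for events that carry a different payload instead of raising, which keeps a monitoring loop robust.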
+
+## Call a direct method on the device
+
+You can also use Azure IoT Explorer to call a direct method that you've implemented on your device. Direct methods have a name, and can optionally have a JSON payload, configurable connection, and method timeout. In this section, you call a method that turns an LED on or off. Optionally, you can do the same task using Azure CLI.
+
+To call a method in Azure IoT Explorer:
+
+1. From the **IoT Plug and Play components** (Default Component) pane for your device in IoT Explorer, select the **Commands** tab.
+1. For the **setLedState** command, set the **state** to **true**.
+1. Select **Send command**. You should see a notification in IoT Explorer. There will be no change on the device as there isn't an available LED to toggle. However, you can view the output in Termite to monitor the status of the methods.
+
+ :::image type="content" source="media/quickstart-devkit-nxp-mimxrt1060-evk-iot-hub/iot-explorer-invoke-method.png" alt-text="Screenshot of calling the setLedState method in IoT Explorer.":::
+
+1. Set the **state** to **false**, and then select **Send command**. Again, there's no LED to toggle on this board, but the device reports the new state.
+1. Optionally, you can view the output in Termite to monitor the status of the methods.
+
+To use Azure CLI to call a method:
+
+1. Run the [az iot hub invoke-device-method](/cli/azure/iot/hub#az-iot-hub-invoke-device-method) command, and specify the method name and payload. For this method, setting `method-payload` to `true` would turn an LED on. There will be no change on the device as there isn't an available LED to toggle. However, you can view the output in Termite to monitor the status of the methods.
++
+ ```azurecli
+ az iot hub invoke-device-method --device-id mydevice --method-name setLedState --method-payload true --hub-name {YourIoTHubName}
+ ```
+
+    The CLI console shows the status of your method call on the device, where `200` indicates success.
+
+ ```json
+ {
+ "payload": {},
+ "status": 200
+ }
+ ```
+
+1. View the Termite terminal to confirm the output messages:
+
+ ```output
+ Received command: setLedState
+ Payload: true
+ LED is turned ON
+ Sending property: $iothub/twin/PATCH/properties/reported/?$rid=15{"ledState":true}
+ ```
+
+## Troubleshoot and debug
+
+If you experience issues building the device code, flashing the device, or connecting, see [Troubleshooting](troubleshoot-embedded-device-quickstarts.md).
+
+For debugging the application, see [Debugging with Visual Studio Code](https://github.com/azure-rtos/getting-started/blob/master/docs/debugging.md).
+
+## Clean up resources
+
+If you no longer need the Azure resources created in this quickstart, you can use the Azure CLI to delete the resource group and all of its resources.
+
+> [!IMPORTANT]
+> Deleting a resource group is irreversible. The resource group and all the resources contained in it are permanently deleted. Make sure that you do not accidentally delete the wrong resource group or resources.
+
+To delete a resource group by name:
+
+1. Run the [az group delete](/cli/azure/group#az-group-delete) command. This command removes the resource group, the IoT Hub, and the device registration you created.
+
+ ```azurecli-interactive
+ az group delete --name MyResourceGroup
+ ```
+
+1. Run the [az group list](/cli/azure/group#az-group-list) command to confirm the resource group is deleted.
+
+ ```azurecli-interactive
+ az group list
+ ```
++
+## Next steps
+
+In this quickstart, you built a custom image that contains Azure RTOS sample code, and then flashed the image to the NXP EVK device. You connected the NXP EVK to Azure IoT Hub, and carried out tasks such as viewing telemetry and calling a method on the device.
+
+As a next step, explore the following articles to learn more about using the IoT device SDKs or Azure RTOS to connect devices to Azure IoT.
+
+> [!div class="nextstepaction"]
+> [Connect a simulated device to IoT Hub](quickstart-send-telemetry-iot-hub.md)
+> [!div class="nextstepaction"]
+> [Learn more about connecting embedded devices using C SDK and Embedded C SDK](concepts-using-c-sdk-and-embedded-c-sdk.md)
+
+> [!IMPORTANT]
+> Azure RTOS provides OEMs with components to secure communication and to create code and data isolation using underlying MCU/MPU hardware protection mechanisms. However, each OEM is ultimately responsible for ensuring that their device meets evolving security requirements.
iot-dps How To Troubleshoot Dps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/how-to-troubleshoot-dps.md
Use this table to understand and resolve common errors.
* For a 429 error, follow the IoT Hub retry pattern of exponential backoff with random jitter. You can use the `retry-after` header provided by the SDK.
-* For 500-series server errors, retry your [connection](/azure/iot-dps/concepts-deploy-at-scale#iot-hub-connectivity-considerations) using cached credentials or a [Device Registration Status Lookup API](/rest/api/iot-dps/device/runtime-registration/device-registration-status-lookup#deviceregistrationresult) call.
+* For 500-series server errors, retry your [connection](./concepts-deploy-at-scale.md#iot-hub-connectivity-considerations) using cached credentials or a [Device Registration Status Lookup API](/rest/api/iot-dps/device/runtime-registration/device-registration-status-lookup#deviceregistrationresult) call.
-For related best practices, such as retrying operations, see [Best practices for large-scale IoT device deployments](/azure/iot-dps/concepts-deploy-at-scale).
+For related best practices, such as retrying operations, see [Best practices for large-scale IoT device deployments](./concepts-deploy-at-scale.md).
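The retry guidance above (exponential backoff with random jitter) can be sketched in a few lines of shell. Here `try_operation` is a hypothetical stand-in for the registration or connection call; the stub succeeds on the third attempt, and the `sleep` is commented out so the sketch runs instantly:

```shell
#!/usr/bin/env bash
# Sketch: retry with exponential backoff plus random jitter.
try_operation() { [ "$attempt" -ge 2 ]; }   # stub for the real DPS call

attempt=0
until try_operation; do
  attempt=$((attempt + 1))
  if [ "$attempt" -ge 5 ]; then
    echo "giving up after $attempt attempts" >&2
    exit 1
  fi
  delay=$(( (1 << attempt) + RANDOM % 3 ))  # 2^attempt seconds + 0-2 s jitter
  echo "attempt $attempt failed; retrying in ${delay}s"
  # sleep "$delay"
done
echo "operation succeeded"
```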
## Next Steps
iot-edge Development Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/development-environment.md
If you prefer to develop with other editors or from the CLI, the Azure IoT Edge
The Azure IoT Edge extension for Visual Studio Code provides IoT Edge module templates built on programming languages including C, C#, Java, Node.js, and Python. Templates for Azure functions in C# are also included.
-For more information and to download, see [Azure IoT Tools for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools).
+For more information and to download, see [Azure IoT Edge for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-edge).
In addition to the IoT Edge extensions, you may find it helpful to install additional extensions for developing. For example, you can use [Docker Support for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=PeterJausovec.vscode-docker) to manage your images, containers, and registries. Additionally, all the major supported languages have extensions for Visual Studio Code that can help when you're developing modules.
+The [Azure IoT Hub](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) extension is useful as a companion for the Azure IoT Edge extension.
+
#### Prerequisites

The module templates for some languages and services have prerequisites that are necessary to build the project folders on your development machine with Visual Studio Code.
iot-edge How To Deploy Blob https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-deploy-blob.md
There are several ways to deploy modules to an IoT Edge device and all of them w
## Prerequisites

- An [IoT hub](../iot-hub/iot-hub-create-through-portal.md) in your Azure subscription.
+
- An IoT Edge device. If you don't have an IoT Edge device set up, you can create one in an Azure virtual machine. Follow the steps in one of the quickstart articles to [Create a virtual Linux device](quickstart-linux.md) or [Create a virtual Windows device](quickstart.md).
-- [Visual Studio Code](https://code.visualstudio.com/) and the [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools) if deploying from Visual Studio Code.
+- [Visual Studio Code](https://code.visualstudio.com/).
+
+- The [Azure IoT Edge](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-edge) extension and the [Azure IoT Hub](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) extension if deploying from Visual Studio Code.
## Deploy from the Azure portal
iot-edge How To Deploy Modules Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-deploy-modules-vscode.md
Title: Deploy modules from Visual Studio Code - Azure IoT Edge
-description: Use Visual Studio Code with the Azure IoT Tools to push an IoT Edge module from your IoT Hub to your IoT Edge device, as configured by a deployment manifest.
+description: Use Visual Studio Code with Azure IoT Edge for Visual Studio Code to push an IoT Edge module from your IoT Hub to your IoT Edge device, as configured by a deployment manifest.
This article shows how to create a JSON deployment manifest, then use that file
If you don't have an IoT Edge device set up, you can create one in an Azure virtual machine. Follow the steps in one of the quickstart articles to [Create a virtual Linux device](quickstart-linux.md) or [Create a virtual Windows device](quickstart.md). * [Visual Studio Code](https://code.visualstudio.com/).
-* [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools#overview) for Visual Studio Code.
+* [Azure IoT Edge for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-edge).
## Configure a deployment manifest
iot-edge How To Deploy Vscode At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-deploy-vscode-at-scale.md
In this article, you set up Visual Studio Code and the IoT extension. You then l
If you don't have an IoT Edge device set up, you can create one in an Azure virtual machine. Follow the steps in one of the quickstart articles to [Create a virtual Linux device](quickstart-linux.md) or [Create a virtual Windows device](quickstart.md).

* [Visual Studio Code](https://code.visualstudio.com/).
-* [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools#overview) for Visual Studio Code.
+* [Azure IoT Edge for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-edge).
+* [Azure IoT Hub](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit).
## Sign in to access your IoT hub
iot-edge How To Monitor Module Twins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-monitor-module-twins.md
If you see the message "A module identity doesn't exist for this module", this e
To review and edit a module twin:
-1. If not already installed, install the [Azure IoT Tools Extension](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools) for Visual Studio Code.
+1. If not already installed, install the [Azure IoT Edge](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-edge) and [Azure IoT Hub](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) extensions.
1. In the **Explorer**, expand the **Azure IoT Hub**, and then expand the device with the module you want to monitor. 1. Right-click the module and select **Edit Module Twin**. A temporary file of the module twin is downloaded to your computer and displayed in Visual Studio Code.
iot-edge How To Use Create Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-use-create-options.md
The IoT Edge deployment manifest accepts create options formatted as JSON. For e
This edgeHub example uses the **HostConfig.PortBindings** parameter to map exposed ports on the container to a port on the host device.
-If you use the Azure IoT Tools extensions for Visual Studio or Visual Studio Code, you can write the create options in JSON format in the **deployment.template.json** file. Then, when you use the extension to build the IoT Edge solution or generate the deployment manifest, it will stringify the JSON for you in the format that the IoT Edge runtime expects. For example:
+If you use the [Azure IoT Edge](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-edge) extension for Visual Studio or Visual Studio Code, you can write the create options in JSON format in the **deployment.template.json** file. Then, when you use the extension to build the IoT Edge solution or generate the deployment manifest, it will stringify the JSON for you in the format that the IoT Edge runtime expects. For example:
```json "createOptions": "{\"HostConfig\":{\"PortBindings\":{\"5671/tcp\":[{\"HostPort\":\"5671\"}],\"8883/tcp\":[{\"HostPort\":\"8883\"}],\"443/tcp\":[{\"HostPort\":\"443\"}]}}}"
iot-edge How To Vs Code Develop Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-vs-code-develop-module.md
Install [Visual Studio Code](https://code.visualstudio.com/) first and then add
::: zone pivot="iotedge-dev-ext" -- [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools)
+- [Azure IoT Edge](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-edge) extension.
+- [Azure IoT Hub](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) extension.
::: zone-end
After solution creation, there are four items within the solution:
::: zone pivot="iotedge-dev-ext"
-Use Visual Studio Code and the Azure IoT Tools. You start by creating a solution, and then generating the first module in that solution. Each solution can contain multiple modules.
+Use Visual Studio Code and the [Azure IoT Edge](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-edge) extension. You start by creating a solution, and then generating the first module in that solution. Each solution can contain multiple modules.
1. Select **View** > **Command Palette**.
1. In the command palette, enter and run the command **Azure IoT Edge: New IoT Edge Solution**.
iot-edge Production Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/production-checklist.md
The following steps illustrate how to pull a Docker image of **edgeAgent** and *
For more information, see:
-* [Configure the IoT Edge agent](/azure/iot-edge/how-to-configure-proxy-support#configure-the-iot-edge-agent)
+* [Configure the IoT Edge agent](./how-to-configure-proxy-support.md#configure-the-iot-edge-agent)
* [Azure IoT Edge Agent](https://hub.docker.com/_/microsoft-azureiotedge-agent) * [Azure IoT Edge Hub](https://hub.docker.com/_/microsoft-azureiotedge-hub)
These constraints can be applied to individual modules by using create options i
## Next steps * Learn more about [IoT Edge automatic deployment](module-deployment-monitoring.md).
-* See how IoT Edge supports [Continuous integration and continuous deployment](how-to-continuous-integration-continuous-deployment.md).
+* See how IoT Edge supports [Continuous integration and continuous deployment](how-to-continuous-integration-continuous-deployment.md).
iot-edge Tutorial C Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-c-module.md
Before beginning this tutorial, you should have gone through the previous tutori
* A free or standard-tier [IoT Hub](../iot-hub/iot-hub-create-through-portal.md) in Azure.
* A device running Azure IoT Edge with Linux containers. You can use the quickstarts to set up a [Linux device](quickstart-linux.md) or [Windows device](quickstart.md).
* A container registry, like [Azure Container Registry](../container-registry/index.yml).
-* [Visual Studio Code](https://code.visualstudio.com/) configured with the [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools).
+* [Visual Studio Code](https://code.visualstudio.com/) configured with the [Azure IoT Edge](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-edge) and [Azure IoT Hub](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) extensions.
* Download and install a [Docker compatible container management system](support.md#container-engines) on your development machine. Configure it to run Linux containers.

To develop an IoT Edge module in C, install the following prerequisites on your development machine:
Installing the Azure IoT C SDK isn't required for this tutorial, but can provide
## Create a module project
-The following steps create an IoT Edge module project for C by using Visual Studio Code and the Azure IoT Tools extension. Once you have a project template created, add new code so that the module filters out messages based on their reported properties.
+The following steps create an IoT Edge module project for C by using Visual Studio Code and the Azure IoT Edge extension. Once you have a project template created, add new code so that the module filters out messages based on their reported properties.
### Create a new project
In the previous section, you created an IoT Edge solution and added code to the
## Deploy modules to device
-Use the Visual Studio Code explorer and the Azure IoT Tools extension to deploy the module project to your IoT Edge device. You already have a deployment manifest prepared for your scenario, the **deployment.amd64.json** file in the config folder. All you need to do now is select a device to receive the deployment.
+Use the Visual Studio Code explorer and the Azure IoT Edge extension to deploy the module project to your IoT Edge device. You already have a deployment manifest prepared for your scenario, the **deployment.amd64.json** file in the config folder. All you need to do now is select a device to receive the deployment.
Make sure that your IoT Edge device is up and running.
iot-edge Tutorial Csharp Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-csharp-module.md
Before beginning this tutorial, you should have gone through the previous tutori
* A free or standard-tier [IoT Hub](../iot-hub/iot-hub-create-through-portal.md) in Azure.
* A device running Azure IoT Edge with Linux containers. You can use the quickstarts to set up a [Linux device](quickstart-linux.md) or [Windows device](quickstart.md).
* A container registry, like [Azure Container Registry](../container-registry/index.yml).
-* [Visual Studio Code](https://code.visualstudio.com/) configured with the [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools).
+* [Visual Studio Code](https://code.visualstudio.com/) configured with the [Azure IoT Edge](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-edge) and [Azure IoT Hub](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) extensions.
* Download and install a [Docker compatible container management system](support.md#container-engines) on your development machine. Configure it to run Linux containers.

To complete these tutorials, prepare the following additional prerequisites on your development machine:
To complete these tutorials, prepare the following additional prerequisites on y
## Create a module project
-The following steps create an IoT Edge module project for C# by using Visual Studio Code and the Azure IoT Tools extension. Once you have a project template created, add new code so that the module filters out messages based on their reported properties.
+The following steps create an IoT Edge module project for C# by using Visual Studio Code and the Azure IoT Edge extension. Once you have a project template created, add new code so that the module filters out messages based on their reported properties.
### Create a new project
In the previous section, you created an IoT Edge solution and added code to the
## Deploy and run the solution
-Use the Visual Studio Code explorer and the Azure IoT Tools extension to deploy the module project to your IoT Edge device. You already have a deployment manifest prepared for your scenario, the **deployment.amd64.json** file in the config folder. All you need to do now is select a device to receive the deployment.
+Use the Visual Studio Code explorer and the Azure IoT Edge extension to deploy the module project to your IoT Edge device. You already have a deployment manifest prepared for your scenario, the **deployment.amd64.json** file in the config folder. All you need to do now is select a device to receive the deployment.
Make sure that your IoT Edge device is up and running.
iot-edge Tutorial Deploy Custom Vision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-deploy-custom-vision.md
Before beginning this tutorial, you should have gone through the previous tutori
* A free or standard-tier [IoT Hub](../iot-hub/iot-hub-create-through-portal.md) in Azure.
* A device running Azure IoT Edge with Linux containers. You can use the quickstarts to set up a [Linux device](quickstart-linux.md) or [Windows device](quickstart.md).
* A container registry, like [Azure Container Registry](../container-registry/index.yml).
-* [Visual Studio Code](https://code.visualstudio.com/) configured with the [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools).
+* [Visual Studio Code](https://code.visualstudio.com/) configured with the [Azure IoT Edge](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-edge) and
+[Azure IoT Hub](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) extensions.
* Download and install a [Docker compatible container management system](support.md#container-engines) on your development machine. Configure it to run Linux containers.

To develop an IoT Edge module with the Custom Vision service, install the following additional prerequisites on your development machine:
First, build and push your solution to your container registry.
## Deploy modules to device
-Use the Visual Studio Code explorer and the Azure IoT Tools extension to deploy the module project to your IoT Edge device. You already have a deployment manifest prepared for your scenario, the **deployment.amd64.json** file in the config folder. All you need to do now is select a device to receive the deployment.
+Use the Visual Studio Code explorer and the Azure IoT Edge extension to deploy the module project to your IoT Edge device. You already have a deployment manifest prepared for your scenario, the **deployment.amd64.json** file in the config folder. All you need to do now is select a device to receive the deployment.
Make sure that your IoT Edge device is up and running.
iot-edge Tutorial Deploy Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-deploy-function.md
Before beginning this tutorial, you should have gone through the previous tutori
* A free or standard-tier [IoT Hub](../iot-hub/iot-hub-create-through-portal.md) in Azure.
* An AMD64 device running Azure IoT Edge with Linux containers. You can use the quickstarts to set up a [Linux device](quickstart-linux.md) or [Windows device](quickstart.md).
* A container registry, like [Azure Container Registry](../container-registry/index.yml).
-* [Visual Studio Code](https://code.visualstudio.com/) configured with the [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools).
+* [Visual Studio Code](https://code.visualstudio.com/) configured with the [Azure IoT Edge](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-edge) and
+[Azure IoT Hub](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) extensions.
* Download and install a [Docker compatible container management system](support.md#container-engines) on your development machine. Configure it to run Linux containers.

To develop an IoT Edge module with Azure Functions, install the following additional prerequisites on your development machine:
To develop an IoT Edge module in with Azure Functions, install the following add
## Create a function project
-The Azure IoT Tools for Visual Studio Code that you installed in the prerequisites provides management capabilities as well as some code templates. In this section, you use Visual Studio Code to create an IoT Edge solution that contains an Azure Function.
+The Azure IoT Edge extension for Visual Studio Code that you installed in the prerequisites provides management capabilities as well as some code templates. In this section, you use Visual Studio Code to create an IoT Edge solution that contains an Azure Function.
### Create a new project
Visual Studio Code outputs a success message when your container image is pushed
## Deploy and run the solution
-You can use the Azure portal to deploy your Function module to an IoT Edge device like you did in the quickstarts. You can also deploy and monitor modules from within Visual Studio Code. The following sections use the Azure IoT Tools for VS Code that was listed in the prerequisites. Install the extension now, if you didn't already.
+You can use the Azure portal to deploy your Function module to an IoT Edge device like you did in the quickstarts. You can also deploy and monitor modules from within Visual Studio Code. The following sections use the Azure IoT Edge and Azure IoT Hub extensions for Visual Studio Code that were listed in the prerequisites. Install the extensions now, if you didn't already.
1. In the Visual Studio Code explorer, under the **Azure IoT Hub** section, expand **Devices** to see your list of IoT devices.
iot-edge Tutorial Develop For Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-develop-for-linux.md
The following table lists the supported development scenarios for **Linux contai
| **Linux device architecture** | Linux AMD64 <br> Linux ARM32 <br> Linux ARM64 | Linux AMD64 <br> Linux ARM32 <br> Linux ARM64 |
| **Azure services** | Azure Functions <br> Azure Stream Analytics <br> Azure Machine Learning | |
| **Languages** | C <br> C# <br> Java <br> Node.js <br> Python | C <br> C# |
-| **More information** | [Azure IoT Edge for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-edge) | [Azure IoT Edge Tools for Visual Studio 2017](https://marketplace.visualstudio.com/items?itemName=vsc-iot.vsiotedgetools) <br> [Azure IoT Edge Tools for Visual Studio 2019](https://marketplace.visualstudio.com/items?itemName=vsc-iot.vs16iotedgetools) |
+| **More information** | [Azure IoT Edge for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-edge) <br> [Azure IoT Hub](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit)| [Azure IoT Edge Tools for Visual Studio 2017](https://marketplace.visualstudio.com/items?itemName=vsc-iot.vsiotedgetools) <br> [Azure IoT Edge Tools for Visual Studio 2019](https://marketplace.visualstudio.com/items?itemName=vsc-iot.vs16iotedgetools) |
## Install container engine
Use the IoT extensions for Visual Studio Code to develop IoT Edge modules. These
2. Once the installation is finished, select **View** > **Extensions**.
-3. Search for **Azure IoT Tools**, which is actually a collection of extensions that help you interact with IoT Hub and IoT devices, as well as developing IoT Edge modules.
+3. Search for **Azure IoT Edge** and **Azure IoT Hub**, which are extensions that help you interact with IoT Hub and IoT devices, as well as developing IoT Edge modules.
4. Select **Install**. Each included extension installs individually.
Use the IoT extensions for Visual Studio Code to develop IoT Edge modules. These
## Create a new module project
-The Azure IoT Tools extension provides project templates for all supported IoT Edge module languages in Visual Studio Code. These templates have all the files and code that you need to deploy a working module to test IoT Edge, or give you a starting point to customize the template with your own business logic.
+The Azure IoT Edge extension provides project templates for all supported IoT Edge module languages in Visual Studio Code. These templates have all the files and code that you need to deploy a working module to test IoT Edge, or give you a starting point to customize the template with your own business logic.
For this tutorial, we use the C# module template because it is the most commonly used template.
You verified that the built container images are stored in your container regist
## View messages from device
-The SampleModule code receives messages through its input queue and passes them along through its output queue. The deployment manifest declared routes that passed messages to SampleModul