Updates from: 01/13/2023 02:17:17
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Partner Arkose Labs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-arkose-labs.md
Arkose Labs products integration includes the following components:
- Custom HTML, JavaScript, and API connectors integrate with the Arkose platform
- **Azure Functions** - Your hosted API endpoint that works with the API connectors feature. This API performs server-side validation of the Arkose Labs session token.
- - Learn more in the [Azure Functions Overview](/azure/azure-functions/functions-overview)
+ - Learn more in the [Azure Functions Overview](../azure-functions/functions-overview.md)
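To make the validation step concrete, here's a minimal sketch of such an endpoint as a Python Azure Function. This is illustrative only: the Arkose Labs verify URL, the `ARKOSE_PRIVATE_KEY` setting name, and the response field names are assumptions, not the shipped sample.

```python
import json
import os
import urllib.request

import azure.functions as func


def main(req: func.HttpRequest) -> func.HttpResponse:
    """Validate an Arkose Labs session token server-side (illustrative sketch)."""
    token = req.get_json().get("sessionToken")  # token posted by the sign-up page
    # Hypothetical verify endpoint and key name; check the Arkose Labs docs.
    payload = json.dumps({
        "private_key": os.environ["ARKOSE_PRIVATE_KEY"],  # stored as an app setting
        "session_token": token,
    }).encode("utf-8")
    request = urllib.request.Request(
        "https://verify.arkoselabs.com/api/v4/verify/",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        result = json.load(response)
    # API connectors expect a continuation or blocking response body.
    if result.get("session_details", {}).get("solved"):
        body = {"version": "1.0.0", "action": "Continue"}
    else:
        body = {"version": "1.0.0", "action": "ShowBlockPage",
                "userMessage": "Challenge validation failed."}
    return func.HttpResponse(json.dumps(body), mimetype="application/json")
```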
The following diagram illustrates how the Arkose Labs platform integrates with Azure AD B2C.
Username and password are stored as environment variables, not part of the repos
#### Deploy the application to the web
-1. Deploy your Azure Function to the cloud. Learn more with [Azure Functions documentation](/azure/azure-functions/).
+1. Deploy your Azure Function to the cloud. Learn more with [Azure Functions documentation](../azure-functions/index.yml).
2. Copy the endpoint web URL of your Azure Function.
3. After deployment, select the **Upload settings** option.
4. Your environment variables are uploaded to the Application settings of the app service. Learn more in [Application settings in Azure](../azure-functions/functions-develop-vs-code.md?tabs=csharp#application-settings-in-azure).
Username and password are stored as environment variables, not part of the repos
- [Azure-Samples/active-directory-b2c-node-sign-up-user-flow-arkose](https://github.com/Azure-Samples/active-directory-b2c-node-sign-up-user-flow-arkose) - Find the Azure AD B2C sign-up user flow
- [Azure AD B2C custom policy overview](./custom-policy-overview.md)
-- [Tutorial: Create user flows and custom policies in Azure Active Directory B2C](./tutorial-create-user-flows.md?pivots=b2c-custom-policy)
+- [Tutorial: Create user flows and custom policies in Azure Active Directory B2C](./tutorial-create-user-flows.md?pivots=b2c-custom-policy)
active-directory Sap Successfactors Integration Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/sap-successfactors-integration-reference.md
For every user in SuccessFactors, Azure AD provisioning service retrieves the fo
| 26 | Manager User | employmentNav/jobInfoNav/managerUserNav | Only if `managerUserNav` is mapped |

## How full sync works
-Based on the attribute-mapping, during full sync Azure AD provisioning service sends the following "GET" OData API query to fetch effective data of all active users.
+Based on the attribute-mapping, during full sync Azure AD provisioning service sends the following "GET" OData API query to fetch effective data of all active and terminated workers.
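The query itself isn't reproduced in this digest. As a rough sketch of the shape of such an effective-dated OData fetch (the host, entity, parameters, and credentials below are assumptions, not the exact query the provisioning service sends):

```python
import urllib.parse
import urllib.request

# Illustrative only: an effective-dated OData "GET" against SuccessFactors.
# Host, entity, expansion, and date are placeholders for the sketch.
params = urllib.parse.urlencode({
    "$format": "json",
    "$expand": "employmentNav/jobInfoNav",  # pull effective job data per mapping
    "asOfDate": "2023-01-11",               # effective-dated snapshot
})
url = f"https://api4.successfactors.com/odata/v2/PerPerson?{params}"
request = urllib.request.Request(url, headers={"Authorization": "Basic <API credentials>"})
with urllib.request.urlopen(request) as response:
    workers = response.read()
```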
> [!div class="mx-tdCol2BreakAll"]
> | Parameter | Description |
Extending this scenario:
### Mapping employment status to account status
-By default, the Azure AD SuccessFactors connector uses the `activeEmploymentsCount` field of the `PersonEmpTerminationInfo` object to set account status. There is a known SAP SuccessFactors issue documented in [knowledge base article 3047486](https://launchpad.support.sap.com/#/notes/3047486) that at times this may disable the account of a terminated worker one day prior to the termination on the last day of work.
+By default, the Azure AD SuccessFactors connector uses the `activeEmploymentsCount` field of the `PersonEmpTerminationInfo` object to set account status. You may encounter one of the following issues with this attribute.
+1. There is a known SAP SuccessFactors issue documented in [knowledge base article 3047486](https://launchpad.support.sap.com/#/notes/3047486) that at times this may disable the account of a terminated worker one day prior to the termination on the last day of work.
+1. If the `PersonEmpTerminationInfo` object is set to null during termination, AD account disabling won't work, because the provisioning engine filters out records where the `personEmpTerminationInfoNav` object is null.
-If you are running into this issue or prefer mapping employment status to account status, you can update the mapping to expand the `emplStatus` field and use the employment status code present in the field `emplStatus.externalCode`. Based on [SAP support note 2505526](https://launchpad.support.sap.com/#/notes/2505526), here is a list of employment status codes that you can retrieve in the provisioning app.
+If you are running into any of these issues or prefer mapping employment status to account status, you can update the mapping to expand the `emplStatus` field and use the employment status code present in the field `emplStatus.externalCode`. Based on [SAP support note 2505526](https://launchpad.support.sap.com/#/notes/2505526), here is a list of employment status codes that you can retrieve in the provisioning app.
* A = Active
* D = Dormant
* U = Unpaid Leave
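To make that mapping concrete, here's a minimal sketch of the idea in Python (illustrative only; the real mapping is expressed in the provisioning attribute-mapping language, and which codes count as enabled is your policy decision):

```python
# Illustrative: derive an account-enabled flag from emplStatus.externalCode.
# Treating only "A" (Active) as enabled is an assumption; adjust per policy.
STATUS_TO_ENABLED = {
    "A": True,   # Active
    "D": False,  # Dormant
    "U": False,  # Unpaid Leave
}


def account_enabled(external_code: str) -> bool:
    """Map an employment status code to the AD account-enabled state."""
    try:
        return STATUS_TO_ENABLED[external_code]
    except KeyError:
        raise ValueError(f"Unmapped employment status code: {external_code}")
```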
active-directory Concept Authentication Authenticator App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-authenticator-app.md
Authenticator leverages the native Apple cryptography to achieve FIPS 140, Secur
FIPS 140 compliance for Microsoft Authenticator on Android is in progress and will follow soon.
+## Determining Microsoft Authenticator registration type in My Security info
+Users can manage and add additional Microsoft Authenticator registrations by going to https://aka.ms/mysecurityinfo or by selecting Security info from My Account. Specific icons are used to differentiate whether the Microsoft Authenticator registration is capable of passwordless phone sign-in or MFA.
+
+Authenticator registration type | Icon
+--- | ---
+Microsoft Authenticator: Passwordless phone sign-in | <img width="43" alt="Microsoft Authenticator passwordless sign-in Capable" src="https://user-images.githubusercontent.com/50213291/211923744-d025cd70-4b88-4603-8baf-db0fc5d28486.png">
+Microsoft Authenticator: MFA capable | <img width="43" alt="Microsoft Authenticator MFA Capable" src="https://user-images.githubusercontent.com/50213291/211921054-d11983ad-4e0d-4612-9a14-0fef625a9a2a.png">
+
## Next steps
- To get started with passwordless sign-in, see [Enable passwordless sign-in with the Microsoft Authenticator](howto-authentication-passwordless-phone.md).
active-directory Concept Authentication Strengths https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-strengths.md
An authentication strength Conditional Access policy works together with [MFA tr
- **Users who signed in by using certificate-based authentication aren't prompted to reauthenticate** - If a user first authenticated by using certificate-based authentication and the authentication strength requires another method, such as a FIDO2 security key, the user isn't prompted to use a FIDO2 security key and authentication fails. The user must restart their session to sign in with a FIDO2 security key.
-- **Authentication methods that are currently not supported by authentication strength** - The Email one-time pass (Guest) authentication method is not included in the available combinations.
- **Using 'Require one of the selected controls' with 'require authentication strength' control** - After you select the authentication strengths grant control and additional controls, all the selected controls must be satisfied in order to gain access to the resource. Using **Require one of the selected controls** isn't applicable, and will default to requiring all the controls in the policy.
-- **Multiple Conditional Access policies may be created when using "Require authentication strength" grant control**. These are two different policies and you can safely delete one of them.
-- **Windows Hello for Business** - If the user has used Windows Hello for Business as their primary authentication method, it can be used to satisfy an authentication strength requirement that includes Windows Hello for Business. However, if the user has used another method as their primary authentication method (for example, password) and the authentication strength requires them to use Windows Hello for Business, they won't be prompted to use or register for Windows Hello for Business.
-- **Authentication loop** can happen in one of the following scenarios:
-1. **Microsoft Authenticator (Phone Sign-in)** - When the user is required to use Microsoft Authenticator (Phone Sign-in) but the user is not registered for this method, they will be given instructions on how to set up the Microsoft Authenticator, that does not include how to enable Passwordless sign-in. As a result, the user can get into an authentication loop. To avoid this issue, make sure the user is registered for the method before the Conditional Access policy is enforced. Phone Sign-in can be registered using the steps outlined here: [Add your work or school account to the Microsoft Authenticator app ("Sign in with your credentials")](https://support.microsoft.com/en-us/account-billing/add-your-work-or-school-account-to-the-microsoft-authenticator-app-43a73ab5-b4e8-446d-9e54-2a4cb8e4e93c)
-2. **Conditional Access Policy is targeting all apps** - When the Conditional Access policy is targeting "All apps" but the user is not registered for any of the methods required by the authentication strength, the user will get into an authentication loop. To avoid this issue, target specific applications in the Conditional Access policy or make sure the user is registered for at least one of the authentication methods required by the authentication strength Conditional Access policy.
+- **Authentication loop** - When the user is required to use Microsoft Authenticator (Phone Sign-in) but isn't registered for this method, they're given instructions on how to set up the Microsoft Authenticator that don't include how to enable passwordless sign-in. As a result, the user can get into an authentication loop. To avoid this issue, make sure the user is registered for the method before the Conditional Access policy is enforced. Phone Sign-in can be registered using the steps outlined here: [Add your work or school account to the Microsoft Authenticator app ("Sign in with your credentials")](https://support.microsoft.com/en-us/account-billing/add-your-work-or-school-account-to-the-microsoft-authenticator-app-43a73ab5-b4e8-446d-9e54-2a4cb8e4e93c)
## Limitations
An authentication strength Conditional Access policy works together with [MFA tr
- **Require multifactor authentication and Require authentication strength can't be used together in the same Conditional Access policy** - These two Conditional Access grant controls can't be used together because the built-in authentication strength **Multifactor authentication** is equivalent to the **Require multifactor authentication** grant control.
+- **Authentication methods that are currently not supported by authentication strength** - The Email one-time pass (Guest) authentication method is not included in the available combinations.
-<!place holder: Auth Strength with CCS - will be documented in resilience-defaults doc-->
+- **Windows Hello for Business** - If the user has used Windows Hello for Business as their primary authentication method, it can be used to satisfy an authentication strength requirement that includes Windows Hello for Business. However, if the user has used another method as their primary authentication method (for example, password) and the authentication strength requires them to use Windows Hello for Business, they won't be prompted to use or register for Windows Hello for Business.
## FAQ
active-directory Troubleshoot Authentication Strengths https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/troubleshoot-authentication-strengths.md
Previously updated : 09/26/2022 Last updated : 01/11/2023
To verify if a method can be used:
If the user is registered for an enabled method that meets the authentication strength, they might need to use another method that isn't available after primary authentication, such as Windows Hello for Business or certificate-based authentication. For more information, see [How each authentication method works](concept-authentication-methods.md#how-each-authentication-method-works). The user will need to restart the session, choose **Sign-in options**, and select a method required by the authentication strength.
+
## A user can't access a resource
If an authentication strength requires a method that a user can't use, the user is blocked from sign-in. To check which method is required by an authentication strength, and which method the user is registered and enabled to use, follow the steps in the [previous section](#a-user-is-asked-to-sign-in-with-another-method-but-they-dont-see-a-method-they-expect).
active-directory Concept Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/concept-attributes.md
na Previously updated : 02/25/2021 Last updated : 01/11/2023
active-directory Concept How It Works https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/concept-how-it-works.md
Previously updated : 12/05/2019 Last updated : 01/11/2023
Cloud sync is built on top of the Azure AD services and has 2 key components:
- **Provisioning agent**: The Azure AD Connect cloud provisioning agent is the same agent as Workday inbound and built on the same server-side technology as app proxy and Pass Through Authentication. It requires an outbound connection only and agents are auto-updated.
-- **Provisioning service**: Same provisioning service as outbound provisioning and Workday inbound provisioning which uses a scheduler-based model. In case of cloud sync, the changes are provisioned every 2 mins.
+- **Provisioning service**: Same provisioning service as outbound provisioning and Workday inbound provisioning, which uses a scheduler-based model. Cloud sync provisions changes every 2 mins.
## Initial setup
-During initial setup, a few things are done that makes cloud sync happen. These are:
+During initial setup, a few things are done that make cloud sync happen.
- **During agent installation**: You configure the agent for the AD domains you want to provision from. This configuration registers the domains in the hybrid identity service and establishes an outbound connection to the service bus listening for requests.
-- **When you enable provisioning**: You select the AD domain and enable provisioning which runs every 2 mins. Optionally you may deselect password hash sync and define notification email. You can also manage attribute transformation using Microsoft Graph APIs.
+- **When you enable provisioning**: You select the AD domain and enable provisioning, which runs every 2 mins. Optionally you may deselect password hash sync and define notification email. You can also manage attribute transformation using Microsoft Graph APIs.
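As one example of that Graph-based management, here's a hedged sketch of reading the synchronization schema (which holds the attribute mappings and their transformations). The service principal ID, job ID, and token are placeholders you supply:

```python
import json
import urllib.request

# Sketch: read the cloud sync attribute schema via the Microsoft Graph
# synchronization API. SP_ID, JOB_ID, and TOKEN are placeholders.
SP_ID, JOB_ID, TOKEN = "<servicePrincipalId>", "<jobId>", "<access token>"
url = (f"https://graph.microsoft.com/v1.0/servicePrincipals/{SP_ID}"
       f"/synchronization/jobs/{JOB_ID}/schema")
request = urllib.request.Request(url, headers={"Authorization": f"Bearer {TOKEN}"})
with urllib.request.urlopen(request) as response:
    schema = json.load(response)
# Each synchronization rule carries the object and attribute mappings.
for rule in schema.get("synchronizationRules", []):
    print(rule.get("name"))
```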
## Agent installation
-The following is a walk-through of what occurs when the cloud provisioning agent is installed.
+The following items occur when the cloud provisioning agent is installed.
-- First, the Installer installs the Agent binaries and the Agent Service running under the Virtual Service Account (NETWORK SERVICE\AADProvisioningAgent). A virtual service account is a special type of account that does not have a password and is managed by Windows.
+- First, the Installer installs the Agent binaries and the Agent Service running under the Virtual Service Account (NETWORK SERVICE\AADProvisioningAgent). A virtual service account is a special type of account that doesn't have a password and is managed by Windows.
- The Installer then starts the Wizard.
- The Wizard will prompt for Azure AD credentials, will then authenticate, and retrieve a token.
- The wizard then asks for the current machine Domain Administrator's credentials.
- Using these credentials, the agent group managed service account (GMSA) for this domain is either created, or located and reused if it already exists.
- The agent service is now reconfigured to run under the GMSA.
- The wizard now asks for domain configuration along with the Enterprise Admin (EA)/Domain Admin (DA) account for each domain you want the agent to service.
-- The GMSA account is then updated with permissions that enable it access to each domain entered above.
+- The GMSA account is then updated with permissions that enable it access to each domain entered during setup.
- Next, the wizard triggers agent registration.
- The agent creates a certificate and, using the Azure AD token, registers itself and the certificate with the Hybrid Identity Service (HIS) Registration Service.
- The Wizard triggers an AgentResourceGrouping call. This call to the HIS Admin Service assigns the agent to one or more AD Domains in the HIS configuration.
- The wizard now restarts the agent service.
-- The agent calls a Bootstrap Service on restart (and every 10 mins afterwards) to check for configuration updates. The bootstrap service validates the agent identity. It also updates the last bootstrap time. This is important because if agents don't bootstrap, they are not getting updated Service Bus endpoints and may not be able to receive requests.
+- The agent calls a Bootstrap Service on restart (and every 10 mins afterwards) to check for configuration updates. The bootstrap service validates the agent identity. It also updates the last bootstrap time. This is important because if agents don't bootstrap, they aren't getting updated Service Bus endpoints and may not be able to receive requests.
## What is System for Cross-domain Identity Management (SCIM)?
-The [SCIM specification](https://tools.ietf.org/html/draft-scim-core-schema-01) is a standard that is used to automate the exchanging of user or group identity information between identity domains such as Azure AD. SCIM is becoming the de facto standard for provisioning and, when used in conjunction with federation standards like SAML or OpenID Connect, provides administrators an end-to-end standards-based solution for access management.
+The [SCIM specification](https://tools.ietf.org/html/draft-scim-core-schema-01) is a standard that is used to automate the exchanging of user or group identity information between identity domains such as Azure AD. SCIM is becoming the de facto standard for provisioning and, when used with federation standards like SAML or OpenID Connect, provides administrators an end-to-end standards-based solution for access management.
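To ground that, here's a minimal SCIM 2.0 user resource of the kind such provisioning traffic carries (the attribute values are invented for the sketch):

```python
import json

# Minimal SCIM user resource per the SCIM 2.0 core schema; values are invented.
scim_user = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
    "userName": "jdoe@contoso.com",
    "name": {"givenName": "Jane", "familyName": "Doe"},
    "active": True,
}
print(json.dumps(scim_user, indent=2))
```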
The Azure AD Connect cloud provisioning agent uses SCIM with Azure AD to provision and deprovision users and groups.

## Synchronization flow

![provisioning](media/concept-how-it-works/provisioning-4.png)
-Once you have installed the agent and enabled provisioning, the following flow occurs.
+Once you've installed the agent and enabled provisioning, the following flow occurs.
1. Once configured, the Azure AD Provisioning service calls the Azure AD hybrid service to add a request to the Service bus. The agent constantly maintains an outbound connection to the Service Bus listening for requests and picks up the System for Cross-domain Identity Management (SCIM) request immediately.
2. The agent breaks up the request into separate queries based on object type.
3. AD returns the result to the agent and the agent filters this data before sending it to Azure AD.
4. Agent returns the SCIM response to Azure AD. These responses are based on the filtering that happened within the agent. The agent uses scoping to filter the results.
5. The provisioning service writes the changes to Azure AD.
-6. If this is a delta Sync as opposed to a full sync, then cookie/watermark is used. New queries will get changes from that cookie/watermark onwards.
+6. If a delta Sync occurs, as opposed to a full sync, then the cookie/watermark is used. New queries will get changes from that cookie/watermark onwards.
## Supported scenarios:
The following scenarios are supported for cloud sync.
-- **Existing hybrid customer with a new forest**: Azure AD Connect sync is used for primary forests. Cloud sync is used for provisioning from an AD forest (including disconnected). For more information see the tutorial [here](tutorial-existing-forest.md).
+- **Existing hybrid customer with a new forest**: Azure AD Connect sync is used for primary forests. Cloud sync is used for provisioning from an AD forest (including disconnected). For more information, see the tutorial [here](tutorial-existing-forest.md).
![Existing hybrid](media/tutorial-existing-forest/existing-forest-new-forest-2.png)
-- **New hybrid customer**: Azure AD Connect sync is not used. Cloud sync is used for provisioning from an AD forest. For more information see the tutorial [here](tutorial-single-forest.md).
+- **New hybrid customer**: Azure AD Connect sync isn't used. Cloud sync is used for provisioning from an AD forest. For more information, see the tutorial [here](tutorial-single-forest.md).
![New customers](media/tutorial-single-forest/diagram-2.png)
active-directory How To Accidental Deletes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/how-to-accidental-deletes.md
Previously updated : 09/10/2021 Last updated : 01/11/2023
The following document describes the accidental deletion feature for Azure AD Co
To use this feature, you set a threshold for the number of objects that can be deleted before synchronization stops. If this number is reached, synchronization stops and a notification is sent to the specified email address. The notification gives you a chance to investigate what's going on.
-For additional information and an example, see the following video.
+For more information and an example, see the following video.
> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RWK5mV]
To use the new feature, follow the steps below.
2. Select **Azure AD Connect**.
3. Select **Manage cloud sync**.
4. Under **Configuration**, select your configuration.
-5. Under **Settings** fill in the following:
+5. Under **Settings** fill in the following information.
- **Notification email** - email used for notifications
- **Prevent accidental deletions** - check this box to enable the feature
- **Accidental deletion threshold** - enter the number of objects at which synchronization stops and a notification is sent
To use the new feature, follow the steps below.
![Accidental deletes](media/how-to-accidental-deletes/accident-1.png)

## Recovering from an accidental delete instance
-If you encounter an accidental delete you will see this on the status of your provisioning agent configuration. It will say **Delete threshold exceeded**.
+If you encounter an accidental delete, you'll see this on the status of your provisioning agent configuration. It will say **Delete threshold exceeded**.
![Accidental delete status](media/how-to-accidental-deletes/delete-1.png)
-By clicking on **Delete threshold exceeded**, you will see the sync status info. This will provide additional details.
+By clicking on **Delete threshold exceeded**, you'll see the sync status info. This action will provide more details.
![Sync status](media/how-to-accidental-deletes/delete-2.png)
-By right-clicking on the ellipses, you will get the following options:
+By right-clicking on the ellipses, you'll get the following options:
- View provisioning log
- View agent
- Allow deletes
The **Allow deletes** action will delete the objects that triggered the accident
![Yes on confirmation](media/how-to-accidental-deletes/delete-4.png)
-3. You will see confirmation that the deletions were accepted and the status will return to healthy with the next cycle.
+3. You'll see confirmation that the deletions were accepted and the status will return to healthy with the next cycle.
![Accept deletes](media/how-to-accidental-deletes/delete-8.png)

### Rejecting deletions
-If you do not want to allow the deletions, you need to do the following:
+If you don't want to allow the deletions, you need to do the following:
- investigate the source of the deletions
-- fix the issue (example, OU was moved out of scope accidentally and you have now re-added it back to the scope)
+- fix the issue (example, OU was moved out of scope accidentally and you've now re-added it back to the scope)
- Run **Restart sync** on the agent configuration

## Next steps
active-directory How To Attribute Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/how-to-attribute-mapping.md
Previously updated : 04/30/2021 Last updated : 01/11/2023
active-directory How To Automatic Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/how-to-automatic-upgrade.md
na Previously updated : 12/02/2019 Last updated : 01/11/2023
active-directory How To Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/how-to-configure.md
Previously updated : 12/14/2021 Last updated : 01/11/2023
active-directory How To Expression Builder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/how-to-expression-builder.md
Previously updated : 04/19/2021 Last updated : 01/11/2023
active-directory How To Gmsa Cmdlets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/how-to-gmsa-cmdlets.md
Previously updated : 07/01/2022 Last updated : 01/11/2023
active-directory How To Inbound Synch Ms Graph https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/how-to-inbound-synch-ms-graph.md
Previously updated : 12/04/2020 Last updated : 01/11/2023
active-directory How To Install Pshell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/how-to-install-pshell.md
Previously updated : 01/31/2021 Last updated : 01/11/2023
active-directory How To Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/how-to-install.md
Previously updated : 11/11/2022 Last updated : 01/11/2023
active-directory How To Manage Registry Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/how-to-manage-registry-options.md
na Previously updated : 12/11/2020 Last updated : 01/11/2023
active-directory How To Map Usertype https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/how-to-map-usertype.md
Previously updated : 05/04/2021 Last updated : 01/11/2023
active-directory How To On Demand Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/how-to-on-demand-provision.md
Previously updated : 09/10/2021 Last updated : 01/11/2023
active-directory How To Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/how-to-prerequisites.md
Previously updated : 03/04/2022 Last updated : 01/11/2023
active-directory How To Transformation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/how-to-transformation.md
description: This article describes how to use transformations to alter the defa
Previously updated : 12/02/2019 Last updated : 01/11/2023 ms.prod: windows-server-threshold ms.technology: identity-adfs
active-directory Howto Configure Publisher Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-configure-publisher-domain.md
You're not required to maintain the resources that are used for verification aft
If your tenant has verified domains, in the **Select a verified domain** dropdown, select one of the domains.

> [!NOTE]
-> The expected `Content-Type` header that should return is `application/json`. If you use any other header, like `application/json; charset=utf-8`, you might see this error message:
+> Content will be interpreted as UTF-8 JSON for deserialization. The supported `Content-Type` headers that can be returned are `application/json`, `application/json; charset=utf-8`, or ` `. If you use any other header, you might see this error message:
> > `Verification of publisher domain failed. Error getting JSON file from https:///.well-known/microsoft-identity-association. The server returned an unexpected content type header value.` >
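As a sketch of satisfying that check with Python's standard library (the application ID is a placeholder GUID, and real hosting will differ; the JSON body follows the documented `microsoft-identity-association` format):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Serve /.well-known/microsoft-identity-association with an accepted
# Content-Type. The applicationId below is a placeholder GUID.
BODY = json.dumps({
    "associatedApplications": [
        {"applicationId": "00000000-0000-0000-0000-000000000000"}
    ]
}).encode("utf-8")


class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/.well-known/microsoft-identity-association":
            self.send_response(200)
            self.send_header("Content-Type", "application/json")  # accepted value
            self.send_header("Content-Length", str(len(BODY)))
            self.end_headers()
            self.wfile.write(BODY)
        else:
            self.send_error(404)


if __name__ == "__main__":
    HTTPServer(("", 8080), Handler).serve_forever()
```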
active-directory Msal Logging Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-logging-java.md
This article shows how to enable MSAL4J logging using the logback framework in a
}
```
-In your tenant, you'll need separate app registrations for the web app and the web API. For app registration and exposing the web API scope, follow the steps in the scenario [A web app that authenticates users and calls web APIs](/azure/active-directory/develop/scenario-web-app-call-api-overview).
+In your tenant, you'll need separate app registrations for the web app and the web API. For app registration and exposing the web API scope, follow the steps in the scenario [A web app that authenticates users and calls web APIs](./scenario-web-app-call-api-overview.md).
For instructions on how to bind to other logging frameworks, see the [SLF4J manual](http://www.slf4j.org/manual.html).
PublicClientApplication app2 = PublicClientApplication.builder(PUBLIC_CLIENT_ID)
## Next steps
-For more code samples, refer to [Microsoft identity platform code samples](sample-v2-code.md).
+For more code samples, refer to [Microsoft identity platform code samples](sample-v2-code.md).
active-directory Permissions Consent Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/permissions-consent-overview.md
There are other ways in which applications can be granted authorization for app-
### Comparison of delegated and application permissions
-| <!-- No header--> | Delegated permissions | Application permissions |
+| | Delegated permissions | Application permissions |
|--|--|--|
| Types of apps | Web / Mobile / single-page app (SPA) | Web / Daemon |
| Access context | Get access on behalf of a user | Get access without a user |
active-directory Reference Aadsts Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-aadsts-error-codes.md
The `error` field has several possible values - review the protocol documentatio
| AADSTS50029 | Invalid URI - domain name contains invalid characters. Contact the tenant admin. |
| AADSTS50032 | WeakRsaKey - Indicates the erroneous user attempt to use a weak RSA key. |
| AADSTS50033 | RetryableError - Indicates a transient error not related to the database operations. |
-| AADSTS50034 | UserAccountNotFound - To sign into this application, the account must be added to the directory. This error can occur because the user mis-typed their username, or isn't in the tenant. An application may have chosen the wrong tenant to sign into, and the currently logged in user was prevented from doing so since they did not exist in your tenant. If this user should be able to log in, add them as a guest. See docs here: [Add B2B users](/azure/active-directory/external-identities/add-users-administrator). |
+| AADSTS50034 | UserAccountNotFound - To sign into this application, the account must be added to the directory. This error can occur because the user mis-typed their username, or isn't in the tenant. An application may have chosen the wrong tenant to sign into, and the currently logged in user was prevented from doing so since they did not exist in your tenant. If this user should be able to log in, add them as a guest. See docs here: [Add B2B users](../external-identities/add-users-administrator.md). |
| AADSTS50042 | UnableToGeneratePairwiseIdentifierWithMissingSalt - The salt required to generate a pairwise identifier is missing in principle. Contact the tenant admin. |
| AADSTS50043 | UnableToGeneratePairwiseIdentifierWithMultipleSalts |
| AADSTS50048 | SubjectMismatchesIssuer - Subject mismatches Issuer claim in the client assertion. Contact the tenant admin. |
The `error` field has several possible values - review the protocol documentatio
| AADSTS51005 | TemporaryRedirect - Equivalent to HTTP status 307, which indicates that the requested information is located at the URI specified in the location header. When you receive this status, follow the location header associated with the response. When the original request method was POST, the redirected request will also use the POST method. |
| AADSTS51006 | ForceReauthDueToInsufficientAuth - Integrated Windows authentication is needed. User logged in using a session token that is missing the integrated Windows authentication claim. Request the user to log in again. |
| AADSTS52004 | DelegationDoesNotExistForLinkedIn - The user has not provided consent for access to LinkedIn resources. |
-| AADSTS53000 | DeviceNotCompliant - Conditional Access policy requires a compliant device, and the device isn't compliant. The user must enroll their device with an approved MDM provider like Intune. For additional information, please visit [Conditional Access device remediation](/azure/active-directory/conditional-access/troubleshoot-conditional-access). |
+| AADSTS53000 | DeviceNotCompliant - Conditional Access policy requires a compliant device, and the device isn't compliant. The user must enroll their device with an approved MDM provider like Intune. For additional information, please visit [Conditional Access device remediation](../conditional-access/troubleshoot-conditional-access.md). |
| AADSTS53001 | DeviceNotDomainJoined - Conditional Access policy requires a domain joined device, and the device isn't domain joined. Have the user use a domain joined device. |
| AADSTS53002 | ApplicationUsedIsNotAnApprovedApp - The app used isn't an approved app for Conditional Access. User needs to use one of the apps from the list of approved apps to use in order to get access. |
-| AADSTS53003 | BlockedByConditionalAccess - Access has been blocked by Conditional Access policies. The access policy does not allow token issuance. If this is unexpected, see the conditional access policy that applied to this request in the Azure Portal or contact your administrator. For additional information, please visit [troubleshooting sign-in with Conditional Access](/azure/active-directory/conditional-access/troubleshoot-conditional-access). |
+| AADSTS53003 | BlockedByConditionalAccess - Access has been blocked by Conditional Access policies. The access policy does not allow token issuance. If this is unexpected, see the conditional access policy that applied to this request in the Azure Portal or contact your administrator. For additional information, please visit [troubleshooting sign-in with Conditional Access](../conditional-access/troubleshoot-conditional-access.md). |
| AADSTS53004 | ProofUpBlockedDueToRisk - User needs to complete the multi-factor authentication registration process before accessing this content. User should register for multi-factor authentication. |
| AADSTS53010 | ProofUpBlockedDueToSecurityInfoAcr - Cannot configure multi-factor authentication methods because the organization requires this information to be set from specific locations or devices. |
| AADSTS53011 | User blocked due to risk on home tenant. |
The `error` field has several possible values - review the protocol documentatio
| AADSTS90055 | TenantThrottlingError - There are too many incoming requests. This exception is thrown for blocked tenants. |
| AADSTS90056 | BadResourceRequest - To redeem the code for an access token, the app should send a POST request to the `/token` endpoint. Also, prior to this, you should provide an authorization code and send it in the POST request to the `/token` endpoint. Refer to this article for an overview of [OAuth 2.0 authorization code flow](v2-oauth2-auth-code-flow.md). Direct the user to the `/authorize` endpoint, which will return an authorization_code. By posting a request to the `/token` endpoint, the user gets the access token. Log in the Azure portal, and check **App registrations > Endpoints** to confirm that the two endpoints were configured correctly. |
| AADSTS900561 | BadResourceRequestInvalidRequest - The endpoint only accepts {valid_verbs} requests. Received a {invalid_verb} request. {valid_verbs} represents a list of HTTP verbs supported by the endpoint (for example, POST), {invalid_verb} is an HTTP verb used in the current request (for example, GET). This can be due to developer error, or due to users pressing the back button in their browser, triggering a bad request. It can be ignored. |
-| AADSTS90072 | PassThroughUserMfaError - The external account that the user signs in with doesn't exist on the tenant that they signed into; so the user can't satisfy the MFA requirements for the tenant. This error also might occur if the users are synced, but there is a mismatch in the ImmutableID (sourceAnchor) attribute between Active Directory and Azure AD. The account must be added as an external user in the tenant first. Sign out and sign in with a different Azure AD user account. For more information, please visit [configuring external identities](/azure/active-directory/external-identities/external-identities-overview). |
+| AADSTS90072 | PassThroughUserMfaError - The external account that the user signs in with doesn't exist on the tenant that they signed into; so the user can't satisfy the MFA requirements for the tenant. This error also might occur if the users are synced, but there is a mismatch in the ImmutableID (sourceAnchor) attribute between Active Directory and Azure AD. The account must be added as an external user in the tenant first. Sign out and sign in with a different Azure AD user account. For more information, please visit [configuring external identities](../external-identities/external-identities-overview.md). |
| AADSTS90081 | OrgIdWsFederationMessageInvalid - An error occurred when the service tried to process a WS-Federation message. The message isn't valid. |
| AADSTS90082 | OrgIdWsFederationNotSupported - The selected authentication policy for the request isn't currently supported. |
| AADSTS90084 | OrgIdWsFederationGuestNotAllowed - Guest accounts aren't allowed for this site. |
The `error` field has several possible values - review the protocol documentatio
## Next steps
-* Have a question or can't find what you're looking for? Create a GitHub issue or see [Support and help options for developers](./developer-support-help-options.md) to learn about other ways you can get help and support.
+* Have a question or can't find what you're looking for? Create a GitHub issue or see [Support and help options for developers](./developer-support-help-options.md) to learn about other ways you can get help and support.
active-directory Tutorial V2 Javascript Spa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-javascript-spa.md
sampleApp/
In the next steps, you'll create a new folder for the JavaScript SPA and set up the user interface (UI).

> [!TIP]
-> When you set up an Azure Active Directory (Azure AD) account, you create a tenant. This is a digital representation of your organization. It's primarily associated with a domain, like Microsoft.com. If you want to learn how applications can work with multiple tenants, refer to the [application model](/azure/active-directory/develop/application-model).
+> When you set up an Azure Active Directory (Azure AD) account, you create a tenant. This is a digital representation of your organization. It's primarily associated with a domain, like Microsoft.com. If you want to learn how applications can work with multiple tenants, refer to the [application model](./application-model.md).
## Create the SPA UI
The Microsoft Graph API requires the `User.Read` scope to read a user's profile.
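The tutorial acquires that scope with MSAL.js in the browser; purely to illustrate the call the scope authorizes, here's the equivalent request sketched in Python (the token is a placeholder):

```python
import json
import urllib.request

# Sketch: call Microsoft Graph /me with a token carrying the User.Read scope.
TOKEN = "<access token acquired with the User.Read scope>"
request = urllib.request.Request(
    "https://graph.microsoft.com/v1.0/me",
    headers={"Authorization": f"Bearer {TOKEN}"},
)
with urllib.request.urlopen(request) as response:
    profile = json.load(response)
print(profile.get("displayName"))
```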
Delve deeper into SPA development on the Microsoft identity platform in the first part of a scenario series: > [!div class="nextstepaction"]
-> [Scenario: Single-page application](scenario-spa-overview.md)
+> [Scenario: Single-page application](scenario-spa-overview.md)
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/whats-new-docs.md
Previously updated : 01/05/2023 Last updated : 01/11/2023
Welcome to what's new in the Microsoft identity platform documentation. This art
### Updated articles

-- [Quickstart: Protect an ASP.NET Core web API with the Microsoft identity platform](quickstart-v2-aspnet-core-web-api.md)
+- [A web API that calls web APIs: Code configuration](scenario-web-api-call-api-app-configuration.md)
- [Quickstart: Get a token and call the Microsoft Graph API by using a console app's identity](quickstart-v2-netcore-daemon.md)
+- [Quickstart: Protect an ASP.NET Core web API with the Microsoft identity platform](quickstart-v2-aspnet-core-web-api.md)
+- [Tutorial: Create a Blazor Server app that uses the Microsoft identity platform for authentication](tutorial-blazor-server.md)
- [Tutorial: Sign in users and call a protected API from a Blazor WebAssembly app](tutorial-blazor-webassembly.md)
-- [A web API that calls web APIs: Code configuration](scenario-web-api-call-api-app-configuration.md)
-- [Web app that signs in users: Code configuration](scenario-web-app-sign-user-app-configuration.md)
- [Web app that signs in users: App registration](scenario-web-app-sign-user-app-registration.md)
-- [Microsoft identity platform docs: What's new](whats-new-docs.md)
-- [Tutorial: Create a Blazor Server app that uses the Microsoft identity platform for authentication](tutorial-blazor-server.md)
+- [Web app that signs in users: Code configuration](scenario-web-app-sign-user-app-configuration.md)
+
## November 2022

### New articles
active-directory Workload Identity Federation Block Using Azure Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/workload-identity-federation-block-using-azure-policy.md
# Block workload identity federation on managed identities using a policy
-This article describes how to block the creation of federated identity credentials on user-assigned managed identities by using Azure Policy. By blocking the creation of federated identity credentials, you can block everyone from using [workload identity federation](workload-identity-federation.md) to access Azure AD protected resources. [Azure Policy](/azure/governance/policy/overview) helps enforce certain business rules on your Azure resources and assess compliance of those resources.
+This article describes how to block the creation of federated identity credentials on user-assigned managed identities by using Azure Policy. By blocking the creation of federated identity credentials, you can block everyone from using [workload identity federation](workload-identity-federation.md) to access Azure AD protected resources. [Azure Policy](../../governance/policy/overview.md) helps enforce certain business rules on your Azure resources and assess compliance of those resources.
The Not allowed resource types built-in policy can be used to block the creation of federated identity credentials on user-assigned managed identities.
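As a rough sketch, the assignment parameters for that built-in policy might look like the following; the resource type string is an assumption to verify against your environment before relying on it:

```python
import json

# Sketch: parameters for a "Not allowed resource types" policy assignment that
# blocks federated identity credentials on user-assigned managed identities.
assignment_parameters = {
    "listOfResourceTypesNotAllowed": {
        "value": [
            "Microsoft.ManagedIdentity/userAssignedIdentities"
            "/federatedIdentityCredentials"
        ]
    }
}
print(json.dumps(assignment_parameters, indent=2))
```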
active-directory Workload Identity Federation Create Trust https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/workload-identity-federation-create-trust.md
In this article, you learn how to create, list, and delete federated identity cr
## Important considerations and restrictions
-To create, update, or delete a federated identity credential, the account performing the action must have the [Application Administrator](/azure/active-directory/roles/permissions-reference#application-administrator), [Application Developer](/azure/active-directory/roles/permissions-reference#application-developer), [Cloud Application Administrator](/azure/active-directory/roles/permissions-reference#cloud-application-administrator), or Application Owner role. The [microsoft.directory/applications/credentials/update permission](/azure/active-directory/roles/custom-available-permissions#microsoftdirectoryapplicationscredentialsupdate) is required to update a federated identity credential.
+To create, update, or delete a federated identity credential, the account performing the action must have the [Application Administrator](../roles/permissions-reference.md#application-administrator), [Application Developer](../roles/permissions-reference.md#application-developer), [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator), or Application Owner role. The [microsoft.directory/applications/credentials/update permission](../roles/custom-available-permissions.md#microsoftdirectoryapplicationscredentialsupdate) is required to update a federated identity credential.
[!INCLUDE [federated credential configuration](./includes/federated-credential-configuration-considerations.md)]
az rest -m DELETE -u 'https://graph.microsoft.com/applications/f6475511-fd81-49
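The truncated `az rest` command above deletes one credential. As a companion sketch, listing an app's federated identity credentials through Microsoft Graph might look like this (the object ID and token are placeholders):

```python
import json
import urllib.request

# Sketch: list federated identity credentials on an app registration.
APP_OBJECT_ID = "<application object id>"
TOKEN = "<access token>"
url = (f"https://graph.microsoft.com/v1.0/applications/{APP_OBJECT_ID}"
       "/federatedIdentityCredentials")
request = urllib.request.Request(url, headers={"Authorization": f"Bearer {TOKEN}"})
with urllib.request.urlopen(request) as response:
    for cred in json.load(response).get("value", []):
        print(cred["name"], cred["issuer"], cred["subject"])
```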
- To learn how to use workload identity federation for GitHub Actions, see [Configure a GitHub Actions workflow to get an access token](/azure/developer/github/connect-from-azure).
- Read the [GitHub Actions documentation](https://docs.github.com/actions/deployment/security-hardening-your-deployments/configuring-openid-connect-in-azure) to learn more about configuring your GitHub Actions workflow to get an access token from Microsoft identity provider and access Azure resources.
- For more information, read about how Azure AD uses the [OAuth 2.0 client credentials grant](v2-oauth2-client-creds-grant-flow.md#third-case-access-token-request-with-a-federated-credential) and a client assertion issued by another IdP to get a token.
-- For information about the required format of JWTs created by external identity providers, read about the [assertion format](active-directory-certificate-credentials.md#assertion-format).
+- For information about the required format of JWTs created by external identity providers, read about the [assertion format](active-directory-certificate-credentials.md#assertion-format).
active-directory Directory Delete Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/directory-delete-howto.md
A few enterprise applications can't be deleted in the Azure portal and might blo
> > Before you proceed, verify that you're connected to the tenant that you want to delete with the MSOnline module. We recommend that you run the `Get-MsolDomain` command to confirm that you're connected to the correct tenant ID and `onmicrosoft.com` domain.
-5. Run the following command to set the tenant context:
+5. Run the following commands to set the tenant context. Do not skip these steps, or you risk deleting enterprise apps from the wrong tenant.
+ `Clear-AzContext -Scope CurrentUser`
`Connect-AzAccount -Tenant <object id of the tenant you are attempting to delete>`
+ `Get-AzContext`
>[!WARNING]
- > Before you proceed, verify that you're connected to the tenant that you want to delete with the Az PowerShell module. We recommend that you run the `Get-AzContext` command to check the connected tenant ID and `onmicrosoft.com` domain.
+ > Before you proceed, verify that you're connected to the tenant that you want to delete with the Az PowerShell module. We recommend that you run the `Get-AzContext` command to check the connected tenant ID and `onmicrosoft.com` domain. Do not skip the preceding steps, or you risk deleting enterprise apps from the wrong tenant.
6. Run the following command to remove any enterprise apps that you can't delete:
active-directory 3 Secure Access Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/3-secure-access-plan.md
To group resources for access:
* Microsoft Teams groups files, conversation threads, and other resources. Formulate an external access strategy for Microsoft Teams.
* See, [Secure external access to Microsoft Teams, SharePoint, and OneDrive for Business](9-secure-access-teams-sharepoint.md)
* Use entitlement management access packages to create and delegate management of packages of applications, groups, teams, SharePoint sites, etc.
- * [Create a new access package in entitlement management](/azure/active-directory/governance/entitlement-management-access-package-create)
+ * [Create a new access package in entitlement management](../governance/entitlement-management-access-package-create.md)
* Apply Conditional Access policies to up to 250 applications, with the same access requirements
- * [What is Conditional Access?](/azure/active-directory/conditional-access/overview)
+ * [What is Conditional Access?](../conditional-access/overview.md)
* Use Cross Tenant Access Settings Inbound Access to define access for application groups of external users
- * [Overview: Cross-tenant access with Azure AD External Identities](/azure/active-directory/external-identities/cross-tenant-access-overview)
+ * [Overview: Cross-tenant access with Azure AD External Identities](../external-identities/cross-tenant-access-overview.md)
Document the applications to be grouped. Considerations include:
Items in bold are recommended.
* [Manage external access with entitlement management](6-secure-access-entitlement-managment.md)
* [Secure access with Conditional Access policies](7-secure-access-conditional-access.md)
* [Control access with sensitivity labels](8-secure-access-sensitivity-labels.md)
-* [Secure external access to Microsoft Teams, SharePoint, and OneDrive for Business](9-secure-access-teams-sharepoint.md)
+* [Secure external access to Microsoft Teams, SharePoint, and OneDrive for Business](9-secure-access-teams-sharepoint.md)
active-directory Active Directory Access Create New Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-access-create-new-tenant.md
After you sign in to the Azure portal, you can create a new tenant for your orga
![Azure Active Directory - Create a tenant page - configuration tab ](media/active-directory-access-create-new-tenant/azure-ad-create-new-tenant.png)
- - Type _Contoso Organization_ into the **Organization name** box.
+ - Type your desired Organization name (for example _Contoso Organization_) into the **Organization name** box.
- - Type _Contosoorg_ into the **Initial domain name** box.
+ - Type your desired Initial domain name (for example _Contosoorg_) into the **Initial domain name** box.
- - Leave the _United States_ option in the **Country or region** box.
+ - Select your desired Country/Region or leave the _United States_ option in the **Country or region** box.
1. Select **Next: Review + Create**. Review the information you entered, and if the information is correct, select **Create**.
active-directory Active Directory Deployment Plans https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-deployment-plans.md
Use the following list to plan for authentication deployment.
* See the video, [How to configure and enforce multi-factor authentication in your tenant](https://www.youtube.com/watch?v=qNndxl7gqVM)
* See, [Plan an Azure Active Directory Multi-Factor Authentication deployment](../authentication/howto-mfa-getstarted.md)
* **Conditional Access** - Implement automated access-control decisions for users to access cloud apps, based on conditions:
- * See, [What is Conditional Access?](/azure/active-directory/conditional-access/overview)
+ * See, [What is Conditional Access?](../conditional-access/overview.md)
* See, [Plan a Conditional Access deployment](../conditional-access/plan-conditional-access.md)
* **Azure AD self-service password reset (SSPR)** - Help users reset a password without administrator intervention:
* See, [Passwordless authentication options for Azure AD](../authentication/concept-authentication-passwordless.md)
* See, [Plan an Azure Active Directory self-service password reset deployment](../authentication/howto-sspr-deployment.md)
* **Passwordless authentication** - Implement passwordless authentication using the Microsoft Authenticator app or FIDO2 Security keys:
- * See, [Enable passwordless sign-in with Microsoft Authenticator](/azure/active-directory/authentication/howto-authentication-passwordless-phone)
+ * See, [Enable passwordless sign-in with Microsoft Authenticator](../authentication/howto-authentication-passwordless-phone.md)
* See, [Plan a passwordless authentication deployment in Azure Active Directory](../authentication/howto-authentication-passwordless-deployment.md)

## Applications and devices
Use the following list to help deploy applications and devices.
* See, [What is SSO in Azure AD?](../manage-apps/what-is-single-sign-on.md)
* See, [Plan a SSO deployment](../manage-apps/plan-sso-deployment.md)
* **My Apps portal** - A web-based portal to discover and access applications. Enable user productivity with self-service, for instance requesting access to groups, or managing access to resources on behalf of others.
- * See, [My Apps portal overview](/azure/active-directory/manage-apps/myapps-overview)
+ * See, [My Apps portal overview](../manage-apps/myapps-overview.md)
* **Devices** - Evaluate device integration methods with Azure AD, choose the implementation plan, and more.
* See, [Plan your Azure Active Directory device deployment](../devices/plan-device-deployment.md)
The following list describes features and services for productivity gains in hyb
* **Identity governance** - Create identity governance and enhance business processes that rely on identity data. With HR products, such as Workday or Successfactors, manage employee and contingent-staff identity lifecycle with rules. These rules map Joiner-Mover-Leaver processes, such as New Hire, Terminate, Transfer, to IT actions such as Create, Enable, Disable.
* See, [Plan cloud HR application to Azure Active Directory user provisioning](../app-provisioning/plan-cloud-hr-provision.md)
* **Azure AD B2B collaboration** - Improve external-user collaboration with secure access to applications:
- * See, [B2B collaboration overview](/azure/active-directory/external-identities/what-is-b2b)
+ * See, [B2B collaboration overview](../external-identities/what-is-b2b.md)
* See, [Plan an Azure Active Directory B2B collaboration deployment](../fundamentals/secure-external-access-resources.md)

## Governance and reporting
Use the following list to learn about governance and reporting. Items in the lis
Learn more: [Secure access for a connected world - meet Microsoft Entra](https://www.microsoft.com/en-us/security/blog/?p=114039)

* **Privileged identity management (PIM)** - Manage privileged administrative roles across Azure AD, Azure resources, and other Microsoft Online Services. Use it for just-in-time access, request approval workflows, and fully integrated access reviews to help prevent malicious activities:
- * See, [Start using Privileged Identity Management](/azure/active-directory/privileged-identity-management/pim-getting-started)
+ * See, [Start using Privileged Identity Management](../privileged-identity-management/pim-getting-started.md)
* See, [Plan a Privileged Identity Management deployment](../privileged-identity-management/pim-deployment-plan.md)
* **Reporting and monitoring** - Your Azure AD reporting and monitoring solution design has dependencies and constraints: legal, security, operations, environment, and processes.
* See, [Azure Active Directory reporting and monitoring deployment dependencies](../reports-monitoring/plan-monitoring-and-reporting.md)
Learn more: [Secure access for a connected world - meet Microsoft Entra](https:/
* **Identity governance** - Meet your compliance and risk management objectives for access to critical applications. Learn how to enforce accurate access.
* See, [Govern access for applications in your environment](../governance/identity-governance-applications-prepare.md)
-Learn more: [Azure governance documentation](/azure/governance/)
+Learn more: [Azure governance documentation](../../governance/index.yml)
## Best practices for a pilot
In your first phase, target IT, usability, and other users who can test and prov
Widen the pilot to larger groups of users by using dynamic membership, or by manually adding users to the targeted group(s).
-Learn more: [Dynamic membership rules for groups in Azure Active Directory](../enterprise-users/groups-dynamic-membership.md)]
+Learn more: [Dynamic membership rules for groups in Azure Active Directory](../enterprise-users/groups-dynamic-membership.md)
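As a minimal PowerShell sketch of creating such a pilot group with a dynamic membership rule, assuming the AzureAD PowerShell module is installed — the group name and department filter are illustrative placeholders:

```powershell
# Minimal sketch, assuming the AzureAD PowerShell module; names and rule are illustrative.
Connect-AzureAD

New-AzureADMSGroup -DisplayName "SSO Pilot - Wave 2" `
    -Description "Second pilot wave, scoped by department" `
    -MailEnabled $false -MailNickname "ssopilotwave2" -SecurityEnabled $true `
    -GroupTypes "DynamicMembership" `
    -MembershipRule '(user.department -eq "Sales")' `
    -MembershipRuleProcessingState "On"
```

Because the rule is evaluated continuously, new hires in the targeted department join the pilot automatically, with no manual group maintenance.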
active-directory Azure Ad Data Residency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/azure-ad-data-residency.md
# Azure Active Directory and data residency
-Azure AD is an Identity as a Service (IDaaS) solution that stores and manages identity and access data in the cloud. You can use the data to enable and manage access to cloud services, achieve mobility scenarios, and secure your organization. An instance of the Azure AD service, called a [tenant](/azure/active-directory/develop/developer-glossary#tenant), is an isolated set of directory object data that the customer provisions and owns.
+Azure AD is an Identity as a Service (IDaaS) solution that stores and manages identity and access data in the cloud. You can use the data to enable and manage access to cloud services, achieve mobility scenarios, and secure your organization. An instance of the Azure AD service, called a [tenant](../develop/developer-glossary.md#tenant), is an isolated set of directory object data that the customer provisions and owns.
## Core Store
Use the following table to see Azure AD cloud solution models based on infrastru
Learn more:
-* [Customer data storage and processing for European customers in Azure AD](/azure/active-directory/fundamentals/active-directory-data-storage-eu)
+* [Customer data storage and processing for European customers in Azure AD](./active-directory-data-storage-eu.md)
* Power BI: [Azure Active Directory ΓÇô Where is your data located?](https://aka.ms/aaddatamap) * [What is the Azure Active Directory architecture?](https://aka.ms/aadarch) * [Find the Azure geography that meets your needs](https://azure.microsoft.com/overview/datacenters/how-to-choose/)
Learn more: [Azure Active Directory, Product overview](https://www.microsoft.com
|||| |Azure AD Authentication Service|This service is stateless. The data for authentication is in the Azure AD Core Store. It has no directory data. Azure AD Authentication Service generates log data in Azure storage, and in the data center where the service instance runs. When users attempt to authenticate using Azure AD, they're routed to an instance in the geographically nearest data center that is part of its Azure AD logical region. |In geo location| |Azure AD Identity and Access Management (IAM) Services|**User and management experiences**: The Azure AD management experience is stateless and has no directory data. It generates log and usage data stored in Azure Tables storage. The user experience is like the Azure portal. <br>**Identity management business logic and reporting services**: These services have locally cached data storage for groups and users. The services generate log and usage data that goes to Azure Tables storage, Azure SQL, and Microsoft Elastic Search reporting services. |In geo location|
-|Azure AD Multi-Factor Authentication (MFA)|For details about MFA-operations data storage and retention, see [Data residency and customer data for Azure AD multifactor authentication](/azure/active-directory/authentication/concept-mfa-data-residency). Azure AD MFA logs the User Principal Name (UPN), voice-call telephone numbers, and SMS challenges. For challenges to mobile app modes, the service logs the UPN and a unique device token. Data centers in the North America region store Azure AD MFA, and the logs it creates.|North America|
+|Azure AD Multi-Factor Authentication (MFA)|For details about MFA-operations data storage and retention, see [Data residency and customer data for Azure AD multifactor authentication](../authentication/concept-mfa-data-residency.md). Azure AD MFA logs the User Principal Name (UPN), voice-call telephone numbers, and SMS challenges. For challenges to mobile app modes, the service logs the UPN and a unique device token. Data centers in the North America region store Azure AD MFA, and the logs it creates.|North America|
|Azure AD Domain Services|See regions where Azure AD Domain Services is published on [Products available by region](https://azure.microsoft.com/regions/services/). The service holds system metadata globally in Azure Tables, and it contains no personal data.|In geo location| |Azure AD Connect Health|Azure AD Connect Health generates alerts and reports in Azure Tables storage and blob storage.|In geo location| |Azure AD dynamic membership for groups, Azure AD self-service group management|Azure Tables storage holds dynamic membership rule definitions.|In geo location|
Learn more: [Azure Active Directory, Product overview](https://www.microsoft.com
|Azure AD provisioning|Azure AD provisioning creates, removes, and updates users in systems, such as software as a service (SaaS) applications. It manages user creation in Azure AD and on-premises AD from cloud HR sources, like Workday. The service stores its configuration in an Azure Cosmos DB, which stores the group membership data for the user directory it keeps. Cosmos DB replicates the database to multiple datacenters in the same region as the tenant, which isolates the data, according to the Azure AD cloud solution model. Replication creates high availability and multiple reading and writing endpoints. Cosmos DB has encryption on the database information, and the encryption keys are stored in the secrets storage for Microsoft.|In geo location| |Azure AD business-to-business (B2B) collaboration|Azure AD B2B collaboration has no directory data. Users and other directory objects in a B2B relationship, with another tenant, result in user data copied in other tenants, which might have data residency implications.|In geo location| |Azure AD Identity Protection|Azure AD Identity Protection uses real-time user log-in data, with multiple signals from company and industry sources, to feed its machine-learning systems that detect anomalous logins. Personal data is scrubbed from real-time log-in data before it's passed to the machine learning system. The remaining log-in data identifies potentially risky usernames and logins. After analysis, the data goes to Microsoft reporting systems. Risky logins and usernames appear in reporting for Administrators.|In geo location|
-|Azure AD managed identities for Azure resources|Azure AD managed identities for Azure resources with managed identities systems can authenticate to Azure services, without storing credentials. Rather than use username and password, managed identities authenticate to Azure services with certificates. The service writes certificates it issues in Azure Cosmos DB in the East US region, which fail over to another region, as needed. Azure Cosmos DB geo-redundancy occurs by global data replication. Database replication puts a read-only copy in each region that Azure AD managed identities runs. To learn more, see [Azure services that can use managed identities to access other services](/azure/active-directory/managed-identities-azure-resources/managed-identities-status#azure-services-that-support-managed-identities-for-azure-resources). Microsoft isolates each Cosmos DB instance in an Azure AD cloud solution model. </br> The resource provider, such as the virtual machine (VM) host, stores the certificate for authentication, and identity flows, with other Azure services. The service stores its master key to access Azure Cosmos DB in a datacenter secrets management service. Azure Key Vault stores the master encryption keys.|In geo location|
-|Azure Active Directory B2C |[Azure AD B2C](/azure/active-directory-b2c/data-residency) is an identity management service to customize and manage how customers sign up, sign in, and manage their profiles when using applications. B2C uses the Core Store to keep user identity information. The Core Store database follows known storage, replication, deletion, and data-residency rules. B2C uses an Azure Cosmos DB system to store service policies and secrets. Cosmos DB has encryption and replication services on database information. Its encryption key is stored in the secrets storage for Microsoft. Microsoft isolates Cosmos DB instances in an Azure AD cloud solution model.|Customer-selectable geo location|
+|Azure AD managed identities for Azure resources|Azure AD managed identities for Azure resources with managed identities systems can authenticate to Azure services, without storing credentials. Rather than use username and password, managed identities authenticate to Azure services with certificates. The service writes certificates it issues in Azure Cosmos DB in the East US region, which fail over to another region, as needed. Azure Cosmos DB geo-redundancy occurs by global data replication. Database replication puts a read-only copy in each region that Azure AD managed identities runs. To learn more, see [Azure services that can use managed identities to access other services](../managed-identities-azure-resources/managed-identities-status.md). Microsoft isolates each Cosmos DB instance in an Azure AD cloud solution model. </br> The resource provider, such as the virtual machine (VM) host, stores the certificate for authentication, and identity flows, with other Azure services. The service stores its master key to access Azure Cosmos DB in a datacenter secrets management service. Azure Key Vault stores the master encryption keys.|In geo location|
+|Azure Active Directory B2C |[Azure AD B2C](../../active-directory-b2c/data-residency.md) is an identity management service to customize and manage how customers sign up, sign in, and manage their profiles when using applications. B2C uses the Core Store to keep user identity information. The Core Store database follows known storage, replication, deletion, and data-residency rules. B2C uses an Azure Cosmos DB system to store service policies and secrets. Cosmos DB has encryption and replication services on database information. Its encryption key is stored in the secrets storage for Microsoft. Microsoft isolates Cosmos DB instances in an Azure AD cloud solution model.|Customer-selectable geo location|
## Related resources
active-directory Service Accounts Principal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/service-accounts-principal.md
Learn more about securing service accounts:
Conditional Access:
-Use Conditional Access to block service principals from untrusted locations. See, [Create a location-based Conditional Access policy](/azure/active-directory/conditional-access/workload-identity#create-a-location-based-conditional-access-policy).
-
+Use Conditional Access to block service principals from untrusted locations. See, [Create a location-based Conditional Access policy](../conditional-access/workload-identity.md#create-a-location-based-conditional-access-policy).
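As a rough illustration of what such a policy looks like, here's a hedged Microsoft Graph PowerShell sketch. It assumes the beta Conditional Access schema that exposes `clientApplications` for workload identities; the policy name, service principal object ID, and named-location ID are placeholders:

```powershell
# Sketch only: assumes the beta Graph Conditional Access schema for workload identities.
Connect-MgGraph -Scopes "Policy.ReadWrite.ConditionalAccess"

$policy = @{
    displayName   = "Block service principal outside trusted locations"  # illustrative
    state         = "enabledForReportingButNotEnforced"                   # report-only first
    conditions    = @{
        clientApplications = @{ includeServicePrincipals = @("<service-principal-object-id>") }
        applications       = @{ includeApplications = @("All") }
        locations          = @{
            includeLocations = @("All")
            excludeLocations = @("<trusted-named-location-id>")
        }
    }
    grantControls = @{ operator = "OR"; builtInControls = @("block") }
}

Invoke-MgGraphRequest -Method POST `
    -Uri "https://graph.microsoft.com/beta/identity/conditionalAccess/policies" `
    -Body ($policy | ConvertTo-Json -Depth 10)
```

Starting in report-only mode lets you confirm which sign-ins the policy would block before enforcing it.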
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new.md
In this Public Preview refresh, we have enhanced the user experience with an upd
For more information, see: [Conditional Access templates (Preview)](../conditional-access/concept-conditional-access-policy-common.md). ---
-### Public Preview - Enabling customization capabilities for the conditional error messages in Company Branding.
---
-**Type:** New feature
-**Service category:** Other
-**Product capability:** End User Experiences
-
-Updates to the Company Branding functionality on the Azure AD/Microsoft 365 login experience, to allow customizing conditional access (CA) error messages. For more information, see: [Company Branding](../fundamentals/customize-branding.md).
-- ### Public Preview - Admins can restrict their users from creating tenants
Azure AD supports provisioning users into applications hosted on-premises or in
In December 2022 we have added the following 44 new applications in our App gallery with Federation support
-[Bionexo IDM](https://login.bionexo.com/), [SMART Meeting Pro](https://www.smarttech.com/en/business/software/meeting-pro), [Venafi Control Plane – Datacenter](/azure/active-directory/saas-apps/venafi-control-plane-tutorial), [HighQ](../saas-apps/highq-tutorial.md), [Drawboard PDF](https://pdf.drawboard.com/), [ETU Skillsims](../saas-apps/etu-skillsims-tutorial.md), [TencentCloud IDaaS](../saas-apps/tencent-cloud-idaas-tutorial.md), [TeamHeadquarters Email Agent OAuth](https://thq.entry.com/), [Verizon MDM](https://verizonmdm.vzw.com/), [QRadar SOAR](../saas-apps/qradar-soar-tutorial.md), [Tripwire Enterprise](../saas-apps/tripwire-enterprise-tutorial.md), [Cisco Unified Communications Manager](../saas-apps/cisco-unified-communications-manager-tutorial.md), [Howspace](https://login.in.howspace.com/), [Flipsnack SAML](../saas-apps/flipsnack-saml-tutorial.md), [Albert](http://www.albertinvent.com/), [Altinget.no](https://www.altinget.no/), [Coveo Hosted Services](../saas-apps/coveo-hosted-services-tutorial.md), [Cybozu(cybozu.com)](../saas-apps/cybozu-tutorial.md), [BombBomb](https://app.bombbomb.com/app), [VMware Identity Service](../saas-apps/vmware-identity-service-tutorial.md), [Cimmaron Exchange Sync - Delegated](https://cimmaronsoftware.com/Mortgage-CRM-Exchange-Sync.aspx), [HexaSync](https://app-az.hexasync.com/login), [Trifecta Teams](https://app.trifectateams.net/), [VerosoftDesign](https://verosoft-design.vercel.app/login), [Mazepay](https://app.mazepay.com/), [Wistia](../saas-apps/wistia-tutorial.md), [Begin.AI](https://app.begin.ai/), [WebCE](../saas-apps/webce-tutorial.md), [Dream Broker Studio](https://dreambroker.com/studio/login/), [PKSHA Chatbot](../saas-apps/pksha-chatbot-tutorial.md), [PGM-BCP](https://ups-pgm-bcp.4gfactor.com/azure/), [ChartDesk SSO](../saas-apps/chartdesk-sso-tutorial.md), [Elsevier SP](../saas-apps/elsevier-sp-tutorial.md), [GreenCommerce IdentityServer](https://identity.jem-id.nl/Account/Login), [Fullview](https://app.fullview.io/sign-in), [Aqua Platform](../saas-apps/aqua-platform-tutorial.md), [SpedTrack](../saas-apps/spedtrack-tutorial.md), [Pinpoint](https://pinpoint.ddiworld.com/psg2?sso=true), [Darzin Outlook Add-in](https://outlook.darzin.com/graph-login.html), [Simply Stakeholders Outlook Add-in](https://outlook.simplystakeholders.com/graph-login.html), [tesma](../saas-apps/tesma-tutorial.md), [Parkable](../saas-apps/parkable-tutorial.md), [Unite Us](../saas-apps/unite-us-tutorial.md)
+[Bionexo IDM](https://login.bionexo.com/), [SMART Meeting Pro](https://www.smarttech.com/en/business/software/meeting-pro), [Venafi Control Plane – Datacenter](../saas-apps/venafi-control-plane-tutorial.md), [HighQ](../saas-apps/highq-tutorial.md), [Drawboard PDF](https://pdf.drawboard.com/), [ETU Skillsims](../saas-apps/etu-skillsims-tutorial.md), [TencentCloud IDaaS](../saas-apps/tencent-cloud-idaas-tutorial.md), [TeamHeadquarters Email Agent OAuth](https://thq.entry.com/), [Verizon MDM](https://verizonmdm.vzw.com/), [QRadar SOAR](../saas-apps/qradar-soar-tutorial.md), [Tripwire Enterprise](../saas-apps/tripwire-enterprise-tutorial.md), [Cisco Unified Communications Manager](../saas-apps/cisco-unified-communications-manager-tutorial.md), [Howspace](https://login.in.howspace.com/), [Flipsnack SAML](../saas-apps/flipsnack-saml-tutorial.md), [Albert](http://www.albertinvent.com/), [Altinget.no](https://www.altinget.no/), [Coveo Hosted Services](../saas-apps/coveo-hosted-services-tutorial.md), [Cybozu(cybozu.com)](../saas-apps/cybozu-tutorial.md), [BombBomb](https://app.bombbomb.com/app), [VMware Identity Service](../saas-apps/vmware-identity-service-tutorial.md), [Cimmaron Exchange Sync - Delegated](https://cimmaronsoftware.com/Mortgage-CRM-Exchange-Sync.aspx), [HexaSync](https://app-az.hexasync.com/login), [Trifecta Teams](https://app.trifectateams.net/), [VerosoftDesign](https://verosoft-design.vercel.app/login), [Mazepay](https://app.mazepay.com/), [Wistia](../saas-apps/wistia-tutorial.md), [Begin.AI](https://app.begin.ai/), [WebCE](../saas-apps/webce-tutorial.md), [Dream Broker Studio](https://dreambroker.com/studio/login/), [PKSHA Chatbot](../saas-apps/pksha-chatbot-tutorial.md), [PGM-BCP](https://ups-pgm-bcp.4gfactor.com/azure/), [ChartDesk SSO](../saas-apps/chartdesk-sso-tutorial.md), [Elsevier SP](../saas-apps/elsevier-sp-tutorial.md), [GreenCommerce IdentityServer](https://identity.jem-id.nl/Account/Login), [Fullview](https://app.fullview.io/sign-in), [Aqua Platform](../saas-apps/aqua-platform-tutorial.md), [SpedTrack](../saas-apps/spedtrack-tutorial.md), [Pinpoint](https://pinpoint.ddiworld.com/psg2?sso=true), [Darzin Outlook Add-in](https://outlook.darzin.com/graph-login.html), [Simply Stakeholders Outlook Add-in](https://outlook.simplystakeholders.com/graph-login.html), [tesma](../saas-apps/tesma-tutorial.md), [Parkable](../saas-apps/parkable-tutorial.md), [Unite Us](../saas-apps/unite-us-tutorial.md)
You can also find the documentation of all the applications from here https://aka.ms/AppsTutorial,
We recognize that changing libraries is not an easy task, and cannot be accompli
### How to find out which applications in my tenant are using ADAL?
-Refer to our post on [Microsoft Q&A](/answers/questions/360928/information-how-to-find-apps-using-adal-in-your-te.html) for details on identifying ADAL apps with the help of [Azure Workbooks](/azure/azure-monitor/visualize/workbooks-overview).
-### If I'm using ADAL, what can I expect after the deadline?
+Refer to our post on [Microsoft Q&A](/answers/questions/360928/information-how-to-find-apps-using-adal-in-your-te.html) for details on identifying ADAL apps with the help of [Azure Workbooks](../../azure-monitor/visualize/workbooks-overview.md).
+### If I'm using ADAL, what can I expect after the deadline?
- There will be no new releases (security or otherwise) to the library after June 2023. - We will not be accepting any incident reports or support requests for ADAL. ADAL to MSAL migration support would continue.
Developers can now use managed identities for their software workloads running a
For more information, see: - [Configure a user-assigned managed identity to trust an external identity provider (preview)](../develop/workload-identity-federation-create-trust-user-assigned-managed-identity.md) - [Workload identity federation](../develop/workload-identity-federation.md)-- [Use an Azure AD workload identity (preview) on Azure Kubernetes Service (AKS)](/azure/aks/workload-identity-overview)
+- [Use an Azure AD workload identity (preview) on Azure Kubernetes Service (AKS)](../../aks/workload-identity-overview.md)
Authenticator version 6.6.8 and higher on iOS will be FIPS 140 compliant for all
In November 2022, we've added the following 22 new applications in our App gallery with Federation support
-[Adstream](/azure/active-directory/saas-apps/adstream-tutorial), [Databook](/azure/active-directory/saas-apps/databook-tutorial), [Ecospend IAM](https://ecospend.com/), [Digital Pigeon](/azure/active-directory/saas-apps/digital-pigeon-tutorial), [Drawboard Projects](/azure/active-directory/saas-apps/drawboard-projects-tutorial), [Vellum](https://www.vellum.ink/request-demo), [Veracity](https://aie-veracity.com/connect/azure), [Microsoft OneNote to Bloomberg Note Sync](https://www.bloomberg.com/professional/support/software-updates/), [DX NetOps Portal](/azure/active-directory/saas-apps/dx-netops-portal-tutorial), [itslearning Outlook integration](https://itslearning.com/global/), [Tranxfer](/azure/active-directory/saas-apps/tranxfer-tutorial), [Occupop](https://app.occupop.com/), [Nialli Workspace](https://ws.nialli.com/), [Tideways](https://app.tideways.io/login), [SOWELL](https://manager.sowellapp.com/#/?sso=true), [Prewise Learning](https://prewiselearning.com/), [CAPTOR for Intune](https://www.inkscreen.com/microsoft), [wayCloud Platform](https://app.way-cloud.de/login), [Nura Space Meeting Room](https://play.google.com/store/apps/details?id=com.meetingroom.prod), [Flexopus Exchange Integration](https://help.flexopus.com/de/microsoft-graph-integration), [Ren Systems](https://app.rensystems.com/login), [Nudge Security](https://www.nudgesecurity.io/login)
+[Adstream](../saas-apps/adstream-tutorial.md), [Databook](../saas-apps/databook-tutorial.md), [Ecospend IAM](https://ecospend.com/), [Digital Pigeon](../saas-apps/digital-pigeon-tutorial.md), [Drawboard Projects](../saas-apps/drawboard-projects-tutorial.md), [Vellum](https://www.vellum.ink/request-demo), [Veracity](https://aie-veracity.com/connect/azure), [Microsoft OneNote to Bloomberg Note Sync](https://www.bloomberg.com/professional/support/software-updates/), [DX NetOps Portal](../saas-apps/dx-netops-portal-tutorial.md), [itslearning Outlook integration](https://itslearning.com/global/), [Tranxfer](../saas-apps/tranxfer-tutorial.md), [Occupop](https://app.occupop.com/), [Nialli Workspace](https://ws.nialli.com/), [Tideways](https://app.tideways.io/login), [SOWELL](https://manager.sowellapp.com/#/?sso=true), [Prewise Learning](https://prewiselearning.com/), [CAPTOR for Intune](https://www.inkscreen.com/microsoft), [wayCloud Platform](https://app.way-cloud.de/login), [Nura Space Meeting Room](https://play.google.com/store/apps/details?id=com.meetingroom.prod), [Flexopus Exchange Integration](https://help.flexopus.com/de/microsoft-graph-integration), [Ren Systems](https://app.rensystems.com/login), [Nudge Security](https://www.nudgesecurity.io/login)
You can also find the documentation of all the applications from here https://aka.ms/AppsTutorial,
Beginning September 30, 2024, Azure Multi-Factor Authentication Server deploymen
-### General Availability - Change of Default User Consent Settings
---
-**Type:** New feature
-**Service category:** Enterprise Apps
-**Product capability:** Developer Experience
-
-Starting Sept 30th, 2022, Microsoft will require all new tenants to follow a new user consent configuration. While this won't impact any existing tenants that were created before September 30, 2022, all new tenants created after September 30, 2022, will have the default setting of "Enable automatic updates (Recommendation)" under User consent settings. This change reduces the risk of malicious applications attempting to trick users into granting them access to your organization's data. For more information, see: [Configure how users consent to applications](../manage-apps/configure-user-consent.md).
--- ### Public Preview - Lifecycle Workflows is now available
With this new parity update, customers can now integrate non-gallery application
For more information, see [Claims mapping policy - Microsoft Entra | Microsoft Docs](../develop/reference-claims-mapping-policy-type.md#claim-schema-entry-elements). -+
active-directory How To Connect Group Writeback Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-group-writeback-enable.md
Group writeback requires enabling both the original and new versions of the feat
> >The enhanced group writeback feature is enabled on the tenant and not per Azure AD Connect client instance. Be sure that all Azure AD Connect client instances are updated to build version 1.6.4.0 or later.
+> [!NOTE]
+> If you don't want to write back all existing Microsoft 365 groups to Active Directory, you need to change the group writeback default behavior before performing the steps in this article to enable the feature. See [Modify Azure AD Connect group writeback default behavior](how-to-connect-modify-group-writeback.md).
+> Also, the new and original versions of the feature need to be enabled in the order documented. If the original feature is enabled first, all existing Microsoft 365 groups will be written back to Active Directory.
+ ### Enable group writeback by using PowerShell 1. On your Azure AD Connect server, open a PowerShell prompt as an administrator.
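The enable step comes down to a couple of cmdlets from the ADSync module that ships with Azure AD Connect; a minimal sketch, run in that elevated prompt:

```powershell
# Run on the Azure AD Connect server in an elevated PowerShell prompt.
Import-Module ADSync

# Enable the enhanced (V2) group writeback feature for the tenant.
Set-ADSyncAADCompanyFeature -GroupWritebackV2 $true

# Verify the current feature state.
Get-ADSyncAADCompanyFeature
```

Because the feature is tenant-wide, running this once affects every Azure AD Connect instance connected to the tenant.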
active-directory How To Connect Group Writeback V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-group-writeback-v2.md
You can modify the default behavior as follows:
- Microsoft 365 groups with up to 250,000 members can be written back to on-premises. If you plan to make changes to the default behavior, we recommend that you do so before you enable group writeback. However, you can still modify the default behavior if group writeback is already enabled. For more information, see [Modify Azure AD Connect group writeback default behavior](how-to-connect-modify-group-writeback.md).
-
+
+> [!NOTE]
+> You need to make these changes before enabling group writeback; otherwise, all existing Microsoft 365 groups will be automatically written back to Active Directory. Also, the new and original versions of the feature need to be enabled in the order documented. If the original feature is enabled first, all existing Microsoft 365 groups will be written back to Active Directory.
+ ## Understand limitations of public preview Although this release has undergone extensive testing, you might still encounter issues. One of the goals of this public preview release is to find and fix any issues before the feature moves to general availability. Please also note that any public preview functionality can still receive breaking changes, which may require you to make changes to your configuration to continue using this feature. We may also decide to change or remove certain functionality without prior notice.
These limitations and known issues are specific to group writeback:
- [Modify Azure AD Connect group writeback default behavior](how-to-connect-modify-group-writeback.md) - [Enable Azure AD Connect group writeback](how-to-connect-group-writeback-enable.md)-- [Disable Azure AD Connect group writeback](how-to-connect-group-writeback-disable.md)
+- [Disable Azure AD Connect group writeback](how-to-connect-group-writeback-disable.md)
active-directory F5 Bigip Deployment Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-bigip-deployment-guide.md
In this tutorial, learn to deploy BIG-IP Virtual Edition (VE) in Azure infrastru
- A prepared BIG-IP virtual machine (VM) to model a secure hybrid access (SHA) proof-of-concept - A staging instance to test new BIG-IP system updates and hotfixes
-Learn more: [SHA: Secure legacy apps with Azure Active Directory](/azure/active-directory/manage-apps/secure-hybrid-access)
+Learn more: [SHA: Secure legacy apps with Azure Active Directory](./secure-hybrid-access.md)
## Prerequisites
Get-AzVmSnapshot -ResourceGroupName '<E.g.contoso-RG>' -VmName '<E.g.BIG-IP-VM>'
## Next steps
-Select a [deployment scenario](f5-aad-integration.md) and start your implementation.
+Select a [deployment scenario](f5-aad-integration.md) and start your implementation.
active-directory Secure Hybrid Access Integrations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/secure-hybrid-access-integrations.md
We recommend you become a verified publisher, so customers know you're the trust
## Enable single sign-on for IT admins
-There are several ways to enable SSO for IT administrators to your solution. See, [Plan a single sign-on deployment, SSO options](/azure/active-directory/manage-apps/plan-sso-deployment#single-sign-on-options).
+There are several ways to enable SSO for IT administrators to your solution. See, [Plan a single sign-on deployment, SSO options](./plan-sso-deployment.md#single-sign-on-options).
Microsoft Graph uses OIDC/OAuth. Customers use OIDC to sign in to your solution. Use the JSON Web Token (JWT) Azure AD issues to interact with Microsoft Graph. See, [OpenID Connect on the Microsoft identity platform](../develop/v2-protocols-oidc.md).
-If your solution uses SAML for IT administrator SSO, the SAML token won't enable your solution to interact with Microsoft Graph. You can use SAML for IT administrator SSO, but your solution needs to support OIDC integration with Azure AD, so it can get a JWT from Azure AD to interact with Microsoft Graph. See, [How the Microsoft identity platform uses the SAML protocol](/azure/active-directory/develop/active-directory-saml-protocol-reference).
+If your solution uses SAML for IT administrator SSO, the SAML token won't enable your solution to interact with Microsoft Graph. You can use SAML for IT administrator SSO, but your solution needs to support OIDC integration with Azure AD, so it can get a JWT from Azure AD to interact with Microsoft Graph. See, [How the Microsoft identity platform uses the SAML protocol](../develop/active-directory-saml-protocol-reference.md).
You can use one of the following SAML approaches:
https://login.microsoftonline.com/{Tenant_ID}/federationmetadata/2007-06/federat
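For reference, the tenant's federation metadata document can be fetched from this well-known endpoint; a small sketch in which the tenant ID is a placeholder:

```powershell
# Fetch the tenant's federation metadata (well-known Azure AD endpoint).
$tenantId = "<your-tenant-id>"   # placeholder
Invoke-WebRequest `
    -Uri "https://login.microsoftonline.com/$tenantId/federationmetadata/2007-06/federationmetadata.xml" `
    -OutFile "federationmetadata.xml"
```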
### Assign users and groups
-After you publish the application to Azure AD, you can assign the app to users and groups to ensure it appears on the My Apps portal. This assignment is on the service principal object generated when you created the application. See, [My Apps portal overview](/azure/active-directory/manage-apps/myapps-overview).
+After you publish the application to Azure AD, you can assign the app to users and groups to ensure it appears on the My Apps portal. This assignment is on the service principal object generated when you created the application. See, [My Apps portal overview](./myapps-overview.md).
Get the `AppRole` instances that the application might have associated with it. It's common for SaaS applications to have various `AppRole` instances associated with them. Typically, for custom applications, there's one default `AppRole` instance. Get the `AppRole` instance ID you want to assign:
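A minimal sketch of that lookup and assignment with the AzureAD PowerShell module follows; the app display name and user UPN are illustrative placeholders:

```powershell
# Minimal sketch, assuming the AzureAD PowerShell module; names are illustrative.
Connect-AzureAD

# Find the app's service principal and list its AppRole IDs.
$sp = Get-AzureADServicePrincipal -Filter "displayName eq 'My SHA App'"
$sp.AppRoles | Format-Table DisplayName, Id

# Assign a user to one of those roles on the service principal.
$user = Get-AzureADUser -ObjectId "user@contoso.com"
New-AzureADUserAppRoleAssignment -ObjectId $user.ObjectId `
    -PrincipalId $user.ObjectId `
    -ResourceId $sp.ObjectId `
    -Id $sp.AppRoles[0].Id    # use [Guid]::Empty for the default access role
```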
The following software-defined perimeter (SDP) solutions providers connect with
* **Strata Maverics Identity Orchestrator** * [Integrate Azure AD SSO with Maverics Identity Orchestrator SAML Connector](../saas-apps/maverics-identity-orchestrator-saml-connector-tutorial.md) * **Zscaler Private Access**
- * [Tutorial: Integrate Zscaler Private Access with Azure AD](../saas-apps/zscalerprivateaccess-tutorial.md)
+ * [Tutorial: Integrate Zscaler Private Access with Azure AD](../saas-apps/zscalerprivateaccess-tutorial.md)
active-directory Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/overview.md
A common challenge for developers is the management of secrets, credentials, certificates, and keys used to secure communication between services. Managed identities eliminate the need for developers to manage these credentials.
-While developers can securely store the secrets in [Azure Key Vault](../../key-vault/general/overview.md), services need a way to access Azure Key Vault. Managed identities provide an automatically managed identity in Azure Active Directory for applications to use when connecting to resources that support Azure Active Directory (Azure AD) authentication. Applications can use managed identities to obtain Azure AD tokens without having to manage any credentials.
+While developers can securely store the secrets in [Azure Key Vault](../../key-vault/general/overview.md), services need a way to access Azure Key Vault. Managed identities provide an automatically managed identity in Azure Active Directory (Azure AD) for applications to use when connecting to resources that support Azure AD authentication. Applications can use managed identities to obtain Azure AD tokens without having to manage any credentials.
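As a concrete illustration, here's a sketch of the token flow from inside an Azure VM that has a managed identity assigned, using the documented instance metadata service (IMDS) endpoint; the vault and secret names are placeholders:

```powershell
# Run from inside an Azure VM that has a managed identity assigned.
# Request a token for Key Vault from the instance metadata service (IMDS).
$token = (Invoke-RestMethod -Method GET -Headers @{ Metadata = "true" } `
    -Uri "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fvault.azure.net").access_token

# Use the bearer token against Key Vault; vault and secret names are placeholders.
Invoke-RestMethod -Headers @{ Authorization = "Bearer $token" } `
    -Uri "https://<your-vault>.vault.azure.net/secrets/<secret-name>?api-version=7.3"
```

No credential appears anywhere in the code or configuration; the platform issues and rotates the identity's certificate behind the scenes.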
The following video shows how you can use managed identities:</br>
active-directory Groups Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/groups-concept.md
Previously updated : 09/26/2022 Last updated : 01/11/2023
The following are known issues with role-assignable groups:
- Use the new [Exchange admin center](/exchange/exchange-admin-center) for role assignments via group membership. The old Exchange admin center doesn't support this feature. If accessing the old Exchange admin center is required, assign the eligible role directly to the user (not via role-assignable groups). Exchange PowerShell cmdlets will work as expected. - If an administrator role is assigned to a role-assignable group instead of individual users, members of the group will not be able to access Rules, Organization, or Public Folders in the new [Exchange admin center](/exchange/exchange-admin-center). The workaround is to assign the role directly to users instead of the group. - Azure Information Protection Portal (the classic portal) doesn't recognize role membership via group yet. You can [migrate to the unified sensitivity labeling platform](/azure/information-protection/configure-policy-migrate-labels) and then use the Office 365 Security & Compliance center to use group assignments to manage roles.-- [Apps admin center](https://config.office.com/) doesn't support this feature yet. Assign the Office Apps Administrator role directly to users. ## License requirements
active-directory Netsuite Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/netsuite-provisioning-tutorial.md
- Title: 'Tutorial: Configure NetSuite OneWorld for automatic user provisioning with Azure Active Directory | Microsoft Docs'
-description: Learn how to configure single sign-on between Azure Active Directory and NetSuite OneWorld.
-Previously updated: 11/21/2022
-# Tutorial: Configuring NetSuite for automatic user provisioning
-
-The objective of this tutorial is to show you the steps you need to perform in NetSuite OneWorld and Azure AD to automatically provision and de-provision user accounts from Azure AD to NetSuite.
-
-> [!WARNING]
-> This provisioning integration will stop working with the release of NetSuite's Spring 2021 update due to a change to the NetSuite APIs that are used by Microsoft to provision users into NetSuite. This update will reach NetSuite customers between February and April of 2021. As a result of this, the provisioning functionality of the NetSuite application in the Azure Active Directory Enterprise App Gallery will be removed soon. The application's SSO functionality will remain intact. Microsoft is working with NetSuite to build a new modernized provisioning integration, but there is currently no ETA on when this will be completed.
-
-## Prerequisites
-
-The scenario outlined in this tutorial assumes that you already have the following items:
-
-* An Azure Active directory tenant.
-* A NetSuite OneWorld subscription. Note that automatic user provisioning is presently only supported with NetSuite OneWorld.
-* A user account in NetSuite with administrator permissions.
-* Integration with Azure AD requires a 2FA exemption. Please contact NetSuite's support team to request this exemption.
-
-## Assigning users to NetSuite OneWorld
-
-Azure Active Directory uses a concept called "assignments" to determine which users should receive access to selected apps. In the context of automatic user account provisioning, only the users and groups that have been "assigned" to an application in Azure AD are synchronized.
-
-Before configuring and enabling the provisioning service, you need to decide what users and/or groups in Azure AD represent the users who need access to your NetSuite app. Once decided, you can assign these users to your NetSuite app by following the instructions here:
-
-[Assign a user or group to an enterprise app](../manage-apps/assign-user-or-group-access-portal.md)
-
-### Important tips for assigning users to NetSuite OneWorld
-
-* It is recommended that a single Azure AD user is assigned to NetSuite to test the provisioning configuration. Additional users and/or groups may be assigned later.
-
-* When assigning a user to NetSuite, you must select a valid user role. The "Default Access" role does not work for provisioning.
-
-## Enable User Provisioning
-
-This section guides you through connecting your Azure AD to NetSuite's user account provisioning API, and configuring the provisioning service to create, update, and disable assigned user accounts in NetSuite based on user and group assignment in Azure AD.
-
-> [!TIP]
-> You may also choose to enable SAML-based Single Sign-On for NetSuite, following the instructions provided in [Azure portal](https://portal.azure.com). Single sign-on can be configured independently of automatic provisioning, though these two features complement each other.
-
-### To configure user account provisioning:
-
-The objective of this section is to outline how to enable user provisioning of Active Directory user accounts to NetSuite.
-
-1. In the [Azure portal](https://portal.azure.com), browse to the **Azure Active Directory > Enterprise Apps > All applications** section.
-
-1. If you have already configured NetSuite for single sign-on, search for your instance of NetSuite using the search field. Otherwise, select **Add** and search for **NetSuite** in the application gallery. Select NetSuite from the search results, and add it to your list of applications.
-
-1. Select your instance of NetSuite, then select the **Provisioning** tab.
-
-1. Set the **Provisioning Mode** to **Automatic**.
-
- ![Screenshot shows the NetSuite Provisioning page, with Provisioning Mode set to Automatic and other values you can set.](./media/netsuite-provisioning-tutorial/provisioning.png)
-
-1. Under the **Admin Credentials** section, provide the following configuration settings:
-
- a. In the **Admin User Name** textbox, type a NetSuite account name that has the **System Administrator** profile in NetSuite.com assigned.
-
- b. In the **Admin Password** textbox, type the password for this account.
-
-1. In the Azure portal, click **Test Connection** to ensure Azure AD can connect to your NetSuite app.
-
-1. In the **Notification Email** field, enter the email address of a person or group who should receive provisioning error notifications, and check the checkbox.
-
-1. Click **Save.**
-
-1. Under the Mappings section, select **Synchronize Azure Active Directory Users to NetSuite.**
-
-1. In the **Attribute Mappings** section, review the user attributes that are synchronized from Azure AD to NetSuite. Note that the attributes selected as **Matching** properties are used to match the user accounts in NetSuite for update operations. Select the Save button to commit any changes.
-
-1. To enable the Azure AD provisioning service for NetSuite, change the **Provisioning Status** to **On** in the Settings section.
-
-1. Click **Save.**
-
-This starts the initial synchronization of any users and/or groups assigned to NetSuite in the Users and Groups section. Note that the initial sync takes longer to perform than subsequent syncs, which occur approximately every 40 minutes as long as the service is running. You can use the **Synchronization Details** section to monitor progress and follow links to provisioning activity logs, which describe all actions performed by the provisioning service on your NetSuite app.
-
-For more information on how to read the Azure AD provisioning logs, see [Reporting on automatic user account provisioning](../app-provisioning/check-status-user-account-provisioning.md).
-
-## Additional resources
-
-* [Managing user account provisioning for Enterprise Apps](tutorial-list.md)
-* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
-* [Configure Single Sign-on](netsuite-tutorial.md)
active-directory Otsuka Shokai Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/otsuka-shokai-tutorial.md
- Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Otsuka Shokai | Microsoft Docs'
-description: Learn how to configure single sign-on between Azure Active Directory and Otsuka Shokai.
-------- Previously updated : 11/21/2022---
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with Otsuka Shokai
-
-In this tutorial, you'll learn how to integrate Otsuka Shokai with Azure Active Directory (Azure AD). When you integrate Otsuka Shokai with Azure AD, you can:
-
-* Control in Azure AD who has access to Otsuka Shokai.
-* Enable your users to be automatically signed-in to Otsuka Shokai with their Azure AD accounts.
-* Manage your accounts in one central location - the Azure portal.
-
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-
-## Prerequisites
-
-To get started, you need the following items:
-
-* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-* Otsuka Shokai single sign-on (SSO) enabled subscription.
-
-## Scenario description
-
-In this tutorial, you configure and test Azure AD SSO in a test environment.
-
-* Otsuka Shokai supports **IDP** initiated SSO
-
-> [!NOTE]
-> The identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
-
-## Adding Otsuka Shokai from the gallery
-
-To configure the integration of Otsuka Shokai into Azure AD, you need to add Otsuka Shokai from the gallery to your list of managed SaaS apps.
-
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
-1. On the left navigation pane, select the **Azure Active Directory** service.
-1. Navigate to **Enterprise Applications** and then select **All Applications**.
-1. To add new application, select **New application**.
-1. In the **Add from the gallery** section, type **Otsuka Shokai** in the search box.
-1. Select **Otsuka Shokai** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-
- Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
-
-## Configure and test Azure AD single sign-on for Otsuka Shokai
-
-Configure and test Azure AD SSO with Otsuka Shokai using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Otsuka Shokai.
-
-To configure and test Azure AD SSO with Otsuka Shokai, complete the following building blocks:
-
-1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
- 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
- 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
-1. **[Configure Otsuka Shokai SSO](#configure-otsuka-shokai-sso)** - to configure the single sign-on settings on application side.
- 1. **[Create Otsuka Shokai test user](#create-otsuka-shokai-test-user)** - to have a counterpart of B.Simon in Otsuka Shokai that is linked to the Azure AD representation of user.
-1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-
-## Configure Azure AD SSO
-
-Follow these steps to enable Azure AD SSO in the Azure portal.
-
-1. In the [Azure portal](https://portal.azure.com/), on the **Otsuka Shokai** application integration page, find the **Manage** section and select **single sign-on**.
-1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
-
-1. In the **Basic SAML Configuration** section, the application is pre-configured in **IDP** initiated mode and the necessary URLs are already pre-populated by Azure. Save the configuration by clicking the **Save** button.
-
-1. The Otsuka Shokai application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes, where **nameidentifier** is mapped with **user.userprincipalname**. The Otsuka Shokai application expects **nameidentifier** to be mapped with **user.objectid**, so you need to edit the attribute mapping by clicking the **Edit** icon and changing the attribute mapping.
-
- ![image](common/default-attributes.png)
-
-1. In addition to the above, the Otsuka Shokai application expects a few more attributes to be passed back in the SAML response, which are shown below. These attributes are also pre-populated, but you can review them as per your requirements.
-
- | Name | Source Attribute|
- | | |
- | Appid | `<Application ID>` |
-
- >[!NOTE]
- >`<Application ID>` is the value which you have copied from the **Properties** tab of Azure portal.
-
-### Create an Azure AD test user
-
-In this section, you'll create a test user in the Azure portal called B.Simon.
-
-1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
-1. Select **New user** at the top of the screen.
-1. In the **User** properties, follow these steps:
- 1. In the **Name** field, enter `B.Simon`.
- 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
- 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
- 1. Click **Create**.
-
-### Assign the Azure AD test user
-
-In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Otsuka Shokai.
-
-1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
-1. In the applications list, select **Otsuka Shokai**.
-1. In the app's overview page, find the **Manage** section and select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add User link](common/add-assign-user.png)
-
-1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
-1. In the **Add Assignment** dialog, click the **Assign** button.
-
-## Configure Otsuka Shokai SSO
-
-1. When you connect to Customer's My Page from the SSO app, the SSO setup wizard starts.
-
-2. If an Otsuka-ID isn't registered, proceed to new Otsuka-ID registration. If you've already registered an Otsuka-ID, proceed to the linkage setting.
-
-3. Proceed to the end. When the top screen is displayed after you log in to Customer's My Page, the SSO settings are complete.
-
-4. The next time you connect to Customer's My Page from the SSO app, the guidance screen opens, and then the top screen is displayed after you log in to Customer's My Page.
-
-### Create Otsuka Shokai test user
-
-A SaaS account is registered automatically on first access to Otsuka Shokai. At the same time, the Azure AD account and the SaaS account are linked.
-
-## Test SSO
-
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
-
-When you click the Otsuka Shokai tile in the Access Panel, you should be automatically signed in to the Otsuka Shokai for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
-
-## Additional resources
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)--- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)--- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)--- [Try Otsuka Shokai with Azure AD](https://aad.portal.azure.com/)
active-directory Configure Cmmc Level 1 Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/configure-cmmc-level-1-controls.md
The following table provides a list of practice statement and objectives, and Az
| CMMC practice statement and objectives | Azure AD guidance and recommendations | | - | - | | AC.L1-3.1.1<br><br>**Practice statement:** Limit information system access to authorized users, processes acting on behalf of authorized users, or devices (including other information systems).<br><br>**Objectives:**<br>Determine if:<br>[a.] authorized users are identified;<br>[b.] processes acting on behalf of authorized users are identified;<br>[c.] devices (and other systems) authorized to connect to the system are identified;<br>[d.] system access is limited to authorized users;<br>[e.] system access is limited to processes acting on behalf of authorized users; and<br>[f.] system access is limited to authorized devices (including other systems). | You're responsible for setting up Azure AD accounts, which is accomplished from external HR systems, on-premises Active Directory, or directly in the cloud. You configure Conditional Access to only grant access from a known (Registered/Managed) device. In addition, apply the concept of least privilege when granting application permissions. Where possible, use delegated permission. <br><br>Set up users<br><li>[Plan cloud HR application to Azure Active Directory user provisioning](../app-provisioning/plan-cloud-hr-provision.md) <li>[Azure AD Connect sync: Understand and customize synchronization](../hybrid/how-to-connect-sync-whatis.md)<li>[Add or delete users ΓÇô Azure Active Directory](../fundamentals/add-users-azure-active-directory.md)<br><br>Set up devices<li>[What is device identity in Azure Active Directory](../devices/overview.md)<br><br>Configure applications<li>[QuickStart: Register an app in the Microsoft identity platform](../develop/quickstart-register-app.md)<li>[Microsoft identity platform scopes, permissions, & consent](../develop/v2-permissions-and-consent.md)<li>[Securing service principals in Azure Active Directory](../fundamentals/service-accounts-principal.md)<br><br>Conditional access<li>[What is Conditional Access in Azure Active Directory](../conditional-access/overview.md)<li>[Conditional Access require managed device](../conditional-access/require-managed-devices.md) |
-| AC.L1-3.1.2<br><br>**Practice statement:** Limit information system access to the types of transactions and functions that authorized users are permitted to execute.<br><br>**Objectives:**<br>Determine if:<br>[a.] the types of transactions and functions that authorized users are permitted to execute are defined; and<br>[b.] system access is limited to the defined types of transactions and functions for authorized users. | You're responsible for configuring access controls such as Role Based Access Controls (RBAC) with built-in or custom roles. Use role assignable groups to manage role assignments for multiple users requiring same access. Configure Attribute Based Access Controls (ABAC) with default or custom security attributes. The objective is to granularly control access to resources protected with Azure AD.<br><br>Set up RBAC<li>[Overview of role-based access control in Active Directory ](../roles/custom-overview.md)[Azure AD built-in roles](../roles/permissions-reference.md)<li>[Create and assign a custom role in Azure Active Directory](../roles/custom-create.md)<br><br>Set up ABAC<li>[What is Azure attribute-based access control (Azure ABAC)](/azure/role-based-access-control/conditions-overview)<li>[What are custom security attributes in Azure AD?](/azure/active-directory/fundamentals/custom-security-attributes-overview)<br><br>Configure groups for role assignment<li>[Use Azure AD groups to manage role assignments](../roles/groups-concept.md) |
-| AC.L1-3.1.20<br><br>**Practice statement:** Verify and control/limit connections to and use of external information systems.<br><br>**Objectives:**<br>Determine if:<br>[a.] connections to external systems are identified;<br>[b.] the use of external systems is identified;<br>[c.] connections to external systems are verified;<br>[d.] the use of external systems is verified;<br>[e.] connections to external systems are controlled and or limited; and<br>[f.] the use of external systems is controlled and or limited. | You're responsible for configuring conditional access policies using device controls and or network locations to control and or limit connections and use of external systems. Configure Terms of Use (TOU) for recorded user acknowledgment of terms and conditions for use of external systems for access.<br><br>Set up Conditional Access as required<li>[What is Conditional Access?](../conditional-access/overview.md)<li>[Require managed devices for cloud app access with Conditional Access](../conditional-access/require-managed-devices.md)<li>[Require device to be marked as compliant](../conditional-access/require-managed-devices.md)<li>[Conditional Access: Filter for devices](/azure/active-directory/conditional-access/concept-condition-filters-for-devices)<br><br>Use Conditional Access to block access<li>[Conditional Access - Block access by location](../conditional-access/howto-conditional-access-policy-location.md)<br><br>Configure terms of use<li>[Terms of use - Azure Active Directory](../conditional-access/terms-of-use.md)<li>[Conditional Access require terms of use ](../conditional-access/require-tou.md) |
+| AC.L1-3.1.2<br><br>**Practice statement:** Limit information system access to the types of transactions and functions that authorized users are permitted to execute.<br><br>**Objectives:**<br>Determine if:<br>[a.] the types of transactions and functions that authorized users are permitted to execute are defined; and<br>[b.] system access is limited to the defined types of transactions and functions for authorized users. | You're responsible for configuring access controls such as Role Based Access Controls (RBAC) with built-in or custom roles. Use role assignable groups to manage role assignments for multiple users requiring same access. Configure Attribute Based Access Controls (ABAC) with default or custom security attributes. The objective is to granularly control access to resources protected with Azure AD.<br><br>Set up RBAC<li>[Overview of role-based access control in Active Directory ](../roles/custom-overview.md)[Azure AD built-in roles](../roles/permissions-reference.md)<li>[Create and assign a custom role in Azure Active Directory](../roles/custom-create.md)<br><br>Set up ABAC<li>[What is Azure attribute-based access control (Azure ABAC)](../../role-based-access-control/conditions-overview.md)<li>[What are custom security attributes in Azure AD?](../fundamentals/custom-security-attributes-overview.md)<br><br>Configure groups for role assignment<li>[Use Azure AD groups to manage role assignments](../roles/groups-concept.md) |
+| AC.L1-3.1.20<br><br>**Practice statement:** Verify and control/limit connections to and use of external information systems.<br><br>**Objectives:**<br>Determine if:<br>[a.] connections to external systems are identified;<br>[b.] the use of external systems is identified;<br>[c.] connections to external systems are verified;<br>[d.] the use of external systems is verified;<br>[e.] connections to external systems are controlled and or limited; and<br>[f.] the use of external systems is controlled and or limited. | You're responsible for configuring conditional access policies using device controls and or network locations to control and or limit connections and use of external systems. Configure Terms of Use (TOU) for recorded user acknowledgment of terms and conditions for use of external systems for access.<br><br>Set up Conditional Access as required<li>[What is Conditional Access?](../conditional-access/overview.md)<li>[Require managed devices for cloud app access with Conditional Access](../conditional-access/require-managed-devices.md)<li>[Require device to be marked as compliant](../conditional-access/require-managed-devices.md)<li>[Conditional Access: Filter for devices](../conditional-access/concept-condition-filters-for-devices.md)<br><br>Use Conditional Access to block access<li>[Conditional Access - Block access by location](../conditional-access/howto-conditional-access-policy-location.md)<br><br>Configure terms of use<li>[Terms of use - Azure Active Directory](../conditional-access/terms-of-use.md)<li>[Conditional Access require terms of use ](../conditional-access/require-tou.md) |
| AC.L1-3.1.22<br><br>**Practice statement:** Control information posted or processed on publicly accessible information systems.<br><br>**Objectives:**<br>Determine if:<br>[a.] individuals authorized to post or process information on publicly accessible systems are identified;<br>[b.] procedures to ensure FCI isn't posted or processed on publicly accessible systems are identified;<br>[c.] a review process is in place prior to posting of any content to publicly accessible systems; and<br>[d.] content on publicly accessible systems is reviewed to ensure that it doesn't include federal contract information (FCI). | You're responsible for configuring Privileged Identity Management (PIM) to manage access to systems where posted information is publicly accessible. Require approvals with justification prior to role assignment in PIM. Configure Terms of Use (TOU) for systems where posted information is publicly accessible for recorded acknowledgment of terms and conditions for posting of publicly accessible information.<br><br>Plan PIM deployment<li>[What is Privileged Identity Management?](../privileged-identity-management/pim-configure.md)<li>[Plan a Privileged Identity Management deployment](../privileged-identity-management/pim-deployment-plan.md)<br><br>Configure terms of use<li>[Terms of use - Azure Active Directory](../conditional-access/terms-of-use.md)<li>[Conditional Access require terms of use](../conditional-access/require-tou.md)<li>[Configure Azure AD role settings in PIM - Require Justification](../privileged-identity-management/pim-how-to-change-default-settings.md) |
## Identification and Authentication (IA) domain
The following table provides a list of practice statements and objectives, and Azure AD guidance and recommendations.
* [Configure Azure Active Directory for CMMC compliance](configure-azure-active-directory-for-cmmc-compliance.md)
* [Configure CMMC Level 2 Access Control (AC) controls](configure-cmmc-level-2-access-control.md)
* [Configure CMMC Level 2 Identification and Authentication (IA) controls](configure-cmmc-level-2-identification-and-authentication.md)
-* [Configure CMMC Level 2 additional controls](configure-cmmc-level-2-additional-controls.md)
+* [Configure CMMC Level 2 additional controls](configure-cmmc-level-2-additional-controls.md)
active-directory Configure Cmmc Level 2 Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/configure-cmmc-level-2-access-control.md
The following table provides a list of practice statements and objectives, and Azure AD guidance and recommendations.
| AC.L2-3.1.9<br><br>**Practice statement:** Provide privacy and security notices consistent with applicable CUI rules.<br><br>**Objectives:**<br>Determine if:<br>[a.] privacy and security notices required by CUI-specified rules are identified, consistent, and associated with the specific CUI category; and<br>[b.] privacy and security notices are displayed. | With Azure AD, you can deliver notification or banner messages for all apps that require and record acknowledgment before granting access. You can granularly target these terms of use policies to specific users (Member or Guest). You can also customize them per application via conditional access policies.<br><br>**Conditional access**<br>[What is conditional access in Azure AD?](../conditional-access/overview.md)<br><br>**Terms of use**<br>[Azure Active Directory terms of use](../conditional-access/terms-of-use.md)<br>[View report of who has accepted and declined](../conditional-access/terms-of-use.md) |
| AC.L2-3.1.10<br><br>**Practice statement:** Use session lock with pattern-hiding displays to prevent access and viewing of data after a period of inactivity.<br><br>**Objectives:**<br>Determine if:<br>[a.] the period of inactivity after which the system initiates a session lock is defined;<br>[b.] access to the system and viewing of data is prevented by initiating a session lock after the defined period of inactivity; and<br>[c.] previously visible information is concealed via a pattern-hiding display after the defined period of inactivity. | Implement device lock by using a conditional access policy to restrict access to compliant or hybrid Azure AD joined devices. Configure policy settings on the device to enforce device lock at the OS level with MDM solutions such as Intune. Endpoint Manager or group policy objects can also be considered in hybrid deployments. For unmanaged devices, configure the Sign-In Frequency setting to force users to reauthenticate.<br>[Require device to be marked as compliant](../conditional-access/require-managed-devices.md)<br>[Grant controls in Conditional Access policy - Require hybrid Azure AD joined device](../conditional-access/concept-conditional-access-grant.md)<br>[User sign-in frequency](../conditional-access/howto-conditional-access-session-lifetime.md)<br><br>Configure devices for maximum minutes of inactivity until the screen locks ([Android](/mem/intune/configuration/device-restrictions-android), [iOS](/mem/intune/configuration/device-restrictions-ios), [Windows 10](/mem/intune/configuration/device-restrictions-windows-10)). |
| AC.L2-3.1.11<br><br>**Practice statement:** Terminate (automatically) a user session after a defined condition.<br><br>**Objectives:**<br>Determine if:<br>[a.] conditions requiring a user session to terminate are defined; and<br>[b.] a user session is automatically terminated after any of the defined conditions occur. | Enable Continuous Access Evaluation (CAE) for all supported applications. For applications that don't support CAE, or for conditions not applicable to CAE, implement policies in Microsoft Defender for Cloud Apps to automatically terminate sessions when conditions occur. Additionally, configure Azure Active Directory Identity Protection to evaluate user and sign-in risk. Use conditional access with Identity Protection to allow users to automatically remediate risk.<br>[Continuous access evaluation in Azure AD](../conditional-access/concept-continuous-access-evaluation.md)<br>[Control cloud app usage by creating policies](/defender-cloud-apps/control-cloud-apps-with-policies)<br>[What is Azure Active Directory Identity Protection?](../identity-protection/overview-identity-protection.md) |
-|AC.L2-3.1.12<br><br>**Practice statement:** Monitor and control remote access sessions.<br><br>**Objectives:**<br>Determine if:<br>[a.] remote access sessions are permitted;<br>[b.] the types of permitted remote access are identified;<br>[c.] remote access sessions are controlled; and<br>[d.] remote access sessions are monitored. | In today's world, users access cloud-based applications almost exclusively remotely from unknown or untrusted networks. It's critical to securing this pattern of access to adopt zero trust principals. To meet these controls requirements in a modern cloud world we must verify each access request explicitly, implement least privilege and assume breach.<br><br>Configure named locations to delineate internal vs external networks. Configure conditional access app control to route access via Microsoft Defender for Cloud Apps (MDCA). Configure MDCA to control and monitor all sessions.<br>[Zero Trust Deployment Guide for Microsoft Azure Active Directory](/security/blog/2020/04/30/zero-trust-deployment-guide-azure-active-directory/)<br>[Location condition in Azure Active Directory Conditional Access](/azure/active-directory/conditional-access/location-condition.md)<br>[Deploy Cloud App Security Conditional Access App Control for Azure AD apps](/cloud-app-security/proxy-deployment-aad.md)<br>[What is Microsoft Defender for Cloud Apps?](/cloud-app-security/what-is-cloud-app-security.md)<br>[Monitor alerts raised in Microsoft Defender for Cloud Apps](/cloud-app-security/monitor-alerts.md) |
+|AC.L2-3.1.12<br><br>**Practice statement:** Monitor and control remote access sessions.<br><br>**Objectives:**<br>Determine if:<br>[a.] remote access sessions are permitted;<br>[b.] the types of permitted remote access are identified;<br>[c.] remote access sessions are controlled; and<br>[d.] remote access sessions are monitored. | In today's world, users access cloud-based applications almost exclusively remotely from unknown or untrusted networks. To secure this pattern of access, it's critical to adopt Zero Trust principles. To meet these control requirements in a modern cloud world, we must verify each access request explicitly, implement least privilege, and assume breach.<br><br>Configure named locations to delineate internal vs. external networks. Configure conditional access app control to route access via Microsoft Defender for Cloud Apps (MDCA). Configure MDCA to control and monitor all sessions.<br>[Zero Trust Deployment Guide for Microsoft Azure Active Directory](/security/blog/2020/04/30/zero-trust-deployment-guide-azure-active-directory/)<br>[Location condition in Azure Active Directory Conditional Access](../conditional-access/location-condition.md)<br>[Deploy Cloud App Security Conditional Access App Control for Azure AD apps](/cloud-app-security/proxy-deployment-aad)<br>[What is Microsoft Defender for Cloud Apps?](/cloud-app-security/what-is-cloud-app-security)<br>[Monitor alerts raised in Microsoft Defender for Cloud Apps](/cloud-app-security/monitor-alerts) |
| AC.L2-3.1.13<br><br>**Practice statement:** Employ cryptographic mechanisms to protect the confidentiality of remote access sessions.<br><br>**Objectives:**<br>Determine if:<br>[a.] cryptographic mechanisms to protect the confidentiality of remote access sessions are identified; and<br>[b.] cryptographic mechanisms to protect the confidentiality of remote access sessions are implemented. | All Azure AD customer-facing web services are secured with the Transport Layer Security (TLS) protocol and are implemented using FIPS-validated cryptography.<br>[Azure Active Directory Data Security Considerations (microsoft.com)](https://azure.microsoft.com/resources/azure-active-directory-data-security-considerations/) |
-| AC.L2-3.1.14<br><br>**Practice statement:** Route remote access via managed access control points.<br><br>**Objectives:**<br>Determine if:<br>[a.] managed access control points are identified and implemented; and<br>[b.] remote access is routed through managed network access control points. | Configure named locations to delineate internal vs external networks. Configure conditional access app control to route access via Microsoft Defender for Cloud Apps (MDCA). Configure MDCA to control and monitor all sessions. Secure devices used by privileged accounts as part of the privileged access story.<br>[Location condition in Azure Active Directory Conditional Access](/azure/active-directory/conditional-access/location-condition.md)<br>[Session controls in Conditional Access policy](/azure/active-directory/conditional-access/concept-conditional-access-session.md)<br>[Securing privileged access overview](/security/compass/overview.md) |
-| AC.L2-3.1.15<br><br>**Practice statement:** Authorize remote execution of privileged commands and remote access to security-relevant information.<br><br>**Objectives:**<br>Determine if:<br>[a.] privileged commands authorized for remote execution are identified;<br>[b.] security-relevant information authorized to be accessed remotely is identified;<br>[c.] the execution of the identified privileged commands via remote access is authorized; and<br>[d.] access to the identified security-relevant information via remote access is authorized. | Conditional Access is the Zero Trust control plane to target policies for access to your apps when combined with authentication context. You can apply different policies in those apps. Secure devices used by privileged accounts as part of the privileged access story. Configure conditional access policies to require the use of these secured devices by privileged users when performing privileged commands.<br>[Cloud apps, actions, and authentication context in Conditional Access policy](/azure/active-directory/conditional-access/concept-conditional-access-cloud-apps.md)<br>[Securing privileged access overview](/security/compass/overview.md)<br>[Filter for devices as a condition in Conditional Access policy](/azure/active-directory/conditional-access/concept-condition-filters-for-devices.md) |
-| AC.L2-3.1.18<br><br>**Practice statement:** Control connection of mobile devices.<br><br>**Objectives:**<br>Determine if:<br>[a.] mobile devices that process, store, or transmit CUI are identified;<br>[b.] mobile device connections are authorized; and<br>[c.] mobile device connections are monitored and logged. | Configure device management policies via MDM (such as Microsoft Intune), Microsoft Endpoint Manager (MEM) or group policy objects (GPO) to enforce mobile device configuration and connection profile. Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](/azure/active-directory/conditional-access/concept-conditional-access-grant.md)<br>[Require hybrid Azure AD joined device](/azure/active-directory/conditional-access/concept-conditional-access-grant.md)<br><br>**InTune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started.md)<br>[What is app management in Microsoft Intune?](/mem/intune/apps/app-management.md) |
+| AC.L2-3.1.14<br><br>**Practice statement:** Route remote access via managed access control points.<br><br>**Objectives:**<br>Determine if:<br>[a.] managed access control points are identified and implemented; and<br>[b.] remote access is routed through managed network access control points. | Configure named locations to delineate internal vs. external networks. Configure conditional access app control to route access via Microsoft Defender for Cloud Apps (MDCA). Configure MDCA to control and monitor all sessions. Secure devices used by privileged accounts as part of the privileged access story.<br>[Location condition in Azure Active Directory Conditional Access](../conditional-access/location-condition.md)<br>[Session controls in Conditional Access policy](../conditional-access/concept-conditional-access-session.md)<br>[Securing privileged access overview](/security/compass/overview) |
+| AC.L2-3.1.15<br><br>**Practice statement:** Authorize remote execution of privileged commands and remote access to security-relevant information.<br><br>**Objectives:**<br>Determine if:<br>[a.] privileged commands authorized for remote execution are identified;<br>[b.] security-relevant information authorized to be accessed remotely is identified;<br>[c.] the execution of the identified privileged commands via remote access is authorized; and<br>[d.] access to the identified security-relevant information via remote access is authorized. | Combined with authentication context, Conditional Access is the Zero Trust control plane for targeting policies for access to your apps. You can apply different policies in those apps. Secure devices used by privileged accounts as part of the privileged access story. Configure conditional access policies to require the use of these secured devices by privileged users when performing privileged commands.<br>[Cloud apps, actions, and authentication context in Conditional Access policy](../conditional-access/concept-conditional-access-cloud-apps.md)<br>[Securing privileged access overview](/security/compass/overview)<br>[Filter for devices as a condition in Conditional Access policy](../conditional-access/concept-condition-filters-for-devices.md) |
+| AC.L2-3.1.18<br><br>**Practice statement:** Control connection of mobile devices.<br><br>**Objectives:**<br>Determine if:<br>[a.] mobile devices that process, store, or transmit CUI are identified;<br>[b.] mobile device connections are authorized; and<br>[c.] mobile device connections are monitored and logged. | Configure device management policies via MDM (such as Microsoft Intune), Microsoft Endpoint Manager (MEM), or group policy objects (GPO) to enforce mobile device configuration and connection profiles. Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](../conditional-access/concept-conditional-access-grant.md)<br>[Require hybrid Azure AD joined device](../conditional-access/concept-conditional-access-grant.md)<br><br>**Intune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started)<br>[What is app management in Microsoft Intune?](/mem/intune/apps/app-management) |
| AC.L2-3.1.19<br><br>**Practice statement:** Encrypt CUI on mobile devices and mobile computing platforms.<br><br>**Objectives:**<br>Determine if:<br>[a.] mobile devices and mobile computing platforms that process, store, or transmit CUI are identified; and<br>[b.] encryption is employed to protect CUI on identified mobile devices and mobile computing platforms. | **Managed Device**<br>Configure conditional access policies to enforce compliant or hybrid Azure AD joined (HAADJ) devices and to ensure managed devices are configured appropriately via a device management solution to encrypt CUI.<br><br>**Unmanaged Device**<br>Configure conditional access policies to require app protection policies.<br>[Grant controls in Conditional Access policy - Require device to be marked as compliant](../conditional-access/concept-conditional-access-grant.md)<br>[Grant controls in Conditional Access policy - Require hybrid Azure AD joined device](../conditional-access/concept-conditional-access-grant.md)<br>[Grant controls in Conditional Access policy - Require app protection policy](../conditional-access/concept-conditional-access-grant.md) |
-| AC.L2-3.1.21<br><br>**Practice statement:** Limit use of portable storage devices on external systems.<br><br>**Objectives:**<br>Determine if:<br>[a.] the use of portable storage devices containing CUI on external systems is identified and documented;<br>[b.] limits on the use of portable storage devices containing CUI on external systems are defined; and<br>[c.] the use of portable storage devices containing CUI on external systems is limited as defined. | Configure device management policies via MDM (such as Microsoft Intune), Microsoft Endpoint Manager (MEM) or group policy objects (GPO) to control the use of portable storage devices on systems. Configure policy settings on the Windows device to completely prohibit or restrict use of portable storage at the OS level. For all other devices where you may be unable to granularly control access to portable storage block download entirely with Microsoft Defender for Cloud Apps. Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](/azure/active-directory/conditional-access/concept-conditional-access-grant.md)<br>[Require hybrid Azure AD joined device](/azure/active-directory/conditional-access/concept-conditional-access-grant.md)<br>[Configure authentication session management - Azure Active Directory](/azure/active-directory/conditional-access/howto-conditional-access-session-lifetime.md)<br><br>**InTune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started.md)<br>[Restrict USB devices using administrative templates in Microsoft Intune](/mem/intune/configuration/administrative-templates-restrict-usb.md)<br><br>**Microsoft Defender for Cloud Apps**<br>[Create session policies in Defender for Cloud Apps](/defender-cloud-apps/session-policy-aad.md)
+| AC.L2-3.1.21<br><br>**Practice statement:** Limit use of portable storage devices on external systems.<br><br>**Objectives:**<br>Determine if:<br>[a.] the use of portable storage devices containing CUI on external systems is identified and documented;<br>[b.] limits on the use of portable storage devices containing CUI on external systems are defined; and<br>[c.] the use of portable storage devices containing CUI on external systems is limited as defined. | Configure device management policies via MDM (such as Microsoft Intune), Microsoft Endpoint Manager (MEM), or group policy objects (GPO) to control the use of portable storage devices on systems. Configure policy settings on the Windows device to completely prohibit or restrict use of portable storage at the OS level. For all other devices, where you may be unable to granularly control access to portable storage, block download entirely with Microsoft Defender for Cloud Apps. Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](../conditional-access/concept-conditional-access-grant.md)<br>[Require hybrid Azure AD joined device](../conditional-access/concept-conditional-access-grant.md)<br>[Configure authentication session management - Azure Active Directory](../conditional-access/howto-conditional-access-session-lifetime.md)<br><br>**Intune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started)<br>[Restrict USB devices using administrative templates in Microsoft Intune](/mem/intune/configuration/administrative-templates-restrict-usb)<br><br>**Microsoft Defender for Cloud Apps**<br>[Create session policies in Defender for Cloud Apps](/defender-cloud-apps/session-policy-aad) |
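Many of the Conditional Access recommendations in the preceding table can also be automated. The following is a minimal sketch, not a prescribed implementation, of creating a report-only policy that requires a compliant or hybrid Azure AD joined device through the Microsoft Graph `conditionalAccess` API. It assumes you already hold an access token with the `Policy.ReadWrite.ConditionalAccess` permission; the display name and the excluded break-glass group ID are illustrative placeholders.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token-with-Policy.ReadWrite.ConditionalAccess>"  # placeholder

# Illustrative policy: require a compliant or hybrid Azure AD joined device
# for all users and apps. Created in report-only mode so its effect can be
# reviewed in the sign-in logs before enforcement.
policy = {
    "displayName": "Require compliant or hybrid Azure AD joined device",
    "state": "enabledForReportingButNotEnforced",  # report-only rollout
    "conditions": {
        "users": {
            "includeUsers": ["All"],
            "excludeGroups": ["<break-glass-group-id>"],  # placeholder
        },
        "applications": {"includeApplications": ["All"]},
        "clientAppTypes": ["all"],
    },
    "grantControls": {
        "operator": "OR",
        "builtInControls": ["compliantDevice", "domainJoinedDevice"],
    },
}

resp = requests.post(
    f"{GRAPH}/identity/conditionalAccess/policies",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=policy,
    timeout=30,
)
resp.raise_for_status()
print("Created policy:", resp.json()["id"])
```

Starting in report-only mode and excluding an emergency-access group are deliberate choices here; they keep a misconfigured device-compliance policy from locking every account, including administrators, out of the tenant.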
### Next steps
* [Configure Azure Active Directory for CMMC compliance](configure-azure-active-directory-for-cmmc-compliance.md)
* [Configure CMMC Level 1 controls](configure-cmmc-level-1-controls.md)
* [Configure CMMC Level 2 Identification and Authentication (IA) controls](configure-cmmc-level-2-identification-and-authentication.md)
-* [Configure CMMC Level 2 additional controls](configure-cmmc-level-2-additional-controls.md)
+* [Configure CMMC Level 2 additional controls](configure-cmmc-level-2-additional-controls.md)
active-directory Configure Cmmc Level 2 Additional Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/configure-cmmc-level-2-additional-controls.md
The following table provides a list of practice statements and objectives, and Azure AD guidance and recommendations.
| CMMC practice statement and objectives | Azure AD guidance and recommendations |
| - | - |
-| AU.L2-3.3.1<br><br>**Practice statement:** Create and retain system audit logs and records to enable monitoring, analysis, investigation, and reporting of unlawful or unauthorized system activity.<br><br>**Objectives:**<br>Determine if:<br>[a.] audit logs (for example, event types to be logged) to enable monitoring, analysis, investigation, and reporting of unlawful or unauthorized system activity are specified;<br>[b.] the content of audit records needed to support monitoring, analysis, investigation, and reporting of unlawful or unauthorized system activity is defined;<br>[c.] audit records are created (generated);<br>[d.] audit records, once created, contain the defined content;<br>[e.] retention requirements for audit records are defined; and<br>[f.] audit records are retained as defined.<br><br>AU.L2-3.3.2<br><br>**Practice statement:** Ensure that the actions of individual system users can be uniquely traced to those users so they can be held accountable for their actions.<br><br>**Objectives:**<br>Determine if:<br>[a.] the content of the audit records needed to support the ability to uniquely trace users to their actions is defined; and<br>[b.] audit records, once created, contain the defined content. | All operations are audited within the Azure AD audit logs. Each audit log entry contains a user's immutable objectID that can be used to uniquely trace an individual system user to each action. You can collect and analyze logs by using a Security Information and Event Management (SIEM) solution such as Microsoft Sentinel. Alternatively, you can use Azure Event Hubs to integrate logs with third-party SIEM solutions to enable monitoring and notification.<br>[Audit activity reports in the Azure Active Directory portal](/azure/active-directory/reports-monitoring/concept-audit-logs.md)<br>[Connect Azure Active Directory data to Microsoft Sentinel](/azure/sentinel/connect-azure-active-directory)<br>[Tutorial: Stream logs to an Azure event hub](/azure/active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md) |
-| AU.L2-3.3.4<br><br>**Practice statement:** Alert if an audit logging process fails.<br><br>**Objectives:**<br>Determine if:<br>[a.] personnel or roles to be alerted if an audit logging process failure is identified;<br>[b.] types of audit logging process failures for which alert will be generated are defined; and<br>[c] identified personnel or roles are alerted in the event of an audit logging process failure. | Azure Service Health notifies you about Azure service incidents so you can take action to mitigate downtime. Configure customizable cloud alerts for Azure Active Directory. <br>[What is Azure Service Health?](/azure/service-health/overview.md)<br>[Three ways to get notified about Azure service issues](https://azure.microsoft.com/blog/three-ways-to-get-notified-about-azure-service-issues/)<br>[Azure Service Health](https://azure.microsoft.com/get-started/azure-portal/service-health/) |
-| AU.L2-3.3.6<br><br>**Practice statement:** Provide audit record reduction and report generation to support on-demand analysis and reporting.<br><br>**Objectives:**<br>Determine if:<br>[a.] an audit record reduction capability that supports on-demand analysis is provided; and<br>[b.] a report generation capability that supports on-demand reporting is provided. | Ensure Azure AD events are included in event logging strategy. You can collect and analyze logs by using a Security Information and Event Management (SIEM) solution such as Microsoft Sentinel. Alternatively, you can use Azure Event Hubs to integrate logs with third-party SIEM solutions to enable monitoring and notification. Use Azure AD entitlement management with access reviews to ensure compliance status of accounts. <br>[Audit activity reports in the Azure Active Directory portal](/azure/active-directory/reports-monitoring/concept-audit-logs.md)<br>[Connect Azure Active Directory data to Microsoft Sentinel](/azure/sentinel/connect-azure-active-directory.md)<br>[Tutorial: Stream logs to an Azure event hub](/azure/active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md) |
-| AU.L2-3.3.8<br><br>**Practice statement:** Protect audit information and audit logging tools from unauthorized access, modification, and deletion.<br><br>**Objectives:**<br>Determine if:<br>[a.] audit information is protected from unauthorized access;<br>[b.] audit information is protected from unauthorized modification;<br>[c.] audit information is protected from unauthorized deletion;<br>[d.] audit logging tools are protected from unauthorized access;<br>[e.] audit logging tools are protected from unauthorized modification; and<br>[f.] audit logging tools are protected from unauthorized deletion.<br><br>AU.L2-3.3.9<br><br>**Practice statement:** Limit management of audit logging functionality to a subset of privileged users.<br><br>**Objectives:**<br>Determine if:<br>[a.] a subset of privileged users granted access to manage audit logging functionality is defined; and<br>[b.] management of audit logging functionality is limited to the defined subset of privileged users. | Azure AD logs are retained by default for 30 days. These logs are unable to modified or deleted and are only accessible to limited set of privileged roles.<br>[Sign-in logs in Azure Active Directory](/azure/active-directory/reports-monitoring/concept-sign-ins.md)<br>[Audit logs in Azure Active Directory](/azure/active-directory/reports-monitoring/concept-audit-logs.md)
+| AU.L2-3.3.1<br><br>**Practice statement:** Create and retain system audit logs and records to enable monitoring, analysis, investigation, and reporting of unlawful or unauthorized system activity.<br><br>**Objectives:**<br>Determine if:<br>[a.] audit logs (for example, event types to be logged) to enable monitoring, analysis, investigation, and reporting of unlawful or unauthorized system activity are specified;<br>[b.] the content of audit records needed to support monitoring, analysis, investigation, and reporting of unlawful or unauthorized system activity is defined;<br>[c.] audit records are created (generated);<br>[d.] audit records, once created, contain the defined content;<br>[e.] retention requirements for audit records are defined; and<br>[f.] audit records are retained as defined.<br><br>AU.L2-3.3.2<br><br>**Practice statement:** Ensure that the actions of individual system users can be uniquely traced to those users so they can be held accountable for their actions.<br><br>**Objectives:**<br>Determine if:<br>[a.] the content of the audit records needed to support the ability to uniquely trace users to their actions is defined; and<br>[b.] audit records, once created, contain the defined content. | All operations are audited within the Azure AD audit logs. Each audit log entry contains a user's immutable objectID that can be used to uniquely trace an individual system user to each action. You can collect and analyze logs by using a Security Information and Event Management (SIEM) solution such as Microsoft Sentinel. Alternatively, you can use Azure Event Hubs to integrate logs with third-party SIEM solutions to enable monitoring and notification.<br>[Audit activity reports in the Azure Active Directory portal](../reports-monitoring/concept-audit-logs.md)<br>[Connect Azure Active Directory data to Microsoft Sentinel](../../sentinel/connect-azure-active-directory.md)<br>[Tutorial: Stream logs to an Azure event hub](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md) |
+| AU.L2-3.3.4<br><br>**Practice statement:** Alert if an audit logging process fails.<br><br>**Objectives:**<br>Determine if:<br>[a.] personnel or roles to be alerted in the event of an audit logging process failure are identified;<br>[b.] types of audit logging process failures for which alerts will be generated are defined; and<br>[c.] identified personnel or roles are alerted in the event of an audit logging process failure. | Azure Service Health notifies you about Azure service incidents so you can take action to mitigate downtime. Configure customizable cloud alerts for Azure Active Directory.<br>[What is Azure Service Health?](../../service-health/overview.md)<br>[Three ways to get notified about Azure service issues](https://azure.microsoft.com/blog/three-ways-to-get-notified-about-azure-service-issues/)<br>[Azure Service Health](https://azure.microsoft.com/get-started/azure-portal/service-health/) |
+| AU.L2-3.3.6<br><br>**Practice statement:** Provide audit record reduction and report generation to support on-demand analysis and reporting.<br><br>**Objectives:**<br>Determine if:<br>[a.] an audit record reduction capability that supports on-demand analysis is provided; and<br>[b.] a report generation capability that supports on-demand reporting is provided. | Ensure Azure AD events are included in your event logging strategy. You can collect and analyze logs by using a Security Information and Event Management (SIEM) solution such as Microsoft Sentinel. Alternatively, you can use Azure Event Hubs to integrate logs with third-party SIEM solutions to enable monitoring and notification. Use Azure AD entitlement management with access reviews to verify the compliance status of accounts. A minimal Graph sketch of on-demand audit reporting follows this table.<br>[Audit activity reports in the Azure Active Directory portal](../reports-monitoring/concept-audit-logs.md)<br>[Connect Azure Active Directory data to Microsoft Sentinel](../../sentinel/connect-azure-active-directory.md)<br>[Tutorial: Stream logs to an Azure event hub](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md) |
+| AU.L2-3.3.8<br><br>**Practice statement:** Protect audit information and audit logging tools from unauthorized access, modification, and deletion.<br><br>**Objectives:**<br>Determine if:<br>[a.] audit information is protected from unauthorized access;<br>[b.] audit information is protected from unauthorized modification;<br>[c.] audit information is protected from unauthorized deletion;<br>[d.] audit logging tools are protected from unauthorized access;<br>[e.] audit logging tools are protected from unauthorized modification; and<br>[f.] audit logging tools are protected from unauthorized deletion.<br><br>AU.L2-3.3.9<br><br>**Practice statement:** Limit management of audit logging functionality to a subset of privileged users.<br><br>**Objectives:**<br>Determine if:<br>[a.] a subset of privileged users granted access to manage audit logging functionality is defined; and<br>[b.] management of audit logging functionality is limited to the defined subset of privileged users. | Azure AD logs are retained by default for 30 days. These logs can't be modified or deleted and are accessible only to a limited set of privileged roles.<br>[Sign-in logs in Azure Active Directory](../reports-monitoring/concept-sign-ins.md)<br>[Audit logs in Azure Active Directory](../reports-monitoring/concept-audit-logs.md) |
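The audit records described in AU.L2-3.3.1, AU.L2-3.3.2, and AU.L2-3.3.6 are also available programmatically, which is useful when feeding a SIEM isn't enough and you need ad hoc traceability or on-demand reporting queries. A minimal sketch, assuming an access token with the `AuditLog.Read.All` permission; the start date is an illustrative placeholder.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token-with-AuditLog.Read.All>"  # placeholder

headers = {"Authorization": f"Bearer {TOKEN}"}

# Page through directory audit records and print the initiating user's
# immutable object ID, which uniquely traces each action to one user.
url = (f"{GRAPH}/auditLogs/directoryAudits"
       "?$filter=activityDateTime ge 2023-01-01T00:00:00Z&$top=100")

while url:
    resp = requests.get(url, headers=headers, timeout=30)
    resp.raise_for_status()
    page = resp.json()
    for record in page["value"]:
        actor = (record.get("initiatedBy") or {}).get("user") or {}
        print(record["activityDateTime"],
              record["activityDisplayName"],
              actor.get("id"))
    url = page.get("@odata.nextLink")  # None when the last page is reached
```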
## Configuration Management (CM)
The following table provides a list of practice statements and objectives, and Azure AD guidance and recommendations.
| CMMC practice statement and objectives | Azure AD guidance and recommendations |
| - | - |
-| CM.L2-3.4.2<br><br>**Practice statement:** Establish and enforce security configuration settings for information technology products employed in organizational systems.<br><br>**Objectives:**<br>Determine if:<br>[a.] security configuration settings for information technology products employed in the system are established and included in the baseline configuration; and<br>[b.] security configuration settings for information technology products employed in the system are enforced. | Adopt a zero-trust security posture. Use conditional access policies to restrict access to compliant devices. Configure policy settings on the device to enforce security configuration settings on the device with MDM solutions such as Microsoft Intune. Microsoft Endpoint Configuration Manager(MECM) or group policy objects can also be considered in hybrid deployments and combined with conditional access require hybrid Azure AD joined device.<br><br>**Zero-trust**<br>[Securing identity with Zero Trust](/security/zero-trust/identity.md)<br><br>**Conditional access**<br>[What is conditional access in Azure AD?](/azure/active-directory/conditional-access/overview.md)<br>[Grant controls in Conditional Access policy](/azure/active-directory/conditional-access/concept-conditional-access-grant.md)<br><br>**Device policies**<br>[What is Microsoft Intune?](/mem/intune/fundamentals/what-is-intune.md)<br>[What is Defender for Cloud Apps?](/cloud-app-security/what-is-cloud-app-security.md)<br>[What is app management in Microsoft Intune?](/mem/intune/apps/app-management.md)<br>[Microsoft Endpoint Manager overview](/mem/endpoint-manager-overview.md) |
-| CM.L2-3.4.5<br><br>**Practice statement:** Define, document, approve, and enforce physical and logical access restrictions associated with changes to organizational systems.<br><br>**Objectives:**<br>Determine if:<br>[a.] physical access restrictions associated with changes to the system are defined;<br>[b.] physical access restrictions associated with changes to the system are documented;<br>[c.] physical access restrictions associated with changes to the system are approved;<br>[d.] physical access restrictions associated with changes to the system are enforced;<br>[e.] logical access restrictions associated with changes to the system are defined;<br>[f.] logical access restrictions associated with changes to the system are documented;<br>[g.] logical access restrictions associated with changes to the system are approved; and<br>[h.] logical access restrictions associated with changes to the system are enforced. | Azure Active Directory (Azure AD) is a cloud-based identity and access management service. Customers don't have physical access to the Azure AD datacenters. As such, each physical access restriction is satisfied by Microsoft and inherited by the customers of Azure AD. Implement Azure AD role based access controls. Eliminate standing privileged access, provide just in time access with approval workflows with Privileged Identity Management.<br>[Overview of Azure Active Directory role-based access control (RBAC)](/azure/active-directory/roles/custom-overview.md)<br>[What is Privileged Identity Management?](/azure/active-directory/privileged-identity-management/pim-configure.md)<br>[Approve or deny requests for Azure AD roles in PIM](/azure/active-directory/privileged-identity-management/azure-ad-pim-approval-workflow.md) |
+| CM.L2-3.4.2<br><br>**Practice statement:** Establish and enforce security configuration settings for information technology products employed in organizational systems.<br><br>**Objectives:**<br>Determine if:<br>[a.] security configuration settings for information technology products employed in the system are established and included in the baseline configuration; and<br>[b.] security configuration settings for information technology products employed in the system are enforced. | Adopt a zero-trust security posture. Use conditional access policies to restrict access to compliant devices. Configure policy settings on the device to enforce security configuration settings on the device with MDM solutions such as Microsoft Intune. Microsoft Endpoint Configuration Manager (MECM) or group policy objects can also be considered in hybrid deployments, combined with Conditional Access policies that require a hybrid Azure AD joined device.<br><br>**Zero-trust**<br>[Securing identity with Zero Trust](/security/zero-trust/identity)<br><br>**Conditional access**<br>[What is conditional access in Azure AD?](../conditional-access/overview.md)<br>[Grant controls in Conditional Access policy](../conditional-access/concept-conditional-access-grant.md)<br><br>**Device policies**<br>[What is Microsoft Intune?](/mem/intune/fundamentals/what-is-intune)<br>[What is Defender for Cloud Apps?](/cloud-app-security/what-is-cloud-app-security)<br>[What is app management in Microsoft Intune?](/mem/intune/apps/app-management)<br>[Microsoft Endpoint Manager overview](/mem/endpoint-manager-overview) |
+| CM.L2-3.4.5<br><br>**Practice statement:** Define, document, approve, and enforce physical and logical access restrictions associated with changes to organizational systems.<br><br>**Objectives:**<br>Determine if:<br>[a.] physical access restrictions associated with changes to the system are defined;<br>[b.] physical access restrictions associated with changes to the system are documented;<br>[c.] physical access restrictions associated with changes to the system are approved;<br>[d.] physical access restrictions associated with changes to the system are enforced;<br>[e.] logical access restrictions associated with changes to the system are defined;<br>[f.] logical access restrictions associated with changes to the system are documented;<br>[g.] logical access restrictions associated with changes to the system are approved; and<br>[h.] logical access restrictions associated with changes to the system are enforced. | Azure Active Directory (Azure AD) is a cloud-based identity and access management service. Customers don't have physical access to the Azure AD datacenters. As such, each physical access restriction is satisfied by Microsoft and inherited by the customers of Azure AD. Implement Azure AD role-based access control. Eliminate standing privileged access and provide just-in-time access with approval workflows by using Privileged Identity Management.<br>[Overview of Azure Active Directory role-based access control (RBAC)](../roles/custom-overview.md)<br>[What is Privileged Identity Management?](../privileged-identity-management/pim-configure.md)<br>[Approve or deny requests for Azure AD roles in PIM](../privileged-identity-management/azure-ad-pim-approval-workflow.md) |
| CM.L2-3.4.6<br><br>**Practice statement:** Employ the principle of least functionality by configuring organizational systems to provide only essential capabilities.<br><br>**Objectives:**<br>Determine if:<br>[a.] essential system capabilities are defined based on the principle of least functionality; and<br>[b.] the system is configured to provide only the defined essential capabilities. | Configure device management solutions (such as Microsoft Intune) to implement a custom security baseline applied to organizational systems to remove non-essential applications and disable unnecessary services. Leave only the fewest capabilities necessary for the systems to operate effectively. Configure conditional access to restrict access to compliant or hybrid Azure AD joined devices.<br>[What is Microsoft Intune](/mem/intune/fundamentals/what-is-intune)<br>[Require device to be marked as compliant](../conditional-access/require-managed-devices.md)<br>[Grant controls in Conditional Access policy - Require hybrid Azure AD joined device](../conditional-access/concept-conditional-access-grant.md) |
-| CM.L2-3.4.7<br><br>**Practice statement:** Restrict, disable, or prevent the use of nonessential programs, functions, ports, protocols, and services.<br><br>**Objectives:**<br>Determine if:<br>[a.]essential programs are defined;<br>[b.] the use of nonessential programs is defined;<br>[c.] the use of nonessential programs is restricted, disabled, or prevented as defined;<br>[d.] essential functions are defined;<br>[e.] the use of nonessential functions is defined;<br>[f.] the use of nonessential functions is restricted, disabled, or prevented as defined;<br>[g.] essential ports are defined;<br>[h.] the use of nonessential ports is defined;<br>[i.] the use of nonessential ports is restricted, disabled, or prevented as defined;<br>[j.] essential protocols are defined;<br>[k.] the use of nonessential protocols is defined;<br>[l.] the use of nonessential protocols is restricted, disabled, or prevented as defined;<br>[m.] essential services are defined;<br>[n.] the use of nonessential services is defined; and<br>[o.] the use of nonessential services is restricted, disabled, or prevented as defined. | Use Application Administrator role to delegate authorized use of essential applications. Use App Roles or group claims to manage least privilege access within application. Configure user consent to require admin approval and don't allow group owner consent. Configure Admin consent request workflows to enable users to request access to applications that require admin consent. Use Microsoft Defender for Cloud Apps to identify unsanctioned/unknown application use. Use this telemetry to then determine essential/non-essential apps.<br>[Azure AD built-in roles - Application Administrator](/azure/active-directory/roles/permissions-reference.md)<br>[Azure AD App Roles - App Roles vs. Groups ](/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps.md)<br>[Configure how users consent to applications](/azure/active-directory/manage-apps/configure-user-consent?tabs=azure-portal.md)<br>[Configure group owner consent to apps accessing group data](/azure/active-directory/manage-apps/configure-user-consent-groups?tabs=azure-portal.md)<br>[Configure the admin consent workflow](/azure/active-directory/manage-apps/configure-admin-consent-workflow.md)<br>[What is Defender for Cloud Apps?](/defender-cloud-apps/what-is-defender-for-cloud-apps.d)<br>[Discover and manage Shadow IT tutorial](/defender-cloud-apps/tutorial-shadow-it.md) |
-| CM.L2-3.4.8<br><br>**Practice statement:** Apply deny-by-exception (blocklist) policy to prevent the use of unauthorized software or deny-all, permit-by-exception (allowlist) policy to allow the execution of authorized software.<br><br>**Objectives:**<br>Determine if:<br>[a.] a policy specifying whether allowlist or blocklist is to be implemented is specified;<br>[b.] the software allowed to execute under allowlist or denied use under blocklist is specified; and<br>[c.] allowlist to allow the execution of authorized software or blocklist to prevent the use of unauthorized software is implemented as specified.<br><br>CM.L2-3.4.9<br><br>**Practice statement:** Control and monitor user-installed software.<br><br>**Objectives:**<br>Determine if:<br>[a.] a policy for controlling the installation of software by users is established;<br>[b.] installation of software by users is controlled based on the established policy; and<br>[c.] installation of software by users is monitored. | Configure MDM/configuration management policy to prevent the use of unauthorized software. Configure conditional access grant controls to require compliant or hybrid joined device to incorporate device compliance with MDM/configuration management policy into the conditional access authorization decision.<br>[What is Microsoft Intune](/mem/intune/fundamentals/what-is-intune.md)<br>[Conditional Access - Require compliant or hybrid joined devices](/azure/active-directory/conditional-access/howto-conditional-access-policy-compliant-device.md) |
+| CM.L2-3.4.7<br><br>**Practice statement:** Restrict, disable, or prevent the use of nonessential programs, functions, ports, protocols, and services.<br><br>**Objectives:**<br>Determine if:<br>[a.] essential programs are defined;<br>[b.] the use of nonessential programs is defined;<br>[c.] the use of nonessential programs is restricted, disabled, or prevented as defined;<br>[d.] essential functions are defined;<br>[e.] the use of nonessential functions is defined;<br>[f.] the use of nonessential functions is restricted, disabled, or prevented as defined;<br>[g.] essential ports are defined;<br>[h.] the use of nonessential ports is defined;<br>[i.] the use of nonessential ports is restricted, disabled, or prevented as defined;<br>[j.] essential protocols are defined;<br>[k.] the use of nonessential protocols is defined;<br>[l.] the use of nonessential protocols is restricted, disabled, or prevented as defined;<br>[m.] essential services are defined;<br>[n.] the use of nonessential services is defined; and<br>[o.] the use of nonessential services is restricted, disabled, or prevented as defined. | Use the Application Administrator role to delegate authorized use of essential applications. Use app roles or group claims to manage least privilege access within applications. Configure user consent to require admin approval and don't allow group owner consent. Configure admin consent request workflows to enable users to request access to applications that require admin consent; a minimal Graph sketch of this consent configuration follows this table. Use Microsoft Defender for Cloud Apps to identify unsanctioned/unknown application use, and then use this telemetry to determine essential and non-essential apps.<br>[Azure AD built-in roles - Application Administrator](../roles/permissions-reference.md)<br>[Azure AD App Roles - App Roles vs. Groups](../develop/howto-add-app-roles-in-azure-ad-apps.md)<br>[Configure how users consent to applications](../manage-apps/configure-user-consent.md?tabs=azure-portal)<br>[Configure group owner consent to apps accessing group data](../manage-apps/configure-user-consent-groups.md?tabs=azure-portal)<br>[Configure the admin consent workflow](../manage-apps/configure-admin-consent-workflow.md)<br>[What is Defender for Cloud Apps?](/defender-cloud-apps/what-is-defender-for-cloud-apps)<br>[Discover and manage Shadow IT tutorial](/defender-cloud-apps/tutorial-shadow-it) |
+| CM.L2-3.4.8<br><br>**Practice statement:** Apply deny-by-exception (blocklist) policy to prevent the use of unauthorized software or deny-all, permit-by-exception (allowlist) policy to allow the execution of authorized software.<br><br>**Objectives:**<br>Determine if:<br>[a.] a policy specifying whether allowlist or blocklist is to be implemented is specified;<br>[b.] the software allowed to execute under allowlist or denied use under blocklist is specified; and<br>[c.] allowlist to allow the execution of authorized software or blocklist to prevent the use of unauthorized software is implemented as specified.<br><br>CM.L2-3.4.9<br><br>**Practice statement:** Control and monitor user-installed software.<br><br>**Objectives:**<br>Determine if:<br>[a.] a policy for controlling the installation of software by users is established;<br>[b.] installation of software by users is controlled based on the established policy; and<br>[c.] installation of software by users is monitored. | Configure MDM/configuration management policy to prevent the use of unauthorized software. Configure conditional access grant controls to require a compliant or hybrid Azure AD joined device, incorporating device compliance with MDM/configuration management policy into the conditional access authorization decision.<br>[What is Microsoft Intune](/mem/intune/fundamentals/what-is-intune)<br>[Conditional Access - Require compliant or hybrid joined devices](../conditional-access/howto-conditional-access-policy-compliant-device.md) |
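For the consent restrictions in CM.L2-3.4.7, the tenant-wide user consent setting is exposed on the Microsoft Graph `authorizationPolicy` singleton. As a minimal sketch, assuming a token with the `Policy.ReadWrite.Authorization` permission, the following reads the current self-consent grant policies and then clears them so that every consent request must go through the admin consent workflow:

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token-with-Policy.ReadWrite.Authorization>"  # placeholder

headers = {"Authorization": f"Bearer {TOKEN}"}

# Read the current setting: which permission-grant policies allow ordinary
# users to consent to applications on their own.
current = requests.get(
    f"{GRAPH}/policies/authorizationPolicy", headers=headers, timeout=30)
current.raise_for_status()
granted = current.json()["defaultUserRolePermissions"]["permissionGrantPoliciesAssigned"]
print("Self-consent policies currently assigned:", granted)

# Remove all self-consent grant policies; users must now request admin consent.
resp = requests.patch(
    f"{GRAPH}/policies/authorizationPolicy",
    headers=headers,
    json={"defaultUserRolePermissions": {"permissionGrantPoliciesAssigned": []}},
    timeout=30,
)
resp.raise_for_status()  # returns 204 No Content on success
```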
## Incident Response (IR)
The following table provides a list of practice statements and objectives, and Azure AD guidance and recommendations.
| CMMC practice statement and objectives | Azure AD guidance and recommendations |
| - | - |
-| IR.L2-3.6.1<br><br>**Practice statement:** Establish an operational incident-handling capability for organizational systems that includes preparation, detection, analysis, containment, recovery, and user response activities.<br><br>**Objectives:**<br>Determine if:<br>[a.] an operational incident-handling capability is established;<br>[b.] the operational incident-handling capability includes preparation;<br>[c.] the operational incident-handling capability includes detection;<br>[d.] the operational incident-handling capability includes analysis;<br>[e.] the operational incident-handling capability includes containment;<br>[f.] the operational incident-handling capability includes recovery; and<br>[g.] the operational incident-handling capability includes user response activities. | Implement incident handling and monitoring capabilities. The audit logs record all configuration changes. Authentication and authorization events are audited within the sign-in logs, and any detected risks are audited in the Identity Protection logs. You can stream each of these logs directly into a SIEM solution, such as Microsoft Sentinel. Alternatively, use Azure Event Hubs to integrate logs with third-party SIEM solutions.<br><br>**Audit events**<br>[Audit activity reports in the Azure Active Directory portal](/azure/active-directory/reports-monitoring/concept-audit-logs.md)<br>[Sign-in activity reports in the Azure Active Directory portal](/azure/active-directory/reports-monitoring/concept-sign-ins.md)<br>[How To: Investigate risk](/azure/active-directory/identity-protection/howto-identity-protection-investigate-risk.md)<br><br>**SIEM integrations**<br>[Microsoft Sentinel : Connect data from Azure Active Directory (Azure AD)](/azure/sentinel/connect-azure-active-directory.md)[Stream to Azure event hub and other SIEMs](/azure/active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md) |
+| IR.L2-3.6.1<br><br>**Practice statement:** Establish an operational incident-handling capability for organizational systems that includes preparation, detection, analysis, containment, recovery, and user response activities.<br><br>**Objectives:**<br>Determine if:<br>[a.] an operational incident-handling capability is established;<br>[b.] the operational incident-handling capability includes preparation;<br>[c.] the operational incident-handling capability includes detection;<br>[d.] the operational incident-handling capability includes analysis;<br>[e.] the operational incident-handling capability includes containment;<br>[f.] the operational incident-handling capability includes recovery; and<br>[g.] the operational incident-handling capability includes user response activities. | Implement incident handling and monitoring capabilities. The audit logs record all configuration changes. Authentication and authorization events are audited within the sign-in logs, and any detected risks are audited in the Identity Protection logs. You can stream each of these logs directly into a SIEM solution, such as Microsoft Sentinel. Alternatively, use Azure Event Hubs to integrate logs with third-party SIEM solutions.<br><br>**Audit events**<br>[Audit activity reports in the Azure Active Directory portal](../reports-monitoring/concept-audit-logs.md)<br>[Sign-in activity reports in the Azure Active Directory portal](../reports-monitoring/concept-sign-ins.md)<br>[How To: Investigate risk](../identity-protection/howto-identity-protection-investigate-risk.md)<br><br>**SIEM integrations**<br>[Microsoft Sentinel: Connect data from Azure Active Directory (Azure AD)](../../sentinel/connect-azure-active-directory.md)<br>[Stream to Azure event hub and other SIEMs](../reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md) |
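The Identity Protection detections referenced above can also be pulled for incident triage outside the portal. Here's a minimal sketch against the Graph `identityProtection/riskDetections` endpoint, assuming a token with the `IdentityRiskEvent.Read.All` permission; filtering to medium and high risk on the client side is an illustrative triage choice, not a requirement.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token-with-IdentityRiskEvent.Read.All>"  # placeholder

# Fetch recent risk detections and surface the ones worth immediate triage.
resp = requests.get(
    f"{GRAPH}/identityProtection/riskDetections?$top=50",
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()

for detection in resp.json()["value"]:
    if detection["riskLevel"] in ("medium", "high"):
        print(detection["detectedDateTime"],
              detection["userPrincipalName"],
              detection["riskEventType"],
              detection["riskState"])
```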
## Maintenance (MA)
The following table provides a list of practice statements and objectives, and Azure AD guidance and recommendations.
| CMMC practice statement and objectives | Azure AD guidance and recommendations |
| - | - |
| MA.L2-3.7.5<br><br>**Practice statement:** Require multifactor authentication to establish nonlocal maintenance sessions via external network connections and terminate such connections when nonlocal maintenance is complete.<br><br>**Objectives:**<br>Determine if:<br>[a.] multifactor authentication is used to establish nonlocal maintenance sessions via external network connections; and<br>[b.] nonlocal maintenance sessions established via external network connections are terminated when nonlocal maintenance is complete. | Accounts assigned administrative rights are targeted by attackers, including accounts used to establish non-local maintenance sessions. Requiring multifactor authentication (MFA) on those accounts is an easy way to reduce the risk of those accounts being compromised.<br>[Conditional Access - Require MFA for administrators](../conditional-access/howto-conditional-access-policy-admin-mfa.md) |
## Media Protection (MP)
-| MP.L2-3.8.7<br><br>**Practice statement:** Control the use of removable media on system components.<br><br>**Objectives:**<br>Determine if:<br>[a.] the use of removable media on system components is controlled. | Configure device management policies via MDM (such as Microsoft Intune), Microsoft Endpoint Manager (MEM) or group policy objects (GPO) to control the use of removable media on systems. Deploy and manage Removable Storage Access Control using Intune or Group Policy. Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](/azure/active-directory/conditional-access/concept-conditional-access-grant#require-device-to-be-marked-as-compliant.md)<br>[Require hybrid Azure AD joined device](/conditional-access/concept-conditional-access-grant#require-hybrid-azure-ad-joined-device.md)<br><br>**Intune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started.md)<br><br>**Removable storage access control**<br>[Deploy and manage Removable Storage Access Control using Intune](/microsoft-365/security/defender-endpoint/deploy-manage-removable-storage-intune?view=o365-worldwide&preserve-view=true)<br>[Deploy and manage Removable Storage Access Control using group policy](/microsoft-365/security/defender-endpoint/deploy-manage-removable-storage-group-policy?view=o365-worldwide&preserve-view=true) |
+| MP.L2-3.8.7<br><br>**Practice statement:** Control the use of removable media on system components.<br><br>**Objectives:**<br>Determine if:<br>[a.] the use of removable media on system components is controlled. | Configure device management policies via MDM (such as Microsoft Intune), Microsoft Endpoint Manager (MEM), or group policy objects (GPO) to control the use of removable media on systems. Deploy and manage Removable Storage Access Control using Intune or Group Policy. Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](../conditional-access/concept-conditional-access-grant.md)<br>[Require hybrid Azure AD joined device](../conditional-access/concept-conditional-access-grant.md)<br><br>**Intune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started)<br><br>**Removable storage access control**<br>[Deploy and manage Removable Storage Access Control using Intune](/microsoft-365/security/defender-endpoint/deploy-manage-removable-storage-intune?view=o365-worldwide&preserve-view=true)<br>[Deploy and manage Removable Storage Access Control using group policy](/microsoft-365/security/defender-endpoint/deploy-manage-removable-storage-group-policy?view=o365-worldwide&preserve-view=true) |
## Personnel Security (PS)
The following table provides a list of practice statement and objectives, and Az
| CMMC practice statement and objectives | Azure AD guidance and recommendations | | - | - |
-| PS.L2-3.9.2<br><br>**Practice statement:** Ensure that organizational systems containing CUI are protected during and after personnel actions such as terminations and transfers.<br><br>**Objectives:**<br>Determine if:<br>[a.] a policy and/or process for terminating system access and any credentials coincident with personnel actions is established;<br>[b.] system access and credentials are terminated consistent with personnel actions such as termination or transfer; and<br>[c] the system is protected during and after personnel transfer actions. | Configure provisioning (including disablement upon termination) of accounts in Azure AD from external HR systems, on-premises Active Directory, or directly in the cloud. Terminate all system access by revoking existing sessions.<br><br>**Account provisioning**<br>[What is identity provisioning with Azure AD?](/azure/active-directory/cloud-sync/what-is-provisioning.md)<br>[Azure AD Connect sync: Understand and customize synchronization](/azure/active-directory/hybrid/how-to-connect-sync-whatis.md)<br>[What is Azure AD Connect cloud sync?](/azure/active-directory/cloud-sync/what-is-cloud-sync.md)<br><br>**Revoke all associated authenticators**<br>[Revoke user access in an emergency in Azure Active Directory](/azure/active-directory/enterprise-users/users-revoke-access.md) |
+| PS.L2-3.9.2<br><br>**Practice statement:** Ensure that organizational systems containing CUI are protected during and after personnel actions such as terminations and transfers.<br><br>**Objectives:**<br>Determine if:<br>[a.] a policy and/or process for terminating system access and any credentials coincident with personnel actions is established;<br>[b.] system access and credentials are terminated consistent with personnel actions such as termination or transfer; and<br>[c] the system is protected during and after personnel transfer actions. | Configure provisioning (including disablement upon termination) of accounts in Azure AD from external HR systems, on-premises Active Directory, or directly in the cloud. Terminate all system access by revoking existing sessions.<br><br>**Account provisioning**<br>[What is identity provisioning with Azure AD?](../cloud-sync/what-is-provisioning.md)<br>[Azure AD Connect sync: Understand and customize synchronization](../hybrid/how-to-connect-sync-whatis.md)<br>[What is Azure AD Connect cloud sync?](../cloud-sync/what-is-cloud-sync.md)<br><br>**Revoke all associated authenticators**<br>[Revoke user access in an emergency in Azure Active Directory](../enterprise-users/users-revoke-access.md) |
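Where the row above calls for revoking existing sessions, one minimal sketch (assuming the caller holds sufficient Microsoft Graph privileges; the user object ID is a placeholder) is to invoke Graph's `revokeSignInSessions` action through the Azure CLI:

```azurecli-interactive
# Minimal sketch: invalidate a terminated user's refresh tokens and sessions.
az rest --method POST \
  --url "https://graph.microsoft.com/v1.0/users/<user-object-id>/revokeSignInSessions"
```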
## System and Communications Protection (SC)
The following table provides a list of practice statement and objectives, and Az
| CMMC practice statement and objectives | Azure AD guidance and recommendations | | - | - | | SC.L2-3.13.3<br><br>**Practice statement:** Separate user functionality from system management functionality.<br><br>**Objectives:**<br>Determine if:<br>[a.] user functionality is identified;<br>[b.] system management functionality is identified; and<br>[c.] user functionality is separated from system management functionality. | Maintain separate user accounts in Azure Active Directory for everyday productivity use and administrative or system/privileged management. Privileged accounts should be cloud-only or managed accounts and not synchronized from on-premises to protect the cloud environment from on-premises compromise. System/privileged access should only be permitted from a security-hardened privileged access workstation (PAW). Configure Conditional Access device filters to restrict access to administrative applications from PAWs that are enabled using Azure Virtual Desktop.<br>[Why are privileged access devices important](/security/compass/privileged-access-devices)<br>[Device Roles and Profiles](/security/compass/privileged-access-devices)<br>[Filter for devices as a condition in Conditional Access policy](../conditional-access/concept-condition-filters-for-devices.md)<br>[Azure Virtual Desktop](https://azure.microsoft.com/products/virtual-desktop/) |
-| SC.L2-3.13.4<br><br>**Practice statement:** Prevent unauthorized and unintended information transfer via shared system resources.<br><br>**Objectives:**<br>Determine if:<br>[a.] unauthorized and unintended information transfer via shared system resources is prevented. | Configure device management policies via MDM (such as Microsoft Intune), Microsoft Endpoint Manager (MEM) or group policy objects (GPO) to ensure devices are compliant with system hardening procedures. Include compliance with company policy regarding software patches to prevent attackers from exploiting flaws.<br><br>Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](/azure/active-directory/conditional-access/concept-conditional-access-grant.md)<br>[Require hybrid Azure AD joined device](/azure/active-directory/conditional-access/concept-conditional-access-grant.md)<br><br>**InTune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started.md) |
-| SC.L2-3.13.13<br><br>**Practice statement:** Control and monitor the use of mobile code.<br><br>**Objectives:**<br>Determine if:<br>[a.] use of mobile code is controlled; and<br>[b.] use of mobile code is monitored. | Configure device management policies via MDM (such as Microsoft Intune), Microsoft Endpoint Manager (MEM) or group policy objects (GPO) to disable the use of mobile code. Where use of mobile code is required monitor the use with endpoint security such as Microsoft Defender for Endpoint.<br><br>Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](/azure/active-directory/conditional-access/concept-conditional-access-grant.md)<br>[Require hybrid Azure AD joined device](/azure/active-directory/conditional-access/concept-conditional-access-grant.md)<br><br>**InTune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started.md)<br><br>**Defender for Endpoint**<br>[Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint?view=o365-worldwide&preserve-view=true) |
+| SC.L2-3.13.4<br><br>**Practice statement:** Prevent unauthorized and unintended information transfer via shared system resources.<br><br>**Objectives:**<br>Determine if:<br>[a.] unauthorized and unintended information transfer via shared system resources is prevented. | Configure device management policies via MDM (such as Microsoft Intune), Microsoft Endpoint Manager (MEM), or group policy objects (GPO) to ensure devices are compliant with system hardening procedures. Include compliance with company policy regarding software patches to prevent attackers from exploiting flaws.<br><br>Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](../conditional-access/concept-conditional-access-grant.md#require-device-to-be-marked-as-compliant)<br>[Require hybrid Azure AD joined device](../conditional-access/concept-conditional-access-grant.md#require-hybrid-azure-ad-joined-device)<br><br>**Intune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started) |
+| SC.L2-3.13.13<br><br>**Practice statement:** Control and monitor the use of mobile code.<br><br>**Objectives:**<br>Determine if:<br>[a.] use of mobile code is controlled; and<br>[b.] use of mobile code is monitored. | Configure device management policies via MDM (such as Microsoft Intune), Microsoft Endpoint Manager (MEM), or group policy objects (GPO) to disable the use of mobile code. Where use of mobile code is required, monitor its use with endpoint security such as Microsoft Defender for Endpoint.<br><br>Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](../conditional-access/concept-conditional-access-grant.md#require-device-to-be-marked-as-compliant)<br>[Require hybrid Azure AD joined device](../conditional-access/concept-conditional-access-grant.md#require-hybrid-azure-ad-joined-device)<br><br>**Intune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started)<br><br>**Defender for Endpoint**<br>[Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint?view=o365-worldwide&preserve-view=true) |
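The SC rows above repeatedly recommend Conditional Access device-compliance enforcement. As a hedged sketch (the display name is a placeholder and the policy starts in report-only mode), such a policy can be created through Microsoft Graph:

```azurecli-interactive
# Hedged sketch: report-only Conditional Access policy requiring a compliant device.
az rest --method POST \
  --url "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies" \
  --body '{
    "displayName": "Require compliant device (sketch)",
    "state": "enabledForReportingButNotEnforced",
    "conditions": {
      "users": { "includeUsers": ["All"] },
      "applications": { "includeApplications": ["All"] }
    },
    "grantControls": { "operator": "OR", "builtInControls": ["compliantDevice"] }
  }'
```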
## System and Information Integrity (SI)
The following table provides a list of practice statement and objectives, and Az
| CMMC practice statement and objectives | Azure AD guidance and recommendations | | - | - |
-| SI.L2-3.14.7<br><br>**Practice statement:**<br><br>**Objectives:** Identify unauthorized use of organizational systems.<br>Determine if:<br>[a.] authorized use of the system is defined; and<br>[b.] unauthorized use of the system is identified. | Consolidate telemetry: Azure AD logs to stream to SIEM, such as Azure Sentinel Configure device management policies via MDM (such as Microsoft Intune), Microsoft Endpoint Manager (MEM), or group policy objects (GPO) to require Intrusion Detection/Protection (IDS/IPS) such as Microsoft Defender for Endpoint is installed and in use. Use telemetry provided by the IDS/IPS to identify unusual activities or conditions related to inbound and outbound communications traffic or unauthorized use.<br><br>Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](/azure/active-directory/conditional-access/concept-conditional-access-grant.md)<br>[Require hybrid Azure AD joined device](/azure/active-directory/conditional-access/concept-conditional-access-grant.md)<br><br>**InTune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started.md)<br><br>**Defender for Endpoint**<br>[Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint?view=o365-worldwide&preserve-view=true) |
+| SI.L2-3.14.7<br><br>**Practice statement:** Identify unauthorized use of organizational systems.<br><br>**Objectives:**<br>Determine if:<br>[a.] authorized use of the system is defined; and<br>[b.] unauthorized use of the system is identified. | Consolidate telemetry: stream Azure AD logs to a SIEM, such as Azure Sentinel. Configure device management policies via MDM (such as Microsoft Intune), Microsoft Endpoint Manager (MEM), or group policy objects (GPO) to require that intrusion detection/prevention (IDS/IPS) software, such as Microsoft Defender for Endpoint, is installed and in use. Use telemetry provided by the IDS/IPS to identify unusual activities or conditions related to inbound and outbound communications traffic or unauthorized use.<br><br>Configure Conditional Access policies to enforce device compliance.<br><br>**Conditional Access**<br>[Require device to be marked as compliant](../conditional-access/concept-conditional-access-grant.md#require-device-to-be-marked-as-compliant)<br>[Require hybrid Azure AD joined device](../conditional-access/concept-conditional-access-grant.md#require-hybrid-azure-ad-joined-device)<br><br>**Intune**<br>[Device compliance policies in Microsoft Intune](/mem/intune/protect/device-compliance-get-started)<br><br>**Defender for Endpoint**<br>[Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint?view=o365-worldwide&preserve-view=true) |
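For the telemetry consolidation step, Azure AD sign-in logs can be routed to a Log Analytics workspace (which a SIEM such as Azure Sentinel reads) with a tenant-level diagnostic setting. The setting name and resource paths below are placeholders, and the `microsoft.aadiam` endpoint shape is an assumption used to illustrate the call:

```azurecli-interactive
# Hedged sketch: subscription, resource group, and workspace IDs are placeholders.
az rest --method PUT \
  --url "https://management.azure.com/providers/microsoft.aadiam/diagnosticSettings/siem-export?api-version=2017-04-01" \
  --body '{
    "properties": {
      "workspaceId": "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace>",
      "logs": [
        { "category": "SignInLogs", "enabled": true, "retentionPolicy": { "enabled": false, "days": 0 } }
      ]
    }
  }'
```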
### Next steps
active-directory Configure Cmmc Level 2 Identification And Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/configure-cmmc-level-2-identification-and-authentication.md
The following table provides a list of practice statement and objectives, and Az
| CMMC practice statement and objectives | Azure AD guidance and recommendations | | - | - |
-| IA.L2-3.5.3<br><br>**Practice statement:** Use multifactor authentication for local and network access to privileged accounts and for network access to non-privileged accounts. <br><br>**Objectives:**<br>Determine if:<br>[a.] privileged accounts are identified;<br>[b.] multifactor authentication is implemented for local access to privileged accounts;<br>[c.] multifactor authentication is implemented for network access to privileged accounts; and<br>[d.] multifactor authentication is implemented for network access to non-privileged accounts. | The following items are definitions for the terms used for this control area:<li>**Local Access** - Access to an organizational information system by a user (or process acting on behalf of a user) communicating through a direct connection without the use of a network.<li>**Network Access** - Access to an information system by a user (or a process acting on behalf of a user) communicating through a network (for example, local area network, wide area network, Internet).<li>**Privileged User** - A user that's authorized (and therefore, trusted) to perform security-relevant functions that ordinary users aren't authorized to perform.<br><br>Breaking down the previous requirement means:<li>All users are required MFA for network/remote access.<li>Only privileged users are required MFA for local access. If regular user accounts have administrative rights only on their computers, they're not a ΓÇ£privileged accountΓÇ¥ and don't require MFA for local access.<br><br> You're responsible for configuring Conditional Access to require multifactor authentication. Enable Azure AD Authentication methods that meet AAL2 and higher.<br>[Grant controls in Conditional Access policy - Azure Active Directory](/azure/active-directory/conditional-access/concept-conditional-access-grant.md)<br>[Achieve NIST authenticator assurance levels with Azure Active Directory](/azure/active-directory/standards/nist-overview.md)<br>[Authentication methods and features - Azure Active Directory](/azure/active-directory/authentication/concept-authentication-methods.md) |
-| IA.L2-3.5.4<br><br>**Practice statement:** Employ replay-resistant authentication mechanisms for network access to privileged and non-privileged accounts.<br><br>**Objectives:**<br>Determine if:<br>[a.] replay-resistant authentication mechanisms are implemented for network account access to privileged and non-privileged accounts. | All Azure AD Authentication methods at AAL2 and above are replay resistant.<br>[Achieve NIST authenticator assurance levels with Azure Active Directory](/azure/active-directory/standards/nist-overview.md) |
+| IA.L2-3.5.3<br><br>**Practice statement:** Use multifactor authentication for local and network access to privileged accounts and for network access to non-privileged accounts.<br><br>**Objectives:**<br>Determine if:<br>[a.] privileged accounts are identified;<br>[b.] multifactor authentication is implemented for local access to privileged accounts;<br>[c.] multifactor authentication is implemented for network access to privileged accounts; and<br>[d.] multifactor authentication is implemented for network access to non-privileged accounts. | The following items are definitions for the terms used for this control area:<li>**Local Access** - Access to an organizational information system by a user (or process acting on behalf of a user) communicating through a direct connection without the use of a network.<li>**Network Access** - Access to an information system by a user (or a process acting on behalf of a user) communicating through a network (for example, local area network, wide area network, Internet).<li>**Privileged User** - A user that's authorized (and therefore, trusted) to perform security-relevant functions that ordinary users aren't authorized to perform.<br><br>Breaking down the previous requirement means:<li>All users require MFA for network/remote access.<li>Only privileged users require MFA for local access. If regular user accounts have administrative rights only on their computers, they're not a "privileged account" and don't require MFA for local access.<br><br>You're responsible for configuring Conditional Access to require multifactor authentication. Enable Azure AD authentication methods that meet AAL2 and higher.<br>[Grant controls in Conditional Access policy - Azure Active Directory](../conditional-access/concept-conditional-access-grant.md)<br>[Achieve NIST authenticator assurance levels with Azure Active Directory](./nist-overview.md)<br>[Authentication methods and features - Azure Active Directory](../authentication/concept-authentication-methods.md) |
+| IA.L2-3.5.4<br><br>**Practice statement:** Employ replay-resistant authentication mechanisms for network access to privileged and non-privileged accounts.<br><br>**Objectives:**<br>Determine if:<br>[a.] replay-resistant authentication mechanisms are implemented for network account access to privileged and non-privileged accounts. | All Azure AD Authentication methods at AAL2 and above are replay resistant.<br>[Achieve NIST authenticator assurance levels with Azure Active Directory](./nist-overview.md) |
| IA.L2-3.5.5<br><br>**Practice statement:** Prevent reuse of identifiers for a defined period.<br><br>**Objectives:**<br>Determine if:<br>[a.] a period within which identifiers can't be reused is defined; and<br>[b.] reuse of identifiers is prevented within the defined period. | All user, group, and device object globally unique identifiers (GUIDs) are guaranteed unique and non-reusable for the lifetime of the Azure AD tenant.<br>[user resource type - Microsoft Graph v1.0](/graph/api/resources/user?view=graph-rest-1.0&preserve-view=true)<br>[group resource type - Microsoft Graph v1.0](/graph/api/resources/group?view=graph-rest-1.0&preserve-view=true)<br>[device resource type - Microsoft Graph v1.0](/graph/api/resources/device?view=graph-rest-1.0&preserve-view=true) |
-| IA.L2-3.5.6<br><br>**Practice statement:** Disable identifiers after a defined period of inactivity.<br><br>**Objectives:**<br>Determine if:<br>[a.] a period of inactivity after which an identifier is disabled is defined; and<br>[b.] identifiers are disabled after the defined period of inactivity. | Implement account management automation with Microsoft Graph and Azure AD PowerShell. Use Microsoft Graph to monitor sign-in activity and Azure AD PowerShell to take action on accounts within the required time frame.<br><br>**Determine inactivity**<br>[Manage inactive user accounts in Azure AD](/azure/active-directory/reports-monitoring/howto-manage-inactive-user-accounts.md)<br>[Manage stale devices in Azure AD](/azure/active-directory/devices/manage-stale-devices.md)<br><br>**Remove or disable accounts**<br>[Working with users in Microsoft Graph](/graph/api/resources/users.md)<br>[Get a user](/graph/api/user-get?tabs=http)<br>[Update user](/graph/api/user-update?tabs=http)<br>[Delete a user](/graph/api/user-delete?tabs=http)<br><br>**Work with devices in Microsoft Graph**<br>[Get device](/graph/api/device-get?tabs=http)<br>[Update device](/graph/api/device-update?tabs=http)<br>[Delete device](/graph/api/device-delete?tabs=http)<br><br>**[Use Azure AD PowerShell](/powershell/module/azuread/)**<br>[Get-AzureADUser](/powershell/module/azuread/get-azureaduser.md)<br>[Set-AzureADUser](/powershell/module/azuread/set-azureaduser.md)<br>[Get-AzureADDevice](/powershell/module/azuread/get-azureaddevice.md)<br>[Set-AzureADDevice](/powershell/module/azuread/set-azureaddevice.md) |
+| IA.L2-3.5.6<br><br>**Practice statement:** Disable identifiers after a defined period of inactivity.<br><br>**Objectives:**<br>Determine if:<br>[a.] a period of inactivity after which an identifier is disabled is defined; and<br>[b.] identifiers are disabled after the defined period of inactivity. | Implement account management automation with Microsoft Graph and Azure AD PowerShell. Use Microsoft Graph to monitor sign-in activity and Azure AD PowerShell to take action on accounts within the required time frame (a minimal inactivity sweep is sketched after this table).<br><br>**Determine inactivity**<br>[Manage inactive user accounts in Azure AD](../reports-monitoring/howto-manage-inactive-user-accounts.md)<br>[Manage stale devices in Azure AD](../devices/manage-stale-devices.md)<br><br>**Remove or disable accounts**<br>[Working with users in Microsoft Graph](/graph/api/resources/users)<br>[Get a user](/graph/api/user-get?tabs=http)<br>[Update user](/graph/api/user-update?tabs=http)<br>[Delete a user](/graph/api/user-delete?tabs=http)<br><br>**Work with devices in Microsoft Graph**<br>[Get device](/graph/api/device-get?tabs=http)<br>[Update device](/graph/api/device-update?tabs=http)<br>[Delete device](/graph/api/device-delete?tabs=http)<br><br>**[Use Azure AD PowerShell](/powershell/module/azuread/)**<br>[Get-AzureADUser](/powershell/module/azuread/get-azureaduser)<br>[Set-AzureADUser](/powershell/module/azuread/set-azureaduser)<br>[Get-AzureADDevice](/powershell/module/azuread/get-azureaddevice)<br>[Set-AzureADDevice](/powershell/module/azuread/set-azureaddevice) |
| IA.L2-3.5.7<br><br>**Practice statement:** Enforce a minimum password complexity and change of characters when new passwords are created.<br><br>**Objectives:**<br>Determine if:<br>[a.] password complexity requirements are defined;<br>[b.] password change of character requirements are defined;<br>[c.] minimum password complexity requirements as defined are enforced when new passwords are created; and<br>[d.] minimum password change of character requirements as defined are enforced when new passwords are created.<br><br>IA.L2-3.5.8<br><br>**Practice statement:** Prohibit password reuse for a specified number of generations.<br><br>**Objectives:**<br>Determine if:<br>[a.] the number of generations during which a password cannot be reused is specified; and<br>[b.] reuse of passwords is prohibited during the specified number of generations. | We **strongly encourage** passwordless strategies. This control is only applicable to password authenticators, so removing passwords as an available authenticator renders this control not applicable.<br><br>Per NIST SP 800-63B Section 5.1.1: Maintain a list of commonly used, expected, or compromised passwords.<br><br>With Azure AD password protection, default global banned password lists are automatically applied to all users in an Azure AD tenant. To support your business and security needs, you can define entries in a custom banned password list. When users change or reset their passwords, these banned password lists are checked to enforce the use of strong passwords.<br>For customers that require strict password character change, password reuse, and complexity requirements, use hybrid accounts configured with password hash sync. This action ensures the passwords synchronized to Azure AD inherit the restrictions configured in Active Directory password policies. Further protect on-premises passwords by configuring on-premises Azure AD Password Protection for Active Directory Domain Services.<br>[NIST Special Publication 800-63B](https://pages.nist.gov/800-63-3/sp800-63b.html)<br>[NIST Special Publication 800-53 Revision 5 (IA-5 Control enhancement (1))](https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-53r5.pdf)<br>[Eliminate bad passwords using Azure AD password protection](../authentication/concept-password-ban-bad.md)<br>[What is password hash synchronization with Azure AD?](../hybrid/whatis-phs.md) |
-| IA.L2-3.5.9<br><br>**Practice statement:** Allow temporary password use for system logons with an immediate change to a permanent password.<br><br>**Objectives:**<br>Determine if:<br>[a.] an immediate change to a permanent password is required when a temporary password is used for system sign-on. | An Azure AD user initial password is a temporary single use password that once successfully used is immediately required to be changed to a permanent password. Microsoft strongly encourages the adoption of passwordless authentication methods. Users can bootstrap Passwordless authentication methods using Temporary Access Pass (TAP). TAP is a time and use limited passcode issued by an admin that satisfies strong authentication requirements. Use of passwordless authentication along with the time and use limited TAP completely eliminates the use of passwords (and their reuse).<br>[Add or delete users - Azure Active Directory](/azure/active-directory/fundamentals/add-users-azure-active-directory.md)<br>[Configure a Temporary Access Pass in Azure AD to register Passwordless authentication methods](/azure/active-directory/authentication/howto-authentication-temporary-access-pass.md)<br>[Passwordless authentication](/security/business/solutions/passwordless-authentication?ef_id=369464fc2ba818d0bd6507de2cde3d58:G:s&OCID=AIDcmmdamuj0pc_SEM_369464fc2ba818d0bd6507de2cde3d58:G:s&msclkid=369464fc2ba818d0bd6507de2cde3d58) |
+| IA.L2-3.5.9<br><br>**Practice statement:** Allow temporary password use for system logons with an immediate change to a permanent password.<br><br>**Objectives:**<br>Determine if:<br>[a.] an immediate change to a permanent password is required when a temporary password is used for system sign-on. | An Azure AD user's initial password is a temporary, single-use password that must be changed to a permanent password immediately after its first successful use. Microsoft strongly encourages the adoption of passwordless authentication methods. Users can bootstrap passwordless authentication methods using a Temporary Access Pass (TAP). A TAP is a time- and use-limited passcode issued by an admin that satisfies strong authentication requirements. Use of passwordless authentication along with the time- and use-limited TAP completely eliminates the use of passwords (and their reuse).<br>[Add or delete users - Azure Active Directory](../fundamentals/add-users-azure-active-directory.md)<br>[Configure a Temporary Access Pass in Azure AD to register Passwordless authentication methods](../authentication/howto-authentication-temporary-access-pass.md)<br>[Passwordless authentication](/security/business/solutions/passwordless-authentication?ef_id=369464fc2ba818d0bd6507de2cde3d58:G:s&OCID=AIDcmmdamuj0pc_SEM_369464fc2ba818d0bd6507de2cde3d58:G:s&msclkid=369464fc2ba818d0bd6507de2cde3d58) |
| IA.L2-3.5.10<br><br>**Practice statement:** Store and transmit only cryptographically protected passwords.<br><br>**Objectives:**<br>Determine if:<br>[a.] passwords are cryptographically protected in storage; and<br>[b.] passwords are cryptographically protected in transit. | **Secret Encryption at Rest**:<br>In addition to disk-level encryption, when at rest, secrets stored in the directory are encrypted using the Distributed Key Manager (DKM). The encryption keys are stored in the Azure AD core store and, in turn, are encrypted with a scale unit key. The key is stored in a container that is protected with directory ACLs, for the highest privileged users and specific services. The symmetric key is typically rotated every six months. Access to the environment is further protected with operational controls and physical security.<br><br>**Encryption in Transit**:<br>To assure data security, directory data in Azure AD is signed and encrypted while in transit between data centers within a scale unit. The data is encrypted and decrypted by the Azure AD core store tier, which resides inside secured server hosting areas of the associated Microsoft data centers.<br><br>Customer-facing web services are secured with the Transport Layer Security (TLS) protocol.<br>For more information, [download](https://azure.microsoft.com/resources/azure-active-directory-data-security-considerations/) *Data Protection Considerations - Data Security*. On page 15, there are more details.<br>[Demystifying Password Hash Sync (microsoft.com)](https://www.microsoft.com/security/blog/2019/05/30/demystifying-password-hash-sync/)<br>[Azure Active Directory Data Security Considerations](https://aka.ms/aaddatawhitepaper) | |IA.L2-3.5.11<br><br>**Practice statement:** Obscure feedback of authentication information.<br><br>**Objectives:**<br>Determine if:<br>[a.] authentication information is obscured during the authentication process. | By default, Azure AD obscures all authenticator feedback. |
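As flagged in the IA.L2-3.5.6 row, a minimal inactivity sweep can be scripted against Microsoft Graph. This is a sketch under assumptions, not a documented procedure: it presumes `signInActivity` is readable with the caller's permissions (an Azure AD Premium capability), the cutoff date is a placeholder, and only the first page of results is handled.

```azurecli-interactive
# Minimal sketch: disable accounts with no sign-in since a cutoff date.
# Requires AuditLog.Read.All and User.ReadWrite.All; paging omitted for brevity.
CUTOFF="2022-10-01T00:00:00Z"
az rest --method GET \
  --url "https://graph.microsoft.com/v1.0/users?\$filter=signInActivity/lastSignInDateTime%20le%20${CUTOFF}&\$select=id" \
  --query "value[].id" -o tsv |
while read -r id; do
  az rest --method PATCH \
    --url "https://graph.microsoft.com/v1.0/users/${id}" \
    --body '{"accountEnabled": false}'
done
```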
The following table provides a list of practice statement and objectives, and Az
* [Configure Azure Active Directory for CMMC compliance](configure-azure-active-directory-for-cmmc-compliance.md) * [Configure CMMC Level 1 controls](configure-cmmc-level-1-controls.md) * [Configure CMMC Level 2 Access Control (AC) controls](configure-cmmc-level-2-access-control.md)
-* [Configure CMMC Level 2 additional controls](configure-cmmc-level-2-additional-controls.md)
+* [Configure CMMC Level 2 additional controls](configure-cmmc-level-2-additional-controls.md)
active-directory Partner Vu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/partner-vu.md
To learn more about VU Security and its complete set of solutions, visit
To get started with the VU Identity Card, ensure the following prerequisites are met: -- A tenant [configured](/azure/active-directory/verifiable-credentials/verifiable-credentials-configure-tenant)
+- A tenant [configured](./verifiable-credentials-configure-tenant.md)
for Entra Verified ID service. - If you don't have an existing tenant, you can [create an Azure
User flow is specific to your application or website. However if you are using o
## Next steps - [Verifiable credentials admin API](admin-api.md)-- [Request Service REST API issuance specification](issuance-request-api.md)
+- [Request Service REST API issuance specification](issuance-request-api.md)
aks Azure Cni Overlay https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-cni-overlay.md
Use the traditional VNet option when:
The overlay solution has the following limitations today:
-* Only available for Linux and not for Windows.
* You can't deploy multiple overlay clusters on the same subnet. * Overlay can be enabled only for new clusters. Existing (already deployed) clusters can't be configured to use overlay. * You can't use Application Gateway as an Ingress Controller (AGIC) for an overlay cluster.
aks Configure Kubenet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-kubenet.md
You can create an AKS cluster using a system-assigned managed identity by runnin
az aks create \ --resource-group myResourceGroup \ --name myAKSCluster \
- --node-count 3 \
--network-plugin kubenet \
- --vnet-subnet-id $SUBNET_ID
+ --service-cidr 10.0.0.0/16 \
+ --dns-service-ip 10.0.0.10 \
+ --pod-cidr 10.244.0.0/16 \
+ --docker-bridge-address 172.17.0.1/16 \
+ --vnet-subnet-id $SUBNET_ID
```
+* The *--service-cidr* is optional. This address is used to assign internal services in the AKS cluster an IP address. This IP address range should be an address space that isn't in use elsewhere in your network environment, including any on-premises network ranges if you connect, or plan to connect, your Azure virtual networks using ExpressRoute or a Site-to-Site VPN connection.
+
+* The *--dns-service-ip* is optional. The address should be the *.10* address of your service IP address range.
+
+* The *--pod-cidr* is optional. This address should be a large address space that isn't in use elsewhere in your network environment, including any on-premises network ranges if you connect, or plan to connect, your Azure virtual networks using ExpressRoute or a Site-to-Site VPN connection.
+ * This address range must be large enough to accommodate the number of nodes that you expect to scale up to. You can't change this address range once the cluster is deployed if you need more addresses for additional nodes. A quick capacity check is sketched after this list.
+ * The pod IP address range is used to assign a */24* address space to each node in the cluster. In the following example, the *--pod-cidr* of *10.244.0.0/16* assigns the first node *10.244.0.0/24*, the second node *10.244.1.0/24*, and the third node *10.244.2.0/24*.
+ * As the cluster scales or upgrades, the Azure platform continues to assign a pod IP address range to each new node.
+
+* The *--docker-bridge-address* is optional. The address lets the AKS nodes communicate with the underlying management platform. This IP address must not be within the virtual network IP address range of your cluster, and shouldn't overlap with other address ranges in use on your network.
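To make the */24*-per-node arithmetic concrete, here's the quick capacity check referenced above (an illustration only; it assumes one /24 per node as described):

```azurecli-interactive
# A /16 pod CIDR yields 2^(24-16) = 256 node-sized /24 blocks.
POD_CIDR_PREFIX=16
echo "Maximum nodes for this pod CIDR: $(( 2 ** (24 - POD_CIDR_PREFIX) ))"   # 256
```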
> [!Note] > If you wish to enable an AKS cluster to include a [Calico network policy][calico-network-policies] you can use the following command.
aks Load Balancer Standard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/load-balancer-standard.md
description: Learn how to use a public load balancer with a Standard SKU to expose your services with Azure Kubernetes Service (AKS). ++ Last updated 12/19/2022-- #Customer intent: As a cluster operator or developer, I want to learn how to create a service in AKS that uses an Azure Load Balancer with a Standard SKU.
You can customize different settings for your standard public load balancer at c
> [!IMPORTANT] > Only one outbound IP option (managed IPs, bring your own IP, or IP prefix) can be used at a given time.
+### Change the inbound pool type (PREVIEW)
+
+AKS nodes can be referenced in the load balancer backend pools by either their IP configuration (VMSS-based membership) or by their IP address only. IP address-based backend pool membership is more efficient when updating services and provisioning load balancers, especially at high node counts. Provisioning new clusters with IP-based backend pools and converting existing clusters are both now supported. When combined with NAT Gateway or user-defined routing egress types, provisioning of new nodes and services is more performant.
+
+Two different pool membership types are available:
+
+- `nodeIPConfiguration` - legacy VMSS IP configuration based pool membership type
+- `nodeIP` - IP-based membership type
+
+#### Requirements
+
+* The `aks-preview` extension must be at least version 0.5.103.
+* The AKS cluster must be version 1.23 or newer.
+* The AKS cluster must be using standard load balancers and virtual machine scale sets.
+
+#### Limitations
+
+* Clusters using IP-based backend pools are limited to 2500 nodes.
++
+#### Install the aks-preview CLI extension
+
+```azurecli-interactive
+# Install the aks-preview extension
+az extension add --name aks-preview
+
+# Update the extension to make sure you have the latest version installed
+az extension update --name aks-preview
+```
+
+#### Register the `IPBasedLoadBalancerPreview` preview feature
+
+To create an AKS cluster with IP-based backend pools, you must enable the `IPBasedLoadBalancerPreview` feature flag on your subscription.
+
+Register the `IPBasedLoadBalancerPreview` feature flag by using the `az feature register` command, as shown in the following example:
+
+```azurecli-interactive
+az feature register --namespace "Microsoft.ContainerService" --name "IPBasedLoadBalancerPreview"
+```
+
+It takes a few minutes for the status to show *Registered*. Verify the registration status by using the `az feature list` command:
+
+```azurecli-interactive
+az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/IPBasedLoadBalancerPreview')].{Name:name,State:properties.state}"
+```
+
+When the feature has been registered, refresh the registration of the *Microsoft.ContainerService* resource provider by using the `az provider register` command:
+
+```azurecli-interactive
+az provider register --namespace Microsoft.ContainerService
+```
+
+#### Create a new AKS cluster with IP-based inbound pool membership
+
+```azurecli-interactive
+az aks create \
+ --resource-group myResourceGroup \
+ --name myAKSCluster \
+ --load-balancer-backend-pool-type=nodeIP
+```
+
+#### Update an existing AKS cluster to use IP-based inbound pool membership
+
+> [!WARNING]
+> This operation will cause a temporary disruption to incoming service traffic in the cluster. The impact time will increase with larger clusters that have many nodes.
+
+```azurecli-interactive
+az aks update \
+ --resource-group myResourceGroup \
+ --name myAKSCluster \
+ --load-balancer-backend-pool-type=nodeIP
+```
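After updating, you can confirm the cluster reports the new membership type. Treat the property path below as an assumption about the managed cluster resource shape rather than a documented contract:

```azurecli-interactive
# Hedged check: the backendPoolType property path is assumed, not documented here.
az aks show \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --query "networkProfile.loadBalancerProfile.backendPoolType" -o tsv
```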
+ ### Scale the number of managed outbound public IPs Azure Load Balancer provides outbound and inbound connectivity from a virtual network. Outbound rules make it simple to configure network address translation for the public standard load balancer.
aks Node Updates Kured https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-updates-kured.md
helm repo update
kubectl create namespace kured # Install kured in that namespace with Helm 3 (only on Linux nodes, kured is not working on Windows nodes)
-helm install my-release kubereboot/kured --namespace kured --set nodeSelector."kubernetes.io/os"=linux
+helm install my-release kubereboot/kured --namespace kured --set nodeSelector."kubernetes\.io/os"=linux
``` You can also configure additional parameters for `kured`, such as integration with Prometheus or Slack. For more information about additional configuration parameters, see the [kured Helm chart][kured-install].
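As one hedged illustration of those extra parameters, additional chart values can be passed with more `--set` flags. The `configuration.*` keys and endpoint URLs below are assumptions and placeholders, not verified settings; check the chart's values file before relying on them.

```
# Hedged sketch: extra kured values (keys and URLs are assumptions/placeholders).
helm install my-release kubereboot/kured --namespace kured \
  --set nodeSelector."kubernetes\.io/os"=linux \
  --set configuration.slackHookUrl=https://hooks.slack.com/services/<webhook-id> \
  --set configuration.prometheusUrl=http://prometheus-server.monitoring.svc.cluster.local
```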
aks Support Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/support-policies.md
Although you can sign in to and change agent nodes, doing this operation is disc
You may only customize the NSGs on custom subnets. You may not customize NSGs on managed subnets or at the NIC level of the agent nodes. AKS has egress requirements to specific endpoints; to control egress and ensure the necessary connectivity, see [limit egress traffic](limit-egress-traffic.md). For ingress, the requirements are based on the applications you have deployed to the cluster.
-## Stopped or de-allocated clusters
+## Stopped, de-allocated, and "Not Ready" nodes
-As stated earlier, manually de-allocating all cluster nodes via the IaaS APIs/CLI/portal renders the cluster out of support. The only supported way to stop/de-allocate all nodes is to [stop the AKS cluster](start-stop-cluster.md#stop-an-aks-cluster), which preserves the cluster state for up to 12 months.
+If you do not need your AKS workloads to run continuously, you can [stop the AKS cluster](start-stop-cluster.md#stop-an-aks-cluster), which stops all node pools and the control plane, and start it again when needed. When you stop a cluster using the `az aks stop` command, the cluster state is preserved for up to 12 months. After 12 months, the cluster state and all of its resources are deleted.
-Clusters that are stopped for more than 12 months will no longer preserve state.
+Manually de-allocating all cluster nodes via the IaaS APIs/CLI/portal is not a supported way to stop an AKS cluster or node pool. The cluster will be considered out of support and will be stopped by AKS after 30 days. These clusters are then subject to the same 12-month preservation policy as a correctly stopped cluster.
-Clusters that are de-allocated outside of the AKS APIs have no state preservation guarantees. The control planes for clusters in this state will be archived after 30 days, and deleted after 12 months.
+Clusters with 0 "Ready" nodes (or all "Not Ready") and 0 Running VMs will be stopped after 30 days.
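For reference, the supported stop/start path uses the standard `az aks` verbs (resource names below are placeholders):

```azurecli-interactive
# Supported way to stop a cluster (preserves state for up to 12 months) and resume it.
az aks stop --resource-group myResourceGroup --name myAKSCluster
az aks start --resource-group myResourceGroup --name myAKSCluster
```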
AKS reserves the right to archive control planes that have been configured out of support guidelines for extended periods equal to and beyond 30 days. AKS maintains backups of cluster etcd metadata and can readily reallocate the cluster. This reallocation can be initiated by any PUT operation bringing the cluster back into support, such as an upgrade or scale to active agent nodes.
-If your subscription is suspended or deleted, your cluster's control plane and state will be deleted after 90 days.
+All clusters in a suspended or deleted subscription will be stopped immediately and deleted after 30 days.
## Unsupported alpha and beta Kubernetes features
api-management Api Management Howto Configure Custom Domain Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-configure-custom-domain-gateway.md
When you provision a [self-hosted Azure API Management gateway](self-hosted-gateway-overview.md), it is not assigned a host name and has to be referenced by its IP address. This article shows how to map an existing custom DNS name (also referred to as hostname) to a self-hosted gateway. + ## Prerequisites To perform the steps described in this article, you must have:
api-management Api Management Howto Provision Self Hosted Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-provision-self-hosted-gateway.md
Provisioning a gateway resource in your Azure API Management instance is a prerequisite for deploying a self-hosted gateway. This article walks through the steps to provision a gateway resource in API Management. + ## Prerequisites Complete the following quickstart: [Create an Azure API Management instance](get-started-create-service-instance.md)
api-management Identity Provider Adal Retirement Sep 2025 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/breaking-changes/identity-provider-adal-retirement-sep-2025.md
Your service is impacted by this change if:
## What is the deadline for the change?
-On 30 September, 2025, these identity providers will stop functioning. To avoid disruption of your developer portal, you need to update your Azure AD applications and identity provider configuration in Azure API Management by that date. Your developer portal might be at a security risk after Microsoft ADAL support ends in June 1, 2023. Learn more in [the official announcement](/azure/active-directory/fundamentals/whats-new#adal-end-of-support-announcement).
+On 30 September, 2025, these identity providers will stop functioning. To avoid disruption of your developer portal, you need to update your Azure AD applications and identity provider configuration in Azure API Management by that date. Your developer portal might be at a security risk after Microsoft ADAL support ends on June 1, 2023. Learn more in [the official announcement](../../active-directory/fundamentals/whats-new.md#adal-end-of-support-announcement).
Developer portal sign-in and sign-up with Azure AD or Azure AD B2C will stop working past 30 September, 2025 if you don't update your ADAL-based Azure AD or Azure AD B2C identity providers. This new authentication method is more secure, as it relies on the OAuth 2.0 authorization code flow with PKCE and uses an up-to-date software library.
If you have questions, get answers from community experts in [Microsoft Q&A](htt
## Next steps
-See all [upcoming breaking changes and feature retirements](overview.md).
+See all [upcoming breaking changes and feature retirements](overview.md).
api-management Devops Api Development Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/devops-api-development-templates.md
Review [Automated API deployments with APIOps][28] in the Azure Architecture Cen
[26]: https://github.com/microsoft/api-guidelines/blob/vNext/azure/Guidelines.md [27]: https://github.com/Azure/azure-api-style-guide [28]: /azure/architecture/example-scenario/devops/automated-api-deployments-apiops
-[29]: /azure/api-management/api-management-howto-properties
-[30]: /azure/api-management/backends
+[29]: ./api-management-howto-properties.md
+[30]: ./backends.md
api-management How To Configure Cloud Metrics Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-configure-cloud-metrics-logs.md
This article provides details for configuring cloud metrics and logs for the [se
The self-hosted gateway has to be associated with an API management service and requires outbound TCP/IP connectivity to Azure on port 443. The gateway leverages the outbound connection to send telemetry to Azure, if configured to do so. + ## Metrics By default, the self-hosted gateway emits a number of metrics through [Azure Monitor](https://azure.microsoft.com/services/monitor/), same as the managed gateway [in the cloud](api-management-howto-use-azure-monitor.md).
api-management How To Configure Local Metrics Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-configure-local-metrics-logs.md
This article provides details for configuring local metrics and logs for the [self-hosted gateway](./self-hosted-gateway-overview.md) deployed on a Kubernetes cluster. For configuring cloud metrics and logs, see [this article](how-to-configure-cloud-metrics-logs.md). + ## Metrics The self-hosted gateway supports [StatsD](https://github.com/statsd/statsd), which has become a unifying protocol for metrics collection and aggregation. This section walks through the steps for deploying StatsD to Kubernetes, configuring the gateway to emit metrics via StatsD, and using [Prometheus](https://prometheus.io/) to monitor the metrics.
api-management How To Deploy Self Hosted Gateway Azure Arc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-deploy-self-hosted-gateway-azure-arc.md
Deploying the API Management gateway on an Azure Arc-enabled Kubernetes cluster
> [!NOTE] > You can also deploy the self-hosted gateway [directly to Kubernetes](./how-to-deploy-self-hosted-gateway-azure-kubernetes-service.md). + ## Prerequisites * [Connect your Kubernetes cluster](../azure-arc/kubernetes/quickstart-connect-cluster.md) within a supported Azure Arc region.
api-management How To Deploy Self Hosted Gateway Azure Kubernetes Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-deploy-self-hosted-gateway-azure-kubernetes-service.md
This article provides the steps for deploying self-hosted gateway component of A
> [!NOTE] > You can also deploy self-hosted gateway to an [Azure Arc-enabled Kubernetes cluster](how-to-deploy-self-hosted-gateway-azure-arc.md) as a [cluster extension](../azure-arc/kubernetes/extensions.md). + ## Prerequisites - [Create an Azure API Management instance](get-started-create-service-instance.md)
api-management How To Deploy Self Hosted Gateway Docker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-deploy-self-hosted-gateway-docker.md
This article provides the steps for deploying self-hosted gateway component of A
> [!NOTE] > Hosting self-hosted gateway in Docker is best suited for evaluation and development use cases. Kubernetes is recommended for production use. Learn how to [deploy with Helm](how-to-deploy-self-hosted-gateway-kubernetes-helm.md) or using [deployment YAML file](how-to-deploy-self-hosted-gateway-kubernetes.md) to learn how to deploy self-hosted gateway to Kubernetes. + ## Prerequisites - Complete the following quickstart: [Create an Azure API Management instance](get-started-create-service-instance.md)
api-management How To Deploy Self Hosted Gateway Kubernetes Helm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-deploy-self-hosted-gateway-kubernetes-helm.md
This article provides the steps for deploying self-hosted gateway component of A
> [!NOTE] > You can also deploy self-hosted gateway to an [Azure Arc-enabled Kubernetes cluster](how-to-deploy-self-hosted-gateway-azure-arc.md) as a [cluster extension](../azure-arc/kubernetes/extensions.md). + ## Prerequisites - Create a Kubernetes cluster, or have access to an existing one.
api-management How To Deploy Self Hosted Gateway Kubernetes Opentelemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-deploy-self-hosted-gateway-kubernetes-opentelemetry.md
You learn how to:
> * Generate metrics by consuming APIs on the self-hosted gateway. > * Use the metrics from the OpenTelemetry Collector. + ## Prerequisites - [Create an Azure API Management instance](get-started-create-service-instance.md)
api-management How To Deploy Self Hosted Gateway Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-deploy-self-hosted-gateway-kubernetes.md
This article describes the steps for deploying the self-hosted gateway component
> [!NOTE] > You can also deploy self-hosted gateway to an [Azure Arc-enabled Kubernetes cluster](how-to-deploy-self-hosted-gateway-azure-arc.md) as a [cluster extension](../azure-arc/kubernetes/extensions.md). + ## Prerequisites - Complete the following quickstart: [Create an Azure API Management instance](get-started-create-service-instance.md).
api-management How To Self Hosted Gateway On Kubernetes In Production https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-self-hosted-gateway-on-kubernetes-in-production.md
This article provides guidance on how to run [self-hosted gateway](./self-hosted
[!INCLUDE [preview](./includes/preview/preview-callout-self-hosted-gateway-deprecation.md)] + ## Access token Without a valid access token, a self-hosted gateway can't access and download configuration data from the endpoint of the associated API Management service. The access token can be valid for a maximum of 30 days. It must be regenerated, and the cluster configured with a fresh token, either manually or via automation before it expires.
api-management Self Hosted Gateway Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/self-hosted-gateway-migration-guide.md
This article explains how to migrate existing self-hosted gateway deployments to
> [!IMPORTANT] > Support for Azure API Management self-hosted gateway version 0 and version 1 container images is ending on 1 October 2023, along with its corresponding Configuration API v1. [Learn more in our deprecation documentation](./breaking-changes/self-hosted-gateway-v0-v1-retirement-oct-2023.md) + ## What's new? As we strive to make it easier for customers to deploy our self-hosted gateway, we've **introduced a new configuration API** that removes the dependency on Azure Storage, unless you're using [API inspector](api-management-howto-api-inspector.md) or quotas.
api-management Self Hosted Gateway Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/self-hosted-gateway-overview.md
This article explains how the self-hosted gateway feature of Azure API Managemen
For an overview of the features across the various gateway offerings, see [API gateway in API Management](api-management-gateways-overview.md#feature-comparison-managed-versus-self-hosted-gateways). + ## Hybrid and multi-cloud API management The self-hosted gateway feature expands API Management support for hybrid and multi-cloud environments and enables organizations to efficiently and securely manage APIs hosted on-premises and across clouds from a single API Management service in Azure.
app-service Configure Language Php https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-php.md
az webapp list-runtimes --os linux | grep PHP
::: zone pivot="platform-windows"
-Run the following command in the [Cloud Shell](https://shell.azure.com) to set the PHP version to 8.0:
+Run the following command in the [Cloud Shell](https://shell.azure.com) to set the PHP version to 7.4:
```azurecli-interactive
-az webapp config set --resource-group <resource-group-name> --name <app-name> --php-version 8.0
+az webapp config set --resource-group <resource-group-name> --name <app-name> --php-version 7.4
``` ::: zone-end
application-gateway Tutorial Protect Application Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-protect-application-gateway.md
This article helps you create an Azure Application Gateway with a DDoS protected virtual network. Azure DDoS Protection Standard enables enhanced DDoS mitigation capabilities such as adaptive tuning, attack alert notifications, and monitoring to protect your application gateways from large scale DDoS attacks. > [!IMPORTANT]
-> Azure DDoS protection Standard incurs a cost per public IP address in the virtual network where you enable the service. Ensure you delete the resources in this tutorial if you aren't using the resources in the future. For information about pricing, see [Azure DDoS Protection Pricing](https://azure.microsoft.com/pricing/details/ddos-protection/). For more information about Azure DDoS protection, see [What is Azure DDoS Protection?](/azure/ddos-protection/ddos-protection-overview).
+> Azure DDoS Protection Standard incurs a cost per public IP address in the virtual network where you enable the service. Ensure you delete the resources in this tutorial if you aren't going to use them in the future. For information about pricing, see [Azure DDoS Protection Pricing](https://azure.microsoft.com/pricing/details/ddos-protection/). For more information about Azure DDoS Protection, see [What is Azure DDoS Protection?](../ddos-protection/ddos-protection-overview.md).
In this tutorial, you learn how to:
To delete the resource group:
Advance to the next article to learn how to: > [!div class="nextstepaction"]
-> [Configure an application gateway with TLS termination using the Azure portal](create-ssl-portal.md)
+> [Configure an application gateway with TLS termination using the Azure portal](create-ssl-portal.md)
attestation Custom Tcb Baseline Enforcement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/custom-tcb-baseline-enforcement.md
# Custom TCB baseline enforcement for SGX attestation
-Microsoft Azure Attestation is a unified solution for attesting different types of Trusted Execution Environments (TEEs) such as [Intel® Software Guard Extensions](https://www.intel.com/content/www/us/en/architecture-and-technology/software-guard-extensions.html) (SGX) enclaves. While attesting SGX enclaves, Azure Attestation validates the evidence against Azure default Trusted Computing Base (TCB) baseline. The default TCB baseline is provided by an Azure service named [Trusted Hardware Identity Management](/azure/security/fundamentals/trusted-hardware-identity-management) (THIM) and includes collateral fetched from Intel like certificate revocation lists (CRLs), Intel certificates, Trusted Computing Base (TCB) information and Quoting Enclave identity (QEID). The default TCB baseline from THIM lags the latest baseline offered by Intel and is expected to remain at tcbEvaluationDataNumber 10.
+Microsoft Azure Attestation is a unified solution for attesting different types of Trusted Execution Environments (TEEs) such as [Intel® Software Guard Extensions](https://www.intel.com/content/www/us/en/architecture-and-technology/software-guard-extensions.html) (SGX) enclaves. While attesting SGX enclaves, Azure Attestation validates the evidence against Azure default Trusted Computing Base (TCB) baseline. The default TCB baseline is provided by an Azure service named [Trusted Hardware Identity Management](../security/fundamentals/trusted-hardware-identity-management.md) (THIM) and includes collateral fetched from Intel like certificate revocation lists (CRLs), Intel certificates, Trusted Computing Base (TCB) information and Quoting Enclave identity (QEID). The default TCB baseline from THIM lags the latest baseline offered by Intel and is expected to remain at tcbEvaluationDataNumber 10.
-The custom TCB baseline enforcement feature in Azure Attestation will enable you to perform SGX attestation against a desired TCB baseline, as opposed to the Azure default TCB baseline which is applied across [Azure Confidential Computing](/azure/confidential-computing/) (ACC) fleet today.
+The custom TCB baseline enforcement feature in Azure Attestation will enable you to perform SGX attestation against a desired TCB baseline, as opposed to the Azure default TCB baseline which is applied across [Azure Confidential Computing](../confidential-computing/index.yml) (ACC) fleet today.
## Why use custom TCB baseline enforcement feature?
Minimum PSW Windows version: "2.7.101.2"
### New users
-1. Create an attestation provider using Azure portal experience. [Details here](/azure/attestation/quickstart-portal#create-and-configure-the-provider-with-unsigned-policies)
+1. Create an attestation provider using Azure portal experience. [Details here](./quickstart-portal.md#create-and-configure-the-provider-with-unsigned-policies)
-2. Go to overview page and view the current default policy of the attestation provider. [Details here](/azure/attestation/quickstart-portal#view-an-attestation-policy)
+2. Go to overview page and view the current default policy of the attestation provider. [Details here](./quickstart-portal.md#view-an-attestation-policy)
3. Click on **View current and available TCB baselines for attestation**, view **Available TCB baselines**, identify the desired TCB identifier and click Cancel
Minimum PSW Windows version: "2.7.101.2"
Shared provider users need to migrate to custom providers to be able to perform attestation against custom TCB baseline
-1. Create an attestation provider using Azure portal experience. [Details here](/azure/attestation/quickstart-portal#create-and-configure-the-provider-with-unsigned-policies)
+1. Create an attestation provider using Azure portal experience. [Details here](./quickstart-portal.md#create-and-configure-the-provider-with-unsigned-policies)
-2. Go to overview page and view the current default policy of the attestation provider. [Details here](/azure/attestation/quickstart-portal#view-an-attestation-policy)
+2. Go to overview page and view the current default policy of the attestation provider. [Details here](./quickstart-portal.md#view-an-attestation-policy)
3. Click on **View current and available TCB baselines for attestation**, view **Available TCB baselines**, identify the desired TCB identifier and click Cancel
Shared provider users need to migrate to custom providers to be able to perform
### Existing custom provider users
-1. Go to overview page and view the current default policy of the attestation provider. [Details here](/azure/attestation/quickstart-portal#view-an-attestation-policy)
+1. Go to overview page and view the current default policy of the attestation provider. [Details here](./quickstart-portal.md#view-an-attestation-policy)
2. Click on **View current and available TCB baselines for attestation**, view **Available TCB baselines**, identify the desired TCB identifier and click Cancel
c:[type=="x-ms-attestation-type"] => issue(type="tee", value=c.value);
- If the PSW version of the ACC node is lower than the minimum PSW version of the TCB baseline configured in the SGX attestation policy, attestation scenarios will fail
- If the PSW version of the ACC node is greater than or equal to the minimum PSW version of the TCB baseline configured in the SGX attestation policy, attestation scenarios will pass
- For customers who don't configure a custom TCB baseline in the attestation policy, attestation is performed against the Azure default TCB baseline
-- For customers using an attestation policy without configurationrules section, attestation will be performed against the Azure default TCB baseline
+- For customers using an attestation policy without a `configurationrules` section, attestation is performed against the Azure default TCB baseline
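To make the `configurationrules` behavior concrete, here's a hedged sketch of pinning an attestation provider to a chosen baseline with the Az.Attestation PowerShell module. The provider and resource group names and the `tcbidentifier` value "10" are placeholders, and the exact policy grammar and cmdlet parameters should be confirmed against the attestation policy reference.

```powershell
# Illustrative sketch only: pin SGX attestation to a chosen TCB baseline by
# adding a configurationrules section to the policy (Az.Attestation module
# assumed; all names and the tcbidentifier value are placeholders)
$policy = @'
version = 1.2;
configurationrules {
    => issueproperty(type = "x-ms-sgx-tcbidentifier", value = "10");
};
authorizationrules {
    => permit();
};
issuancerules {
    c:[type=="x-ms-attestation-type"] => issue(type="tee", value=c.value);
};
'@

Set-AzAttestationPolicy -Name "<attestationProvider>" -ResourceGroupName "<resourceGroup>" `
    -Tee SgxEnclave -Policy $policy -PolicyFormat Text
```

If the `configurationrules` section is omitted, attestation falls back to the Azure default TCB baseline, as described above.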
automation Extension Based Hybrid Runbook Worker Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/extension-based-hybrid-runbook-worker-install.md
You can delete an empty Hybrid Runbook Worker group from the portal.
## Automatic upgrade of extension
-Hybrid Worker extension supports [Automatic upgrade](/azure/virtual-machines/automatic-extension-upgrade) of minor versions by default. We recommend that you enable Automatic upgrades to take advantage of any security or feature updates without manual overhead. However, to prevent the extension from automatically upgrading (for example, if there is a strict change windows and can only be updated at specific time), you can opt out of this feature by setting the `enableAutomaticUpgrade`property in ARM, Bicep template, PowerShell cmdlets to *false*. Set the same property to *true* whenever you want to re-enable the Automatic upgrade.
+Hybrid Worker extension supports [Automatic upgrade](../virtual-machines/automatic-extension-upgrade.md) of minor versions by default. We recommend that you enable automatic upgrades to take advantage of any security or feature updates without manual overhead. However, to prevent the extension from automatically upgrading (for example, if there is a strict change window and the extension can only be updated at a specific time), you can opt out of this feature by setting the `enableAutomaticUpgrade` property in the ARM template, Bicep template, or PowerShell cmdlets to *false*. Set the same property to *true* whenever you want to re-enable automatic upgrade.
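As an illustrative sketch, toggling the flag on an existing Arc-enabled machine might look like the following; the cmdlet and `-EnableAutomaticUpgrade` parameter are assumed from the Az.ConnectedMachine module, and the machine and extension names are placeholders.

```powershell
# Hedged sketch: opt a Hybrid Worker extension on an Arc-enabled machine out of
# automatic minor-version upgrades (Az.ConnectedMachine module assumed;
# machine and extension names are placeholders)
Update-AzConnectedMachineExtension -ResourceGroupName "<VMResourceGroupName>" `
    -MachineName "<arcMachineName>" -Name "<hybridWorkerExtensionName>" `
    -EnableAutomaticUpgrade:$false
```

Passing `-EnableAutomaticUpgrade:$true` with the same cmdlet would re-enable automatic upgrade later.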
```powershell $extensionType = "HybridWorkerForLinux/HybridWorkerForWindows"
New-AzConnectedMachineExtension -ResourceGroupName <VMResourceGroupName> -Locati
#### [Bicep template](#tab/bicep-template)
-You can use the Bicep template to create a new Hybrid Worker group, create a new Azure Windows VM and add it to an existing Hybrid Worker Group. Learn more about [Bicep](/azure/azure-resource-manager/bicep/overview)
+You can use the Bicep template to create a new Hybrid Worker group, create a new Azure Windows VM, and add it to an existing Hybrid Worker group. Learn more about [Bicep](../azure-resource-manager/bicep/overview.md).
```Bicep param automationAccount string
Using [VM insights](../azure-monitor/vm/vminsights-overview.md), you can monitor
- To learn about Azure VM extensions, see [Azure VM extensions and features for Windows](../virtual-machines/extensions/features-windows.md) and [Azure VM extensions and features for Linux](../virtual-machines/extensions/features-linux.md). - To learn about VM extensions for Arc-enabled servers, see [VM extension management with Azure Arc-enabled servers](../azure-arc/servers/manage-vm-extensions.md).-- To learn about VM extensions for Arc-enabled VMware vSphere VMs, see [Manage VMware VMs in Azure through Arc-enabled VMware vSphere (preview)](../azure-arc/vmware-vsphere/manage-vmware-vms-in-azure.md).
+- To learn about VM extensions for Arc-enabled VMware vSphere VMs, see [Manage VMware VMs in Azure through Arc-enabled VMware vSphere (preview)](../azure-arc/vmware-vsphere/manage-vmware-vms-in-azure.md).
automation Start Stop Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/start-stop-vm.md
# Troubleshoot Start/Stop VMs during off-hours issues

> [!NOTE]
-> Start/Stop VM during off-hours, version 1 is deprecated and unavailable in the marketplace now. We recommend that you start using [version 2](/azure/azure-functions/start-stop-vms/overview), which is now generally available. The new version offers all existing capabilities and provides new features, such as multi-subscription support from a single Start/Stop instance. If you have the version 1 solution already deployed, you can still use the feature, and we will provide support until further announcement.
+> Start/Stop VM during off-hours, version 1 is deprecated and unavailable in the marketplace now. We recommend that you start using [version 2](../../azure-functions/start-stop-vms/overview.md), which is now generally available. The new version offers all existing capabilities and provides new features, such as multi-subscription support from a single Start/Stop instance. If you have the version 1 solution already deployed, you can still use the feature, and we will provide support until further announcement.
This article provides information on troubleshooting and resolving issues that arise when you deploy the Azure Automation Start/Stop VMs during off-hours feature on your VMs.
If you don't see your problem here or you can't resolve your issue, try one of t
* Get answers from Azure experts through [Azure Forums](https://azure.microsoft.com/support/forums/).
* Connect with [@AzureSupport](https://twitter.com/azuresupport), the official Microsoft Azure account for improving customer experience. Azure Support connects the Azure community to answers, support, and experts.
-* File an Azure support incident. Go to the [Azure support site](https://azure.microsoft.com/support/options/), and select **Get Support**.
+* File an Azure support incident. Go to the [Azure support site](https://azure.microsoft.com/support/options/), and select **Get Support**.
azure-app-configuration Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api.md
Last updated 11/28/2022
# Azure App Configuration Data Plane REST API
-The documentation on the [control plane](/azure/azure-resource-manager/management/control-plane-and-data-plane#control-plane) REST API for Azure App Configuration is available in the [Azure REST documentation](/rest/api/appconfiguration/). The following reference pages describe the [data plane](/azure/azure-resource-manager/management/control-plane-and-data-plane#data-plane) REST API for Azure App Configuration. The data plane REST API is available at the endpoint of an App Configuration store, for example, `https://{store-name}.azconfig.io`.
+The documentation on the [control plane](../azure-resource-manager/management/control-plane-and-data-plane.md#control-plane) REST API for Azure App Configuration is available in the [Azure REST documentation](/rest/api/appconfiguration/). The following reference pages describe the [data plane](../azure-resource-manager/management/control-plane-and-data-plane.md#data-plane) REST API for Azure App Configuration. The data plane REST API is available at the endpoint of an App Configuration store, for example, `https://{store-name}.azconfig.io`.
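As a quick illustration of calling that endpoint, the hedged PowerShell sketch below lists key-values from the data plane REST API. It assumes an Az PowerShell session and Azure AD authentication enabled on the store; the store name is a placeholder.

```powershell
# Hedged sketch: list key-values via the App Configuration data plane REST API
# (assumes Get-AzAccessToken from Az.Accounts and Azure AD auth on the store;
# the store name is a placeholder)
$endpoint = "https://<store-name>.azconfig.io"
$token = (Get-AzAccessToken -ResourceUrl $endpoint).Token
Invoke-RestMethod -Uri "$endpoint/kv?api-version=1.0" -Headers @{ Authorization = "Bearer $token" }
```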
## Resources
The documentation on the [control plane](/azure/azure-resource-manager/managemen
## Development

- [Fiddler](./rest-api-fiddler.md)
-- [Postman](./rest-api-postman.md)
+- [Postman](./rest-api-postman.md)
azure-arc Validation Program https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/validation-program.md
To see how all Azure Arc-enabled components are validated, see [Validation progr
|Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL server version |
|--|--|--|--|--|
|HPE Superdome Flex 280|1.20.0|1.8.0_2022-06-14|16.0.41.7339|12.3 (Ubuntu 12.3-1)|
-|HPE Apollo 4200 Gen10 Plus (directly connected mode) |1.7.18 <sup>*</sup>|1.11.0_2022-09-13 |16.0.312.4243|12.3 (Ubuntu 12.3-1)|
-|HPE Apollo 4200 Gen10 Plus (indirectly connected mode) |1.22.6 <sup>*</sup>|v1.10.0_2022-08-09 |16.0.312.4243|12.3 (Ubuntu 12.3-1)|
-
-<sup>*</sup>Azure Kubernetes Service (AKS) on Azure Stack HCI
+|HPE Apollo 4200 Gen10 Plus | 1.22.6 | v1.11.0_2022-09-13 |16.0.312.4243|12.3 (Ubuntu 12.3-1)|
### Kublr
azure-arc Azure Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/azure-rbac.md
Follow the instructions to sign in again. An error message states that you're su
## Configure just-in-time cluster access with Azure AD
-Another option for cluster access control is to use [Privileged Identity Management (PIM)](/azure/active-directory/privileged-identity-management/pim-configure) for just-in-time requests.
+Another option for cluster access control is to use [Privileged Identity Management (PIM)](../../active-directory/privileged-identity-management/pim-configure.md) for just-in-time requests.
>[!NOTE]
-> [Azure AD PIM](/azure/active-directory/privileged-identity-management/pim-configure) is an Azure AD Premium capability that requires a Premium P2 SKU. For more on Azure AD SKUs, see the [pricing guide](https://azure.microsoft.com/pricing/details/active-directory/).
+> [Azure AD PIM](../../active-directory/privileged-identity-management/pim-configure.md) is an Azure AD Premium capability that requires a Premium P2 SKU. For more on Azure AD SKUs, see the [pricing guide](https://azure.microsoft.com/pricing/details/active-directory/).
To configure just-in-time access requests for your cluster, complete the following steps:
az connectedk8s enable-features -n <clusterName> -g <resourceGroupName> --featur
## Next steps

- Securely connect to the cluster by using [Cluster Connect](cluster-connect.md).
-- Read about the [architecture of Azure RBAC on Arc-enabled Kubernetes](conceptual-azure-rbac.md).
+- Read about the [architecture of Azure RBAC on Arc-enabled Kubernetes](conceptual-azure-rbac.md).
azure-arc Tutorial Use Gitops Flux2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-use-gitops-flux2.md
False whl k8s-extension C:\Users\somename\.azure\c
* `Microsoft.KubernetesConfiguration/extensions`
* `Microsoft.KubernetesConfiguration/fluxConfigurations`
-* [Registration](/azure/azure-resource-manager/management/resource-providers-and-types#azure-portal) of the following Azure resource providers:
+* [Registration](../../azure-resource-manager/management/resource-providers-and-types.md#azure-portal) of the following Azure resource providers:
  * Microsoft.ContainerService
  * Microsoft.Kubernetes
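As a hedged sketch, the providers above (plus the `Microsoft.KubernetesConfiguration` namespace behind the resource types listed earlier) can also be registered from PowerShell instead of the portal; the Az.Resources module is assumed.

```powershell
# Hedged sketch: register the resource providers listed above from PowerShell
# (Az.Resources module assumed; equivalent to the portal registration steps)
"Microsoft.Kubernetes", "Microsoft.ContainerService", "Microsoft.KubernetesConfiguration" |
    ForEach-Object { Register-AzResourceProvider -ProviderNamespace $_ }
```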
az k8s-extension create -g <cluster_resource_group> -c <cluster_name> -t <connec
#### Using Kubelet identity as authentication method for AKS clusters
-When working with AKS clusters, one of the authentication options to use is kubelet identity. By default, AKS creates its own kubelet identity in the managed resource group. If you prefer, you can use a [pre-created kubelet managed identity](/azure/aks/use-managed-identity#use-a-pre-created-kubelet-managed-identity). To do so, add the parameter `--config useKubeletIdentity=true` at the time of Flux extension installation.
+When working with AKS clusters, one of the authentication options to use is kubelet identity. By default, AKS creates its own kubelet identity in the managed resource group. If you prefer, you can use a [pre-created kubelet managed identity](../../aks/use-managed-identity.md#use-a-pre-created-kubelet-managed-identity). To do so, add the parameter `--config useKubeletIdentity=true` at the time of Flux extension installation.
```azurecli az k8s-extension create --resource-group <resource-group> --cluster-name <cluster-name> --cluster-type managedClusters --name flux --extension-type microsoft.flux --config useKubeletIdentity=true
az k8s-extension delete -g <resource-group> -c <cluster-name> -n flux -t managed
## Next steps

* Read more about [configurations and GitOps](conceptual-gitops-flux2.md).
-* Learn how to [use Azure Policy to enforce GitOps at scale](./use-azure-policy-flux-2.md).
+* Learn how to [use Azure Policy to enforce GitOps at scale](./use-azure-policy-flux-2.md).
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/overview.md
# What is Azure Arc resource bridge (preview)?
-Azure Arc resource bridge (preview) is part of the core Azure Arc platform, and is designed to host other Azure Arc services. In this release, the resource bridge supports VM self-servicing and management from Azure, for virtualized Windows and Linux virtual machines hosted in an on-premises environment on [Azure Stack HCI](/azure-stack/hci/manage/azure-arc-vm-management-overview), VMware ([Arc-enabled VMware vSphere](/azure/azure-arc/vmware-vsphere/) preview), and System Center Virtual Machine Manager (SCVMM) ([Arc-enabled SCVMM](/azure/azure-arc/system-center-virtual-machine-manager/) preview).
+Azure Arc resource bridge (preview) is part of the core Azure Arc platform, and is designed to host other Azure Arc services. In this release, the resource bridge supports VM self-servicing and management from Azure, for virtualized Windows and Linux virtual machines hosted in an on-premises environment on [Azure Stack HCI](/azure-stack/hci/manage/azure-arc-vm-management-overview), VMware ([Arc-enabled VMware vSphere](../vmware-vsphere/index.yml) preview), and System Center Virtual Machine Manager (SCVMM) ([Arc-enabled SCVMM](../system-center-virtual-machine-manager/index.yml) preview).
Arc resource bridge is a packaged virtual machine that hosts a *management* Kubernetes cluster and requires no user management. The virtual machine is deployed on the on-premises infrastructure, and an ARM resource of Arc resource bridge is created in Azure. The two resources are then connected, allowing VM self-service and management from Azure. The on-premises resource bridge uses guest management to tag local resources, making them available in Azure.
You may need to allow specific URLs to [ensure outbound connectivity is not bloc
## Next steps

* Learn more about [how Azure Arc-enabled VMware vSphere extends Azure's governance and management capabilities to VMware vSphere infrastructure](../vmware-vsphere/overview.md).
-* Learn more about [provisioning and managing on-premises Windows and Linux VMs running on Azure Stack HCI clusters](/azure-stack/hci/manage/azure-arc-enabled-virtual-machines).
+* Learn more about [provisioning and managing on-premises Windows and Linux VMs running on Azure Stack HCI clusters](/azure-stack/hci/manage/azure-arc-enabled-virtual-machines).
azure-arc Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/prerequisites.md
Azure Arc-enabled servers support the installation of the Connected Machine agen
Azure Arc-enabled servers do not support installing the agent on virtual machines running in Azure, or on virtual machines running on Azure Stack Hub or Azure Stack Edge, as they are already modeled as Azure VMs and able to be managed directly in Azure.
+> [!NOTE]
+> For additional information on using Arc-enabled servers in VMware environments, see the [VMware FAQ](vmware-faq.md).
+
## Supported operating systems

The following versions of the Windows and Linux operating systems are officially supported for the Azure Connected Machine agent. Only x86-64 (64-bit) architectures are supported. x86 (32-bit) and ARM-based architectures, including x86-64 emulation on arm64, are not supported operating environments.
azure-arc Vmware Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/vmware-faq.md
+
+ Title: Azure Arc-enabled servers VMware Frequently Asked Questions
+description: Learn how to use Azure Arc-enabled servers on virtual machines running in VMware environments.
+Last updated: 12/21/2022
+# Azure Arc-enabled servers VMware Frequently Asked Questions
+
+This article addresses frequently asked questions about Arc-enabled servers on virtual machines running in VMware environments.
+
+## What is Azure Arc?
+
+Azure Arc is the overarching brand for a suite of Azure hybrid products that extend specific Azure public cloud services and/or management capabilities beyond Azure to on-premises environments and third-party clouds. Azure Arc-enabled server, for example, allows you to use the same Azure management tools with a VM running on-premises in a VMware cluster as you would with a VM running in Azure.
+
+## What's the difference between Arc-enabled server and Arc-enabled \<hypervisor\>?
+
+> [!NOTE]
+> Arc-enabled \<hypervisor\> refers to Arc-enabled VMware environments such as Arc-enabled VMware vSphere. **Arc-enabled VMware vSphere is currently in Public Preview**.
+
+The easiest way to think of this is as follows:
+
+- Arc-enabled server is responsible for the guest operating system and knows nothing of the virtualization platform that it's running on. Since Arc-enabled server also supports bare-metal machines, there may, in fact, not even be a host hypervisor.
+
+- Arc-enabled VMware vSphere is a superset of Arc-enabled server that extends management capabilities beyond the guest operating system to the VM itself. This provides lifecycle management such as VM start, stop, resize, create, and delete. These lifecycle management capabilities are exposed in the Azure portal and look and feel just like a regular Azure VM. See [What is Azure Arc-enabled VMware vSphere](../vmware-vsphere/overview.md) to learn more.
+
+> [!NOTE]
+> Arc-enabled VMware vSphere also provides guest operating system management; in fact, it uses the same components as Arc-enabled server. However, during Public Preview, not all Azure services supported by Arc-enabled server are available for Arc-enabled VMware vSphere. Currently, Azure Monitor, Update Management, and Microsoft Defender for Cloud are not supported. Arc-enabled VMware vSphere is not supported by Azure VMware Solution (AVS).
+>
+
+## Can I use Azure Arc-enabled server on VMs running in VMware environments?
+
+Yes. Azure Arc-enabled server works with VMs running on VMware vSphere as well as Azure VMware Solution (AVS) and supports the full breadth of guest management capabilities across security, monitoring, and governance.
+
+## Which operating systems does Azure Arc work with?
+
+Arc-enabled server and/or Arc-enabled \<hypervisor\> works with all supported versions of Windows Server and major distributions of Linux.
+
+
+## Should I use Arc-enabled server or Arc-enabled \<hypervisor\>, and can I use both?
+
+While Arc-enabled server and Arc-enabled VMware vSphere can be used in conjunction with one another, note that this produces dual representations of the same underlying virtual machine. This scenario may produce duplicate guest management and is not advisable.
+
azure-cache-for-redis Cache Retired Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-retired-features.md
If you don't upgrade your Redis 4 cache by June 30, 2023, the cache is automatic
Cloud Service version 4 caches can't be upgraded to version 6 until they're migrated to a cache based on Azure Virtual Machine Scale Set.
-For more information, see [Caches with a dependency on Cloud Services (classic)](/azure/azure-cache-for-redis/cache-faq).
+For more information, see [Caches with a dependency on Cloud Services (classic)](./cache-faq.yml).
Starting on April 30, 2023, Cloud Service caches receive only critical security updates and critical bug fixes. Cloud Service caches won't support any new features released after April 30, 2023. We highly recommend migrating your caches to Azure Virtual Machine Scale Set.
No, the upgrade can't be rolled back.
## Next steps

- [What's new](cache-whats-new.md)
-- [Azure Cache for Redis FAQ](cache-faq.yml)
+- [Azure Cache for Redis FAQ](cache-faq.yml)
azure-functions Dotnet Isolated In Process Differences https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/dotnet-isolated-in-process-differences.md
Use the following table to compare feature and functional differences between th
| Feature/behavior | In-process<sup>3</sup> | Isolated worker process |
| - | - | - |
-| [Supported .NET versions](./dotnet-isolated-process-guide.md#supported-versions) | Long Term Support (LTS) versions | All supported versions + .NET Framework |
+| [Supported .NET versions](dotnet-isolated-process-guide.md#supported-versions) | Long Term Support (LTS) versions | [All supported versions](dotnet-isolated-process-guide.md#supported-versions) + .NET Framework |
| Core packages | [Microsoft.NET.Sdk.Functions](https://www.nuget.org/packages/Microsoft.NET.Sdk.Functions/) | [Microsoft.Azure.Functions.Worker](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker/)<br/>[Microsoft.Azure.Functions.Worker.Sdk](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Sdk) |
| Binding extension packages | [Microsoft.Azure.WebJobs.Extensions.*](https://www.nuget.org/packages?q=Microsoft.Azure.WebJobs.Extensions) | [Microsoft.Azure.Functions.Worker.Extensions.*](https://www.nuget.org/packages?q=Microsoft.Azure.Functions.Worker.Extensions) |
| Durable Functions | [Supported](durable/durable-functions-overview.md) | [Supported (public preview)](https://github.com/microsoft/durabletask-dotnet#usage-with-azure-functions) |
-| Model types exposed by bindings | Simple types<br/>JSON serializable types<br/>Arrays/enumerations<br/>Service SDK types such as [BlobClient]<br/>`IAsyncCollector` (for output bindings) | Simple types<br/>JSON serializable types<br/>Arrays/enumerations |
-| HTTP trigger model types| [HttpRequest]/[ObjectResult] | [HttpRequestData]/[HttpResponseData] |
+| Model types exposed by bindings | Simple types<br/>[JSON serializable](/dotnet/api/system.text.json.jsonserializeroptions) types<br/>Arrays/enumerations<br/>Service SDK types such as [BlobClient](/dotnet/api/azure.storage.blobs.blobclient)<br/>`IAsyncCollector` (for output bindings) | Simple types<br/>JSON serializable types<br/>Arrays/enumerations |
+| HTTP trigger model types| [HttpRequest](/dotnet/api/system.net.http.httpclient) / [ObjectResult](/dotnet/api/microsoft.aspnetcore.mvc.objectresult) | [HttpRequestData](/dotnet/api/microsoft.azure.functions.worker.http.httprequestdata?view=azure-dotnet&preserve-view=true) / [HttpResponseData](/dotnet/api/microsoft.azure.functions.worker.http.httpresponsedata?view=azure-dotnet&preserve-view=true) |
| Output binding interaction | Return values (single output only)<br/>`out` parameters<br/>`IAsyncCollector` | Return values (expanded model with single or [multiple outputs](dotnet-isolated-process-guide.md#multiple-output-bindings)) |
| Imperative bindings<sup>1</sup> | [Supported](functions-dotnet-class-library.md#binding-at-runtime) | Not supported |
| Dependency injection | [Supported](functions-dotnet-dependency-injection.md) | [Supported](dotnet-isolated-process-guide.md#dependency-injection) |
| Middleware | Not supported | [Supported](dotnet-isolated-process-guide.md#middleware) |
-| Logging | [ILogger] passed to the function<br/>[ILogger&lt;T&gt;] via dependency injection | [ILogger]/[ILogger&lt;T&gt;] obtained from [FunctionContext] or via [dependency injection](dotnet-isolated-process-guide.md#dependency-injection)|
+| Logging | [ILogger] passed to the function<br/>[ILogger&lt;T&gt;] via [dependency injection](functions-dotnet-dependency-injection.md) | [ILogger]/[ILogger&lt;T&gt;] obtained from [FunctionContext](/dotnet/api/microsoft.azure.functions.worker.functioncontext) or via [dependency injection](dotnet-isolated-process-guide.md#dependency-injection)|
| Application Insights dependencies | [Supported](functions-monitoring.md#dependencies) | [Supported (public preview)](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.ApplicationInsights) |
| Cancellation tokens | [Supported](functions-dotnet-class-library.md#cancellation-tokens) | [Supported](dotnet-isolated-process-guide.md#cancellation-tokens) |
| Cold start times<sup>2</sup> | (Baseline) | Additionally includes process launch |
To learn more, see:
+ [Develop .NET class library functions](functions-dotnet-class-library.md)
+ [Develop .NET isolated worker process functions](dotnet-isolated-process-guide.md)
+[ILogger]: /dotnet/api/microsoft.extensions.logging.ilogger
+[ILogger&lt;T&gt;]: /dotnet/api/microsoft.extensions.logging.logger-1
azure-functions Migrate Version 3 Version 4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/migrate-version-3-version-4.md
Azure Functions version 4.x is highly backwards compatible to version 3.x. Most
> > After the deadline, function apps can be created and deployed from your CI/CD DevOps pipeline, and all existing apps continue to run without breaking changes. However, your apps are not eligible for new features, security patches, and performance optimizations. You'll get related service support once you upgraded them to version 4.x. >
->End of support for these runtime versions is due to the ending of support for .NET Core 3.1, which is required by these older runtime versions. This requirement affects all Azure Functions runtime languages (e.g .NET, Python, node.js, PowerShell etc).
+>End of support for these runtime versions is due to the ending of support for .NET Core 3.1, which is required by these older runtime versions. This requirement affects all [languages supported by Azure Functions](supported-languages.md).
> >We highly recommend that you migrate your function apps to version 4.x of the Functions runtime by following this article. >
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md
Azure Monitor Agent replaces the Azure Monitor legacy monitoring agents:
Azure Monitor Agent uses [data collection rules](../essentials/data-collection-rule-overview.md), where you define which data you want each agent to collect. Data collection rules let you manage data collection settings at scale and define unique, scoped configurations for subsets of machines. You can define a rule to send data from multiple machines to multiple destinations across regions and tenants. > [!NOTE]
-> To send data across tenants, you must first enable [Azure Lighthouse](/azure/lighthouse/overview).
+> To send data across tenants, you must first enable [Azure Lighthouse](../../lighthouse/overview.md).
**To collect data using Azure Monitor Agent:**
In addition to the generally available data collection listed above, Azure Monit
| [Microsoft Sentinel](../../sentinel/overview.md) | <ul><li>Windows Security Events: [Generally available](../../sentinel/connect-windows-security-events.md?tabs=AMA)</li><li>Windows Forwarding Event (WEF): [Public preview](../../sentinel/data-connectors-reference.md#windows-forwarded-events-preview)</li><li>Windows DNS logs: [Public preview](../../sentinel/connect-dns-ama.md)</li><li>Linux Syslog CEF: [Public preview](../../sentinel/connect-cef-ama.md#set-up-the-common-event-format-cef-via-ama-connector)</li></ul> | Sentinel DNS extension, if you're collecting DNS logs. For all other data types, you just need the Azure Monitor Agent extension. | - |
| [Change Tracking](../../automation/change-tracking/overview.md) | Change Tracking: Preview. | Change Tracking extension | [Sign-up link](https://aka.ms/amadcr-privatepreviews) |
| [Update Management](../../automation/update-management/overview.md) (available without Azure Monitor Agent) | Use Update Management v2 - Public preview | None | [Update management center (Public preview) documentation](../../update-center/index.yml) |
-| [Network Watcher](../../network-watcher/network-watcher-monitoring-overview.md) | Connection Monitor: Public preview | Azure NetworkWatcher extension | [Monitor network connectivity by using Azure Monitor Agent](/azure/network-watcher/azure-monitor-agent-with-connection-monitor) |
+| [Network Watcher](../../network-watcher/network-watcher-monitoring-overview.md) | Connection Monitor: Public preview | Azure NetworkWatcher extension | [Monitor network connectivity by using Azure Monitor Agent](../../network-watcher/azure-monitor-agent-with-connection-monitor.md) |
## Supported regions
View [supported operating systems for Azure Arc Connected Machine agent](../../a
## Next steps

- [Install the Azure Monitor Agent](azure-monitor-agent-manage.md) on Windows and Linux virtual machines.
-- [Create a data collection rule](data-collection-rule-azure-monitor-agent.md) to collect data from the agent and send it to Azure Monitor.
+- [Create a data collection rule](data-collection-rule-azure-monitor-agent.md) to collect data from the agent and send it to Azure Monitor.
azure-monitor Azure Monitor Agent Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-migration.md
Previously updated : 9/14/2022 Last updated : 1/10/2023 # Customer intent: As an IT manager, I want to understand how I should move from using legacy agents to Azure Monitor Agent.
Azure Monitor Agent provides the following benefits over legacy agents:
Your migration plan to the Azure Monitor Agent should take into account:
-- **Current and new feature requirements:** Review [Azure Monitor Agent's supported services and features](agents-overview.md#supported-services-and-features) to ensure that Azure Monitor Agent has the features you require. If you currently use unsupported features you can temporarily do without, consider migrating to the new agent to benefit from added security and reduced cost immediately. Use the [AMA Migration Helper](./azure-monitor-agent-migration-tools.md#using-ama-migration-helper) to *discover what solutions and features you're using today that depend on the legacy agent*.
-  - If you use Microsoft Sentinel, see [Gap analysis for Microsoft Sentinel](../../sentinel/ama-migrate.md#gap-analysis-between-agents) for a comparison of the extra data collected by Microsoft Sentinel.
-- **Installing Azure Monitor Agent alongside a legacy agent:** If you're setting up a *new environment* with resources, such as deployment scripts and onboarding templates, and you still need a legacy agent, assess the effort of migrating to Azure Monitor Agent later. If the setup will take a significant amount of rework, install Azure Monitor Agent together with a legacy agent in your new environment to decrease the migration effort.
-  - Azure Monitor Agent can run alongside the legacy Log Analytics agents on the same machine so that you can continue to use existing functionality during evaluation or migration. You can begin the transition, but ensure you understand the limitations:
-    - Be careful when you collect duplicate data from the same machine. Duplicate data could skew query results and affect downstream features like alerts, dashboards, or workbooks. For example, VM Insights uses the Log Analytics agent to send performance data to a Log Analytics workspace. You might also have configured the workspace to collect Windows events and Syslog events from agents. If you install Azure Monitor Agent and create a data collection rule for these events and performance data, you'll collect duplicate data. If you're using both agents to collect the same type of data, make sure the agents are *collecting data from different machines* or *sending the data to different destinations*. Collecting duplicate data also generates more charges for data ingestion and retention.
+- **Service (legacy Solutions) requirements:**
+ - Review [Azure Monitor Agent's supported services list](agents-overview.md#supported-services-and-features) to ensure that Azure Monitor Agent supports the services you require. If you currently use services in preview, start testing your scenarios during the preview phase. This will save time and ensure you're ready to deploy to production as soon as the service becomes generally available. Moreover, you benefit from added security and reduced cost immediately.
+ - Use the [AMA Migration Helper](./azure-monitor-agent-migration-tools.md#using-ama-migration-helper) to *discover what solutions and features you're using today that depend on the legacy agents*.
+ - If you use Microsoft Sentinel, see [Gap analysis for Microsoft Sentinel](../../sentinel/ama-migrate.md#gap-analysis-between-agents) for a comparison of the extra data collected by Microsoft Sentinel.
+
+- **Installing Azure Monitor Agent alongside a legacy agent:**
+ - If you're setting up a *new environment* with resources, such as deployment scripts and onboarding templates, assess the effort of migrating to Azure Monitor Agent later. If the setup will take a significant amount of rework, install Azure Monitor Agent together with a legacy agent in your new environment to decrease the migration effort later.
+ - Azure Monitor Agent **can run alongside the legacy Log Analytics agents on the same machine** so that you can continue to use existing functionality during evaluation or migration. You can begin the transition, but ensure you understand the **limitations below**:
+ - Be careful when you collect duplicate data from the same machine, as this could skew query results, affect downstream features like alerts, dashboards, and workbooks, and generate more charges for data ingestion and retention. To avoid data duplication, ensure the agents are *collecting data from different machines* or *sending the data to different destinations*. Additionally,
+ - For **Defender for Cloud**, you will only be [billed once per machine](/azure/defender-for-cloud/auto-deploy-azure-monitoring-agent#impact-of-running-with-both-the-log-analytics-and-azure-monitor-agents) when running both agents.
+ - For **Sentinel**, you can easily [disable the legacy connector](../../sentinel/ama-migrate.md#recommended-migration-plan) to stop ingestion of logs from legacy agents.
- Running two telemetry agents on the same machine consumes double the resources, including but not limited to CPU, memory, storage space, and network bandwidth. ## Prerequisites
azure-monitor Data Collection Iis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-iis.md
The [data collection rule](../essentials/data-collection-rule-overview.md) defin
You can define a data collection rule to send data from multiple machines to multiple Log Analytics workspaces, including workspaces in a different region or tenant. Create the data collection rule in the *same region* as your Log Analytics workspace. > [!NOTE]
-> To send data across tenants, you must first enable [Azure Lighthouse](/azure/lighthouse/overview).
+> To send data across tenants, you must first enable [Azure Lighthouse](../../lighthouse/overview.md).
To create the data collection rule in the Azure portal:
Learn more about:
- [Azure Monitor Agent](azure-monitor-agent-overview.md).
- [Data collection rules](../essentials/data-collection-rule-overview.md).
-- [Best practices for cost management in Azure Monitor](../best-practices-cost.md).
+- [Best practices for cost management in Azure Monitor](../best-practices-cost.md).
azure-monitor Data Collection Rule Azure Monitor Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-rule-azure-monitor-agent.md
To complete this procedure, you need:
You can define a data collection rule to send data from multiple machines to multiple Log Analytics workspaces, including workspaces in a different region or tenant. Create the data collection rule in the *same region* as your Log Analytics workspace. > [!NOTE]
-> To send data across tenants, you must first enable [Azure Lighthouse](/azure/lighthouse/overview).
+> To send data across tenants, you must first enable [Azure Lighthouse](../../lighthouse/overview.md).
### [Portal](#tab/portal)

1. On the **Monitor** menu, select **Data Collection Rules**.
Examples of using a custom XPath to filter events:
- [Collect text logs by using Azure Monitor Agent](data-collection-text-log.md).
- Learn more about [Azure Monitor Agent](azure-monitor-agent-overview.md).
-- Learn more about [data collection rules](../essentials/data-collection-rule-overview.md).
+- Learn more about [data collection rules](../essentials/data-collection-rule-overview.md).
azure-monitor Data Collection Text Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-text-log.md
The data collection rule defines:
You can define a data collection rule to send data from multiple machines to multiple Log Analytics workspaces, including workspaces in a different region or tenant. Create the data collection rule in the *same region* as your Log Analytics workspace. > [!NOTE]
-> To send data across tenants, you must first enable [Azure Lighthouse](/azure/lighthouse/overview).
+> To send data across tenants, you must first enable [Azure Lighthouse](../../lighthouse/overview.md).
### [Portal](#tab/portal)
Learn more about:
- [Azure Monitor Agent](azure-monitor-agent-overview.md).
- [Data collection rules](../essentials/data-collection-rule-overview.md).
-- [Best practices for cost management in Azure Monitor](../best-practices-cost.md).
+- [Best practices for cost management in Azure Monitor](../best-practices-cost.md).
azure-monitor Alerts Troubleshoot Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-troubleshoot-log.md
When a log alert rule is created, the query is validated for correct syntax. But
- Rules were created via the API, and validation was skipped by the user.
- The query [runs on multiple resources](../logs/cross-workspace-query.md), and one or more of the resources was deleted or moved.
-- The [query fails](/azure/azure-monitor/logs/api/errors) because:
+- The [query fails](../logs/api/errors.md) because:
- The logging solution wasn't [deployed to the workspace](../insights/solutions.md#install-a-monitoring-solution), so tables aren't created. - Data stopped flowing to a table in the query for more than 30 days. - [Custom logs tables](../agents/data-sources-custom-logs.md) aren't yet created, because the data flow hasn't started.
Try the following steps to resolve the problem:
- Learn about [log alerts in Azure](./alerts-unified-log.md).
- Learn more about [configuring log alerts](../logs/log-query-overview.md).
-- Learn more about [log queries](../logs/log-query-overview.md).
+- Learn more about [log queries](../logs/log-query-overview.md).
azure-monitor Azure Vm Vmss Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-vm-vmss-apps.md
Title: Monitor performance on Azure VMs - Application Insights
-description: Application performance monitoring for Azure Virtual Machines and Azure Virtual Machine Scale Sets. Chart load and response time, dependency information, and set alerts on performance.
+ Title: Monitor performance on Azure VMs - Azure Application Insights
+description: Application performance monitoring for Azure Virtual Machines and Azure Virtual Machine Scale Sets.
Previously updated : 11/15/2022 Last updated : 01/11/2023 ms.devlang: csharp, java, javascript, python
-# Deploy Application Insights Agent on virtual machines and Virtual Machine Scale Sets
+# Application Insights for Azure VMs and Virtual Machine Scale Sets
-Enabling monitoring for your .NET or Java-based web applications running on [Azure Virtual Machines](https://azure.microsoft.com/services/virtual-machines/) and [Azure Virtual Machine Scale Sets](../../virtual-machine-scale-sets/index.yml) is now easier than ever. Get all the benefits of using Application Insights without modifying your code.
+Enabling monitoring for your ASP.NET and ASP.NET Core IIS-hosted applications running on [Azure virtual machines](https://azure.microsoft.com/services/virtual-machines/) or [Azure Virtual Machine Scale Sets](../../virtual-machine-scale-sets/index.yml) is now easier than ever. Get all the benefits of using Application Insights without modifying your code.
-This article walks you through enabling Application Insights monitoring by using Application Insights Agent. It also provides preliminary guidance for automating the process for large-scale deployments.
-
-Java-based applications running on Azure Virtual Machines and Azure Virtual Machine Scale Sets are monitored with the [Application Insights Java 3.0 agent](./java-in-process-agent.md), which is generally available.
-
-> [!IMPORTANT]
-> Application Insights Agent for ASP.NET and ASP.NET Core applications running on Azure Virtual Machines and Azure Virtual Machine Scale Sets is currently in public preview. For monitoring your ASP.NET applications running on-premises, use [Application Insights Agent for on-premises servers](./status-monitor-v2-overview.md), which is generally available and fully supported.
->
-> The preview version for Azure Virtual Machines and Azure Virtual Machine Scale Sets is provided without a service-level agreement. We don't recommend it for production workloads. Some features might not be supported, and some might have constrained capabilities.
->
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+This article walks you through enabling Application Insights monitoring using the Application Insights Agent and provides preliminary guidance for automating the process for large-scale deployments.
## Enable Application Insights
Auto-instrumentation is easy to enable. Advanced configuration isn't required.
For a complete list of supported auto-instrumentation scenarios, see [Supported environments, languages, and resource providers](codeless-overview.md#supported-environments-languages-and-resource-providers). > [!NOTE]
-> Auto-instrumentation is available for ASP.NET, ASP.NET Core IIS-hosted applications, and Java. Use an SDK to instrument Node.js and Python applications hosted on Azure Virtual Machines and Azure Virtual Machine Scale Sets.
-
+> Auto-instrumentation is available for ASP.NET, ASP.NET Core IIS-hosted applications, and Java. Use an SDK to instrument Node.js and Python applications hosted on Azure virtual machines and Virtual Machine Scale Sets.
### [.NET Framework](#tab/net)
-The Application Insights Agent autocollects the same dependency signals out-of-the-box as the SDK. To learn more, see [Dependency autocollection](asp-net-dependencies.md#net).
+The Application Insights Agent auto-collects the same dependency signals out-of-the-box as the SDK. See [Dependency auto-collection](./auto-collect-dependencies.md#net) to learn more.
-### [.NET Core/.NET](#tab/core)
+### [.NET Core / .NET](#tab/core)
-The Application Insights Agent autocollects the same dependency signals out-of-the-box as the SDK. To learn more, see [Dependency autocollection](asp-net-dependencies.md#net).
+The Application Insights Agent auto-collects the same dependency signals out-of-the-box as the SDK. See [Dependency auto-collection](./auto-collect-dependencies.md#net) to learn more.
### [Java](#tab/Java)
-We recommend [Application Insights Java 3.0 agent](./java-in-process-agent.md) for Java. The most popular libraries, frameworks, logs, and dependencies are [autocollected](./java-in-process-agent.md#autocollected-requests) along with many [other configurations](./java-standalone-config.md).
+We recommend [Application Insights Java 3.0 agent](./java-in-process-agent.md) for Java. The most popular libraries, frameworks, logs, and dependencies are [auto-collected](./java-in-process-agent.md#autocollected-requests), along with a multitude of [other configurations](./java-standalone-config.md).
### [Node.js](#tab/nodejs)
To monitor Python apps, use the [SDK](./opencensus-python.md).
-## Manage Application Insights Agent for .NET applications on virtual machines by using PowerShell
+Before installing the Application Insights Agent, you'll need a connection string. [Create a new Application Insights resource](./create-workspace-resource.md) or copy the connection string from an existing Application Insights resource.
+
+### Enable Monitoring for Virtual Machines
+
+#### Method 1 - Azure portal / GUI
+1. In the Azure portal, navigate to your Application Insights resource and copy the connection string to the clipboard.
+
+    :::image type="content" source="./media/azure-vm-vmss-apps/connect-string.png" alt-text="Screenshot of the connection string." lightbox="./media/azure-vm-vmss-apps/connect-string.png":::
+
+2. Navigate to your virtual machine, open the **Extensions + applications** pane under the **Settings** section in the left navigation menu, and select **+ Add**.
-Before you install Application Insights Agent, you'll need a connection string. [Create a new Application Insights resource](./create-new-resource.md) or copy the connection string from an existing Application Insights resource.
+    :::image type="content" source="./media/azure-vm-vmss-apps/add-extension.png" alt-text="Screenshot of the extensions pane with an add button." lightbox="media/azure-vm-vmss-apps/add-extension.png":::
+
+3. Select the **Application Insights Agent** card, and select **Next**.
+
+    :::image type="content" source="./media/azure-vm-vmss-apps/select-extension.png" alt-text="Screenshot of the install an extension pane with a next button." lightbox="media/azure-vm-vmss-apps/select-extension.png":::
+
+4. Paste the connection string you copied in step 1, and select **Review + Create**.
+
+    :::image type="content" source="./media/azure-vm-vmss-apps/install-extension.png" alt-text="Screenshot of the create pane with a review and create button." lightbox="media/azure-vm-vmss-apps/install-extension.png":::
+
+#### Method 2 - PowerShell
> [!NOTE]
-> If you're new to PowerShell, see the [Get Started Guide](/powershell/azure/get-started-azureps).
+> New to PowerShell? Check out the [Get Started Guide](/powershell/azure/get-started-azureps).
-Install or update Application Insights Agent as an extension for virtual machines:
+Install or update the Application Insights Agent as an extension for Azure virtual machines:
```powershell
-$publicCfgJsonString = '
+# define variables to match your environment before running
+$ResourceGroup = "<myVmResourceGroup>"
+$VMName = "<myVmName>"
+$Location = "<myVmLocation>"
+$ConnectionString = "<myAppInsightsResourceConnectionString>"
+
+$publicCfgJsonString = @"
{
- "redfieldConfiguration": {
- "instrumentationKeyMap": {
- "filters": [
- {
- "appFilter": ".*",
- "machineFilter": ".*",
- "virtualPathFilter": ".*",
- "instrumentationSettings" : {
- "connectionString": "InstrumentationKey=00000000-0000-0000-0000-000000000000;IngestionEndpoint=https://xxxx.applicationinsights.azure.com/" # Application Insights connection string, create new Application Insights resource if you don't have one. https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/microsoft.insights%2Fcomponents
- }
+ "redfieldConfiguration": {
+ "instrumentationKeyMap": {
+ "filters": [
+ {
+ "appFilter": ".*",
+ "machineFilter": ".*",
+ "virtualPathFilter": ".*",
+ "instrumentationSettings" : {
+ "connectionString": "$ConnectionString"
+ }
+ }
+ ]
}
- ]
}
- }
-}
-';
-$privateCfgJsonString = '{}';
+ }
+"@
-Set-AzVMExtension -ResourceGroupName "<myVmResourceGroup>" -VMName "<myVmName>" -Location "<myVmLocation>" -Name "ApplicationMonitoring" -Publisher "Microsoft.Azure.Diagnostics" -Type "ApplicationMonitoringWindows" -Version "2.8" -SettingString $publicCfgJsonString -ProtectedSettingString $privateCfgJsonString
-```
+$privateCfgJsonString = '{}'
+
+Set-AzVMExtension -ResourceGroupName $ResourceGroup -VMName $VMName -Location $Location -Name "ApplicationMonitoringWindows" -Publisher "Microsoft.Azure.Diagnostics" -Type "ApplicationMonitoringWindows" -Version "2.8" -SettingString $publicCfgJsonString -ProtectedSettingString $privateCfgJsonString
+```
> [!NOTE]
-> You can install or update Application Insights Agent as an extension across multiple virtual machines at scale by using a PowerShell loop.
-
-Uninstall Application Insights Agent extension from a virtual machine:
+> For more complicated at-scale deployments, you can use a PowerShell loop to install or update the Application Insights Agent extension across multiple VMs.
+Query the Application Insights Agent extension status for an Azure virtual machine:
```powershell
-Remove-AzVMExtension -ResourceGroupName "<myVmResourceGroup>" -VMName "<myVmName>" -Name "ApplicationMonitoring"
+Get-AzVMExtension -ResourceGroupName "<myVmResourceGroup>" -VMName "<myVmName>" -Name ApplicationMonitoringWindows -Status
```
-Query Application Insights Agent extension status for a virtual machine:
-
+Get a list of installed extensions for an Azure virtual machine:
```powershell
-Get-AzVMExtension -ResourceGroupName "<myVmResourceGroup>" -VMName "<myVmName>" -Name ApplicationMonitoring -Status
+Get-AzResource -ResourceId "/subscriptions/<mySubscriptionId>/resourceGroups/<myVmResourceGroup>/providers/Microsoft.Compute/virtualMachines/<myVmName>/extensions"
```-
-Get a list of installed extensions for a virtual machine:
-
+Uninstall the Application Insights Agent extension from an Azure virtual machine:
```powershell
-Get-AzResource -ResourceId "/subscriptions/<mySubscriptionId>/resourceGroups/<myVmResourceGroup>/providers/Microsoft.Compute/virtualMachines/<myVmName>/extensions"
-
-# Name : ApplicationMonitoring
-# ResourceGroupName : <myVmResourceGroup>
-# ResourceType : Microsoft.Compute/virtualMachines/extensions
-# Location : southcentralus
-# ResourceId : /subscriptions/<mySubscriptionId>/resourceGroups/<myVmResourceGroup>/providers/Microsoft.Compute/virtualMachines/<myVmName>/extensions/ApplicationMonitoring
+Remove-AzVMExtension -ResourceGroupName "<myVmResourceGroup>" -VMName "<myVmName>" -Name "ApplicationMonitoring"
```
-You can also view installed extensions in the [Azure Virtual Machine section](../../virtual-machines/extensions/overview.md) of the Azure portal.
- > [!NOTE]
-> Verify installation by selecting **Live Metrics Stream** within the Application Insights resource associated with the connection string you used to deploy the Application Insights Agent extension. If you're sending data from multiple virtual machines, select the target virtual machines under **Server Name**. It might take up to a minute for data to begin flowing.
+> Verify installation by selecting **Live Metrics Stream** within the Application Insights resource associated with the connection string you used to deploy the Application Insights Agent extension. If you're sending data from multiple virtual machines, select the target Azure virtual machines under **Server Name**. It might take up to a minute for data to begin flowing.
-## Manage Application Insights Agent for .NET applications on Virtual Machine Scale Sets by using PowerShell
+### Enable Monitoring for Virtual Machine Scale Sets
-Install or update Application Insights Agent as an extension for a Virtual Machine Scale Set:
+#### Method 1 - Azure portal / GUI
+Follow the prior steps for VMs, but navigate to your Virtual Machine Scale Set instead of your VM.
+#### Method 2 - PowerShell
+Install or update the Application Insights Agent as an extension for an Azure Virtual Machine Scale Set:
```powershell
+# Set resource group, VMSS name, and connection string to reflect your environment
+$ResourceGroup = "<myVmResourceGroup>"
+$VMSSName = "<myVmssName>"
+$ConnectionString = "<myAppInsightsResourceConnectionString>"
$publicCfgHashtable = @{ "redfieldConfiguration"= @{
$publicCfgHashtable =
"machineFilter"= ".*"; "virtualPathFilter"= ".*"; "instrumentationSettings" = @{
- "connectionString"= "InstrumentationKey=00000000-0000-0000-0000-000000000000;IngestionEndpoint=https://xxxx.applicationinsights.azure.com/" # Application Insights connection string, create new Application Insights resource if you don't have one. https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/microsoft.insights%2Fcomponents
+ "connectionString"= "$ConnectionString"
} } )
$publicCfgHashtable =
} }; $privateCfgHashtable = @{};-
-$vmss = Get-AzVmss -ResourceGroupName "<myResourceGroup>" -VMScaleSetName "<myVmssName>"
-
+$vmss = Get-AzVmss -ResourceGroupName $ResourceGroup -VMScaleSetName $VMSSName
Add-AzVmssExtension -VirtualMachineScaleSet $vmss -Name "ApplicationMonitoringWindows" -Publisher "Microsoft.Azure.Diagnostics" -Type "ApplicationMonitoringWindows" -TypeHandlerVersion "2.8" -Setting $publicCfgHashtable -ProtectedSetting $privateCfgHashtable-
-Update-AzVmss -ResourceGroupName $vmss.ResourceGroupName -Name $vmss.Name -VirtualMachineScaleSet $vmss
-
-# Note: Depending on your update policy, you might need to run Update-AzVmssInstance for each instance.
-```
-
-Uninstall the application monitoring extension from Virtual Machine Scale Sets:
-
-```powershell
-$vmss = Get-AzVmss -ResourceGroupName "<myResourceGroup>" -VMScaleSetName "<myVmssName>"
-
-Remove-AzVmssExtension -VirtualMachineScaleSet $vmss -Name "ApplicationMonitoring"
-
-Update-AzVmss -ResourceGroupName $vmss.ResourceGroupName -Name $vmss.Name -VirtualMachineScaleSet $vmss
-
-# Note: Depending on your update policy, you might need to run Update-AzVmssInstance for each instance.
+Update-AzVmss -ResourceGroupName $vmss.ResourceGroupName -Name $vmss.Name -VirtualMachineScaleSet $vmss
+# Note: Depending on your update policy, you might need to run Update-AzVmssInstance for each instance
```
-Query the application monitoring extension status for Virtual Machine Scale Sets:
-
+Get a list of installed extensions for an Azure Virtual Machine Scale Set:
```powershell
-# Not supported by extensions framework
+Get-AzResource -ResourceId "/subscriptions/<mySubscriptionId>/resourceGroups/<myResourceGroup>/providers/Microsoft.Compute/virtualMachineScaleSets/<myVmssName>/extensions"
```
-Get a list of installed extensions for Virtual Machine Scale Sets:
-
+Uninstall the application monitoring extension from an Azure Virtual Machine Scale Set:
```powershell
-Get-AzResource -ResourceId /subscriptions/<mySubscriptionId>/resourceGroups/<myResourceGroup>/providers/Microsoft.Compute/virtualMachineScaleSets/<myVmssName>/extensions
-
-# Name : ApplicationMonitoringWindows
-# ResourceGroupName : <myResourceGroup>
-# ResourceType : Microsoft.Compute/virtualMachineScaleSets/extensions
-# Location :
-# ResourceId : /subscriptions/<mySubscriptionId>/resourceGroups/<myResourceGroup>/providers/Microsoft.Compute/virtualMachineScaleSets/<myVmssName>/extensions/ApplicationMonitoringWindows
+# Set the resource group and VMSS name to reflect your environment
+$vmss = Get-AzVmss -ResourceGroupName "<myResourceGroup>" -VMScaleSetName "<myVmssName>"
+Remove-AzVmssExtension -VirtualMachineScaleSet $vmss -Name "ApplicationMonitoringWindows"
+Update-AzVmss -ResourceGroupName $vmss.ResourceGroupName -Name $vmss.Name -VirtualMachineScaleSet $vmss
+# Note: Depending on your update policy, you might need to run Update-AzVmssInstance for each instance
```

## Troubleshooting

Find troubleshooting tips for the Application Insights Monitoring Agent extension for .NET applications running on Azure virtual machines and Virtual Machine Scale Sets.
-> [!NOTE]
-> The following steps don't apply to Node.js and Python applications, which require SDK instrumentation.
-
-Extension execution output is logged to files found in the following directories:
-
+If you're having trouble deploying the extension, review the execution output, which is logged to files in the following directory:
```Windows
C:\WindowsAzure\Logs\Plugins\Microsoft.Azure.Diagnostics.ApplicationMonitoringWindows\<version>\
```
+If your extension deployed successfully but you're unable to see telemetry, the cause might be one of the following issues covered in [Agent Troubleshooting](https://learn.microsoft.com/troubleshoot/azure/azure-monitor/app-insights/status-monitor-v2-troubleshoot#known-issues):
+- Conflicting DLLs in an app's bin directory
+- Conflict with IIS shared configuration
[!INCLUDE [azure-monitor-app-insights-test-connectivity](../../../includes/azure-monitor-app-insights-test-connectivity.md)]
### 2.8.44
-- Updated Application Insights .NET/.NET Core SDK to 2.20.1 - red field
-- Enabled SQL query collection
-- Enabled support for Azure Active Directory authentication
+- Updated Application Insights .NET/.NET Core SDK to 2.20.1 - red field.
+- Enabled SQL query collection.
+- Enabled support for Azure Active Directory authentication.
### 2.8.42
-Updated Application Insights .NET/.NET Core SDK to 2.18.1 - red field
+- Updated Application Insights .NET/.NET Core SDK to 2.18.1 - red field.
### 2.8.41
-Added ASP.NET Core auto-instrumentation feature
+- Added ASP.NET Core auto-instrumentation feature.
## Next steps
-
-* Learn how to [deploy an application to a Virtual Machine Scale Set](../../virtual-machine-scale-sets/virtual-machine-scale-sets-deploy-app.md).
-* [Set up availability web tests](monitor-web-app-availability.md) to be alerted if your endpoint is down.
+* Learn how to [deploy an application to an Azure Virtual Machine Scale Set](../../virtual-machine-scale-sets/virtual-machine-scale-sets-deploy-app.md).
+* [Set up Availability web tests](monitor-web-app-availability.md) to be alerted if your endpoint is down.
azure-monitor Java Standalone Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-config.md
If your application is behind a firewall and can't connect directly to Applicati
}
```
+You can also set the HTTP proxy using the environment variable `APPLICATIONINSIGHTS_PROXY`, which takes the format `https://<host>:<port>`. If set, it takes precedence over the proxy specified in the JSON configuration.
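For example, a minimal sketch of setting the variable for the current session before launching the agent; the proxy address and agent jar file name are placeholders:

```powershell
# Placeholders: replace the proxy address and agent jar version with your own values.
$env:APPLICATIONINSIGHTS_PROXY = "https://myproxy.example.com:8888"
java "-javaagent:applicationinsights-agent-3.x.x.jar" -jar myapp.jar
```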
+ Application Insights Java 3.x also respects the global `https.proxyHost` and `https.proxyPort` system properties if they're set, and `http.nonProxyHosts`, if needed.

## Recovery from ingestion failures
azure-monitor Autoscale Multiprofile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-multiprofile.md
Previously updated : 09/30/2022 Last updated : 01/10/2023
The example below shows an autoscale setting with a default profile and recurrin
In the above example, on Monday after 6 AM, the recurring profile will be used. If the instance count is less than 3, autoscale scales out to the new minimum of three. Autoscale continues to use this profile and scales based on CPU% until Monday at 6 PM. At all other times, scaling is done according to the default profile, based on the number of requests. After 6 PM on Monday, autoscale switches to the default profile. If, for example, the number of instances at the time is 12, autoscale scales in to 10, which is the maximum allowed for the default profile.
+## Multiple contiguous profiles
+Autoscale transitions between profiles based on their start times. The end time for a given profile is determined by the start time of the following profile.
+
+In the portal, the end time field becomes the next start time for the default profile. You can't specify the same time for the end of one profile and the start of the next. The portal will force the end time to be one minute before the start time of the following profile. During this minute, the default profile will become active. If you don't want the default profile to become active between recurring profiles, leave the end time field empty.
+
+> [!TIP]
+> To set up multiple contiguous profiles using the portal, leave the end time empty. The current profile will stop being used when the next profile becomes active. Only specify an end time when you want to revert to the default profile.
+ ## Multiple profiles using templates, CLI, and PowerShell

When creating multiple profiles using templates, the CLI, and PowerShell, follow the guidelines below.

## [ARM templates](#tab/templates)
-Follow the rules below when using ARM templates to create autoscale settings with multiple profiles:
- See the autoscale section of the [ARM template resource definition](https://learn.microsoft.com/azure/templates/microsoft.insights/autoscalesettings) for a full template reference.
-* Create a default profile for each recurring profile. If you have two recurring profiles, create two matching default profiles.
-* The default profile must contain a `recurrence` section that is the same as the recurring profile, with the `hours` and `minutes` elements set for the end time of the recurring profile. If you don't specify a recurrence with a start time for the default profile, the last recurrence rule will remain in effect.
-* The `name` element for the default profile is an object with the following format: `"name": "{\"name\":\"Auto created default scale condition\",\"for\":\"Recurring profile name\"}"` where the recurring profile name is the value of the `name` element for the recurring profile. If the name isn't specified correctly, the default profile will appear as another recurring profile.
- *The rules above don't apply for non-recurring scheduled profiles.
+There's no end time specification in the template. A profile remains active until the next profile's start time.
+ ## Add a recurring profile using ARM templates
-The example below shows how to create two recurring profiles. One profile for weekends between 06:00 and 19:00, Saturday and Sunday, and a second for Mondays between 04:00 and 15:00. Note the two default profiles, one for each recurring profile.
+The example below shows how to create two recurring profiles: one weekend profile starting at 00:01 on Saturday morning, and a second weekday profile starting on Mondays at 04:00. That means the weekend profile starts on Saturday morning at one minute past midnight and ends on Monday morning at 04:00. The weekday profile starts at 04:00 on Monday and ends just after midnight on Saturday morning.
Use the following command to deploy the template: ` az deployment group create --name VMSS1-Autoscale-607 --resource-group rg-vmss1 --template-file VMSS1-autoscale.json`
where *VMSS1-autoscale.json* is the file containing the JSON object below.
"targetResourceUri": "/subscriptions/abc123456-987-f6e5-d43c-9a8d8e7f6541/resourceGroups/rg-vmss1/providers/Microsoft.Compute/virtualMachineScaleSets/VMSS1", "profiles": [ {
- "name": "Monday profile",
+ "name": "Weekday profile",
"capacity": { "minimum": "3", "maximum": "20",
where *VMSS1-autoscale.json* is the the file containing the JSON object below.
"schedule": { "timeZone": "E. Europe Standard Time", "days": [
- "Saturday",
- "Sunday"
+ "Saturday"
], "hours": [
- 6
- ],
- "minutes": [
0
- ]
- }
- }
- },
- {
- "name": "{\"name\":\"Auto created default scale condition\",\"for\":\"Weekend profile\"}",
- "capacity": {
- "minimum": "2",
- "maximum": "10",
- "default": "2"
- },
- "recurrence": {
- "frequency": "Week",
- "schedule": {
- "timeZone": "E. Europe Standard Time",
- "days": [
- "Saturday",
- "Sunday"
- ],
- "hours": [
- 19
- ],
- "minutes": [
- 0
- ]
- }
- },
- "rules": [
- {
- "scaleAction": {
- "direction": "Increase",
- "type": "ChangeCount",
- "value": "1",
- "cooldown": "PT3M"
- },
- "metricTrigger": {
- "metricName": "Percentage CPU",
- "metricNamespace": "microsoft.compute/virtualmachinescalesets",
- "metricResourceUri": "/subscriptions/abc123456-987-f6e5-d43c-9a8d8e7f6541/resourceGroups/rg-vmss1/providers/Microsoft.Compute/virtualMachineScaleSets/VMSS1",
- "operator": "GreaterThan",
- "statistic": "Average",
- "threshold": 50,
- "timeAggregation": "Average",
- "timeGrain": "PT1M",
- "timeWindow": "PT1M",
- "Dimensions": [],
- "dividePerInstance": false
- }
- },
- {
- "scaleAction": {
- "direction": "Decrease",
- "type": "ChangeCount",
- "value": "1",
- "cooldown": "PT3M"
- },
- "metricTrigger": {
- "metricName": "Percentage CPU",
- "metricNamespace": "microsoft.compute/virtualmachinescalesets",
- "metricResourceUri": "/subscriptions/abc123456-987-f6e5-d43c-9a8d8e7f6541/resourceGroups/rg-vmss1/providers/Microsoft.Compute/virtualMachineScaleSets/VMSS1",
- "operator": "LessThan",
- "statistic": "Average",
- "threshold": 39,
- "timeAggregation": "Average",
- "timeGrain": "PT1M",
- "timeWindow": "PT3M",
- "Dimensions": [],
- "dividePerInstance": false
- }
- }
- ]
- },
- {
- "name": "{\"name\":\"Auto created default scale condition\",\"for\":\"Monday profile\"}",
- "capacity": {
- "minimum": "2",
- "maximum": "10",
- "default": "2"
- },
- "recurrence": {
- "frequency": "Week",
- "schedule": {
- "timeZone": "E. Europe Standard Time",
- "days": [
- "Monday"
- ],
- "hours": [
- 15
], "minutes": [
- 0
+ 1
] }
- },
- "rules": [
- {
- "scaleAction": {
- "direction": "Increase",
- "type": "ChangeCount",
- "value": "1",
- "cooldown": "PT3M"
- },
- "metricTrigger": {
- "metricName": "Percentage CPU",
- "metricNamespace": "microsoft.compute/virtualmachinescalesets",
- "metricResourceUri": "/subscriptions/abc123456-987-f6e5-d43c-9a8d8e7f6541/resourceGroups/rg-vmss1/providers/Microsoft.Compute/virtualMachineScaleSets/VMSS1",
- "operator": "GreaterThan",
- "statistic": "Average",
- "threshold": 50,
- "timeAggregation": "Average",
- "timeGrain": "PT1M",
- "timeWindow": "PT1M",
- "Dimensions": [],
- "dividePerInstance": false
- }
- },
- {
- "scaleAction": {
- "direction": "Decrease",
- "type": "ChangeCount",
- "value": "1",
- "cooldown": "PT3M"
- },
- "metricTrigger": {
- "metricName": "Percentage CPU",
- "metricNamespace": "microsoft.compute/virtualmachinescalesets",
- "metricResourceUri": "/subscriptions/abc123456-987-f6e5-d43c-9a8d8e7f6541/resourceGroups/rg-vmss1/providers/Microsoft.Compute/virtualMachineScaleSets/VMSS1",
- "operator": "LessThan",
- "statistic": "Average",
- "threshold": 39,
- "timeAggregation": "Average",
- "timeGrain": "PT1M",
- "timeWindow": "PT3M",
- "Dimensions": [],
- "dividePerInstance": false
- }
- }
- ]
+ }
} ], "notifications": [],
where *VMSS1-autoscale.json* is the file containing the JSON object below. } ]
} ]
-}
-
+}
```

## [CLI](#tab/cli)
$DefaultProfileThursdayProfile = New-AzAutoscaleProfile -DefaultCapacity "1" -Ma
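Expanding on the truncated line above, here's a minimal sketch of creating a recurring profile with `New-AzAutoscaleProfile`; the capacities, schedule values, and the `$rule` variable are assumptions for illustration:

```powershell
# Minimal sketch: a recurring weekday profile that starts Monday at 04:00.
# Assumes $rule holds a scale rule created earlier with New-AzAutoscaleRule.
$weekdayProfile = New-AzAutoscaleProfile -Name "Weekday profile" `
    -DefaultCapacity "3" -MinimumCapacity "3" -MaximumCapacity "20" `
    -Rule $rule -RecurrenceFrequency Week -ScheduleDay "Monday" `
    -ScheduleHour 4 -ScheduleMinute 0 -ScheduleTimeZone "E. Europe Standard Time"
```

The profile remains active until the next profile's start time, matching the contiguous-profile behavior described earlier.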
* [Autoscale CLI reference](https://learn.microsoft.com/cli/azure/monitor/autoscale?view=azure-cli-latest) * [ARM template resource definition](https://learn.microsoft.com/azure/templates/microsoft.insights/autoscalesettings)
-* [PowerShell Az.Monitor Reference](https://learn.microsoft.com/powershell/module/az.monitor/#monitor)
+* [PowerShell Az.Monitor reference](https://learn.microsoft.com/powershell/module/az.monitor/#monitor)
* [REST API reference. Autoscale Settings](https://learn.microsoft.com/rest/api/monitor/autoscale-settings).
-* [Tutorial: Automatically scale a virtual machine scale set with an Azure template](https://learn.microsoft.com/azure/virtual-machine-scale-sets/tutorial-autoscale-template)
-* [Tutorial: Automatically scale a virtual machine scale set with the Azure CLI](https://learn.microsoft.com/azure/virtual-machine-scale-sets/tutorial-autoscale-cli)
-* [Tutorial: Automatically scale a virtual machine scale set with an Azure template](https://learn.microsoft.com/azure/virtual-machine-scale-sets/tutorial-autoscale-powershell)
+* [Tutorial: Automatically scale a Virtual Machine Scale Set with an Azure template](https://learn.microsoft.com/azure/virtual-machine-scale-sets/tutorial-autoscale-template)
+* [Tutorial: Automatically scale a Virtual Machine Scale Set with the Azure CLI](https://learn.microsoft.com/azure/virtual-machine-scale-sets/tutorial-autoscale-cli)
+* [Tutorial: Automatically scale a Virtual Machine Scale Set with Azure PowerShell](https://learn.microsoft.com/azure/virtual-machine-scale-sets/tutorial-autoscale-powershell)
azure-monitor Daily Cap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/daily-cap.md
A daily cap on a Log Analytics workspace allows you to avoid unexpected increase
> [!IMPORTANT]
> You should use care when setting a daily cap: when data collection stops, your ability to observe and receive alerts about the health conditions of your resources is impacted. It can also impact other Azure services and solutions whose functionality may depend on up-to-date data being available in the workspace. Your goal shouldn't be to regularly hit the daily limit, but rather to use it as an infrequent method to avoid unplanned charges resulting from an unexpected increase in the volume of data collected.
>
-> For strategies to reduce your Azure Monitor costs, see [Cost optimization and Azure Monitor](/azure/azure-monitor/best-practices-cost).
+> For strategies to reduce your Azure Monitor costs, see [Cost optimization and Azure Monitor](../best-practices-cost.md).
## How the daily cap works

Each workspace has a daily cap that defines its own data volume limit. When the daily cap is reached, a warning banner appears across the top of the page for the selected Log Analytics workspace in the Azure portal, and an operation event is sent to the *Operation* table under the **LogManagement** category. You can optionally create an alert rule to send an alert when this event is created.
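As an illustration, a minimal sketch of checking for recent data collection status events from PowerShell; the workspace GUID is a placeholder, and the `OperationCategory` filter string should be verified against your own *Operation* table:

```powershell
# Minimal sketch: list recent data collection status events (filter string is an assumption; verify in your workspace).
$query = 'Operation | where OperationCategory == "Data Collection Status" | sort by TimeGenerated desc | take 10'
(Invoke-AzOperationalInsightsQuery -WorkspaceId "<workspace-guid>" -Query $query).Results
```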
To help you determine an appropriate daily cap for your workspace, see [Azure M
## Workspaces with Microsoft Defender for Cloud
-Some data security-related data types collected [Microsoft Defender for Cloud](../../security-center/index.yml) or Microsoft Sentinel are collected despite any daily cap. The data types listed below will not be capped except for workspaces in which Microsoft Defender for Cloud was installed before June 19, 2017:
+Some security-related data types collected by [Microsoft Defender for Cloud](../../security-center/index.yml) or Microsoft Sentinel are collected despite any daily cap when the [Microsoft Defender for Servers](../../defender-for-cloud/plan-defender-for-servers-select-plan.md) solution was enabled on a workspace after June 19, 2017. The following data types are subject to this special exception from the daily cap:
- WindowsEvent
- SecurityAlert
azure-monitor Logs Ingestion Api Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-ingestion-api-overview.md
The endpoint URI uses the following format, where the `Data Collection Endpoint`
### Body
-The body of the call includes the custom data to be sent to Azure Monitor. The shape of the data must be a JSON object or array with a structure that matches the format expected by the stream in the DCR.
+The body of the call includes the custom data to be sent to Azure Monitor. The shape of the data must be a JSON object or array with a structure that matches the format expected by the stream in the DCR. Make sure the request body is encoded in UTF-8 to prevent issues with data transmission.
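As a minimal sketch of the encoding step, assuming `$records`, `$uri`, and `$headers` are set up as in the sample call that follows:

```powershell
# Minimal sketch: explicitly UTF-8 encode the JSON body before posting it to the ingestion endpoint.
$json  = $records | ConvertTo-Json
$bytes = [System.Text.Encoding]::UTF8.GetBytes($json)
Invoke-RestMethod -Uri $uri -Method Post -Headers $headers -Body $bytes -ContentType "application/json"
```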
## Sample call
azure-monitor Profiler Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-troubleshooting.md
If your web app is an ASP.NET Core application, it must be running on the [lates
Profiler isn't currently supported on free or shared app service plans. Upgrade to one of the basic plans for Profiler to start working.
+> [!NOTE]
+> The Azure Functions consumption plan isn't supported. See [Profile live Azure Functions app with Application Insights](./profiler-azure-functions.md).
+ ## Make sure you're searching for Profiler data within the right timeframe

If the data you're trying to view is older than a couple of weeks, try limiting your time filter and try again. Traces are deleted after seven days.
azure-monitor Vminsights Enable Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-enable-policy.md
Last updated 12/13/2022
# Enable VM insights by using Azure Policy
-[Azure Policy](/azure/governance/policy/overview) lets you set and enforce requirements for all new resources you create and resources you modify. VM insights policy initiatives, which are predefined sets of policies created for VM insights, install the agents required for VM insights and enable monitoring on all new virtual machines in your Azure environment. This article explains how to enable VM insights for Azure virtual machines, Virtual Machine Scale Sets, and hybrid virtual machines connected with Azure Arc using predefined VM insights policy initiates.
+[Azure Policy](../../governance/policy/overview.md) lets you set and enforce requirements for all new resources you create and resources you modify. VM insights policy initiatives, which are predefined sets of policies created for VM insights, install the agents required for VM insights and enable monitoring on all new virtual machines in your Azure environment. This article explains how to enable VM insights for Azure virtual machines, Virtual Machine Scale Sets, and hybrid virtual machines connected with Azure Arc using predefined VM insights policy initiatives.
> [!NOTE] > For information about how to use Azure Policy with Azure virtual machine scale sets and how to work with Azure Policy directly to enable Azure virtual machines, see [Deploy Azure Monitor at scale using Azure Policy](../best-practices.md).
To track the progress of remediation tasks, select **Remediate** from the **Poli
Learn how to: - [View VM insights Map](vminsights-maps.md) to see application dependencies. -- [View Azure VM performance](vminsights-performance.md) to identify bottlenecks and overall utilization of your VM's performance.
+- [View Azure VM performance](vminsights-performance.md) to identify bottlenecks and overall utilization of your VM's performance.
azure-netapp-files Azure Government https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-government.md
na Previously updated : 10/14/2021 Last updated : 01/12/2023
This article describes Azure NetApp Files feature availability in Azure Governme
## Feature availability
-For Azure Government regions supported by Azure NetApp Files, see the *[Products Available by Region page](https://azure.microsoft.com/global-infrastructure/services/?products=netapp&regions=usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia)*.
+For Azure Government regions supported by Azure NetApp Files, see the *[Products Available by Region page](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=netapp&regions=usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true)*.
All [Azure NetApp Files features](whats-new.md) available on Azure public cloud are also available on supported Azure Government regions ***except for the features listed in the following table***:
All [Azure NetApp Files features](whats-new.md) available on Azure public cloud
|: |: |: | | Azure NetApp Files cross-region replication | Generally available (GA) | [Limited](cross-region-replication-introduction.md#supported-region-pairs) | | Azure NetApp Files backup | Public preview | No |
+| Standard network features | Generally available (GA) | No |
## Portal access
azure-resource-manager Bicep Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-cli.md
Title: Bicep CLI commands and overview description: Describes the commands that you can use in the Bicep CLI. These commands include building Azure Resource Manager templates from Bicep. Previously updated : 09/14/2022 Last updated : 01/10/2023 # Bicep CLI commands
The command returns an array of available versions.
## publish
-The `publish` command adds a module to a registry. The Azure container registry must exist and the account publishing to the registry must have the correct permissions. For more information about setting up a module registry, see [Use private registry for Bicep modules](private-module-registry.md).
+The `publish` command adds a module to a registry. The Azure container registry must exist, and the account publishing to the registry must have the correct profile and permissions to access it. You can configure the profile and credential precedence for authenticating to the registry in the [Bicep config file](./bicep-config-modules.md#configure-profiles-and-credentials). For more information about setting up a module registry, see [Use private registry for Bicep modules](private-module-registry.md).
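For example, a minimal sketch of publishing a module; the registry name, module path, and version tag are placeholders:

```powershell
# Placeholders: replace the registry, module path, and version tag with your own values.
bicep publish storage.bicep --target "br:exampleregistry.azurecr.io/bicep/modules/storage:v1"
```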
After publishing the file to the registry, you can [reference it in a module](modules.md#file-in-registry).
The `publish` command doesn't recognize aliases that you've defined in a [bicepc
When your Bicep file uses modules that are published to a registry, the `restore` command gets copies of all the required modules from the registry. It stores those copies in a local cache. A Bicep file can only be built when the external files are available in the local cache. Typically, you don't need to run `restore` because it's called automatically by `build`.
-To restore external modules to the local cache, the account must have the correct permissions to access the registry. You can configure the credential precedence for authenticating to the registry in the [Bicep config file](./bicep-config-modules.md#credentials-for-publishingrestoring-modules).
+To restore external modules to the local cache, the account must have the correct profile and permissions to access the registry. You can configure the profile and credential precedence for authenticating to the registry in the [Bicep config file](./bicep-config-modules.md#configure-profiles-and-credentials).
To use the restore command, you must have Bicep CLI version **0.4.1008 or later**. This command is currently only available when calling the Bicep CLI directly. It's not currently available through the Azure CLI command.
azure-resource-manager Bicep Config Modules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-config-modules.md
Title: Module setting for Bicep config description: Describes how to customize configuration values for modules in Bicep deployments. Previously updated : 04/08/2022 Last updated : 01/11/2023 # Add module settings in the Bicep config file
-In a **bicepconfig.json** file, you can create aliases for module paths and configure credential precedence for restoring a module.
+In a **bicepconfig.json** file, you can create aliases for module paths and configure profile and credential precedence for publishing and restoring modules.
-This article describes the settings that are available for working with [modules](modules.md).
+This article describes the settings that are available for working with [Bicep modules](modules.md).
## Aliases for modules
You can override the public module registry alias definition in the bicepconfig.
} ```
-## Credentials for publishing/restoring modules
+## Configure profiles and credentials
-To [publish](bicep-cli.md#publish) modules to a private module registry or to [restore](bicep-cli.md#restore) external modules to the local cache, the account must have the correct permissions to access the registry. You can configure the credential precedence for authenticating to the registry. By default, Bicep uses the credentials from the user authenticated in Azure CLI or Azure PowerShell. To customize the credential precedence, see [Add credential precedence to Bicep config](bicep-config.md#credential-precedence).
+To [publish](bicep-cli.md#publish) modules to a private module registry or to [restore](bicep-cli.md#restore) external modules to the local cache, the account must have the correct permissions to access the registry. You can configure the profile and the credential precedence for authenticating to the registry. By default, Bicep uses the `AzureCloud` profile and the credentials from the user authenticated in Azure CLI or Azure PowerShell. You can customize `currentProfile` and `credentialPrecedence` in the config file.
+
+```json
+{
+ "cloud": {
+ "currentProfile": "AzureCloud",
+ "profiles": {
+ "AzureCloud": {
+ "resourceManagerEndpoint": "https://management.azure.com",
+ "activeDirectoryAuthority": "https://login.microsoftonline.com"
+ },
+ "AzureChinaCloud": {
+ "resourceManagerEndpoint": "https://management.chinacloudapi.cn",
+ "activeDirectoryAuthority": "https://login.chinacloudapi.cn"
+ },
+ "AzureUSGovernment": {
+ "resourceManagerEndpoint": "https://management.usgovcloudapi.net",
+ "activeDirectoryAuthority": "https://login.microsoftonline.us"
+ }
+ },
+ "credentialPrecedence": [
+ "AzureCLI",
+ "AzurePowerShell"
+ ]
+ }
+}
+```
+
+The available profiles are:
+
+- AzureCloud
+- AzureChinaCloud
+- AzureUSGovernment
+
+You can customize these profiles, or add new profiles for your on-premises environments.
+
+The available credential types are:
+
+- AzureCLI
+- AzurePowerShell
+- Environment
+- ManagedIdentity
+- VisualStudio
+- VisualStudioCode
+ ## Next steps
azure-resource-manager Bicep Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-config.md
Title: Bicep config file description: Describes the configuration file for your Bicep deployments Previously updated : 12/06/2022 Last updated : 01/09/2023 # Configure your Bicep environment
To create a `bicepconfig.json` file in Visual Studio Code, open the Command Pale
## Available settings
-When working with [modules](modules.md), you can add aliases for module paths. These aliases simplify your Bicep file because you don't have to repeat complicated paths. For more information, see [Add module settings to Bicep config](bicep-config-modules.md).
+When working with [modules](modules.md), you can add aliases for module paths. These aliases simplify your Bicep file because you don't have to repeat complicated paths. You can also configure cloud profile and credential precedence for authenticating to Azure from Bicep CLI and Visual Studio Code. The credentials are used to publish modules to registries and to restore external modules to the local cache when using the insert resource function. For more information, see [Add module settings to Bicep config](bicep-config-modules.md).
The [Bicep linter](linter.md) checks Bicep files for syntax errors and best practice violations. You can override the default settings for the Bicep file validation by modifying `bicepconfig.json`. For more information, see [Add linter settings to Bicep config](bicep-config-linter.md).
-You can also configure the credential precedence for authenticating to Azure from Bicep CLI and Visual Studio Code. The credentials are used to publish modules to registries and to restore external modules to the local cache when using the insert resource function.
-
-## Credential precedence
-
-You can configure the credential precedence for authenticating to the registry. By default, Bicep uses the credentials from the user authenticated in Azure CLI or Azure PowerShell. To customize the credential precedence, add `cloud` and `credentialPrecedence` elements to the config file.
-
-```json
-{
- "cloud": {
- "credentialPrecedence": [
- "AzureCLI",
- "AzurePowerShell"
- ]
- }
-}
-```
-
-The available credential types are:
-- AzureCLI
-- AzurePowerShell
-- Environment
-- ManagedIdentity
-- VisualStudio
-- VisualStudioCode
--
## Intellisense

The Bicep extension for Visual Studio Code supports intellisense for your `bicepconfig.json` file. Use the intellisense to discover available properties and values.
azure-resource-manager Private Module Registry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/private-module-registry.md
Title: Create private registry for Bicep module description: Learn how to set up an Azure container registry for private Bicep modules Previously updated : 04/01/2022 Last updated : 01/10/2023 # Create private registry for Bicep modules
A Bicep registry is hosted on [Azure Container Registry (ACR)](../../container-r
1. To publish modules to a registry, you must have permission to **push** an image. To deploy a module from a registry, you must have permission to **pull** the image. For more information about the roles that grant adequate access, see [Azure Container Registry roles and permissions](../../container-registry/container-registry-roles.md).
-1. Depending on the type of account you use to deploy the module, you may need to customize which credentials are used. These credentials are needed to get the modules from the registry. By default, credentials are obtained from Azure CLI or Azure PowerShell. You can customize the precedence for getting the credentials in the **bicepconfig.json** file. For more information, see [Credentials for restoring modules](bicep-config-modules.md#credentials-for-publishingrestoring-modules).
+1. Depending on the type of account you use to deploy the module, you may need to customize which credentials are used. These credentials are needed to get the modules from the registry. By default, credentials are obtained from Azure CLI or Azure PowerShell. You can customize the precedence for getting the credentials in the **bicepconfig.json** file. For more information, see [Credentials for restoring modules](bicep-config-modules.md#configure-profiles-and-credentials).
> [!IMPORTANT] > The private container registry is only available to users with the required access. However, it's accessed through the public internet. For more security, you can require access through a private endpoint. See [Connect privately to an Azure container registry using Azure Private Link](../../container-registry/container-registry-private-link.md).
azure-resource-manager Microsoft Common Dropdown https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/microsoft-common-dropdown.md
When filtering is enabled, the control includes a text box for adding the filter
"type": "Microsoft.Common.DropDown", "label": "Example drop down", "placeholder": "",
- "defaultValue": "Value two",
+ "defaultValue": ["Value two"],
"toolTip": "", "multiselect": true, "selectAll": true,
azure-video-indexer Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/network-security.md
# NSG service tags for Azure Video Indexer
-Azure Video Indexer is a service hosted on Azure. In some cases the service needs to interact with other services in order to index video files (for example, a Storage account) or when you orchestrate indexing jobs against Azure Video Indexer API endpoint using your own service hosted on Azure (for example, AKS, Web Apps, Logic Apps, Functions).
+Azure Video Indexer is a service hosted on Azure. In some cases the service needs to interact with other services in order to index video files (for example, a Storage account) or when you orchestrate indexing jobs against Azure Video Indexer API endpoint using your own service hosted on Azure (for example, AKS, Web Apps, Logic Apps, Functions).
+
+> [!NOTE]
+> If you're already using the "AzureVideoAnalyzerForMedia" network service tag, you may experience issues with your network security group starting 9 January 2023. This is because we're moving to a new service tag label, "VideoIndexer", which unfortunately wasn't launched to GA in the UI before the preceding "AzureVideoAnalyzerForMedia" tag was removed. The mitigation is to run the following command from the Azure PowerShell CLI:
+
+`$nsg | Add-AzNetworkSecurityRuleConfig -Name $rulename -Description "Testing our Service Tag" -Access Allow -Protocol * -Direction Inbound -Priority 100 -SourceAddressPrefix "YourTagDisplayName" -SourcePortRange * -DestinationAddressPrefix * -DestinationPortRange $port`
+
+Where `YourTagDisplayName` needs to be replaced with **VideoIndexer**. Note that `Add-AzNetworkSecurityRuleConfig` only updates the in-memory copy of the NSG; pipe the result to `Set-AzNetworkSecurityGroup` to apply the change, as shown in the sketch below.
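A minimal sketch, assuming `$nsg`, `$rulename`, and `$port` are already defined in your session:

```powershell
# Minimal sketch: add the inbound rule for the VideoIndexer service tag, then persist the change.
$nsg | Add-AzNetworkSecurityRuleConfig -Name $rulename -Description "Allow VideoIndexer" `
    -Access Allow -Protocol * -Direction Inbound -Priority 100 `
    -SourceAddressPrefix "VideoIndexer" -SourcePortRange * `
    -DestinationAddressPrefix * -DestinationPortRange $port |
    Set-AzNetworkSecurityGroup
```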
+ Use [Network Security Groups with Service Tags](../virtual-network/service-tags-overview.md) to limit access to your resources on a network level. A service tag represents a group of IP address prefixes from a given Azure service, in this case Azure Video Indexer. Microsoft manages the address prefixes grouped by the service tag and automatically updates the service tag as addresses change in our backend, minimizing the complexity of frequent updates to network security rules by the customer.
azure-video-indexer Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/release-notes.md
To stay up-to-date with the most recent Azure Video Indexer developments, this a
For more information, see [supported languages](language-support.md).
+### Face grouping
+
+Significantly reduced the number of low-quality face detection occurrences in the UI and [insights.json](video-indexer-output-json-v2.md#insights), enhancing quality and usability through an improved grouping algorithm.
+ ## November 2022 ### Speakers' names can now be edited from the Azure Video Indexer website
backup Backup Azure Backup Import Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-backup-import-export.md
The amount of time it takes to process an Azure import job varies. Process time
To monitor the status of your import job from the Azure portal, go to the **Azure Data Box** pane and select the job.
-For more information on the status of the import jobs, see [Monitor Azure Import/Export Jobs](/azure/import-export/storage-import-export-view-drive-status?tabs=azure-portal-preview).
+For more information on the status of the import jobs, see [Monitor Azure Import/Export Jobs](../import-export/storage-import-export-view-drive-status.md?tabs=azure-portal-preview).
### Finish the workflow
After the initial backup is finished, you can safely delete the data imported to
## Next steps
-* For any questions about the Azure Import/Export service workflow, see [Use the Microsoft Azure Import/Export service to transfer data to Blob storage](../import-export/storage-import-export-service.md).
+* For any questions about the Azure Import/Export service workflow, see [Use the Microsoft Azure Import/Export service to transfer data to Blob storage](../import-export/storage-import-export-service.md).
backup Backup Azure Mars Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-mars-troubleshoot.md
Unable to find changes in a file. This could be due to various reasons. Please r
## MARS offline seeding using customer-owned disks (Import/Export) is not working
-Azure Import/Export now uses Azure Data Box APIs for offline seeding on customer-owned disks. The Azure portal also list the Import/Export jobs created using the new API under [Azure Data Box jobs](/azure/import-export/storage-import-export-view-drive-status?tabs=azure-portal-preview) with the Model column as Import/Export.
+Azure Import/Export now uses Azure Data Box APIs for offline seeding on customer-owned disks. The Azure portal also lists the Import/Export jobs created using the new API under [Azure Data Box jobs](../import-export/storage-import-export-view-drive-status.md?tabs=azure-portal-preview), with the Model column shown as Import/Export.
MARS agent versions lower than *2.0.9250.0* used the [old Azure Import/Export APIs](/rest/api/storageimportexport/), which will be discontinued after February 28, 2023, and MARS agents lower than version 2.0.9250.0 can't do offline seeding using your own disks. We therefore recommend that you use MARS agent 2.0.9250 or higher, which uses the new Azure Data Box APIs for offline seeding on your own disks.
If you've ongoing Import/Export jobs created from older MARS agents, you can sti
## Next steps - Get more details on [how to back up Windows Server with the Azure Backup agent](tutorial-backup-windows-server-to-azure.md).-- If you need to restore a backup, see [restore files to a Windows machine](backup-azure-restore-windows-server.md).
+- If you need to restore a backup, see [restore files to a Windows machine](backup-azure-restore-windows-server.md).
backup Backup Azure Vms Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-vms-encryption.md
To enable backups for ADE encrypted VMs using Azure RBAC enabled key vaults, you
:::image type="content" source="./media/backup-azure-vms-encryption/enable-key-vault-encryption-inline.png" alt-text="Screenshot shows the checkbox to enable ADE encrypted key vault." lightbox="./media/backup-azure-vms-encryption/enable-key-vault-encryption-expanded.png":::
-Learn about the [different available roles](/azure/key-vault/general/rbac-guide?tabs=azure-cli#azure-built-in-roles-for-key-vault-data-plane-operations). The **Key Vault Administrator** role can allow permissions to *get*, *list*, and *back up* both secret and key.
+Learn about the [different available roles](../key-vault/general/rbac-guide.md?tabs=azure-cli#azure-built-in-roles-for-key-vault-data-plane-operations). The **Key Vault Administrator** role can allow permissions to *get*, *list*, and *back up* both secrets and keys.
-For Azure RBAC enabled key vaults, you can create custom role with the following set of permissions. Learn [how to create custom role](/azure/active-directory/roles/custom-create).
+For Azure RBAC enabled key vaults, you can create a custom role with the following set of permissions. Learn [how to create a custom role](../active-directory/roles/custom-create.md).
| Action | Description | | | |
You can also set the access policy using [PowerShell](./backup-azure-vms-automat
If you run into any issues, review these articles: - [Common errors](backup-azure-vms-troubleshoot.md) when backing up and restoring encrypted Azure VMs.-- [Azure VM agent/backup extension](backup-azure-troubleshoot-vm-backup-fails-snapshot-timeout.md) issues.
+- [Azure VM agent/backup extension](backup-azure-troubleshoot-vm-backup-fails-snapshot-timeout.md) issues.
bastion Bastion Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-overview.md
This figure shows the architecture of an Azure Bastion deployment. In this diagr
## <a name="host-scaling"></a>Host scaling
-Azure Bastion supports manual host scaling. You can configure the number of host instances (scale units) in order to manage the number of concurrent RDP/SSH connections that Azure Bastion can support. Increasing the number of host instances lets Azure Bastion manage more concurrent sessions. Decreasing the number of instances decreases the number of concurrent supported sessions. Azure Bastion supports up to 50 host instances. This feature is available for the Azure Bastion Standard SKU only.
+Azure Bastion supports manual host scaling. You can configure the number of host **instances** (scale units) in order to manage the number of concurrent RDP/SSH connections that Azure Bastion can support. Increasing the number of host instances lets Azure Bastion manage more concurrent sessions. Decreasing the number of instances decreases the number of concurrent supported sessions. Azure Bastion supports up to 50 host instances. This feature is available for the Azure Bastion Standard SKU only.
For more information, see the [Configuration settings](configuration-settings.md#instance) article.
bastion Tutorial Protect Bastion Host https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/tutorial-protect-bastion-host.md
In this tutorial, you deploy Bastion using the Standard SKU tier and adjust host
Azure Bastion is a PaaS service that's maintained for you, not a bastion host that you install on one of your VMs and maintain yourself. For more information about Azure Bastion, see [What is Azure Bastion?](bastion-overview.md) > [!IMPORTANT]
-> Azure DDoS protection Standard incurs a cost per public IP address in the virtual network where you enable the service. Ensure you delete the resources in this tutorial if you aren't using the resources in the future. For information about pricing, see [Azure DDoS Protection Pricing](https://azure.microsoft.com/pricing/details/ddos-protection/). For more information about Azure DDoS protection, see [What is Azure DDoS Protection?](/azure/ddos-protection/ddos-protection-overview).
+> Azure DDoS protection Standard incurs a cost per public IP address in the virtual network where you enable the service. Ensure you delete the resources in this tutorial if you aren't using the resources in the future. For information about pricing, see [Azure DDoS Protection Pricing](https://azure.microsoft.com/pricing/details/ddos-protection/). For more information about Azure DDoS protection, see [What is Azure DDoS Protection?](../ddos-protection/ddos-protection-overview.md).
In this tutorial, you'll learn how to:
In this tutorial, you deployed Bastion to a virtual network and connected to a V
> [Bastion features and configuration settings](configuration-settings.md) > [!div class="nextstepaction"]
-> [Bastion - VM connections and features](vm-about.md)
+> [Bastion - VM connections and features](vm-about.md)
cloud-shell Cloud Shell Predictive Intellisense https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/cloud-shell-predictive-intellisense.md
For more information on PowerShell profiles, see [About_Profiles][06].
[01]: /powershell/module/psreadline/about/about_psreadline [02]: /powershell/azure/az-predictor [03]: /powershell/module/psreadline/set-psreadlineoption
-[04]: /azure/cloud-shell/using-cloud-shell-editor
+[04]: ./using-cloud-shell-editor.md
[05]: /powershell/scripting/learn/shell/using-predictors
-[06]: /powershell/module/microsoft.powershell.core/about/about_profiles
-
+[06]: /powershell/module/microsoft.powershell.core/about/about_profiles
cloud-shell Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/features.md
Other
[02]: ../key-vault/general/manage-with-cli2.md#prerequisites [03]: ../service-fabric/service-fabric-cli.md [04]: ../storage/common/storage-use-azcopy-v10.md
-[05]: /azure/azure-functions/functions-run-local
+[05]: ../azure-functions/functions-run-local.md
[06]: /cli/azure/ [07]: /powershell/azure [08]: /powershell/scripting/whats-new/what-s-new-in-powershell-72
Other
[28]: medilets.png [29]: persisting-shell-storage.md [30]: quickstart-powershell.md
-[31]: quickstart.md
+[31]: quickstart.md
cognitive-services Azure Data Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/tutorials/azure-data-explorer.md
The [Anomaly Detector API](/azure/cognitive-services/anomaly-detector/overview-m
### Function 1: series_uv_anomalies_fl()
-The function **[series_uv_anomalies_fl()](/azure/data-explorer/kusto/functions-library/series-uv-anomalies-fl?tabs=adhoc)** detects anomalies in time series by calling the [Univariate Anomaly Detection API](/azure/cognitive-services/anomaly-detector/overview). The function accepts a limited set of time series as numerical dynamic arrays and the required anomaly detection sensitivity level. Each time series is converted into the required JSON (JavaScript Object Notation) format and posts it to the Anomaly Detector service endpoint. The service response has dynamic arrays of high/low/all anomalies, the modeled baseline time series, its normal high/low boundaries (a value above or below the high/low boundary is an anomaly) and the detected seasonality.
+The function **[series_uv_anomalies_fl()](/azure/data-explorer/kusto/functions-library/series-uv-anomalies-fl?tabs=adhoc)** detects anomalies in time series by calling the [Univariate Anomaly Detection API](../overview.md). The function accepts a limited set of time series as numerical dynamic arrays and the required anomaly detection sensitivity level. Each time series is converted into the required JSON (JavaScript Object Notation) format and posts it to the Anomaly Detector service endpoint. The service response has dynamic arrays of high/low/all anomalies, the modeled baseline time series, its normal high/low boundaries (a value above or below the high/low boundary is an anomaly) and the detected seasonality.
### Function 2: series_uv_change_points_fl()
cognitive-services Use Display Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Web-Search/use-display-requirements.md
Previously updated : 03/02/2022- Last updated : 01/12/2023+ # Bing Search API use and display requirements
These use and display requirements apply to any implementation of the content an
|Term |Description | |||
-|Answer | A category of results returned in a response. For example, a response from the Bing Web Search API can include answers in the categories of webpage results, image, video, visual, and news. |
+|Answer | A category of results returned in a response. For example, a response from the Bing Web Search API can include answers in the categories of webpage results, image, video, and news. |
|Response | Any and all answers and associated data received in response to a single call to a Search API. | |Result | An item of information in an answer. For example, the set of data connected with a single news article is a result in a news answer. | |Search APIs | collectively, the Bing Custom Search, Entity Search, Image Search, News Search, Video Search, Visual Search, Local Business Search, and Web Search APIs. |
Do not:
- Copy, store, or cache any data you receive from the Bing Spell Check or Bing Autosuggest APIs.
- Use data you receive from the Bing Spell Check or Bing Autosuggest APIs as part of any machine learning or similar algorithmic activity. Do not use this data to train, evaluate, or improve new or existing services that you or third parties might offer.
+- Display data received from the Bing Spell Check or Bing Autosuggest APIs on the same page as content from any general web search engine, large language model, or advertising network.
## Bing Search APIs
Do not:
- Use data received from the Search APIs as part of any machine learning or similar algorithmic activity. Do not use this data to train, evaluate, or improve new or existing services that you or third parties might offer.
+- Display data received from the Search APIs on the same page as content from any general web search engine, large language model, or advertising network.
+ - Modify the content of results (other than to reformat them in a way that does not violate any other requirement), unless required by law or agreed to by Microsoft. - Omit attribution information and URLs associated with result content.
cognitive-services Build Enrollment App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/Tutorials/build-enrollment-app.md
The sample app is written using JavaScript and the React Native framework. It ca
1. Retrieve your FaceAPI endpoint and key in the Azure portal under the **Overview** tab of your resource. Don't check in your Face API key to your remote repository. > [!WARNING]
- > For local development and testing only, you can enter the API key and endpoint as environment variables. For final deployment, store the API key in a secure location and never in the code or environment variables. See the [Cognitive Services Authentication guide](/azure/cognitive-services/authentication) for other ways to authenticate the service.
+ > For local development and testing only, you can enter the API key and endpoint as environment variables. For final deployment, store the API key in a secure location and never in the code or environment variables. See the [Cognitive Services Authentication guide](../../authentication.md) for other ways to authenticate the service.
1. Run the app using either the Android Virtual Device emulator from Android Studio, or your own Android device. To test your app on a physical device, follow the relevant <a href="https://reactnative.dev/docs/running-on-device" title="React Native documentation" target="_blank">React Native documentation <span class="docon docon-navigate-external x-hidden-focus"></span></a>.
The sample app is written using JavaScript and the React Native framework. It ca
1. Retrieve your FaceAPI endpoint and key in the Azure portal under the **Overview** tab of your resource. > [!WARNING]
- > For local development and testing only, you can enter the API key and endpoint as environment variables. For final deployment, store the API key in a secure location and never in the code or environment variables. See the [Cognitive Services Authentication guide](/azure/cognitive-services/authentication) for other ways to authenticate the service.
+ > For local development and testing only, you can enter the API key and endpoint as environment variables. For final deployment, store the API key in a secure location and never in the code or environment variables. See the [Cognitive Services Authentication guide](../../authentication.md) for other ways to authenticate the service.
1. Run the app using either a simulated device from Xcode, or your own iOS device. To test your app on a physical device, follow the relevant <a href="https://reactnative.dev/docs/running-on-device" title="React Native documentation" target="_blank">React Native documentation <span class="docon docon-navigate-external x-hidden-focus"></span></a>.
When you're ready to release your app for production, you'll build an archive of
## Next steps
-In this guide, you learned how to set up your development environment and get started with the sample app. If you're new to React Native, you can read their [getting started docs](https://reactnative.dev/docs/getting-started) to learn more background information. It also may be helpful to familiarize yourself with [Face API](../overview-identity.md). Read the other sections on adding users before you begin development.
+In this guide, you learned how to set up your development environment and get started with the sample app. If you're new to React Native, you can read their [getting started docs](https://reactnative.dev/docs/getting-started) to learn more background information. It also may be helpful to familiarize yourself with [Face API](../overview-identity.md). Read the other sections on adding users before you begin development.
cognitive-services Batch Synthesis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/batch-synthesis.md
This diagram provides a high-level overview of the workflow.
![Diagram of the Batch Synthesis API workflow.](media/long-audio-api/long-audio-api-workflow.png) > [!TIP]
-> You can also use the [Speech SDK](speech-sdk.md) to create synthesized audio longer than 10 minutes by iterating over the text and synthesizing it in chunks.
+> You can also use the [Speech SDK](speech-sdk.md) to create synthesized audio longer than 10 minutes by iterating over the text and synthesizing it in chunks. For a C# example, see [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/speech_synthesis_samples.cs).
You can use the following REST API operations for batch synthesis:
cognitive-services Ingestion Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/ingestion-client.md
Internally, the tool uses Speech and Language services, and follows best practic
:::image type="content" source="media/ingestion-client/architecture-1.png" alt-text="Diagram that shows the Ingestion Client Architecture.":::
-The following Speech service features are used by the Ingestion Client:
+The following Speech service feature is used by the Ingestion Client:
- [Batch speech-to-text](./batch-transcription.md): Transcribe large amounts of audio files asynchronously, including speaker diarization; typically used in post-call analytics scenarios. Diarization is the process of recognizing and separating speakers in mono channel audio data.
-- [Speaker identification](./speaker-recognition-overview.md): Helps you determine an unknown speaker's identity within a group of enrolled speakers and is typically used for call center customer verification scenarios or fraud detection.
-Language service features used by the Ingestion Client:
+Here are some Language service features that are used by the Ingestion Client:
- [Personally Identifiable Information (PII) extraction and redaction](../language-service/personally-identifiable-information/how-to-call-for-conversations.md): Identify, categorize, and redact sensitive information in conversation transcription. - [Sentiment analysis and opinion mining](../language-service/sentiment-opinion-mining/overview.md): Analyze transcriptions and associate positive, neutral, or negative sentiment at the utterance and conversation-level.
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/language-support.md
Previously updated : 09/16/2022 Last updated : 01/12/2023
The tables in this section summarizes the locales and voices supported for Text-
Additional remarks for Text-to-speech locales are included in the [Voice styles and roles](#voice-styles-and-roles), [Prebuilt neural voices](#prebuilt-neural-voices), and [Custom Neural Voice](#custom-neural-voice) sections below.
+> [!TIP]
+> Check the [voice samples](https://azure.microsoft.com/services/cognitive-services/text-to-speech/#overview) and determine the right voice for your business needs.
+ [!INCLUDE [Language support include](includes/language-support/tts.md)] ### Voice styles and roles
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/overview.md
Speech feature summaries are provided below with links for more information.
Use [speech-to-text](speech-to-text.md) to transcribe audio into text, either in real time or asynchronously.
+> [!TIP]
+> You can try speech-to-text in [Speech Studio](https://aka.ms/speechstudio/speechtotexttool) without signing up or writing any code.
+ Convert audio to text from a range of sources, including microphones, audio files, and blob storage. Use speaker diarisation to determine who said what and when. Get readable transcripts with automatic formatting and punctuation. The base model may not be sufficient if the audio contains ambient noise or includes a lot of industry and domain-specific jargon. In these cases, you can create and train [custom speech models](custom-speech-overview.md) with acoustic, language, and pronunciation data. Custom speech models are private and can offer a competitive advantage.
-You can try speech to text with [this demo web app](https://azure.microsoft.com/services/cognitive-services/speech-to-text/#features) or in the [Speech Studio](https://aka.ms/speechstudio/speechtotexttool).
- ### Text-to-speech With [text to speech](text-to-speech.md), you can convert input text into humanlike synthesized speech. Use neural voices, which are humanlike voices powered by deep neural networks. Use the [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md) to fine-tune the pitch, pronunciation, speaking rate, volume, and more.
We offer quickstarts in many popular programming languages. Each quickstart is d
* [Speech-to-text quickstart](get-started-speech-to-text.md) * [Text-to-speech quickstart](get-started-text-to-speech.md) * [Speech translation quickstart](./get-started-speech-translation.md)
-* [Intent recognition quickstart](./get-started-intent-recognition.md)
-* [Speaker recognition quickstart](./get-started-speaker-recognition.md)
## Code samples
cognitive-services Speech To Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-to-text.md
In this overview, you learn about the benefits and capabilities of the speech-to
Speech-to-text, also known as speech recognition, enables real-time or offline transcription of audio streams into text. For a full list of available speech-to-text languages, see [Language and voice support for the Speech service](language-support.md?tabs=stt). > [!NOTE]
-> Microsoft uses the same recognition technology for Cortana and Office products.
+> Microsoft uses the same recognition technology for Windows and Office products.
## Get started
To get started, try the [speech-to-text quickstart](get-started-speech-to-text.m
In-depth samples are available in the [Azure-Samples/cognitive-services-speech-sdk](https://aka.ms/csspeech/samples) repository on GitHub. There are samples for C# (including UWP, Unity, and Xamarin), C++, Java, JavaScript (including Browser and Node.js), Objective-C, Python, and Swift. Code samples for Go are available in the [Microsoft/cognitive-services-speech-sdk-go](https://github.com/Microsoft/cognitive-services-speech-sdk-go) repository on GitHub. - ## Batch transcription Batch transcription is a set of [Speech-to-text REST API](rest-speech-to-text.md) operations that enable you to transcribe a large amount of audio in storage. You can point to audio files with a shared access signature (SAS) URI and asynchronously receive transcription results. For more information on how to use the batch transcription API, see [How to use batch transcription](batch-transcription.md) and [Batch transcription samples (REST)](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/batch).
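As a hedged sketch of creating such a job (assuming the v3.0 `transcriptions` endpoint; the key, region, and SAS URI are placeholders):

```csharp
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class BatchTranscription
{
    static async Task CreateTranscriptionAsync()
    {
        using var client = new HttpClient();
        // Placeholder key; use your Speech resource key.
        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "<YOUR_SPEECH_KEY>");

        // Point at audio in storage through a SAS URI.
        var body = @"{
            ""contentUrls"": [""<SAS_URI_TO_AUDIO_FILE>""],
            ""locale"": ""en-US"",
            ""displayName"": ""My batch transcription""
        }";

        HttpResponseMessage response = await client.PostAsync(
            "https://<YOUR_REGION>.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions",
            new StringContent(body, Encoding.UTF8, "application/json"));
        response.EnsureSuccessStatusCode();
    }
}
```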
cognitive-services Container Image Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/containers/container-image-tags.md
Release notes for `3.0.015490002-onprem-amd64`:
The [Translator][tr-containers] container image can be found on the `mcr.microsoft.com` container registry syndicate. It resides within the `azure-cognitive-services/translator` repository and is named `text-translation`. The fully qualified container image name is `mcr.microsoft.com/azure-cognitive-services/translator/text-translation:1.0.019410001-amd64-preview`.
-This container image has the following tags available.
+This container image has the following tags available. You can also find a full list of [tags on the MCR](https://mcr.microsoft.com/v2/azure-cognitive-services/translator/text-translation/tags/list).
| Image Tags | Notes | |-|:|
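As a small sketch (assuming the tags list endpoint allows anonymous access), you can fetch that same list programmatically:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class ListTranslatorTags
{
    static async Task Main()
    {
        using var client = new HttpClient();
        // The MCR tags list endpoint linked above.
        string json = await client.GetStringAsync(
            "https://mcr.microsoft.com/v2/azure-cognitive-services/translator/text-translation/tags/list");
        Console.WriteLine(json);
    }
}
```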
cognitive-services Encrypt Data At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/encrypt-data-at-rest.md
az keyvault key delete \
### Delete training, validation, and training results data
- The Files API allows customers to upload their training data for the purpose of fine-tuning a model. This data is stored in Azure Storage, within the same region as the resource and logically isolated with their Azure subscription and API Credentials. Uploaded files can be deleted by the user via the [DELETE API operation](/azure/cognitive-services/openai/how-to/fine-tuning?pivots=programming-language-python#delete-your-training-files).
+ The Files API allows customers to upload their training data for the purpose of fine-tuning a model. This data is stored in Azure Storage, within the same region as the resource and logically isolated with their Azure subscription and API credentials. Uploaded files can be deleted by the user via the [DELETE API operation](./how-to/fine-tuning.md?pivots=programming-language-python#delete-your-training-files).
### Delete fine-tuned models and deployments
-The Fine-tunes API allows customers to create their own fine-tuned version of the OpenAI models based on the training data that you've uploaded to the service via the Files APIs. The trained fine-tuned models are stored in Azure Storage in the same region, encrypted at rest and logically isolated with their Azure subscription and API credentials. Fine-tuned models and deployments can be deleted by the user by calling the [DELETE API operation](/azure/cognitive-services/openai/how-to/fine-tuning?pivots=programming-language-python#delete-your-model-deployment).
+The Fine-tunes API allows customers to create their own fine-tuned version of the OpenAI models based on the training data that you've uploaded to the service via the Files APIs. The trained fine-tuned models are stored in Azure Storage in the same region, encrypted at rest and logically isolated with their Azure subscription and API credentials. Fine-tuned models and deployments can be deleted by the user by calling the [DELETE API operation](./how-to/fine-tuning.md?pivots=programming-language-python#delete-your-model-deployment).
## Disable customer-managed keys
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/overview.md
The number of examples typically ranges from 0 to 100 depending on how many can f
### Models
-The service provides users access to several different models. Each model provides a different capability and price point. The GPT-3 base models are known as Davinci, Curie, Babbage, and Ada in decreasing order of capability and speed.
+The service provides users access to several different models. Each model provides a different capability and price point. The GPT-3 base models are known as Davinci, Curie, Babbage, and Ada in decreasing order of capability and increasing order of speed.
The Codex series of models is a descendant of GPT-3 and has been trained on both natural language and code to power natural language to code use cases. Learn more about each model on our [models concept page](./concepts/models.md). ## Next steps
-Learn more about the [underlying models that power Azure OpenAI](./concepts/models.md).
+Learn more about the [underlying models that power Azure OpenAI](./concepts/models.md).
cognitive-services Embeddings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/tutorials/embeddings.md
In this tutorial, you learn how to:
Currently, access to this service is granted only by application. You can apply for access to the Azure OpenAI service by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. Open an issue on this repo to contact us if you have an issue. * <a href="https://www.python.org/" target="_blank">Python 3.7.1 or later version</a> * The following Python libraries: openai, num2words, matplotlib, plotly, scipy, scikit-learn, transformers.
-* An Azure OpenAI Service resource with **text-search-curie-doc-001** and **text-search-curie-query-001** models deployed. These models are currently only available in [certain regions](/azure/cognitive-services/openai/concepts/models#model-summary-table-and-region-availability). If you don't have a resource the process is documented in our [resource deployment guide](../how-to/create-resource.md).
+* An Azure OpenAI Service resource with **text-search-curie-doc-001** and **text-search-curie-query-001** models deployed. These models are currently only available in [certain regions](../concepts/models.md#model-summary-table-and-region-availability). If you don't have a resource, the process is documented in our [resource deployment guide](../how-to/create-resource.md).
> [!NOTE] > If you have never worked with the Hugging Face transformers library it has its own specific [prerequisites](https://huggingface.co/docs/transformers/installation) that are required before you can successfully run `pip install transformers`.
res["summary"][9]
Using this approach, you can use embeddings as a search mechanism across documents in a knowledge base. The user can then take the top search result and use it for their downstream task, which prompted their initial query.
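Conceptually, that search step is a cosine-similarity ranking of document vectors against the query vector. Here's a minimal sketch in C# with stand-in vectors; real embeddings come from the deployed models listed in the prerequisites:

```csharp
using System;
using System.Linq;

class EmbeddingSearch
{
    // Cosine similarity between two equal-length vectors.
    static double CosineSimilarity(double[] a, double[] b)
    {
        double dot = 0, normA = 0, normB = 0;
        for (int i = 0; i < a.Length; i++)
        {
            dot += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        return dot / (Math.Sqrt(normA) * Math.Sqrt(normB));
    }

    static void Main()
    {
        // Stand-in embeddings; real ones come from the embeddings API.
        double[] query = { 0.1, 0.3, 0.5 };
        double[][] docs = { new[] { 0.1, 0.2, 0.6 }, new[] { 0.9, 0.1, 0.0 } };

        // Rank documents by similarity to the query, highest first.
        var ranked = docs
            .Select((vec, index) => (index, score: CosineSimilarity(query, vec)))
            .OrderByDescending(d => d.score);

        foreach (var (index, score) in ranked)
            Console.WriteLine($"doc {index}: {score:F3}");
    }
}
```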
+## Video
+
+There is a video walkthrough of this tutorial, including the prerequisite steps, which can be viewed in this [community YouTube post](https://www.youtube.com/watch?v=PSLO-yM6eFY).
+ ## Clean up resources If you created an OpenAI resource solely for completing this tutorial and want to clean up and remove an OpenAI resource, you'll need to delete your deployed models, and then delete the resource or associated resource group if it's dedicated to your test resource. Deleting the resource group also deletes any other resources associated with it.
If you created an OpenAI resource solely for completing this tutorial and want t
Learn more about Azure OpenAI's models: > [!div class="nextstepaction"]
-> [Next steps button](../concepts/models.md)
+> [Next steps button](../concepts/models.md)
cognitive-services Responsible Guidance Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/responsible-guidance-integration.md
When you get ready to integrate and responsibly use AI-powered products or featu
- **User Study**: Any consent or disclosure recommendations should be framed in a user study. Evaluate the first and continuous-use experience with a representative sample of the community to validate that the design choices lead to effective disclosure. Conduct user research with 10-20 community members (affected stakeholders) to evaluate their comprehension of the information and to determine if their expectations are met. -- **Transparency & Explainability:** Consider enabling and using Personalizer's [inference explainability](/azure/cognitive-services/personalizer/concepts-features?branch=main#inference-explainability) capability to better understand which features play a significant role in Personalizer's decision choice in each Rank call. This capability empowers you to provide your users with transparency regarding how their data played a role in producing the recommended best action. For example, you can give your users a button labeled "Why These Suggestions?" that shows which top features played a role in producing the Personalizer results. This information can also be used to better understand what data attributes about your users, contexts, and actions are working in favor of Personalizer's choice of best action, which are working against it, and which may have little or no effect. This capability can also provide insights about your user segments and help you identify and address potential biases.
+- **Transparency & Explainability:** Consider enabling and using Personalizer's [inference explainability](./concepts-features.md?branch=main) capability to better understand which features play a significant role in Personalizer's decision choice in each Rank call. This capability empowers you to provide your users with transparency regarding how their data played a role in producing the recommended best action. For example, you can give your users a button labeled "Why These Suggestions?" that shows which top features played a role in producing the Personalizer results. This information can also be used to better understand what data attributes about your users, contexts, and actions are working in favor of Personalizer's choice of best action, which are working against it, and which may have little or no effect. This capability can also provide insights about your user segments and help you identify and address potential biases.
- **Adversarial use**: consider establishing a process to detect and act on malicious manipulation. There are actors that will take advantage of machine learning and AI systems' ability to learn from their environment. With coordinated attacks, they can artificially fake patterns of behavior that shift the data and AI models toward their goals. If your use of Personalizer could influence important choices, make sure you have the appropriate means to detect and mitigate these types of attacks in place.
cognitive-services Responsible Use Of Ai Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/responsible-use-of-ai-overview.md
Azure Cognitive Services provides information and guidelines on how to responsib
## Personalizer
-* [Transparency note and use cases](/azure/cognitive-services/personalizer/responsible-use-cases)
-* [Characteristics and limitations](/azure/cognitive-services/personalizer/responsible-characteristics-and-limitations)
-* [Integration and responsible use](/azure/cognitive-services/personalizer/responsible-guidance-integration)
-* [Data and privacy](/azure/cognitive-services/personalizer/responsible-data-and-privacy)
+* [Transparency note and use cases](./personalizer/responsible-use-cases.md)
+* [Characteristics and limitations](./personalizer/responsible-characteristics-and-limitations.md)
+* [Integration and responsible use](./personalizer/responsible-guidance-integration.md)
+* [Data and privacy](./personalizer/responsible-data-and-privacy.md)
## QnA Maker
Azure Cognitive Services provides information and guidelines on how to responsib
* [Transparency note and use cases](/legal/cognitive-services/speech-service/speech-to-text/transparency-note?context=/azure/cognitive-services/speech-service/context/context) * [Characteristics and limitations](/legal/cognitive-services/speech-service/speech-to-text/characteristics-and-limitations?context=/azure/cognitive-services/speech-service/context/context) * [Integration and responsible use](/legal/cognitive-services/speech-service/speech-to-text/guidance-integration-responsible-use?context=/azure/cognitive-services/speech-service/context/context)
-* [Data, privacy, and security](/legal/cognitive-services/speech-service/speech-to-text/data-privacy-security?context=/azure/cognitive-services/speech-service/context/context)
-
+* [Data, privacy, and security](/legal/cognitive-services/speech-service/speech-to-text/data-privacy-security?context=/azure/cognitive-services/speech-service/context/context)
cognitive-services Security Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/security-features.md
For a comprehensive list of Azure service security recommendations see the [Cogn
|:|:| | [Transport Layer Security (TLS)](/dotnet/framework/network-programming/tls) | All of the Cognitive Services endpoints exposed over HTTP enforce the TLS 1.2 protocol. With an enforced security protocol, consumers attempting to call a Cognitive Services endpoint should follow these guidelines: </br>- The client operating system (OS) needs to support TLS 1.2.</br>- The language (and platform) used to make the HTTP call need to specify TLS 1.2 as part of the request. Depending on the language and platform, specifying TLS is done either implicitly or explicitly.</br>- For .NET users, consider the [Transport Layer Security best practices](/dotnet/framework/network-programming/tls). | | [Authentication options](./authentication.md)| Authentication is the act of verifying a user's identity. Authorization, by contrast, is the specification of access rights and privileges to resources for a given identity. An identity is a collection of information about a <a href="https://en.wikipedia.org/wiki/Principal_(computer_security)" target="_blank">principal</a>, and a principal can be either an individual user or a service.</br></br>By default, you authenticate your own calls to Cognitive Services using the subscription keys provided; this is the simplest method but not the most secure. The most secure authentication method is to use managed roles in Azure Active Directory. To learn about this and other authentication options, see [Authenticate requests to Cognitive Services](./authentication.md). |
-| [Key rotation](./authentication.md)| Each Cognitive Services resource has two API keys to enable secret rotation. This is a security precaution that lets you regularly change the keys that can access your service, protecting the privacy of your service in the event that a key gets leaked. To learn about this and other authentication options, see [Rotate keys](/azure/cognitive-services/rotate-keys). |
+| [Key rotation](./authentication.md)| Each Cognitive Services resource has two API keys to enable secret rotation. This is a security precaution that lets you regularly change the keys that can access your service, protecting the privacy of your service in the event that a key gets leaked. To learn about this and other authentication options, see [Rotate keys](./rotate-keys.md). |
| [Environment variables](cognitive-services-environment-variables.md) | Environment variables are name-value pairs that are stored within a specific development environment. You can store your credentials in this way as a more secure alternative to using hardcoded values in your code. However, if your environment is compromised, the environment variables are compromised as well, so this is not the most secure approach.</br></br> For instructions on how to use environment variables in your code, see the [Environment variables guide](cognitive-services-environment-variables.md). | | [Customer-managed keys (CMK)](./encryption/cognitive-services-encryption-keys-portal.md) | This feature is for services that store customer data at rest (longer than 48 hours). While this data is already double-encrypted on Azure servers, users can get extra security by adding another layer of encryption, with keys they manage themselves. You can link your service to Azure Key Vault and manage your data encryption keys there. </br></br>You need special approval to get the E0 SKU for your service, which enables CMK. Within 3-5 business days after you submit the [request form](https://aka.ms/cogsvc-cmk), you'll get an update on the status of your request. Depending on demand, you may be placed in a queue and approved as space becomes available. Once you're approved for using the E0 SKU, you'll need to create a new resource from the Azure portal and select E0 as the Pricing Tier. You won't be able to upgrade from F0 to the new E0 SKU. </br></br>Only some services can use CMK; look for your service on the [Customer-managed keys](./encryption/cognitive-services-encryption-keys-portal.md) page.| | [Virtual networks](./cognitive-services-virtual-networks.md) | Virtual networks allow you to specify which endpoints can make API calls to your resource. The Azure service will reject API calls from devices outside of your network. You can set a formula-based definition of the allowed network, or you can define an exhaustive list of endpoints to allow. This is another layer of security that can be used in combination with others. |
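As a brief illustration of the TLS row above for .NET Framework callers (a sketch only; modern .NET versions negotiate TLS 1.2 or later by default, so this is needed only for older apps):

```csharp
using System.Net;

class TlsConfiguration
{
    static void Main()
    {
        // Opt older .NET Framework apps into TLS 1.2 for outgoing calls
        // to Cognitive Services endpoints.
        ServicePointManager.SecurityProtocol |= SecurityProtocolType.Tls12;
    }
}
```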
communication-services Identifiers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/identifiers.md
There are user identities that you create yourself and there are external identi
* For an introduction to communication identities, see [Identity model](./identity-model.md). * To learn how to quickly create identities for testing, see the [quick-create identity quickstart](../quickstarts/identity/quick-create-identity.md). * To learn how to use Communication Services together with Microsoft Teams, see [Teams interoperability](./teams-interop.md).
+* To learn how to use a Raw ID, see [Use cases for string identifiers in Communication SDKs](./raw-id-use-cases.md).
communication-services Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/guest/capabilities.md
In this article, you will learn which capabilities are supported for Teams exter
When Teams external users leave the meeting, or the meeting ends, they can no longer send or receive new chat messages and no longer have access to messages sent and received during the meeting.
+Azure Communication Services provides developers with tools to integrate with Microsoft Teams Data Loss Prevention. For more information, see [how to implement Data Loss Prevention (DLP)](../../../../how-to/chat-sdk/data-loss-prevention.md).
+ ## Server capabilities The following table shows supported server-side capabilities available in Azure Communication
communication-services Raw Id Use Cases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/raw-id-use-cases.md
+
+ Title: Azure Communication Services - Use cases for string identifiers
+description: Learn how to use Raw ID in SDKs
+++++ Last updated : 12/23/2022++
+#Customer intent: As a developer, I want to learn how to correctly use Raw ID so that I can build applications that run efficiently.
++
+# Use cases for string identifiers in Communication SDKs
+
+This article provides use cases for choosing a string (Raw ID) as a representation type of the [CommunicationIdentifier type](./identifiers.md#the-communicationidentifier-type) in Azure Communication Services SDKs. Following this guidance will help you understand when you might want to choose a Raw ID over the *CommunicationIdentifier* derived types.
+
+## Use cases for choosing an identifier
+A common task when implementing communication scenarios is to identify participants of conversations. When you're using Communication Services SDKs, *CommunicationIdentifier* provides the capability of uniquely identifying these participants.
+
+CommunicationIdentifier has the following advantages:
+- Provides good auto-complete in IDEs.
+- Allows using a switch case by type to address different application flows.
+- Allows restricting communication to specific types.
+- Allows access to identifier details that you can use to call other APIs (such as the Microsoft Graph API) to provide a rich experience for communication participants.
+
+On top of this, the *CommunicationIdentifier* and the derived types (`MicrosoftTeamsUserIdentifier`, `PhoneNumberIdentifier`, etc.) can be converted to their string representation (Raw ID) and restored from it, making the following scenarios easier to implement:
+- Store identifiers in a database and use them as keys.
+- Use identifiers as keys in dictionaries.
+- Implement intuitive REST CRUD APIs by using identifiers as keys in REST API paths, instead of having to rely on POST payloads.
+- Use identifiers as keys in declarative UI frameworks such as React to avoid unnecessary re-rendering.
+
+### Creating CommunicationIdentifier and retrieving Raw ID
+*CommunicationIdentifier* can be created from a Raw ID, and a Raw ID can be retrieved from a type derived from *CommunicationIdentifier*. This removes the need for any custom serialization methods that might take in only certain object properties and omit others. For example, the `MicrosoftTeamsUserIdentifier` has multiple properties such as `IsAnonymous` or `Cloud`, or methods to retrieve these values (depending on the platform). Using the methods provided by the Communication Identity SDK guarantees that the way of serializing identifiers stays canonical and consistent even if more properties are added.
+
+Get the Raw ID from a CommunicationIdentifier:
+
+```csharp
+public async Task GetRawId()
+{
+ // 'ChatThreadClient' is an initialized chat thread client instance.
+ ChatMessage message = await ChatThreadClient.GetMessageAsync("678f26ef0c");
+ CommunicationIdentifier communicationIdentifier = message.Sender;
+ string rawId = communicationIdentifier.RawId;
+}
+```
+
+Instantiate a CommunicationIdentifier from a Raw ID:
+
+```csharp
+public void CommunicationIdentifierFromGetRawId()
+{
+ string rawId = "8:acs:bbbcbc1e-9f06-482a-b5d8-20e3f26ef0cd_45ab2481-1c1c-4005-be24-0ffb879b1130";
+ CommunicationIdentifier communicationIdentifier = CommunicationIdentifier.FromRawId(rawId);
+}
+```
+
+You can find more platform-specific examples in the following article: [Understand identifier types](./identifiers.md)
+
+## Storing CommunicationIdentifier in a database
+One of the typical jobs required of you is mapping Azure Communication Services users to users from a Contoso user database or identity provider. This is usually achieved by adding an extra column or field to the Contoso user database or identity provider. However, given the characteristics of the Raw ID (stable, globally unique, and deterministic), you may as well choose it as a primary key for the user storage.
+
+Assume `ContosoUser` is a class that represents a user of your application, and you want to save it, along with a corresponding CommunicationIdentifier, to the database. The original value for a `CommunicationIdentifier` can come from the Communication Identity, Calling, or Chat APIs or from a custom Contoso API, but it can be represented as a `string` data type in your programming language no matter what the underlying type is:
+
+```csharp
+public class ContosoUser
+{
+ public string Name { get; set; }
+ public string Email { get; set; }
+ public string CommunicationId { get; set; }
+}
+```
+
+You can access the `RawId` property of the `CommunicationIdentifier` to get a string that can be stored in the database:
+
+```csharp
+public void StoreToDatabase()
+{
+ // Obtained earlier, for example from the Communication Identity, Calling, or Chat APIs.
+ CommunicationIdentifier communicationIdentifier = CommunicationIdentifier.FromRawId("8:acs:bbbcbc1e-9f06-482a-b5d8-20e3f26ef0cd_45ab2481-1c1c-4005-be24-0ffb879b1130");
+
+ ContosoUser user = new ContosoUser()
+ {
+ Name = "John",
+ Email = "john@doe.com",
+ CommunicationId = communicationIdentifier.RawId
+ };
+ SaveToDb(user);
+}
+```
+
+If you want to get a `CommunicationIdentifier` from the stored Raw ID, pass the raw string to the `FromRawId()` method:
+
+```csharp
+public void GetFromDatabase()
+{
+ ContosoUser user = GetFromDb("john@doe.com");
+ CommunicationIdentifier communicationIdentifier = CommunicationIdentifier.FromRawId(user.CommunicationId);
+}
+```
+It returns a `CommunicationUserIdentifier`, `PhoneNumberIdentifier`, `MicrosoftTeamsUserIdentifier`, or `UnknownIdentifier` based on the identifier type.
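+As a short, hedged sketch (the branching and messages are illustrative assumptions, not part of the SDK), you can pattern-match on the returned type to adjust your application flow:
+
+```csharp
+CommunicationIdentifier identifier = CommunicationIdentifier.FromRawId(user.CommunicationId);
+string description = identifier switch
+{
+    CommunicationUserIdentifier acsUser => $"Communication Services user {acsUser.Id}",
+    PhoneNumberIdentifier phone => $"Phone number {phone.PhoneNumber}",
+    MicrosoftTeamsUserIdentifier teamsUser => $"Teams user {teamsUser.UserId}",
+    UnknownIdentifier unknown => $"Unknown identifier {unknown.Id}",
+    _ => "Unsupported identifier type"
+};
+```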
+
+## Storing CommunicationIdentifier in collections
+If your scenario requires working with several *CommunicationIdentifier* objects in memory, you may want to store them in a collection (dictionary, list, hash set, etc.). A collection is useful, for example, for maintaining a list of call or chat participants. As the hashing logic relies on the value of a Raw ID, you can use *CommunicationIdentifier* in collections that require elements to have a reliable hashing behavior. The following examples demonstrate adding *CommunicationIdentifier* objects to different types of collections and checking if they're contained in a collection by instantiating new identifiers from a Raw ID value.
+
+The following example shows how a Raw ID can be used as a key in a dictionary to store a user's messages:
+
+```csharp
+public void StoreMessagesForContosoUsers()
+{
+ var communicationUser = new CommunicationUserIdentifier("8:acs:bbbcbc1e-9f06-482a-b5d8-20e3f26ef0cd_45ab2481-1c1c-4005-be24-0ffb879b1130");
+ var teamsUser = new MicrosoftTeamsUserIdentifier("45ab2481-1c1c-4005-be24-0ffb879b1130");
+
+ // A dictionary keyed by Raw ID might be used to store messages of a user.
+ var userMessages = new Dictionary<string, List<Message>>
+ {
+ { communicationUser.RawId, new List<Message>() },
+ { teamsUser.RawId, new List<Message>() },
+ };
+
+ // Retrieve messages for a user based on their Raw ID.
+ var messages = userMessages[communicationUser.RawId];
+}
+```
+
+As the hashing logic relies on the value of a Raw ID, you can use `CommunicationIdentifier` itself as a key in a dictionary directly:
+
+```csharp
+public void StoreMessagesForContosoUsers()
+{
+ // A dictionary with a CommunicationIdentifier as key might be used to store messages of a user.
+ var userMessages = new Dictionary<CommunicationIdentifier, List<Message>>
+ {
+ { new CommunicationUserIdentifier("8:acs:bbbcbc1e-9f06-482a-b5d8-20e3f26ef0cd_45ab2481-1c1c-4005-be24-0ffb879b1130"), new List<Message>() },
+ { new MicrosoftTeamsUserIdentifier("45ab2481-1c1c-4005-be24-0ffb879b1130"), new List<Message>() },
+ };
+
+ // Retrieve messages for a user based on their Raw ID.
+ var messages = userMessages[CommunicationIdentifier.FromRawId("8:acs:bbbcbc1e-9f06-482a-b5d8-20e3f26ef0cd_45ab2481-1c1c-4005-be24-0ffb879b1130")];
+}
+```
+
+Hashing logic that relies on the value of a Raw ID also allows you to add `CommunicationIdentifier` objects to hash sets:
+```csharp
+public void StoreUniqueContosoUsers()
+{
+ // A hash set of unique users of a Contoso application.
+ var users = new HashSet<CommunicationIdentifier>
+ {
+ new PhoneNumberIdentifier("+14255550123"),
+ new UnknownIdentifier("28:45ab2481-1c1c-4005-be24-0ffb879b1130")
+ };
+
+ // Implement custom flow for a new communication user.
+ if (users.Contains(CommunicationIdentifier.FromRawId("4:+14255550123"))){
+ //...
+ }
+}
+```
+
+Another use case is using Raw IDs in mobile applications to identify participants. You can inject the participant view data for a remote participant if you want to handle this information locally in the UI library without sending it to Azure Communication Services.
+This view data can contain a UIImage that represents the avatar to render and a display name that can optionally be displayed instead.
+Both the participant CommunicationIdentifier and the Raw ID retrieved from it can be used to uniquely identify a remote participant.
+
+```swift
+callComposite.events.onRemoteParticipantJoined = { identifiers in
+ for identifier in identifiers {
+ // map identifier to displayName
+ let participantViewData = ParticipantViewData(displayName: "<DISPLAY_NAME>")
+ callComposite.set(remoteParticipantViewData: participantViewData,
+ for: identifier) { result in
+ switch result {
+ case .success:
+ print("Set participant view data succeeded")
+ case .failure(let error):
+ print("Set participant view data failed with \(error)")
+ }
+ }
+ }
+}
+```
+
+## Using Raw ID as key in REST API paths
+When designing a REST API, you can have endpoints that accept either a `CommunicationIdentifier` or a Raw ID string. If the identifier consists of several parts (like ObjectID, cloud name, etc. if you're using `MicrosoftTeamsUserIdentifier`), you might need to pass it in the request body. However, using a Raw ID allows you to address the entity in the URL path instead of passing the whole composite object as JSON in the body, giving you a more intuitive REST CRUD API.
+
+```csharp
+public async Task UseIdentifierInPath()
+{
+ // Assumes 'client' is an HttpClient instance.
+ CommunicationIdentifier user = CommunicationIdentifier.FromRawId(GetFromDb("john@doe.com").CommunicationId);
+
+ using HttpResponseMessage response = await client.GetAsync($"https://contoso.com/v1.0/users/{user.RawId}/profile");
+ response.EnsureSuccessStatusCode();
+}
+```
+
+## Extracting identifier details from Raw IDs
+A consistent underlying Raw ID allows:
+- Deserializing to the right identifier type (based on which you can adjust the flow of your app).
+- Extracting details of identifiers (such as an oid for `MicrosoftTeamsUserIdentifier`).
+
+The following example shows both benefits:
+- The type allows you to decide where to take the avatar from.
+- The decomposed details allow you to query the API in the right way.
+
+```csharp
+public void ExtractIdentifierDetails()
+{
+ ContosoUser user = GetFromDb("john@doe.com");
+
+ string rawId = user.CommunicationId;
+ CommunicationIdentifier communicationIdentifier = CommunicationIdentifier.FromRawId(rawId);
+ switch (communicationIdentifier)
+ {
+ case MicrosoftTeamsUserIdentifier teamsUser:
+ {
+ // Use the Teams user's object ID (oid) to call Microsoft Graph.
+ string getPhotoUri = $"https://graph.microsoft.com/v1.0/users/{teamsUser.UserId}/photo/$value";
+ // ...
+ break;
+ }
+ case CommunicationUserIdentifier communicationUser:
+ {
+ string getPhotoUri = GetAvatarFromDB(communicationUser.Id);
+ // ...
+ break;
+ }
+ }
+}
+```
+
+You can access properties or methods for a specific *CommunicationIdentifier* type that is stored in a Contoso database in the form of a string (Raw ID).
+
+## Using Raw IDs as key in UI frameworks
+It's possible to use the Raw ID of an identifier as a key in UI components to track a certain user and avoid unnecessary re-rendering and API calls. In the example, we're changing the order in which users are rendered in a list. In the real world, we might want to show new users first or re-order users based on some condition (for example, hand raised). For the sake of simplicity, the following example just reverses the order in which the users are rendered.
+
+```javascript
+import { getIdentifierRawId } from '@azure/communication-common';
+
+function CommunicationParticipants() {
+ const [users, setUsers] = React.useState([{ id: getIdentifierRawId(userA), name: "John" }, { id: getIdentifierRawId(userB), name: "Jane" }]);
+ return (
+ <div>
+ {users.map((user) => (
+ // React uses keys as hints while rendering elements. Each list item should have a key that's unique among its siblings.
+ // Raw ID can be used as such a key.
+ <ListUser item={user} key={user.id} />
+ ))}
+ <button onClick={() => setUsers(users.slice().reverse())}>Reverse</button>
+ </div>
+ );
+}
+
+const ListUser = React.memo(function ListUser({ user }) {
+ console.log(`Render ${user.name}`);
+ return <div>{user.name}</div>;
+});
+```
+++
+## Next steps
+In this article, you learned how to:
+
+> [!div class="checklist"]
+> * Correctly identify use cases for choosing a Raw ID
+> * Convert between Raw ID and different types of a *CommunicationIdentifier*
+
+To learn more, you may want to explore the following quickstart guides:
+
+* [Understand identifier types](./identifiers.md)
+* [Reference documentation](reference.md)
communication-services Direct Routing Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/direct-routing-provisioning.md
If you created one voice route with a pattern `^\+1(425|206)(\d{7})$` and added
Give your voice route a name, specify the number pattern using regular expressions, and select SBC for that pattern. Here are some examples of basic regular expressions: - `^\+\d+$` - matches a telephone number with one or more digits that start with a plus-- `^+1(\d[10])$` - matches a telephone number with a ten digits after a `+1`
+- `^\+1(\d{10})$` - matches a telephone number with ten digits after a `+1`
- `^\+1(425|206)(\d{7})$` - matches a telephone number that starts with `+1425` or with `+1206` followed by seven digits - `^\+0?1234$` - matches both `+01234` and `+1234` telephone numbers.
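If you want to sanity-check a pattern before saving a voice route, here's a small sketch using .NET's `Regex` (illustrative only, not part of the provisioning flow):

```csharp
using System;
using System.Text.RegularExpressions;

class DialPlanCheck
{
    static void Main()
    {
        // Patterns from the examples above.
        Console.WriteLine(Regex.IsMatch("+14255550123", @"^\+1(\d{10})$"));         // True
        Console.WriteLine(Regex.IsMatch("+14251234567", @"^\+1(425|206)(\d{7})$")); // True
        Console.WriteLine(Regex.IsMatch("+01234", @"^\+0?1234$"));                  // True
        Console.WriteLine(Regex.IsMatch("+3312345678", @"^\+1(\d{10})$"));          // False
    }
}
```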
communication-services Video Effects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/video-effects.md
++
+ Title: Azure Communication Services Calling WebJS video effects
+
+description: In this document, you'll learn how to create video effects on an Azure Communication Services call.
+++ Last updated : 1/9/2023+++++
+# Adding visual effects to a video call
++
+>[!IMPORTANT]
+> Calling video effects are available starting with public preview version [1.9.1-beta.1](https://www.npmjs.com/package/@azure/communication-calling/v/1.9.1-beta.1) of the Calling SDK. Make sure to use this or a newer SDK when using video effects.
+
+> [!NOTE]
+> This API is provided as a preview ('beta') for developers and may change based on feedback that we receive.
+
+> [!NOTE]
+> This library cannot be used standalone and can only work when used with the Azure Communication Calling client library for WebJS (https://www.npmjs.com/package/@azure/communication-calling).
+
+The Azure Communication Calling SDK allows you to create video effects that other users on a call can see. For example, a user making an Azure Communication Services call with the WebJS SDK can now turn on background blur. With background blur enabled, a user can feel more comfortable on a video call because the output video shows only the user, with all other content blurred.
+
+## Prerequisites
+### Install the Azure Communication Services Calling SDK
+- An Azure account with an active subscription is required. See [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) on how to create an Azure account.
+- [Node.js](https://nodejs.org/) active Long Term Support (LTS) versions are recommended.
+- An active Communication Services resource. [Create a Communication Services resource](../../quickstarts/create-communication-resource.md).
+- A User Access Token to instantiate a call client. Learn how to [create and manage user access tokens](../../quickstarts/access-tokens.md). You can also use the Azure CLI and run the following command with your connection string to create a user and an access token. (You can get the connection string from your resource in the Azure portal.)
+- Azure Communication Calling client library is properly set up and configured (https://www.npmjs.com/package/@azure/communication-calling).
+
+An example using the Azure CLI:
+```azurecli-interactive
+az communication identity token issue --scope voip --connection-string "yourConnectionString"
+```
+For details on using the CLI, see [Use Azure CLI to Create and Manage Access Tokens](../../quickstarts/access-tokens.md?pivots=platform-azcli).
+
+## Install the Calling effects SDK
+Use the `npm install` command to install the Azure Communication Calling Effects SDK for JavaScript.
+
+`npm install @azure/communication-calling-effects --save`
+
+## Supported video effects
+Currently, the following video effects are supported:
+- Background blur
+- Replace the background with a custom image
+
+## Browser support
+
+Currently, video effects are supported only on the Chrome desktop browser and Safari desktop on macOS.
+
+## Class model
+
+| Name | Description |
+|||
+| BackgroundBlurEffect | The background blur effect class. |
+| BackgroundReplacementEffect | The background replacement with image effect class. |
+
+To use video effects with the Azure Communication Calling client library, once you've created a LocalVideoStream, you need to get the VideoEffects feature API from the LocalVideoStream.
+
+### Code examples
+```js
+import * as AzureCommunicationCallingSDK from '@azure/communication-calling';
+import { BackgroundBlur, BackgroundReplacement } from '@azure/communication-calling-effects';
+
+/** Assuming you have initialized the Azure Communication Calling client library and have created a LocalVideoStream
+(reference <link to main SDK npm>)
+*/
+
+// Get the video effects feature api on the LocalVideoStream
+const videoEffectsFeatureApi = localVideoStream.features(AzureCommunicationCallingSDK.Features.VideoEffects);
+
+// Subscribe to useful events
+videoEffectsFeatureApi.on('effectsStarted', () => {
+ // Effects started
+});
+
+videoEffectsFeatureApi.on('effectsStopped', () => {
+ // Effects stopped
+});
+
+videoEffectsFeatureApi.on('effectsError', (error) => {
+ // Effects error
+});
+
+// Create the effect instance
+const backgroundBlurEffect = new BackgroundBlur();
+
+// Recommended: Check if backgroundBlur is supported
+const backgroundBlurSupported = await backgroundBlurEffect.isSupported();
+
+if (backgroundBlurSupported) {
+ // Use the video effects feature api we created to start/stop effects
+
+ await videoEffectsFeatureApi.startEffects(backgroundBlurEffect);
+
+}
+
+
+/**
+To create a background replacement with a custom image, you need to provide the URL of the image you want as the background to this effect. The 'startEffects' method will fail if the URL is not an image or is unreachable/unreadable.
+
+Supported image formats are: png, jpg, jpeg, tiff, bmp.
+*/
+
+const backgroundImage = 'https://linkToImageFile';
+
+// Create the effect instance
+const backgroundReplacementEffect = new BackgroundReplacement({
+
+ backgroundImageUrl: backgroundImage
+
+});
+
+// Recommended: Check if background replacement is supported:
+const backgroundReplacementSupported = await backgroundReplacementEffect.isSupported();
+
+if (backgroundReplacementSupported) {
+ // Use the video effects feature api as before to start/stop effects
+ await videoEffectsFeatureApi.startEffects(backgroundReplacementEffect);
+}
+
+// You can change the image used for this effect by passing it to the configure method:
+
+const newBackgroundImage = 'https://linkToNewImageFile';
+await backgroundReplacementEffect.configure({
+
+ backgroundImageUrl: newBackgroundImage
+
+});
+
+// You can switch effects using the same startEffects method on the video effects feature API:
+
+// Switch to background blur
+await videoEffectsFeatureApi.startEffects(backgroundBlurEffect);
++
+// Switch to background replacement
+await videoEffectsFeatureApi.startEffects(backgroundReplacementEffect);
+
+// To stop effects:
+await videoEffectsFeatureApi.stopEffects();
+
+```
communication-services Data Loss Prevention https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/chat-sdk/data-loss-prevention.md
+
+ Title: Integrate Azure Communication Services with Microsoft Teams Data Loss Prevention
+
+description: Learn how to integrate with Microsoft Teams Data Loss Prevention policies by subscribing to Real-time Chat Notifications
++ Last updated : 01/10/2023+++++
+# How to integrate with Microsoft Teams Data Loss Prevention policies by subscribing to real-time chat notifications
+
+A Microsoft Teams administrator can configure data loss prevention (DLP) policies to prevent leakage of sensitive information from Teams users in Teams meetings. Developers can integrate chat in Teams meetings with Azure Communication Services for Communication Services users via the Communication Services UI library or custom integration. This article describes how to incorporate data loss prevention without a UI library.
+
+You need to subscribe to real-time notifications and listen for message updates. If a chat message from a Teams user contains sensitive content, the message content is updated to blank. The Azure Communication Services user interface has to be updated to indicate that the message can't be displayed, for example, "Message was blocked as it contains sensitive information." There could be a delay of a couple of seconds before a policy violation is detected and the message content is updated. You can find an example of such code below.
+
+Data Loss Prevention policies only apply to messages sent by Teams users and aren't meant to protect Azure Communication Services users from sending out sensitive information.
+
+```javascript
+let endpointUrl = '<replace with your resource endpoint>';
+
+// The user access token generated as part of the prerequisites
+let userAccessToken = '<USER_ACCESS_TOKEN>';
+
+let chatClient = new ChatClient(endpointUrl, new AzureCommunicationTokenCredential(userAccessToken));
+
+await chatClient.startRealtimeNotifications();
+chatClient.on("chatMessageEdited", (e) => {
+ if (e.messageBody == "" && e.sender.kind == "microsoftTeamsUser") {
+ // Show UI message blocked
+ }
+});
+```
+
+## Next steps
+- [Learn how to enable Microsoft Teams Data Loss Prevention](/microsoft-365/compliance/dlp-microsoft-teams?view=o365-worldwide)
communication-services Quickstart Botframework Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/chat/quickstart-botframework-integration.md
Sometimes the bot wouldn't be able to understand or answer a question or a custo
## Handling bot to bot communication
- There may be certain use cases where two bots need to be added to the same chat thread to provide different services. In such use cases, you may need to ensure that bots don't start sending automated replies to each other's messages. If not handled properly, the bots' automated interaction between themselves may result in an infinite loop of messages. You can verify the Azure Communication Services user identity of the sender of a message from the activity's `From.Id` field to see if it belongs to another bot and take required action to prevent such a communication flow. If such a scenario results in high call volumes, then Azure Communication Services Chat channel will start throttling the requests, which will result in the bot not being able to send and receive the messages. You can learn more about the [throttle limits](/azure/communication-services/concepts/service-limits#chat).
+ There may be certain use cases where two bots need to be added to the same chat thread to provide different services. In such use cases, you may need to ensure that bots don't start sending automated replies to each other's messages. If not handled properly, the bots' automated interaction between themselves may result in an infinite loop of messages. You can verify the Azure Communication Services user identity of the sender of a message from the activity's `From.Id` field to see if it belongs to another bot and take required action to prevent such a communication flow. If such a scenario results in high call volumes, then Azure Communication Services Chat channel will start throttling the requests, which will result in the bot not being able to send and receive the messages. You can learn more about the [throttle limits](../../concepts/service-limits.md#chat).
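As a rough sketch of that guard (the other bot's identity string and the echo reply are hypothetical), a Bot Framework bot could short-circuit activities whose `From.Id` matches another bot's Azure Communication Services identity:

```csharp
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Bot.Builder;
using Microsoft.Bot.Schema;

public class LoopSafeBot : ActivityHandler
{
    // Hypothetical Azure Communication Services identity of the other bot in the thread.
    private const string OtherBotId = "8:acs:<OTHER_BOT_USER_ID>";

    protected override async Task OnMessageActivityAsync(
        ITurnContext<IMessageActivity> turnContext, CancellationToken cancellationToken)
    {
        // Ignore messages sent by the other bot to avoid an infinite reply loop.
        if (turnContext.Activity.From.Id == OtherBotId)
        {
            return;
        }

        await turnContext.SendActivityAsync(
            MessageFactory.Text($"Echo: {turnContext.Activity.Text}"), cancellationToken);
    }
}
```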
## Troubleshooting
Sometimes the bot wouldn't be able to understand or answer a question or a custo
## Next steps
-Try the [Sample App](https://github.com/Azure/communication-preview/tree/master/samples/AzureBotService-Sample-App), which showcases a 1:1 chat between the end user and chat bot, and uses BotFramework's WebChat UI component.
+Try the [Sample App](https://github.com/Azure/communication-preview/tree/master/samples/AzureBotService-Sample-App), which showcases a 1:1 chat between the end user and chat bot, and uses BotFramework's WebChat UI component.
confidential-computing Quick Create Confidential Vm Azure Cli Amd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/quick-create-confidential-vm-azure-cli-amd.md
Create a VM with the [az vm create](/cli/azure/vm) command.
The following example creates a VM named *myVM* and adds a user account named *azureuser*. The `--generate-ssh-keys` parameter is used to automatically generate an SSH key, and put it in the default key location(*~/.ssh*). To use a specific set of keys instead, use the `--ssh-key-values` option. For `size`, select a confidential VM size. For more information, see [supported confidential VM families](virtual-machine-solutions-amd.md).
-Choose `VMGuestStateOnly` for no OS disk confidential encryption. Or, choose `DiskWithVMGuestState` for OS disk confidential encryption with a platform-managed key. Enabling secure boot is optional, but recommended. For more information, see [secure boot and vTPM](/azure/virtual-machines/trusted-launch). For more information on disk encryption, see [confidential OS disk encryption](confidential-vm-overview.md).
+Choose `VMGuestStateOnly` for no OS disk confidential encryption. Or, choose `DiskWithVMGuestState` for OS disk confidential encryption with a platform-managed key. Enabling secure boot is optional, but recommended. For more information, see [secure boot and vTPM](../virtual-machines/trusted-launch.md). For more information on disk encryption, see [confidential OS disk encryption](confidential-vm-overview.md).
```azurecli-interactive az vm create \
az keyvault set-policy -n keyVaultName -g myResourceGroup --object-id $desIdenti
```azurecli-interactive $diskEncryptionSetID=(az disk-encryption-set show -n diskEncryptionSetName -g myResourceGroup --query [id] -o tsv) ```
-6. Create a VM with the [az vm create](/cli/azure/vm) command. Choose `DiskWithVMGuestState` for OS disk confidential encryption with a customer-managed key. Enabling secure boot is optional, but recommended. For more information, see [secure boot and vTPM](/azure/virtual-machines/trusted-launch). For more information on disk encryption, see [confidential OS disk encryption](confidential-vm-overview.md).
+6. Create a VM with the [az vm create](/cli/azure/vm) command. Choose `DiskWithVMGuestState` for OS disk confidential encryption with a customer-managed key. Enabling secure boot is optional, but recommended. For more information, see [secure boot and vTPM](../virtual-machines/trusted-launch.md). For more information on disk encryption, see [confidential OS disk encryption](confidential-vm-overview.md).
```azurecli-interactive az vm create \
echo -n $JWT | cut -d "." -f 2 | base64 -d 2> | jq .
## Next steps > [!div class="nextstepaction"]
-> [Create a confidential VM on AMD with an ARM template](quick-create-confidential-vm-arm-amd.md)
+> [Create a confidential VM on AMD with an ARM template](quick-create-confidential-vm-arm-amd.md)
connectors Connectors Sftp Ssh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-sftp-ssh.md
Title: Connect to SFTP using SSH from workflows
-description: Connect to your SFTP file server over SSH from workflows in Azure Logic Apps.
+ Title: Connect to an SFTP server from workflows
+description: Connect to your SFTP file server from workflows using Azure Logic Apps.
ms.suite: integration Previously updated : 08/19/2022 Last updated : 01/12/2023 tags: connectors
-# Connect to an SFTP file server using SSH from workflows in Azure Logic Apps
+# Connect to an SFTP file server from workflows in Azure Logic Apps
-To automate tasks that create and manage files on a [Secure File Transfer Protocol (SFTP)](https://www.ssh.com/ssh/sftp/) server using the [Secure Shell (SSH)](https://www.ssh.com/ssh/protocol/) protocol, you can create automated integration workflows by using Azure Logic Apps and the SFTP-SSH connector. SFTP is a network protocol that provides file access, file transfer, and file management over any reliable data stream.
-Here are some example tasks you can automate:
+This how-to guide shows how to access your [SSH File Transfer Protocol (SFTP)](https://www.ssh.com/ssh/sftp/) server from a workflow in Azure Logic Apps. SFTP is a network protocol that provides file access, file transfer, and file management over any reliable data stream and uses the [Secure Shell (SSH)](https://www.ssh.com/ssh/protocol/) protocol.
+
+In Consumption logic app workflows, you can use the **SFTP-SSH** *managed* connector, while in Standard logic app workflows, you can use the **SFTP** built-in connector or the **SFTP-SSH** managed connector. You can use these connector operations to create automated workflows that run when triggered by events in your SFTP server or in other systems and run actions to manage files on your SFTP server. Both the managed and built-in connectors use the SSH protocol.
+
+For example, your workflow can start with an SFTP trigger that monitors and responds to events on your SFTP server. The trigger makes the outputs available to subsequent actions in your workflow. Your workflow can run SFTP actions that get, create, and manage files through your SFTP server account. The following list includes more example tasks:
* Monitor when files are added or changed. * Get, create, copy, rename, update, list, and delete files.
Here are some example tasks you can automate:
* Get file content and metadata. * Extract archives to folders.
-In your workflow, you can use a trigger that monitors events on your SFTP server and makes output available to other actions. You can then use actions to perform various tasks on your SFTP server. You can also include other actions that use the output from SFTP-SSH actions. For example, if you regularly retrieve files from your SFTP server, you can send email alerts about those files and their content using the Office 365 Outlook connector or Outlook.com connector. If you're new to logic apps, review [What is Azure Logic Apps?](../logic-apps/logic-apps-overview.md)
-
-For differences between the SFTP-SSH connector and the SFTP connector, review the [Compare SFTP-SSH versus SFTP](#comparison) section later in this topic.
-
-## Limitations
-
-* The SFTP-SSH connector currently doesn't support these SFTP servers:
-
- * IBM DataPower
- * MessageWay
- * OpenText Secure MFT
- * OpenText GXS
- * Globalscape
- * SFTP for Azure Blob Storage
- * FileMage Gateway
- * VShell Secure File Transfer Server
-
-* The following SFTP-SSH actions support [chunking](../logic-apps/logic-apps-handle-large-messages.md):
-
- | Action | Chunking support | Override chunk size support |
- |--||--|
- | **Copy file** | No | Not applicable |
- | **Create file** | Yes | Yes |
- | **Create folder** | Not applicable | Not applicable |
- | **Delete file** | Not applicable | Not applicable |
- | **Extract archive to folder** | Not applicable | Not applicable |
- | **Get file content** | Yes | Yes |
- | **Get file content using path** | Yes | Yes |
- | **Get file metadata** | Not applicable | Not applicable |
- | **Get file metadata using path** | Not applicable | Not applicable |
- | **List files in folder** | Not applicable | Not applicable |
- | **Rename file** | Not applicable | Not applicable |
- | **Update file** | No | Not applicable |
- ||||
-
- SFTP-SSH actions that support chunking can handle files up to 1 GB, while SFTP-SSH actions that don't support chunking can handle files up to 50 MB. The default chunk size is 15 MB. However, this size can dynamically change, starting from 5 MB and gradually increasing to the 50-MB maximum. Dynamic sizing is based on factors such as network latency, server response time, and so on.
-
- > [!NOTE]
- > For logic apps in an [integration service environment (ISE)](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md),
- > this connector's ISE-labeled version requires chunking to use the [ISE message limits](../logic-apps/logic-apps-limits-and-config.md#message-size-limits) instead.
-
- You can override this adaptive behavior when you [specify a constant chunk size](#change-chunk-size) to use instead. This size can range from 5 MB to 50 MB. For example, suppose you have a 45-MB file and a network that can that support that file size without latency. Adaptive chunking results in several calls, rather that one call. To reduce the number of calls, you can try setting a 50-MB chunk size. In different scenario, if your logic app workflow is timing out, for example, when using 15-MB chunks, you can try reducing the size to 5 MB.
-
- Chunk size is associated with a connection. This attribute means you can use the same connection for both actions that support chunking and actions that don't support chunking. In this case, the chunk size for actions that support chunking ranges from 5 MB to 50 MB.
-
-* SFTP-SSH triggers don't support message chunking. When triggers request file content, they select only files that are 15 MB or smaller. To get files larger than 15 MB, follow this pattern instead:
-
- 1. Use an SFTP-SSH trigger that returns only file properties. These triggers have names that include the description, **(properties only)**.
+The following steps use the Azure portal, but with the appropriate Azure Logic Apps extension, you can also use the following tools to create and edit logic app workflows:
- 1. Follow the trigger with the SFTP-SSH **Get file content** action. This action reads the complete file and implicitly uses message chunking.
+* Consumption logic app workflows: [Visual Studio](../logic-apps/quickstart-create-logic-apps-with-visual-studio.md) or [Visual Studio Code](../logic-apps/quickstart-create-logic-apps-visual-studio-code.md)
-* The SFTP-SSH managed or Azure-hosted connector can create a limited number of connections to the SFTP server, based on the connection capacity in the Azure region where your logic app resource exists. If this limit poses a problem in a Consumption logic app workflow, consider creating a Standard logic app workflow and use the SFTP-SSH built-in connector instead.
+* Standard logic app workflows: [Visual Studio Code](../logic-apps/create-single-tenant-workflows-visual-studio-code.md)
-<a name="comparison"></a>
+## Connector technical reference
-## Compare SFTP-SSH versus SFTP
+The SFTP connector has different versions, based on [logic app type and host environment](../logic-apps/logic-apps-overview.md#resource-environment-differences).
-The following list describes key SFTP-SSH capabilities that differ from the SFTP connector:
+| Logic app type (plan) | Environment | Connector version |
+||-|-|
+| **Consumption** | Multi-tenant Azure Logic Apps | Managed connector, which appears in the designer under the **Standard** label. For more information, review the following documentation: <br><br>- [SFTP-SSH managed connector reference](/connectors/sftpwithssh/) <br>- [Managed connectors in Azure Logic Apps](managed.md) |
+| **Consumption** | Integration service environment (ISE) | Managed connector, which appears in the designer under the **Standard** label, and the ISE version, which appears in the designer with the **ISE** label and has different message limits than the managed connector. For more information, review the following documentation: <br><br>- [SFTP-SSH managed connector reference](/connectors/sftpwithssh/) <br>- [ISE message limits](../logic-apps/logic-apps-limits-and-config.md#message-size-limits) <br>- [Managed connectors in Azure Logic Apps](managed.md) |
+| **Standard** | Single-tenant Azure Logic Apps and App Service Environment v3 (Windows plans only) | Managed connector, which appears in the designer under the **Azure** label and built-in connector, which appears in the designer under the **Built-in** label and is [service provider based](../logic-apps/custom-connector-overview.md#service-provider-interface-implementation). The built-in connector can directly access Azure virtual networks with a connection string. For more information, review the following documentation: <br><br>- [SFTP-SSH managed connector reference](/connectors/sftpwithssh/) <br>- [SFTP built-in connector reference](/azure/logic-apps/connectors/built-in/reference/sftp/) <br><br>- [Managed connectors in Azure Logic Apps](managed.md) <br>- [Built-in connectors in Azure Logic Apps](built-in.md) |
-* Uses the [SSH.NET library](https://github.com/sshnet/SSH.NET), which is an open-source Secure Shell (SSH) library that supports .NET.
+## General limitations
-* Provides the **Create folder** action, which creates a folder at the specified path on the SFTP server.
+* Before you use the SFTP-SSH managed connector, review the known issues and limitations in the [SFTP-SSH managed connector reference](/connectors/sftpwithssh/).
-* Provides the **Rename file** action, which renames a file on the SFTP server.
+* Before you use the SFTP built-in connector, review the known issues and limitations in the [SFTP built-in connector reference](/azure/logic-apps/connectors/built-in/reference/sftp/).
-* Caches the connection to the SFTP server *for up to 1 hour*. This capability improves performance and reduces how often the connector tries connecting to the server. To set the duration for this caching behavior, edit the [**ClientAliveInterval** property](https://man.openbsd.org/sshd_config#ClientAliveInterval) in the SSH configuration on your SFTP server.
+<a name="known-issues"></a>
-## How SFTP-SSH triggers work
+## Known issues
-<a name="polling-behavior"></a>
-### Polling behavior
+## Chunking
-SFTP-SSH triggers poll the SFTP file system and look for any file that changed since the last poll. Some tools let you preserve the timestamp when the files change. In these cases, you have to disable this feature so your trigger can work. Here are some common settings:
-
-| SFTP client | Action |
-|-|--|
-| WinSCP | Go to **Options** > **Preferences** > **Transfer** > **Edit** > **Preserve timestamp** > **Disable** |
-| FileZilla | Go to **Transfer** > **Preserve timestamps of transferred files** > **Disable** |
-|||
-
-When a trigger finds a new file, the trigger checks that the new file is complete, and not partially written. For example, a file might have changes in progress when the trigger checks the file server. To avoid returning a partially written file, the trigger notes the timestamp for the file that has recent changes, but doesn't immediately return that file. The trigger returns the file only when polling the server again. Sometimes, this behavior might cause a delay that is up to twice the trigger's polling interval.
-
-<a name="trigger-recurrence-shift-drift"></a>
-
-## Trigger recurrence shift and drift (daylight saving time)
-
-Recurring connection-based triggers where you need to create a connection first, such as the managed SFTP-SSH trigger, differ from built-in triggers that run natively in Azure Logic Apps, such as the [Recurrence trigger](../connectors/connectors-native-recurrence.md). In recurring connection-based triggers, the recurrence schedule isn't the only driver that controls execution, and the time zone only determines the initial start time. Subsequent runs depend on the recurrence schedule, the last trigger execution, *and* other factors that might cause run times to drift or produce unexpected behavior. For example, unexpected behavior can include failure to maintain the specified schedule when daylight saving time (DST) starts and ends.
-
-To make sure that the recurrence time doesn't shift when DST takes effect, manually adjust the recurrence. That way, your workflow continues to run at the expected time or specified start time. Otherwise, the start time shifts one hour forward when DST starts and one hour backward when DST ends. For more information, see [Recurrence for connection-based triggers](../logic-apps/concepts-schedule-automated-recurring-tasks-workflows.md#recurrence-for-connection-based-triggers).
+For more information about how the SFTP-SSH managed connector can handle large files exceeding default size limits, see [SFTP-SSH managed connector reference - Chunking](/connectors/sftpwithssh/#chunking).
## Prerequisites * An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* Your SFTP server address and account credentials, so your workflow can access your SFTP account. You also need access to an SSH private key and the SSH private key password. To upload large files using chunking, you need both read and write access for the root folder on your SFTP server. Otherwise, you get a "401 Unauthorized" error.
-
- The SFTP-SSH connector supports both private key authentication and password authentication. However, the SFTP-SSH connector supports *only* the following private key formats, key exchange algorithms, encryption algorithms, and fingerprints:
-
- * **Private key formats**: RSA (Rivest Shamir Adleman) and DSA (Digital Signature Algorithm) keys in both OpenSSH and ssh.com formats. If your private key is in PuTTY (.ppk) file format, first [convert the key to the OpenSSH (.pem) file format](#convert-to-openssh).
- * **Key exchange algorithms**: Review [Key Exchange Method - SSH.NET](https://github.com/sshnet/SSH.NET#key-exchange-method).
- * **Encryption algorithms**: Review [Encryption Method - SSH.NET](https://github.com/sshnet/SSH.NET#encryption-method).
- * **Fingerprint**: MD5
-
- After you add an SFTP-SSH trigger or action to your workflow, you have to provide connection information for your SFTP server. When you provide your SSH private key for this connection, ***don't manually enter or edit the key***, which might cause the connection to fail. Instead, make sure that you ***copy the key*** from your SSH private key file, and ***paste*** that key into the connection details. For more information, see the [Connect to SFTP with SSH](#connect) section later in this article.
-
-* Basic knowledge about [how to create logic apps](../logic-apps/quickstart-create-first-logic-app-workflow.md)
-
-* The logic app workflow where you want to access your SFTP account. To start with an SFTP-SSH trigger, [create a blank logic app workflow](../logic-apps/quickstart-create-first-logic-app-workflow.md). To use an SFTP-SSH action, start your workflow with another trigger, for example, the **Recurrence** trigger.
-
-## Considerations
-
-The following section describes considerations to review when you use this connector's triggers and actions.
-
-<a name="different-folders-trigger-processing-file-storage"></a>
+* Connection and authentication information to access your SFTP server, such as the server address, account credentials, access to an SSH private key, and the SSH private key password. For more information, see [SFTP-SSH managed connector reference - Authentication and permissions](/connectors/sftpwithssh/#authentication-and-permissions).
-### Use different SFTP folders for file upload and processing
+ > [!IMPORTANT]
+ >
+ > When you create your connection and enter your SSH private key in the **SSH private key** property, make sure to
+ > [follow the steps for providing the complete and correct value for this property](/connectors/sftpwithssh/#authentication-and-permissions).
+ > Otherwise, a non-valid key causes the connection to fail.
-On your SFTP server, use separate folders for storing uploaded files and for the trigger to monitor those files for processing. Otherwise, the trigger won't fire and will behave unpredictably, for example, by skipping a random number of files that the trigger processes. However, this requirement means that you need a way to move files between those folders.
+* The logic app workflow where you want to access your SFTP account. To start with an SFTP-SSH trigger, you have to start with a blank workflow. To use an SFTP-SSH action, start your workflow with another trigger, such as the **Recurrence** trigger.
-If this trigger problem happens, remove the files from the folder that the trigger monitors, and use a different folder to store the uploaded files.
+<a name="add-sftp-trigger"></a>
-<a name="create-file"></a>
+## Add an SFTP trigger
-### Create file
+### [Consumption](#tab/consumption)
-To create a file on your SFTP server, you can use the SFTP-SSH **Create file** action. When this action creates the file, the Logic Apps service also automatically calls your SFTP server to get the file's metadata. However, if you move the newly created file before the Logic Apps service can make the call to get the metadata, you get a `404` error message, `'A reference was made to a file or folder which does not exist'`. To skip reading the file's metadata after file creation, follow the steps to [add and set the **Get all file metadata** property to **No**](#file-does-not-exist).
+1. In the [Azure portal](https://portal.azure.com), open your blank logic app workflow in the designer.
-> [!IMPORTANT]
-> If you use chunking with SFTP-SSH operations that create files on your SFTP server,
-> these operations create temporary `.partial` and `.lock` files. These files help
-> the operations use chunking. Don't remove or change these files. Otherwise,
-> the file operations fail. When the operations finish, they delete the temporary files.
+1. On the designer, under the search box, select **Standard**. In the search box, enter **sftp**.
-<a name="convert-to-openssh"></a>
+1. From the triggers list, select the [SFTP-SSH trigger](/connectors/sftpwithssh/#triggers) that you want to use.
-## Convert PuTTY-based key to OpenSSH
+1. If prompted, provide the necessary [connection information](/connectors/sftpwithssh/#creating-a-connection). When you're done, select **Create**.
-The PuTTY format and OpenSSH format use different file name extensions. The PuTTY format uses the .ppk, or PuTTY Private Key, file name extension. The OpenSSH format uses the .pem, or Privacy Enhanced Mail, file name extension. If your private key is in PuTTY format, and you have to use OpenSSH format, first convert the key to the OpenSSH format by following these steps:
+1. After the trigger information box appears, provide the necessary details for your selected trigger. For more information, see [SFTP-SSH managed connector triggers reference](/connectors/sftpwithssh/#triggers).
-### Unix-based OS
+1. When you're done, save your workflow. On the designer toolbar, select **Save**.
-1. If you don't have the PuTTY tools installed on your system, install them now, for example:
+### [Standard](#tab/standard)
- `sudo apt-get install -y putty`
+<a name="built-in-connector-trigger"></a>
-1. Run this command, which creates a file that you can use with the SFTP-SSH connector:
+#### Built-in connector trigger
- `puttygen <path-to-private-key-file-in-PuTTY-format> -O private-openssh -o <path-to-private-key-file-in-OpenSSH-format>`
+1. In the [Azure portal](https://portal.azure.com), open your blank logic app workflow in the designer.
- For example:
+1. On the designer, select **Choose an operation**. Under the search box, select **Built-in**.
- `puttygen /tmp/sftp/my-private-key-putty.ppk -O private-openssh -o /tmp/sftp/my-private-key-openssh.pem`
+1. In the search box, enter **sftp**. From the triggers list, select the [SFTP trigger](/azure/logic-apps/connectors/built-in/reference/sftp/#triggers) that you want to use.
-### Windows OS
+1. If prompted, provide the necessary [connection information](/azure/logic-apps/connectors/built-in/reference/sftp/#authentication). When you're done, select **Create**.
-1. If you haven't done so already, [download the latest PuTTY Generator (puttygen.exe) tool](https://www.puttygen.com), and then open the tool.
+1. After the trigger information box appears, provide the necessary details for your selected trigger. For more information, see [SFTP built-in connector triggers reference](/azure/logic-apps/connectors/built-in/reference/sftp/#triggers).
-1. In the PuTTY Key Generator tool (puttygen.exe), under **Actions**, select **Load**.
+1. When you're done, save your workflow. On the designer toolbar, select **Save**.
- ![Screenshot showing the PuTTY Key Generator tool and the "Actions" section with "Load" selected.](./media/connectors-sftp-ssh/puttygen-load.png)
+<a name="managed-connector-trigger"></a>
-1. Browse to your private key file in PuTTY format, and select **Open**.
+#### Managed connector trigger
-1. From the **Conversions** menu, select **Export OpenSSH key**.
+1. In the [Azure portal](https://portal.azure.com), open your blank logic app workflow in the designer.
- ![Screenshot showing the PuTTY Generator tool with the "Conversions" menu open and "Export OpenSSH key" selected.](./media/connectors-sftp-ssh/export-openssh-key.png)
+1. On the designer, select **Choose an operation**. Under the search box, select **Azure**.
-1. Save the private key file with the **.pem** file name extension.
+1. In the search box, enter **sftp**. From the triggers list, select the [SFTP-SSH trigger](/connectors/sftpwithssh/#triggers) that you want to use.
-## Find the MD5 fingerprint
+1. If prompted, provide the necessary [connection information](/connectors/sftpwithssh/#creating-a-connection). When you're done, select **Create**.
-The SFTP-SSH connector rejects a connection if the SFTP server's fingerprint doesn't match the expected fingerprint. To get the MD5 fingerprint, which is a sequence of 16 pairs of hex digits delimited by colons, try the following options.
+1. After the trigger information box appears, provide the necessary details for your selected trigger. For more information, see [SFTP-SSH managed connector triggers reference](/connectors/sftpwithssh/#triggers).
-### You have the key
+1. When you're done, save your workflow. On the designer toolbar, select **Save**.
-The MD5 fingerprint is a 47-character string delimited by colons. To get the MD5 fingerprint when you have the key, you can use tools such as `ssh-keygen`, for example:
-
-```bash
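-# List the fingerprint (-l) of the given key file (-f), using MD5 as the hash algorithm (-E md5)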
-ssh-keygen -l -f id_rsa.pub -E md5
-```
-
-### You don't have the key
-
-To get an MD5 fingerprint when you don't have a key, you can use the latest [Server and Protocol Information Dialog tool by WinSCP](https://winscp.net/eng/docs/ui_fsinfo), or you can use the PuTTY Configuration tool instead:
-
-1. In the PuTTY Configuration tool (putty.exe), in the **Category** window, open **Connection** > **SSH** > **Host keys**.
-
-1. Under **Host key algorithm preference**, in the **Algorithm selection policy** list, check that **RSA** appears at the top.
-
-1. If **RSA** doesn't appear at the top, select **RSA**, and then select **Up** until **RSA** moves to the top.
-
- ![Screenshot showing the PuTTY Configuration tool, "Connection" category expanded to show "Host keys" selected. On right pane, "RSA" and "Up" button appear selected.](media/connectors-sftp-ssh/putty-select-rsa-key.png)
-
-1. Connect to your SFTP server with PuTTY. After the connection is created, when the PuTTY security alert appears, select **More info**.
-
- ![Screenshot showing the PuTTY terminal and security alert with "More info" selected.](media/connectors-sftp-ssh/putty-security-alert-more-info.png)
-
- > [!TIP]
- >
- > If the security alert doesn't appear, try clearing the **SshHostKeys** entry. Open the Windows registry editor,
- > and browse to the following entry:
- >
- > **Computer\HKEY_CURRENT_USER\Software\SimonTatham\PuTTY\SshHostKeys**
-
-1. After the **PuTTY: information about the server's host key** box appears, find the **MD5 fingerprint** property, and copy the *47-character string value*.
-
- ![Screenshot showing the more information box with the "MD5 fingerprint" property and the string with the last 47 characters selected for copying.](medi5-fingerprint-key.png)
-
-<a name="connect"></a>
-
-## Connect to SFTP with SSH
--
-1. Sign in to the [Azure portal](https://portal.azure.com), and open your logic app in Logic App Designer, if not open already.
-
-1. For blank logic apps, in the search box, enter `sftp ssh` as your filter. Under the triggers list, select the trigger you want.
-
- -or-
-
- For existing logic apps, under the last step where you want to add an action, select **New step**. In the search box, enter `sftp ssh` as your filter. Under the actions list, select the action you want.
-
- To add an action between steps, move your pointer over the arrow between steps. Select the plus sign (**+**) that appears, and then select **Add an action**.
-
-1. Provide the necessary details for your connection.
-
- > [!IMPORTANT]
- >
- > When you enter your SSH private key in the **SSH private key** property, follow these additional steps, which help
- > make sure you provide the complete and correct value for this property. An invalid key causes the connection to fail.
-
- Although you can use any text editor, here are sample steps that show how to correctly copy and paste your key by using Notepad.exe as an example.
-
- 1. Open your SSH private key file in a text editor. These steps use Notepad as the example.
-
- 1. On the Notepad **Edit** menu, select **Select All**.
+
- 1. Select **Edit** > **Copy**.
+When you save your workflow, this step automatically publishes your updates to your deployed logic app, which is live in Azure. With only a trigger, your workflow just checks the SFTP server based on your specified schedule. You have to [add an action](#add-sftp-action) that responds to the trigger and does something with the trigger outputs.
- 1. In the SFTP-SSH trigger or action, *paste the complete* copied key in the **SSH private key** property, which supports multiple lines. ***Don't manually enter or edit the key***.
+For example, the trigger named **When a file is added or modified** starts a workflow when a file is added or changed on an SFTP server. As a subsequent action, you can add a condition that checks whether the file content meets your specified criteria. If the content meets the condition, use the action named **Get file content** to get the file content, and then use another action to put that file content into a different folder on the SFTP server.
-1. After you finish entering the connection details, select **Create**.
+<a name="add-sftp-action"></a>
-1. Now provide the necessary details for your selected trigger or action and continue building your logic app's workflow.
+## Add an SFTP action
-<a name="change-chunk-size"></a>
+Before you can use an SFTP action, your workflow must already start with a trigger, which can be any kind that you choose. For example, you can use the generic **Recurrence** built-in trigger to start your workflow on a specific schedule.
-## Override chunk size
+### [Consumption](#tab/consumption)
-To override the default adaptive behavior that chunking uses, you can specify a constant chunk size from 5 MB to 50 MB.
+1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer.
-1. In the action's upper-right corner, select the ellipses button (**...**), and then select **Settings**.
+1. Under the trigger or action where you want to add the action, select **New step**.
- ![Open SFTP-SSH settings](./media/connectors-sftp-ssh/sftp-ssh-connector-setttings.png)
+ Or, to add the action between steps, move your pointer over the connecting arrow. Select the plus sign (**+**) that appears, and then select **Add an action**.
-1. Under **Content Transfer**, in the **Chunk size** property, enter an integer value from `5` to `50`, for example:
+1. Under the **Choose an operation** search box, select **Standard**. In the search box, enter **sftp**.
- ![Specify chunk size to use instead](./media/connectors-sftp-ssh/specify-chunk-size-override-default.png)
+1. From the actions list, select the [SFTP-SSH action](/connectors/sftpwithssh/) that you want to use.
-1. After you finish, select **Done**.
+1. If prompted, provide the necessary [connection information](/connectors/sftpwithssh/#creating-a-connection). When you're done, select **Create**.
-## Examples
+1. After the action information box appears, provide the necessary details for your selected action. For more information, see [SFTP-SSH managed connector actions reference](/connectors/sftpwithssh/#actions).
-<a name="file-added-modified"></a>
+1. When you're done, save your workflow. On the designer toolbar, select **Save**.
-### SFTP - SSH trigger: When a file is added or modified
+### [Standard](#tab/standard)
-This trigger starts a workflow when a file is added or changed on an SFTP server. For example, as follow-up actions, the workflow can use a condition to check whether the file content meets specified criteria. If the content meets the condition, the **Get file content** SFTP-SSH action can get the content, and then another SFTP-SSH action can put that file in a different folder on the SFTP server.
+<a name="built-in-connector-action"></a>
-**Enterprise example**: You can use this trigger to monitor an SFTP folder for new files that represent customer orders. You can then use an SFTP-SSH action such as **Get file content** to get the order's contents for further processing and store that order in an orders database.
+#### Built-in connector action
-<a name="get-content"></a>
+1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer.
-### SFTP - SSH action: Get file content using path
+1. Under the trigger or action where you want to add the action, select the plus sign (**+**), and then select **Add an action**.
-This action gets the content from a file on an SFTP server by specifying the file path. For example, you can add the trigger from the previous example and a condition that the file's content must meet. If the condition is true, the action that gets the content can run.
+ Or, to add an action between steps, select the plus sign (**+**) on the connecting arrow, and then select **Add an action**.
-<a name="troubleshooting-errors"></a>
+1. On the **Add an action** pane, under the search box, select **Built-in**. In the search box, enter **sftp**.
-## Troubleshoot problems
+1. From the actions list, select the [SFTP action](/azure/logic-apps/connectors/built-in/reference/sftp/#actions) that you want to use.
-This section describes possible solutions to common errors or problems.
+1. If prompted, provide the necessary [connection information](/connectors/sftpwithssh/#creating-a-connection). When you're done, select **Create**.
-<a name="connection-attempt-failed"></a>
+1. After the action information box appears, provide the necessary details for your selected action. For more information, see [SFTP built-in connector actions reference](/azure/logic-apps/connectors/built-in/reference/sftp/#actions).
-### 504 error: "A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond" or "Request to the SFTP server has taken more than '00:00:30' seconds"
+1. When you're done, save your workflow. On the designer toolbar, select **Save**.
-This error can happen when your logic app can't successfully establish a connection with the SFTP server. There might be different reasons for this problem, so try these troubleshooting options:
+<a name="managed-connector-action"></a>
-* The connection timeout is 20 seconds. Check that your SFTP server performs well and that intermediate devices, such as firewalls, aren't adding overhead.
+#### Managed connector action
-* If you have a firewall set up, make sure that you add the **Managed connector IP** addresses for your region to the approved list. To find the IP addresses for your logic app's region, see [Managed connector outbound IPs - Azure Logic Apps](/connectors/common/outbound-ip-addresses).
+1. In the [Azure portal](https://portal.azure.com), open your workflow in the designer.
-* If this error happens intermittently, change the **Retry policy** setting on the SFTP-SSH action to a retry count higher than the default four retries.
+1. Under the trigger or action where you want to add the action, select **New step**.
-* Check whether your SFTP server puts a limit on the number of connections from each IP address. Any such limit hinders communication between the connector and the SFTP server. Make sure to remove this limit.
+ Or, to add an action between steps, move your pointer over the connecting arrow. Select the plus sign (**+**) that appears, and then select **Add an action**.
-* To reduce connection establishment cost, in the SSH configuration for your SFTP server, increase the [**ClientAliveInterval**](https://man.openbsd.org/sshd_config#ClientAliveInterval) property to around one hour.
+1. Under the **Choose an operation** search box, select **Azure**. In the search box, enter **sftp**.
-* Review the SFTP server log to check whether the request from the logic app reached the SFTP server. To get more information about the connectivity problem, you can also run a network trace on your firewall and your SFTP server.
+1. From the actions list, select the [SFTP-SSH action](/connectors/sftpwithssh/) that you want to use.
-<a name="file-does-not-exist"></a>
+1. If prompted, provide the necessary [connection information](/connectors/sftpwithssh/#creating-a-connection). When you're done, select **Create**.
-### 404 error: "A reference was made to a file or folder which does not exist"
+1. After the action information box appears, provide the necessary details for your selected action. For more information, see [SFTP-SSH managed connector actions reference](/connectors/sftpwithssh/#actions).
-This error can happen when your workflow creates a file on your SFTP server with the SFTP-SSH **Create file** action, but immediately moves that file before the Logic Apps service can get the file's metadata. When your workflow runs the **Create file** action, the Logic Apps service automatically calls your SFTP server to get the file's metadata. However, if your logic app moves the file, the Logic Apps service can no longer find the file so you get the `404` error message.
+1. When you're done, save your workflow. On the designer toolbar, select **Save**.
-If you can't avoid or delay moving the file, you can instead skip reading the file's metadata after file creation by following these steps:
+
-1. In the **Create file** action, open the **Add new parameter** list, select the **Get all file metadata** property, and set the value to **No**.
+For example, the action named **Get file content using path** gets the content from a file on an SFTP server by specifying the file path. You can use the trigger from the previous example and a condition that the file content must meet. If the condition is true, a subsequent action can get the content.
-1. If you need this file metadata later, you can use the **Get file metadata** action.
+
-## Connector reference
+## Troubleshooting
-For more technical details about this connector, such as triggers, actions, and limits as described by the connector's Swagger file, see the [connector's reference page](/connectors/sftpwithssh/).
+For more information, see the following documentation:
-> [!NOTE]
-> For logic apps in an [integration service environment (ISE)](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md),
-> this connector's ISE-labeled version requires chunking to use the [ISE message limits](../logic-apps/logic-apps-limits-and-config.md#message-size-limits) instead.
+- [SFTP-SSH managed connector reference - Troubleshooting](/connectors/sftpwithssh/#troubleshooting)
+- [SFTP built-in connector reference - Troubleshooting](/azure/logic-apps/connectors/built-in/reference/sftp#troubleshooting)
## Next steps
-* Learn about other [Logic Apps connectors](../connectors/apis-list.md)
+* [Managed connectors for Azure Logic Apps](/connectors/connector-reference/connector-reference-logicapps-connectors)
+* [Built-in connectors for Azure Logic Apps](../connectors/built-in.md)
cosmos-db How To Python Manage Databases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-python-manage-databases.md
The preceding code snippet displays output similar to the following example cons
## Does database exist?
-The PyMongo driver for Python creates a database if it doesn't exist when you access it. However, we recommend that instead you use the [MongoDB extension commands](/azure/cosmos-db/mongodb/custom-commands) to manage data stored in Azure Cosmos DB's API for MongoDB. To create a new database if it doesn't exist, use the [create database extension](/azure/cosmos-db/mongodb/custom-commands#create-database) as shown in the following code snippet.
+The PyMongo driver for Python creates a database if it doesn't exist when you access it. However, we recommend that instead you use the [MongoDB extension commands](./custom-commands.md) to manage data stored in Azure Cosmos DB's API for MongoDB. To create a new database if it doesn't exist, use the [create database extension](./custom-commands.md#create-database) as shown in the following code snippet.
To see if the database already exists before using it, get the list of current databases with the [list_database_names](https://pymongo.readthedocs.io/en/stable/api/pymongo/mongo_client.html#pymongo.mongo_client.MongoClient.list_database_names) method.
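For example, here's a minimal sketch of that pattern, where the connection string, database name, and throughput value are placeholders rather than values from this article:

```python
from pymongo import MongoClient

# Placeholder connection string for your API for MongoDB account
CONNECTION_STRING = "<your-connection-string>"
client = MongoClient(CONNECTION_STRING)

DB_NAME = "adventureworks"  # illustrative database name

# Create the database only if it doesn't already exist
if DB_NAME not in client.list_database_names():
    # MongoDB extension command that creates the database with
    # shared throughput; 400 RU/s is an illustrative value
    client[DB_NAME].command(
        {"customAction": "CreateDatabase", "offerThroughput": 400}
    )
```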
The preceding code snippet displays output similar to the following example cons
## Get database object instance
-If a database doesn't exist, the PyMongo driver for Python creates it when you access it. However, we recommend that instead you use the [MongoDB extension commands](/azure/cosmos-db/mongodb/custom-commands) to manage data stored in Azure Cosmos DB's API for MongoDB. The pattern is shown above in the section [Does database exist?](#does-database-exist).
+If a database doesn't exist, the PyMongo driver for Python creates it when you access it. However, we recommend that instead you use the [MongoDB extension commands](./custom-commands.md) to manage data stored in Azure Cosmos DB's API for MongoDB. The pattern is shown above in the section [Does database exist?](#does-database-exist).
When working with PyMongo, you access databases using attribute style access on MongoClient instances. Once you have a database instance, you can use database level operations as shown below.
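Here's a minimal sketch of that access pattern; the connection string and names are placeholders:

```python
from pymongo import MongoClient

client = MongoClient("<your-connection-string>")  # placeholder

# Attribute-style access returns a Database instance
db = client.adventureworks  # equivalent to client["adventureworks"]

# A database-level operation: list the collections in this database
print(db.list_collection_names())
```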
The preceding code snippet displays output similar to the following example cons
## See also -- [Get started with Azure Cosmos DB for MongoDB and Python](how-to-python-get-started.md)
+- [Get started with Azure Cosmos DB for MongoDB and Python](how-to-python-get-started.md)
cosmos-db How To Python Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-python-get-started.md
In your *app.py*:
:::code language="python" source="~/cosmos-db-nosql-python-samples/003-how-to/app_aad_default.py" id="credential"::: > [!IMPORTANT]
-> For details on how to add the correct role to enable `DefaultAzureCredential` to work, see [Configure role-based access control with Azure Active Directory for your Azure Cosmos DB account](/azure/cosmos-db/how-to-setup-rbac). In particular, see the section on creating roles and assigning them to a principal ID.
+> For details on how to add the correct role to enable `DefaultAzureCredential` to work, see [Configure role-based access control with Azure Active Directory for your Azure Cosmos DB account](../how-to-setup-rbac.md). In particular, see the section on creating roles and assigning them to a principal ID.
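For reference, a minimal sketch of this pattern; the endpoint value is a placeholder, and the role assignment described above must already exist:

```python
from azure.identity import DefaultAzureCredential
from azure.cosmos import CosmosClient

ENDPOINT = "https://<cosmos-account-name>.documents.azure.com:443/"  # placeholder

# DefaultAzureCredential tries environment variables, managed identity,
# and developer tool credentials in order until one succeeds
credential = DefaultAzureCredential()
client = CosmosClient(url=ENDPOINT, credential=credential)
```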
#### Create CosmosClient with a custom credential implementation
The following guides show you how to use each of these classes to build your app
|--|| | [Create a database](how-to-python-create-database.md) | Create databases | | [Create container](how-to-python-create-container.md) | Create containers |
-| [Item examples](/azure/cosmos-db/nosql/samples-python#item-examples) | Point read a specific item |
+| [Item examples](./samples-python.md#item-examples) | Point read a specific item |
## See also
The following guides show you how to use each of these classes to build your app
Now that you've connected to an API for NoSQL account, use the next guide to create and manage databases. > [!div class="nextstepaction"]
-> [Create a database in Azure Cosmos DB for NoSQL using Python](how-to-python-create-database.md)
+> [Create a database in Azure Cosmos DB for NoSQL using Python](how-to-python-create-database.md)
cosmos-db Kafka Connector Sink https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/kafka-connector-sink.md
You can learn more about change feed in Azure Cosmos DB with the following docs:
* [Reading from change feed](read-change-feed.md) You can learn more about bulk operations in V4 Java SDK with the following docs:
-* [Perform bulk operations on Azure Cosmos DB data](/azure/cosmos-db/nosql/bulk-executor-java)
+* [Perform bulk operations on Azure Cosmos DB data](./bulk-executor-java.md)
cosmos-db Throughput Control Spark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/throughput-control-spark.md
The [Spark Connector](quickstart-spark.md) allows you to communicate with Azure Cosmos DB using [Apache Spark](https://spark.apache.org/). This article describes how the throughput control feature works. Check out our [Spark samples in GitHub](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/cosmos/azure-cosmos-spark_3_2-12/Samples) to get started using throughput control. > [!TIP]
-> This article documents the use of global throughput control groups in the Azure Cosmos DB Spark Connector, but the functionality is also available in the [Java SDK](/azure/cosmos-db/nosql/sdk-java-v4). In the SDK, you can also use Local Throughput Control groups to limit the RU consumption in the context of a single client connection instance. For example, you can apply this to different operations within a single microservice, or maybe to a single data loading program. Take a look at a code snippet [here](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/cosmos/azure-cosmos/src/samples/java/com/azure/cosmos/ThroughputControlCodeSnippet.java) for how to build a CosmosAsyncClient with both local and global control groups.
+> This article documents the use of global throughput control groups in the Azure Cosmos DB Spark Connector, but the functionality is also available in the [Java SDK](./sdk-java-v4.md). In the SDK, you can also use Local Throughput Control groups to limit the RU consumption in the context of a single client connection instance. For example, you can apply this to different operations within a single microservice, or maybe to a single data loading program. Take a look at a code snippet [here](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/cosmos/azure-cosmos/src/samples/java/com/azure/cosmos/ThroughputControlCodeSnippet.java) for how to build a CosmosAsyncClient with both local and global control groups.
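To make the global throughput control group concrete, here's a minimal PySpark sketch of a write that uses the connector's `spark.cosmos.throughputControl.*` options; the account, key, database, container, group name, and threshold values are placeholders, and `df` is assumed to be an existing Spark DataFrame:

```python
# Assumes the Azure Cosmos DB Spark 3 OLTP connector is installed
write_config = {
    "spark.cosmos.accountEndpoint": "https://<account>.documents.azure.com:443/",
    "spark.cosmos.accountKey": "<account-key>",
    "spark.cosmos.database": "SampleDB",
    "spark.cosmos.container": "SampleContainer",
    # Join a global throughput control group capped at 95% of the
    # container's provisioned throughput (illustrative values)
    "spark.cosmos.throughputControl.enabled": "true",
    "spark.cosmos.throughputControl.name": "SourceContainerThroughputControl",
    "spark.cosmos.throughputControl.targetThroughputThreshold": "0.95",
    "spark.cosmos.throughputControl.globalControl.database": "database-v4",
    "spark.cosmos.throughputControl.globalControl.container": "ThroughputControl",
}

df.write.format("cosmos.oltp").options(**write_config).mode("APPEND").save()
```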
## Why is throughput control important?
In each client record, the `loadFactor` attribute represents the load on the giv
* [Spark samples in GitHub](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/cosmos/azure-cosmos-spark_3_2-12/Samples). * [Manage data with Azure Cosmos DB Spark 3 OLTP Connector for API for NoSQL](quickstart-spark.md).
-* Learn more about [Apache Spark](https://spark.apache.org/).
+* Learn more about [Apache Spark](https://spark.apache.org/).
data-factory Choose The Right Integration Runtime Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/choose-the-right-integration-runtime-configuration.md
Previously updated : 01/10/2023 Last updated : 01/12/2023 # Choose the right integration runtime configuration for your scenario - The integration runtime is a very important part of the infrastructure for the data integration solution provided by Azure Data Factory. When you design the solution, consider from the beginning how the integration runtime fits your existing network structure and data sources, along with performance, security, and cost. ## Comparison of different types of integration runtimes
data-factory Concepts Data Flow Column Pattern https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-data-flow-column-pattern.md
Previously updated : 11/23/2021 Last updated : 01/11/2023 # Using column patterns in mapping data flow
Use the [expression builder](concepts-data-flow-expression-builder.md) to enter
:::image type="content" source="media/data-flow/edit-column-pattern.png" alt-text="Screenshot shows the Derived column's settings tab.":::
-The above column pattern matches every column of type double and creates one derived column per match. By stating `$$` as the column name field, each matched column is updated with the same name. The value of the each column is the existing value rounded to two decimal points.
+The above column pattern matches every column of type double and creates one derived column per match. By stating `$$` as the column name field, each matched column is updated with the same name. The value of each column is the existing value rounded to two decimal points.
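As a concrete illustration of such a pattern (the expression values here are representative): use `type == 'double'` as the matching condition, `$$` as the column name, and `round($$, 2)` as the value expression.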
To verify your matching condition is correct, you can validate the output schema of defined columns in the **Inspect** tab or get a snapshot of the data in the **Data preview** tab.
data-factory Concepts Data Flow Flowlet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-data-flow-flowlet.md
Previously updated : 11/11/2021 Last updated : 01/11/2023 # Flowlets in mapping data flow
data-factory Concepts Data Flow Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-data-flow-overview.md
Previously updated : 08/26/2021 Last updated : 01/11/2023 # Mapping data flows in Azure Data Factory
data-factory Concepts Data Flow Performance Sinks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-data-flow-performance-sinks.md
Previously updated : 10/06/2021 Last updated : 01/11/2023 # Optimizing sinks
data-factory Concepts Data Flow Performance Transformations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-data-flow-performance-transformations.md
Previously updated : 09/29/2021 Last updated : 01/11/2023 # Optimizing transformations
Unlike merge join in tools like SSIS, the join transformation isn't a mandatory
## Window transformation performance
-The [Window transformation in mapping data flow](data-flow-window.md) partitions your data by value in columns that you select as part of the ```over()``` clause in the transformation settings. There are a number of very popular aggregate and analytical functions that are exposed in the Windows transformation. However, if your use case is to generate a window over your entire dataset for the purpose of ranking ```rank()``` or row number ```rowNumber()```, it is recommended that you instead use the [Rank transformation](data-flow-rank.md) and the [Surrogate Key transformation](data-flow-surrogate-key.md). Those transformation will perform better again full dataset operations using those functions.
+The [Window transformation in mapping data flow](data-flow-window.md) partitions your data by value in columns that you select as part of the ```over()``` clause in the transformation settings. There are a number of very popular aggregate and analytical functions that are exposed in the Window transformation. However, if your use case is to generate a window over your entire dataset for the purpose of ranking ```rank()``` or row number ```rowNumber()```, it is recommended that you instead use the [Rank transformation](data-flow-rank.md) and the [Surrogate Key transformation](data-flow-surrogate-key.md). Those transformations will perform better against full dataset operations using those functions.
## Repartitioning skewed data
data-factory Connect Data Factory To Azure Purview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connect-data-factory-to-azure-purview.md
Previously updated : 10/25/2021 Last updated : 01/11/2023 # Connect Data Factory to Microsoft Purview [!INCLUDE[appliesto-adf-xxx-md](includes/appliesto-adf-xxx-md.md)]
-[Microsoft Purview](../purview/overview.md) is a unified data governance service that helps you manage and govern your on-premises, multi-cloud, and software-as-a-service (SaaS) data. You can connect your data factory to Microsoft Purview. That connection allows you to use Microsoft Purview for capturing lineage data, and to discover and explore Microsoft Purview assets.
+[Microsoft Purview](../purview/overview.md) is a unified data governance service that helps you manage and govern your on-premises, multicloud, and software-as-a-service (SaaS) data. You can connect your data factory to Microsoft Purview. That connection allows you to use Microsoft Purview for capturing lineage data, and to discover and explore Microsoft Purview assets.
## Connect Data Factory to Microsoft Purview
data-factory Connector Amazon S3 Compatible Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-amazon-s3-compatible-storage.md
Previously updated : 12/13/2021 Last updated : 01/11/2023 # Copy data from Amazon S3 Compatible Storage by using Azure Data Factory or Synapse Analytics
data-factory Connector Ftp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-ftp.md
Previously updated : 11/29/2021 Last updated : 01/11/2023
data-factory Connector Google Cloud Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-google-cloud-storage.md
Previously updated : 12/13/2021 Last updated : 01/11/2023
data-factory Connector Hdfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-hdfs.md
Previously updated : 12/13/2021 Last updated : 01/11/2023
data-factory Connector Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-mysql.md
Previously updated : 09/09/2021 Last updated : 01/11/2023
data-factory Connector Servicenow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-servicenow.md
Previously updated : 09/09/2021 Last updated : 01/11/2023 # Copy data from ServiceNow using Azure Data Factory or Synapse Analytics
data-factory Connector Troubleshoot Parquet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-parquet.md
Previously updated : 10/13/2021 Last updated : 01/11/2023
data-factory Continuous Integration Delivery Hotfix Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/continuous-integration-delivery-hotfix-environment.md
Previously updated : 09/24/2021 Last updated : 01/11/2023
data-factory Continuous Integration Delivery Linked Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/continuous-integration-delivery-linked-templates.md
Previously updated : 09/24/2021 Last updated : 01/11/2023
data-factory Copy Activity Preserve Metadata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/copy-activity-preserve-metadata.md
Previously updated : 09/09/2021 Last updated : 01/11/2023
data-factory Industry Sap Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/industry-sap-overview.md
Previously updated : 08/11/0222 Last updated : 01/11/2023 # SAP knowledge center overview
data-factory Tutorial Incremental Copy Change Data Capture Feature Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-incremental-copy-change-data-capture-feature-portal.md
Previously updated : 12/15/2022 Last updated : 01/11/2023 # Incrementally load data from Azure SQL Managed Instance to Azure Storage using change data capture (CDC)
If you don't have an Azure subscription, create a [free](https://azure.microsoft
## Create a data factory
-1. Launch **Microsoft Edge** or **Google Chrome** web browser. Currently, Data Factory UI is supported only in Microsoft Edge and Google Chrome web browsers.
-1. On the left menu, select **Create a resource** > **Data + Analytics** > **Data Factory**:
-
- :::image type="content" source="./media/tutorial-incremental-copy-change-data-capture-feature-portal/new-azure-data-factory-menu.png" alt-text="Data Factory selection in the &quot;New&quot; pane":::
-
-2. In the **New data factory** page, enter **ADFTutorialDataFactory** for the **name**.
-
- :::image type="content" source="./media/tutorial-incremental-copy-change-data-capture-feature-portal/new-azure-data-factory.png" alt-text="New data factory page":::
-
- The name of the Azure data factory must be **globally unique**. If you receive the following error, change the name of the data factory (for example, yournameADFTutorialDataFactory) and try creating again. See [Data Factory - Naming Rules](naming-rules.md) article for naming rules for Data Factory artifacts.
-
- *Data factory name "ADFTutorialDataFactory" is not available.*
-3. Select **V2** for the **version**.
-4. Select your Azure **subscription** in which you want to create the data factory.
-5. For the **Resource Group**, do one of the following steps:
-
- 1. Select **Use existing**, and select an existing resource group from the drop-down list.
- 2. Select **Create new**, and enter the name of a resource group.
-
- To learn about resource groups, see [Using resource groups to manage your Azure resources](../azure-resource-manager/management/overview.md).
-5. Select the **location** for the data factory. Only locations that are supported are displayed in the drop-down list. The data stores (Azure Storage, Azure SQL Database, etc.) and computes (HDInsight, etc.) used by data factory can be in other regions.
-6. De-select **Enable GIT**.
-7. Click **Create**.
-8. Once the deployment is complete, click on **Go to resource**
-
- :::image type="content" source="./media/tutorial-incremental-copy-change-data-capture-feature-portal/data-factory-deploy-complete.png" alt-text="Screenshot shows a message that your deployment is complete and an option to go to resource.":::
-9. After the creation is complete, you see the **Data Factory** page as shown in the image.
-
- :::image type="content" source="./media/doc-common-process/data-factory-home-page.png" alt-text="Home page for the Azure Data Factory, with the Open Azure Data Factory Studio tile.":::
-
-10. Select **Open** on the **Open Azure Data Factory Studio** tile to launch the Azure Data Factory user interface (UI) in a separate tab.
-11. In the home page, switch to the **Manage** tab in the left panel as shown in the following image:
-
- :::image type="content" source="media/doc-common-process/get-started-page-manage-button.png" alt-text="Screenshot that shows the Manage button.":::
+Follow the steps in the article [Quickstart: Create a data factory by using the Azure portal](quickstart-create-data-factory.md) to create a data factory if you don't already have one to work with.
## Create linked services You create linked services in a data factory to link your data stores and compute services to the data factory. In this section, you create linked services to your Azure Storage account and Azure SQL MI.
data-factory Tutorial Incremental Copy Change Tracking Feature Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-incremental-copy-change-tracking-feature-powershell.md
Previously updated : 02/18/2021 Last updated : 01/11/2023 # Incrementally load data from Azure SQL Database to Azure Blob Storage using change tracking information using PowerShell
data-factory Tutorial Transform Data Spark Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-transform-data-spark-portal.md
Previously updated : 06/07/2021 Last updated : 01/11/2023 # Transform data in the cloud by using a Spark activity in Azure Data Factory
If you don't have an Azure subscription, create a [free account](https://azure.m
## Create a data factory
-1. Launch **Microsoft Edge** or **Google Chrome** web browser. Currently, Data Factory UI is supported only in Microsoft Edge and Google Chrome web browsers.
-1. Select **New** on the left menu, select **Data + Analytics**, and then select **Data Factory**.
-
- :::image type="content" source="./media/tutorial-transform-data-spark-portal/new-azure-data-factory-menu.png" alt-text="Data Factory selection in the &quot;New&quot; pane":::
-1. In the **New data factory** pane, enter **ADFTutorialDataFactory** under **Name**.
-
- :::image type="content" source="./media/tutorial-transform-data-spark-portal/new-azure-data-factory.png" alt-text="&quot;New data factory&quot; pane":::
-
- The name of the Azure data factory must be *globally unique*. If you see the following error, change the name of the data factory. (For example, use **&lt;yourname&gt;ADFTutorialDataFactory**). For naming rules for Data Factory artifacts, see the [Data Factory - naming rules](naming-rules.md) article.
-
- :::image type="content" source="./media/tutorial-transform-data-spark-portal/name-not-available-error.png" alt-text="Error when a name is not available":::
-1. For **Subscription**, select your Azure subscription in which you want to create the data factory.
-1. For **Resource Group**, take one of the following steps:
-
- - Select **Use existing**, and select an existing resource group from the drop-down list.
- - Select **Create new**, and enter the name of a resource group.
-
- Some of the steps in this quickstart assume that you use the name **ADFTutorialResourceGroup** for the resource group. To learn about resource groups, see [Using resource groups to manage your Azure resources](../azure-resource-manager/management/overview.md).
-1. For **Version**, select **V2**.
-1. For **Location**, select the location for the data factory.
-
- For a list of Azure regions in which Data Factory is currently available, select the regions that interest you on the following page, and then expand **Analytics** to locate **Data Factory**: [Products available by region](https://azure.microsoft.com/global-infrastructure/services/). The data stores (like Azure Storage and Azure SQL Database) and computes (like HDInsight) that Data Factory uses can be in other regions.
-
-1. Select **Create**.
-
-1. After the creation is complete, you see the **Data factory** page. Select the **Author & Monitor** tile to start the Data Factory UI application on a separate tab.
-
- :::image type="content" source="./media/tutorial-transform-data-spark-portal/data-factory-home-page.png" alt-text="Home page for the data factory, with the &quot;Author & Monitor&quot; tile":::
+Follow the steps in the article [Quickstart: Create a data factory by using the Azure portal](quickstart-create-data-factory.md) to create a data factory if you don't already have one to work with.
## Create linked services You author two linked services in this section:
data-lake-analytics Data Lake Analytics Cicd Manage Assemblies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-cicd-manage-assemblies.md
Title: Manage U-SQL assemblies in a CI/CD pipeline - Azure Data Lake description: 'Learn the best practices for managing U-SQL C# assemblies in a CI/CD pipeline with Azure DevOps.'-- Last updated 10/30/2018
data-lake-analytics Data Lake Analytics Cicd Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-cicd-overview.md
Title: How to set up a CI/CD pipeline for Azure Data Lake Analytics description: Learn how to set up continuous integration and continuous deployment for Azure Data Lake Analytics.--- Last updated 09/14/2018
data-lake-analytics Data Lake Analytics Cicd Test https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-cicd-test.md
Title: How to test your Azure Data Lake Analytics code description: 'Learn how to add test cases for U-SQL and extended C# code for Azure Data Lake Analytics.'--- Last updated 08/30/2019
data-lake-analytics Data Lake Analytics Data Lake Tools Local Run https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-data-lake-tools-local-run.md
Title: Run Azure Data Lake U-SQL scripts on your local machine description: Learn how to use Azure Data Lake Tools for Visual Studio to run U-SQL jobs on your local machine.--- Last updated 07/03/2018
data-lake-analytics Data Lake Analytics Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-overview.md
Title: Overview of Azure Data Lake Analytics description: Data Lake Analytics lets you drive your business using insights gained in your cloud data at any scale.---
data-lake-analytics Data Lake Analytics U Sql Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-u-sql-sdk.md
Title: Run U-SQL jobs locally - Azure Data Lake U-SQL SDK description: Learn how to run and test U-SQL jobs locally using the command line and programming interfaces on your local workstation. --- Last updated 03/01/2017
data-lake-analytics Data Lake Analytics Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-whats-new.md
Title: Data Lake Analytics recent changes description: This article provides an ongoing list of recent changes that are made to Data Lake Analytics. - - Last updated 11/16/2022
data-lake-analytics Migrate Azure Data Lake Analytics To Synapse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/migrate-azure-data-lake-analytics-to-synapse.md
Title: Migrate Azure Data Lake Analytics to Azure Synapse Analytics. description: This article describes how to migrate from Azure Data Lake Analytics to Azure Synapse Analytics.--
data-lake-analytics Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/policy-reference.md
Title: Built-in policy definitions for Azure Data Lake Analytics
description: Lists Azure Policy built-in policy definitions for Azure Data Lake Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Last updated 01/05/2023 --
databox-online Azure Stack Edge Gpu Manage Virtual Machine Network Interfaces Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-manage-virtual-machine-network-interfaces-portal.md
Previously updated : 12/07/2022 Last updated : 01/12/2023 # Customer intent: As an IT admin, I need to understand how to manage network interfaces on an Azure Stack Edge Pro device so that I can use it to run applications using Edge compute before sending it to Azure.<!--Does "it" refer to the device or to the virtual NICs?-->
Follow these steps to add a network interface to a virtual machine deployed on y
||-| |Name | A unique name within the edge resource group. The name cannot be changed after the network interface is created. To manage multiple network interfaces easily, use the suggestions provided in the [Naming conventions](/azure/cloud-adoption-framework/ready/azure-best-practices/naming-and-tagging#resource-naming). | |Select an edge resource group |Select the edge resource group to add the network interface to.|
- |Virtual network| The virtual network associated with the virtual switch created on your device when you enabled compute on the network interface. There is only one virtual network associated with your device. |
+ |Virtual network| The virtual network associated with the virtual switch created on your device when you enabled compute on the network interface. |
|Subnet | A subnet within the selected virtual network. This field is automatically populated with the subnet associated with the network interface on which you enabled compute. | |IP address assignment | A static or a dynamic IP for your network interface. The static IP should be an available, free IP from the specified subnet range. Choose dynamic if a DHCP server exists in the environment. |
ddos-protection Ddos Protection Sku Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-protection-sku-comparison.md
Azure DDoS Network Protection, combined with application design best practices,
DDoS IP Protection is a pay-per-protected IP model. DDoS IP Protection contains the same core engineering features as DDoS Network Protection, but will differ in the following value-added
> [!NOTE]
-> DDoS IP Protection is currently only available in the Azure Preview Portal.
+> DDoS IP Protection is currently only available in Azure Preview PowerShell.
DDoS IP Protection is currently available in the following regions.
defender-for-cloud Alerts Suppression Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-suppression-rules.md
Some examples of how to use suppression rules are:
You can apply suppression rules to management groups or to subscriptions. -- To suppress alerts for a management group, use [Azure Policy](/azure/governance/policy/overview).
+- To suppress alerts for a management group, use [Azure Policy](../governance/policy/overview.md).
- To suppress alerts for subscriptions, use the Azure portal or the [REST API](#create-and-manage-suppression-rules-with-the-api). Alert types that were never triggered on a subscription or management group before the rule was created won't be suppressed.
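As a rough illustration of the REST API route for subscriptions, here's a minimal Python sketch that creates a suppression rule at subscription scope. The rule name, alert type, and scope filter are hypothetical placeholders; only the `Microsoft.Security/alertsSuppressionRules` path and API version follow the documented API.

```python
# A minimal sketch, assuming azure-identity and requests are installed.
import requests
from azure.identity import DefaultAzureCredential

SUBSCRIPTION_ID = "<subscription-id>"
RULE_NAME = "suppress-test-vm-alerts"  # hypothetical rule name
token = DefaultAzureCredential().get_token(
    "https://management.azure.com/.default").token

url = (
    f"https://management.azure.com/subscriptions/{SUBSCRIPTION_ID}"
    f"/providers/Microsoft.Security/alertsSuppressionRules/{RULE_NAME}"
    "?api-version=2019-01-01-preview"
)
body = {
    "properties": {
        "alertType": "VM_SuspectedMaliciousActivity",  # hypothetical alert type
        "state": "Enabled",
        "reason": "Other",
        "comment": "Expected behavior on test machines",
        "suppressionAlertsScope": {
            # Suppress only alerts whose entities match this filter (assumed schema).
            "allOf": [{"field": "entities.host.dnsdomain",
                       "in": ["test.contoso.com"]}]
        },
    }
}
resp = requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()
```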
This article described the suppression rules in Microsoft Defender for Cloud tha
Learn more about security alerts: -- [Security alerts generated by Defender for Cloud](alerts-reference.md)
+- [Security alerts generated by Defender for Cloud](alerts-reference.md)
defender-for-cloud Azure Devops Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/azure-devops-extension.md
# Configure the Microsoft Security DevOps Azure DevOps extension
+> [!Note]
+> Effective December 31, 2022, the Microsoft Security Code Analysis (MSCA) extension is retired. MSCA is replaced by the Microsoft Security DevOps Azure DevOps extension. MSCA customers should follow the instructions in this article to install and configure the extension.
+ Microsoft Security DevOps is a command line application that integrates static analysis tools into the development lifecycle. Microsoft Security DevOps installs, configures, and runs the latest versions of static analysis tools (including, but not limited to, SDL/security and compliance tools). Microsoft Security DevOps is data-driven with portable configurations that enable deterministic execution across multiple environments. Microsoft Security DevOps uses the following open-source tools:
defender-for-cloud Concept Easm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-easm.md
Defender EASM applies Microsoft's crawling technology to discover assets that
EASM collects data for publicly exposed assets ("outside-in"). That data can be used by MDC CSPM ("inside-out") to assist with internet-exposure validation and discovery capabilities to provide better visibility to customers. + ## Learn more You can learn more about [Defender EASM](../external-attack-surface-management/index.md), and learn about the [pricing](https://azure.microsoft.com/pricing/details/defender-external-attack-surface-management/) options available.
defender-for-cloud Concept Regulatory Compliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-regulatory-compliance.md
description: Learn about the Microsoft cloud security benchmark and the benefits
Previously updated : 09/21/2022 Last updated : 01/10/2023 # Microsoft cloud security benchmark in Defender for Cloud
defender-for-cloud Defender For Containers Vulnerability Assessment Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-vulnerability-assessment-azure.md
Some images may reuse tags from an image that was already scanned. For example,
Currently, Defender for Containers can scan images in Azure Container Registry (ACR) and AWS Elastic Container Registry (ECR) only. Docker Registry, Microsoft Artifact Registry/Microsoft Container Registry, and Microsoft Azure Red Hat OpenShift (ARO) built-in container image registry aren't supported.
-Images should first be imported to ACR. Learn more about [importing container images to an Azure container registry](/azure/container-registry/container-registry-import-images?tabs=azure-cli).
+Images should first be imported to ACR. Learn more about [importing container images to an Azure container registry](../container-registry/container-registry-import-images.md?tabs=azure-cli).
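For example, a minimal sketch of a server-side image import with the `azure-mgmt-containerregistry` Python SDK follows; the registry, resource group, and image names are placeholders.

```python
# A minimal sketch, assuming azure-identity and azure-mgmt-containerregistry
# are installed; the import runs server side, with no local docker pull/push.
from azure.identity import DefaultAzureCredential
from azure.mgmt.containerregistry import ContainerRegistryManagementClient

client = ContainerRegistryManagementClient(
    DefaultAzureCredential(), "<subscription-id>")

poller = client.registries.begin_import_image(
    "my-rg",        # resource group (hypothetical)
    "myregistry",   # ACR name (hypothetical)
    {
        "source": {
            "registry_uri": "docker.io",
            "source_image": "library/nginx:latest",
        },
        "target_tags": ["nginx:latest"],
    },
)
poller.wait()  # once imported, the image is in scope for scanning
```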
## Next steps
-Learn more about the [advanced protection plans of Microsoft Defender for Cloud](enhanced-security-features-overview.md).
+Learn more about the [advanced protection plans of Microsoft Defender for Cloud](enhanced-security-features-overview.md).
defender-for-cloud Defender For Devops Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-devops-introduction.md
Defender for DevOps helps unify, strengthen and manage multi-pipeline DevOps sec
## Availability > [!Note]
- > During the preview, the maximum number of repositories that can be onboarded to Microsoft Defender for Cloud is 2,000. If you try to connect more than 2,000 repositories, only the first 2,000 repositories, sorted alphabetically, will be onboarded.
+ > During the preview, the maximum number of GitHub repositories that can be onboarded to Microsoft Defender for Cloud is 2,000. If you try to connect more than 2,000 GitHub repositories, only the first 2,000 repositories, sorted alphabetically, will be onboarded.
>
- > If your organization is interested in onboarding more than 2,000 repositories, please complete [this survey](https://aka.ms/dfd-forms/onboarding).
+ > If your organization is interested in onboarding more than 2,000 GitHub repositories, please complete [this survey](https://aka.ms/dfd-forms/onboarding).
| Aspect | Details |
|--|--|
defender-for-cloud Defender For Sql Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-sql-introduction.md
When you enable **Microsoft Defender for Azure SQL**, all supported resources th
A vulnerability assessment service discovers, tracks, and helps you remediate potential database vulnerabilities. Assessment scans provide an overview of your SQL machines' security state, and details of any security findings. Defender for Azure SQL helps you identify and mitigate potential database vulnerabilities and detect anomalous activities that could indicate threats to your databases.
-Learn more about [vulnerability assessment for Azure SQL Database](/azure/defender-for-cloud/sql-azure-vulnerability-assessment-overview).
+Learn more about [vulnerability assessment for Azure SQL Database](./sql-azure-vulnerability-assessment-overview.md).
### Advanced threat protection
In this article, you learned about Microsoft Defender for Azure SQL. Now you can
- [Enable Microsoft Defender for Azure SQL](quickstart-enable-database-protections.md) - [How Microsoft Defender for Azure SQL can protect SQL servers anywhere](https://www.youtube.com/watch?v=V7RdB6RSVpc).-- [Set up email notifications for security alerts](configure-email-notifications.md)
+- [Set up email notifications for security alerts](configure-email-notifications.md)
defender-for-cloud Defender For Sql On Machines Vulnerability Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-sql-on-machines-vulnerability-assessment.md
Last updated 11/09/2021
- [SQL Server on Azure Arc-enabled servers](/sql/sql-server/azure-arc/overview) - [SQL Server running on Windows machines without Azure Arc](../azure-monitor/agents/agent-windows.md)
-The integrated [vulnerability assessment scanner](/azure/defender-for-cloud/sql-azure-vulnerability-assessment-overview) discovers, tracks, and helps you remediate potential database vulnerabilities. Assessment scans findings provide an overview of your SQL machines' security state, and details of any security findings.
+The integrated [vulnerability assessment scanner](./sql-azure-vulnerability-assessment-overview.md) discovers, tracks, and helps you remediate potential database vulnerabilities. Assessment scan findings provide an overview of your SQL machines' security state, and details of any security findings.
> [!NOTE] > The scan is lightweight, safe, only takes a few seconds per database to run, and is entirely read-only. It does not make any changes to your database.
You can specify the region where your SQL Vulnerability Assessment data will be
## Next steps
-Learn more about Defender for Cloud's protections for SQL resources in [Overview of Microsoft Defender for SQL](defender-for-sql-introduction.md).
+Learn more about Defender for Cloud's protections for SQL resources in [Overview of Microsoft Defender for SQL](defender-for-sql-introduction.md).
defender-for-cloud Episode Eighteen https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-eighteen.md
Last updated 11/03/2022
## Recommended resources
-Learn more about [Enable Microsoft Defender for Azure Cosmos DB](/azure/defender-for-cloud/defender-for-databases-enable-cosmos-protections)
+Learn more about [Enable Microsoft Defender for Azure Cosmos DB](./defender-for-databases-enable-cosmos-protections.md)
- Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqa0ZoTml2Qm9kZ2pjRzNMUXFqVUwyNl80YVNtd3xBQ3Jtc0trVm9QM2Z0NlpOeC1KSUE2UEd1cVJ5aHQ0MTN6WjJEYmNlOG9rWC1KZ1ZqaTNmcHdOOHMtWXRLSGhUTVBhQlhhYzlUc2xmTHZtaUpkd1c4LUQzLWt1YmRTbkVQVE5EcTJIM0Foc042SGdQZU5acVRJbw&q=https%3A%2F%2Faka.ms%2FSubscribeMicrosoftSecurity)
Learn more about [Enable Microsoft Defender for Azure Cosmos DB](/azure/defender
## Next steps > [!div class="nextstepaction"]
-> [Defender for DevOps | Defender for Cloud in the field](episode-nineteen.md)
+> [Defender for DevOps | Defender for Cloud in the field](episode-nineteen.md)
defender-for-cloud Episode Nineteen https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-nineteen.md
Last updated 11/08/2022
- [08:22](/shows/mdc-in-the-field/defender-for-devops#time=08m22s) - Demonstration ## Recommended resources
- - [Learn more](/azure/defender-for-cloud/defender-for-devops-introduction) about Defender for DevOps.
+ - [Learn more](./defender-for-devops-introduction.md) about Defender for DevOps.
- Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/playlist?list=PL3ZTgFEc7LysiX4PfHhdJPR7S8mGO14YS) - Join our [Tech Community](https://aka.ms/SecurityTechCommunity) - For more about [Microsoft Security](https://msft.it/6002T9HQY)
Last updated 11/08/2022
## Next steps > [!div class="nextstepaction"]
-> [Cloud security explorer and attack path analysis](episode-twenty.md)
+> [Cloud security explorer and attack path analysis](episode-twenty.md)
defender-for-cloud Episode Twenty One https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-twenty-one.md
Last updated 11/24/2022
## Recommended resources
- - [Learn more](/azure/defender-for-cloud/regulatory-compliance-dashboard) about improving your regulatory compliance.
+ - [Learn more](./regulatory-compliance-dashboard.md) about improving your regulatory compliance.
- Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/playlist?list=PL3ZTgFEc7LysiX4PfHhdJPR7S8mGO14YS) - Join our [Tech Community](https://aka.ms/SecurityTechCommunity) - For more about [Microsoft Security](https://msft.it/6002T9HQY)
Last updated 11/24/2022
## Next steps > [!div class="nextstepaction"]
-> [Defender External Attack Surface Management (Defender EASM)](episode-twenty-two.md)
+> [Defender External Attack Surface Management (Defender EASM)](episode-twenty-two.md)
defender-for-cloud Episode Twenty https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-twenty.md
Last updated 11/24/2022
## Recommended resources
- - [Learn more](/azure/defender-for-cloud/concept-attack-path) about Attack path.
+ - [Learn more](./concept-attack-path.md) about Attack path.
- Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/playlist?list=PL3ZTgFEc7LysiX4PfHhdJPR7S8mGO14YS) - Join our [Tech Community](https://aka.ms/SecurityTechCommunity) - For more about [Microsoft Security](https://msft.it/6002T9HQY)
Last updated 11/24/2022
## Next steps > [!div class="nextstepaction"]
-> [Latest updates in the regulatory compliance dashboard](episode-twenty-one.md)
+> [Latest updates in the regulatory compliance dashboard](episode-twenty-one.md)
defender-for-cloud Overview Page https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/overview-page.md
Title: Microsoft Defender for Cloud's main dashboard or 'overview' page description: Learn about the features of the Defender for Cloud overview page Previously updated : 09/20/2022- Last updated : 01/10/2023+
Microsoft Defender for Cloud's overview page is an interactive dashboard that pr
You can select any element on the page to get more detailed information. ## Features of the overview page ### Metrics
The **top menu bar** offers:
- **Subscriptions** - You can view and filter the list of subscriptions by selecting this button. Defender for Cloud will adjust the display to reflect the security posture of the selected subscriptions. - **What's new** - Opens the [release notes](release-notes.md) so you can keep up to date with new features, bug fixes, and deprecated functionality.-- **High-level numbers** for the connected cloud accounts, to show the context of the information in the main tiles below. As well as the number of assessed resources, active recommendations, and security alerts. Select the assessed resources number to access [Asset inventory](asset-inventory.md). Learn more about connecting your [AWS accounts](quickstart-onboard-aws.md) and your [GCP projects](quickstart-onboard-gcp.md).
+- **High-level numbers** for the connected cloud accounts, showing the context of the information in the main tiles, and the number of assessed resources, active recommendations, and security alerts. Select the assessed resources number to access [Asset inventory](asset-inventory.md). Learn more about connecting your [AWS accounts](quickstart-onboard-aws.md) and your [GCP projects](quickstart-onboard-gcp.md).
### Feature tiles
-In the center of the page are the **feature tiles**, each linking to a high profile feature or dedicated dashboard:
+The center of the page displays the **feature tiles**, each linking to a high profile feature or dedicated dashboard:
-- **Security posture** - Defender for Cloud continually assesses your resources, subscriptions, and organization for security issues. It then aggregates all the findings into a single score so that you can tell, at a glance, your current security situation: the higher the score, the lower the identified risk level. [Learn more](secure-score-security-controls.md).-- **Workload protections** - This is the cloud workload protection platform (CWPP) integrated within Defender for Cloud for advanced, intelligent protection of your workloads running on Azure, on-premises machines, or other cloud providers. For each resource type, there's a corresponding Microsoft Defender plan. The tile shows the coverage of your connected resources (for the currently selected subscriptions) and the recent alerts, color-coded by severity. Learn more about [the enhanced security features](enhanced-security-features-overview.md).
+- **Security posture** - Defender for Cloud continually assesses your resources, subscriptions, and organization for security issues. It then aggregates all the findings into a single score so that you can understand, at a glance, your current security situation: the higher the score, the lower the identified risk level. [Learn more](secure-score-security-controls.md).
+- **Workload protections** - This tile represents the cloud workload protection platform (CWPP) integrated within Defender for Cloud for advanced, intelligent protection of your workloads running on Azure, on-premises machines, or other cloud providers. For each resource type, there's a corresponding Microsoft Defender plan. The tile shows the coverage of your connected resources (for the currently selected subscriptions) and the recent alerts, color-coded by severity. Learn more about [the enhanced security features](enhanced-security-features-overview.md).
- **Regulatory compliance** - Defender for Cloud provides insights into your compliance posture based on continuous assessments of your Azure environment. Defender for Cloud analyzes risk factors in your environment according to security best practices. These assessments are mapped to compliance controls from a supported set of standards. [Learn more](regulatory-compliance-dashboard.md). - **Inventory** - The asset inventory page of Microsoft Defender for Cloud provides a single page for viewing the security posture of the resources you've connected to Microsoft Defender for Cloud. All resources with unresolved security recommendations are shown in the inventory. If you've enabled the integration with Microsoft Defender for Endpoint and enabled Microsoft Defender for Servers, you'll also have access to a software inventory. The tile on the overview page shows you at a glance the total healthy and unhealthy resources (for the currently selected subscriptions). [Learn more](asset-inventory.md).
defender-for-cloud Partner Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/partner-integration.md
Title: Integrate security solutions in Microsoft Defender for Cloud
description: Learn about how Microsoft Defender for Cloud integrates with partners to enhance the overall security of your Azure resources. Previously updated : 07/14/2022 Last updated : 01/10/2023 # Integrate security solutions in Microsoft Defender for Cloud
This document helps you to manage security solutions already connected to Micros
Defender for Cloud makes it easy to enable integrated security solutions in Azure. Benefits include: - **Simplified deployment**: Defender for Cloud offers streamlined provisioning of integrated partner solutions. For solutions like antimalware and vulnerability assessment, Defender for Cloud can provision the agent on your virtual machines. For firewall appliances, Defender for Cloud can take care of much of the network configuration required.-- **Integrated detections**: Security events from partner solutions are automatically collected, aggregated, and displayed as part of Defender for Cloud alerts and incidents. These events also are fused with detections from other sources to provide advanced threat-detection capabilities.
+- **Integrated detections**: Security events from partner solutions are automatically collected, aggregated, and displayed as part of Defender for Cloud alerts and incidents. These events are also fused with detections from other sources to provide advanced threat-detection capabilities.
- **Unified health monitoring and management**: Customers can use integrated health events to monitor all partner solutions at a glance. Basic management is available, with easy access to advanced setup by using the partner solution. Currently, integrated security solutions include vulnerability assessment by [Qualys](https://www.qualys.com/public-cloud/#azure) and [Rapid7](https://www.rapid7.com/products/insightvm/).
Defender for Cloud also offers vulnerability analysis for your:
## How security solutions are integrated Azure security solutions that are deployed from Defender for Cloud are automatically connected. You can also connect other security data sources, including computers running on-premises or in other clouds. ## Manage integrated Azure security solutions and other data sources
The **Connected solutions** section includes security solutions that are current
![Connected solutions.](./media/partner-integration/connected-solutions.png)
-The status of a partner solution can be:
+The status of a security solution can be:
* **Healthy** (green) - no health issues. * **Unhealthy** (red) - there's a health issue that requires immediate attention.
Select **CONNECT** under a solution to integrate with Defender for Cloud and be
### Add data sources
-The **Add data sources** section includes other available data sources that can be connected. For instructions on adding data from any of these sources, click **ADD**.
+The **Add data sources** section includes other available data sources that can be connected. For instructions on adding data from any of these sources, select **ADD**.
![Data sources.](./media/partner-integration/add-data-sources.png)
defender-for-cloud Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/permissions.md
The specific role required to deploy monitoring components depends on the extens
## Roles used to automatically provision agents and extensions
-To allow the Security Admin role to automatically provision agents and extensions used in Defender for Cloud plans, Defender for Cloud uses policy remediation in a similar way to [Azure Policy](/azure/governance/policy/how-to/remediate-resources). To use remediation, Defender for Cloud needs to create service principals, also called managed identities, that assign roles at the subscription level. For example, the service principals for the Defender for Containers plan are:
+To allow the Security Admin role to automatically provision agents and extensions used in Defender for Cloud plans, Defender for Cloud uses policy remediation in a similar way to [Azure Policy](../governance/policy/how-to/remediate-resources.md). To use remediation, Defender for Cloud needs to create service principals, also called managed identities, that assign roles at the subscription level. For example, the service principals for the Defender for Containers plan are:
| Service Principal | Roles |
|:-|:-|
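To verify which roles one of these service principals actually holds at subscription scope, a minimal sketch with the `azure-mgmt-authorization` Python SDK might look like the following; the principal object ID is a placeholder.

```python
# A minimal sketch, assuming azure-identity and azure-mgmt-authorization
# are installed.
from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient

SUBSCRIPTION_ID = "<subscription-id>"
client = AuthorizationManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

scope = f"/subscriptions/{SUBSCRIPTION_ID}"
# List the role assignments held by the service principal at this scope.
for assignment in client.role_assignments.list_for_scope(
    scope, filter="principalId eq '<service-principal-object-id>'"
):
    print(assignment.role_definition_id)
```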
This article explained how Defender for Cloud uses Azure RBAC to assign permissi
- [Set security policies in Defender for Cloud](tutorial-security-policy.md) - [Manage security recommendations in Defender for Cloud](review-security-recommendations.md) - [Manage and respond to security alerts in Defender for Cloud](managing-and-responding-alerts.md)-- [Monitor partner security solutions](./partner-integration.md)
+- [Monitor partner security solutions](./partner-integration.md)
defender-for-cloud Quickstart Onboard Github https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-github.md
By connecting your GitHub repositories to Defender for Cloud, you'll extend Defe
- To use all advanced security capabilities provided by GitHub Connector in Defender for DevOps, you need to have GitHub Enterprise with GitHub Advanced Security (GHAS) enabled. ## Availability
+ > [!Note]
+ > During the preview, the maximum number of GitHub repositories that can be onboarded to Microsoft Defender for Cloud is 2,000. If you try to connect more than 2,000 GitHub repositories, only the first 2,000 repositories, sorted alphabetically, will be onboarded.
+ >
+ > If your organization is interested in onboarding more than 2,000 GitHub repositories, please complete [this survey](https://aka.ms/dfd-forms/onboarding).
| Aspect | Details |
|--|--|
defender-for-cloud Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes-archive.md
Applications that are installed in virtual machines could often have vulnerabili
Azure Security Center's support for threat protection and vulnerability assessment for SQL DBs running on IaaS VMs is now in preview.
-[Vulnerability assessment](/azure/defender-for-cloud/sql-azure-vulnerability-assessment-overview) is an easy to configure service that can discover, track, and help you remediate potential database vulnerabilities. It provides visibility into your security posture as part of secure score and includes the steps to resolve security issues and enhance your database fortifications.
+[Vulnerability assessment](./sql-azure-vulnerability-assessment-overview.md) is an easy to configure service that can discover, track, and help you remediate potential database vulnerabilities. It provides visibility into your security posture as part of secure score and includes the steps to resolve security issues and enhance your database fortifications.
[Advanced threat protection](/azure/azure-sql/database/threat-detection-overview) detects anomalous activities indicating unusual and potentially harmful attempts to access or exploit your SQL server. It continuously monitors your database for suspicious activities and provides action-oriented security alerts on anomalous database access patterns. These alerts provide the suspicious activity details and recommended actions to investigate and mitigate the threat.
Azure Security Center (ASC) has launched new networking recommendations and impr
One of the biggest attack surfaces for workloads running in the public cloud is connections to and from the public Internet. Our customers find it hard to know which Network Security Group (NSG) rules should be in place to make sure that Azure workloads are only available to required source ranges. With this feature, Security Center learns the network traffic and connectivity patterns of Azure workloads and provides NSG rule recommendations for Internet-facing virtual machines. This helps our customers better configure their network access policies and limit their exposure to attacks.
-[Learn more about adaptive network hardening](adaptive-network-hardening.md).
+[Learn more about adaptive network hardening](adaptive-network-hardening.md).
defender-for-cloud Review Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/review-security-recommendations.md
To get to the list of recommendations:
1. Sign in to the [Azure portal](https://portal.azure.com). 1. Either:
- - In the Defender for Cloud overview, select **Security posture** and then select **View recommendations** for the environment that you want to improve.
+ - In the Defender for Cloud overview, select **Security posture** and then select **View recommendations** for the environment you want to improve.
- Go to **Recommendations** in the Defender for Cloud menu.
-You can search for specific recommendations by name. Use the search box and filters above the list of recommendations to find specific recommendations, and look at the [details of the recommendation](security-policy-concept.md#security-recommendation-details) to decide whether to [remediate it](implement-security-recommendations.md), [exempt resources](exempt-resource.md), or [disable the recommendation](tutorial-security-policy.md#disable-security-policies-and-disable-recommendations).
+You can search for specific recommendations by name. Use the search box and filters above the list to find the recommendations you need. Look at the [details of the recommendation](security-policy-concept.md#security-recommendation-details) to decide whether to [remediate it](implement-security-recommendations.md), [exempt resources](exempt-resource.md), or [disable the recommendation](tutorial-security-policy.md#disable-security-policies-and-disable-recommendations).
You can learn more by watching this video from the Defender for Cloud in the Field video series: - [Security posture management improvements](episode-four.md)
When you [remediate](implement-security-recommendations.md) all of the recommend
[Security teams can assign a recommendation](governance-rules.md) to a specific person and assign a due date to drive your organization towards increased security. If you have recommendations assigned to you, you're accountable to remediate the resources affected by the recommendations to help your organization be compliant with the security policy.
-Recommendations are listed as **On time** until their due date is passed, when they're changed to **Overdue**. Before the recommendation is overdue, the recommendation doesn't impact the secure score. The security team can also apply a grace period during which overdue recommendations continue to not impact the secure score.
+Recommendations are listed as **On time** until their due date is passed, when they're changed to **Overdue**. Before the recommendation is overdue, the recommendation doesn't affect the secure score. The security team can also apply a grace period during which overdue recommendations continue to not affect the secure score.
To help you plan your work and report on progress, you can set an ETA for the specific resources to show when you plan to have the recommendation resolved by for those resources. You can also change the owner of the recommendation for specific resources so that the person responsible for remediation is assigned to the resource.
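The status logic described above can be summarized in a small illustrative snippet (plain Python, not a Defender for Cloud API):

```python
# An illustrative sketch of the described behavior: a recommendation is
# "On time" until its due date passes, then "Overdue", and an optional
# grace period keeps an overdue recommendation from affecting the score.
from datetime import date, timedelta

def recommendation_status(due: date, today: date, grace_days: int = 0):
    """Return (status, affects_secure_score)."""
    if today <= due:
        return "On time", False          # not yet counted against the score
    in_grace = (today - due) <= timedelta(days=grace_days)
    return "Overdue", not in_grace       # counted once the grace period ends

print(recommendation_status(date(2023, 1, 31), date(2023, 1, 20)))
# -> ('On time', False)
print(recommendation_status(date(2023, 1, 1), date(2023, 1, 10), grace_days=14))
# -> ('Overdue', False): still inside the grace period
```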
To change the owner of resources and set the ETA for remediation of recommendati
1. In the filters for list of recommendations, select **Show my items only**. - The status column indicates the recommendations that are on time, overdue, or completed.
- - The insights column indicates the recommendations that are in a grace period, so they currently don't impact your secure score until they become overdue.
+ - The insights column indicates the recommendations that are in a grace period, so they currently don't affect your secure score until they become overdue.
1. Select an on time or overdue recommendation. 1. For the resources that are assigned to you, set the owner of the resource: 1. Select the resources that are owned by another person, and select **Change owner and set ETA**. 1. Select **Change owner**, enter the email address of the owner of the resource, and select **Save**.
- The owner of the resource gets a weekly email listing the recommendations that they're assigned to.
+
+ The owner of the resource gets a weekly email listing the recommendations that are assigned to them.
+ 1. For resources that you own, set an ETA for remediation: 1. Select resources that you plan to remediate by the same date, and select **Change owner and set ETA**. 1. Select **Change ETA** and set the date by which you plan to remediate the recommendation for those resources.
The due date for the recommendation doesn't change, but the security team can se
## Review recommendation data in Azure Resource Graph Explorer (ARG)
-You can review recommendations in ARG both on the recommendations page or on an individual recommendation.
+You can review recommendations in ARG both on the Recommendations page or on an individual recommendation.
-The toolbar on the recommendation details page includes an **Open query** button to explore the details in [Azure Resource Graph (ARG)](../governance/resource-graph/index.yml), an Azure service that gives you the ability to query - across multiple subscriptions - Defender for Cloud's security posture data.
+The toolbar on the Recommendations page includes an **Open query** button to explore the details in [Azure Resource Graph (ARG)](../governance/resource-graph/index.yml), an Azure service that gives you the ability to query - across multiple subscriptions - Defender for Cloud's security posture data.
ARG is designed to provide efficient resource exploration with the ability to query at scale across your cloud environments with robust filtering, grouping, and sorting capabilities. It's a quick and efficient way to query information across Azure subscriptions programmatically or from within the Azure portal.
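For example, a minimal sketch of running such a cross-subscription query with the `azure-mgmt-resourcegraph` Python SDK follows. The KQL shown is an illustrative query over the `securityresources` table, not the exact query behind the **Open query** button.

```python
# A minimal sketch, assuming azure-identity and azure-mgmt-resourcegraph
# are installed; subscription IDs are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest

client = ResourceGraphClient(DefaultAzureCredential())
request = QueryRequest(
    subscriptions=["<subscription-id-1>", "<subscription-id-2>"],
    query=(
        "securityresources"
        " | where type == 'microsoft.security/assessments'"
        " | summarize count() by tostring(properties.status.code)"
    ),
)
result = client.resources(request)
print(result.total_records, result.data)
```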
The Insights column of the page gives you more details for each recommendation.
Recommendations that aren't included in the calculations of your secure score should still be remediated wherever possible, so that when the period ends they'll contribute towards your score instead of against it.
-## Download recommendations in a CSV report
+## Download recommendations to a CSV report
Recommendations can be downloaded to a CSV report from the Recommendations page.
To download a CSV report of your recommendations:
:::image type="content" source="media/review-security-recommendations/download-csv.png" alt-text="Screenshot showing you where to select the Download C S V report from.":::
-You'll know the report is being prepared by the pop-up.
+You'll know the report is being prepared when the pop-up appears.
When the report is ready, you'll be notified by a second pop-up. ## Learn more
You can check out the following blogs:
In this document, you were introduced to security recommendations in Defender for Cloud. For related information: -- [Remediate recommendations](implement-security-recommendations.md)--Learn how to configure security policies for your Azure subscriptions and resource groups.
+- [Remediate recommendations](implement-security-recommendations.md)-Learn how to configure security policies for your Azure subscriptions and resource groups.
- [Prevent misconfigurations with Enforce/Deny recommendations](prevent-misconfigurations.md).-- [Automate responses to Defender for Cloud triggers](workflow-automation.md)--Automate responses to recommendations
+- [Automate responses to Defender for Cloud triggers](workflow-automation.md)-Automate responses to recommendations
- [Exempt a resource from a recommendation](exempt-resource.md) - [Security recommendations - a reference guide](recommendations-reference.md)
defender-for-cloud Sql Azure Vulnerability Assessment Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/sql-azure-vulnerability-assessment-enable.md
To enable vulnerability assessment with a storage account, use the classic confi
:::image type="content" source="media/defender-for-sql-azure-vulnerability-assessment/sql-vulnerability-scan-settings.png" alt-text="Screenshot of configuring the SQL vulnerability assessment scans.":::
- 1. Configure a storage account where your scan results for all databases on the server or managed instance will be stored. For information about storage accounts, see [About Azure storage accounts](/azure/storage/common/storage-account-create).
+ 1. Configure a storage account where your scan results for all databases on the server or managed instance will be stored. For information about storage accounts, see [About Azure storage accounts](../storage/common/storage-account-create.md).
1. To configure vulnerability assessments to automatically run weekly scans to detect security misconfigurations, set **Periodic recurring scans** to **On**. The results are sent to the email addresses you provide in **Send scan reports to**. You can also send email notification to admins and subscription owners by enabling **Also send email notification to admins and subscription owners**.
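As a rough sketch of the same configuration done programmatically, the following Python snippet calls the SQL vulnerability assessment REST API. The server, storage container, and email values are placeholders, and it assumes the server can already authenticate to the storage account (for example, via a managed identity).

```python
# A minimal sketch, assuming azure-identity and requests are installed.
import requests
from azure.identity import DefaultAzureCredential

SUB, RG, SERVER = "<subscription-id>", "<resource-group>", "<sql-server>"
token = DefaultAzureCredential().get_token(
    "https://management.azure.com/.default").token

url = (
    f"https://management.azure.com/subscriptions/{SUB}/resourceGroups/{RG}"
    f"/providers/Microsoft.Sql/servers/{SERVER}/vulnerabilityAssessments/default"
    "?api-version=2021-11-01"
)
body = {
    "properties": {
        # Container where scan results for all databases are stored.
        "storageContainerPath":
            "https://<storage-account>.blob.core.windows.net/vulnerability-assessment/",
        # Equivalent of setting Periodic recurring scans to On in the portal.
        "recurringScans": {
            "isEnabled": True,
            "emailSubscriptionAdmins": True,
            "emails": ["secteam@contoso.com"],
        },
    }
}
resp = requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()
```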
Learn more about:
- [Microsoft Defender for Azure SQL](defender-for-sql-introduction.md) - [Data discovery and classification](/azure/azure-sql/database/data-discovery-and-classification-overview)-- [Storing scan results in a storage account behind firewalls and VNets](/azure/azure-sql/database/sql-database-vulnerability-assessment-storage)
+- [Storing scan results in a storage account behind firewalls and VNets](/azure/azure-sql/database/sql-database-vulnerability-assessment-storage)
defender-for-cloud Sql Azure Vulnerability Assessment Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/sql-azure-vulnerability-assessment-manage.md
Typical scenarios may include:
- Disable findings from benchmarks that aren't of interest for a defined scope > [!IMPORTANT]
-> - To disable specific findings, you need permissions to edit a policy in Azure Policy. Learn more in [Azure RBAC permissions in Azure Policy](/azure/governance/policy/overview#azure-rbac-permissions-in-azure-policy).
+> - To disable specific findings, you need permissions to edit a policy in Azure Policy. Learn more in [Azure RBAC permissions in Azure Policy](../governance/policy/overview.md#azure-rbac-permissions-in-azure-policy).
> - Disabled findings will still be included in the weekly SQL vulnerability assessment email report. > - Disabled rules are shown in the "Not applicable" section of the scan results.
To handle Boolean types as true/false, set the baseline result with binary input
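For instance, a hedged sketch of baselining a Boolean-type rule through the rule-baselines REST API; `VA1143` is used here only as an example rule ID, and the API version is an assumption.

```python
# A minimal sketch, assuming azure-identity and requests are installed;
# server, database, and rule values are placeholders.
import requests
from azure.identity import DefaultAzureCredential

SUB, RG = "<subscription-id>", "<resource-group>"
SERVER, DB = "<sql-server>", "<database>"
token = DefaultAzureCredential().get_token(
    "https://management.azure.com/.default").token

url = (
    f"https://management.azure.com/subscriptions/{SUB}/resourceGroups/{RG}"
    f"/providers/Microsoft.Sql/servers/{SERVER}/databases/{DB}"
    "/vulnerabilityAssessments/default/rules/VA1143/baselines/default"
    "?api-version=2021-11-01"
)
# Boolean findings are baselined with binary input, as the string "True"/"False".
body = {"properties": {"baselineResults": [{"result": ["True"]}]}}
resp = requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()
```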
- Learn more about [Microsoft Defender for Azure SQL](defender-for-sql-introduction.md). - Learn more about [data discovery and classification](/azure/azure-sql/database/data-discovery-and-classification-overview).-- Learn more about [storing vulnerability assessment scan results in a storage account accessible behind firewalls and VNets](/azure/azure-sql/database/sql-database-vulnerability-assessment-storage).
+- Learn more about [storing vulnerability assessment scan results in a storage account accessible behind firewalls and VNets](/azure/azure-sql/database/sql-database-vulnerability-assessment-storage).
defender-for-cloud Supported Machines Endpoint Solutions Clouds Servers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/supported-machines-endpoint-solutions-clouds-servers.md
The **tabs** below show the features of Microsoft Defender for Cloud that are av
### [**Windows machines**](#tab/features-windows)
-| **Feature** | **Azure Virtual Machines and [Virtual Machine Scale Sets with Flexible orchestration](/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes#scale-sets-with-flexible-orchestration)** | **Azure Arc-enabled machines** | **Defender for Servers required** |
+| **Feature** | **Azure Virtual Machines and [Virtual Machine Scale Sets with Flexible orchestration](../virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md#scale-sets-with-flexible-orchestration)** | **Azure Arc-enabled machines** | **Defender for Servers required** |
| | :--: | :-: | :-: |
| [Microsoft Defender for Endpoint integration](integration-defender-for-endpoint.md) | ✔</br>(on supported versions) | ✔ | Yes |
| [Virtual machine behavioral analytics (and security alerts)](alerts-reference.md) | ✔ | ✔ | Yes |
The **tabs** below show the features of Microsoft Defender for Cloud that are av
### [**Linux machines**](#tab/features-linux)
-| **Feature** | **Azure Virtual Machines and [Virtual Machine Scale Sets with Flexible orchestration](/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes#scale-sets-with-flexible-orchestration)** | **Azure Arc-enabled machines** | **Defender for Servers required** |
+| **Feature** | **Azure Virtual Machines and [Virtual Machine Scale Sets with Flexible orchestration](../virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md#scale-sets-with-flexible-orchestration)** | **Azure Arc-enabled machines** | **Defender for Servers required** |
| | :--: | :-: | :-: |
| [Microsoft Defender for Endpoint integration](integration-defender-for-endpoint.md) | ✔ | ✔ | Yes |
| [Virtual machine behavioral analytics (and security alerts)](./azure-defender.md) | ✔</br>(on supported versions) | ✔ | Yes |
For information about when recommendations are generated for each of these solut
- Learn how [Defender for Cloud collects data using the Log Analytics Agent](monitoring-components.md#log-analytics-agent). - Learn how [Defender for Cloud manages and safeguards data](data-security.md).-- Review the [platforms that support Defender for Cloud](security-center-os-coverage.md).
+- Review the [platforms that support Defender for Cloud](security-center-os-coverage.md).
defender-for-iot Concept Data Processing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/concept-data-processing.md
Title: Data processing and residency description: Microsoft Defender for IoT data processing, and residency can occur in regions that are different than the IoT Hub's region. Previously updated : 12/19/2021 Last updated : 01/12/2023
defender-for-iot Concept Standalone Micro Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/concept-standalone-micro-agent-overview.md
Title: Standalone micro agent overview description: The Microsoft Defender for IoT security agents allow you to build security directly into your new IoT devices and Azure IoT projects. Previously updated : 12/13/2021 Last updated : 01/12/2023
defender-for-iot How To Provision Micro Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/how-to-provision-micro-agent.md
This article explains how to provision the standalone Microsoft Defender for IoT micro agent using [Azure IoT Hub Device Provisioning Service](../../iot-dps/about-iot-dps.md) with [X.509 certificate attestation](../../iot-dps/concepts-x509-attestation.md).
-To learn how to configure the Microsoft Defender for IoT micro agent for Edge devices see [Create and provision IoT Edge devices at scale](/azure/iot-edge/how-to-provision-devices-at-scale-linux-tpm)
+To learn how to configure the Microsoft Defender for IoT micro agent for Edge devices, see [Create and provision IoT Edge devices at scale](../../iot-edge/how-to-provision-devices-at-scale-linux-tpm.md).
## Prerequisites
To learn how to configure the Microsoft Defender for IoT micro agent for Edge de
1. [Configure the micro agent to use the created module](tutorial-standalone-agent-binary-installation.md#authenticate-using-a-module-identity-connection-string) (note that the device does not have to exist yet).
-1. Navigate back to DPS and [provision the device through DPS](/azure/iot-dps/quick-create-simulated-device-x509).
+1. Navigate back to DPS and [provision the device through DPS](../../iot-dps/quick-create-simulated-device-x509.md) (see the sketch after these steps).
1. Navigate to the configured device in the destination IoT Hub.
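A minimal sketch of the DPS X.509 registration step with the `azure-iot-device` Python SDK follows; the ID scope, registration ID, and certificate paths are placeholders.

```python
# A minimal sketch, assuming the azure-iot-device package is installed and a
# device leaf certificate has already been issued for this registration ID.
from azure.iot.device import ProvisioningDeviceClient, X509

x509 = X509(
    cert_file="./device-cert.pem",  # placeholder certificate path
    key_file="./device-key.pem",    # placeholder private key path
)
client = ProvisioningDeviceClient.create_from_x509_certificate(
    provisioning_host="global.azure-devices-provisioning.net",
    registration_id="<registration-id>",
    id_scope="<id-scope>",
    x509=x509,
)
result = client.register()
# On success, status is "assigned" and the device lands in the destination hub.
print(result.status, result.registration_state.assigned_hub)
```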
To learn how to configure the Microsoft Defender for IoT micro agent for Edge de
[Configure Microsoft Defender for IoT agent-based solution](tutorial-configure-agent-based-solution.md)
-[Configure pluggable Authentication Modules (PAM) to audit sign-in events (Preview)](configure-pam-to-audit-sign-in-events.md)
+[Configure pluggable Authentication Modules (PAM) to audit sign-in events (Preview)](configure-pam-to-audit-sign-in-events.md)
defender-for-iot Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/overview.md
Title: What is Microsoft Defender for IoT for device builders? description: Learn about how Microsoft Defender for IoT helps device builders to embed security into new IoT/OT devices. Previously updated : 12/19/2021 Last updated : 01/12/2023 #Customer intent: As a device builder, I want to understand how Defender for IoT can help secure my new IoT/OT initiatives.
defender-for-iot Dell Poweredge R340 Xl Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/dell-poweredge-r340-xl-legacy.md
This article describes the Dell PowerEdge R340 XL appliance, supported for OT sensors and on-premises management consoles.
-Legacy appliances are certified but aren't currently offered as preconfigured appliances.
-
+> [!NOTE]
+> Legacy appliances are certified but aren't currently offered as preconfigured appliances.
|Appliance characteristic | Description| |||
defender-for-iot Hpe Edgeline El300 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/hpe-edgeline-el300.md
This article describes the HPE Edgeline EL300 appliance for OT sensors or on-premises management consoles.
-Legacy appliances are certified but aren't currently offered as preconfigured appliances.
+> [!NOTE]
+> Legacy appliances are certified but aren't currently offered as preconfigured appliances.
| Appliance characteristic |Details |
defender-for-iot Hpe Proliant Dl20 Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/hpe-proliant-dl20-legacy.md
This article describes the **HPE ProLiant DL20 Gen10** appliance for OT sensors in an enterprise deployment.
-Legacy appliances are certified but aren't currently offered as pre-configured appliances.
+> [!NOTE]
+> Legacy appliances are certified but aren't currently offered as preconfigured appliances.
| Appliance characteristic |Details | |||
defender-for-iot Hpe Proliant Dl20 Smb Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/hpe-proliant-dl20-smb-legacy.md
This article describes the **HPE ProLiant DL20 Gen10** appliance for OT sensors for monitoring production lines.
-Legacy appliances are certified but are not currently offered as pre-configured appliances.
+> [!NOTE]
+> Legacy appliances are certified but aren't currently offered as preconfigured appliances.
| Appliance characteristic |Details | |||
defender-for-iot Neousys Nuvo 5006Lp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/neousys-nuvo-5006lp.md
This article describes the Neousys Nuvo-5006LP appliance for OT sensors.
-Legacy appliances are certified but aren't currently offered as pre-configured appliances.
+> [!NOTE]
+> Legacy appliances are certified but aren't currently offered as preconfigured appliances.
| Appliance characteristic |Details | |||
defender-for-iot Concept Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/concept-enterprise.md
The number of IoT devices continues to grow exponentially across enterprise netw
While the number of IoT devices continues to grow, they often lack the security safeguards that are common on managed endpoints like laptops and mobile phones. To bad actors, these unmanaged devices can be used as a point of entry for lateral movement or evasion, and too often, the use of such tactics leads to the exfiltration of sensitive information.
-[Microsoft Defender for IoT](/azure/defender-for-iot/organizations/) seamlessly integrates with [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/) to provide both IoT device discovery and security value for IoT devices, including purpose-built alerts, recommendations, and vulnerability data.
+[Microsoft Defender for IoT](./index.yml) seamlessly integrates with [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/) to provide both IoT device discovery and security value for IoT devices, including purpose-built alerts, recommendations, and vulnerability data.
> [!IMPORTANT] > The Enterprise IoT Network sensor is currently in PREVIEW. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
Defender for IoT provides IoT security functionality across both the Microsoft 3
|Method |Description and requirements | Configure in ... | ||||
-|**[An Enterprise IoT plan](#security-value-in-microsoft-365-defender) only** | Add an Enterprise IoT plan in Microsoft 365 Defender to view IoT-specific alerts, recommendations, and vulnerability data in Microsoft 365 Defender. <br><br>The extra security value is provided for IoT devices detected by Defender for Endpoint. <br><br>**Requires**: <br> - A Microsoft Defender for Endpoint P2 license<br> - Microsoft 365 Defender access as a [Global administrator](/azure/active-directory/roles/permissions-reference#global-administrator)<br>- Azure access as a [Security admin](/azure/role-based-access-control/built-in-roles#security-admin), [Contributor](/azure/role-based-access-control/built-in-roles#contributor), or [Owner](/azure/role-based-access-control/built-in-roles#owner) | Add your Enterprise IoT plan in the **Settings** \> **Device discovery** \> **Enterprise IoT** page in Microsoft 365 Defender. |
-|**[An Enterprise IoT plan](#security-value-in-microsoft-365-defender) plus an [Enterprise IoT sensor](#device-visibility-with-enterprise-iot-sensors-public-preview)** | Add an Enterprise IoT plan in Microsoft 365 Defender to add IoT-specific alerts, recommendations, and vulnerability data Microsoft 365 Defender, for IoT devices detected by Defender for Endpoint. <br><br>Register an Enterprise IoT sensor in Defender for IoT for more device visibility in both Microsoft 365 Defender and the Azure portal. An Enterprise IoT sensor also adds alerts and recommendations triggered by the sensor in the Azure portal.<br><br>**Requires**: <br>- A Microsoft Defender for Endpoint P2 license<br> - Microsoft 365 Defender access as a [Global administrator](/azure/active-directory/roles/permissions-reference#global-administrator)<br>- Azure access as a [Security admin](/azure/role-based-access-control/built-in-roles#security-admin), [Contributor](/azure/role-based-access-control/built-in-roles#contributor), or [Owner](/azure/role-based-access-control/built-in-roles#owner)<br>- A physical or VM appliance to use as a sensor |Add your Enterprise IoT plan in the **Settings** \> **Device discovery** \> **Enterprise IoT** page in Microsoft 365 Defender. <br><br>Register an Enterprise IoT sensor in the **Getting started** > **Set up Enterprise IoT Security** page in Defender for IoT in the Azure portal. |
-|**[An Enterprise IoT sensor only](#device-visibility-with-enterprise-iot-sensors-only)** | Register an Enterprise IoT sensor in Defender for IoT for Enterprise IoT device visibility, alerts, and recommendations in the Azure portal only. <br><br>Vulnerability data isn't currently available. <br><br>**Requires**: <br>- Azure access as a [Security admin](/azure/role-based-access-control/built-in-roles#security-admin), [Contributor](/azure/role-based-access-control/built-in-roles#contributor), or [Owner](/azure/role-based-access-control/built-in-roles#owner) <br>- A physical or VM appliance to use as a sensor | Register an Enterprise IoT sensor in the **Getting started** > **Set up Enterprise IoT Security** page in Defender for IoT in the Azure portal. |
+|**[An Enterprise IoT plan](#security-value-in-microsoft-365-defender) only** | Add an Enterprise IoT plan in Microsoft 365 Defender to view IoT-specific alerts, recommendations, and vulnerability data in Microsoft 365 Defender. <br><br>The extra security value is provided for IoT devices detected by Defender for Endpoint. <br><br>**Requires**: <br> - A Microsoft Defender for Endpoint P2 license<br> - Microsoft 365 Defender access as a [Global administrator](../../active-directory/roles/permissions-reference.md#global-administrator)<br>- Azure access as a [Security admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner) | Add your Enterprise IoT plan in the **Settings** \> **Device discovery** \> **Enterprise IoT** page in Microsoft 365 Defender. |
+|**[An Enterprise IoT plan](#security-value-in-microsoft-365-defender) plus an [Enterprise IoT sensor](#device-visibility-with-enterprise-iot-sensors-public-preview)** | Add an Enterprise IoT plan in Microsoft 365 Defender to add IoT-specific alerts, recommendations, and vulnerability data to Microsoft 365 Defender for IoT devices detected by Defender for Endpoint. <br><br>Register an Enterprise IoT sensor in Defender for IoT for more device visibility in both Microsoft 365 Defender and the Azure portal. An Enterprise IoT sensor also adds alerts and recommendations triggered by the sensor in the Azure portal.<br><br>**Requires**: <br>- A Microsoft Defender for Endpoint P2 license<br> - Microsoft 365 Defender access as a [Global administrator](../../active-directory/roles/permissions-reference.md#global-administrator)<br>- Azure access as a [Security admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner)<br>- A physical or VM appliance to use as a sensor |Add your Enterprise IoT plan in the **Settings** \> **Device discovery** \> **Enterprise IoT** page in Microsoft 365 Defender. <br><br>Register an Enterprise IoT sensor in the **Getting started** > **Set up Enterprise IoT Security** page in Defender for IoT in the Azure portal. |
+|**[An Enterprise IoT sensor only](#device-visibility-with-enterprise-iot-sensors-only)** | Register an Enterprise IoT sensor in Defender for IoT for Enterprise IoT device visibility, alerts, and recommendations in the Azure portal only. <br><br>Vulnerability data isn't currently available. <br><br>**Requires**: <br>- Azure access as a [Security admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner) <br>- A physical or VM appliance to use as a sensor | Register an Enterprise IoT sensor in the **Getting started** > **Set up Enterprise IoT Security** page in Defender for IoT in the Azure portal. |
## Security value in Microsoft 365 Defender
The following image shows the architecture of an Enterprise IoT network sensor c
Start securing your Enterprise IoT network resources by [onboarding to Defender for IoT from Microsoft 365 Defender](eiot-defender-for-endpoint.md). Then, add even more device visibility by [adding an Enterprise IoT network sensor](eiot-sensor.md) to Defender for IoT.
-For more information, see [Enterprise IoT networks frequently asked questions](faqs-eiot.md).
+For more information, see [Enterprise IoT networks frequently asked questions](faqs-eiot.md).
defender-for-iot Concept Sentinel Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/concept-sentinel-integration.md
SecurityAlert
After you've installed the Microsoft Defender for IoT solution and deployed the [AD4IoT-AutoAlertStatusSync](iot-advanced-threat-monitoring.md#update-alert-statuses-in-defender-for-iot) playbook, alert status changes are synchronized from Microsoft Sentinel to Defender for IoT. Alert status changes are *not* synchronized from Defender for IoT to Microsoft Sentinel. > [!IMPORTANT]
-> We recommend that you manage your alert statuses together with the related incidents in Microsoft Sentinel. For more information, see [Work with incident tasks in Microsoft Sentinel](/azure/sentinel/work-with-tasks).
+> We recommend that you manage your alert statuses together with the related incidents in Microsoft Sentinel. For more information, see [Work with incident tasks in Microsoft Sentinel](../../sentinel/work-with-tasks.md).
> ### Defender for IoT incidents in Microsoft Sentinel
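To pull these alerts from the workspace's `SecurityAlert` table programmatically, a minimal sketch with the `azure-monitor-query` Python SDK might look like this; the workspace ID is a placeholder and the `ProductName` filter value is an assumption.

```python
# A minimal sketch, assuming azure-identity and azure-monitor-query are
# installed and the caller has Log Analytics Reader access to the workspace.
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(
    "<workspace-id>",
    # Illustrative KQL over the SecurityAlert table; the product name used
    # for Defender for IoT alerts is assumed here.
    'SecurityAlert | where ProductName == "Azure Security Center for IoT"'
    " | project TimeGenerated, AlertName, AlertSeverity",
    timespan=timedelta(days=7),
)
for table in response.tables:
    for row in table.rows:
        print(row)
```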
defender-for-iot Eiot Defender For Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/eiot-defender-for-endpoint.md
Make sure that you have:
|Identity management |Roles required | |||
- |**In Azure Active Directory** | [Global administrator](/azure/active-directory/roles/permissions-reference#global-administrator) for your Microsoft 365 tenant |
- |**In Azure RBAC** | [Security admin](/azure/role-based-access-control/built-in-roles#security-admin), [Contributor](/azure/role-based-access-control/built-in-roles#contributor), or [Owner](/azure/role-based-access-control/built-in-roles#owner) for the Azure subscription that you'll be using for the integration |
+ |**In Azure Active Directory** | [Global administrator](../../active-directory/roles/permissions-reference.md#global-administrator) for your Microsoft 365 tenant |
+ |**In Azure RBAC** | [Security admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner) for the Azure subscription that you'll be using for the integration |
## Onboard a Defender for IoT plan
Make sure that you have:
1. Select the following options for your plan:
- - **Select an Azure subscription**: Select the Azure subscription that you want to use for the integration. You'll need a [Security admin](/azure/role-based-access-control/built-in-roles#security-admin), [Contributor](/azure/role-based-access-control/built-in-roles#contributor), or [Owner](/azure/role-based-access-control/built-in-roles#owner) role for the subscription.
+ - **Select an Azure subscription**: Select the Azure subscription that you want to use for the integration. You'll need a [Security admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner) role for the subscription.
- **Price plan**: For the sake of this tutorial, select a **Trial** pricing plan. Microsoft Defender for IoT provides a [30-day free trial](billing.md#free-trial) for evaluation purposes. For more information, see the [Microsoft Defender for IoT pricing page](https://azure.microsoft.com/pricing/details/iot-defender/).
Learn how to set up an Enterprise IoT network sensor (Public preview) and gain m
Customers that have set up an Enterprise IoT network sensor will be able to see all discovered devices in the **Device inventory** in either Microsoft 365 Defender or Defender for IoT in the Azure portal. > [!div class="nextstepaction"]
-> [Enhance device discovery with an Enterprise IoT network sensor](eiot-sensor.md)
+> [Enhance device discovery with an Enterprise IoT network sensor](eiot-sensor.md)
defender-for-iot Eiot Sensor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/eiot-sensor.md
Before you start registering an Enterprise IoT sensor:
If you only want to view data in the Azure portal, an Enterprise IoT plan isn't required. You can also onboard your Enterprise IoT plan from Microsoft 365 Defender after registering your network sensor to bring [extra device visibility and security value](concept-enterprise.md#security-value-in-microsoft-365-defender) to your organization. -- Make sure you can access the Azure portal as a [Security admin](/azure/role-based-access-control/built-in-roles#security-admin), [Contributor](/azure/role-based-access-control/built-in-roles#contributor), or [Owner](/azure/role-based-access-control/built-in-roles#owner) user. If you don't already have an Azure account, you can [create your free Azure account today](https://azure.microsoft.com/free/).
+- Make sure you can access the Azure portal as a [Security admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner) user. If you don't already have an Azure account, you can [create your free Azure account today](https://azure.microsoft.com/free/).
- Allocate a physical appliance or a virtual machine (VM) to use as your network sensor. Make sure that your machine has the following specifications:
Billing changes will take effect one hour after cancellation of the previous sub
- [View and manage alerts from the Azure portal](how-to-manage-cloud-alerts.md). For more information, see [Malware engine alerts](alert-engine-messages.md#malware-engine-alerts). -- [Enhance security posture with security recommendations](recommendations.md)
+- [Enhance security posture with security recommendations](recommendations.md)
defender-for-iot Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/getting-started.md
This procedure describes how to add a trial Defender for IoT plan for OT network
1. In the **Plan settings** pane, define the following settings:
- - **Subscription**: Select the Azure subscription where you want to add a plan. You'll need a [Security admin](/azure/role-based-access-control/built-in-roles#security-admin), [Contributor](/azure/role-based-access-control/built-in-roles#contributor), or [Owner](/azure/role-based-access-control/built-in-roles#owner) role for the selected subscription.
+ - **Subscription**: Select the Azure subscription where you want to add a plan. You'll need a [Security admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner) role for the selected subscription.
> [!TIP] > If your subscription isn't listed, check your account details and confirm your permissions with the subscription owner. Also make sure that you have the right subscriptions selected in your Azure settings > **Directories + subscriptions** page.
Your new plan is listed under the relevant subscription on the **Plans and prici
> [Understand Defender for IoT subscription billing](billing.md) > [!div class="nextstepaction"]
-> [Defender for IoT pricing](https://azure.microsoft.com/pricing/details/iot-defender/)
-
+> [Defender for IoT pricing](https://azure.microsoft.com/pricing/details/iot-defender/)
defender-for-iot How To Forward Alert Information To Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-forward-alert-information-to-partners.md
If your forwarding alert rules aren't working as expected, check the following d
## Next steps > [!div class="nextstepaction"]
-> [View and manage alerts from the Azure portal](how-to-manage-cloud-alerts.md)
+> [Microsoft Defender for IoT alerts](alerts.md)
> [!div class="nextstepaction"] > [View and manage alerts on your OT sensor](how-to-view-alerts.md) > [!div class="nextstepaction"]
-> [Accelerate alert workflows on an OT network sensor](how-to-accelerate-alert-incident-response.md)
+> [View and manage alerts from the Azure portal](how-to-manage-cloud-alerts.md)
> [!div class="nextstepaction"] > [OT monitoring alert types and descriptions](alert-engine-messages.md)
-> [!div class="nextstepaction"]
-> [Forward alert information](how-to-forward-alert-information-to-partners.md)
-
-> [!div class="nextstepaction"]
-> [Microsoft Defender for IoT alerts](alerts.md)
defender-for-iot How To Manage Cloud Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-cloud-alerts.md
For more information, see [Azure user roles and permissions for Defender for IoT
| **Severity**| A predefined alert severity assigned by the sensor that you can [modify as needed](#manage-alert-severity-and-status). |
| **Name** | The alert title. |
| **Site** | The site associated with the sensor that detected the alert, as listed on the [Sites and sensors](how-to-manage-sensors-on-the-cloud.md#sensor-management-options-from-the-azure-portal) page.|
- | **Engine** | The [Defender for IoT detection engine](architecture.md#defender-for-iot-analytics-engines) that detected the activity and triggered the alert. <br><br>**Note**: A value of **Micro-agent** indicates that the event was triggered by the Defender for IoT [Device Builder](/azure/defender-for-iot/device-builders/) platform. |
+ | **Engine** | The [Defender for IoT detection engine](architecture.md#defender-for-iot-analytics-engines) that detected the activity and triggered the alert. <br><br>**Note**: A value of **Micro-agent** indicates that the event was triggered by the Defender for IoT [Device Builder](../device-builders/index.yml) platform. |
| **Last detection** | The last time the alert was detected. <br><br>- If an alert's status is **New**, and the same traffic is seen again, the **Last detection** time is updated for the same alert. <br>- If the alert's status is **Closed** and traffic is seen again, the **Last detection** time is *not* updated, and a new alert is triggered.|
| **Status** | The alert status: *New*, *Active*, *Closed* <br><br>For more information, see [Alert statuses and triaging options](alerts.md#alert-statuses-and-triaging-options).|
| **Source device** |The IP address, MAC address, or the name of the device where the traffic that triggered the alert originated. |
The file is generated, and you're prompted to save it locally.
> [OT monitoring alert types and descriptions](alert-engine-messages.md) > [!div class="nextstepaction"]
-> [Microsoft Defender for IoT alerts](alerts.md)
+> [Microsoft Defender for IoT alerts](alerts.md)
defender-for-iot How To Manage Individual Sensors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-individual-sensors.md
You'll need an SMTP mail server configured to enable email alerts about disconne
**Prerequisites**:
-Make sure you can reach the SMTP server from the [sensor's management port](/azure/defender-for-iot/organizations/best-practices/understand-network-architecture).
+Make sure you can reach the SMTP server from the [sensor's management port](./best-practices/understand-network-architecture.md).
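Before walking through the configuration steps, it can help to confirm basic reachability first. The following is a minimal sketch, not part of the official procedure, that tests an SMTP handshake from a machine on the same network segment as the sensor's management port; the host name and port are hypothetical placeholders you'd replace with your own mail server's values.

```python
import smtplib
import socket

# Hypothetical values: replace with your own mail server's address and port.
SMTP_HOST = "smtp.example.com"
SMTP_PORT = 25

try:
    # Open a TCP connection and perform the SMTP EHLO handshake.
    with smtplib.SMTP(SMTP_HOST, SMTP_PORT, timeout=10) as server:
        code, banner = server.ehlo()
        print(f"SMTP server reachable: {code} {banner.decode(errors='replace')}")
except (socket.timeout, OSError) as err:
    print(f"Cannot reach {SMTP_HOST}:{SMTP_PORT} - {err}")
```

If the handshake fails, check routing and firewall rules between the management port and the mail server before continuing with the sensor-side configuration.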
**To configure an SMTP server on your sensor**:
For more information, see:
- [Manage sensors with Defender for IoT in the Azure portal](how-to-manage-sensors-on-the-cloud.md) - [Threat intelligence research and packages](how-to-work-with-threat-intelligence-packages.md) - [Manage sensors from the management console](how-to-manage-sensors-from-the-on-premises-management-console.md)-- [Troubleshoot the sensor and on-premises management console](how-to-troubleshoot-the-sensor-and-on-premises-management-console.md)
+- [Troubleshoot the sensor and on-premises management console](how-to-troubleshoot-the-sensor-and-on-premises-management-console.md)
defender-for-iot How To Manage Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-subscriptions.md
Before performing the procedures in this article, make sure that you have:
- An Azure subscription. If you need to, [sign up for a free account](https://azure.microsoft.com/free/). -- A [Security admin](/azure/role-based-access-control/built-in-roles#security-admin), [Contributor](/azure/role-based-access-control/built-in-roles#contributor), or [Owner](/azure/role-based-access-control/built-in-roles#owner) user role for the Azure subscription that you'll be using for the integration
+- A [Security admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner) user role for the Azure subscription that you'll be using for the integration
## Calculate committed devices for OT monitoring
This procedure describes how to add a Defender for IoT plan for OT networks to a
- **Subscription**. Select the subscription where you would like to add a plan.
- You'll need a [Security admin](/azure/role-based-access-control/built-in-roles#security-admin), [Contributor](/azure/role-based-access-control/built-in-roles#contributor), or [Owner](/azure/role-based-access-control/built-in-roles#owner) role for the subscription.
+ You'll need a [Security admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner) role for the subscription.
> [!TIP] > If your subscription isn't listed, check your account details and confirm your permissions with the subscription owner.
defender-for-iot How To Troubleshoot The Sensor And On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-troubleshoot-the-sensor-and-on-premises-management-console.md
You can configure a standalone sensor and a management console, with the sensors
To connect a standalone sensor to NTP: -- [See the CLI documentation](/azure/defender-for-iot/organizations/references-work-with-defender-for-iot-cli-commands#sync-time-from-the-ntp-server).
+- [See the CLI documentation](./references-work-with-defender-for-iot-cli-commands.md).
To connect a sensor controlled by the management console to NTP:
defender-for-iot On Premises Sentinel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/integrations/on-premises-sentinel.md
You can [stream Microsoft Defender for IoT data into Microsoft Sentinel](../iot-
However, if you're working either in a hybrid environment or completely on-premises, you might want to stream data from your locally managed sensors to Microsoft Sentinel. To do this, create forwarding rules either on your OT network sensor or, for multiple sensors, from an on-premises management console.
-Stream data into Microsoft Sentinel whenever you want to use Microsoft Sentinel's advanced threat hunting, security analytics, and automation features when responding to security incidents and threats across your network. For more information, see [Microsoft Sentinel documentation](/azure/sentinel/).
+Stream data into Microsoft Sentinel whenever you want to use Microsoft Sentinel's advanced threat hunting, security analytics, and automation features when responding to security incidents and threats across your network. For more information, see [Microsoft Sentinel documentation](../../../sentinel/index.yml).
## Prerequisites
Before you start, make sure that you have the following prerequisites as needed:
- Access to the OT network sensor or on-premises management console as an **Admin** user. For more information, see [On-premises users and roles for OT monitoring with Defender for IoT](../roles-on-premises.md). -- A proxy machine prepared to send data to Microsoft Sentinel. For more information, see [Get CEF-formatted logs from your device or appliance into Microsoft Sentinel](/azure/sentinel/connect-common-event-format).
+- A proxy machine prepared to send data to Microsoft Sentinel. For more information, see [Get CEF-formatted logs from your device or appliance into Microsoft Sentinel](../../../sentinel/connect-common-event-format.md).
- If you want to encrypt the data you send to Microsoft Sentinel using TLS, make sure to generate a valid TLS certificate from the proxy server to use in your forwarding alert rule.
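As a quick sanity check of the proxy path, you can send a hand-built test message before creating the forwarding rule. The sketch below uses a hypothetical proxy address and a made-up CEF payload to send one syslog-framed event over UDP; it only verifies network reachability of the proxy, not parsing on the Microsoft Sentinel side.

```python
import socket

# Hypothetical address of the proxy machine that forwards CEF to Microsoft Sentinel.
PROXY_HOST = "192.0.2.10"
PROXY_PORT = 514

# A minimal, made-up CEF event with standard syslog priority framing.
cef_event = "CEF:0|Test|DefenderForIoT|1.0|100|Test alert|5|src=10.0.0.5"
message = f"<134>{cef_event}"

# Send the event over UDP, the common transport for plain syslog.
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
    sock.sendto(message.encode("utf-8"), (PROXY_HOST, PROXY_PORT))
print(f"Sent test CEF event to {PROXY_HOST}:{PROXY_PORT}")
```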
Select **Save** when you're done. Make sure to test the rule to make sure that i
> [Stream data from cloud-connected sensors](../iot-solution.md) > [!div class="nextstepaction"]
-> [Investigate in Microsoft Sentinel](/azure/sentinel/investigate-cases)
+> [Investigate in Microsoft Sentinel](../../../sentinel/investigate-cases.md)
defender-for-iot Send Cloud Data To Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/integrations/send-cloud-data-to-partners.md
As more businesses convert OT systems to digital IT infrastructures, security op
We recommend using Microsoft Defender for IoT's out-of-the-box [data connector](../iot-solution.md) and [solution](../iot-advanced-threat-monitoring.md) to integrate with Microsoft Sentinel and bridge the gap between the IT and OT security challenge.
-However, if you have other security information and event management (SIEM) systems, you can also use Microsoft Sentinel to forward Defender for IoT cloud alerts on to that partner SIEM, via [Microsoft Sentinel](/azure/sentinel/) and [Azure Event Hubs](/azure/event-hubs/).
+However, if you have other security information and event management (SIEM) systems, you can also use Microsoft Sentinel to forward Defender for IoT cloud alerts on to that partner SIEM, via [Microsoft Sentinel](../../../sentinel/index.yml) and [Azure Event Hubs](../../../event-hubs/index.yml).
While this article uses Splunk as an example, you can use the process described below with any SIEM that supports Event Hub ingestion, such as IBM QRadar.
You'll need Azure Active Directory (Azure AD) defined as a service principal for
**To register an Azure AD application and define permissions**:
-1. In [Azure AD](/azure/active-directory/), register a new application. On the **Certificates & secrets** page, add a new client secret for the service principal.
+1. In [Azure AD](../../../active-directory/index.yml), register a new application. On the **Certificates & secrets** page, add a new client secret for the service principal.
- For more information, see [Register an application with the Microsoft identity platform](/azure/active-directory/develop/quickstart-register-app)
+ For more information, see [Register an application with the Microsoft identity platform](../../../active-directory/develop/quickstart-register-app.md)
1. In your app's **API permissions** page, grant API permissions to read data from your app.
You'll need Azure Active Directory (Azure AD) defined as a service principal for
1. Make sure that admin consent is required for your permission.
- For more information, see [Configure a client application to access a web API](/azure/active-directory/develop/quickstart-configure-app-access-web-apis#add-permissions-to-access-your-web-api)
+ For more information, see [Configure a client application to access a web API](../../../active-directory/develop/quickstart-configure-app-access-web-apis.md#add-permissions-to-access-your-web-api)
1. From your app's **Overview** page, note the following values for your app:
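With the tenant ID, application (client) ID, and client secret noted, you can verify that the registration works end to end by acquiring an app-only token. The following is a minimal sketch using the MSAL Python library; all values are placeholders, and the Event Hubs scope shown assumes you'll use this service principal for event hub access as described later.

```python
import msal

# Placeholders: substitute the values from your app's Overview page
# and the client secret you created.
TENANT_ID = "<your-tenant-id>"
CLIENT_ID = "<your-application-client-id>"
CLIENT_SECRET = "<your-client-secret>"

app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)

# Request an app-only token. The scope targets the Event Hubs data plane,
# since that's where this service principal is used in the next steps.
result = app.acquire_token_for_client(scopes=["https://eventhubs.azure.net/.default"])

if "access_token" in result:
    print("Token acquired for the service principal.")
else:
    print(f"Token request failed: {result.get('error_description')}")
```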
Create an Azure event hub to use as a bridge between Microsoft Sentinel and your
In your event hub, make sure to define the **Partition Count** and **Message Retention** settings.
- For more information, see [Create an event hub using the Azure portal](/azure/event-hubs/event-hubs-create).
+ For more information, see [Create an event hub using the Azure portal](../../../event-hubs/event-hubs-create.md).
1. In your event hub namespace, select the **Access control (IAM)** page and add a new role assignment. Select the **Azure Event Hubs Data Receiver** role, and add the Azure AD service principal app that you'd created [earlier](#register-an-application-in-azure-active-directory) as a member.
- For more information, see: [Assign Azure roles using the Azure portal](/azure/role-based-access-control/role-assignments-portal).
+ For more information, see: [Assign Azure roles using the Azure portal](../../../role-based-access-control/role-assignments-portal.md).
1. In your event hub namespace's **Overview** page, make a note of the namespace's **Host name** value.
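Once incidents start flowing through the export rule described next, any Event Hubs-capable client can read them with the service principal created earlier. As a rough sketch of what Splunk or another SIEM does under the hood, the following consumes events with the Azure SDK for Python; all names and credentials are placeholders.

```python
from azure.eventhub import EventHubConsumerClient
from azure.identity import ClientSecretCredential

# Placeholders: the service principal's credentials from the earlier
# registration step.
credential = ClientSecretCredential(
    tenant_id="<your-tenant-id>",
    client_id="<your-application-client-id>",
    client_secret="<your-client-secret>",
)

# Placeholders: your event hub namespace host name and event hub name.
client = EventHubConsumerClient(
    fully_qualified_namespace="<your-namespace>.servicebus.windows.net",
    eventhub_name="<your-event-hub>",
    consumer_group="$Default",
    credential=credential,
)

def on_event(partition_context, event):
    # Each event body contains exported SecurityIncident records as JSON.
    print(event.body_as_str())

with client:
    # Read from the start of the stream; a production consumer would
    # normally add a checkpoint store to track its progress.
    client.receive(on_event=on_event, starting_position="-1")
```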
In your rule, make sure to define the following settings:
- Configure the **Source** as **SecurityIncident** - Configure the **Destination** as **Event Type**, using the event hub namespace and event hub name you'd recorded earlier.
-For more information, see [Log Analytics workspace data export in Azure Monitor](/azure/azure-monitor/logs/logs-data-export?tabs=portal#create-or-update-a-data-export-rule).
+For more information, see [Log Analytics workspace data export in Azure Monitor](../../../azure-monitor/logs/logs-data-export.md?tabs=portal#create-or-update-a-data-export-rule).
## Configure Splunk to consume Microsoft Sentinel incidents
Once data starts getting ingested into Splunk from your event hub, query the dat
This article describes how to forward alerts generated by cloud-connected sensors only. If you're working on-premises, such as in air-gapped environments, you may be able to create a forwarding alert rule to forward alert data directly from an OT sensor or on-premises management console.
-For more information, see [Integrations with Microsoft and partner services](../integrate-overview.md).
+For more information, see [Integrations with Microsoft and partner services](../integrate-overview.md).
defender-for-iot Iot Advanced Threat Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/iot-advanced-threat-monitoring.md
Last updated 09/18/2022
# Tutorial: Investigate and detect threats for IoT devices
-The integration between Microsoft Defender for IoT and [Microsoft Sentinel](/azure/sentinel/) enables SOC teams to efficiently and effectively detect and respond to security threats across your network. Enhance your security capabilities with the [Microsoft Defender for IoT solution](/azure/sentinel/sentinel-solutions-catalog#domain-solutions), a set of bundled content configured specifically for Defender for IoT data that includes analytics rules, workbooks, and playbooks.
+The integration between Microsoft Defender for IoT and [Microsoft Sentinel](../../sentinel/index.yml) enables SOC teams to efficiently and effectively detect and respond to security threats across your network. Enhance your security capabilities with the [Microsoft Defender for IoT solution](../../sentinel/sentinel-solutions-catalog.md#domain-solutions), a set of bundled content configured specifically for Defender for IoT data that includes analytics rules, workbooks, and playbooks.
In this tutorial, you:
In this tutorial, you:
Before you start, make sure you have: -- **Read** and **Write** permissions on your Microsoft Sentinel workspace. For more information, see [Permissions in Microsoft Sentinel](/azure/sentinel/roles).
+- **Read** and **Write** permissions on your Microsoft Sentinel workspace. For more information, see [Permissions in Microsoft Sentinel](../../sentinel/roles.md).
- Completed [Tutorial: Connect Microsoft Defender for IoT with Microsoft Sentinel](iot-solution.md). ## Install the Defender for IoT solution
-Microsoft Sentinel [solutions](/azure/sentinel/sentinel-solutions) can help you onboard Microsoft Sentinel security content for a specific data connector using a single process.
+Microsoft Sentinel [solutions](../../sentinel/sentinel-solutions.md) can help you onboard Microsoft Sentinel security content for a specific data connector using a single process.
The **Microsoft Defender for IoT** solution integrates Defender for IoT data with Microsoft Sentinel's security orchestration, automation, and response (SOAR) capabilities by providing out-of-the-box and optimized playbooks for automated response and prevention capabilities.
The **Microsoft Defender for IoT** solution integrates Defender for IoT data wit
1. When you're done, select **Review + Create** to install the solution.
-For more information, see [About Microsoft Sentinel content and solutions](/azure/sentinel/sentinel-solutions) and [Centrally discover and deploy out-of-the-box content and solutions](/azure/sentinel/sentinel-solutions-deploy).
+For more information, see [About Microsoft Sentinel content and solutions](../../sentinel/sentinel-solutions.md) and [Centrally discover and deploy out-of-the-box content and solutions](../../sentinel/sentinel-solutions-deploy.md).
## Detect threats out-of-the-box with Defender for IoT data
After you've [configured your Defender for IoT data to trigger new incidents i
> [!TIP] > To investigate the incident in Defender for IoT, select the **Investigate in Microsoft Defender for IoT** link at the top of the incident details pane.
-For more information on how to investigate incidents and use the investigation graph, see [Investigate incidents with Microsoft Sentinel](/azure/sentinel/investigate-cases).
+For more information on how to investigate incidents and use the investigation graph, see [Investigate incidents with Microsoft Sentinel](../../sentinel/investigate-cases.md).
### Investigate further with IoT device entities
The IoT device entity page provides contextual device information, with basic de
:::image type="content" source="media/iot-solution/iot-device-entity-page.png" alt-text="Screenshot of the IoT device entity page.":::
-For more information on entity pages, see [Investigate entities with entity pages in Microsoft Sentinel](/azure/sentinel/entity-pages).
+For more information on entity pages, see [Investigate entities with entity pages in Microsoft Sentinel](../../sentinel/entity-pages.md).
You can also hunt for vulnerable devices on the Microsoft Sentinel **Entity behavior** page. For example, view the top five IoT devices with the highest number of alerts, or search for a device by IP address or device name: :::image type="content" source="media/iot-solution/entity-behavior-iot-devices-alerts.png" alt-text="Screenshot of IoT devices by number of alerts on entity behavior page.":::
-For more information on how to investigate incidents and use the investigation graph, see [Investigate incidents with Microsoft Sentinel](/azure/sentinel/investigate-cases).
+For more information on how to investigate incidents and use the investigation graph, see [Investigate incidents with Microsoft Sentinel](../../sentinel/investigate-cases.md).
## Visualize and monitor Defender for IoT data
To visualize and monitor your Defender for IoT data, use the workbooks deployed
The Defender for IoT workbooks provide guided investigations for OT entities based on open incidents, alert notifications, and activities for OT assets. They also provide a hunting experience across the MITRE ATT&CK® framework for ICS, and are designed to enable analysts, security engineers, and MSSPs to gain situational awareness of OT security posture.
-View workbooks in Microsoft Sentinel on the **Threat management > Workbooks > My workbooks** tab. For more information, see [Visualize collected data](/azure/sentinel/get-visibility).
+View workbooks in Microsoft Sentinel on the **Threat management > Workbooks > My workbooks** tab. For more information, see [Visualize collected data](../../sentinel/get-visibility.md).
The following table describes the workbooks included in the **Microsoft Defender for IoT** solution:
Before using the out-of-the-box playbooks, make sure to perform the prerequisite
For more information, see: -- [Tutorial: Use playbooks with automation rules in Microsoft Sentinel](/azure/sentinel/tutorial-respond-threats-playbook)-- [Automate threat response with playbooks in Microsoft Sentinel](/azure/sentinel/automate-responses-with-playbooks)
+- [Tutorial: Use playbooks with automation rules in Microsoft Sentinel](../../sentinel/tutorial-respond-threats-playbook.md)
+- [Automate threat response with playbooks in Microsoft Sentinel](../../sentinel/automate-responses-with-playbooks.md)
### Playbook prerequisites
This procedure describes how to configure a Microsoft Sentinel analytics rule to
1. Select **Run**. > [!TIP]
-> You can also manually run a playbook on demand. This can be useful in situations where you want more control over orchestration and response processes. For more information, see [Run a playbook on demand](/azure/sentinel/tutorial-respond-threats-playbook#run-a-playbook-on-demand).
+> You can also manually run a playbook on demand. This can be useful in situations where you want more control over orchestration and response processes. For more information, see [Run a playbook on demand](../../sentinel/tutorial-respond-threats-playbook.md#run-a-playbook-on-demand).
### Automatically close incidents
This playbook updates the incident severity according to the importance level of
## Next steps > [!div class="nextstepaction"]
-> [Visualize data](/azure/sentinel/get-visibility)
+> [Visualize data](../../sentinel/get-visibility.md)
> [!div class="nextstepaction"]
-> [Create custom analytics rules](/azure/sentinel/detect-threats-custom)
+> [Create custom analytics rules](../../sentinel/detect-threats-custom.md)
> [!div class="nextstepaction"]
-> [Investigate incidents](/azure/sentinel/investigate-cases)
+> [Investigate incidents](../../sentinel/investigate-cases.md)
> [!div class="nextstepaction"]
-> [Investigate entities](/azure/sentinel/entity-pages)
+> [Investigate entities](../../sentinel/entity-pages.md)
> [!div class="nextstepaction"]
-> [Use playbooks with automation rules](/azure/sentinel/tutorial-respond-threats-playbook)
-
-For more information, see our blog: [Defending Critical Infrastructure with the Microsoft Sentinel: IT/OT Threat Monitoring Solution](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/defending-critical-infrastructure-with-the-microsoft-sentinel-it/ba-p/3061184)
+> [Use playbooks with automation rules](../../sentinel/tutorial-respond-threats-playbook.md)
+For more information, see our blog: [Defending Critical Infrastructure with the Microsoft Sentinel: IT/OT Threat Monitoring Solution](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/defending-critical-infrastructure-with-the-microsoft-sentinel-it/ba-p/3061184)
defender-for-iot Iot Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/iot-solution.md
In this tutorial, you will learn how to:
Before you start, make sure you have the following requirements on your workspace: -- **Read** and **Write** permissions on your Microsoft Sentinel workspace. For more information, see [Permissions in Microsoft Sentinel](/azure/sentinel/roles).
+- **Read** and **Write** permissions on your Microsoft Sentinel workspace. For more information, see [Permissions in Microsoft Sentinel](../../sentinel/roles.md).
- **Contributor** or **Owner** permissions on the subscription you want to connect to Microsoft Sentinel. - A Defender for IoT plan on your Azure subscription with data streaming into Defender for IoT. For more information, see [Quickstart: Get started with Defender for IoT](getting-started.md). > [!IMPORTANT]
-> Currently, having both the Microsoft Defender for IoT and the [Microsoft Defender for Cloud](/azure/sentinel/data-connectors-reference#microsoft-defender-for-cloud) data connectors enabled on the same Microsoft Sentinel workspace simultaneously may result in duplicate alerts in Microsoft Sentinel. We recommend that you disconnect the Microsoft Defender for Cloud data connector before connecting to Microsoft Defender for IoT.
+> Currently, having both the Microsoft Defender for IoT and the [Microsoft Defender for Cloud](../../sentinel/data-connectors-reference.md#microsoft-defender-for-cloud) data connectors enabled on the same Microsoft Sentinel workspace simultaneously may result in duplicate alerts in Microsoft Sentinel. We recommend that you disconnect the Microsoft Defender for Cloud data connector before connecting to Microsoft Defender for IoT.
> ## Connect your data from Defender for IoT to Microsoft Sentinel
Start by enabling the **Defender for IoT** data connector to stream all your Def
If you've made any connection changes, it can take 10 seconds or more for the **Subscription** list to update.
-For more information, see [Connect Microsoft Sentinel to Azure, Windows, Microsoft, and Amazon services](/azure/sentinel/connect-azure-windows-microsoft-services).
+For more information, see [Connect Microsoft Sentinel to Azure, Windows, Microsoft, and Amazon services](../../sentinel/connect-azure-windows-microsoft-services.md).
## View Defender for IoT alerts
After you've connected a subscription to Microsoft Sentinel, you'll be able to v
> [!NOTE] > The **Logs** page in Microsoft Sentinel is based on Azure Monitor's Log Analytics. >
-> For more information, see [Log queries overview](/azure/azure-monitor/logs/log-query-overview) in the Azure Monitor documentation and the [Write your first KQL query](/training/modules/write-first-query-kusto-query-language/) Learn module.
+> For more information, see [Log queries overview](../../azure-monitor/logs/log-query-overview.md) in the Azure Monitor documentation and the [Write your first KQL query](/training/modules/write-first-query-kusto-query-language/) Learn module.
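If you prefer to run the same kind of query outside the portal, the **SecurityAlert** table can also be queried programmatically. The following is a minimal sketch using the azure-monitor-query package; the workspace ID is a placeholder, and the product-name filter is an assumption you should check against the values your own alerts carry.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

# Authenticate with whatever credential is available in the environment
# (CLI login, managed identity, environment variables, and so on).
credential = DefaultAzureCredential()
client = LogsQueryClient(credential)

# Placeholder: the workspace ID from your Microsoft Sentinel workspace.
WORKSPACE_ID = "<your-log-analytics-workspace-id>"

# Assumed filter: verify the ProductName value in your own workspace.
QUERY = """
SecurityAlert
| where ProductName has "IoT"
| project TimeGenerated, AlertName, AlertSeverity, Description
| take 20
"""

response = client.query_workspace(WORKSPACE_ID, QUERY, timespan=timedelta(days=7))
for table in response.tables:
    for row in table.rows:
        print(list(row))
```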
> ### Understand alert timestamps
For more information, see:
- [Tutorial: Investigate and detect threats for IoT devices](iot-advanced-threat-monitoring.md) - [Defending Critical Infrastructure with the Microsoft Sentinel: IT/OT Threat Monitoring Solution](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/defending-critical-infrastructure-with-the-microsoft-sentinel-it/ba-p/3061184) - [Microsoft Defender for IoT solution](https://azuremarketplace.microsoft.com/marketplace/apps/azuresentinel.azure-sentinel-solution-unifiedmicrosoftsocforot?tab=Overview)-- [Microsoft Defender for IoT data connector](/azure/sentinel/data-connectors-reference#microsoft-defender-for-iot)-
+- [Microsoft Defender for IoT data connector](../../sentinel/data-connectors-reference.md#microsoft-defender-for-iot)
defender-for-iot Manage Subscriptions Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/manage-subscriptions-enterprise.md
Before performing the procedures in this article, make sure that you have:
- The following user roles:
- - **In Azure Active Directory**: [Global administrator](/azure/active-directory/roles/permissions-reference#global-administrator) for your Microsoft 365 tenant
+ - **In Azure Active Directory**: [Global administrator](../../active-directory/roles/permissions-reference.md#global-administrator) for your Microsoft 365 tenant
- - **In Azure RBAC**: [Security admin](/azure/role-based-access-control/built-in-roles#security-admin), [Contributor](/azure/role-based-access-control/built-in-roles#contributor), or [Owner](/azure/role-based-access-control/built-in-roles#owner) for the Azure subscription that you'll be using for the integration
+ - **In Azure RBAC**: [Security admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner) for the Azure subscription that you'll be using for the integration
## Calculate committed devices for Enterprise IoT monitoring
This procedure describes how to add an Enterprise IoT plan to your Azure subscri
1. Select the following options for your plan:
- - **Select an Azure subscription**: Select the Azure subscription that you want to use for the integration. You'll need a [Security admin](/azure/role-based-access-control/built-in-roles#security-admin), [Contributor](/azure/role-based-access-control/built-in-roles#contributor), or [Owner](/azure/role-based-access-control/built-in-roles#owner) role for the subscription.
+ - **Select an Azure subscription**: Select the Azure subscription that you want to use for the integration. You'll need a [Security admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner) role for the subscription.
> [!TIP] > If your subscription isn't listed, check your account details and confirm your permissions with the subscription owner.
For more information, see:
- [Create an additional Azure subscription](../../cost-management-billing/manage/create-subscription.md) -- [Upgrade your Azure subscription](../../cost-management-billing/manage/upgrade-azure-subscription.md)
+- [Upgrade your Azure subscription](../../cost-management-billing/manage/upgrade-azure-subscription.md)
defender-for-iot Manage Users Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/manage-users-overview.md
Microsoft Defender for IoT provides tools both in the Azure portal and on-premis
## Azure users for Defender for IoT
-In the Azure portal, users are managed at the subscription level with [Azure Active Directory](/azure/active-directory/) and [Azure role-based access control (RBAC)](/azure/role-based-access-control/overview). Azure subscription users can have one or more user roles, which determine the data and actions they can access from the Azure portal, including in Defender for IoT.
+In the Azure portal, users are managed at the subscription level with [Azure Active Directory](../../active-directory/index.yml) and [Azure role-based access control (RBAC)](../../role-based-access-control/overview.md). Azure subscription users can have one or more user roles, which determine the data and actions they can access from the Azure portal, including in Defender for IoT.
-Use the [portal](/azure/role-based-access-control/quickstart-assign-role-user-portal) or [PowerShell](/azure/role-based-access-control/tutorial-role-assignments-group-powershell) to assign your Azure subscription users with the specific roles they'll need to view data and take action, such as whether they'll be viewing alert or device data, or managing pricing plans and sensors.
+Use the [portal](../../role-based-access-control/quickstart-assign-role-user-portal.md) or [PowerShell](../../role-based-access-control/tutorial-role-assignments-group-powershell.md) to assign your Azure subscription users with the specific roles they'll need to view data and take action, such as whether they'll be viewing alert or device data, or managing pricing plans and sensors.
For more information, see [Manage users on the Azure portal](manage-users-portal.md) and [Azure user roles for OT and Enterprise IoT monitoring](roles-azure.md)
For more information, see [Define global access permission for on-premises users
## Next steps -- [Manage Azure subscription users](/azure/role-based-access-control/quickstart-assign-role-user-portal)
+- [Manage Azure subscription users](../../role-based-access-control/quickstart-assign-role-user-portal.md)
- [Create and manage users on an OT network sensor](manage-users-sensor.md) - [Create and manage users on an on-premises management console](manage-users-on-premises-management-console.md) For more information, see: - [Azure user roles and permissions for Defender for IoT](roles-azure.md)-- [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md)
+- [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md)
defender-for-iot Manage Users Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/manage-users-portal.md
Microsoft Defender for IoT provides tools both in the Azure portal and on-premises for managing user access across Defender for IoT resources.
-In the Azure portal, user management is managed at the *subscription* level with [Azure Active Directory](/azure/active-directory/) and [Azure role-based access control (RBAC)](/azure/role-based-access-control/overview). Assign Azure Active Directory users with Azure roles at the subscription level so that they can add or update Defender for IoT pricing plans and access device data, manage sensors, and access device data across Defender for IoT.
+In the Azure portal, user management is managed at the *subscription* level with [Azure Active Directory](../../active-directory/index.yml) and [Azure role-based access control (RBAC)](../../role-based-access-control/overview.md). Assign Azure Active Directory users with Azure roles at the subscription level so that they can add or update Defender for IoT pricing plans and access device data, manage sensors, and access device data across Defender for IoT.
For OT network monitoring, Defender for IoT has the extra *site* level, which you can use to add granularity to your user management. For example, assign roles at the site level to apply different permissions for the same users across different sites.
For OT network monitoring, Defender for IoT has the extra *site* level, which yo
Manage user access for Defender for IoT using Azure RBAC, applying the roles to users or user groups as needed to access required functionality. -- [Grant a user access to Azure resources using the Azure portal](/azure/role-based-access-control/quickstart-assign-role-user-portal)-- [Grant a group access to Azure resources using Azure PowerShell](/azure/role-based-access-control/tutorial-role-assignments-group-powershell)
+- [Grant a user access to Azure resources using the Azure portal](../../role-based-access-control/quickstart-assign-role-user-portal.md)
+- [Grant a group access to Azure resources using Azure PowerShell](../../role-based-access-control/tutorial-role-assignments-group-powershell.md)
- [Azure user roles for OT and Enterprise IoT monitoring](roles-azure.md) ## Manage site-based access control (Public preview)
Sites and site-based access control is relevant only for OT monitoring sites, an
For more information, see: -- [Grant a user access to Azure resources using the Azure portal](/azure/role-based-access-control/quickstart-assign-role-user-portal)-- [List Azure role assignments using the Azure portal](/azure/role-based-access-control/role-assignments-list-portal)-- [Check access for a user to Azure resources](/azure/role-based-access-control/check-access)
+- [Grant a user access to Azure resources using the Azure portal](../../role-based-access-control/quickstart-assign-role-user-portal.md)
+- [List Azure role assignments using the Azure portal](../../role-based-access-control/role-assignments-list-portal.md)
+- [Check access for a user to Azure resources](../../role-based-access-control/check-access.md)
## Next steps
defender-for-iot Ot Pre Configured Appliances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/ot-pre-configured-appliances.md
You can purchase any of the following appliances for your OT on-premises managem
|---------|---------|---------|---------|
|**E1800** | [HPE ProLiant DL20 Gen10 Plus](appliance-catalog/hpe-proliant-dl20-plus-enterprise.md) (4SFF) <br><br> [Dell PowerEdge R350](appliance-catalog/dell-poweredge-r350-e1800.md) | 300 | **Mounting**: 1U <br>**Ports**: 8x RJ45 or 6x SFP (OPT) |
-For information about previously supported legacy appliances, see the [appliance catalog](/azure/defender-for-iot/organizations/appliance-catalog/).
+For information about previously supported legacy appliances, see the [appliance catalog](./appliance-catalog/index.yml).
## Next steps
Then, use any of the following procedures to continue:
- [Download software for an on-premises management console](how-to-manage-the-on-premises-management-console.md#download-software-for-the-on-premises-management-console) - [Install software](how-to-install-software.md)
-Our OT monitoring appliance reference articles also include extra installation procedures in case you need to install software on your own appliances, or reinstall software on preconfigured appliances. For more information, see [OT monitoring appliance reference](appliance-catalog/index.yml).
+Our OT monitoring appliance reference articles also include extra installation procedures in case you need to install software on your own appliances, or reinstall software on preconfigured appliances. For more information, see [OT monitoring appliance reference](appliance-catalog/index.yml).
defender-for-iot Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/recommendations.md
The following recommendations are displayed for devices detected by OT and Enter
| **Enterprise IoT network sensors** | |
| **Disable insecure administration protocol**| Devices with this recommendation are exposed to malicious threats because they use Telnet, which isn't a secure, encrypted communication protocol. <br><br>We recommend that you switch to a more secure protocol, such as SSH, disable the server altogether, or apply network access restrictions.|
-Other recommendations you may see in the **Recommendations** page are relevant for the [Defender for IoT micro agent](/azure/defender-for-iot/device-builders/).
+Other recommendations you may see in the **Recommendations** page are relevant for the [Defender for IoT micro agent](../device-builders/index.yml).
## Next steps > [!div class="nextstepaction"]
-> [View the device inventory](how-to-manage-device-inventory-for-organizations.md#view-the-device-inventory)
-
+> [View the device inventory](how-to-manage-device-inventory-for-organizations.md#view-the-device-inventory)
defender-for-iot Roles Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/roles-azure.md
# Azure user roles and permissions for Defender for IoT
-Microsoft Defender for IoT uses [Azure Role-Based Access Control (RBAC)](/azure/role-based-access-control/) to provide access to Enterprise IoT monitoring services and data on the Azure portal.
+Microsoft Defender for IoT uses [Azure Role-Based Access Control (RBAC)](../../role-based-access-control/index.yml) to provide access to Enterprise IoT monitoring services and data on the Azure portal.
The built-in Azure [Security Reader](../../role-based-access-control/built-in-roles.md#security-reader), [Security Admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), and [Owner](../../role-based-access-control/built-in-roles.md#owner) roles are relevant for use in Defender for IoT.
-This article provides a reference of Defender for IoT actions available for each role in the Azure portal. For more information, see [Azure built-in roles](/azure/role-based-access-control/built-in-roles).
+This article provides a reference of Defender for IoT actions available for each role in the Azure portal. For more information, see [Azure built-in roles](../../role-based-access-control/built-in-roles.md).
## Roles and permissions reference
For more information, see:
- [Manage OT monitoring users on the Azure portal](manage-users-portal.md) - [On-premises user roles for OT monitoring with Defender for IoT](roles-on-premises.md) - [Create and manage users on an OT network sensor](manage-users-sensor.md)-- [Create and manage users on an on-premises management console](manage-users-on-premises-management-console.md)
+- [Create and manage users on an on-premises management console](manage-users-on-premises-management-console.md)
defender-for-iot Track User Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/track-user-activity.md
After you've set up your user access for the [Azure portal](manage-users-portal.
Use Azure Active Directory user auditing resources to audit Azure user activity across Defender for IoT. For more information, see: -- [Audit logs in Azure Active directory](/azure/active-directory/reports-monitoring/concept-audit-logs)-- [Azure AD audit activity reference](/azure/active-directory/reports-monitoring/reference-audit-activities)
+- [Audit logs in Azure Active Directory](../../active-directory/reports-monitoring/concept-audit-logs.md)
+- [Azure AD audit activity reference](../../active-directory/reports-monitoring/reference-audit-activities.md)
## Audit user activity on an OT network sensor
For more information, see:
- [Azure user roles and permissions for Defender for IoT](roles-azure.md) - [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md) - [Create and manage users on an OT network sensor](manage-users-sensor.md)-- [Create and manage users on an on-premises management console](manage-users-on-premises-management-console.md)
+- [Create and manage users on an on-premises management console](manage-users-on-premises-management-console.md)
deployment-environments How To Configure Catalog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-configure-catalog.md
To store the personal access token you generated as a [key vault secret](../key-
| **Name** | Enter a name for the catalog. |
| **Git clone URI** | Enter or paste the [clone URL](#get-the-clone-url-for-your-repository) for either your GitHub repository or your Azure DevOps repository.<br/>*Sample Catalog Example:* https://github.com/Azure/deployment-environments.git |
| **Branch** | Enter the repository branch to connect to.<br/>*Sample Catalog Example:* main|
- | **Folder path** | Enter the folder path relative to the clone URI that contains subfolders with your catalog items. This folder path should be the path to the folder that contains the subfolders with the catalog item manifests, and not the path to the folder with the catalog item manifest itself.<br/>*Sample Catalog Example:* /Environments|
+ | **Folder path** | Enter the folder path relative to the clone URI that contains subfolders with your catalog items. <br/> This folder path should be the path to the folder that contains the subfolders with the catalog item manifests, and not the path to the folder with the catalog item manifest itself.<br/>*Sample Catalog Example:* /Environments|
| **Secret identifier**| Enter the [secret identifier](#create-a-personal-access-token) that contains your personal access token for the repository.|
- :::image type="content" source="media/how-to-configure-catalog/add-new-catalog-form.png" alt-text="Screenshot that shows how to add a catalog to a dev center.":::
+ :::image type="content" source="media/how-to-configure-catalog/add-catalog-form-inline.png" alt-text="Screenshot that shows how to add a catalog to a dev center." lightbox="media/how-to-configure-catalog/add-catalog-form-expanded.png":::
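To illustrate the **Folder path** setting, here's a hypothetical repository layout; the folder and file names are purely illustrative. With this structure, the correct folder path is `/Environments`, because it's the folder whose subfolders contain the catalog item manifests, not any one of the subfolders themselves.

```text
/Environments
    /WebApp
        manifest.yaml
        azuredeploy.json
    /FunctionApp
        manifest.yaml
        azuredeploy.json
```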
1. In **Catalogs** for the dev center, verify that your catalog appears. If the connection is successful, **Status** is **Connected**.
deployment-environments Quickstart Create And Configure Devcenter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/quickstart-create-and-configure-devcenter.md
Previously updated : 10/26/2022 Last updated : 12/20/2022 # Quickstart: Create and configure a dev center This quickstart shows you how to create and configure a dev center in Azure Deployment Environments Preview.
-An enterprise development infrastructure team typically sets up a dev center, configures different entities within the dev center, creates projects, and provides access to development teams. Development teams create [environments](concept-environments-key-concepts.md#environments) by using [catalog items](concept-environments-key-concepts.md#catalog-items), connect to individual resources, and deploy applications.
-
-In this quickstart, you learn how to:
-
-> [!div class="checklist"]
->
-> - Create a dev center
-> - Attach an identity to your dev center
-> - Attach a catalog to your dev center
-> - Create environment types
+An enterprise development infrastructure team typically sets up a dev center, attaches external catalogs to the dev center, creates projects, and provides access to development teams. Development teams create [environments](concept-environments-key-concepts.md#environments) by using [catalog items](concept-environments-key-concepts.md#catalog-items), connect to individual resources, and deploy applications.
> [!IMPORTANT] > Azure Deployment Environments currently is in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, review the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
In this quickstart, you learn how to:
- Azure role-based access control role with permissions to create and manage resources in the subscription, such as [Contributor](../role-based-access-control/built-in-roles.md#contributor) or [Owner](../role-based-access-control/built-in-roles.md#owner). ## Create a dev center- To create and configure a Dev center in Azure Deployment Environments by using the Azure portal: 1. Sign in to the [Azure portal](https://portal.azure.com).
To create and configure a Dev center in Azure Deployment Environments by using t
|**Name**|Enter a name for the dev center.|
|**Location**|Select the location or region where you want to create the dev center.|
-1. (Optional) Select the **Tags** tab and enter a **Name**:**Value** pair.
1. Select **Review + Create**. 1. On the **Review** tab, wait for deployment validation, and then select **Create**.
To create and configure a Dev center in Azure Deployment Environments by using t
:::image type="content" source="media/quickstart-create-and-configure-devcenter/deployment-environments-devcenter-created.png" alt-text="Screenshot that shows the Dev centers overview, to confirm that the dev center is created.":::
-## Attach an identity to the dev center
+## Create a Key Vault
+You'll need an Azure Key Vault to store the GitHub personal access token (PAT) that is used to grant Azure access to your GitHub repository.
+If you don't have an existing key vault, use the following steps to create one:
-After you create a dev center, attach an [identity](concept-environments-key-concepts.md#identities) to the dev center. Learn about the two [types of identities](how-to-configure-managed-identity.md#add-a-managed-identity) you can attach:
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. In the Search box, enter *Key Vault*.
+1. From the results list, select **Key Vault**.
+1. On the Key Vault page, select **Create**.
+1. On the Create key vault page, provide the following information:
-- System-assigned managed identity-- User-assigned managed identity
+ |Name |Value |
+ |-|--|
+ |**Name**|Enter a name for the key vault.|
+ |**Subscription**|Select the subscription in which you want to create the key vault.|
+ |**Resource group**|Either use an existing resource group or select **Create new** and enter a name for the resource group.|
+ |**Location**|Select the location or region where you want to create the key vault.|
+
+ Leave the other options at their defaults.
-You can use a system-assigned managed identity or a user-assigned managed identity. You don't have to use both. For more information, review [Configure a managed identity](how-to-configure-managed-identity.md).
+1. Select **Create**.
-> [!NOTE]
-> In Azure Deployment Environments Preview, if you add both a system-assigned identity and a user-assigned identity, only the user-assigned identity is used.
+## Create a personal access token
+Using an authentication token like a GitHub personal access token (PAT) enables you to share your repository securely.
+> [!TIP]
+> If you are attaching an Azure DevOps repository, use these steps: [Create a personal access token in Azure DevOps](how-to-configure-catalog.md#create-a-personal-access-token-in-azure-devops).
-### Attach a system-assigned managed identity
+1. In a new browser tab, sign into your [GitHub](https://github.com) account.
+1. On your profile menu, select **Settings**.
+1. On your account page, on the left menu, select **< >Developer Settings**.
+1. On the Developer settings page, select **Tokens (classic)**.
+
+ :::image type="content" source="media/quickstart-create-and-configure-devcenter/github-pat.png" alt-text="Screenshot that shows the GitHub Tokens (classic) option.":::
+
+ Fine-grained and classic tokens work with Azure Deployment Environments.
-To attach a system-assigned managed identity to your dev center:
+1. On the New personal access token (classic) page:
+ - In the **Note** box, add a note describing the token's intended use.
+ - In **Select scopes**, select repo.
-1. Complete the steps to create a [system-assigned managed identity](how-to-configure-managed-identity.md#add-a-system-assigned-managed-identity-to-a-dev-center).
+ :::image type="content" source="media/quickstart-create-and-configure-devcenter/generate-git-hub-token.png" alt-text="Screenshot that shows the GitHub Tokens (classic) configuration page.":::
+
+1. Select **Generate token**.
+1. On the Personal access tokens (classic) page, copy the new token.
+
+ :::image type="content" source="media/quickstart-create-and-configure-devcenter/copy-new-token.png" alt-text="Screenshot that shows the new GitHub token with the copy button highlighted.":::
+
+ > [!WARNING]
+ > You must copy the token now. You will not be able to access it again.
+
+1. Switch back to the **Key Vault - Microsoft Azure** browser tab.
+1. In the Key Vault, on the left menu, select **Secrets**.
+1. On the Secrets page, select **Generate/Import**.
+
+ :::image type="content" source="media/quickstart-create-and-configure-devcenter/import-secret.png" alt-text="Screenshot that shows the key vault Secrets page with the generate/import button highlighted.":::
+
+1. On the Create a secret page:
+ - In the **Name** box, enter a descriptive name for your secret.
+ - In the **Secret value** box, paste the GitHub token you copied in step 7.
+
+ :::image type="content" source="media/quickstart-create-and-configure-devcenter/create-secret-in-key-vault.png" alt-text="Screenshot that shows the Create a secret page with the Name and Secret value text boxes highlighted.":::
+
+ - Select **Create**.
+1. Leave this tab open; you'll need to come back to the key vault later.
+
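If you prefer scripting over the portal, the same secret can be created with the Azure SDK for Python. This is a rough equivalent of the Generate/Import steps above, assuming you're signed in with a credential that has secret permissions on the vault; the vault URL and secret name are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# Placeholder vault URL: substitute your own key vault's name.
VAULT_URL = "https://<your-key-vault-name>.vault.azure.net"

credential = DefaultAzureCredential()
client = SecretClient(vault_url=VAULT_URL, credential=credential)

# Store the GitHub PAT as a secret, mirroring Secrets > Generate/Import.
secret = client.set_secret("github-pat", "<paste-your-github-pat>")

# The secret identifier is what you'll later paste into the dev center's
# "Secret identifier" field when attaching the catalog.
print(secret.id)
```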
+## Attach an identity to the dev center
- :::image type="content" source="media/quickstart-create-and-configure-devcenter/system-assigned-managed-identity.png" alt-text="Screenshot that shows a system-assigned managed identity.":::
+After you create a dev center, attach an [identity](concept-environments-key-concepts.md#identities) to the dev center. You can attach either a system-assigned managed identity or a user-assigned managed identity. Learn about the two [types of identities](how-to-configure-managed-identity.md#add-a-managed-identity).
-1. After you create a system-assigned managed identity, assign the Owner role to give the [identity access](how-to-configure-managed-identity.md#assign-a-subscription-role-assignment-to-the-managed-identity) on the subscriptions that will be used to configure [project environment types](concept-environments-key-concepts.md#project-environment-types).
+In this quickstart, you'll configure a system-assigned managed identity for your dev center.
- Make sure that the identity has [access to the key vault secret](how-to-configure-managed-identity.md#grant-the-managed-identity-access-to-the-key-vault-secret) that contains the personal access token to access your repository.
+## Attach a system-assigned managed identity
-### Attach an existing user-assigned managed identity
+To attach a system-assigned managed identity to your dev center:
-To attach a user-assigned managed identity to your dev center:
+1. In Dev centers, select your dev center.
+1. In the left menu under Settings, select **Identity**.
+1. Under **System assigned**, set **Status** to **On**, and then select **Save**.
-1. Complete the steps to attach a [user-assigned managed identity](how-to-configure-managed-identity.md#add-a-user-assigned-managed-identity-to-a-dev-center).
+ :::image type="content" source="media/quickstart-create-and-configure-devcenter/system-assigned-managed-identity-on.png" alt-text="Screenshot that shows a system-assigned managed identity.":::
- :::image type="content" source="media/quickstart-create-and-configure-devcenter/user-assigned-managed-identity.png" alt-text="Screenshot that shows a user-assigned managed identity.":::
+1. In the **Enable system assigned managed identity** dialog, select **Yes**.
-1. After you attach the identity, assign the Owner role to give the [identity access](how-to-configure-managed-identity.md#assign-a-subscription-role-assignment-to-the-managed-identity) on the subscriptions that will be used to configure [project environment types](how-to-configure-project-environment-types.md). Give the identity Reader access to all subscriptions that a project lives in.
+### Assign the system-assigned managed identity access to the key vault secret
+Make sure that the identity has access to the key vault secret that contains the personal access token to access your repository.
- Make sure that the identity has [access to the key vault secret](how-to-configure-managed-identity.md#grant-the-managed-identity-access-to-the-key-vault-secret) that contains the personal access token to access the repository.
+Configure a key vault access policy:
+1. In the Azure portal, go to the key vault that contains the secret with the personal access token.
+2. In the left menu, select **Access policies**, and then select **Create**.
+3. In Create an access policy, enter or select the following information:
+ - On the Permissions tab, under **Secret permissions**, select **Select all**, and then select **Next**.
+ - On the Principal tab, select the identity that's attached to the dev center, and then select **Next**.
+ - Select **Review + create**, and then select **Create**.
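
The same policy can be created programmatically; here's a sketch assuming the `azure-mgmt-keyvault` package (all IDs and names are placeholders). It grants only the `get` secret permission, a narrower alternative to the portal's **Select all**.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.keyvault import KeyVaultManagementClient
from azure.mgmt.keyvault.models import (
    AccessPolicyEntry,
    Permissions,
    VaultAccessPolicyParameters,
    VaultAccessPolicyProperties,
)

# Placeholders - replace with your subscription, resource group, vault,
# tenant, and the object ID of the dev center's managed identity.
SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "<resource-group-name>"
VAULT_NAME = "<key-vault-name>"
TENANT_ID = "<tenant-id>"
IDENTITY_OBJECT_ID = "<dev-center-identity-object-id>"

client = KeyVaultManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# "add" appends an access policy entry for the identity.
policy = AccessPolicyEntry(
    tenant_id=TENANT_ID,
    object_id=IDENTITY_OBJECT_ID,
    permissions=Permissions(secrets=["get"]),
)
client.vaults.update_access_policy(
    RESOURCE_GROUP,
    VAULT_NAME,
    "add",
    VaultAccessPolicyParameters(
        properties=VaultAccessPolicyProperties(access_policies=[policy])
    ),
)
```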
-> [!NOTE]
-> The [identity](concept-environments-key-concepts.md#identities) that's attached to the dev center should be assigned the Owner role for access to the deployment subscription for each environment type.
## Add a catalog to the dev center
+Azure Deployment Environments Preview supports attaching Azure DevOps repositories and GitHub repositories. You can store a set of curated IaC templates in a repository. Attaching the repository to a dev center as a catalog gives your development teams access to the templates and enables them to quickly create consistent environments.
-> [!NOTE]
-> Before you add a [catalog](concept-environments-key-concepts.md#catalogs), store the personal access token as a [key vault secret](../key-vault/secrets/quick-create-portal.md) in Azure Key Vault and copy the secret identifier. Ensure that the [identity](concept-environments-key-concepts.md#identities) that's attached to the dev center has [GET access to the secret](../key-vault/general/assign-access-policy.md).
+In this quickstart, you'll attach a GitHub repository that contains samples created and maintained by the Azure Deployment Environments team.
-To add a catalog to your dev center:
+To add a catalog to your dev center, you'll first need to gather some information.
-1. In the Azure portal, go to Azure Deployment Environments.
-1. In **Dev centers**, select your dev center.
+### Gather GitHub repo information
+To add a catalog, you must specify the GitHub repo URL, the branch, and the folder that contains your catalog items. You can gather this information before you begin the process of adding the catalog to the dev center, and paste it somewhere accessible, like Notepad.
+
+> [!TIP]
+> If you are attaching an Azure DevOps repository, use these steps: [Get the clone URL of an Azure DevOps repository](how-to-configure-catalog.md#get-the-clone-url-of-an-azure-devops-repository).
+
+1. On your [GitHub](https://github.com) repository page, select **<> Code**, and then select the copy icon.
+1. Take a note of the branch that you're working in.
+1. Take a note of the folder that contains your catalog items.
+
+ :::image type="content" source="media/quickstart-create-and-configure-devcenter/github-info.png" alt-text="Screenshot that shows the GitHub repo with Code, branch, and folder highlighted.":::
+
+### Gather the secret identifier
+You'll also need the path to the secret you created in the key vault.
+
+1. In the Azure portal, navigate to your key vault.
+1. On the key vault page, from the left menu, select **Secrets**.
+1. On the Secrets page, select the secret you created earlier.
+
+ :::image type="content" source="media/quickstart-create-and-configure-devcenter/key-vault-secrets-page.png" alt-text="Screenshot that shows the list of secrets in the key vault with one highlighted.":::
+
+1. On the versions page, select the **CURRENT VERSION**.
+
+ :::image type="content" source="media/quickstart-create-and-configure-devcenter/key-vault-versions-page.png" alt-text="Screenshot that shows the current version of the selected secret.":::
+
+1. On the current version page, for the **Secret identifier**, select the copy icon.
+
+ :::image type="content" source="media/quickstart-create-and-configure-devcenter/key-vault-current-version-page.png" alt-text="Screenshot that shows the details of the current version of the selected secret, with the secret identifier copy button highlighted.":::
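
If you already have the Azure SDK for Python set up, you can read the same identifier programmatically; a minimal sketch using the `azure-keyvault-secrets` package (vault URL and secret name are placeholders):

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

client = SecretClient(
    vault_url="https://<your-key-vault-name>.vault.azure.net",
    credential=DefaultAzureCredential(),
)

# The secret's id property is the full secret identifier URI, including the
# version: https://<vault-name>.vault.azure.net/secrets/<name>/<version>
secret = client.get_secret("<your-secret-name>")
print(secret.id)
```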
+
+### Add a catalog to your dev center
+1. Navigate to your dev center.
1. In the left menu under **Environment configuration**, select **Catalogs**, and then select **Add**.

   :::image type="content" source="media/quickstart-create-and-configure-devcenter/catalogs-page.png" alt-text="Screenshot that shows the Catalogs pane.":::

1. In the **Add catalog** pane, enter the following information, and then select **Add**.
- |Name |Value |
- ||-|
- |**Name**|Enter a name for your catalog.|
- |**Git clone URI**|Enter the URI to your GitHub or Azure DevOps repository.|
- |**Branch**|Enter the repository branch that you want to connect.|
- |**Folder path**|Enter the repository relative path where the [catalog item](concept-environments-key-concepts.md#catalog-items) exists.|
- |**Secret identifier**|Enter the secret identifier that contains your personal access token for the repository.|
+ | Name | Value |
+ | -- | -- |
+ | **Name** | Enter a name for the catalog. |
+ | **Git clone URI** | Enter or paste the clone URL for either your GitHub repository or your Azure DevOps repository.<br/>*Sample Catalog Example:* https://github.com/Azure/deployment-environments.git |
+ | **Branch** | Enter the repository branch to connect to.<br/>*Sample Catalog Example:* main|
+ | **Folder path** | Enter the folder path relative to the clone URI that contains subfolders with your catalog items.<br/>This folder path should be the path to the folder that contains the subfolders with the catalog item manifests, and not the path to the folder with the catalog item manifest itself.<br/>*Sample Catalog Example:* /Environments|
+ | **Secret identifier**| Enter the secret identifier that contains your personal access token for the repository.|
- :::image type="content" source="media/how-to-configure-catalog/add-new-catalog-form.png" alt-text="Screenshot that shows how to add a catalog to a dev center.":::
+ :::image type="content" source="media/how-to-configure-catalog/add-catalog-form-inline.png" alt-text="Screenshot that shows how to add a catalog to a dev center." lightbox="media/how-to-configure-catalog/add-catalog-form-expanded.png":::
1. Confirm that the catalog is successfully added by checking your Azure portal notifications.
-1. Select the specific repository, and then select **Sync**.
-
- :::image type="content" source="media/quickstart-create-and-configure-devcenter/sync-catalog.png" alt-text="Screenshot that shows how to sync the catalog." :::
-
## Create an environment type

Use an environment type to help you define the different types of environments your development teams can deploy. You can apply different settings for each environment type.
deployment-environments Quickstart Create And Configure Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/quickstart-create-and-configure-projects.md
This quickstart shows you how to create a project in Azure Deployment Environmen
An enterprise development infrastructure team typically creates projects and provides project access to development teams. Development teams then create [environments](concept-environments-key-concepts.md#environments) by using [catalog items](concept-environments-key-concepts.md#catalog-items), connect to individual resources, and deploy applications.
-In this quickstart, you learn how to:
-
-> [!div class="checklist"]
->
-> - Create a project
-> - Configure a project
-> - Provide project access to the development team
-
> [!IMPORTANT]
> Azure Deployment Environments is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
To create a project in your dev center:
   |**Name**|Enter a name for the project. |
   |**Description** (Optional) |Enter any project-related details. |
-1. Select the **Tags** tab and enter a **Name**:**Value** pair.
-
1. On the **Review + Create** tab, wait for deployment validation, and then select **Create**.

   :::image type="content" source="media/quickstart-create-configure-projects/create-project-page-review-create.png" alt-text="Screenshot that shows selecting the Review + Create button to validate and create a project.":::
To create a project in your dev center:
:::image type="content" source="media/quickstart-create-configure-projects/created-project.png" alt-text="Screenshot that shows the project overview pane.":::
+### Assign the managed identity the Owner role on the subscription
+Before you can create environment types, you must give the managed identity that represents your dev center access to the subscriptions where you'll configure the [project environment types](concept-environments-key-concepts.md#project-environment-types).
+
+In this quickstart, you'll assign the Owner role to the system-assigned managed identity that you configured previously: [Attach a system-assigned managed identity](quickstart-create-and-configure-devcenter.md#attach-a-system-assigned-managed-identity).
+
+1. Navigate to your dev center.
+1. On the left menu under Settings, select **Identity**.
+1. Under System assigned > Permissions, select **Azure role assignments**.
+
+ :::image type="content" source="media/quickstart-create-configure-projects/system-assigned-managed-identity.png" alt-text="Screenshot that shows a system-assigned managed identity with Role assignments highlighted.":::
+
+1. In Azure role assignments, select **Add role assignment (Preview)**, and then enter or select the following information:
+ - In **Scope**, select **Subscription**.
+ - In **Subscription**, select the subscription in which to use the managed identity.
+ - In **Role**, select **Owner**.
+ - Select **Save**.
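
If you prefer to script the assignment, here's a sketch assuming the `azure-mgmt-authorization` package. The subscription ID and the identity's object (principal) ID are placeholders; the GUID in the role definition ID is the Azure built-in Owner role.

```python
import uuid

from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient
from azure.mgmt.authorization.models import RoleAssignmentCreateParameters

# Placeholders - replace with your subscription and the object ID of the
# dev center's system-assigned managed identity.
SUBSCRIPTION_ID = "<subscription-id>"
IDENTITY_OBJECT_ID = "<dev-center-identity-object-id>"

scope = f"/subscriptions/{SUBSCRIPTION_ID}"

# 8e3af657-a8ff-443c-a75c-2fe8c4bcb635 is the built-in Owner role definition.
owner_role_id = (
    f"{scope}/providers/Microsoft.Authorization/roleDefinitions/"
    "8e3af657-a8ff-443c-a75c-2fe8c4bcb635"
)

client = AuthorizationManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)
client.role_assignments.create(
    scope,
    str(uuid.uuid4()),  # role assignment names are new GUIDs
    RoleAssignmentCreateParameters(
        role_definition_id=owner_role_id,
        principal_id=IDENTITY_OBJECT_ID,
        principal_type="ServicePrincipal",  # managed identities resolve to service principals
    ),
)
```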
## Configure a project

To configure a project, add a [project environment type](how-to-configure-project-environment-types.md):
dev-box Quickstart Configure Dev Box Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/quickstart-configure-dev-box-service.md
To create a network connection, you must have:
- An existing virtual network (vnet) and subnet. If you don't have a vnet and subnet available, follow the instructions here: [Quickstart: Create a virtual network using the Azure portal](../virtual-network/quick-create-portal.md) to create them.
- A configured and working Hybrid AD join or Azure AD join.
- - **Hybrid AD join:** To learn how to join your AD DS domain-joined computers to Azure AD from an on-premises Active Directory Domain Services (AD DS) environment, see [Plan your hybrid Azure Active Directory join deployment](/azure/active-directory/devices/hybrid-azuread-join-plan).
- - **Azure AD join:** To learn how to join devices directly to Azure Active Directory (Azure AD), see [Plan your Azure Active Directory join deployment](/azure/active-directory/devices/azureadjoin-plan).
+ - **Hybrid AD join:** To learn how to join your AD DS domain-joined computers to Azure AD from an on-premises Active Directory Domain Services (AD DS) environment, see [Plan your hybrid Azure Active Directory join deployment](../active-directory/devices/hybrid-azuread-join-plan.md).
+ - **Azure AD join:** To learn how to join devices directly to Azure Active Directory (Azure AD), see [Plan your Azure Active Directory join deployment](../active-directory/devices/azureadjoin-plan.md).
- If your organization routes egress traffic through a firewall, you need to open certain ports to allow the Dev Box service to function. For more information, see [Network requirements](/windows-365/enterprise/requirements-network).

Follow these steps to create a network connection:
In this quickstart, you created a dev box project and the resources necessary to
To learn about how to manage dev box projects, advance to the next quickstart:

> [!div class="nextstepaction"]
-> [Configure a dev box project](./quickstart-configure-dev-box-project.md)
-
+> [Configure a dev box project](./quickstart-configure-dev-box-project.md)
devtest-labs Test App Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/test-app-azure.md
Azure DevTest Labs creates an Azure storage account when you create a lab. To cr
1. In the [Azure portal](https://portal.azure.com), go to the resource group that contains your lab.
1. Follow these steps to [select the storage account linked to your lab](./encrypt-storage.md#view-storage-account-contents).
-1. Follow these steps to [create a file share](/azure/storage/files/storage-how-to-create-file-share#create-a-file-share).
+1. Follow these steps to [create a file share](../storage/files/storage-how-to-create-file-share.md#create-a-file-share).
## Publish your app from Visual Studio
When the publish operation finishes, the application files are available on the
To access the application files in the Azure file share, you need to first mount the share to your lab VM.
-Follow these steps to [mount the Azure file share to your lab VM](/azure/storage/files/storage-how-to-use-files-windows#mount-the-azure-file-share).
+Follow these steps to [mount the Azure file share to your lab VM](../storage/files/storage-how-to-use-files-windows.md#mount-the-azure-file-share).
## Access the app on your lab VM
You can now run and test your app on your lab VM.
You've published an application directly from Visual Studio on your developer workstation into your lab VM.

- Learn how you can [integrate the lab creation and application deployment into your CI/CD pipeline](./use-devtest-labs-build-release-pipelines.md).
-- Learn more about [deploying an application to a folder with Visual Studio](/visualstudio/deployment/deploying-applications-services-and-components-resources#folder).
+- Learn more about [deploying an application to a folder with Visual Studio](/visualstudio/deployment/deploying-applications-services-and-components-resources#folder).
digital-twins How To Create App Registration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-create-app-registration.md
description: Create an Azure Active Directory app registration that can access Azure Digital Twins resources. Previously updated : 5/25/2022 Last updated : 01/11/2023
Use these steps to create the role assignment for your registration.
   | Setting | Value |
   | --- | --- |
   | Role | Select as appropriate |
- | Assign access to | User, group, or service principal |
- | Members | Search for the name or [client ID](#collect-client-id-and-tenant-id) of the app registration |
+ | Members > Assign access to | User, group, or service principal |
+ | Members > Members | **+ Select members**, then search for the name or [client ID](#collect-client-id-and-tenant-id) of the app registration |
- :::image type="content" source="../../includes/role-based-access-control/media/add-role-assignment-page.png" alt-text="Screenshot of the 'Add role assignment' page." lightbox="../../includes/role-based-access-control/media/add-role-assignment-page.png":::
+ :::image type="content" source="../../includes/role-based-access-control/media/add-role-assignment-page.png" alt-text="Screenshot of the Roles tab in the Add role assignment page." lightbox="../../includes/role-based-access-control/media/add-role-assignment-page.png":::
+
+ :::image type="content" source="media/how-to-create-app-registration/add-role.png" alt-text="Screenshot of the Members tab in the Add role assignment page." lightbox="media/how-to-create-app-registration/add-role.png":::
+
+ Once the role has been selected, **Review + assign** it.
#### Verify role assignment
dms Dms Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/dms-overview.md
Previously updated : 09/28/2021 Last updated : 01/05/2023 # What is Azure Database Migration Service?
For up-to-date info about the regional availability of Azure Database Migration
For up-to-date info about Azure Database Migration Service pricing, see [Azure Database Migration Service pricing](https://azure.microsoft.com/pricing/details/database-migration/).

++

## Next steps

* [Status of migration scenarios supported by Azure Database Migration Service](./resource-scenario-status.md)
dms Known Issues Azure Sql Migration Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-azure-sql-migration-azure-data-studio.md
Previously updated : 10/19/2022 Last updated : 01/05/2023 # Known issues, limitations, and troubleshooting
-Known issues and limitations associated with the Azure SQL Migration extension for Azure Data Studio.
+Known issues and troubleshooting steps associated with the Azure SQL Migration extension for Azure Data Studio.
> [!NOTE]
> When checking migration details using the Azure portal, Azure Data Studio, or PowerShell / Azure CLI, you might see the following error: *Operation Id {your operation id} was not found*. This can be either because you provided an operationId as part of an API parameter in your get call that does not exist, or because the migration details of your migration were deleted as part of a cleanup operation.
-### Error code: 2007 - CutoverFailedOrCancelled
+## Error code: 2007 - CutoverFailedOrCancelled
+
- **Message**: `Cutover failed or cancelled for database <DatabaseName>. Error details: The restore plan is broken because firstLsn <First LSN> of log backup <URL of backup in Azure Storage container>' is not <= lastLsn <last LSN> of Full backup <URL of backup in Azure Storage container>'. Restore to point in time.`
- **Cause**: The error might occur due to the backups being placed incorrectly in the Azure Storage container. If the backups are placed in the network file share, this error could also occur due to network connectivity issues.
- **Recommendation**: Ensure the database backups in your Azure Storage container are correct. If you're using network file share, there might be network-related issues and lags that are causing this error. Wait for the process to be completed.
-### Error code: 2009 - MigrationRestoreFailed
+## Error code: 2009 - MigrationRestoreFailed
+
- **Message**: `Migration for Database 'DatabaseName' failed with error cannot find server certificate with thumbprint.`
- **Cause**: The source SQL Server instance certificate from a database protected by Transparent Data Encryption (TDE) hasn't been migrated to the target Azure SQL Managed Instance or SQL Server on Azure Virtual Machine before migrating data.
Known issues and limitations associated with the Azure SQL Migration extension f
> [!NOTE]
> For more information on general troubleshooting steps for Azure SQL Managed Instance errors, see [Known issues with Azure SQL Managed Instance](/azure/azure-sql/managed-instance/doc-changes-updates-known-issues)
-### Error code: 2012 - TestConnectionFailed
+## Error code: 2012 - TestConnectionFailed
+
- **Message**: `Failed to test connections using provided Integration Runtime. Error details: 'Remote name could not be resolved.'`
- **Cause**: The Self-Hosted Integration Runtime can't connect to the service back end. This issue is caused by network settings in the firewall.
Known issues and limitations associated with the Azure SQL Migration extension f
- **Recommendation**: See [Troubleshoot Self-Hosted Integration Runtime](../data-factory/self-hosted-integration-runtime-troubleshoot-guide.md) for general troubleshooting steps for Integration Runtime connectivity errors.
-### Error code: 2014 - IntegrationRuntimeIsNotOnline
+## Error code: 2014 - IntegrationRuntimeIsNotOnline
+
- **Message**: `Integration Runtime <IR Name> in resource group <Resource Group Name> Subscription <SubscriptionID> isn't online.`
- **Cause**: The Self-Hosted Integration Runtime isn't online.
Known issues and limitations associated with the Azure SQL Migration extension f
- **Recommendation**: Make sure the Self-hosted Integration Runtime is registered and online. To perform the registration, you can use scripts from [Automating self-hosted integration runtime installation using local PowerShell scripts](../data-factory/self-hosted-integration-runtime-automation-scripts.md). Also, see [Troubleshoot self-hosted integration runtime](../data-factory/self-hosted-integration-runtime-troubleshoot-guide.md) for general troubleshooting steps for Integration Runtime connectivity errors.
-### Error code: 2030 - AzureSQLManagedInstanceNotReady
+## Error code: 2030 - AzureSQLManagedInstanceNotReady
+
- **Message**: `Azure SQL Managed Instance <Instance Name> isn't ready.`
- **Cause**: The Azure SQL Managed Instance isn't in the ready state.
Known issues and limitations associated with the Azure SQL Migration extension f
- **Recommendation**: Wait until the Azure SQL Managed Instance has finished deploying and is ready, then retry the process.
-### Error code: 2033 - SqlDataCopyFailed
+## Error code: 2033 - SqlDataCopyFailed
+
- **Message**: `Migration for Database <Database> failed in state <state>.`
- **Cause**: The ADF pipeline for data movement failed.
Known issues and limitations associated with the Azure SQL Migration extension f
- **Recommendation**: Check the MigrationStatusDetails page for more detailed error information.
-### Error code: 2038 - MigrationCompletedDuringCancel
+## Error code: 2038 - MigrationCompletedDuringCancel
+
- **Message**: `Migration cannot be canceled as Migration was completed during the cancel process. Target server: <Target server> Target database: <Target database>.`
- **Cause**: A cancellation request was received, but the migration was completed successfully before the cancellation was completed.
Known issues and limitations associated with the Azure SQL Migration extension f
- **Recommendation**: No action required; the migration succeeded.
-### Error code: 2039 - MigrationRetryNotAllowed
+## Error code: 2039 - MigrationRetryNotAllowed
+
- **Message**: `Migration isn't in a retriable state. Migration must be in state WaitForRetry. Current state: <State>, Target server: <Target Server>, Target database: <Target database>.`
- **Cause**: A retry request was received when the migration wasn't in a state allowing retrying.
Known issues and limitations associated with the Azure SQL Migration extension f
- **Recommendation**: No action required; the migration is ongoing or completed.
-### Error code: 2040 - MigrationTimeoutWaitingForRetry
+## Error code: 2040 - MigrationTimeoutWaitingForRetry
+
- **Message**: `Migration retry timeout limit of 8 hours reached. Target server: <Target Server>, Target database: <Target Database>.`
- **Cause**: The migration was idle in a failed but retriable state for 8 hours and was automatically canceled.
Known issues and limitations associated with the Azure SQL Migration extension f
- **Recommendation**: No action is required; the migration was canceled.
-### Error code: 2041 - DataCopyCompletedDuringCancel
+## Error code: 2041 - DataCopyCompletedDuringCancel
+
- **Message**: `Data copy finished successfully before canceling completed. Target schema is in bad state. Target server: <Target Server>, Target database: <Target Database>.`
- **Cause**: Cancel request was received, and the data copy was completed successfully, but the target database schema hasn't been returned to its original state.
WHERE STEP in (5,7,8) ORDER BY STEP DESC;
```
-### Error code: 2042 - PreCopyStepsCompletedDuringCancel
+## Error code: 2042 - PreCopyStepsCompletedDuringCancel
+
- **Message**: `Pre Copy steps finished successfully before canceling completed. Target database Foreign keys and temporal tables have been altered. Schema migration may be required again for future migrations. Target server: <Target Server>, Target database: <Target Database>.`
- **Cause**: Cancel request was received and the steps to prepare the target database for copy were completed successfully. The target database schema hasn't been returned to its original state.
WHERE STEP in (3,4,6);
```
-### Error code: 2043 - CreateContainerFailed
+## Error code: 2043 - CreateContainerFailed
+
- **Message**: `Create container <ContainerName> failed with error Error calling the endpoint '<URL>'. Response status code: 'NA - Unknown'. More details: Exception message: 'NA - Unknown [ClientSideException] Invalid Url:<URL>.`
- **Cause**: The request failed due to an underlying issue such as network connectivity, a DNS failure, a server certificate validation, or a timeout.
- **Recommendation**: For more troubleshooting steps, see [Troubleshoot Azure Data Factory and Synapse pipelines](../data-factory/data-factory-troubleshoot-guide.md#error-code-2108).
+## Azure SQL Database limitations
+
+Migrating to Azure SQL Database by using the Azure SQL extension for Azure Data Studio has the following limitations:
++
+## Azure SQL Managed Instance limitations
+
+Migrating to Azure SQL Managed Instance by using the Azure SQL extension for Azure Data Studio has the following limitations:
++
+## SQL Server on Azure VMs limitations
+
+Migrating to SQL Server on Azure VMs by using the Azure SQL extension for Azure Data Studio has the following limitations:
-## Database Migration Service issues
-Migrations that were completed before early December 2022 may be missing migration details. This action doesn't have a negative effect on new or ongoing migrations.
-
-## Azure SQL Database Migration limitations
-
-The Azure SQL Database offline migration (Preview) utilizes Azure Data Factory (ADF) pipelines for data movement and thus abides by ADF limitations. A corresponding ADF is created when a database migration service is also created. Thus factory limits apply per service.
- The machine where the SHIR is installed acts as the compute for migration. Make sure this machine can handle the cpu and memory load of the data copy. To learn more, review [SHIR recommendations](/azure/data-factory/create-self-hosted-integration-runtime).
-- 100,000 table per database limit.
-- 10,000 concurrent database migrations per service.
-- Migration speed heavily depends on the target Azure SQL Database SKU and the self-hosted Integration Runtime host.
-- Azure SQL Database migration scales poorly with table numbers due to ADF overhead in starting activities. If a database has thousands of tables, there will be a couple of seconds of startup time for each, even if they're composed of one row with 1 bit of data.
-- Azure SQL Database table names with double-byte characters currently aren't supported for migration. Mitigation is to rename tables before migration; they can be changed back to their original names after successful migration.
-- Tables with large blob columns may fail to migrate due to timeout.
-- Database names with SQL Server reserved words are currently not supported.
-- Database names that include semicolons are currently not supported.
-- Computed columns don't get migrated.
-
-## Azure SQL Managed Instance known issues and limitations
-
-- If migrating a single database, the database backups must be placed in a flat-file structure inside a database folder (including the container root folder), and the folders can't be nested, as it's not supported.
-- If migrating multiple databases using the same Azure Blob Storage container, you must place backup files for different databases in separate folders inside the container.
-- Overwriting existing databases using DMS in your target Azure SQL Managed Instance isn't supported.
-- Configuring high availability and disaster recovery on your target to match source topology isn't supported by DMS.
-- The following server objects aren't supported:
- - Logins
- - SQL Server Agent jobs
- - Credentials
- - SSIS packages
- - Server roles
- - Server audit
-- You can't use an existing self-hosted integration runtime created from Azure Data Factory for database migrations with DMS. Initially, the self-hosted integration runtime should be created using the Azure SQL migration extension in Azure Data Studio and can be reused for further database migrations.
-
-## SQL Server on Azure Virtual Machine known issues and limitations
-
-- If migrating a single database, the database backups must be placed in a flat-file structure inside a database folder (including the container root folder), and the folders can't be nested, as it's not supported.
-- If migrating multiple databases using the same Azure Blob Storage container, you must place backup files for different databases in separate folders inside the container.
-- Overwriting existing databases using DMS in your target SQL Server on Azure Virtual Machine isn't supported.
-- Configuring high availability and disaster recovery on your target to match source topology isn't supported by DMS.
-- The following server objects aren't supported:
- - Logins
- - SQL Server Agent jobs
- - Credentials
- - SSIS packages
- - Server roles
- - Server audit
-
-- You can't use an existing self-hosted integration runtime created from Azure Data Factory for database migrations with DMS. Initially, the self-hosted integration runtime should be created using the Azure SQL migration extension in Azure Data Studio and can be reused for further database migrations.
-- VM with SQL Server 2008 and below as target versions aren't supported when migrating to SQL Server on Azure Virtual Machines.
-- If you're using VM with SQL Server 2012 or SQL Server 2014, you need to store your source database backup files on an Azure Storage Blob Container instead of using the network share option. Store the backup files as page blobs since block blobs are only supported in SQL 2016 and after.
-- You must make sure the [SQL IaaS Agent Extension](/azure/azure-sql/virtual-machines/windows/sql-server-iaas-agent-extension-automate-management) in the target Azure Virtual Machine is in **Full** mode instead of **Lightweight** mode.
-- The [SQL IaaS Agent Extension](/azure/azure-sql/virtual-machines/windows/sql-server-iaas-agent-extension-automate-management) only supports management of the **Default Server Instance** or a **Single Named Instance**.
-- There's a temporary limit of 80 databases per target Azure Virtual Machine. A workaround to break the limit (reset the counter) is to **Uninstall** and **Reinstall** the [SQL IaaS Agent Extension](/azure/azure-sql/virtual-machines/windows/sql-server-iaas-agent-extension-automate-management) in the target Azure Virtual Machine.
-- Apart from configuring the Networking/Firewall of your Storage Account to allow your VM to access backup files, you also need to configure the Networking/Firewall of your VM to allow outbound connection to your storage account.

## Next steps
dms Tutorial Sql Server Azure Sql Database Offline Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-azure-sql-database-offline-ads.md
Previously updated : 09/28/2022 Last updated : 01/12/2023 # Tutorial: Migrate SQL Server to Azure SQL Database (preview) offline in Azure Data Studio
You've completed the migration to Azure SQL Database. We encourage you to go t
> [!IMPORTANT]
> Be sure to take advantage of the advanced cloud-based features of Azure SQL Database. The features include [built-in high availability](/azure/azure-sql/database/high-availability-sla), [threat detection](/azure/azure-sql/database/azure-defender-for-sql), and [monitoring and tuning your workload](/azure/azure-sql/database/monitor-tune-overview).
+## Limitations
++

## Next steps

- Complete a quickstart to [create an Azure SQL Database instance](/azure/azure-sql/database/single-database-create-quickstart).
- Learn more about [Azure SQL Database](/azure/azure-sql/database/sql-database-paas-overview).
- Learn how to [connect apps to Azure SQL Database](/azure/azure-sql/database/connect-query-content-reference-guide).
+- To troubleshoot, review [Known issues](known-issues-azure-sql-migration-azure-data-studio.md).
dms Tutorial Sql Server Managed Instance Offline Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-managed-instance-offline-ads.md
Previously updated : 10/05/2021 Last updated : 01/12/2023 # Tutorial: Migrate SQL Server to Azure SQL Managed Instance offline in Azure Data Studio
After all database backups are restored on the instance of Azure SQL Managed Ins
> [!IMPORTANT]
> After the migration, the availability of SQL Managed Instance with Business Critical service tier might take significantly longer than the General Purpose tier because three secondary replicas have to be seeded for an Always On High Availability group. The duration of this operation depends on the size of the data. For more information, see [Management operations duration](/azure/azure-sql/managed-instance/management-operations-overview#duration).
+## Limitations
+
+Migrating to Azure SQL Managed Instance by using the Azure SQL extension for Azure Data Studio has the following limitations:
+++

## Next steps

- Complete a quickstart to [migrate a database to SQL Managed Instance by using the T-SQL RESTORE command](/azure/azure-sql/managed-instance/restore-sample-database-quickstart).
- Learn more about [SQL Managed Instance](/azure/azure-sql/managed-instance/sql-managed-instance-paas-overview).
- Learn how to [connect apps to SQL Managed Instance](/azure/azure-sql/managed-instance/connect-application-instance).
+- To troubleshoot, review [Known issues](known-issues-azure-sql-migration-azure-data-studio.md).
dms Tutorial Sql Server Managed Instance Online Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-managed-instance-online-ads.md
Previously updated : 10/05/2021 Last updated : 01/12/2023 # Tutorial: Migrate SQL Server to an Azure SQL Managed Instance online by using Azure Data Studio with DMS
During the cutover process, the migration status changes from *in progress* to *
> [!IMPORTANT]
> After the cutover, availability of SQL Managed Instance with the Business Critical service tier can take significantly longer than General Purpose, because three secondary replicas have to be seeded for the Always On High Availability group. The duration of this operation depends on the size of the data. For more information, see [Management operations duration](/azure/azure-sql/managed-instance/management-operations-overview#duration).
+## Limitations
+
+Migrating to Azure SQL Managed Instance by using the Azure SQL extension for Azure Data Studio has the following limitations:
++

## Next steps

* For a tutorial showing you how to migrate a database to SQL Managed Instance using the T-SQL RESTORE command, see [Restore a backup to SQL Managed Instance using the restore command](/azure/azure-sql/managed-instance/restore-sample-database-quickstart).
* For information about SQL Managed Instance, see [What is SQL Managed Instance](/azure/azure-sql/managed-instance/sql-managed-instance-paas-overview).
* For information about connecting apps to SQL Managed Instance, see [Connect applications](/azure/azure-sql/managed-instance/connect-application-instance).
+* To troubleshoot, review [Known issues](known-issues-azure-sql-migration-azure-data-studio.md).
dms Tutorial Sql Server Managed Instance Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-managed-instance-online.md
Previously updated : 08/20/2021 Last updated : 01/12/2023 # Tutorial: Migrate SQL Server to an Azure SQL Managed Instance online using DMS
+> [!NOTE]
+> This tutorial uses an older version of the Azure Database Migration Service. For improved functionality and supportability, consider migrating to Azure SQL Managed Instance by using the [Azure SQL migration extension for Azure Data Studio](tutorial-sql-server-managed-instance-online-ads.md).
+
You can use Azure Database Migration Service to migrate the databases from a SQL Server instance to an [Azure SQL Managed Instance](/azure/azure-sql/managed-instance/sql-managed-instance-paas-overview) with minimal downtime. For additional methods that may require some manual effort, see the article [SQL Server instance migration to Azure SQL Managed Instance](/azure/azure-sql/migration-guides/managed-instance/sql-server-to-managed-instance-guide).

In this tutorial, you migrate the [AdventureWorks2016](/sql/samples/adventureworks-install-configure#download-backup-files) database from an on-premises instance of SQL Server to a SQL Managed Instance with minimal downtime by using Azure Database Migration Service.
dms Tutorial Sql Server To Azure Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-to-azure-sql.md
Previously updated : 01/03/2021 Last updated : 01/12/2023 # Tutorial: Migrate SQL Server to Azure SQL Database using DMS
+> [!NOTE]
+> This tutorial uses an older version of the Azure Database Migration Service. For improved functionality and supportability, consider migrating to Azure SQL Database by using the [Azure SQL migration extension for Azure Data Studio](tutorial-sql-server-azure-sql-database-offline-ads.md).
++

You can use Azure Database Migration Service to migrate the databases from a SQL Server instance to [Azure SQL Database](/azure/sql-database/). In this tutorial, you migrate the [AdventureWorks2016](/sql/samples/adventureworks-install-configure#download-backup-files) database restored to an on-premises instance of SQL Server 2016 (or later) to a single database or pooled database in Azure SQL Database by using Azure Database Migration Service.

You will learn how to:
dms Tutorial Sql Server To Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-to-managed-instance.md
Previously updated : 08/16/2021 Last updated : 01/12/2023 # Tutorial: Migrate SQL Server to an Azure SQL Managed Instance offline using DMS
+> [!NOTE]
+> This tutorial uses an older version of the Azure Database Migration Service. For improved functionality and supportability, consider migrating to Azure SQL Managed Instance by using the [Azure SQL migration extension for Azure Data Studio](tutorial-sql-server-managed-instance-offline-ads.md).
++

You can use Azure Database Migration Service to migrate the databases from a SQL Server instance to an [Azure SQL Managed Instance](/azure/azure-sql/managed-instance/sql-managed-instance-paas-overview). For additional methods that may require some manual effort, see the article [SQL Server to Azure SQL Managed Instance](/azure/azure-sql/migration-guides/managed-instance/sql-server-to-managed-instance-guide).

In this tutorial, you migrate the [AdventureWorks2016](/sql/samples/adventureworks-install-configure#download-backup-files) database from an on-premises instance of SQL Server to a SQL Managed Instance by using Azure Database Migration Service.
dms Tutorial Sql Server To Virtual Machine Offline Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-to-virtual-machine-offline-ads.md
Previously updated : 10/05/2021 Last updated : 01/12/2023 # Tutorial: Migrate SQL Server to SQL Server on Azure Virtual Machines offline in Azure Data Studio
In **Step 7: Summary** in the Migrate to Azure SQL wizard, review the configurat
After all database backups are restored on the instance of SQL Server on Azure Virtual Machines, an automatic migration cutover is initiated by Database Migration Service to ensure that the migrated database is ready to use. The migration status changes from **In progress** to **Succeeded**.
+## Limitations
+
+Migrating to SQL Server on Azure VMs by using the Azure SQL extension for Azure Data Studio has the following limitations:
++

## Next steps

- Complete a quickstart to [migrate a database to SQL Server on Azure Virtual Machines by using the T-SQL RESTORE command](/azure/azure-sql/migration-guides/virtual-machines/sql-server-to-sql-on-azure-vm-individual-databases-guide).
- Learn more about [SQL Server on Azure Windows Virtual Machines](/azure/azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview).
--Learn how to [connect apps to SQL Server on Azure Virtual Machines](/azure/azure-sql/virtual-machines/windows/ways-to-connect-to-sql).
+- Learn how to [connect apps to SQL Server on Azure Virtual Machines](/azure/azure-sql/virtual-machines/windows/ways-to-connect-to-sql).
+- To troubleshoot, review [Known issues](known-issues-azure-sql-migration-azure-data-studio.md).
dms Tutorial Sql Server To Virtual Machine Online Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-to-virtual-machine-online-ads.md
Previously updated : 10/05/2021 Last updated : 01/12/2023 # Tutorial: Migrate SQL Server to SQL Server on Azure Virtual Machine online using Azure Data Studio with DMS
To complete the cutover:
During the cutover process, the migration status changes from *in progress* to *completing*. The migration status changes to *succeeded* when the cutover process is completed. The database migration is successful, and the migrated database is ready for use.
+## Limitations
+
+Migrating to SQL Server on Azure VMs by using the Azure SQL extension for Azure Data Studio has the following limitations:
+++

## Next steps

* To learn how to migrate a database to SQL Server on Azure Virtual Machines using the T-SQL RESTORE command, see [Migrate a SQL Server database to SQL Server on a virtual machine](/azure/azure-sql/virtual-machines/windows/migrate-to-vm-from-sql-server).
* For information about SQL Server on Azure Virtual Machines, see [Overview of SQL Server on Azure Windows Virtual Machines](/azure/azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview).
* For information about connecting apps to SQL Server on Azure Virtual Machines, see [Connect applications](/azure/azure-sql/virtual-machines/windows/ways-to-connect-to-sql).
+* To troubleshoot, review [Known issues](known-issues-azure-sql-migration-azure-data-studio.md).
energy-data-services How To Convert Segy To Zgy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-convert-segy-to-zgy.md
In this article, you will learn how to convert SEG-Y formatted data to the ZGY f
empty: none ```
-8. Run the following commands using **sdutil** to see its working fine. Follow the directions in [Setup and Usage for Azure env](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-sdutil/-/tree/azure/stable#setup-and-usage-for-azure-env). Understand that depending on your OS and Python version, you may have to run `python3` command as opposed to `python`. If you run into errors with these commands, refer to the [SDUTIL tutorial](/azure/energy-data-services/tutorial-seismic-ddms-sdutil). See [How to generate a refresh token](how-to-generate-refresh-token.md). Once you've generated the token, store it in a place where you'll be able to access it in the future.
+8. Run the following commands using **sdutil** to see its working fine. Follow the directions in [Setup and Usage for Azure env](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-sdutil/-/tree/azure/stable#setup-and-usage-for-azure-env). Understand that depending on your OS and Python version, you may have to run `python3` command as opposed to `python`. If you run into errors with these commands, refer to the [SDUTIL tutorial](./tutorial-seismic-ddms-sdutil.md). See [How to generate a refresh token](how-to-generate-refresh-token.md). Once you've generated the token, store it in a place where you'll be able to access it in the future.
> [!NOTE]
> When running `python sdutil config init`, you don't need to enter anything when prompted with `Insert the azure (azureGlabEnv) application key:`.
OSDU&trade; is a trademark of The Open Group.
## Next steps

<!-- Add a context sentence for the following links -->

> [!div class="nextstepaction"]
-> [How to convert segy to ovds](./how-to-convert-segy-to-ovds.md)
+> [How to convert segy to ovds](./how-to-convert-segy-to-ovds.md)
energy-data-services Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/release-notes.md
Microsoft Energy Data Services is updated on an ongoing basis. To stay up to dat
- Deprecated functionality
- Plans for changes
-
<hr width=100%>
+## December 2022
+
+### Lockbox
+
+Most operations, support, and troubleshooting performed by Microsoft personnel do not require access to customer data. In those rare circumstances where such access is required, Customer Lockbox for Microsoft Energy Data Services provides an interface for you to review, and then approve or reject, data access requests. Microsoft Energy Data Services now supports Lockbox. [Learn more](../security/fundamentals/customer-lockbox-overview.md).
+++
+<hr width=100%>
## October 20, 2022
event-hubs Event Hubs Dotnet Standard Getstarted Send https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-dotnet-standard-getstarted-send.md
Title: Send or receive events from Azure Event Hubs using .NET (latest)
-description: This article provides a walkthrough to create a .NET Core application that sends/receives events to/from Azure Event Hubs by using the latest Azure.Messaging.EventHubs package.
+ Title: 'Quickstart: Send or receive events using .NET'
+description: A quickstart to create a .NET Core application that sends/receives events to/from Azure Event Hubs by using the Azure.Messaging.EventHubs package.
Last updated 02/28/2022 ms.devlang: csharp
-# Send events to and receive events from Azure Event Hubs - .NET (Azure.Messaging.EventHubs)
-This quickstart shows how to send events to and receive events from an event hub using the **Azure.Messaging.EventHubs** .NET library.
+# Quickstart: Send events to and receive events from Azure Event Hubs - .NET (Azure.Messaging.EventHubs)
+In this quickstart, you will learn how to send events to and receive events from an event hub using the **Azure.Messaging.EventHubs** .NET library.
> [!NOTE]
> You can find all .NET samples for Event Hubs in our [.NET SDK repository on GitHub](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/eventhub/).
Task ProcessErrorHandler(ProcessErrorEventArgs eventArgs)
:::image type="content" source="./media/getstarted-dotnet-standard-send-v2/verify-messages-portal-2.png" alt-text="Image of the Azure portal page to verify that the event hub sent events to the receiving app" lightbox="./media/getstarted-dotnet-standard-send-v2/verify-messages-portal-2.png":::
+## Clean up resources
+Delete the resource group that has the Event Hubs namespace, or delete only the namespace if you want to keep the resource group.
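
The cleanup itself is just a resource group (or namespace) deletion, so it can be scripted in any language; one sketch using the `azure-mgmt-resource` Python package (subscription ID and resource group name are placeholders):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

# Placeholders - replace with your subscription ID and resource group name.
client = ResourceManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Deleting the resource group removes the Event Hubs namespace and everything
# else in the group; wait() blocks until the deletion finishes.
client.resource_groups.begin_delete("<resource-group-name>").wait()
```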
-## Next steps
-This quick start provides step-by-step instructions to implement a simple scenario of sending a batch of events to an event hub and then receiving them. For more samples on other and advanced scenarios, check out the following samples on GitHub.
+## Samples
+This quick start provides step-by-step instructions to implement a simple scenario of sending a batch of events to an event hub and then receiving them. For more samples, select the following links.
- [Event Hubs samples on GitHub](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/eventhub/Azure.Messaging.EventHubs/samples)
- [Event processor samples on GitHub](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/eventhub/Azure.Messaging.EventHubs.Processor/samples)
- [Azure role-based access control (Azure RBAC) sample](https://github.com/Azure/azure-event-hubs/tree/master/samples/DotNet/Azure.Messaging.EventHubs/ManagedIdentityWebApp)
+
+## Next steps
+See the following tutorial:
+
+> [!div class="nextstepaction"]
+> [Tutorial: Visualize data anomalies in real-time events sent to Azure Event Hubs](event-hubs-tutorial-visualize-anomalies.md)
event-hubs Event Hubs Python Get Started Send https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-python-get-started-send.md
Title: Send or receive events from Azure Event Hubs using Python (latest)
-description: This article provides a walkthrough for creating a Python application that sends/receives events to/from Azure Event Hubs using the latest azure-eventhub package.
+ Title: Send or receive events from Azure Event Hubs using Python
+description: This article provides a walkthrough for creating a Python application that sends/receives events to/from Azure Event Hubs.
Previously updated : 10/10/2022 Last updated : 01/08/2023 ms.devlang: python-+
-# Send events to or receive events from event hubs by using Python (azure-eventhub)
+# Send events to or receive events from event hubs by using Python
This quickstart shows how to send events to and receive events from an event hub using the **azure-eventhub** Python package.

## Prerequisites
If you're new to Azure Event Hubs, see [Event Hubs overview](event-hubs-about.md
To complete this quickstart, you need the following prerequisites:

-- **Microsoft Azure subscription**. To use Azure services, including Azure Event Hubs, you need a subscription. If you don't have an existing Azure account, you can sign up for a [free trial](https://azure.microsoft.com/free/) or use your MSDN subscriber benefits when you [create an account](https://azure.microsoft.com).
-- Python 2.7 or 3.6 or later, with PIP installed and updated.
-- The Python package for Event Hubs.
+- **Microsoft Azure subscription**. To use Azure services, including Azure Event Hubs, you need a subscription. If you don't have an existing Azure account, sign up for a [free trial](https://azure.microsoft.com/free/).
+- Python 3.7 or later, with pip installed and updated.
+- Visual Studio Code (recommended) or any other integrated development environment (IDE).
+- **Create an Event Hubs namespace and an event hub**. The first step is to use the [Azure portal](https://portal.azure.com) to create an Event Hubs namespace, and obtain the management credentials that your application needs to communicate with the event hub. To create a namespace and an event hub, follow the procedure in [this article](event-hubs-create.md).
- To install the package, run this command in a command prompt that has Python in its path:
+### Install the packages to send events
- ```cmd
- pip install azure-eventhub
- ```
+To install the Python packages for Event Hubs, open a command prompt that has Python in its path. Change the directory to the folder where you want to keep your samples.
- Install the following package for receiving the events using Azure Blob storage as the checkpoint store:
+## [Passwordless (Recommended)](#tab/passwordless)
+
+```shell
+pip install azure-eventhub
+pip install azure-identity
+pip install aiohttp
+```
+
+## [Connection String](#tab/connection-string)
+
+```shell
+pip install azure-eventhub
+```
+++
+### Authenticate the app to Azure
+
- ```cmd
- pip install azure-eventhub-checkpointstoreblob-aio
- ```
-- **Create an Event Hubs namespace and an event hub**. The first step is to use the [Azure portal](https://portal.azure.com) to create an Event Hubs namespace, and obtain the management credentials that your application needs to communicate with the event hub. To create a namespace and an event hub, follow the procedure in [this article](event-hubs-create.md). Then, get the **connection string for the Event Hubs namespace** by following instructions from the article: [Get connection string](event-hubs-get-connection-string.md#azure-portal). You'll use the connection string later in this quickstart. ## Send events+ In this section, create a Python script to send events to the event hub that you created earlier. 1. Open your favorite Python editor, such as [Visual Studio Code](https://code.visualstudio.com/).
-2. Create a script called *send.py*. This script sends a batch of events to the event hub that you created earlier.
-3. Paste the following code into *send.py*:
+1. Create a script called *send.py*. This script sends a batch of events to the event hub that you created earlier.
+1. Paste the following code into *send.py*:
+
+ ## [Passwordless (Recommended)](#tab/passwordless)
+
+ In the code, use real values to replace the following placeholders:
+
+ * `EVENT_HUB_FULLY_QUALIFIED_NAMESPACE`
+ * `EVENT_HUB_NAME`
```python
import asyncio
- from azure.eventhub.aio import EventHubProducerClient
+
from azure.eventhub import EventData
+ from azure.eventhub.aio import EventHubProducerClient
+ from azure.identity import DefaultAzureCredential
+
+ EVENT_HUB_FULLY_QUALIFIED_NAMESPACE = "EVENT_HUB_FULLY_QUALIFIED_NAMESPACE"
+ EVENT_HUB_NAME = "EVENT_HUB_NAME"
+
+ credential = DefaultAzureCredential()
+
+ async def run():
+ # Create a producer client to send messages to the event hub.
+ # Specify a credential that has correct role assigned to access
+ # event hubs namespace and the event hub name.
+ producer = EventHubProducerClient(
+ fully_qualified_namespace=EVENT_HUB_FULLY_QUALIFIED_NAMESPACE,
+ eventhub_name=EVENT_HUB_NAME,
+ credential=credential,
+ )
+ async with producer:
+ # Create a batch.
+ event_data_batch = await producer.create_batch()
+
+ # Add events to the batch.
+ event_data_batch.add(EventData("First event "))
+ event_data_batch.add(EventData("Second event"))
+ event_data_batch.add(EventData("Third event"))
+
+ # Send the batch of events to the event hub.
+ await producer.send_batch(event_data_batch)
+
+ # Close credential when no longer needed.
+ await credential.close()
+
+ asyncio.run(run())
+ ```
+ ## [Connection String](#tab/connection-string)
+
+ In the code, use real values to replace the following placeholders:
+
+ * `EVENT_HUB_CONNECTION_STR`
+ * `EVENT_HUB_NAME`
+
+ ```python
+ import asyncio
+
+ from azure.eventhub import EventData
+ from azure.eventhub.aio import EventHubProducerClient
+
+ EVENT_HUB_CONNECTION_STR = "EVENT_HUB_CONNECTION_STR"
+ EVENT_HUB_NAME = "EVENT_HUB_NAME"
+
async def run():
    # Create a producer client to send messages to the event hub.
    # Specify a connection string to your event hubs namespace and
    # the event hub name.
- producer = EventHubProducerClient.from_connection_string(conn_str="EVENT HUBS NAMESPACE - CONNECTION STRING", eventhub_name="EVENT HUB NAME")
+ producer = EventHubProducerClient.from_connection_string(
+ conn_str=EVENT_HUB_CONNECTION_STR, eventhub_name=EVENT_HUB_NAME
+ )
    async with producer:
        # Create a batch.
        event_data_batch = await producer.create_batch()
-
+
# Add events to the batch.
- event_data_batch.add(EventData('First event '))
- event_data_batch.add(EventData('Second event'))
- event_data_batch.add(EventData('Third event'))
-
+ event_data_batch.add(EventData("First event "))
+ event_data_batch.add(EventData("Second event"))
+ event_data_batch.add(EventData("Third event"))
+
        # Send the batch of events to the event hub.
        await producer.send_batch(event_data_batch)
-
- loop = asyncio.get_event_loop()
- loop.run_until_complete(run())
-
+
+ asyncio.run(run())
```-
+
> [!NOTE]
- > For the complete source code, including informational comments, go to the [GitHub send_async.py page](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/eventhub/azure-eventhub/samples/async_samples/send_async.py).
+ > For examples of other options for sending events to an event hub asynchronously using a connection string, see the [GitHub send_async.py page](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/eventhub/azure-eventhub/samples/async_samples/send_async.py). The patterns shown there also apply when you send events with passwordless authentication.
## Receive events This quickstart uses Azure Blob storage as a checkpoint store. The checkpoint store is used to persist checkpoints (that is, the last read positions).
This quickstart uses Azure Blob storage as a checkpoint store. The checkpoint st
> > For example, If you are running on Azure Stack Hub version 2005, the highest available version for the Storage service is version 2019-02-02. By default, the Event Hubs SDK client library uses the highest available version on Azure (2019-07-07 at the time of the release of the SDK). In this case, besides following steps in this section, you will also need to add code to target the Storage service API version 2019-02-02. For an example on how to target a specific Storage API version, see the [synchronous](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/eventhub/azure-eventhub-checkpointstoreblob/samples/receive_events_using_checkpoint_store_storage_api_version.py) and [asynchronous](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/eventhub/azure-eventhub-checkpointstoreblob-aio/samples/receive_events_using_checkpoint_store_storage_api_version_async.py) samples on GitHub. - ### Create an Azure storage account and a blob container Create an Azure storage account and a blob container in it by doing the following steps: 1. [Create an Azure Storage account](../storage/common/storage-account-create.md?tabs=azure-portal)
-2. [Create a blob container](../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container)
-3. [Get the connection string to the storage account](../storage/common/storage-configure-connection-string.md)
+2. [Create a blob container](../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container).
+3. Authenticate to the blob container.
Be sure to record the connection string and container name for later use in the receive code.
+## [Passwordless (Recommended)](#tab/passwordless)
++
+## [Connection String](#tab/connection-string)
+
+[Get the connection string to the storage account](../storage/common/storage-configure-connection-string.md)
+++
+### Install the packages to receive events
+
+For the receiving side, you need to install one or more packages. In this quickstart, you use Azure Blob storage to persist checkpoints so that the program doesn't read the events that it has already read. It performs metadata checkpoints on received messages at regular intervals in a blob. This approach makes it easy to continue receiving messages later from where you left off.
+
+## [Passwordless (Recommended)](#tab/passwordless)
+
+```shell
+pip install azure-eventhub-checkpointstoreblob-aio
+pip install azure-identity
+```
+
+## [Connection String](#tab/connection-string)
+
+```shell
+pip install azure-eventhub-checkpointstoreblob-aio
+```
++ ### Create a Python script to receive events In this section, you create a Python script to receive events from your event hub: 1. Open your favorite Python editor, such as [Visual Studio Code](https://code.visualstudio.com/).
-2. Create a script called *recv.py*.
-3. Paste the following code into *recv.py*:
+1. Create a script called *recv.py*.
+1. Paste the following code into *recv.py*:
+
+ ## [Passwordless (Recommended)](#tab/passwordless)
+
+ In the code, use real values to replace the following placeholders:
+
+ * `BLOB_STORAGE_ACCOUNT_URL`
+ * `BLOB_CONTAINER_NAME`
+ * `EVENT_HUB_FULLY_QUALIFIED_NAMESPACE`
+ * `EVENT_HUB_NAME`
```python import asyncio
+
from azure.eventhub.aio import EventHubConsumerClient
- from azure.eventhub.extensions.checkpointstoreblobaio import BlobCheckpointStore
+ from azure.eventhub.extensions.checkpointstoreblobaio import (
+ BlobCheckpointStore,
+ )
+ from azure.identity.aio import DefaultAzureCredential
+
+ BLOB_STORAGE_ACCOUNT_URL = "BLOB_STORAGE_ACCOUNT_URL"
+ BLOB_CONTAINER_NAME = "BLOB_CONTAINER_NAME"
+ EVENT_HUB_FULLY_QUALIFIED_NAMESPACE = "EVENT_HUB_FULLY_QUALIFIED_NAMESPACE"
+ EVENT_HUB_NAME = "EVENT_HUB_NAME"
+
+ credential = DefaultAzureCredential()
+
+ async def on_event(partition_context, event):
+ # Print the event data.
+ print(
+ 'Received the event: "{}" from the partition with ID: "{}"'.format(
+ event.body_as_str(encoding="UTF-8"), partition_context.partition_id
+ )
+ )
+
+ # Update the checkpoint so that the program doesn't read the events
+ # that it has already read when you run it next time.
+ await partition_context.update_checkpoint(event)
+
+
+ async def main():
+ # Create an Azure blob checkpoint store to store the checkpoints.
+ checkpoint_store = BlobCheckpointStore(
+ blob_account_url=BLOB_STORAGE_ACCOUNT_URL,
+ container_name=BLOB_CONTAINER_NAME,
+ credential=credential,
+ )
+
+ # Create a consumer client for the event hub.
+ client = EventHubConsumerClient(
+ fully_qualified_namespace=EVENT_HUB_FULLY_QUALIFIED_NAMESPACE,
+ eventhub_name=EVENT_HUB_NAME,
+ consumer_group="$Default",
+ checkpoint_store=checkpoint_store,
+ credential=credential,
+ )
+ async with client:
+ # Call the receive method. Read from the beginning of the partition
+ # (starting_position: "-1")
+ await client.receive(on_event=on_event, starting_position="-1")
+
+ # Close credential when no longer needed.
+ await credential.close()
+
+ if __name__ == "__main__":
+ # Run the main method.
+ asyncio.run(main())
+ ```
+ ## [Connection String](#tab/connection-string)
+ In the code, use real values to replace the following placeholders:
+
+ * `BLOB_STORAGE_CONNECTION_STRING`
+ * `BLOB_CONTAINER_NAME`
+ * `EVENT_HUB_CONNECTION_STR`
+ * `EVENT_HUB_NAME`
+
+ ```python
+ import asyncio
+
+ from azure.eventhub.aio import EventHubConsumerClient
+ from azure.eventhub.extensions.checkpointstoreblobaio import (
+ BlobCheckpointStore,
+ )
+
+ BLOB_STORAGE_CONNECTION_STRING = "BLOB_STORAGE_CONNECTION_STRING"
+ BLOB_CONTAINER_NAME = "BLOB_CONTAINER_NAME"
+ EVENT_HUB_CONNECTION_STR = "EVENT_HUB_CONNECTION_STR"
+ EVENT_HUB_NAME = "EVENT_HUB_NAME"
+
+
async def on_event(partition_context, event): # Print the event data.
- print("Received the event: \"{}\" from the partition with ID: \"{}\"".format(event.body_as_str(encoding='UTF-8'), partition_context.partition_id))
-
+ print(
+ 'Received the event: "{}" from the partition with ID: "{}"'.format(
+ event.body_as_str(encoding="UTF-8"), partition_context.partition_id
+ )
+ )
+
# Update the checkpoint so that the program doesn't read the events # that it has already read when you run it next time. await partition_context.update_checkpoint(event)-
+
async def main(): # Create an Azure blob checkpoint store to store the checkpoints.
- checkpoint_store = BlobCheckpointStore.from_connection_string("AZURE STORAGE CONNECTION STRING", "BLOB CONTAINER NAME")
-
+ checkpoint_store = BlobCheckpointStore.from_connection_string(
+ BLOB_STORAGE_CONNECTION_STRING, BLOB_CONTAINER_NAME
+ )
+
# Create a consumer client for the event hub.
- client = EventHubConsumerClient.from_connection_string("EVENT HUBS NAMESPACE CONNECTION STRING", consumer_group="$Default", eventhub_name="EVENT HUB NAME", checkpoint_store=checkpoint_store)
+ client = EventHubConsumerClient.from_connection_string(
+ EVENT_HUB_CONNECTION_STR,
+ consumer_group="$Default",
+ eventhub_name=EVENT_HUB_NAME,
+ checkpoint_store=checkpoint_store,
+ )
async with client:
- # Call the receive method. Read from the beginning of the partition (starting_position: "-1")
- await client.receive(on_event=on_event, starting_position="-1")
-
- if __name__ == '__main__':
+ # Call the receive method. Read from the beginning of the
+ # partition (starting_position: "-1")
+ await client.receive(on_event=on_event, starting_position="-1")
+
+ if __name__ == "__main__":
        # Run the main method.
- loop = asyncio.get_event_loop()
- loop.run_until_complete(main())
+ asyncio.run(main())
```
+
+ > [!NOTE]
- > For the complete source code, including additional informational comments, go to the [GitHub recv_with_checkpoint_store_async.py
-page](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/eventhub/azure-eventhub/samples/async_samples/recv_with_checkpoint_store_async.py).
+ > For examples of other options for receiving events from an event hub asynchronously using a connection string, see the [GitHub recv_with_checkpoint_store_async.py page](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/eventhub/azure-eventhub/samples/async_samples/recv_with_checkpoint_store_async.py). The patterns shown there also apply when you receive events with passwordless authentication.
### Run the receiver app
To run the script, open a command prompt that has Python in its path, and then r
python send.py ```
-The receiver window should display the messages that were sent to the event hub.
+The receiver window should display the messages that were sent to the event hub.
+### Troubleshooting
+
+If you don't see events in the receiver window or the code reports an error, try the following troubleshooting tips:
+
+* If you don't see results from *recv.py*, run *send.py* several times.
+
+* If you see errors about "coroutine" when using the passwordless code (with credentials), make sure you're importing `DefaultAzureCredential` from `azure.identity.aio` (see the sketch after this list).
+
+* If you see "Unclosed client session" with passwordless code (with credentials), make sure you close the credential when finished. For more information, see [Async credentials](/python/api/overview/azure/identity-readme?view=azure-python&preserve-view=true#async-credentials).
+
+* If you see authorization errors with *recv.py* when accessing storage, make sure you followed the steps in [Create an Azure storage account and a blob container](#create-an-azure-storage-account-and-a-blob-container) and assigned the **Storage Blob Data Contributor** role to the service principal.
+
+* If you receive events with different partition IDs, this result is expected. Partitions are a data organization mechanism that relates to the downstream parallelism required in consuming applications. The number of partitions in an event hub directly relates to the number of concurrent readers you expect to have. For more information, see [Partitions](event-hubs-features.md#partitions).
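As a minimal sketch tying several of these tips together (it reuses the placeholder values from *recv.py*; they aren't real endpoints), the following code imports the async credential from `azure.identity.aio`, closes it automatically by using it as an async context manager, and prints the event hub's partition IDs:

```python
import asyncio

from azure.eventhub.aio import EventHubConsumerClient
from azure.identity.aio import DefaultAzureCredential  # async credential, not azure.identity

EVENT_HUB_FULLY_QUALIFIED_NAMESPACE = "EVENT_HUB_FULLY_QUALIFIED_NAMESPACE"
EVENT_HUB_NAME = "EVENT_HUB_NAME"

async def main():
    # Using the credential as an async context manager closes it on exit,
    # which avoids "Unclosed client session" warnings.
    async with DefaultAzureCredential() as credential:
        client = EventHubConsumerClient(
            fully_qualified_namespace=EVENT_HUB_FULLY_QUALIFIED_NAMESPACE,
            eventhub_name=EVENT_HUB_NAME,
            consumer_group="$Default",
            credential=credential,
        )
        async with client:
            # The number of partition IDs shows how many concurrent readers
            # the event hub supports.
            partition_ids = await client.get_partition_ids()
            print("Partition IDs:", partition_ids)

asyncio.run(main())
```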
## Next steps In this quickstart, you've sent and received events asynchronously. To learn how to send and receive events synchronously, go to the [GitHub sync_samples page](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/eventhub/azure-eventhub/samples/sync_samples). For all the samples (both synchronous and asynchronous) on GitHub, go to [Azure Event Hubs client library for Python samples](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/eventhub/azure-eventhub/samples).
expressroute Expressroute Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-faqs.md
See [ExpressRoute partners and locations](expressroute-locations.md) for informa
Yes. Microsoft 365 service endpoints are reachable through the Internet, even though ExpressRoute has been configured for your network. Check with your organization's networking team if the network at your location is configured to connect to Microsoft 365 services through ExpressRoute. ### How can I plan for high availability for Microsoft 365 network traffic on Azure ExpressRoute?
-See the recommendation for [High availability and failover with Azure ExpressRoute](/azure/expressroute/designing-for-high-availability-with-expressroute)
+See the recommendation for [High availability and failover with Azure ExpressRoute](./designing-for-high-availability-with-expressroute.md)
### Can I access Office 365 US Government Community (GCC) services over an Azure US Government ExpressRoute circuit?
You can associate a single ExpressRoute Direct circuit with multiple ExpressRout
### Does the ExpressRoute service store customer data?
-No.
+No.
expressroute Expressroute Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-prerequisites.md
If you plan to enable Microsoft 365 on ExpressRoute, review the following docume
* [Azure ExpressRoute for Microsoft 365](/microsoft-365/enterprise/azure-expressroute) * [Routing with ExpressRoute for Microsoft 365](/microsoft-365/enterprise/azure-expressroute)
-* [High availability and failover with ExpressRoute](/azure/expressroute/designing-for-high-availability-with-expressroute)
+* [High availability and failover with ExpressRoute](./designing-for-high-availability-with-expressroute.md)
* [Microsoft 365 URLs and IP address ranges](/microsoft-365/enterprise/urls-and-ip-address-ranges) * [Network planning and performance tuning for Microsoft 365](/microsoft-365/enterprise/network-planning-and-performance) * [Network and migration planning for Microsoft 365](/microsoft-365/enterprise/network-and-migration-planning)
If you plan to enable Microsoft 365 on ExpressRoute, review the following docume
* Configure your ExpressRoute connection. * [Create an ExpressRoute circuit](expressroute-howto-circuit-arm.md) * [Configure routing](expressroute-howto-routing-arm.md)
- * [Link a VNet to an ExpressRoute circuit](expressroute-howto-linkvnet-arm.md)
+ * [Link a VNet to an ExpressRoute circuit](expressroute-howto-linkvnet-arm.md)
external-attack-surface-management Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/index.md
Microsoft Defender External Attack Surface Management contains both global data
For security purposes, Microsoft collects users' IP addresses when they log in. This data is stored for up to 30 days but may be stored longer if needed to investigate potential fraudulent or malicious use of the product.
-In the case of a region down scenario, customers should see no downtime as Defender EASM uses technologies that replicate data to a backup region.
+In the case of a region down scenario, customers should see no downtime as Defender EASM uses technologies that replicate data to a backup region. Defender EASM processes customer data. By default, customer data is replicated to the paired region.
-
-Defender EASM processes customer data. By default, customer data is replicated to the paired region.
+The Microsoft compliance framework requires that all customer data be deleted within 180 days, in accordance with [Azure subscription states](../cost-management-billing/manage/subscription-states.md) handling. This also includes storage of customer data in offline locations, such as database backups.
## Next Steps
external-attack-surface-management Using And Managing Discovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/using-and-managing-discovery.md
Microsoft has preemptively configured the attack surfaces of many organizations,
When first accessing your Defender EASM instance, select "Getting Started" in the "General" section to search for your organization in the list of automated attack surfaces. Then select your organization from the list and click "Build my Attack Surface".
-![Screenshot of pre-configured attack surface selection screen](media/Discovery_1.png)
At this point, the discovery will be running in the background. If you selected a pre-configured Attack Surface from the list of available organizations, you will be redirected to the Dashboard Overview screen where you can view insights into your organization's infrastructure in Preview Mode. Review these dashboard insights to become familiar with your Attack Surface as you wait for additional assets to be discovered and populated in your inventory. See the [Understanding dashboards](understanding-dashboards.md) article for more information on how to derive insights from these dashboards.
Custom discoveries are organized into Discovery Groups. They are independent see
1. Select the **Discovery** panel under the **Manage** section in the left-hand navigation column.
- ![Screenshot of EASM instance from overview page with manage section highlighted](media/Discovery_2.png)
+ :::image type="content" source="media/Discovery_2.png" alt-text="Screenshot of EASM instance from overview page with manage section highlighted.":::
2. This Discovery page shows your list of Discovery Groups by default. This list will be empty when you first access the platform. To run your first discovery, click **Add Discovery Group**.
- ![Screenshot of Discovery screen with "add disco group" highlighted](media/Discovery_3.png)
+ :::image type="content" source="media/Discovery_3.png" alt-text="Screenshot of Discovery screen with "add disco group" highlighted.":::
3. First, name your new discovery group and add a description. The **Recurring Frequency** field allows you to schedule discovery runs for this group, scanning for new assets related to the designated seeds on a continuous basis. The default recurrence selection is **Weekly**; Microsoft recommends this cadence to ensure that your organization's assets are routinely monitored and updated. For a single, one-time discovery run, select **Never**. However, we recommend that users keep the **Weekly** default cadence and instead turn off historical monitoring within their Discovery Group settings if they later decide to discontinue recurrent discovery runs. Select **Next: Seeds >**
- ![Screenshot of first page of disco group setup](media/Discovery_4.png)
+ :::image type="content" source="media/Discovery_4.png" alt-text="Screenshot of first page of disco group setup.":::
4. Next, select the seeds that you'd like to use for this Discovery Group. Seeds are known assets that belong to your organization; the Defender EASM platform scans these entities, mapping their connections to other online infrastructure to create your Attack Surface.
- ![Screenshot of seed selection page of disco group setup](media/Discovery_5.png)
+ :::image type="content" source="media/Discovery_5.png" alt-text="Screenshot of seed selection page of disco group setup.":::
The **Quick Start** option lets you search for your organization in a list of pre-populated Attack Surfaces. You can quickly create a Discovery Group based on the known assets belonging to your organization.
- ![Screenshot of pre-baked attack surface selection page, then output in seed list](media/Discovery_6.png)
+ :::image type="content" source="media/Discovery_6.png" alt-text="Screenshot of pre-baked attack surface selection page, then output in seed list.":::
- ![Screenshot of pre-baked attack surface selection page.](media/Discovery_7.png)
+ :::image type="content" source="media/Discovery_7.png" alt-text="Screenshot of pre-baked attack surface selection page.":::
Alternatively, users can manually input their seeds. Defender EASM accepts organization names, domains, IP blocks, hosts, email contacts, ASNs, and WhoIs organizations as seed values. You can also specify entities to exclude from asset discovery to ensure they are not added to your inventory if detected. For example, this is useful for organizations that have subsidiaries that will likely be connected to their central infrastructure, but do not belong to your organization.
Custom discoveries are organized into Discovery Groups. They are independent see
5. Review your group information and seed list, then select **Create & Run**.
- ![Screenshot of review + create screen](media/Discovery_8.png)
+ :::image type="content" source="media/Discovery_8.png" alt-text="Screenshot of review + create screen.":::
You will then be taken back to the main Discovery page that displays your Discovery Groups. Once your discovery run is complete, you will see new assets added to your Confirmed Inventory.
Custom discoveries are organized into Discovery Groups. They are independent see
Users can manage their discovery groups from the main "Discovery" page. The default view displays a list of all your discovery groups and some key data about each one. From the list view, you can see the number of seeds, recurrence schedule, last run date and created date for each group.
-![Screenshot of discovery groups screen](media/Discovery_9.png)
Click on any discovery group to view more information, edit the group, or immediately kickstart a new discovery process.
The discovery group details page contains the run history for the group. Once ex
Run history is organized by the seed assets scanned during the discovery run. To see a list of the applicable seeds, click "Details". This opens a right-hand pane that lists all the seeds and exclusions by kind and name.
-![Screenshot of run history for disco group screen](media/Discovery_10.png)
### Viewing seeds and exclusions
The Discovery page defaults to a list view of Discovery Groups, but users can al
The seed list view displays seed values with three columns: type, source name, and discovery group. The "type" field displays the category of the seed asset; the most common seeds are domains, hosts and IP blocks, but you can also use email contacts, ASNs, certificate common names or WhoIs organizations. The source name is simply the value that was entered in the appropriate type box when creating the discovery group. The final column shows a list of discovery groups that use the seed; each value is clickable, taking you to the details page for that discovery group.
-![Screenshot of seeds view of discovery page](media/Discovery_11.png)
### Exclusions
firewall-manager Secure Cloud Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/secure-cloud-network.md
Previously updated : 01/26/2022 Last updated : 01/12/2023
In this tutorial, you learn how to:
> The procedure in this tutorial uses Azure Firewall Manager to create a new Azure Virtual WAN secured hub. > You can use Firewall Manager to upgrade an existing hub, but you can't configure Azure **Availability Zones** for Azure Firewall. > It is also possible to convert an existing hub to a secured hub using the Azure portal, as described in [Configure Azure Firewall in a Virtual WAN hub](../virtual-wan/howto-firewall.md). But like Azure Firewall Manager, you can't configure **Availability Zones**.
-> To upgrade an existing hub and specify **Availability Zones** for Azure Firewall (recommended) you must follow the upgrade procedure in [Tutorial: Secure your virtual hub using Azure PowerShell](secure-cloud-network-powershell.md). secure-cloud-network-powershell).
+> To upgrade an existing hub and specify **Availability Zones** for Azure Firewall (recommended) you must follow the upgrade procedure in [Tutorial: Secure your virtual hub using Azure PowerShell](secure-cloud-network-powershell.md).
## Prerequisites
The two virtual networks will each have a workload server in them and will be pr
3. For **Subscription**, select your subscription. 4. For **Resource group**, select **Create new**, and type **fw-manager-rg** for the name and select **OK**. 5. For **Name**, type **Spoke-01**.
-6. For **Region**, select **(US) East US**.
+6. For **Region**, select **East US**.
7. Select **Next: IP Addresses**.
-8. For **Address space**, type **10.0.0.0/16**.
+8. For **Address space**, accept the default **10.0.0.0/16**.
9. Select **Add subnet**. 10. For **Subnet name**, type **Workload-01-SN**. 11. For **Subnet address range**, type **10.0.1.0/24**.
The two virtual networks will each have a workload server in them and will be pr
13. Select **Review + create**. 14. Select **Create**.
-Repeat this procedure to create another similar virtual network:
+Repeat this procedure to create another similar virtual network in the **fw-manager-rg** resource group:
Name: **Spoke-02**<br> Address space: **10.1.0.0/16**<br>
Create your secured virtual hub using Firewall Manager.
:::image type="content" source="./media/secure-cloud-network/1-create-new-secured-virtual-hub.jpg" alt-text="Screenshot of creating a new secured virtual hub." lightbox="./media/secure-cloud-network/1-create-new-secured-virtual-hub.jpg":::
+1. Select your **Subscription**.
5. For **Resource group**, select **fw-manager-rg**. 6. For **Region**, select **East US**. 7. For the **Secured virtual hub name**, type **Hub-01**. 8. For **Hub address space**, type **10.2.0.0/16**.
+10. Select **New vWAN**.
9. For the new virtual WAN name, type **Vwan-01**.
-10. Select **New vWAN** and select **Standard** for "Type"
-11. Leave the **Include VPN gateway to enable Trusted Security Partners** check box cleared.
+1. For **Type**, select **Standard**.
+1. Leave the **Include VPN gateway to enable Trusted Security Partners** check box cleared.
:::image type="content" source="./media/secure-cloud-network/2-create-new-secured-virtual-hub.png" alt-text="Screenshot of creating a new virtual hub with properties." lightbox="./media/secure-cloud-network/2-create-new-secured-virtual-hub.png":::
Create your secured virtual hub using Firewall Manager.
:::image type="content" source="./media/secure-cloud-network/3-azure-firewall-parameters-with-zones.png" alt-text="Screenshot of configuring Azure Firewall parameters." lightbox="./media/secure-cloud-network/3-azure-firewall-parameters-with-zones.png":::
-16. Select the **Firewall Policy** to apply at the new Azure Firewall instance. Select **Default Deny Policy**, you will refine your settings later in this article.
-17. Select **Next: Trusted Security Partner**.
+16. Select the **Firewall Policy** to apply to the new Azure Firewall instance. Select **Default Deny Policy**; you'll refine your settings later in this article.
+17. Select **Next: Security Partner Provider**.
:::image type="content" source="./media/secure-cloud-network/4-trusted-security-partner.png" alt-text="Screenshot of configuring Trusted Partners parameters." lightbox="./media/secure-cloud-network/4-trusted-security-partner.png":::
You can get the firewall public IP address after the deployment completes.
1. Open **Firewall Manager**. 2. Select **Virtual hubs**. 3. Select **hub-01**.
-4. Select **Public IP configuration**.
+4. Under **Azure Firewall**, select **Public IP configuration**.
5. Note the public IP address to use later. ### Connect the hub and spoke virtual networks
Now you can peer the hub and spoke virtual networks.
7. Select **Spoke-01** for the virtual network and select **Workload-01-SN** for the subnet. 8. For **Public IP**, select **None**. 9. Accept the other defaults and select **Next: Management**.
-10. Select **Disable** to disable boot diagnostics. Accept the other defaults and select **Review + create**.
-11. Review the settings on the summary page, and then select **Create**.
+1. Select **Next:Monitoring**.
+1. Select **Disable** to disable boot diagnostics. Accept the other defaults and select **Review + create**.
+1. Review the settings on the summary page, and then select **Create**.
Use the information in the following table to configure another virtual machine named **Srv-Workload-02**. The rest of the configuration is the same as the **Srv-workload-01** virtual machine.
A firewall policy defines collections of rules to direct traffic on one or more
9. For **Destination Ports**, type **3389**. 10. For **Destination Type**, select **IP Address**. 11. For **Destination**, type the firewall public IP address that you noted previously.
- 12. For **Translated address**, type the private IP address for **Srv-Workload-01** that you noted previously.
- 13. For **Translated port**, type **3389**.
- 14. Select **Add**.
+ 1. For **Translated type**, select **IP Address**.
+ 1. For **Translated address**, type the private IP address for **Srv-Workload-01** that you noted previously.
+ 1. For **Translated port**, type **3389**.
+ 1. Select **Add**.
22. Add a **Network rule** so you can connect a remote desktop from **Srv-Workload-01** to **Srv-Workload-02**.
A firewall policy defines collections of rules to direct traffic on one or more
11. For **Destination Type**, select **IP Address**. 12. For **Destination**, type the **Srv-Workload-02** private IP address that you noted previously. 13. Select **Add**.
- 14. Select **Review + create**.
- 15. Select **Create**.
-23. In the **IDPS** page, click on **Next: Threat Intelligence**
+
+1. Select **Next: IDPS**.
+23. On the **IDPS** page, select **Next: Threat Intelligence**.
:::image type="content" source="./media/secure-cloud-network/6-create-azure-firewall-policy-idps7.png" alt-text="Screenshot of configuring IDPS settings." lightbox="./media/secure-cloud-network/6-create-azure-firewall-policy-idps7.png":::
-24. In the **Threat Intelligence** page, accept defaults and click on **Review and Create**:
+24. On the **Threat Intelligence** page, accept the defaults and select **Review and Create**:
:::image type="content" source="./media/secure-cloud-network/7a-create-azure-firewall-policy-threat-intelligence7.png" alt-text="Screenshot of configuring Threat Intelligence settings." lightbox="./media/secure-cloud-network/7a-create-azure-firewall-policy-threat-intelligence7.png":::
-25. Review and confirm your selection clicking on **Create** button.
+25. Review and confirm your selections, and then select **Create**.
## Associate policy
firewall-manager Vhubs And Vnets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/vhubs-and-vnets.md
Previously updated : 09/14/2020 Last updated : 01/11/2023
firewall Tutorial Firewall Deploy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/tutorial-firewall-deploy-portal.md
Previously updated : 08/01/2022 Last updated : 01/11/2023 #Customer intent: As an administrator new to this service, I want to control outbound network access from resources located in an Azure subnet.
The resource group contains all the resources used in this procedure.
1. Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com). 2. On the Azure portal menu, select **Resource groups** or search for and select *Resource groups* from any page. Then select **Create**. 4. For **Subscription**, select your subscription.
-1. For **Resource group name**, type **Test-FW-RG**.
-1. For **Resource group location**, select a location. All other resources that you create must be in the same location.
+1. For **Resource group** name, type **Test-FW-RG**.
+1. For **Region**, select a region. All other resources that you create must be in the same region.
1. Select **Review + create**. 1. Select **Create**.
This VNet will have two subnets.
> [!NOTE] > The size of the AzureFirewallSubnet subnet is /26. For more information about the subnet size, see [Azure Firewall FAQ](firewall-faq.yml#why-does-azure-firewall-need-a--26-subnet-size).
-1. On the Azure portal menu or from the **Home** page, select **Create a resource**.
-1. Select **Networking** > **Virtual network**.
+1. On the Azure portal menu or from the **Home** page, search for **Virtual networks**.
+1. Select **Virtual networks** in the result pane.
+1. Select **Create**.
1. For **Subscription**, select your subscription. 1. For **Resource group**, select **Test-FW-RG**. 1. For **Name**, type **Test-FW-VN**.
-1. For **Region**, select the same location that you used previously.
1. Select **Next: IP addresses**.
-1. For **IPv4 Address space**, accept the default **10.0.0.0/16**.
-1. Under **Subnet name**, select **default**.
-1. For **Subnet name** change it to **AzureFirewallSubnet**. The firewall will be in this subnet, and the subnet name **must** be AzureFirewallSubnet.
-1. For **Address range**, change it to **10.0.1.0/26**.
+1. For **Address space**, accept the default **10.0.0.0/16**.
+1. Under **Subnet name**, select **default** and change it to **AzureFirewallSubnet**. The firewall will be in this subnet, and the subnet name **must** be AzureFirewallSubnet.
+1. For **Subnet address range**, change it to **10.0.1.0/26**.
1. Select **Save**. Next, create a subnet for the workload server. 1. Select **Add subnet**.
-4. For **Subnet name**, type **Workload-SN**.
-5. For **Subnet address range**, type **10.0.2.0/24**.
-6. Select **Add**.
-7. Select **Review + create**.
-8. Select **Create**.
+1. For **Subnet name**, type **Workload-SN**.
+1. For **Subnet address range**, type **10.0.2.0/24**.
+1. Select **Add**.
+1. Select **Review + create**.
+1. Select **Create**.
### Create a virtual machine
Now create the workload virtual machine, and place it in the **Workload-SN** sub
8. Make sure that **Test-FW-VN** is selected for the virtual network and the subnet is **Workload-SN**. 9. For **Public IP**, select **None**. 11. Accept the other defaults and select **Next: Management**.
-12. For **Boot diagnostics**, select **Disable** to disable boot diagnostics. Accept the other defaults and select **Review + create**.
-13. Review the settings on the summary page, and then select **Create**.
-1. After the deployment is complete, select **Srv-Work** and note the private IP address that you'll need to use later.
+1. Accept the defaults and select **Next: Monitoring**.
+1. For **Boot diagnostics**, select **Disable** to disable boot diagnostics. Accept the other defaults and select **Review + create**.
+1. Review the settings on the summary page, and then select **Create**.
+1. After the deployment is complete, select **Go to resource** and note the **Srv-Work** private IP address that you'll need to use later.
[!INCLUDE [ephemeral-ip-note.md](../../includes/ephemeral-ip-note.md)]
Deploy the firewall into the VNet.
|Resource group |**Test-FW-RG** | |Name |**Test-FW01**| |Region |Select the same location that you used previously|
- |Firewall tier|**Standard**|
+ |Firewall SKU|**Standard**|
|Firewall management|**Use Firewall rules (classic) to manage this firewall**| |Choose a virtual network |**Use existing**: **Test-FW-VN**| |Public IP address |**Add new**<br>**Name**: **fw-pip**|
Deploy the firewall into the VNet.
6. Review the summary, and then select **Create** to create the firewall. This will take a few minutes to deploy.
-7. After deployment completes, go to the **Test-FW-RG** resource group, and select the **Test-FW01** firewall.
+7. After deployment completes, select **Go to resource**.
8. Note the firewall private and public IP addresses. You'll use these addresses later. ## Create a default route When creating a route for outbound and inbound connectivity through the firewall, a default route to 0.0.0.0/0 with the virtual appliance private IP as a next hop is sufficient. This will take care of any outgoing and incoming connections to go through the firewall. As an example, if the firewall is fulfilling a TCP-handshake and responding to an incoming request, then the response is directed to the IP address who sent the traffic. This is by design.
-As a result, there is no need create an additional UDR to include the AzureFirewallSubnet IP range. This may result in dropped connections. The original default route is sufficient.
+As a result, there's no need to create an additional user-defined route to include the AzureFirewallSubnet IP range. Doing so may result in dropped connections. The original default route is sufficient.
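If you'd rather script this step, here's a minimal sketch using the `azure-mgmt-network` package; the subscription ID and firewall private IP are placeholders, and the resource names match the ones used in this tutorial:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import Route, RouteTable

SUBSCRIPTION_ID = "<subscription-id>"          # placeholder
FIREWALL_PRIVATE_IP = "<firewall-private-ip>"  # placeholder, noted earlier

network_client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# One 0.0.0.0/0 route with the firewall's private IP as next hop covers both
# outbound and return traffic; no extra route for AzureFirewallSubnet is needed.
network_client.route_tables.begin_create_or_update(
    "Test-FW-RG",
    "Firewall-route",
    RouteTable(
        location="eastus",
        routes=[
            Route(
                name="fw-default-route",
                address_prefix="0.0.0.0/0",
                next_hop_type="VirtualAppliance",
                next_hop_ip_address=FIREWALL_PRIVATE_IP,
            )
        ],
    ),
).result()
```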
For the **Workload-SN** subnet, configure the outbound default route to go through the firewall.
-1. On the Azure portal menu, select **Create a resource**.
-2. Under **Networking**, select **Route table**.
-5. For **Subscription**, select your subscription.
-6. For **Resource group**, select **Test-FW-RG**.
-7. For **Region**, select the same location that you used previously.
-4. For **Name**, type **Firewall-route**.
+1. On the Azure portal search for **Route tables**.
+1. Select **Route tables** in the results pane.
+1. Select **Create**.
+1. For **Subscription**, select your subscription.
+1. For **Resource group**, select **Test-FW-RG**.
+1. For **Region**, select the same location that you used previously.
+1. For **Name**, type **Firewall-route**.
1. Select **Review + create**. 1. Select **Create**. After deployment completes, select **Go to resource**. 1. On the **Firewall-route** page, select **Subnets** and then select **Associate**.
-1. Select **Virtual network** > **Test-FW-VN**.
+1. For **Virtual network**, select **Test-FW-VN**.
1. For **Subnet**, select **Workload-SN**. Make sure that you select only the **Workload-SN** subnet for this route, otherwise your firewall won't work correctly. 13. Select **OK**.
For testing purposes, configure the server's primary and secondary DNS addresses
2. Select the network interface for the **Srv-Work** virtual machine. 3. Under **Settings**, select **DNS servers**. 4. Under **DNS servers**, select **Custom**.
-5. Type **209.244.0.3** in the **Add DNS server** text box, and **209.244.0.4** in the next text box.
+5. Type **209.244.0.3** in the **Add DNS server** text box and press Enter, and then type **209.244.0.4** in the next text box.
6. Select **Save**. 7. Restart the **Srv-Work** virtual machine.
frontdoor Front Door Url Rewrite https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-url-rewrite.md
You can define the destination path to use in the rewrite. The destination path
Preserve unmatched path allows you to append the remaining path after the source pattern to the new path. For example, if I set **Preserve unmatched path to Yes**.
-* If the incoming request is `www.contoso.com/sub/1.jpg`, the source pattern gets set to `/`, the destination get set to `/foo/`, and the content get served from `/foo/sub/1`.jpg from the origin.
+* If the incoming request is `www.contoso.com/sub/1.jpg`, the source pattern gets set to `/`, the destination gets set to `/foo/`, and the content gets served from `/foo/sub/1.jpg` from the origin.
-* If the incoming request is `www.contoso.com/sub/image/1.jpg`, the source pattern gets set to `/sub/`, the destination get set to `/foo/`, the content get served from `/foo/image/1.jpg` from the origin.
+* If the incoming request is `www.contoso.com/sub/image/1.jpg`, the source pattern gets set to `/sub/`, the destination gets set to `/foo/`, and the content gets served from `/foo/image/1.jpg` from the origin.
For example, if I set **Preserve unmatched path to No**.
-* If the incoming request is `www.contoso.com/sub/image/1.jpg`, the source pattern gets set to `/sub/`, the destination get set to `/foo/2.jpg`, the content will always be served from `/foo/2.jpg` from the origin no matter what paths followed in `wwww.contoso.com/sub/`.
+* If the incoming request is `www.contoso.com/sub/image/1.jpg`, the source pattern gets set to `/sub/`, the destination gets set to `/foo/2.jpg`, and the content will always be served from `/foo/2.jpg` from the origin no matter what path follows `www.contoso.com/sub/`.
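To make the behavior concrete, here's a small illustrative model of the rewrite logic described above (a sketch only, not Front Door's actual implementation):

```python
def rewrite_path(incoming_path: str, source_pattern: str, destination: str,
                 preserve_unmatched_path: bool) -> str:
    """Illustrative model of the URL-rewrite examples above."""
    if not incoming_path.startswith(source_pattern):
        return incoming_path  # the rule doesn't match, so no rewrite happens
    remainder = incoming_path[len(source_pattern):]
    if preserve_unmatched_path:
        # Append the remaining path after the source pattern to the new path.
        return destination + remainder
    # Otherwise the destination path is served as-is.
    return destination

# The examples from the text:
assert rewrite_path("/sub/1.jpg", "/", "/foo/", True) == "/foo/sub/1.jpg"
assert rewrite_path("/sub/image/1.jpg", "/sub/", "/foo/", True) == "/foo/image/1.jpg"
assert rewrite_path("/sub/image/1.jpg", "/sub/", "/foo/2.jpg", False) == "/foo/2.jpg"
```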
::: zone-end
governance Control Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/blueprints/samples/iso27001-ase-sql-workload/control-mapping.md
appropriate separation of duties.
## A.8.2.1 Classification of information
-Azure's [SQL Vulnerability Assessment service](/azure/defender-for-cloud/sql-azure-vulnerability-assessment-overview)
+Azure's [SQL Vulnerability Assessment service](../../../../defender-for-cloud/sql-azure-vulnerability-assessment-overview.md)
can help you discover sensitive data stored in your databases and includes recommendations to classify that data. This blueprint assigns an [Azure Policy](../../../policy/overview.md) definition to audit that vulnerabilities identified during SQL Vulnerability Assessment scan are remediated.
Additional articles about blueprints and how to use them:
- Understand how to use [static and dynamic parameters](../../concepts/parameters.md). - Learn to customize the [blueprint sequencing order](../../concepts/sequencing-order.md). - Find out how to make use of [blueprint resource locking](../../concepts/resource-locking.md).-- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).
+- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).
governance Control Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/blueprints/samples/iso27001-shared/control-mapping.md
appropriate separation of duties.
## A.8.2.1 Classification of information Azure's
-[SQL Vulnerability Assessment service](/azure/defender-for-cloud/sql-azure-vulnerability-assessment-overview)
+[SQL Vulnerability Assessment service](../../../../defender-for-cloud/sql-azure-vulnerability-assessment-overview.md)
can help you discover sensitive data stored in your databases and includes recommendations to classify that data. This blueprint assigns an [Azure Policy](../../../policy/overview.md) definition to audit that vulnerabilities identified during SQL Vulnerability Assessment scan are remediated.
Additional articles about blueprints and how to use them:
- Understand how to use [static and dynamic parameters](../../concepts/parameters.md). - Learn to customize the [blueprint sequencing order](../../concepts/sequencing-order.md). - Find out how to make use of [blueprint resource locking](../../concepts/resource-locking.md).-- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).
+- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).
governance Machine Configuration Assignments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/machine-configuration-assignments.md
Title: Understand machine configuration assignment resources description: Machine configuration creates extension resources named machine configuration assignments that map configurations to machines. Previously updated : 07/15/2022 Last updated : 01/12/2023
governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/overview.md
Title: Understand Azure Automanage Machine Configuration
+ Title: Understand Azure Automanage Machine Configuration
description: Learn how Azure Policy uses the machine configuration feature to audit or configure settings inside virtual machines. Last updated 01/03/2023
servers because it's included in the Arc Connected Machine agent.
> manage Azure virtual machines. To deploy the extension at scale across many machines, assign the policy initiative
-`Deploy prerequisites to enable machine configuration policies on virtual machines`
+`Deploy prerequisites to enable guest configuration policies on virtual machines`
to a management group, subscription, or resource group containing the machines that you plan to manage.
scope of the policy assignment are automatically included.
## Managed identity requirements
-Policy definitions in the initiative _Deploy prerequisites to enable guest
-configuration policies on virtual machines_ enable a system-assigned managed
+Policy definitions in the initiative `Deploy prerequisites to enable guest configuration policies on virtual machines` enable a system-assigned managed
identity, if one doesn't exist. There are two policy definitions in the initiative that manage identity creation. The IF conditions in the policy definitions ensure the correct behavior based on the current state of the
hdinsight Ambari Web Ui Auto Logout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/ambari-web-ui-auto-logout.md
To disable the auto logout feature,
**Next steps**
-* [Optimize clusters with Apache Ambari in Azure HDInsight](/azure/hdinsight/hdinsight-changing-configs-via-ambari)
-
+* [Optimize clusters with Apache Ambari in Azure HDInsight](./hdinsight-changing-configs-via-ambari.md)
hdinsight Hdinsight Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-release-notes.md
HDInsight uses safe deployment practices, which involve gradual region deploymen
* HDInsight 4.0: Ubuntu 18.04.5 LTS Linux Kernel 5.4 * HDInsight 5.0: Ubuntu 18.04.5 LTS Linux Kernel 5.4
-For workload specific versions, see [here.](/azure/hdinsight/hdinsight-40-component-versioning)
+For workload-specific versions, see [HDInsight 4.0 component versions](./hdinsight-40-component-versioning.md).
![Icon showing new features with text.](media/hdinsight-release-notes/new-icon-for-new-feature.png) * **Log Analytics** - Customers can enable classic monitoring to get the latest OMS version 14.19. To remove old versions, disable and enable classic monitoring.
-* **Ambari** user auto UI logout due to inactivity. For more information, see [here](/azure/hdinsight/ambari-web-ui-auto-logout)
+* **Ambari** user auto UI logout due to inactivity. For more information, see [Ambari web UI auto logout](./ambari-web-ui-auto-logout.md).
* **Spark** - A new and optimized version of Spark 3.1.3 is included in this release. We tested Apache Spark 3.1.2(previous version) and Apache Spark 3.1.3(current version) using the TPC-DS benchmark. The test was carried out using E8 V3  SKU, for Apache Spark on 1-TB workload. Apache Spark 3.1.3 (current version) outperformed Apache Spark 3.1.2 (previous version) by over 40% in total query runtime for TPC-DS queries using the same hardware specs. The Microsoft Spark team added optimizations available in Azure Synapse with Azure HDInsight. For more information, please refer to [ Speed up your data workloads with performance updates to Apache Spark 3.1.2 in Azure Synapse](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/speed-up-your-data-workloads-with-performance-updates-to-apache/ba-p/2769467) ![Icon showing new regions added with text.](media/hdinsight-release-notes/new-icon-for-new-regions-added.png)
For workload specific versions, see [here.](/azure/hdinsight/hdinsight-40-compon
HDInsight will implement TLS1.2 going forward, and earlier versions will be updated on the platform. If you're running any applications on top of HDInsight and they use TLS 1.0 and 1.1, upgrade to TLS 1.2 to avoid any disruption in services.
-For more information, see [How to enable Transport Layer Security (TLS)](https://learn.microsoft.com/mem/configmgr/core/plan-design/security/enable-tls-1-2-client)
+For more information, see [How to enable Transport Layer Security (TLS)](/mem/configmgr/core/plan-design/security/enable-tls-1-2-client)
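For example, a client written in Python can require TLS 1.2 or later explicitly; this minimal sketch uses only the standard library, and the URL is a placeholder:

```python
import ssl
import urllib.request

# Build a default TLS context, then refuse anything older than TLS 1.2.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

with urllib.request.urlopen("https://example.com", context=context) as response:
    print(response.status)
```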
![Icon showing end of support with text.](media/hdinsight-release-notes/new-icon-for-end-of-support.png)
healthcare-apis Deploy New Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-new-arm.md
When deployment is completed, the following resources and access roles are creat
- An event hub consumer group. In this deployment, the consumer group is named *$Default*.
- - An Azure Event Hubs Data Sender role. In this deployment, the sender role is named *devicedatasender* and can be used to provide access to the device event hub using a shared access signature (SAS). To learn more about authorizing access using a SAS, see [Authorizing access to Event Hubs resources using Shared Access Signatures](/azure/event-hubs/authorize-access-shared-access-signature).
+ - An Azure Event Hubs Data Sender role. In this deployment, the sender role is named *devicedatasender* and can be used to provide access to the device event hub using a shared access signature (SAS). To learn more about authorizing access using a SAS, see [Authorizing access to Event Hubs resources using Shared Access Signatures](../../event-hubs/authorize-access-shared-access-signature.md). A sketch of sending with this SAS policy appears after this list.
- A Health Data Services workspace.
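As a minimal sketch of using the *devicedatasender* SAS policy referenced above, the following code sends one test device message with the `azure-eventhub` package; the connection string, event hub name, and message body are placeholders:

```python
from azure.eventhub import EventData, EventHubProducerClient

# Connection string for the 'devicedatasender' SAS policy (placeholder).
CONNECTION_STR = "<devicedatasender-connection-string>"
EVENT_HUB_NAME = "<device-event-hub-name>"

producer = EventHubProducerClient.from_connection_string(
    conn_str=CONNECTION_STR, eventhub_name=EVENT_HUB_NAME
)
with producer:
    batch = producer.create_batch()
    # A hypothetical device reading; shape it to match your device mappings.
    batch.add(EventData('{"deviceId": "device01", "heartRate": 78}'))
    producer.send_batch(batch)
```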
To learn about other methods for deploying the MedTech service, see
> [!div class="nextstepaction"] > [Choose a deployment method for the MedTech service](deploy-new-choose.md)
-FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
+FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Deploy New Bicep Powershell Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-new-bicep-powershell-cli.md
In this quickstart, you'll learn how to:
> - Use Azure PowerShell or the Azure CLI to deploy an instance of the MedTech service using a Bicep file. > [!TIP]
-> To learn more about Bicep, see [What is Bicep?](/azure/azure-resource-manager/bicep/overview?tabs=bicep)
+> To learn more about Bicep, see [What is Bicep?](../../azure-resource-manager/bicep/overview.md?tabs=bicep)
## Prerequisites
To begin your deployment and complete the quickstart, you must have the followin
- The Microsoft.HealthcareApis and Microsoft.EventHub resource providers registered with your Azure subscription. To learn more about registering resource providers, see [Azure resource providers and types](../../azure-resource-manager/management/resource-providers-and-types.md). - [Azure PowerShell](/powershell/azure/install-az-ps) and/or the [Azure CLI](/cli/azure/install-azure-cli) installed locally.
- - For Azure PowerShell, you'll also need to install [Bicep CLI](/azure/azure-resource-manager/bicep/install#windows) to deploy the Bicep file used in this quickstart.
+ - For Azure PowerShell, you'll also need to install [Bicep CLI](../../azure-resource-manager/bicep/install.md#windows) to deploy the Bicep file used in this quickstart.
When you have these prerequisites, you're ready to deploy the Bicep file.
Complete the following five steps to deploy the MedTech service using Azure Powe
Connect-AzAccount ```
-2. Set your Azure subscription deployment context using your subscription ID. To learn how to get your subscription ID, see [Get subscription and tenant IDs in the Azure portal](/azure/azure-portal/get-subscription-tenant-id).
+2. Set your Azure subscription deployment context using your subscription ID. To learn how to get your subscription ID, see [Get subscription and tenant IDs in the Azure portal](../../azure-portal/get-subscription-tenant-id.md).
```azurepowershell Set-AzContext <AzureSubscriptionId>
Complete the following five steps to deploy the MedTech service using the Azure
az login ```
-2. Set your Azure subscription deployment context using your subscription ID. To learn how to get your subscription ID, see [Get subscription and tenant IDs in the Azure portal](/azure/azure-portal/get-subscription-tenant-id).
+2. Set your Azure subscription deployment context using your subscription ID. To learn how to get your subscription ID, see [Get subscription and tenant IDs in the Azure portal](../../azure-portal/get-subscription-tenant-id.md).
```azurecli az account set <AzureSubscriptionId>
When deployment is completed, the following resources and access roles are creat
- An event hub consumer group. In this deployment, the consumer group is named *$Default*.
- - An Azure Event Hubs Data Sender role. In this deployment, the sender role is named *devicedatasender* and can be used to provide access to the device event hub using a shared access signature (SAS). To learn more about authorizing access using a SAS, see [Authorizing access to Event Hubs resources using Shared Access Signatures](/azure/event-hubs/authorize-access-shared-access-signature).
+ - An Azure Event Hubs Data Sender role. In this deployment, the sender role is named *devicedatasender* and can be used to provide access to the device event hub using a shared access signature (SAS). To learn more about authorizing access using a SAS, see [Authorizing access to Event Hubs resources using Shared Access Signatures](../../event-hubs/authorize-access-shared-access-signature.md).
- A Health Data Services workspace.
To learn about other methods for deploying the MedTech service, see
> [!div class="nextstepaction"] > [Choose a deployment method for the MedTech service](deploy-new-choose.md)
-FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
+FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Deploy New Powershell Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-new-powershell-cli.md
Complete the following five steps to deploy the MedTech service using Azure Powe
Connect-AzAccount ```
-2. Set your Azure subscription deployment context using your subscription ID. To learn how to get your subscription ID, see [Get subscription and tenant IDs in the Azure portal](/azure/azure-portal/get-subscription-tenant-id).
+2. Set your Azure subscription deployment context using your subscription ID. To learn how to get your subscription ID, see [Get subscription and tenant IDs in the Azure portal](../../azure-portal/get-subscription-tenant-id.md).
```azurepowershell Set-AzContext <AzureSubscriptionId>
Complete the following five steps to deploy the MedTech service using the Azure
az login ```
-2. Set your Azure subscription deployment context using your subscription ID. To learn how to get your subscription ID, see [Get subscription and tenant IDs in the Azure portal](/azure/azure-portal/get-subscription-tenant-id).
+2. Set your Azure subscription deployment context using your subscription ID. To learn how to get your subscription ID, see [Get subscription and tenant IDs in the Azure portal](../../azure-portal/get-subscription-tenant-id.md).
```azurecli az account set <AzureSubscriptionId>
When deployment is completed, the following resources and access roles are creat
- An event hub consumer group. In this deployment, the consumer group is named *$Default*.
- - An Azure Event Hubs Data Sender role. In this deployment, the sender role is named *devicedatasender* and can be used to provide access to the device event hub using a shared access signature (SAS). To learn more about authorizing access using a SAS, see [Authorizing access to Event Hubs resources using Shared Access Signatures](/azure/event-hubs/authorize-access-shared-access-signature).
+ - An Azure Event Hubs Data Sender role. In this deployment, the sender role is named *devicedatasender* and can be used to provide access to the device event hub using a shared access signature (SAS). To learn more about authorizing access using a SAS, see [Authorizing access to Event Hubs resources using Shared Access Signatures](../../event-hubs/authorize-access-shared-access-signature.md).
- A Health Data Services workspace.
To learn about other methods for deploying the MedTech service, see
> [!div class="nextstepaction"] > [Choose a deployment method for the MedTech service](deploy-new-choose.md)
-FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
+FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Device Data Through Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/device-data-through-iot-hub.md
When deployment is completed, the following resources and access roles are creat
- An event hub consumer group. In this deployment, the consumer group is named *$Default*.
- - An Azure Event Hubs Data Sender role. In this deployment, the sender role is named *devicedatasender* and can be used to provide access to the device event hub using a shared access signature (SAS). To learn more about authorizing access using a SAS, see [Authorizing access to Event Hubs resources using Shared Access Signatures](/azure/event-hubs/authorize-access-shared-access-signature). The Azure Event Hubs Data Sender role isn't used in this tutorial.
+ - An Azure Event Hubs Data Sender role. In this deployment, the sender role is named *devicedatasender* and can be used to provide access to the device event hub using a shared access signature (SAS). To learn more about authorizing access using a SAS, see [Authorizing access to Event Hubs resources using Shared Access Signatures](../../event-hubs/authorize-access-shared-access-signature.md). The Azure Event Hubs Data Sender role isn't used in this tutorial.
- An Azure IoT Hub with [message routing](../../iot-hub/iot-hub-devguide-messages-d2c.md) configured to send device messages to the device message event hub.
To learn about other methods for deploying the MedTech service, see
> [!div class="nextstepaction"] > [Choose a deployment method for the MedTech service](deploy-new-choose.md)
-FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
+FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis How To Enable Diagnostic Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-enable-diagnostic-settings.md
# How to enable diagnostic settings for the MedTech service
-In this article, you'll learn how to enable the diagnostic settings for the MedTech service to export logs and metrics to different destinations (for example: to an [Azure Log Analytics workspace](/azure/azure-monitor/logs/log-analytics-workspace-overview) or an [Azure storage account](../../storage/index.yml) or an [Azure event hub](../../event-hubs/index.yml)) for audit, analysis, backup, or troubleshooting of your MedTech service.
+In this article, you'll learn how to enable the diagnostic settings for the MedTech service to export logs and metrics to different destinations (for example: to an [Azure Log Analytics workspace](../../azure-monitor/logs/log-analytics-workspace-overview.md) or an [Azure storage account](../../storage/index.yml) or an [Azure event hub](../../event-hubs/index.yml)) for audit, analysis, backup, or troubleshooting of your MedTech service.
## Create a diagnostic setting for the MedTech service
If you choose to include your Log Analytics workspace as a destination option fo
:::image type="content" source="media/how-to-enable-diagnostic-settings/clean-query-result-post-error-fix.png" alt-text="Screenshot of query after fixing error." lightbox="media/how-to-enable-diagnostic-settings/clean-query-result-post-error-fix.png"::: > [!TIP]
-> To learn about how to use the Log Analytics workspace, see [Azure Log Analytics workspace](/azure/azure-monitor/logs/log-analytics-workspace-overview).
+> To learn about how to use the Log Analytics workspace, see [Azure Log Analytics workspace](../../azure-monitor/logs/log-analytics-workspace-overview.md).
> > To learn about how to troubleshoot the MedTech service error messages and conditions, see [Troubleshoot the MedTech service error messages and conditions](troubleshoot-error-messages-and-conditions.md).
In this article, you learned how to enable the diagnostics settings for the MedT
To learn about the MedTech service frequently asked questions (FAQs), see > [!div class="nextstepaction"]
-> [Frequently asked questions about the MedTech service](frequently-asked-questions.md)
+> [Frequently asked questions about the MedTech service](frequently-asked-questions.md)
healthcare-apis How To Use Monitoring Tab https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-monitoring-tab.md
In this article, you'll learn how to use the MedTech service monitoring tab in t
:::image type="content" source="media\how-to-use-monitoring-tab\pin-metrics-to-dashboard.png" alt-text="Screenshot the MedTech service monitoring tile with red box around the pin icon." lightbox="media\how-to-use-monitoring-tab\pin-metrics-to-dashboard.png"::: > [!TIP]
- > To learn more about advanced metrics display and sharing options, see [Getting started with Azure Metrics Explorer](/azure/azure-monitor/essentials/metrics-getting-started)
+ > To learn more about advanced metrics display and sharing options, see [Getting started with Azure Metrics Explorer](../../azure-monitor/essentials/metrics-getting-started.md).
## Available metrics for the MedTech service
To learn how to enable the MedTech service diagnostic settings, see
> [!div class="nextstepaction"] > [How to enable diagnostic settings for the MedTech service](how-to-enable-diagnostic-settings.md)
-(FHIR&#174;) is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
+(FHIR&#174;) is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
iot-develop Quickstart Devkit Nxp Mimxrt1060 Evk Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-nxp-mimxrt1060-evk-iot-hub.md
+---
+title: Connect an NXP MIMXRT1060-EVK to Azure IoT Hub quickstart
+description: Use Azure RTOS embedded software to connect an NXP MIMXRT1060-EVK device to Azure IoT Hub and send telemetry.
+ms.devlang: c
+ms.date: 01/11/2022
+---
+# Quickstart: Connect an NXP MIMXRT1060-EVK Evaluation kit to IoT Hub
+
+**Applies to**: [Embedded device development](about-iot-develop.md#embedded-device-development)<br>
+**Total completion time**: 45 minutes
+
+[![Browse code](media/common/browse-code.svg)](https://github.com/azure-rtos/getting-started/tree/master/NXP/MIMXRT1060-EVK)
+
+In this quickstart, you use Azure RTOS to connect the NXP MIMXRT1060-EVK evaluation kit (from now on, the NXP EVK) to Azure IoT.
+
+You'll complete the following tasks:
+
+* Install a set of embedded development tools for programming the NXP EVK in C
+* Build an image and flash it onto the NXP EVK
+* Use Azure CLI to create and manage an Azure IoT hub that the NXP EVK will securely connect to
+* Use Azure IoT Explorer to register a device with your IoT hub, view device properties, view device telemetry, and call direct commands on the device
+
+## Prerequisites
+
+* A PC running Windows 10 or Windows 11
+* [Git](https://git-scm.com/downloads) for cloning the repository
+* Hardware
+ * The [NXP MIMXRT1060-EVK](https://www.nxp.com/design/development-boards/i-mx-evaluation-and-development-boards/mimxrt1060-evk-i-mx-rt1060-evaluation-kit:MIMXRT1060-EVK) (NXP EVK)
+ * USB 2.0 A male to Micro USB male cable
+ * Wired Ethernet access
+ * Ethernet cable
+* An active Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+## Prepare the development environment
+
+To set up your development environment, first you clone a GitHub repo that contains all the assets you need for the quickstart. Then you install a set of programming tools.
+
+### Clone the repo for the quickstart
+
+Clone the following repo to download all sample device code, setup scripts, and offline versions of the documentation. If you previously cloned this repo in another quickstart, you don't need to do it again.
+
+To clone the repo, run the following command:
+
+```shell
+git clone --recursive https://github.com/azure-rtos/getting-started.git
+```
+
+### Install the tools
+
+The cloned repo contains a setup script that installs and configures the required tools. If you installed these tools in another embedded device quickstart, you don't need to do it again.
+
+> [!NOTE]
+> The setup script installs the following tools:
+> * [CMake](https://cmake.org): Build
+> * [ARM GCC](https://developer.arm.com/tools-and-software/open-source-software/developer-tools/gnu-toolchain/gnu-rm): Compile
+> * [Termite](https://www.compuphase.com/software_termite.htm): Monitor serial port output for connected devices
+
+To install the tools:
+
+1. From File Explorer, navigate to the following path in the repo and run the setup script named *get-toolchain.bat*:
+
+ *getting-started\tools\get-toolchain.bat*
+
+1. After the installation completes, open a new console window so that it picks up the configuration changes made by the setup script. Use this console to complete the remaining programming tasks in the quickstart. You can use Windows CMD, PowerShell, or Git Bash for Windows.
+1. Run the following code to confirm that CMake version 3.14 or later is installed.
+
+ ```shell
+ cmake --version
+ ```
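+
+ The output is similar to the following example (your installed version will differ, and additional detail lines are trimmed here):
+
+ ```output
+ cmake version 3.24.1
+ ```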
+
+## Create the cloud components
+
+### Create an IoT hub
+
+You can use Azure CLI to create an IoT hub that handles events and messaging for your device.
+
+To create an IoT hub:
+
+1. Launch your CLI app. To run the CLI commands in the rest of this quickstart, copy the command syntax, paste it into your CLI app, edit variable values, and press Enter.
+ - If you're using Cloud Shell, right-click the link for [Cloud Shell](https://shell.azure.com/bash), and select the option to open in a new tab.
+ - If you're using Azure CLI locally, start your CLI console app and sign in to Azure CLI.
+
+1. Run [az extension add](/cli/azure/extension#az-extension-add) to install or upgrade the *azure-iot* extension to the current version.
+
+ ```azurecli-interactive
+ az extension add --upgrade --name azure-iot
+ ```
+
+1. Run the [az group create](/cli/azure/group#az-group-create) command to create a resource group. The following command creates a resource group named *MyResourceGroup* in the *centralus* region.
+
+ > [!NOTE]
+ > You can optionally set an alternate `location`. To see available locations, run [az account list-locations](/cli/azure/account#az-account-list-locations).
+
+ ```azurecli
+ az group create --name MyResourceGroup --location centralus
+ ```
+
+1. Run the [az iot hub create](/cli/azure/iot/hub#az-iot-hub-create) command to create an IoT hub. It might take a few minutes to create an IoT hub.
+
+ *YourIoTHubName*. Replace this placeholder in the code with the name you chose for your IoT hub. An IoT hub name must be globally unique in Azure. This placeholder is used in the rest of this quickstart to represent your unique IoT hub name.
+
+ The `--sku F1` parameter creates the IoT hub in the Free tier. Free tier hubs have a limited feature set and are used for proof of concept applications. For more information on IoT Hub tiers, features, and pricing, see [Azure IoT Hub pricing](https://azure.microsoft.com/pricing/details/iot-hub).
+
+ ```azurecli
+ az iot hub create --resource-group MyResourceGroup --name {YourIoTHubName} --sku F1 --partition-count 2
+ ```
+
+1. After the IoT hub is created, view the JSON output in the console, and copy the `hostName` value to use in a later step. The `hostName` value looks like the following example:
+
+ `{Your IoT hub name}.azure-devices.net`
+
+### Configure IoT Explorer
+
+In the rest of this quickstart, you'll use IoT Explorer to register a device to your IoT hub, to view the device properties and telemetry, and to send commands to your device. In this section, you configure IoT Explorer to connect to the IoT hub you created, and to read plug and play models from the public model repository.
+
+To add a connection to your IoT hub:
+
+1. In your CLI app, run the [az iot hub connection-string show](/cli/azure/iot/hub/connection-string#az-iot-hub-connection-string-show) command to get the connection string for your IoT hub.
+
+ ```azurecli
+ az iot hub connection-string show --hub-name {YourIoTHubName}
+ ```
+
+1. Copy the connection string without the surrounding quotation characters.
+1. In Azure IoT Explorer, select **IoT hubs** on the left menu.
+1. Select **+ Add connection**.
+1. Paste the connection string into the **Connection string** box.
+1. Select **Save**.
+
+ :::image type="content" source="media/quickstart-devkit-nxp-mimxrt1060-evk-iot-hub/iot-explorer-add-connection.png" alt-text="Screenshot of adding a connection in IoT Explorer.":::
+
+If the connection succeeds, IoT Explorer switches to the **Devices** view.
+
+To add the public model repository:
+
+1. In IoT Explorer, select **Home** to return to the home view.
+1. On the left menu, select **IoT Plug and Play Settings**, then select **+Add** and select **Public repository** from the drop-down menu.
+1. An entry appears for the public model repository at `https://devicemodels.azure.com`.
+
+ :::image type="content" source="media/quickstart-devkit-nxp-mimxrt1060-evk-iot-hub/iot-explorer-add-public-repository.png" alt-text="Screenshot of adding the public model repository in IoT Explorer.":::
+
+1. Select **Save**.
+
+### Register a device
+
+In this section, you create a new device instance and register it with the IoT hub you created. You'll use the connection information for the newly registered device to securely connect your physical device in a later section.
+
+To register a device:
+
+1. From the home view in IoT Explorer, select **IoT hubs**.
+1. The connection you previously added should appear. Select **View devices in this hub** below the connection properties.
+1. Select **+ New** and enter a device ID for your device; for example, `mydevice`. Leave all other properties the same.
+1. Select **Create**.
+
+ :::image type="content" source="media/quickstart-devkit-nxp-mimxrt1060-evk-iot-hub/iot-explorer-device-created.png" alt-text="Screenshot of Azure IoT Explorer device identity.":::
+
+1. Use the copy buttons to copy the **Device ID** and **Primary key** fields.
+
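+Optionally, if you prefer the CLI, you can register a device and retrieve its primary key without IoT Explorer. The following is a sketch using the azure-iot extension; the device ID `mydevice` matches the one used in this quickstart:
+
+```azurecli
+az iot hub device-identity create --device-id mydevice --hub-name {YourIoTHubName}
+az iot hub device-identity show --device-id mydevice --hub-name {YourIoTHubName} --query authentication.symmetricKey.primaryKey
+```
+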
+Before continuing to the next section, save each of the following values retrieved from earlier steps to a safe location. You use these values in the next section to configure your device.
+
+* `hostName`
+* `deviceId`
+* `primaryKey`
+
+## Prepare the device
+
+To connect the NXP EVK to Azure, you'll modify a configuration file for Azure IoT settings, rebuild the image, and flash the image to the device.
+
+### Add configuration
+
+1. Open the following file in a text editor:
+
+ *getting-started\NXP\MIMXRT1060-EVK\app\azure_config.h*
+
+1. Comment out the following line near the top of the file as shown:
+
+ ```c
+ // #define ENABLE_DPS
+ ```
+
+1. Set the Azure IoT device information constants to the values that you saved after you created Azure resources, as shown in the sketch after these steps.
+
+ |Constant name|Value|
+ |-|--|
+ | `IOT_HUB_HOSTNAME` | {*Your host name value*} |
+ | `IOT_HUB_DEVICE_ID` | {*Your Device ID value*} |
+ | `IOT_DEVICE_SAS_KEY` | {*Your Primary key value*} |
+
+1. Save and close the file.
+
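+For reference, here's a minimal sketch of how the edited settings in *azure_config.h* might look. The values shown are placeholders; substitute the values you saved, and leave the rest of the file unchanged:
+
+```c
+// DPS is disabled so that the device connects directly to IoT Hub.
+// #define ENABLE_DPS
+
+// Azure IoT device settings (placeholder values).
+#define IOT_HUB_HOSTNAME    "myiothub.azure-devices.net"
+#define IOT_HUB_DEVICE_ID   "mydevice"
+#define IOT_DEVICE_SAS_KEY  "{Your Primary key value}"
+```
+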
+### Build the image
+
+1. In your console or in File Explorer, run the script *rebuild.bat* at the following path to build the image:
+
+ *getting-started\NXP\MIMXRT1060-EVK\tools\rebuild.bat*
+
+2. After the build completes, confirm that the binary file was created in the following path:
+
+ *getting-started\NXP\MIMXRT1060-EVK\build\app\mimxrt1060_azure_iot.bin*
+
+### Flash the image
+
+1. On the NXP EVK, locate the **Reset** button, the Micro USB port, and the Ethernet port. You use these components in the following steps. All three are highlighted in the following picture:
+
+ :::image type="content" source="media/quickstart-devkit-nxp-mimxrt1060-evk-iot-hub/nxp-evk-board.png" alt-text="Photo showing the NXP EVK board.":::
+
+1. Connect the Micro USB cable to the Micro USB port on the NXP EVK, and then connect it to your computer. After the device powers up, a solid green LED shows the power status.
+1. Use the Ethernet cable to connect the NXP EVK to an Ethernet port.
+1. In File Explorer, find the binary file that you created in the previous section.
+1. Copy the binary file *mimxrt1060_azure_iot.bin*.
+1. In File Explorer, find the NXP EVK device connected to your computer. The device appears as a drive on your system with the drive label **RT1060-EVK**.
+1. Paste the binary file into the root folder of the NXP EVK. Flashing starts automatically and completes in a few seconds.
+
+ > [!NOTE]
+ > During the flashing process, a red LED blinks rapidly on the NXP EVK.
+
+### Confirm device connection details
+
+You can use the **Termite** app to monitor communication and confirm that your device is set up correctly.
+
+1. Start **Termite**.
+ > [!TIP]
+ > If you have issues getting your device to initialize or connect after flashing, see [Troubleshooting](troubleshoot-embedded-device-quickstarts.md).
+1. Select **Settings**.
+1. In the **Serial port settings** dialog, check the following settings and update if needed:
+ * **Baud rate**: 115,200
+ * **Port**: The port that your NXP EVK is connected to. If there are multiple port options in the dropdown, open Windows **Device Manager** and view **Ports** to identify which port to use.
+
+ :::image type="content" source="media/quickstart-devkit-nxp-mimxrt1060-evk-iot-hub/termite-settings.png" alt-text="Screenshot of serial port settings in the Termite app.":::
+
+1. Select **OK**.
+1. Press the **Reset** button on the device. The button is labeled on the device and located near the Micro USB connector.
+1. In the **Termite** app, check the following checkpoint values to confirm that the device is initialized and connected to Azure IoT.
+
+ ```output
+ Initializing DHCP
+ MAC: **************
+ IP address: 192.168.0.56
+ Mask: 255.255.255.0
+ Gateway: 192.168.0.1
+ SUCCESS: DHCP initialized
+
+ Initializing DNS client
+ DNS address: 192.168.0.1
+ SUCCESS: DNS client initialized
+
+ Initializing SNTP time sync
+ SNTP server 0.pool.ntp.org
+ SNTP time update: Jan 11, 2023 20:37:37.90 UTC
+ SUCCESS: SNTP initialized
+
+ Initializing Azure IoT Hub client
+ Hub hostname: **************.azure-devices.net
+ Device id: mydevice
+ Model id: dtmi:azurertos:devkit:gsg;2
+ SUCCESS: Connected to IoT Hub
+
+ Receive properties: {"desired":{"$version":1},"reported":{"$version":1}}
+ Sending property: $iothub/twin/PATCH/properties/reported/?$rid=3{"deviceInformation":{"__t":"c","manufacturer":"NXP","model":"MIMXRT1060-EVK","swVersion":"1.0.0","osName":"Azure RTOS","processorArchitecture":"Arm Cortex M7","processorManufacturer":"NXP","totalStorage":8192,"totalMemory":768}}
+ Sending property: $iothub/twin/PATCH/properties/reported/?$rid=5{"ledState":false}
+ Sending property: $iothub/twin/PATCH/properties/reported/?$rid=7{"telemetryInterval":{"ac":200,"av":1,"value":10}}
+
+ Starting Main loop
+ Telemetry message sent: {"temperature":40.61}.
+ ```
+
+Keep Termite open to monitor device output in the following steps.
+
+## View device properties
+
+You can use Azure IoT Explorer to view and manage the properties of your devices. In the following sections, you'll use the Plug and Play capabilities that are visible in IoT Explorer to manage and interact with the NXP EVK. These capabilities rely on the device model published for the NXP EVK in the public model repository. You configured IoT Explorer to search this repository for device models earlier in this quickstart. In many cases, you can perform the same action without using plug and play by selecting IoT Explorer menu options. However, using plug and play often provides an enhanced experience. IoT Explorer can read the device model specified by a plug and play device and present information specific to that device.
+
+To access IoT Plug and Play components for the device in IoT Explorer:
+
+1. From the home view in IoT Explorer, select **IoT hubs**, then select **View devices in this hub**.
+1. Select your device.
+1. Select **IoT Plug and Play components**.
+1. Select **Default component**. IoT Explorer displays the IoT Plug and Play components that are implemented on your device.
+
+ :::image type="content" source="media/quickstart-devkit-nxp-mimxrt1060-evk-iot-hub/iot-explorer-default-component-view.png" alt-text="Screenshot of the device's default component in IoT Explorer.":::
+
+1. On the **Interface** tab, view the JSON content in the device model **Description**. The JSON contains configuration details for each of the IoT Plug and Play components in the device model.
+
+ Each tab in IoT Explorer corresponds to one of the IoT Plug and Play components in the device model.
+
+ | Tab | Type | Name | Description |
+ |---|---|---|---|
+ | **Interface** | Interface | `Getting Started Guide` | Example model for the Azure RTOS Getting Started Guides |
+ | **Properties (read-only)** | Property | `ledState` | Whether the LED is on or off |
+ | **Properties (writable)** | Property | `telemetryInterval` | The interval at which the device sends telemetry |
+ | **Commands** | Command | `setLedState` | Turn the LED on or off |
+
+To view device properties using Azure IoT Explorer:
+
+1. Select the **Properties (read-only)** tab. There's a single read-only property that indicates whether the LED is on or off.
+1. Select the **Properties (writable)** tab. It displays the interval at which telemetry is sent.
+1. Change the `telemetryInterval` to *5*, and then select **Update desired value**. Your device now uses this interval to send telemetry.
+
+ :::image type="content" source="media/quickstart-devkit-nxp-mimxrt1060-evk-iot-hub/iot-explorer-set-telemetry-interval.png" alt-text="Screenshot of setting telemetry interval on the device in IoT Explorer.":::
+
+1. IoT Explorer responds with a notification. You can also observe the update in Termite.
+1. Set the telemetry interval back to 10.
+
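+If you prefer to stay in the CLI for this step, the desired property can also be set there. The following is a sketch using the azure-iot extension's `az iot hub device-twin update` command, assuming a version of the extension that supports the `--desired` parameter:
+
+```azurecli
+az iot hub device-twin update --device-id mydevice --hub-name {YourIoTHubName} --desired '{"telemetryInterval": 5}'
+```
+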
+To use Azure CLI to view device properties:
+
+1. Run the [az iot hub device-twin show](/cli/azure/iot/hub/device-twin#az-iot-hub-device-twin-show) command.
+
+ ```azurecli
+ az iot hub device-twin show --device-id mydevice --hub-name {YourIoTHubName}
+ ```
+
+1. Inspect the properties for your device in the console output.
+
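+The output is the device twin as JSON. Trimmed to the fields relevant to this quickstart, it might look similar to the following sketch (exact fields and version numbers vary):
+
+```json
+{
+  "deviceId": "mydevice",
+  "properties": {
+    "desired": { "telemetryInterval": 10, "$version": 4 },
+    "reported": { "ledState": false, "$version": 7 }
+  }
+}
+```
+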
+## View telemetry
+
+With Azure IoT Explorer, you can view the flow of telemetry from your device to the cloud. Optionally, you can do the same task using Azure CLI.
+
+To view telemetry in Azure IoT Explorer:
+
+1. From the **IoT Plug and Play components** (Default Component) pane for your device in IoT Explorer, select the **Telemetry** tab. Confirm that **Use built-in event hub** is set to *Yes*.
+1. Select **Start**.
+1. View the telemetry as the device sends messages to the cloud.
+
+ :::image type="content" source="media/quickstart-devkit-nxp-mimxrt1060-evk-iot-hub/iot-explorer-device-telemetry.png" alt-text="Screenshot of device telemetry in IoT Explorer.":::
+
+ > [!NOTE]
+ > You can also monitor telemetry from the device by using the Termite app.
+
+1. Select the **Show modeled events** checkbox to view the events in the data format specified by the device model.
+
+ :::image type="content" source="media/quickstart-devkit-nxp-mimxrt1060-evk-iot-hub/iot-explorer-show-modeled-events.png" alt-text="Screenshot of modeled telemetry events in IoT Explorer.":::
+
+1. Select **Stop** to end receiving events.
+
+To use Azure CLI to view device telemetry:
+
+1. Run the [az iot hub monitor-events](/cli/azure/iot/hub#az-iot-hub-monitor-events) command. Use the names that you created previously in Azure IoT for your device and IoT hub.
+
+ ```azurecli
+ az iot hub monitor-events --device-id mydevice --hub-name {YourIoTHubName}
+ ```
+
+1. View the JSON output in the console.
+
+ ```json
+ {
+ "event": {
+ "origin": "mydevice",
+ "module": "",
+ "interface": "dtmi:azurertos:devkit:gsg;2",
+ "component": "",
+ "payload": {
+ "temperature": 41.77
+ }
+ }
+ }
+ ```
+
+1. Press CTRL+C to end monitoring.
+
+## Call a direct method on the device
+
+You can also use Azure IoT Explorer to call a direct method that you've implemented on your device. Direct methods have a name, and can optionally have a JSON payload, configurable connection, and method timeout. In this section, you call a method that turns an LED on or off. Optionally, you can do the same task using Azure CLI.
+
+To call a method in Azure IoT Explorer:
+
+1. From the **IoT Plug and Play components** (Default Component) pane for your device in IoT Explorer, select the **Commands** tab.
+1. For the **setLedState** command, set the **state** to **true**.
+1. Select **Send command**. You should see a notification in IoT Explorer. There will be no change on the device as there isn't an available LED to toggle. However, you can view the output in Termite to monitor the status of the methods.
+
+ :::image type="content" source="media/quickstart-devkit-nxp-mimxrt1060-evk-iot-hub/iot-explorer-invoke-method.png" alt-text="Screenshot of calling the setLedState method in IoT Explorer.":::
+
+1. Set the **state** to **false**, and then select **Send command**. As before, there's no change on the device because there isn't an available LED to toggle, but you can view the output in Termite to monitor the status of the methods.
+
+To use Azure CLI to call a method:
+
+1. Run the [az iot hub invoke-device-method](/cli/azure/iot/hub#az-iot-hub-invoke-device-method) command, and specify the method name and payload. For this method, setting `method-payload` to `true` would turn an LED on. There will be no change on the device as there isn't an available LED to toggle. However, you can view the output in Termite to monitor the status of the methods.
+
+ ```azurecli
+ az iot hub invoke-device-method --device-id mydevice --method-name setLedState --method-payload true --hub-name {YourIoTHubName}
+ ```
+
+ The CLI console shows the status of your method call on the device, where `200` indicates success.
+
+ ```json
+ {
+ "payload": {},
+ "status": 200
+ }
+ ```
+
+1. View the Termite terminal to confirm the output messages and the reported LED state:
+
+ ```output
+ Received command: setLedState
+ Payload: true
+ LED is turned ON
+ Sending property: $iothub/twin/PATCH/properties/reported/?$rid=15{"ledState":true}
+ ```
+
+## Troubleshoot and debug
+
+If you experience issues building the device code, flashing the device, or connecting, see [Troubleshooting](troubleshoot-embedded-device-quickstarts.md).
+
+For debugging the application, see [Debugging with Visual Studio Code](https://github.com/azure-rtos/getting-started/blob/master/docs/debugging.md).
+
+## Clean up resources
+
+If you no longer need the Azure resources created in this quickstart, you can use the Azure CLI to delete the resource group and all of its resources.
+
+> [!IMPORTANT]
+> Deleting a resource group is irreversible. The resource group and all the resources contained in it are permanently deleted. Make sure that you do not accidentally delete the wrong resource group or resources.
+
+To delete a resource group by name:
+
+1. Run the [az group delete](/cli/azure/group#az-group-delete) command. This command removes the resource group, the IoT Hub, and the device registration you created.
+
+ ```azurecli-interactive
+ az group delete --name MyResourceGroup
+ ```
+
+1. Run the [az group list](/cli/azure/group#az-group-list) command to confirm the resource group is deleted.
+
+ ```azurecli-interactive
+ az group list
+ ```
+
+## Next steps
+
+In this quickstart, you built a custom image that contains Azure RTOS sample code, and then flashed the image to the NXP EVK device. You connected the NXP EVK to Azure IoT Hub, and carried out tasks such as viewing telemetry and calling a method on the device.
+
+As a next step, explore the following articles to learn more about using the IoT device SDKs, or Azure RTOS to connect devices to Azure IoT.
+
+> [!div class="nextstepaction"]
+> [Connect a simulated device to IoT Hub](quickstart-send-telemetry-iot-hub.md)
+> [!div class="nextstepaction"]
+> [Learn more about connecting embedded devices using C SDK and Embedded C SDK](concepts-using-c-sdk-and-embedded-c-sdk.md)
+
+> [!IMPORTANT]
+> Azure RTOS provides OEMs with components to secure communication and to create code and data isolation using underlying MCU/MPU hardware protection mechanisms. However, each OEM is ultimately responsible for ensuring that their device meets evolving security requirements.
iot-dps How To Troubleshoot Dps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/how-to-troubleshoot-dps.md
Use this table to understand and resolve common errors.
* For a 429 error, follow the IoT Hub retry pattern, which uses exponential backoff with random jitter (see the sketch after this list). You can also honor the retry-after header provided by the SDK.
-* For 500-series server errors, retry your [connection](/azure/iot-dps/concepts-deploy-at-scale#iot-hub-connectivity-considerations) using cached credentials or a [Device Registration Status Lookup API](/rest/api/iot-dps/device/runtime-registration/device-registration-status-lookup#deviceregistrationresult) call.
+* For 500-series server errors, retry your [connection](./concepts-deploy-at-scale.md#iot-hub-connectivity-considerations) using cached credentials or a [Device Registration Status Lookup API](/rest/api/iot-dps/device/runtime-registration/device-registration-status-lookup#deviceregistrationresult) call.
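
As a sketch of that backoff pattern (illustrative only, not tied to any particular SDK; the function name and constants are assumptions):

```c
#include <stdint.h>
#include <stdlib.h>

// Illustrative exponential backoff with random jitter for retrying requests.
static uint32_t next_retry_delay_ms(uint32_t attempt)
{
    const uint32_t base_ms = 1000;   // first retry after about 1 second
    const uint32_t max_ms  = 32000;  // cap the backoff at 32 seconds
    uint32_t delay = base_ms << (attempt < 5 ? attempt : 5);
    if (delay > max_ms)
    {
        delay = max_ms;
    }
    // Add up to 1 second of jitter so retries from many devices don't align.
    return delay + (uint32_t)(rand() % 1000);
}
```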
-For related best practices, such as retrying operations, see [Best practices for large-scale IoT device deployments](/azure/iot-dps/concepts-deploy-at-scale).
+For related best practices, such as retrying operations, see [Best practices for large-scale IoT device deployments](./concepts-deploy-at-scale.md).
## Next Steps
iot-edge Development Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/development-environment.md
If you prefer to develop with other editors or from the CLI, the Azure IoT Edge
The Azure IoT Edge extension for Visual Studio Code provides IoT Edge module templates built on programming languages including C, C#, Java, Node.js, and Python. Templates for Azure functions in C# are also included.
-For more information and to download, see [Azure IoT Tools for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools).
+For more information and to download, see [Azure IoT Edge for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-edge).
In addition to the IoT Edge extensions, you may find it helpful to install additional extensions for developing. For example, you can use [Docker Support for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=PeterJausovec.vscode-docker) to manage your images, containers, and registries. Additionally, all the major supported languages have extensions for Visual Studio Code that can help when you're developing modules.
+The [Azure IoT Hub](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) extension is useful as a companion for the Azure IoT Edge extension.
+ #### Prerequisites The module templates for some languages and services have prerequisites that are necessary to build the project folders on your development machine with Visual Studio Code.
iot-edge How To Deploy Blob https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-deploy-blob.md
There are several ways to deploy modules to an IoT Edge device and all of them w
## Prerequisites - An [IoT hub](../iot-hub/iot-hub-create-through-portal.md) in your Azure subscription.+ - An IoT Edge device. If you don't have an IoT Edge device set up, you can create one in an Azure virtual machine. Follow the steps in one of the quickstart articles to [Create a virtual Linux device](quickstart-linux.md) or [Create a virtual Windows device](quickstart.md). -- [Visual Studio Code](https://code.visualstudio.com/) and the [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools) if deploying from Visual Studio Code.
+- [Visual Studio Code](https://code.visualstudio.com/).
+
+- The [Azure IoT Edge](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-edge) extension and the [Azure IoT Hub](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) extension if deploying from Visual Studio Code.
## Deploy from the Azure portal
iot-edge How To Deploy Modules Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-deploy-modules-vscode.md
Title: Deploy modules from Visual Studio Code - Azure IoT Edge
-description: Use Visual Studio Code with the Azure IoT Tools to push an IoT Edge module from your IoT Hub to your IoT Edge device, as configured by a deployment manifest.
+description: Use Visual Studio Code with Azure IoT Edge for Visual Studio Code to push an IoT Edge module from your IoT Hub to your IoT Edge device, as configured by a deployment manifest.
This article shows how to create a JSON deployment manifest, then use that file
If you don't have an IoT Edge device set up, you can create one in an Azure virtual machine. Follow the steps in one of the quickstart articles to [Create a virtual Linux device](quickstart-linux.md) or [Create a virtual Windows device](quickstart.md). * [Visual Studio Code](https://code.visualstudio.com/).
-* [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools#overview) for Visual Studio Code.
+* [Azure IoT Edge for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-edge).
## Configure a deployment manifest
iot-edge How To Deploy Vscode At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-deploy-vscode-at-scale.md
In this article, you set up Visual Studio Code and the IoT extension. You then l
If you don't have an IoT Edge device set up, you can create one in an Azure virtual machine. Follow the steps in one of the quickstart articles to [Create a virtual Linux device](quickstart-linux.md) or [Create a virtual Windows device](quickstart.md). * [Visual Studio Code](https://code.visualstudio.com/).
-* [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools#overview) for Visual Studio Code.
+* [Azure IoT Edge for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-edge).
+* [Azure IoT Hub](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit).
## Sign in to access your IoT hub
iot-edge How To Monitor Module Twins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-monitor-module-twins.md
If you see the message "A module identity doesn't exist for this module", this e
To review and edit a module twin:
-1. If not already installed, install the [Azure IoT Tools Extension](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools) for Visual Studio Code.
+1. If not already installed, install the [Azure IoT Edge](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-edge) and [Azure IoT Hub](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) extensions.
1. In the **Explorer**, expand the **Azure IoT Hub**, and then expand the device with the module you want to monitor. 1. Right-click the module and select **Edit Module Twin**. A temporary file of the module twin is downloaded to your computer and displayed in Visual Studio Code.
iot-edge How To Use Create Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-use-create-options.md
The IoT Edge deployment manifest accepts create options formatted as JSON. For e
This edgeHub example uses the **HostConfig.PortBindings** parameter to map exposed ports on the container to a port on the host device.
-If you use the Azure IoT Tools extensions for Visual Studio or Visual Studio Code, you can write the create options in JSON format in the **deployment.template.json** file. Then, when you use the extension to build the IoT Edge solution or generate the deployment manifest, it will stringify the JSON for you in the format that the IoT Edge runtime expects. For example:
+If you use the [Azure IoT Edge](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-edge) extension for Visual Studio Code or the Azure IoT Edge Tools for Visual Studio, you can write the create options in JSON format in the **deployment.template.json** file. Then, when you use the extension to build the IoT Edge solution or generate the deployment manifest, it will stringify the JSON for you in the format that the IoT Edge runtime expects. For example:
```json "createOptions": "{\"HostConfig\":{\"PortBindings\":{\"5671/tcp\":[{\"HostPort\":\"5671\"}],\"8883/tcp\":[{\"HostPort\":\"8883\"}],\"443/tcp\":[{\"HostPort\":\"443\"}]}}}"
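
For comparison, a sketch of the same create options as you might write them (un-stringified) in **deployment.template.json** before the extension processes them:

```json
"createOptions": {
  "HostConfig": {
    "PortBindings": {
      "5671/tcp": [{ "HostPort": "5671" }],
      "8883/tcp": [{ "HostPort": "8883" }],
      "443/tcp": [{ "HostPort": "443" }]
    }
  }
}
```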
iot-edge How To Vs Code Develop Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-vs-code-develop-module.md
Install [Visual Studio Code](https://code.visualstudio.com/) first and then add
::: zone pivot="iotedge-dev-ext" -- [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools)
+- [Azure IoT Edge](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-edge) extension.
+- [Azure IoT Hub](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) extension.
::: zone-end
After solution creation, there are four items within the solution:
::: zone pivot="iotedge-dev-ext"
-Use Visual Studio Code and the Azure IoT Tools. You start by creating a solution, and then generating the first module in that solution. Each solution can contain multiple modules.
+Use Visual Studio Code and the [Azure IoT Edge](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-edge) extension. You start by creating a solution, and then generating the first module in that solution. Each solution can contain multiple modules.
1. Select **View** > **Command Palette**. 1. In the command palette, enter and run the command **Azure IoT Edge: New IoT Edge Solution**.
iot-edge Production Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/production-checklist.md
The following steps illustrate how to pull a Docker image of **edgeAgent** and *
For more information, see:
-* [Configure the IoT Edge agent](/azure/iot-edge/how-to-configure-proxy-support#configure-the-iot-edge-agent)
+* [Configure the IoT Edge agent](./how-to-configure-proxy-support.md#configure-the-iot-edge-agent)
* [Azure IoT Edge Agent](https://hub.docker.com/_/microsoft-azureiotedge-agent) * [Azure IoT Edge Hub](https://hub.docker.com/_/microsoft-azureiotedge-hub)
These constraints can be applied to individual modules by using create options i
## Next steps * Learn more about [IoT Edge automatic deployment](module-deployment-monitoring.md).
-* See how IoT Edge supports [Continuous integration and continuous deployment](how-to-continuous-integration-continuous-deployment.md).
+* See how IoT Edge supports [Continuous integration and continuous deployment](how-to-continuous-integration-continuous-deployment.md).
iot-edge Tutorial C Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-c-module.md
Before beginning this tutorial, you should have gone through the previous tutori
* A free or standard-tier [IoT Hub](../iot-hub/iot-hub-create-through-portal.md) in Azure. * A device running Azure IoT Edge with Linux containers. You can use the quickstarts to set up a [Linux device](quickstart-linux.md) or [Windows device](quickstart.md). * A container registry, like [Azure Container Registry](../container-registry/index.yml).
-* [Visual Studio Code](https://code.visualstudio.com/) configured with the [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools).
+* [Visual Studio Code](https://code.visualstudio.com/) configured with the [Azure IoT Edge](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-edge) and [Azure IoT Hub](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) extensions.
* Download and install a [Docker compatible container management system](support.md#container-engines) on your development machine. Configure it to run Linux containers. To develop an IoT Edge module in C, install the following prerequisites on your development machine:
Installing the Azure IoT C SDK isn't required for this tutorial, but can provide
## Create a module project
-The following steps create an IoT Edge module project for C by using Visual Studio Code and the Azure IoT Tools extension. Once you have a project template created, add new code so that the module filters out messages based on their reported properties.
+The following steps create an IoT Edge module project for C by using Visual Studio Code and the Azure IoT Edge extension. Once you have a project template created, add new code so that the module filters out messages based on their reported properties.
### Create a new project
In the previous section, you created an IoT Edge solution and added code to the
## Deploy modules to device
-Use the Visual Studio Code explorer and the Azure IoT Tools extension to deploy the module project to your IoT Edge device. You already have a deployment manifest prepared for your scenario, the **deployment.amd64.json** file in the config folder. All you need to do now is select a device to receive the deployment.
+Use the Visual Studio Code explorer and the Azure IoT Edge extension to deploy the module project to your IoT Edge device. You already have a deployment manifest prepared for your scenario, the **deployment.amd64.json** file in the config folder. All you need to do now is select a device to receive the deployment.
Make sure that your IoT Edge device is up and running.
iot-edge Tutorial Csharp Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-csharp-module.md
Before beginning this tutorial, you should have gone through the previous tutori
* A free or standard-tier [IoT Hub](../iot-hub/iot-hub-create-through-portal.md) in Azure. * A device running Azure IoT Edge with Linux containers. You can use the quickstarts to set up a [Linux device](quickstart-linux.md) or [Windows device](quickstart.md). * A container registry, like [Azure Container Registry](../container-registry/index.yml).
-* [Visual Studio Code](https://code.visualstudio.com/) configured with the [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools).
+* [Visual Studio Code](https://code.visualstudio.com/) configured with the [Azure IoT Edge](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-edge) and [Azure IoT Hub](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) extensions.
* Download and install a [Docker compatible container management system](support.md#container-engines) on your development machine. Configure it to run Linux containers. To complete these tutorials, prepare the following additional prerequisites on your development machine:
To complete these tutorials, prepare the following additional prerequisites on y
## Create a module project
-The following steps create an IoT Edge module project for C# by using Visual Studio Code and the Azure IoT Tools extension. Once you have a project template created, add new code so that the module filters out messages based on their reported properties.
+The following steps create an IoT Edge module project for C# by using Visual Studio Code and the Azure IoT Edge extension. Once you have a project template created, add new code so that the module filters out messages based on their reported properties.
### Create a new project
In the previous section, you created an IoT Edge solution and added code to the
## Deploy and run the solution
-Use the Visual Studio Code explorer and the Azure IoT Tools extension to deploy the module project to your IoT Edge device. You already have a deployment manifest prepared for your scenario, the **deployment.amd64.json** file in the config folder. All you need to do now is select a device to receive the deployment.
+Use the Visual Studio Code explorer and the Azure IoT Edge extension to deploy the module project to your IoT Edge device. You already have a deployment manifest prepared for your scenario, the **deployment.amd64.json** file in the config folder. All you need to do now is select a device to receive the deployment.
Make sure that your IoT Edge device is up and running.
iot-edge Tutorial Deploy Custom Vision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-deploy-custom-vision.md
Before beginning this tutorial, you should have gone through the previous tutori
* A free or standard-tier [IoT Hub](../iot-hub/iot-hub-create-through-portal.md) in Azure. * A device running Azure IoT Edge with Linux containers. You can use the quickstarts to set up a [Linux device](quickstart-linux.md) or [Windows device](quickstart.md). * A container registry, like [Azure Container Registry](../container-registry/index.yml).
-* [Visual Studio Code](https://code.visualstudio.com/) configured with the [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools).
+* [Visual Studio Code](https://code.visualstudio.com/) configured with the [Azure IoT Edge](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-edge) and
+[Azure IoT Hub](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) extensions.
* Download and install a [Docker compatible container management system](support.md#container-engines) on your development machine. Configure it to run Linux containers. To develop an IoT Edge module with the Custom Vision service, install the following additional prerequisites on your development machine:
First, build and push your solution to your container registry.
## Deploy modules to device
-Use the Visual Studio Code explorer and the Azure IoT Tools extension to deploy the module project to your IoT Edge device. You already have a deployment manifest prepared for your scenario, the **deployment.amd64.json** file in the config folder. All you need to do now is select a device to receive the deployment.
+Use the Visual Studio Code explorer and the Azure IoT Edge extension to deploy the module project to your IoT Edge device. You already have a deployment manifest prepared for your scenario, the **deployment.amd64.json** file in the config folder. All you need to do now is select a device to receive the deployment.
Make sure that your IoT Edge device is up and running.
iot-edge Tutorial Deploy Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-deploy-function.md
Before beginning this tutorial, you should have gone through the previous tutori
* A free or standard-tier [IoT Hub](../iot-hub/iot-hub-create-through-portal.md) in Azure. * An AMD64 device running Azure IoT Edge with Linux containers. You can use the quickstarts to set up a [Linux device](quickstart-linux.md) or [Windows device](quickstart.md). * A container registry, like [Azure Container Registry](../container-registry/index.yml).
-* [Visual Studio Code](https://code.visualstudio.com/) configured with the [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools).
+* [Visual Studio Code](https://code.visualstudio.com/) configured with the [Azure IoT Edge](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-edge) and
+[Azure IoT Hub](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) extensions.
* Download and install a [Docker compatible container management system](support.md#container-engines) on your development machine. Configure it to run Linux containers. To develop an IoT Edge module in with Azure Functions, install the following additional prerequisites on your development machine:
To develop an IoT Edge module in with Azure Functions, install the following add
## Create a function project
-The Azure IoT Tools for Visual Studio Code that you installed in the prerequisites provides management capabilities as well as some code templates. In this section, you use Visual Studio Code to create an IoT Edge solution that contains an Azure Function.
+The Azure IoT Edge extension for Visual Studio Code that you installed in the prerequisites provides management capabilities as well as some code templates. In this section, you use Visual Studio Code to create an IoT Edge solution that contains an Azure Function.
### Create a new project
Visual Studio Code outputs a success message when your container image is pushed
## Deploy and run the solution
-You can use the Azure portal to deploy your Function module to an IoT Edge device like you did in the quickstarts. You can also deploy and monitor modules from within Visual Studio Code. The following sections use the Azure IoT Tools for VS Code that was listed in the prerequisites. Install the extension now, if you didn't already.
+You can use the Azure portal to deploy your Function module to an IoT Edge device like you did in the quickstarts. You can also deploy and monitor modules from within Visual Studio Code. The following sections use the Azure IoT Edge and Azure IoT Hub extensions for VS Code that were listed in the prerequisites. Install the extensions now, if you didn't already.
1. In the Visual Studio Code explorer, under the **Azure IoT Hub** section, expand **Devices** to see your list of IoT devices.
iot-edge Tutorial Develop For Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-develop-for-linux.md
The following table lists the supported development scenarios for **Linux contai
| **Linux device architecture** | Linux AMD64 <br> Linux ARM32 <br> Linux ARM64 | Linux AMD64 <br> Linux ARM32 <br> Linux ARM64 | | **Azure services** | Azure Functions <br> Azure Stream Analytics <br> Azure Machine Learning | | | **Languages** | C <br> C# <br> Java <br> Node.js <br> Python | C <br> C# |
-| **More information** | [Azure IoT Edge for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-edge) | [Azure IoT Edge Tools for Visual Studio 2017](https://marketplace.visualstudio.com/items?itemName=vsc-iot.vsiotedgetools) <br> [Azure IoT Edge Tools for Visual Studio 2019](https://marketplace.visualstudio.com/items?itemName=vsc-iot.vs16iotedgetools) |
+| **More information** | [Azure IoT Edge for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-edge) <br> [Azure IoT Hub](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit)| [Azure IoT Edge Tools for Visual Studio 2017](https://marketplace.visualstudio.com/items?itemName=vsc-iot.vsiotedgetools) <br> [Azure IoT Edge Tools for Visual Studio 2019](https://marketplace.visualstudio.com/items?itemName=vsc-iot.vs16iotedgetools) |
## Install container engine
Use the IoT extensions for Visual Studio Code to develop IoT Edge modules. These
2. Once the installation is finished, select **View** > **Extensions**.
-3. Search for **Azure IoT Tools**, which is actually a collection of extensions that help you interact with IoT Hub and IoT devices, as well as developing IoT Edge modules.
+3. Search for **Azure IoT Edge** and **Azure IoT Hub**, which are extensions that help you interact with IoT Hub and IoT devices, as well as develop IoT Edge modules.
4. Select **Install**. Each included extension installs individually.
Use the IoT extensions for Visual Studio Code to develop IoT Edge modules. These
## Create a new module project
-The Azure IoT Tools extension provides project templates for all supported IoT Edge module languages in Visual Studio Code. These templates have all the files and code that you need to deploy a working module to test IoT Edge, or give you a starting point to customize the template with your own business logic.
+The Azure IoT Edge extension provides project templates for all supported IoT Edge module languages in Visual Studio Code. These templates have all the files and code that you need to deploy a working module to test IoT Edge, or give you a starting point to customize the template with your own business logic.
For this tutorial, we use the C# module template because it is the most commonly used template.
You verified that the built container images are stored in your container regist
## View messages from device
-The SampleModule code receives messages through its input queue and passes them along through its output queue. The deployment manifest declared routes that passed messages to SampleModule from SimulatedTemperatureSensor, and then forwarded messages from SampleModule to IoT Hub. The Azure IoT tools for Visual Studio Code allow you to see messages as they arrive at IoT Hub from your individual devices.
+The SampleModule code receives messages through its input queue and passes them along through its output queue. The deployment manifest declared routes that passed messages to SampleModule from SimulatedTemperatureSensor, and then forwarded messages from SampleModule to IoT Hub. The Azure IoT Edge and Azure IoT Hub extensions allow you to see messages as they arrive at IoT Hub from your individual devices.
1. In the Visual Studio Code explorer, right-click the IoT Edge device that you want to monitor, then select **Start Monitoring Built-in Event Endpoint**.
iot-edge Tutorial Develop For Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-develop-for-windows.md
The following table lists the supported development scenarios for **Windows cont
| - | | | | **Azure services** | Azure Functions <br> Azure Stream Analytics | | | **Languages** | C# (debugging not supported) | C <br> C# |
-| **More information** | [Azure IoT Edge for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-edge) | [Azure IoT Edge Tools for Visual Studio 2017](https://marketplace.visualstudio.com/items?itemName=vsc-iot.vsiotedgetools)<br>[Azure IoT Edge Tools for Visual Studio 2019](https://marketplace.visualstudio.com/items?itemName=vsc-iot.vs16iotedgetools) |
+| **More information** | [Azure IoT Edge for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-edge) <br> [Azure IoT Hub](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) | [Azure IoT Edge Tools for Visual Studio 2017](https://marketplace.visualstudio.com/items?itemName=vsc-iot.vsiotedgetools)<br>[Azure IoT Edge Tools for Visual Studio 2019](https://marketplace.visualstudio.com/items?itemName=vsc-iot.vs16iotedgetools) |
## Install container engine
iot-edge Tutorial Java Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-java-module.md
Before beginning this tutorial, you should have gone through the previous tutori
* A free or standard-tier [IoT Hub](../iot-hub/iot-hub-create-through-portal.md) in Azure. * A device running Azure IoT Edge with Linux containers. You can use the quickstarts to set up a [Linux device](quickstart-linux.md) or [Windows device](quickstart.md). * A container registry, like [Azure Container Registry](../container-registry/index.yml).
-* [Visual Studio Code](https://code.visualstudio.com/) configured with the [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools).
+* [Visual Studio Code](https://code.visualstudio.com/) configured with the [Azure IoT Edge](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-edge) and [Azure IoT Hub](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) extensions.
* Download and install a [Docker compatible container management system](support.md#container-engines) on your development machine. Configure it to run Linux containers. To develop an IoT Edge module in Java, install the following additional prerequisites on your development machine:
To develop an IoT Edge module in Java, install the following additional prerequi
## Create a module project
-The following steps create an IoT Edge module project that's based on the Azure IoT Edge maven template package and Azure IoT Java device SDK. You create the project by using Visual Studio Code and the Azure IoT Tools.
+The following steps create an IoT Edge module project that's based on the Azure IoT Edge maven template package and Azure IoT Java device SDK. You create the project by using Visual Studio Code and the Azure IoT Edge extension.
### Create a new project
In the previous section, you created an IoT Edge solution and added code to the
## Deploy modules to device
-Use the Visual Studio Code explorer and the Azure IoT Tools extension to deploy the module project to your IoT Edge device. You already have a deployment manifest prepared for your scenario, the **deployment.amd64.json** file in the config folder. All you need to do now is select a device to receive the deployment.
+Use the Visual Studio Code explorer and the Azure IoT Edge extension to deploy the module project to your IoT Edge device. You already have a deployment manifest prepared for your scenario, the **deployment.amd64.json** file in the config folder. All you need to do now is select a device to receive the deployment.
Make sure that your IoT Edge device is up and running.
iot-edge Tutorial Machine Learning Edge 02 Prepare Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-machine-learning-edge-02-prepare-environment.md
The development VM will be set up with:
* [Visual Studio Code](https://code.visualstudio.com/) * [Azure PowerShell](/powershell/azure/) * [VS Code Extensions](https://marketplace.visualstudio.com/search?target=VSCode)
- * [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools)
+ * [Azure IoT Edge](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-edge)
+ * [Azure IoT Hub](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit)
* [Python](https://marketplace.visualstudio.com/items?itemName=ms-python.python) * [C#](https://marketplace.visualstudio.com/items?itemName=ms-dotnettools.csharp) * [Docker](https://marketplace.visualstudio.com/items?itemName=PeterJausovec.vscode-docker)
Now that you have connected to the development machine, add some useful extensio
1. The script will run for a few minutes installing VS code extensions:
- * Azure IoT Tools
+ * Azure IoT Edge
+ * Azure IoT Hub
* Python * C# * Docker
iot-edge Tutorial Node Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-node-module.md
Before beginning this tutorial, you should have gone through the previous tutori
* A free or standard-tier [IoT Hub](../iot-hub/iot-hub-create-through-portal.md) in Azure. * A device running Azure IoT Edge. You can use the quickstarts to set up a [Linux device](quickstart-linux.md) or [Windows device](quickstart.md). * A container registry, like [Azure Container Registry](../container-registry/index.yml).
-* [Visual Studio Code](https://code.visualstudio.com/) configured with the [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools).
+* [Visual Studio Code](https://code.visualstudio.com/) configured with the [Azure IoT Edge](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-edge) and [Azure IoT Hub](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) extensions.
* Download and install a [Docker compatible container management system](support.md#container-engines) on your development machine. Configure it to run Linux containers. To develop an IoT Edge module in Node.js, install the following additional prerequisites on your development machine:
To develop an IoT Edge module in Node.js, install the following additional prere
## Create a module project
-The following steps show you how to create an IoT Edge Node.js module using Visual Studio Code and the Azure IoT Tools.
+The following steps show you how to create an IoT Edge Node.js module using Visual Studio Code and the Azure IoT Edge extension.
### Create a new project
In the previous section, you created an IoT Edge solution and added code to the
## Deploy modules to device
-Use the Visual Studio Code explorer and the Azure IoT Tools extension to deploy the module project to your IoT Edge device. You already have a deployment manifest prepared for your scenario, the **deployment.amd64.json** file in the config folder. All you need to do now is select a device to receive the deployment.
+Use the Visual Studio Code explorer and the Azure IoT Edge extension to deploy the module project to your IoT Edge device. You already have a deployment manifest prepared for your scenario, the **deployment.amd64.json** file in the config folder. All you need to do now is select a device to receive the deployment.
Make sure that your IoT Edge device is up and running.
iot-edge Tutorial Python Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-python-module.md
Before beginning this tutorial, you should have gone through the previous tutori
* A free or standard-tier [IoT Hub](../iot-hub/iot-hub-create-through-portal.md) in Azure. * A device running Azure IoT Edge. You can use the quickstarts to set up a [Linux device](quickstart-linux.md) or [Windows device](quickstart.md). * A container registry, like [Azure Container Registry](../container-registry/index.yml).
-* [Visual Studio Code](https://code.visualstudio.com/) configured with the [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools).
+* [Visual Studio Code](https://code.visualstudio.com/) configured with the [Azure IoT Edge](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-edge) and [Azure IoT Hub](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) extensions.
* Download and install a [Docker compatible container management system](support.md#container-engines) on your development machine. Configure it to run Linux containers. To develop an IoT Edge module in Python, install the following additional prerequisites on your development machine:
To develop an IoT Edge module in Python, install the following additional prereq
## Create a module project
-The following steps create an IoT Edge Python module by using Visual Studio Code and the Azure IoT Tools.
+The following steps create an IoT Edge Python module by using Visual Studio Code and the Azure IoT Edge extension.
### Create a new project
In the previous section, you created an IoT Edge solution and added code to the
## Deploy modules to device
-Use the Visual Studio Code explorer and the Azure IoT Tools extension to deploy the module project to your IoT Edge device. You already have a deployment manifest prepared for your scenario, the **deployment.amd64.json** file in the config folder. All you need to do now is select a device to receive the deployment.
+Use the Visual Studio Code explorer and the Azure IoT Edge extension to deploy the module project to your IoT Edge device. You already have a deployment manifest prepared for your scenario, the **deployment.amd64.json** file in the config folder. All you need to do now is select a device to receive the deployment.
Make sure that your IoT Edge device is up and running.
iot-edge Tutorial Store Data Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-store-data-sql-server.md
Before beginning this tutorial, you should have gone through the previous tutori
* An AMD64 device running Azure IoT Edge with Linux containers. You can use the quickstarts to set up a [Linux device](quickstart-linux.md) or [Windows device](quickstart.md). * ARM devices, like Raspberry Pis, cannot run SQL Server. If you want to use SQL on an ARM device, you can use [Azure SQL Edge](../azure-sql-edge/overview.md). * A container registry, like [Azure Container Registry](../container-registry/index.yml).
-* [Visual Studio Code](https://code.visualstudio.com/) configured with the [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools).
+* [Visual Studio Code](https://code.visualstudio.com/) configured with the [Azure IoT Edge](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-edge) and [Azure IoT Hub](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) extensions.
* Download and install a [Docker compatible container management system](support.md#container-engines) on your development machine. Configure it to run Linux containers. This tutorial uses an Azure Functions module to send data to the SQL Server. To develop an IoT Edge module with Azure Functions, install the following additional prerequisites on your development machine:
To send data into a database, you need a module that can structure the data prop
### Create a new project
-The following steps show you how to create an IoT Edge function using Visual Studio Code and the Azure IoT Tools.
+The following steps show you how to create an IoT Edge function using Visual Studio Code and the Azure IoT Edge extension.
1. Open Visual Studio Code.
iot-hub How To Routing Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/how-to-routing-arm.md
This article shows you how to export your Azure IoT Hub template, add a route to
> [!IMPORTANT] > When you use a Resource Manager template to deploy a resource, the template replaces any existing resource of the type you're deploying. >
-> When you create a new IoT hub, overwriting an existing deployed resource isn't a concern. To create a new IoT hub, you can use a [basic template](/azure/azure-resource-manager/templates/syntax#template-format) that has the required properties instead of exporting an existing template from an IoT hub that's already deployed.
+> When you create a new IoT hub, overwriting an existing deployed resource isn't a concern. To create a new IoT hub, you can use a [basic template](../azure-resource-manager/templates/syntax.md#template-format) that has the required properties instead of exporting an existing template from an IoT hub that's already deployed.
> > However, if you add a route to an existing IoT hub Resource Manager template, use a template that you export from your IoT hub to ensure that all existing resources and properties remain connected after you deploy the updated template. Resources that are already deployed won't be replaced. For example, an exported Resource Manager template that you previously deployed might contain storage information for your IoT hub if you've connected it to storage.
-To learn more about how routing works in IoT Hub, see [Use IoT Hub message routing to send device-to-cloud messages to different endpoints](iot-hub-devguide-messages-d2c.md). To walk through the steps to set up a route that sends messages to storage and then test on a simulated device, see [Tutorial: Send device data to Azure Storage by using IoT Hub message routing](/azure/iot-hub/tutorial-routing?tabs=portal).
+To learn more about how routing works in IoT Hub, see [Use IoT Hub message routing to send device-to-cloud messages to different endpoints](iot-hub-devguide-messages-d2c.md). To walk through the steps to set up a route that sends messages to storage and then test on a simulated device, see [Tutorial: Send device data to Azure Storage by using IoT Hub message routing](./tutorial-routing.md?tabs=portal).
## Prerequisites
Be sure to have *one* of the following resources to use when you create an endpo
* A Service Bus topic resource. If you need to create a new Service Bus topic, see [Quickstart: Create a Service Bus namespace with topic and subscription by using a Resource Manager template](../service-bus-messaging/service-bus-resource-manager-namespace-topic.md).
-* An Azure Storage resource. If you need to create a new Azure Storage, see [Create a storage account](/azure/storage/common/storage-account-create?tabs=template).
+* An Azure Storage resource. If you need to create a new Azure Storage, see [Create a storage account](../storage/common/storage-account-create.md?tabs=template).
## Create a route
For `name`, enter a unique name for your endpoint. Leave the `id` parameter as a
# [Azure Storage](#tab/azurestorage)
-To learn how to create an Azure Storage resource (a namespace, topic, and subscription), see [Create a storage account](/azure/storage/common/storage-account-create?tabs=template).
+To learn how to create an Azure Storage resource (a namespace, topic, and subscription), see [Create a storage account](../storage/common/storage-account-create.md?tabs=template).
In the [Azure portal](https://portal.azure.com/#home), get your primary connection string for your Azure Storage resource on the resource's **Access keys** pane.
New-AzResourceGroupDeployment `
### Azure Cloud Shell deployment
-Because [Azure Cloud Shell](https://portal.azure.com/#cloudshell/) runs in a web browser, you can [upload](/azure/cloud-shell/using-the-shell-window#upload-and-download-files) the template file before you run the deployment command. With the file uploaded, you need only the template file name (instead of the entire file path) to use in the `template-file` parameter.
+Because [Azure Cloud Shell](https://portal.azure.com/#cloudshell/) runs in a web browser, you can [upload](../cloud-shell/using-the-shell-window.md#upload-and-download-files) the template file before you run the deployment command. With the file uploaded, you need only the template file name (instead of the entire file path) to use in the `template-file` parameter.
:::image type="content" source="media/how-to-routing-arm/upload-cloud-shell.png" alt-text="Screenshot that shows the location of the button in Azure Cloud Shell to upload a file.":::
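If you deploy from Cloud Shell with the Azure CLI instead of PowerShell, the step could look like the following sketch, assuming you uploaded the exported template as *template.json* (the resource group name is a placeholder):

```azurecli
# Deploy the exported (and edited) template to the resource group that contains the IoT hub
az deployment group create \
  --resource-group my-resource-group \
  --template-file template.json
```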
To view your new route in the Azure portal, go to your IoT Hub resource. On the
In this how-to article, you learned how to create a route and endpoint for Event Hubs, Service Bus queues and topics, and Azure Storage.
-To learn more about message routing, see [Tutorial: Send device data to Azure Storage by using IoT Hub message routing](/azure/iot-hub/tutorial-routing?tabs=portal). In the tutorial, you create a storage route and test it with a device in your IoT hub.
+To learn more about message routing, see [Tutorial: Send device data to Azure Storage by using IoT Hub message routing](./tutorial-routing.md?tabs=portal). In the tutorial, you create a storage route and test it with a device in your IoT hub.
iot-hub How To Routing Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/how-to-routing-azure-cli.md
This article shows you how to create a route and endpoint in your hub in Azure IoT Hub and then delete your route and endpoint. Learn how to use the Azure CLI to create routes and endpoints for Azure Event Hubs, Azure Service Bus queues and topics, and Azure Storage.
-To learn more about how routing works in IoT Hub, see [Use IoT Hub message routing to send device-to-cloud messages to different endpoints](/azure/iot-hub/iot-hub-devguide-messages-d2c). To walk through setting up a route that sends messages to storage and then testing on a simulated device, see [Tutorial: Send device data to Azure Storage by using IoT Hub message routing](/azure/iot-hub/tutorial-routing?tabs=cli).
+To learn more about how routing works in IoT Hub, see [Use IoT Hub message routing to send device-to-cloud messages to different endpoints](./iot-hub-devguide-messages-d2c.md). To walk through setting up a route that sends messages to storage and then testing on a simulated device, see [Tutorial: Send device data to Azure Storage by using IoT Hub message routing](./tutorial-routing.md?tabs=cli).
## Prerequisites
You can choose an Event Hubs resource (namespace and entity).
> [!TIP] > The `name` parameter's value `RootManageSharedAccessKey` is the default name that allows **Manage, Send, Listen** claims (access). If you want to restrict the claims, give the `name` parameter your own unique name and include the `--rights` flag followed by one of the claims. For example, `--name my-name --rights Send`.
- For more information about access, see [Authorize access to Azure Event Hubs](/azure/event-hubs/authorize-access-event-hubs).
+ For more information about access, see [Authorize access to Azure Event Hubs](../event-hubs/authorize-access-event-hubs.md).
```azurecli
az eventhubs eventhub authorization-rule create --resource-group my-resource-group --namespace-name my-routing-namespace --eventhub-name my-event-hubs --name RootManageSharedAccessKey
```
-For more information, see [Quickstart: Create an event hub by using the Azure CLI](/azure/event-hubs/event-hubs-quickstart-cli).
+For more information, see [Quickstart: Create an event hub by using the Azure CLI](../event-hubs/event-hubs-quickstart-cli.md).
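When you register this event hub as a custom endpoint later, you also need its connection string. As a sketch, assuming the `RootManageSharedAccessKey` rule created above, you could retrieve the keys like this:

```azurecli
# List the keys (including connection strings) for the authorization rule
az eventhubs eventhub authorization-rule keys list \
  --resource-group my-resource-group \
  --namespace-name my-routing-namespace \
  --eventhub-name my-event-hubs \
  --name RootManageSharedAccessKey
```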
# [Service Bus queue](#tab/servicebusqueue)
To create Service Bus queue resources:
For more authorization rule options, see [az servicebus queue authorization-rule create](/cli/azure/servicebus/queue/authorization-rule#az-servicebus-queue-authorization-rule-create).
-For more information, see [Use the Azure CLI to create a Service Bus namespace and a queue](/azure/service-bus-messaging/service-bus-quickstart-cli).
+For more information, see [Use the Azure CLI to create a Service Bus namespace and a queue](../service-bus-messaging/service-bus-quickstart-cli.md).
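As a condensed sketch of those steps (resource names are illustrative; the linked quickstart has the full walkthrough):

```azurecli
# Create the Service Bus namespace, then the queue inside it
az servicebus namespace create \
  --resource-group my-resource-group \
  --name my-namespace \
  --location eastus

az servicebus queue create \
  --resource-group my-resource-group \
  --namespace-name my-namespace \
  --name my-queue
```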
# [Service Bus topic](#tab/servicebustopic)
To create Service Bus topic resources:
```azurecli
az servicebus topic subscription rule create --resource-group my-resource-group --namespace-name my-namespace --topic-name my-topic --subscription-name my-subscription --name my-filter --filter-sql-expression "my-sql-expression"
```
-For more information, see [Use Azure CLI to create a Service Bus topic and subscriptions to the topic](/azure/service-bus-messaging/service-bus-tutorial-topics-subscriptions-cli).
+For more information, see [Use Azure CLI to create a Service Bus topic and subscriptions to the topic](../service-bus-messaging/service-bus-tutorial-topics-subscriptions-cli.md).
# [Azure Storage](#tab/azurestorage)
You can choose an Azure Storage resource (account and container).
}
```
-For more information, see [Create a storage account](/azure/storage/common/storage-account-create?tabs=azure-cli).
+For more information, see [Create a storage account](../storage/common/storage-account-create.md?tabs=azure-cli).
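With the storage account and container in place, you can register the container as a custom endpoint on the hub and point a route at it. The following is a hedged sketch (the subscription ID and connection string are placeholders):

```azurecli
# Register the storage container as a custom endpoint
az iot hub routing-endpoint create \
  --resource-group my-resource-group \
  --hub-name my-iot-hub \
  --endpoint-name my-storage-endpoint \
  --endpoint-type azurestoragecontainer \
  --endpoint-resource-group my-resource-group \
  --endpoint-subscription-id <subscription-id> \
  --connection-string "<storage-connection-string>" \
  --container-name my-container

# Route all device telemetry to that endpoint
az iot hub route create \
  --resource-group my-resource-group \
  --hub-name my-iot-hub \
  --route-name my-storage-route \
  --endpoint-name my-storage-endpoint \
  --source-type devicemessages \
  --condition true \
  --enabled true
```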
az iot hub route delete --resource-group my-resource-group --hub-name my-iot-hub
In this how-to article, you learned how to create a route and endpoint for Event Hubs, Service Bus queues and topics, and Azure Storage.
-To learn more about message routing, see [Tutorial: Send device data to Azure Storage by using IoT Hub message routing](/azure/iot-hub/tutorial-routing?tabs=cli). In the tutorial, you create a storage route and test it with a device in your IoT hub.
+To learn more about message routing, see [Tutorial: Send device data to Azure Storage by using IoT Hub message routing](./tutorial-routing.md?tabs=cli). In the tutorial, you create a storage route and test it with a device in your IoT hub.
iot-hub How To Routing Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/how-to-routing-portal.md
This article shows you how to create a route and endpoint in your hub in Azure IoT Hub and then delete your route and endpoint. Learn how to use the Azure portal to create routes and endpoints for Azure Event Hubs, Azure Service Bus queues and topics, Azure Storage, and Azure Cosmos DB.
-To learn more about how routing works in IoT Hub, see [Use IoT Hub message routing to send device-to-cloud messages to different endpoints](/azure/iot-hub/iot-hub-devguide-messages-d2c). To walk through setting up a route that sends messages to storage and then testing on a simulated device, see [Tutorial: Send device data to Azure Storage by using IoT Hub message routing](/azure/iot-hub/tutorial-routing?tabs=portal).
+To learn more about how routing works in IoT Hub, see [Use IoT Hub message routing to send device-to-cloud messages to different endpoints](./iot-hub-devguide-messages-d2c.md). To walk through setting up a route that sends messages to storage and then testing on a simulated device, see [Tutorial: Send device data to Azure Storage by using IoT Hub message routing](./tutorial-routing.md?tabs=portal).
## Prerequisites
The procedures that are described in the article use the following resources:
### Azure portal
-This article uses the Azure portal to work with IoT Hub and other Azure services. To learn more about how to use the Azure portal, see [What is the Azure portal?](/azure/azure-portal/azure-portal-overview).
+This article uses the Azure portal to work with IoT Hub and other Azure services. To learn more about how to use the Azure portal, see [What is the Azure portal?](../azure-portal/azure-portal-overview.md).
### IoT hub
To create an IoT hub route, you need an IoT hub that you created by using Azure
Be sure to have the following hub resource to use when you create your IoT hub route:
-* An IoT hub in your [Azure subscription](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). If you don't have a hub yet, you can follow the steps to [create an IoT hub by using the Azure portal](/azure/iot-hub/iot-hub-create-through-portal).
+* An IoT hub in your [Azure subscription](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). If you don't have a hub yet, you can follow the steps to [create an IoT hub by using the Azure portal](./iot-hub-create-through-portal.md).
### Endpoint service
To create an IoT hub route, you need at least one other Azure service to use as
Be sure to have *one* of the following resources to use when you create an endpoint your IoT hub route:
-* An Event Hubs resource (namespace and entity). If you need to create a new Event Hubs resource, see [Quickstart: Create an event hub by using the Azure portal](/azure/event-hubs/event-hubs-create).
+* An Event Hubs resource (namespace and entity). If you need to create a new Event Hubs resource, see [Quickstart: Create an event hub by using the Azure portal](../event-hubs/event-hubs-create.md).
-* A Service Bus queue resource (namespace and queue). If you need to create a new Service Bus queue, see [Use the Azure portal to create a Service Bus namespace and queue](/azure/service-bus-messaging/service-bus-quickstart-portal).
+* A Service Bus queue resource (namespace and queue). If you need to create a new Service Bus queue, see [Use the Azure portal to create a Service Bus namespace and queue](../service-bus-messaging/service-bus-quickstart-portal.md).
-* A Service Bus topic resource (namespace and topic). If you need to create a new Service Bus topic, see [Use the Azure portal to create a Service Bus topic and subscriptions to the topic](/azure/service-bus-messaging/service-bus-quickstart-topics-subscriptions-portal).
+* A Service Bus topic resource (namespace and topic). If you need to create a new Service Bus topic, see [Use the Azure portal to create a Service Bus topic and subscriptions to the topic](../service-bus-messaging/service-bus-quickstart-topics-subscriptions-portal.md).
-* An Azure Storage resource (account and container). If you need to create a new storage account in Azure, see [Create a storage account](/azure/storage/common/storage-account-create?tabs=azure-portal). When you create a storage account, you have many options, but you need only a new container in your account for this article.
+* An Azure Storage resource (account and container). If you need to create a new storage account in Azure, see [Create a storage account](../storage/common/storage-account-create.md?tabs=azure-portal). When you create a storage account, you have many options, but you need only a new container in your account for this article.
-* An Azure Cosmos DB resource (account, database, and container). If you need to create a new instance of Azure Cosmos DB, see [Create an Azure Cosmos DB account](/azure/cosmos-db/nosql/quickstart-portal#create-account). For the API option, select **Azure Cosmos DB for NoSQL**.
+* An Azure Cosmos DB resource (account, database, and container). If you need to create a new instance of Azure Cosmos DB, see [Create an Azure Cosmos DB account](../cosmos-db/nosql/quickstart-portal.md#create-account). For the API option, select **Azure Cosmos DB for NoSQL**.
## Create a route and endpoint
Decide which route type you want to create: an event hub, a Service Bus queue or
# [Event Hubs](#tab/eventhubs)
-To learn how to create an Event Hubs resource, see [Quickstart: Create an event hub by using the Azure portal](/azure/event-hubs/event-hubs-create).
+To learn how to create an Event Hubs resource, see [Quickstart: Create an event hub by using the Azure portal](../event-hubs/event-hubs-create.md).
1. In the Azure portal, go to your IoT hub. In the resource menu under **Hub settings**, select **Message routing**.
To learn how to create an Event Hubs resource, see [Quickstart: Create an event
# [Service Bus queue](#tab/servicebusqueue)
-To learn how to create a Service Bus queue, see [Use the Azure portal to create a Service Bus namespace and queue](/azure/service-bus-messaging/service-bus-quickstart-portal).
+To learn how to create a Service Bus queue, see [Use the Azure portal to create a Service Bus namespace and queue](../service-bus-messaging/service-bus-quickstart-portal.md).
1. In the Azure portal, go to your IoT hub. In the resource menu under **Hub settings**, select **Message routing**.
To learn how to create a Service Bus queue, see [Use the Azure portal to create
# [Service Bus topic](#tab/servicebustopic)
-To learn how to create a Service Bus topic, see [Use the Azure portal to create a Service Bus topic and subscriptions to the topic](/azure/service-bus-messaging/service-bus-quickstart-topics-subscriptions-portal).
+To learn how to create a Service Bus topic, see [Use the Azure portal to create a Service Bus topic and subscriptions to the topic](../service-bus-messaging/service-bus-quickstart-topics-subscriptions-portal.md).
1. In the Azure portal, go to your IoT hub. In the resource menu under **Hub settings**, select **Message routing**.
To learn how to create a Service Bus topic, see [Use the Azure portal to create
# [Azure Storage](#tab/azurestorage)
-To learn how to create an Azure Storage resource (with container), see [Create a storage account](/azure/iot-hub/tutorial-routing?tabs=portal#create-a-storage-account).
+To learn how to create an Azure Storage resource (with container), see [Create a storage account](./tutorial-routing.md?tabs=portal#create-a-storage-account).
1. In the Azure portal, go to your IoT hub. In the resource menu under **Hub settings**, select **Message routing**.
To learn how to create an Azure Storage resource (with container), see [Create a
# [Azure Cosmos DB](#tab/cosmosdb)
-To learn how to create an Azure Cosmos DB resource, see [Create an Azure Cosmos DB account](/azure/cosmos-db/nosql/quickstart-portal#create-account).
+To learn how to create an Azure Cosmos DB resource, see [Create an Azure Cosmos DB account](../cosmos-db/nosql/quickstart-portal.md#create-account).
1. In the Azure portal, go to your IoT hub. In the resource menu under **Hub settings**, select **Message routing**.
To learn how to create an Azure Cosmos DB resource, see [Create an Azure Cosmos
* **Collection**: Select your Azure Cosmos DB collection.
- * **Partition key name** and **Partition key template**: These values are created automatically based on your previous selections. You can leave the auto-generated values or you can change the partition template based on your business logic. For more information about partitioning, see [Partitioning and horizontal scaling in Azure Cosmos DB](/azure/cosmos-db/partitioning-overview).
+ * **Partition key name** and **Partition key template**: These values are created automatically based on your previous selections. You can leave the auto-generated values or you can change the partition template based on your business logic. For more information about partitioning, see [Partitioning and horizontal scaling in Azure Cosmos DB](../cosmos-db/partitioning-overview.md).
:::image type="content" source="media/how-to-routing-portal/add-cosmos-db-endpoint-form.png" alt-text="Screenshot that shows details of the Add a Cosmos DB endpoint form." lightbox="media/how-to-routing-portal/add-cosmos-db-endpoint-form.png":::
To delete a custom endpoint in the Azure portal:
In this how-to article, you learned how to create a route and an endpoint for Event Hubs, Service Bus queues and topics, Azure Storage, and Azure Cosmos DB.
-To learn more about message routing, see [Tutorial: Send device data to Azure Storage by using IoT Hub message routing](/azure/iot-hub/tutorial-routing?tabs=portal). In the tutorial, you create a storage route and test it with a device in your IoT hub.
+To learn more about message routing, see [Tutorial: Send device data to Azure Storage by using IoT Hub message routing](./tutorial-routing.md?tabs=portal). In the tutorial, you create a storage route and test it with a device in your IoT hub.
iot-hub How To Routing Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/how-to-routing-powershell.md
This article shows you how to create a route and endpoint in your hub in Azure IoT Hub and then delete your route and endpoint. Learn how to use Azure PowerShell to create routes and endpoints for Azure Event Hubs, Azure Service Bus queues and topics, and Azure Storage.
-To learn more about how routing works in IoT Hub, see [Use IoT Hub message routing to send device-to-cloud messages to different endpoints](/azure/iot-hub/iot-hub-devguide-messages-d2c). To walk through setting up a route that sends messages to storage and then testing on a simulated device, see [Tutorial: Send device data to Azure Storage by using IoT Hub message routing](/azure/iot-hub/tutorial-routing?tabs=portal).
+To learn more about how routing works in IoT Hub, see [Use IoT Hub message routing to send device-to-cloud messages to different endpoints](./iot-hub-devguide-messages-d2c.md). To walk through setting up a route that sends messages to storage and then testing on a simulated device, see [Tutorial: Send device data to Azure Storage by using IoT Hub message routing](./tutorial-routing.md?tabs=portal).
## Prerequisites
The procedures that are described in the article use the following resources:
### Azure PowerShell
-This article uses Azure PowerShell to work with IoT Hub and other Azure services. To use Azure PowerShell locally, install the [Azure PowerShell module](/powershell/azure/install-az-ps) on your computer. Alternatively, to use Azure PowerShell in a web browser, enable [Azure Cloud Shell](/azure/cloud-shell/overview).
+This article uses Azure PowerShell to work with IoT Hub and other Azure services. To use Azure PowerShell locally, install the [Azure PowerShell module](/powershell/azure/install-az-ps) on your computer. Alternatively, to use Azure PowerShell in a web browser, enable [Azure Cloud Shell](../cloud-shell/overview.md).
### IoT hub
To create an IoT hub route, you need an IoT hub that you created by using Azure
Be sure to have the following hub resource to use when you create your IoT hub route:
-* An IoT hub in your [Azure subscription](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). If you don't have a hub yet, you can follow the steps to [create an IoT hub by using the New-AzIotHub PowerShell cmdlet](/azure/iot-hub/iot-hub-create-using-powershell).
+* An IoT hub in your [Azure subscription](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). If you don't have a hub yet, you can follow the steps to [create an IoT hub by using the New-AzIotHub PowerShell cmdlet](./iot-hub-create-using-powershell.md).
### Endpoint service
To create an IoT hub route, you need at least one other Azure service to use as
Be sure to have *one* of the following resources to use when you create an endpoint your IoT hub route:
-* An Event Hubs resource (with container). If you need to create a new Event Hubs resource, see [Quickstart: Create an event hub by using Azure PowerShell](/azure/event-hubs/event-hubs-quickstart-powershell).
+* An Event Hubs resource (with container). If you need to create a new Event Hubs resource, see [Quickstart: Create an event hub by using Azure PowerShell](../event-hubs/event-hubs-quickstart-powershell.md).
-* A Service Bus queue resource. If you need to create a new Service Bus queue, see [Use Azure PowerShell to create a Service Bus namespace and queue](/azure/service-bus-messaging/service-bus-quickstart-powershell).
+* A Service Bus queue resource. If you need to create a new Service Bus queue, see [Use Azure PowerShell to create a Service Bus namespace and queue](../service-bus-messaging/service-bus-quickstart-powershell.md).
-* A Service Bus topic resource. If you need to create a new Service Bus topic, see the [New-AzServiceBusTopic](/powershell/module/az.servicebus/new-azservicebustopic) reference and the [Azure Service Bus messaging](/azure/service-bus-messaging/) documentation.
+* A Service Bus topic resource. If you need to create a new Service Bus topic, see the [New-AzServiceBusTopic](/powershell/module/az.servicebus/new-azservicebustopic) reference and the [Azure Service Bus messaging](../service-bus-messaging/index.yml) documentation.
-* An Azure Storage resource. If you need to create a new storage account in Azure, see [Create a storage account](/azure/storage/common/storage-account-create?tabs=azure-powershell).
+* An Azure Storage resource. If you need to create a new storage account in Azure, see [Create a storage account](../storage/common/storage-account-create.md?tabs=azure-powershell).
## Create resources and endpoints
To create a new Event Hubs resource that has an authorization rule:
```powershell
New-AzEventHubAuthorizationRule -ResourceGroupName MyResourceGroup -NamespaceName MyNamespace -EventHubName MyEventHub -Name MyAuthRule -Rights @('Manage', 'Send', 'Listen')
```
- For more information about access, see [Authorize access to Azure Event Hubs](/azure/event-hubs/authorize-access-event-hubs).
+ For more information about access, see [Authorize access to Azure Event Hubs](../event-hubs/authorize-access-event-hubs.md).
### Create an Event Hubs endpoint
The commands in the following procedures use these references:
### Create a Service Bus namespace and queue
-To create a new [Service Bus](/azure/service-bus-messaging/service-bus-messaging-overview) queue resource:
+To create a new [Service Bus](../service-bus-messaging/service-bus-messaging-overview.md) queue resource:
1. Create a new Service Bus namespace. For `Name`, use a unique value.
The commands in the following procedures use these references:
* [Az.IotHub](/powershell/module/az.iothub/) * [Az.ServiceBus](/powershell/module/az.servicebus/)
-With [Service Bus](/azure/service-bus-messaging/service-bus-messaging-overview) topics, users can subscribe to one or more topics. To create a topic, you also create a Service Bus namespace and subscription.
+With [Service Bus](../service-bus-messaging/service-bus-messaging-overview.md) topics, users can subscribe to one or more topics. To create a topic, you also create a Service Bus namespace and subscription.
### Create a Service Bus namespace, topic, and subscription
To create an Azure Storage endpoint and route, you need a Storage account and co
```powershell
New-AzStorageAccount -ResourceGroupName MyResourceGroup -Name mystorageaccount -Location westus -SkuName Standard_GRS
```
-1. Create a new container in your storage account. You need to create a context to your storage account in a variable, and then add the variable to the `Context` parameter. To learn about your options when you create a container, see [Manage blob containers by using PowerShell](/azure/storage/blobs/blob-containers-powershell). For `Name`, use a unique value for the name of your container.
+1. Create a new container in your storage account. You need to create a context to your storage account in a variable, and then add the variable to the `Context` parameter. To learn about your options when you create a container, see [Manage blob containers by using PowerShell](../storage/blobs/blob-containers-powershell.md). For `Name`, use a unique value for the name of your container.
```powershell
$ctx = New-AzStorageContext -StorageAccountName mystorageaccount -UseConnectedAccount `
Remove-AzIotHubRoute -ResourceGroupName MyResourceGroup -Name MyIotHub -RouteNam
In this how-to article, you learned how to create a route and endpoint for Event Hubs, Service Bus queues and topics, and Azure Storage.
-To learn more about message routing, see [Tutorial: Send device data to Azure Storage by using IoT Hub message routing](/azure/iot-hub/tutorial-routing?tabs=portal). In the tutorial, you create a storage route and test it with a device in your IoT hub.
+To learn more about message routing, see [Tutorial: Send device data to Azure Storage by using IoT Hub message routing](./tutorial-routing.md?tabs=portal). In the tutorial, you create a storage route and test it with a device in your IoT hub.
iot-hub Iot Hub Devguide Messages D2c https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-messages-d2c.md
IoT Hub supports writing to Cosmos DB in JSON (if specified in the message conte
1. In **Cosmos DB account**, choose an existing Cosmos DB account from a list of Cosmos DB accounts available for selection, then select an existing database and collection in **Database** and **Collection**, respectively. 1. In **Generate a synthetic partition key for messages**, select **Enable** if needed.
- To effectively support high-scale scenarios, you can enable [synthetic partition keys](/azure/cosmos-db/nosql/synthetic-partition-keys) for the Cosmos DB endpoint. As Cosmos DB is a hyperscale data store, all data/documents written to it must contain a field that represents a logical partition. Each logical partition has a maximum size of 20 GB. You can specify the partition key property name in **Partition key name**. The partition key property name is defined at the container level and can't be changed once it has been set.
+ To effectively support high-scale scenarios, you can enable [synthetic partition keys](../cosmos-db/nosql/synthetic-partition-keys.md) for the Cosmos DB endpoint. As Cosmos DB is a hyperscale data store, all data/documents written to it must contain a field that represents a logical partition. Each logical partition has a maximum size of 20 GB. You can specify the partition key property name in **Partition key name**. The partition key property name is defined at the container level and can't be changed once it has been set.
You can configure the synthetic partition key value by specifying a template in **Partition key template** based on your estimated data volume. For example, in manufacturing scenarios, your logical partition might be expected to approach its maximum limit of 20 GB within a month. In that case, you can define a synthetic partition key as a combination of the device ID and the month. The generated partition key value is automatically added to the partition key property for each new Cosmos DB record, ensuring logical partitions are created each month for each device. 1. In **Authentication type**, choose an authentication type for your Cosmos DB endpoint. You can choose any of the supported authentication types for accessing the database, based on your system setup. > [!CAUTION]
- > If you're using the system assigned managed identity for authenticating to Cosmos DB, you must use Azure CLI or Azure PowerShell to assign the Cosmos DB Built-in Data Contributor built-in role definition to the identity. Role assignment for Cosmos DB isn't currently supported from the Azure portal. For more information about the various roles, see [Configure role-based access for Azure Cosmos DB](/azure/cosmos-db/how-to-setup-rbac). To understand assigning roles via CLI, see [Manage Azure Cosmos DB SQL role resources.](/cli/azure/cosmosdb/sql/role)
+ > If you're using the system assigned managed identity for authenticating to Cosmos DB, you must use Azure CLI or Azure PowerShell to assign the Cosmos DB Built-in Data Contributor built-in role definition to the identity. Role assignment for Cosmos DB isn't currently supported from the Azure portal. For more information about the various roles, see [Configure role-based access for Azure Cosmos DB](../cosmos-db/how-to-setup-rbac.md). To understand assigning roles via CLI, see [Manage Azure Cosmos DB SQL role resources.](/cli/azure/cosmosdb/sql/role)
1. Select **Create** to complete the creation of your custom endpoint.
Use the [troubleshooting guide for routing](troubleshoot-message-routing.md) for
* [How to send device-to-cloud messages](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-nodejs)
-* For information about the SDKs you can use to send device-to-cloud messages, see [Azure IoT SDKs](iot-hub-devguide-sdks.md).
--
+* For information about the SDKs you can use to send device-to-cloud messages, see [Azure IoT SDKs](iot-hub-devguide-sdks.md).
iot-hub Iot Hub Devguide Messages Read Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-messages-read-custom.md
For more information about reading from custom endpoints, see:
* Reading from [Service Bus queues](../service-bus-messaging/service-bus-dotnet-get-started-with-queues.md). * Reading from [Service Bus topics](../service-bus-messaging/service-bus-dotnet-how-to-use-topics-subscriptions.md).
-* Reading from [Cosmos DB](/azure/cosmos-db/nosql/query/getting-started)
+* Reading from [Cosmos DB](../cosmos-db/nosql/query/getting-started.md)
## Next steps * For more information about IoT Hub endpoints, see [IoT Hub endpoints](iot-hub-devguide-endpoints.md). * For more information about the query language you use to define routing queries, see [Message Routing query syntax](iot-hub-devguide-routing-query-syntax.md).
-* The [Process IoT Hub device-to-cloud messages using routes](tutorial-routing.md) tutorial shows you how to use routing queries and custom endpoints.
--
+* The [Process IoT Hub device-to-cloud messages using routes](tutorial-routing.md) tutorial shows you how to use routing queries and custom endpoints.
iot-hub Iot Hub Devguide Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-security.md
There are three different ways for controlling access to IoT Hub:
> [!Tip]
-> You can enable a lock on your IoT resources to prevent them being accidentally or maliciously deleted. To learn more about Azure Resource locks, please visit, [Lock your resources to protect your infrastructure](/azure/azure-resource-manager/management/lock-resources?tabs=json)
+> You can enable a lock on your IoT resources to prevent them from being accidentally or maliciously deleted. To learn more about Azure resource locks, see [Lock your resources to protect your infrastructure](../azure-resource-manager/management/lock-resources.md?tabs=json).
## Next steps
- [Control access to IoT Hub using Azure Active Directory](iot-hub-dev-guide-azure-ad-rbac.md)
- [Control access to IoT Hub using shared access signature](iot-hub-dev-guide-sas.md)
-- [Authenticating a device to IoT Hub](iot-hub-dev-guide-sas.md#authenticating-a-device-to-iot-hub)
+- [Authenticating a device to IoT Hub](iot-hub-dev-guide-sas.md#authenticating-a-device-to-iot-hub)
iot-hub Query Twins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/query-twins.md
Query expressions can have a maximum length of 8192 characters.
Currently, comparisons are supported only between primitive types (not objects). For instance, `... WHERE properties.desired.config = properties.reported.config` is supported only if those properties have primitive values.
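As a quick illustration (not part of the original article), you can run such a twin query with the Azure CLI IoT extension:

```azurecli
# Requires the Azure IoT extension: az extension add --name azure-iot
# The comparison below is supported only when both properties hold primitive values
az iot hub query \
  --hub-name my-iot-hub \
  --query-command "SELECT * FROM devices WHERE properties.desired.config = properties.reported.config"
```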
-We recommend to not take a dependency on lastActivityTime found in Device Identity Properties for Twin Queries for any scenario. This field does not guarantee an accurate gauge of device status. Instead, please use IoT Device Lifecycle events to manage device state and activities. More information on how to use IoT Hub Lifecycle events in your solution, please visit [React to IoT Hub events by using Event Grid to trigger actions](/azure/iot-hub/iot-hub-event-grid).
+We recommend that you don't take a dependency on the lastActivityTime field found in Device Identity Properties for twin queries in any scenario. This field doesn't guarantee an accurate gauge of device status. Instead, use IoT device lifecycle events to manage device state and activities. For more information on how to use IoT Hub lifecycle events in your solution, see [React to IoT Hub events by using Event Grid to trigger actions](./iot-hub-event-grid.md).
> [!Note]
-> Avoid making any assumptions about the maximum latency of this operation. Please refer to [Latency Solutions](/azure/iot-hub/iot-hub-devguide-quotas-throttling) for more information on how to build your solution taking latency into account.
+> Avoid making any assumptions about the maximum latency of this operation. Please refer to [Latency Solutions](./iot-hub-devguide-quotas-throttling.md) for more information on how to build your solution taking latency into account.
## Next steps
-* Understand the basics of the [IoT Hub query language](iot-hub-devguide-query-language.md)
--
+* Understand the basics of the [IoT Hub query language](iot-hub-devguide-query-language.md)
key-vault About Keys Details https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/about-keys-details.md
You can specify additional application-specific metadata in the form of tags. Ke
## Key access control
-Access control for keys managed by Key Vault is provided at the level of a Key Vault that acts as the container of keys. The access control policy for keys is distinct from the access control policy for secrets in the same Key Vault. Users may create one or more vaults to hold keys, and are required to maintain scenario appropriate segmentation and management of keys. Access control for keys is independent of access control for secrets.
+Access control for keys managed by Key Vault is provided at the level of a Key Vault that acts as the container of keys. You can control access to keys by using Key Vault [role-based access control](../general/rbac-guide.md) (recommended) or the legacy [vault access policy](../general/assign-access-policy.md) permission model. The role-based permission model has three predefined roles for key management: 'Key Vault Crypto Officer', 'Key Vault Crypto User', and 'Key Vault Service Encryption User'. These roles can be scoped at the subscription, resource group, or vault level.
-The following permissions can be granted, on a per user / service principal basis, in the keys access control entry on a vault. These permissions closely mirror the operations allowed on a key object. Granting access to a service principal in key vault is a onetime operation, and it will remain same for all Azure subscriptions. You can use it to deploy as many certificates as you want.
+Vault access policy permission model permissions:
- Permissions for key management operations - *get*: Read the public part of a key, plus its attributes
The following permissions can be granted, on a per user / service principal basi
- *get rotation policy*: Retrieve rotation policy configuration - *set rotation policy*: Set rotation policy configuration
-For more information on working with keys, see [Key operations in the Key Vault REST API reference](/rest/api/keyvault). For information on establishing permissions, see [Vaults - Create or Update](/rest/api/keyvault/keyvault/vaults/create-or-update) and [Vaults - Update Access Policy](/rest/api/keyvault/keyvault/vaults/update-access-policy).
+For more information on working with keys, see [Key operations in the Key Vault REST API reference](/rest/api/keyvault).
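For example, granting the 'Key Vault Crypto User' role at vault scope with the Azure CLI could look like this sketch (the assignee and scope values are placeholders):

```azurecli
# Assign a predefined Key Vault role to a user or service principal, scoped to one vault
az role assignment create \
  --role "Key Vault Crypto User" \
  --assignee "user-or-app@contoso.com" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/my-resource-group/providers/Microsoft.KeyVault/vaults/my-vault"
```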
## Next steps - [About Key Vault](../general/overview.md)
kubernetes-fleet Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kubernetes-fleet/faq.md
The current preview of Azure Kubernetes Fleet Manager resource supports joining
Azure Kubernetes Fleet Manager resource is a regional resource. Support for region failover for disaster recovery use cases is in the [roadmap](https://aka.ms/fleet/roadmap).
+## What happens when the user changes the cluster identity of a joined cluster?
+Changing the identity of a member AKS cluster breaks communication between the fleet and that member cluster. Although the member agent uses the new identity to communicate with the fleet cluster, the fleet still needs to be made aware of the new identity. To update the fleet with the new identity, run the following command:
+
+```azurecli
+az fleet member create \
+ --resource-group ${GROUP} \
+ --fleet-name ${FLEET} \
+ --name ${MEMBER_NAME} \
+ --member-cluster-id ${MEMBER_CLUSTER_ID}
+```
+ ## Roadmap The roadmap for Azure Kubernetes Fleet Manager resource is available [here](https://aka.ms/fleet/roadmap).
load-balancer Backend Pool Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/backend-pool-management.md
az vm create \
* Load balancers or services such as Application Gateway can't be placed in the backend pool of the load balancer
* Inbound NAT Rules can't be specified by IP address
* You can configure IP based and NIC based backend pools for the same load balancer. You can't create a single backend pool that mixes backend addresses targeted by NIC and IP addresses within the same pool.
+ * A virtual machine in the same virtual network as an internal load balancer cannot access the frontend of the ILB and its backend VMs simultaneously
>[!Important] > When a backend pool is configured by IP address, it will behave as a Basic Load Balancer with default outbound enabled. For secure by default configuration and applications with demanding outbound needs, configure the backend pool by NIC.
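As an illustrative sketch of IP-based configuration (resource names are hypothetical; the load balancer, virtual network, and target VM are assumed to exist), a backend address can be added with the Azure CLI:

```azurecli
# Add an IP-based backend pool entry that targets a private IP in the virtual network
az network lb address-pool create \
  --resource-group my-resource-group \
  --lb-name my-load-balancer \
  --name my-ip-backend-pool \
  --vnet my-vnet \
  --backend-address name=backend-vm-1 ip-address=10.0.0.4
```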
load-balancer Protect Load Balancer With Ddos Standard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/protect-load-balancer-with-ddos-standard.md
Azure DDoS Protection Standard enables enhanced DDoS mitigation capabilities such as adaptive tuning, attack alert notifications, and monitoring to protect your public load balancers from large scale DDoS attacks. > [!IMPORTANT]
-> Azure DDoS protection Standard incurs a cost per public IP address in the virtual network where you enable the service. Ensure you delete the resources in this tutorial if you aren't using the resources in the future. For information about pricing, see [Azure DDoS Protection Pricing](https://azure.microsoft.com/pricing/details/ddos-protection/). For more information about Azure DDoS protection, see [What is Azure DDoS Protection?](/azure/ddos-protection/ddos-protection-overview).
+> Azure DDoS protection Standard incurs a cost per public IP address in the virtual network where you enable the service. Ensure you delete the resources in this tutorial if you aren't using the resources in the future. For information about pricing, see [Azure DDoS Protection Pricing](https://azure.microsoft.com/pricing/details/ddos-protection/). For more information about Azure DDoS protection, see [What is Azure DDoS Protection?](../ddos-protection/ddos-protection-overview.md).
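As a rough sketch of what enabling protection involves (names are hypothetical and not part of the original tutorial), you create a DDoS protection plan and associate it with the virtual network that holds the load balancer's protected resources:

```azurecli
# Create a DDoS protection plan
az network ddos-protection create \
  --resource-group my-resource-group \
  --name my-ddos-plan

# Associate the plan with the virtual network
az network vnet update \
  --resource-group my-resource-group \
  --name my-vnet \
  --ddos-protection-plan my-ddos-plan \
  --ddos-protection true
```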
In this tutorial, you learn how to:
When no longer needed, delete the resource group, load balancer, and all related
Advance to the next article to learn how to: > [!div class="nextstepaction"]
-> [Create a public load balancer with an IP-based backend](tutorial-load-balancer-ip-backend-portal.md)
+> [Create a public load balancer with an IP-based backend](tutorial-load-balancer-ip-backend-portal.md)
load-testing Concept Load Testing Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/concept-load-testing-concepts.md
Title: Key concepts for new users-
+ Title: Key concepts for Azure Load Testing
description: Learn how Azure Load Testing works, and the key concepts behind it.
load-testing How To Configure User Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-configure-user-properties.md
Last updated 04/27/2022
-zone_pivot_groups: load-testing-config
# Use JMeter user properties with Azure Load Testing Preview
Alternately, you can also [use environment variables and secrets in Azure Load T
## Add a JMeter user properties file to your load test
-You can define user properties for your JMeter test script by uploading a *.properties* file to the load test. The following code snippet shows an example user properties file:
+You can define user properties for your JMeter test script by uploading a *.properties* file to the load test. Azure Load Testing supports using a single properties file per load test. Additional property files are ignored.
+
+You can also specify [JMeter configuration settings](https://jmeter.apache.org/usermanual/properties_reference.html) in the user properties file to override default behavior.
+
+> [!NOTE]
+> Azure Load Testing overrides specific JMeter properties. Learn more about the list of [JMeter properties that Azure Load Testing overrides](./resource-jmeter-property-overrides.md).
+
+The following code snippet shows an example user properties file that defines three user properties and configures the `jmeter.save.saveservice.thread_name` configuration setting:
```properties
# peak-load.properties
durationSeconds=600
jmeter.save.saveservice.thread_name=false
```
-Azure Load Testing supports using a single properties file per load test. Additional property files are ignored.
-
-You can also specify [JMeter configuration settings](https://jmeter.apache.org/usermanual/properties_reference.html) in user properties file to override default behavior. For example, you can modify any of the `jmeter.save.saveservice.*` settings to configure the JMeter results file.
-
+# [Azure portal](#tab/portal)
To add a user properties file to your load test by using the Azure portal, follow these steps:
To add a user properties file to your load test by using the Azure portal, follo
You can select only one file as a user properties file for a load test. 1. Select **Apply** to modify the test, or **Review + create**, and then **Create** to create the new test.
+# [Azure Pipelines / GitHub Actions](#tab/pipelines+github)
If you run a load test within your CI/CD workflow, you add the user properties file to the source control repository. You then specify this properties file in the [load test configuration YAML file](./reference-test-config-yaml.md).
To add a user properties file to your load test, follow these steps:
The next time the CI/CD workflow runs, it will use the updated configuration.

## Reference properties in JMeter
You can [download the JMeter errors logs](./how-to-find-download-logs.md) to tro
## Next steps
+- Learn more about [JMeter properties that Azure Load Testing overrides](./resource-jmeter-property-overrides.md).
- Learn more about [parameterizing a load test by using environment variables and secrets](./how-to-parameterize-load-tests.md). - Learn more about [troubleshooting load test execution errors](./how-to-find-download-logs.md).
load-testing Resource Jmeter Property Overrides https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/resource-jmeter-property-overrides.md
+
+ Title: JMeter property overrides by Azure Load Testing
+description: 'The list of Apache JMeter properties that are overridden by Azure Load Testing. These properties are not available to redefine in your load test.'
+ Last updated: 01/12/2023
+# JMeter property overrides by Azure Load Testing
+
+Azure Load Testing enables you to specify JMeter configuration settings by [using a user properties file](./how-to-configure-user-properties.md). In this article, you learn which Apache JMeter properties Azure Load Testing already overrides. If you specify any of these properties in your load test, Azure Load Testing ignores your values.
+
+## JMeter properties
+
+This section lists the JMeter properties that Azure Load Testing overrides. Any value you specify for these properties is ignored by Azure Load Testing.
+
+* mode
+* sample_sender_strip_also_on_error
+* asynch.batch.queue.size
+* server.rmi.ssl.disable
+* jmeterengine.nongui.maxport
+* jmeterengine.nongui.port
+* client.tries
+* client.retries_delay
+* client.rmi.localport
+* server.rmi.localport
+* server_port
+* server.exitaftertest
+* jmeterengine.stopfail.system.exit
+* jmeterengine.remote.system.exit
+* jmeterengine.force.system.exit
+* jmeter.save.saveservice.output_format
+* jmeter.save.saveservice.autoflush
+* beanshell.server.file
+* jmeter.save.saveservice.connect_time
+* jpgc.repo.sendstats
+* jmeter.save.saveservice.timestamp_format
+* sampleresult.default.encoding
+* user.classpath
+* summariser.ignore_transaction_controller_sample_result
+
+## Next steps
+
+* Learn how to [Configure user properties in Azure Load Testing](./how-to-configure-user-properties.md).
logic-apps Biztalk Server Azure Integration Services Migration Approaches https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/biztalk-server-azure-integration-services-migration-approaches.md
Consider the following testing recommendations for your migration project:
- Set up mock response testing using static results.
- Regardless whether you set up automated tests, you can use the [static results capability](/azure/logic-apps/test-logic-apps-mock-data-static-results) in Azure Logic Apps to temporarily set mock responses at the action level. This functionality lets you emulate the behavior from a specific system that you want to call. You can then perform some initial testing in isolation and reduce the amount of data that you'd create in line of business systems.
+ Regardless of whether you set up automated tests, you can use the [static results capability](./test-logic-apps-mock-data-static-results.md) in Azure Logic Apps to temporarily set mock responses at the action level. This functionality lets you emulate the behavior from a specific system that you want to call. You can then perform some initial testing in isolation and reduce the amount of data that you'd create in line of business systems.
- Run side-by-side tests.
As a concrete example, you might rename a Service Bus connection in an **OrderQu
### Handle exceptions with scopes and "Run after" options
-[Scopes](/azure/logic-apps/logic-apps-control-flow-run-steps-group-scopes) provide the capability to group multiple actions so that you can implement Try-Catch-Finally behavior. The **Scope** action's functionality is similar to the **Region** concept in Visual Studio. On the designer, you can collapse and expand a scope's contents to improve developer productivity.
+[Scopes](./logic-apps-control-flow-run-steps-group-scopes.md) provide the capability to group multiple actions so that you can implement Try-Catch-Finally behavior. The **Scope** action's functionality is similar to the **Region** concept in Visual Studio. On the designer, you can collapse and expand a scope's contents to improve developer productivity.
-When you implement this pattern, you can also specify when to run the **Scope** action and the actions inside, based on the preceding action's execution status, which can be **Succeeded**, **Failed**, **Skipped**, or **TimedOut**. To set up this behavior, use the **Scope** action's [**Run after** (`runAfter`) options](/azure/logic-apps/logic-apps-exception-handling#manage-the-run-after-behavior):
+When you implement this pattern, you can also specify when to run the **Scope** action and the actions inside, based on the preceding action's execution status, which can be **Succeeded**, **Failed**, **Skipped**, or **TimedOut**. To set up this behavior, use the **Scope** action's [**Run after** (`runAfter`) options](./logic-apps-exception-handling.md#manage-the-run-after-behavior):
- **Is successful**
- **Has failed**
You've now learned more about available migration approaches, planning considera
> [!div class="nextstepaction"]
>
-> [Give feedback about migration guidance for BizTalk Server to Azure Integration Services](https://aka.ms/BizTalkMigrationGuidance)
+> [Give feedback about migration guidance for BizTalk Server to Azure Integration Services](https://aka.ms/BizTalkMigrationGuidance)
logic-apps Biztalk Server To Azure Integration Services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/biztalk-server-to-azure-integration-services-overview.md
By migrating your integration workloads to Azure Integration Services, you can r
||-|
| Modern Integration Platform as a Service (iPaaS) | Azure Integration Services provides capabilities not yet conceived when BizTalk Server was originally built, for example: <br><br>- The capability to create and manage REST APIs <br>- Scalable cloud infrastructure <br>- Authentication schemes that are modern, more secure, and easier to implement <br>- Simplified development tools, including many web browser-based experiences <br>- Automatic platform updates and integration with other cloud-native services |
| Consumption-based pricing | With traditional middleware platforms, you must often make significant capital investments in procuring licenses and infrastructure, forcing you to "build for peak" and creating inefficiencies. Azure Integration Services provides multiple pricing models that generally let you pay for what you use. Although some pricing models enable and provide access to more advanced features, you have the flexibility to pay for what you consume. |
-| Lower barrier to entry | BizTalk Server is a very capable middleware broker but requires significant time to learn and gain proficiency. Azure Integration Services reduces the time required to start, learn, build, and deliver solutions. For example, [Azure Logic Apps](/azure/logic-apps/logic-apps-overview) includes a visual designer that gives you a no-code or low-code experience for building declarative workflows. |
+| Lower barrier to entry | BizTalk Server is a very capable middleware broker but requires significant time to learn and gain proficiency. Azure Integration Services reduces the time required to start, learn, build, and deliver solutions. For example, [Azure Logic Apps](./logic-apps-overview.md) includes a visual designer that gives you a no-code or low-code experience for building declarative workflows. |
| SaaS connectivity | With REST APIs becoming standard for application integration, more SaaS companies have adopted this approach for exchanging data. Microsoft has built an expansive and continually growing connector ecosystem with hundreds of APIs to work with Microsoft and non-Microsoft services, systems, and protocols. In Azure Logic Apps, you can use the workflow designer to select operations from these connectors, easily create and authenticate connections, and configure the operations you want to use. This capability speeds up development and provides more consistency when authenticating access to these services using OAuth2. |
| Multiple geographical deployments | Azure currently offers [60+ announced regions](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/), more than any other cloud provider, so that you can easily choose the datacenters and regions that are right for you and your customers. This reach lets you deploy solutions in a consistent manner across many geographies and provides opportunities from both a scalability and redundancy perspective. |
Azure Integration Services includes the following cloud-based, serverless, scala
| Service | Description |
||-|
-| Azure Logic Apps | Create and run automated logic app workflows that integrate your apps, data, services, and systems. You can quickly develop highly scalable integration solutions for your enterprise and business-to-business (B2B) scenarios. Use the visual workflow designer to enable microservices, API orchestrations, and line-of-business integrations. To increase scale and portability while automating business-critical workflows, deploy and run anywhere that Kubernetes can run. <br><br>You can create either Consumption or Standard logic app resources. A Consumption logic app includes only one stateful workflow that runs in multi-tenant Azure Logic Apps. A Standard logic app can include multiple stateful or stateless workflows that run in single-tenant Azure Logic Apps, an App Service Environment v3, or Azure Arc enabled Logic Apps. <br><br>For positioning Azure Logic Apps within Azure Integration Services, this guide focuses on Standard logic apps, which provide the best balance between enterprise features, cost, and agility. For more information, see [Azure Logic Apps](/azure/logic-apps/logic-apps-overview). |
-| Azure Functions | Write less code, maintain less infrastructure, and save on costs to run applications. Without you having to deploy and maintain servers, the cloud infrastructure provides all the up-to-date resources needed to keep your applications running. For more information, see [Azure Functions](/azure/azure-functions/functions-overview). |
-| Azure Data Factory | Visually integrate all your data sources by using more than 90 built-in, maintenance-free connectors at no added cost. Easily construct Extract, Transform, and Load (ETL) and Extract, Load, and Transform (ELT) processes code-free in an intuitive environment, or you can write your own code. To unlock business insights, deliver your integrated data to Azure Synapse Analytics. For more information, see [Azure Data Factory](/azure/data-factory/introduction). |
-| Azure Service Bus | Transfer data between applications and services, even when offline, as messages using this highly reliable enterprise message broker. Get more flexibility when brokering messages between client and server with structured first-in, first-out (FIFO) messaging, publish-subscribe capabilities, and asynchronous operations. For more information, see [Azure Service Bus](/azure/service-bus-messaging/service-bus-messaging-overview). |
-| Azure Event Grid | Integrate applications using events delivered by an event broker to subscriber destinations, such as Azure services, other applications, or any endpoint where Event Grid has network access. Event sources can include other applications, SaaS services, and Azure services. For more information, see [Azure Event Grid](/azure/event-grid/overview). |
-| Azure API Management | Deploy API gateways side-by-side and optimize traffic flow with APIs hosted in Azure, other clouds, and on-premises. Meet security and compliance requirements, while you enjoy a unified management experience and full observability across all internal and external APIs. For more information, see [Azure API Management](/azure/api-management/api-management-key-concepts). |
+| Azure Logic Apps | Create and run automated logic app workflows that integrate your apps, data, services, and systems. You can quickly develop highly scalable integration solutions for your enterprise and business-to-business (B2B) scenarios. Use the visual workflow designer to enable microservices, API orchestrations, and line-of-business integrations. To increase scale and portability while automating business-critical workflows, deploy and run anywhere that Kubernetes can run. <br><br>You can create either Consumption or Standard logic app resources. A Consumption logic app includes only one stateful workflow that runs in multi-tenant Azure Logic Apps. A Standard logic app can include multiple stateful or stateless workflows that run in single-tenant Azure Logic Apps, an App Service Environment v3, or Azure Arc enabled Logic Apps. <br><br>For positioning Azure Logic Apps within Azure Integration Services, this guide focuses on Standard logic apps, which provide the best balance between enterprise features, cost, and agility. For more information, see [Azure Logic Apps](./logic-apps-overview.md). |
+| Azure Functions | Write less code, maintain less infrastructure, and save on costs to run applications. Without you having to deploy and maintain servers, the cloud infrastructure provides all the up-to-date resources needed to keep your applications running. For more information, see [Azure Functions](../azure-functions/functions-overview.md). |
+| Azure Data Factory | Visually integrate all your data sources by using more than 90 built-in, maintenance-free connectors at no added cost. Easily construct Extract, Transform, and Load (ETL) and Extract, Load, and Transform (ELT) processes code-free in an intuitive environment, or you can write your own code. To unlock business insights, deliver your integrated data to Azure Synapse Analytics. For more information, see [Azure Data Factory](../data-factory/introduction.md). |
+| Azure Service Bus | Transfer data between applications and services, even when offline, as messages using this highly reliable enterprise message broker. Get more flexibility when brokering messages between client and server with structured first-in, first-out (FIFO) messaging, publish-subscribe capabilities, and asynchronous operations. For more information, see [Azure Service Bus](../service-bus-messaging/service-bus-messaging-overview.md). |
+| Azure Event Grid | Integrate applications using events delivered by an event broker to subscriber destinations, such as Azure services, other applications, or any endpoint where Event Grid has network access. Event sources can include other applications, SaaS services, and Azure services. For more information, see [Azure Event Grid](../event-grid/overview.md). |
+| Azure API Management | Deploy API gateways side-by-side and optimize traffic flow with APIs hosted in Azure, other clouds, and on-premises. Meet security and compliance requirements, while you enjoy a unified management experience and full observability across all internal and external APIs. For more information, see [Azure API Management](../api-management/api-management-key-concepts.md). |
:::image type="content" source="./media/biztalk-server-to-azure-integration-services-overview/azure-integration-services-architecture-overview.png" alt-text="Diagram showing Azure Integration Services member services.":::
Beyond the previously described services, Microsoft also offers the following co
| Service | Description |
||-|
-| Azure Storage | Provides highly available, massively scalable, durable, secure, and modern storage for various data objects in the cloud. You can access these data objects from anywhere in the world over HTTP or HTTPS using a REST API. <br><br>Azure Integration Services uses these capabilities to securely store configuration and telemetry data for you while transactions flow through the platform. For more information, see [Azure Storage](/azure/storage/common/storage-introduction). |
-| Azure role-based access control (Azure RBAC) | Manage access to cloud resources, which is a critical function for any organization that uses the cloud. Azure RBAC is an authorization system built on Azure Resource Manager that provides fine-grained access management to Azure resources. You can manage who can access Azure resources, what they can do with those resources, and which areas they can access. For more information, see [Azure RBAC](/azure/role-based-access-control/overview). |
-| Azure Key Vault | Provides capabilities to help you solve problems related to secrets management, key management, and certificate management. <br><br>Azure Integration Services provides integration with Azure Key Vault through application configuration settings and through a connector. This capability lets you store secrets, credentials, keys, and certificates in a secure but convenient manner. For more information, see [Azure Key Vault](/azure/key-vault/general/overview). |
-| Azure Policy | Provides capabilities that help you enforce organizational standards and assess compliance in a scalable way. Through the compliance dashboard, you get an aggregated view so you can evaluate the overall state of the environment with the ability to drill down to per-resource, per-policy granularity. <br><br>Azure Integration Services integrates with Azure Policy so you can efficiently implement widespread governance. For more information, see [Azure Policy](/azure/governance/policy/overview). |
-| Azure Networking | Provides a wide variety of networking capabilities, including connectivity, application protection services, application delivery services, and networking monitoring. <br><br>Azure Integration Services uses these capabilities to provide connectivity across services using virtual networks and private endpoints. For more information, see [Azure Networking](/azure/networking/fundamentals/networking-overview). |
-| Azure Event Hubs | Build dynamic data pipelines and immediately respond to business challenges by streaming millions of events per second from any source with this fully managed, real-time data ingestion service that's simple, trusted, and scalable. <br><br>API Management performs custom logging using Event Hubs, which is one of the best solutions when implementing a decoupled tracking solution in Azure. For more information, see [Azure Event Hubs](/azure/event-hubs/event-hubs-about). |
+| Azure Storage | Provides highly available, massively scalable, durable, secure, and modern storage for various data objects in the cloud. You can access these data objects from anywhere in the world over HTTP or HTTPS using a REST API. <br><br>Azure Integration Services uses these capabilities to securely store configuration and telemetry data for you while transactions flow through the platform. For more information, see [Azure Storage](../storage/common/storage-introduction.md). |
+| Azure role-based access control (Azure RBAC) | Manage access to cloud resources, which is a critical function for any organization that uses the cloud. Azure RBAC is an authorization system built on Azure Resource Manager that provides fine-grained access management to Azure resources. You can manage who can access Azure resources, what they can do with those resources, and which areas they can access. For more information, see [Azure RBAC](../role-based-access-control/overview.md). |
+| Azure Key Vault | Provides capabilities to help you solve problems related to secrets management, key management, and certificate management. <br><br>Azure Integration Services provides integration with Azure Key Vault through application configuration settings and through a connector. This capability lets you store secrets, credentials, keys, and certificates in a secure but convenient manner. For more information, see [Azure Key Vault](../key-vault/general/overview.md). |
+| Azure Policy | Provides capabilities that help you enforce organizational standards and assess compliance in a scalable way. Through the compliance dashboard, you get an aggregated view so you can evaluate the overall state of the environment with the ability to drill down to per-resource, per-policy granularity. <br><br>Azure Integration Services integrates with Azure Policy so you can efficiently implement widespread governance. For more information, see [Azure Policy](../governance/policy/overview.md). |
+| Azure Networking | Provides a wide variety of networking capabilities, including connectivity, application protection services, application delivery services, and networking monitoring. <br><br>Azure Integration Services uses these capabilities to provide connectivity across services using virtual networks and private endpoints. For more information, see [Azure Networking](../networking/fundamentals/networking-overview.md). |
+| Azure Event Hubs | Build dynamic data pipelines and immediately respond to business challenges by streaming millions of events per second from any source with this fully managed, real-time data ingestion service that's simple, trusted, and scalable. <br><br>API Management performs custom logging using Event Hubs, which is one of the best solutions when implementing a decoupled tracking solution in Azure. For more information, see [Azure Event Hubs](../event-hubs/event-hubs-about.md). |
| Azure SQL Database | At some point, you might need to create custom logging strategies or custom configurations to support your integration solutions. While SQL Server is commonly used on premises for this purpose, Azure SQL Database might offer a viable solution when migrating on-premises SQL Server databases to the cloud. For more information, see [Azure SQL Database](/azure/azure-sql/azure-sql-iaas-vs-paas-what-is-overview). |
-| Azure App Configuration | Centrally manage application settings and feature flags. Modern programs, especially those running in a cloud, generally have many distributed components by nature. Spreading configuration settings across these components can lead to hard-to-troubleshoot errors during application deployment. With App Configuration, you can store all the settings for your application and secure their accesses in one place. For more information, see [Azure App Configuration](/azure/azure-app-configuration/overview). |
-| Azure Monitor | Application Insights, which is part of Azure Monitor, provides application performance management and monitoring for live apps. Store application telemetry and monitor the overall health of your integration platform. You also have the capability to set thresholds and get alerts when performance exceeds configured thresholds. For more information, see [Application Insights](/azure/azure-monitor/app/app-insights-overview). |
-| Azure Automation | Automate your Azure management tasks and orchestrate actions across external systems within Azure. Built on PowerShell workflow so you can use this language's many capabilities. For more information, see [Azure Automation](/azure/automation/overview). |
+| Azure App Configuration | Centrally manage application settings and feature flags. Modern programs, especially those running in a cloud, generally have many distributed components by nature. Spreading configuration settings across these components can lead to hard-to-troubleshoot errors during application deployment. With App Configuration, you can store all the settings for your application and secure their accesses in one place. For more information, see [Azure App Configuration](../azure-app-configuration/overview.md). |
+| Azure Monitor | Application Insights, which is part of Azure Monitor, provides application performance management and monitoring for live apps. Store application telemetry and monitor the overall health of your integration platform. You also have the capability to set thresholds and get alerts when performance exceeds configured thresholds. For more information, see [Application Insights](../azure-monitor/app/app-insights-overview.md). |
+| Azure Automation | Automate your Azure management tasks and orchestrate actions across external systems within Azure. Built on PowerShell workflow so you can use this language's many capabilities. For more information, see [Azure Automation](../automation/overview.md). |
## Supported developer experiences
BizTalk Server offers the following example advantages:
#### Azure Integration Services
-In [Azure Logic Apps](/azure/logic-apps/logic-apps-overview), you can create executable business processes and applications as logic app workflows by using a "building block" way of programming with a visual designer and prebuilt operations from hundreds of connectors, requiring minimal code. A logic app workflow starts with a trigger operation followed by one or more action operations with each operation functioning as a logical step in the workflow implementation process. Your workflow can use actions to call external software, services, and systems. Some actions perform programming tasks, such as conditionals (if statements), loops, data operations, variable management, and more.
+In [Azure Logic Apps](./logic-apps-overview.md), you can create executable business processes and applications as logic app workflows by using a "building block" way of programming with a visual designer and prebuilt operations from hundreds of connectors, requiring minimal code. A logic app workflow starts with a trigger operation followed by one or more action operations with each operation functioning as a logical step in the workflow implementation process. Your workflow can use actions to call external software, services, and systems. Some actions perform programming tasks, such as conditionals (if statements), loops, data operations, variable management, and more.
Azure Logic Apps offers the following example advantages:
Integration platforms offer ways to solve problems in a consistent and unified m
- Integration account
- For [Azure Logic Apps](/azure/logic-apps/logic-apps-overview), an integration account is a cloud-based container and Azure resource that provides centralized access to reusable artifacts. For Consumption logic app workflows, these artifacts include trading partners, agreements, XSD schemas, XSLT maps, Liquid template-based maps, certificates, batch configurations, and .NET Fx assemblies.
+ For [Azure Logic Apps](./logic-apps-overview.md), an integration account is a cloud-based container and Azure resource that provides centralized access to reusable artifacts. For Consumption logic app workflows, these artifacts include trading partners, agreements, XSD schemas, XSLT maps, Liquid template-based maps, certificates, batch configurations, and .NET Fx assemblies.
For Standard logic app workflows, Azure Logic Apps recently [introduced support for calling .NET Fx assemblies from XSLT transformations](https://techcommunity.microsoft.com/t5/integrations-on-azure-blog/net-framework-assembly-support-added-to-azure-logic-apps/ba-p/3669120) without requiring an integration account. Alternatively, you can add schemas, maps, and assemblies to a Standard logic app project in Visual Studio Code and subsequently deploy to Azure.
The [BizTalk Adapter Framework](/biztalk/core/developing-custom-adapters) offers
#### Azure Integration Services
-When you build workflows with [Azure Logic Apps](/azure/logic-apps/logic-apps-overview), you can use prebuilt connectors to help you easily and quickly work with data, events, and resources in other apps, services, systems, protocols, and platforms, usually without having to write any code. Azure Logic Apps provides a constantly expanding gallery with hundreds of connectors that you can use. You can build integration solutions for many services and systems, cloud-based or on-premises, from both Microsoft and partners, such as BizTalk Server, Salesforce, Office 365, SQL databases, most Azure services, mainframes, APIs, and more. Some connectors provide operations that perform programming operations, such as conditional (if) statements, loops, data operations, variables management, and so on. If no connector is available for the resource that you want, you can use the generic HTTP operation to communicate with the service, or you can create a custom connector.
+When you build workflows with [Azure Logic Apps](./logic-apps-overview.md), you can use prebuilt connectors to help you easily and quickly work with data, events, and resources in other apps, services, systems, protocols, and platforms, usually without having to write any code. Azure Logic Apps provides a constantly expanding gallery with hundreds of connectors that you can use. You can build integration solutions for many services and systems, cloud-based or on-premises, from both Microsoft and partners, such as BizTalk Server, Salesforce, Office 365, SQL databases, most Azure services, mainframes, APIs, and more. Some connectors provide operations that perform programming operations, such as conditional (if) statements, loops, data operations, variables management, and so on. If no connector is available for the resource that you want, you can use the generic HTTP operation to communicate with the service, or you can create a custom connector.
Technically, a connector is a proxy or a wrapper around an API that the underlying service or system uses to communicate with Azure Logic Apps. This connector provides the operations that you use in your workflows to perform tasks. An operation is available either as a trigger or action with properties that you can configure. Some triggers and actions also require that you first create and configure a connection to the underlying service or system. If necessary, you then authenticate access with a user account.
Most connectors in Azure Logic Apps are either a built-in connector or managed c
For more information, see the following documentation:

-- [About connectors in Azure Logic Apps](/azure/connectors/apis-list)
-- [Built-in connectors overview](/azure/connectors/built-in)
-- [Managed connectors overview](/azure/connectors/managed)
+- [About connectors in Azure Logic Apps](../connectors/apis-list.md)
+- [Built-in connectors overview](../connectors/built-in.md)
+- [Managed connectors overview](../connectors/managed.md)
- [Managed connectors available in Azure Logic Apps](/connectors/connector-reference/connector-reference-logicapps-connectors)

### Application connectivity
Adapters provide the connectivity capabilities in BizTalk Server and run locally
#### Azure Integration Services
-Connectors provide the connectivity capabilities in [Azure Logic Apps](/azure/logic-apps/logic-apps-overview) and offer an abstraction on top of APIs that are usually owned by the underlying SaaS system. For example, services such as SharePoint are built using an API-first approach where APIs provide functionality to the service for end users, but the same functionality is exposed for other systems to call through an API. To simplify calling these APIs, connectors use metadata to describe the messaging contract so that developers know what data is expected in the request and in the response.
+Connectors provide the connectivity capabilities in [Azure Logic Apps](./logic-apps-overview.md) and offer an abstraction on top of APIs that are usually owned by the underlying SaaS system. For example, services such as SharePoint are built using an API-first approach where APIs provide functionality to the service for end users, but the same functionality is exposed for other systems to call through an API. To simplify calling these APIs, connectors use metadata to describe the messaging contract so that developers know what data is expected in the request and in the response.
The following screenshot shows the connector search experience for a Standard logic app workflow in single-tenant Azure Logic Apps. When you select the **Built-in** tab, you can find built-in connectors such as Azure Functions, Azure Service Bus, SQL Server, Azure Storage, File System, HTTP, and more. On the **Azure** tab, you can find more than 800 connectors, including other Microsoft SaaS connectors, partner SaaS connectors, and so on.
BizTalk Server can expose WCF-BasicHTTP receive locations as endpoints within Az
The connectivity model in Azure Integration Services differs from BizTalk Server, partially due to the evolution of the API economy. As more organizations expose access to underlying systems and data, a platform-agnostic approach was needed. REST is now the dominant architectural approach to designing modern web services.
-In [Azure Logic Apps](/azure/logic-apps/logic-apps-overview), [REST](/azure/architecture/best-practices/api-design) is the default approach for connecting systems. As Microsoft and other software vendors expose RESTful services on top of their systems and data, Azure Logic Apps can expose and consume this type of information. The OpenAPI specification makes this capability possible for both humans and computers to understand the interaction between a client and server through metadata. As part of this understanding, both request and response payloads are derived, which means you can use dynamic content to populate a workflow action's inputs and use the outputs from the response in downstream actions.
+In [Azure Logic Apps](./logic-apps-overview.md), [REST](/azure/architecture/best-practices/api-design) is the default approach for connecting systems. As Microsoft and other software vendors expose RESTful services on top of their systems and data, Azure Logic Apps can expose and consume this type of information. The OpenAPI specification makes this capability possible for both humans and computers to understand the interaction between a client and server through metadata. As part of this understanding, both request and response payloads are derived, which means you can use dynamic content to populate a workflow action's inputs and use the outputs from the response in downstream actions.
-Based on the software vendor who implements the underlying service that a connector calls, [authentication schemes](/azure/logic-apps/logic-apps-securing-a-logic-app#authentication-types-for-triggers-and-actions-that-support-authentication) vary by connector. Generally, these schemes include the following types:
+Based on the software vendor who implements the underlying service that a connector calls, [authentication schemes](./logic-apps-securing-a-logic-app.md) vary by connector. Generally, these schemes include the following types:
-- [Basic](/azure/logic-apps/logic-apps-securing-a-logic-app#basic-authentication)
-- [Client Certificate](/azure/logic-apps/logic-apps-securing-a-logic-app#client-certificate-authentication)
-- [Active Directory OAuth](/azure/logic-apps/logic-apps-securing-a-logic-app#azure-active-directory-oauth-authentication)
-- [Raw](/azure/logic-apps/logic-apps-securing-a-logic-app#raw-authentication)
-- [Managed Identity](/azure/logic-apps/logic-apps-securing-a-logic-app#managed-identity-authentication)
+- [Basic](./logic-apps-securing-a-logic-app.md#basic-authentication)
+- [Client Certificate](./logic-apps-securing-a-logic-app.md#client-certificate-authentication)
+- [Active Directory OAuth](./logic-apps-securing-a-logic-app.md#azure-active-directory-oauth-authentication)
+- [Raw](./logic-apps-securing-a-logic-app.md#raw-authentication)
+- [Managed Identity](./logic-apps-securing-a-logic-app.md#managed-identity-authentication)
-Microsoft provides strong layers of protection by [encrypting data during transit](/azure/security/fundamentals/encryption-overview#encryption-of-data-in-transit) and at rest. When Azure customer traffic moves between datacenters, outside physical boundaries that aren't controlled by Microsoft or on behalf of Microsoft, a data-link layer encryption method that uses [IEEE 802.1AE MAC Security Standards (MACsec)](https://1.ieee802.org/security/802-1ae/) applies from point-to-point across the underlying network hardware.
+Microsoft provides strong layers of protection by [encrypting data during transit](../security/fundamentals/encryption-overview.md#encryption-of-data-in-transit) and at rest. When Azure customer traffic moves between datacenters, outside physical boundaries that aren't controlled by Microsoft or on behalf of Microsoft, a data-link layer encryption method that uses [IEEE 802.1AE MAC Security Standards (MACsec)](https://1.ieee802.org/security/802-1ae/) applies from point-to-point across the underlying network hardware.
-Microsoft gives you the option to use [Transport Layer Security (TLS) protocol](/azure/security/fundamentals/encryption-overview#tls-encryption-in-azure) for protecting data that travels between cloud services and customers. Microsoft datacenters negotiate a TLS connection with client systems that connect to Azure services. TLS provides strong authentication, message privacy, and integrity, which enables detection of message tampering, interception, and forgery along with interoperability, algorithm flexibility, and ease of deployment and use.
+Microsoft gives you the option to use [Transport Layer Security (TLS) protocol](../security/fundamentals/encryption-overview.md#tls-encryption-in-azure) for protecting data that travels between cloud services and customers. Microsoft datacenters negotiate a TLS connection with client systems that connect to Azure services. TLS provides strong authentication, message privacy, and integrity, which enables detection of message tampering, interception, and forgery along with interoperability, algorithm flexibility, and ease of deployment and use.
While this section focused on RESTful connectivity through connectors, you can implement SOAP web service connectivity through the custom connector experience.
BizTalk Server doesn't include the concept of blocking specific adapters from di
#### Azure Integration Services
-If your organization doesn't permit connecting to restricted or unapproved resources by using managed connectors in [Azure Logic Apps](/azure/logic-apps/logic-apps-overview), you can block the capability to create and use those connections in your logic app workflows. With Azure Policy, you can define and enforce policies that prevent creating or using the connections for connectors that you want to block. For example, for security reasons, you might want to block connections to specific social media platforms or other services and systems.
+If your organization doesn't permit connecting to restricted or unapproved resources by using managed connectors in [Azure Logic Apps](./logic-apps-overview.md), you can block the capability to create and use those connections in your logic app workflows. With Azure Policy, you can define and enforce policies that prevent creating or using the connections for connectors that you want to block. For example, for security reasons, you might want to block connections to specific social media platforms or other services and systems.
### Message durability
BizTalk Server provides all these capabilities out-of-the-box. You don't need to
#### Azure Integration Services
-[Azure Logic Apps](/azure/logic-apps/logic-apps-overview) provides message durability in the following ways:
+[Azure Logic Apps](./logic-apps-overview.md) provides message durability in the following ways:
- Stateful workflows, which are the default in Consumption logic apps and available in Standard logic apps, have checkpoints that track the workflow state and store messages as they pass through workflow actions. This functionality provides access to rich data stored in the trigger and workflow instance run history where you can review detailed input and output values.
Publish-subscribe (pub-sub) capabilities exist through the [MessageBox database]
#### Azure Integration Services
-With an architecture completely different from BizTalk Server, most services in Azure Integration Services are event-based. If you need to implement a publish-subscribe solution, you can use [Azure Service Bus](/azure/service-bus-messaging/service-bus-messaging-overview). This service is a fully managed enterprise message broker with message queues and publish-subscribe topics in a namespace. You can use Azure Service Bus to decouple applications and services from each other, providing the following benefits:
+With an architecture completely different from BizTalk Server, most services in Azure Integration Services are event-based. If you need to implement a publish-subscribe solution, you can use [Azure Service Bus](../service-bus-messaging/service-bus-messaging-overview.md). This service is a fully managed enterprise message broker with message queues and publish-subscribe topics in a namespace. You can use Azure Service Bus to decouple applications and services from each other, providing the following benefits:
- Load balance work across competing workers.
- Safely route and transfer data with control across service and application boundaries.
- Coordinate transactional work that requires a high degree of reliability.
-[Azure Logic Apps](/azure/logic-apps/logic-apps-overview) includes an [Azure Service Bus connector](/connectors/servicebus/) that you can use to publish and subscribe to messages. The benefit to using Service Bus is that you can use messaging independently from your workflow. Unlike BizTalk Server, your messaging is decoupled from your workflow platform. Although messaging and workflow capabilities are decoupled in Azure Integration Services, you can create message subscriptions in Azure Service Bus, which has support for [message properties (user properties)](/rest/api/servicebus/message-headers-and-properties#message-properties). Use these properties to provide key-value pairs that are evaluated by filters created on a [topic subscription](/azure/service-bus-messaging/service-bus-quickstart-topics-subscriptions-portal). You define these user properties when you set up an Azure Service Bus operation by adding one or more key-value pairs. For a demonstration, see the following video: [Pub Sub Messaging using Azure Integration Services - Part 2 Content Based Routing](https://youtu.be/1ZMJhWGDVro).
+[Azure Logic Apps](./logic-apps-overview.md) includes an [Azure Service Bus connector](/connectors/servicebus/) that you can use to publish and subscribe to messages. The benefit to using Service Bus is that you can use messaging independently from your workflow. Unlike BizTalk Server, your messaging is decoupled from your workflow platform. Although messaging and workflow capabilities are decoupled in Azure Integration Services, you can create message subscriptions in Azure Service Bus, which has support for [message properties (user properties)](/rest/api/servicebus/message-headers-and-properties#message-properties). Use these properties to provide key-value pairs that are evaluated by filters created on a [topic subscription](../service-bus-messaging/service-bus-quickstart-topics-subscriptions-portal.md). You define these user properties when you set up an Azure Service Bus operation by adding one or more key-value pairs. For a demonstration, see the following video: [Pub Sub Messaging using Azure Integration Services - Part 2 Content Based Routing](https://youtu.be/1ZMJhWGDVro).
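
To illustrate the user properties described above, the following minimal sketch publishes a message whose application properties a subscription's SQL filter (for example, `OrderType = 'Priority'`) can evaluate for content-based routing. It uses the `azure-servicebus` Python SDK; the connection string, topic name, and property values are placeholders:

```python
from azure.servicebus import ServiceBusClient, ServiceBusMessage

CONN_STR = "<service-bus-connection-string>"  # placeholder

with ServiceBusClient.from_connection_string(CONN_STR) as client:
    with client.get_topic_sender(topic_name="orders") as sender:
        # application_properties carries the key-value pairs that
        # topic subscription filters evaluate.
        message = ServiceBusMessage(
            '{"orderId": 42}',
            application_properties={"OrderType": "Priority"},
        )
        sender.send_messages(message)
```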
-Outside Azure Integration Services, you can also implement publish-subscribe scenarios by using we can also use [Azure Cache for Redis](/azure/azure-cache-for-redis/cache-overview).
+Outside Azure Integration Services, you can also implement publish-subscribe scenarios by using [Azure Cache for Redis](../azure-cache-for-redis/cache-overview.md).
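
For comparison, a publish-subscribe exchange against Azure Cache for Redis takes only a few lines with the `redis` Python package. This is a minimal sketch; the cache host name and access key are placeholders:

```python
import redis

# Placeholders: supply your cache host name and access key.
r = redis.Redis(
    host="<cache-name>.redis.cache.windows.net",
    port=6380,
    password="<access-key>",
    ssl=True,
)

pubsub = r.pubsub()
pubsub.subscribe("orders")        # register interest in a channel
r.publish("orders", "order-42")   # broadcast to all current subscribers

msg = pubsub.get_message(ignore_subscribe_messages=True, timeout=5)
print(msg)  # e.g. {'type': 'message', 'channel': b'orders', 'data': b'order-42'}
```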
### Business rules engine
BizTalk Server includes a forward-chaining rules engine that lets you construct
#### Azure Integration Services
-Although no equivalent rules engine capability currently exists in Azure, customers often use [Azure Functions](/azure/azure-functions/functions-overview) to implement rules using custom code. They then access these rules using the built-in Azure Functions connector in Azure Logic Apps.
+Although no equivalent rules engine capability currently exists in Azure, customers often use [Azure Functions](../azure-functions/functions-overview.md) to implement rules using custom code. They then access these rules using the built-in Azure Functions connector in Azure Logic Apps.
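
As one possible shape for this approach, the following minimal sketch exposes a toy rule as an HTTP-triggered function using the Azure Functions Python v2 programming model. The route name and rule logic are illustrative assumptions, not a prescribed design:

```python
import json
import azure.functions as func

app = func.FunctionApp(http_auth_level=func.AuthLevel.FUNCTION)

@app.route(route="discount")
def discount(req: func.HttpRequest) -> func.HttpResponse:
    """Toy rule: orders of 100 units or more get a 10 percent discount."""
    order = req.get_json()
    rate = 0.10 if order.get("quantity", 0) >= 100 else 0.0
    return func.HttpResponse(
        json.dumps({"discountRate": rate}),
        mimetype="application/json",
    )
```

A workflow can then call an endpoint like this through the Azure Functions connector and branch on the returned discount rate.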
For more information about future investments in this area, see the [Road Map](#road-map) section later in this guide.
Beyond the core XML transformations, BizTalk Server also provides encoding and d
- Enterprise Integration Pack
- This component follows similar concepts in BizTalk Server and makes B2B capabilities easy to use in [Azure Logic Apps](/azure/logic-apps/logic-apps-overview). However, one major difference is that the Enterprise Integration Pack is architecturally based on integration accounts. These accounts simplify how you store, manage, and use artifacts, such as trading partners, agreements, maps (XSLT or Liquid templates), schemas, and certificates, for B2B scenarios.
+ This component follows similar concepts in BizTalk Server and makes B2B capabilities easy to use in [Azure Logic Apps](./logic-apps-overview.md). However, one major difference is that the Enterprise Integration Pack is architecturally based on integration accounts. These accounts simplify how you store, manage, and use artifacts, such as trading partners, agreements, maps (XSLT or Liquid templates), schemas, and certificates, for B2B scenarios.
- Liquid templates
To achieve these scenarios, multiple approaches exist, for example:
- Hybrid Connections
- Both an Azure service and a feature in Azure App Service, Hybrid Connections support scenarios and offers capabilities beyond those used in Azure App Service. For more information about usage outside Azure App Service, see [Azure Relay Hybrid Connections](/azure/azure-relay/relay-hybrid-connections-protocol). Within Azure App Service, you can use Hybrid Connections to access application resources in any network that can make outbound calls to Azure over port 443. Hybrid Connections provide access from your app to a TCP endpoint and doesn't enable a new way to access your app. In Azure App Service, each hybrid connection correlates to a single TCP host and port combination. This functionality enables your apps to access resources on any OS, provided that a TCP endpoint exists. Hybrid Connections doesn't know or care about the application protocol or what you want to access. This feature simply provides network access.
+ Both an Azure service and a feature in Azure App Service, Hybrid Connections support scenarios and offer capabilities beyond those used in Azure App Service. For more information about usage outside Azure App Service, see [Azure Relay Hybrid Connections](../azure-relay/relay-hybrid-connections-protocol.md). Within Azure App Service, you can use Hybrid Connections to access application resources in any network that can make outbound calls to Azure over port 443. Hybrid Connections provide access from your app to a TCP endpoint and don't enable a new way to access your app. In Azure App Service, each hybrid connection correlates to a single TCP host and port combination. This functionality enables your apps to access resources on any OS, provided that a TCP endpoint exists. Hybrid Connections don't know or care about the application protocol or what you want to access. This feature simply provides network access.
- Virtual network integration
- With [Azure Virtual Network](/azure/virtual-network/virtual-networks-overview) integration, you can connect your Azure resource to a virtual network configured in Azure, giving your app access to resources in that virtual network. Virtual network integration in Azure Logic Apps is used only to make outbound calls from your Azure resource to your virtual network.
+ With [Azure Virtual Network](../virtual-network/virtual-networks-overview.md) integration, you can connect your Azure resource to a virtual network configured in Azure, giving your app access to resources in that virtual network. Virtual network integration in Azure Logic Apps is used only to make outbound calls from your Azure resource to your virtual network.
- With [virtual network peering](/azure/virtual-network/virtual-network-peering-overview), you can connect your on-premises networks to Azure, which provides bi-directional connectivity between on-premises resources and Azure services. Azure Integration Services provides virtual network connectivity, allowing for hybrid integration. The following image shows a Standard logic app resource with the Networking page open and virtual network integration enabled as highlighted in the **Outbound Traffic** box. This configuration makes sure that all outbound traffic leaves from this virtual network.
+ With [virtual network peering](../virtual-network/virtual-network-peering-overview.md), you can connect virtual networks to each other and, combined with a VPN gateway or Azure ExpressRoute, get bi-directional connectivity between on-premises resources and Azure services. Azure Integration Services provides virtual network connectivity, allowing for hybrid integration. The following image shows a Standard logic app resource with the Networking page open and virtual network integration enabled as highlighted in the **Outbound Traffic** box. This configuration makes sure that all outbound traffic leaves from this virtual network.
:::image type="content" source="./media/biztalk-server-to-azure-integration-services-overview/standard-logic-app-networking-page-virtual-network-integration.png" alt-text="Screenshot showing Azure portal, Standard logic app resource, and Networking page with virtual network integration enabled.":::

- Private endpoints
- A [private endpoint](/azure/private-link/private-endpoint-overview) is a network interface that uses a private IP address from your virtual network. This network interface privately and securely connects to an Azure resource that's powered by [Azure Private Link](/azure/private-link/private-link-overview). By enabling a private endpoint, you bring that Azure resource into your virtual network and allow resources in the network to make inbound calls to your Azure resource.
+ A [private endpoint](../private-link/private-endpoint-overview.md) is a network interface that uses a private IP address from your virtual network. This network interface privately and securely connects to an Azure resource that's powered by [Azure Private Link](../private-link/private-link-overview.md). By enabling a private endpoint, you bring that Azure resource into your virtual network and allow resources in the network to make inbound calls to your Azure resource.
The following table shows the network connectivity methods that each Azure Integration Services resource can use:
You can extend BizTalk in many ways by using custom .NET Fx code, for example:
#### Azure Integration Services
-Azure Functions provides the capability for you to write code that you can run from the Azure Functions connector in [Azure Logic Apps](/azure/logic-apps/logic-apps-overview). The Functions platform supports various programming languages and runtimes, which offer much flexibility. These functions are generally designed to have short execution times, and a rich set of developer tools exists to support local development and debugging.
+Azure Functions provides the capability for you to write code that you can run from the Azure Functions connector in [Azure Logic Apps](./logic-apps-overview.md). The Functions platform supports various programming languages and runtimes, which offer much flexibility. These functions are generally designed to have short execution times, and a rich set of developer tools exists to support local development and debugging.
In Azure Logic Apps, the **Inline Code** connector provides the action named **Execute JavaScript Code**. You can use this action to write small code snippets in JavaScript. These code snippets are also expected to have short execution times and support dynamic content inputs and outputs. After the code runs, the output is available for downstream actions in the workflow. Although no direct debugging support currently exists for this action, you can view the inputs and outputs in the workflow instance's run history.
BizTalk includes [Enterprise Single Sign-On (SSO)](/biztalk/core/enterprise-sing
#### Azure Integration Services
-[Azure Logic Apps](/azure/logic-apps/logic-apps-overview) supports the following security capabilities:
+[Azure Logic Apps](./logic-apps-overview.md) supports the following security capabilities:
- Azure Key Vault
- You can store credentials, secrets, API keys, and certificates using [Azure Key Vault](/azure/key-vault/general/basic-concepts). In Azure Logic Apps, you can access this information by using the [Azure Key Vault connector](/connectors/keyvault/) and exclude this information from the platform's logs and run history by using the [secure inputs and outputs functionality](/azure/logic-apps/logic-apps-securing-a-logic-app#obfuscate).
+ You can store credentials, secrets, API keys, and certificates using [Azure Key Vault](../key-vault/general/basic-concepts.md). In Azure Logic Apps, you can access this information by using the [Azure Key Vault connector](/connectors/keyvault/) and exclude this information from the platform's logs and run history by using the [secure inputs and outputs functionality](./logic-apps-securing-a-logic-app.md#obfuscate).
Later in the [Tracking](#tracking) section, this guide describes the run history functionality, which provides a step-by-step replay of a workflow's execution. Although Azure Logic Apps offers the value proposition of capturing every input and output in a workflow run, sometimes you need to manage access to sensitive data more granularly. You can set up obfuscation for this data by using the secure inputs and outputs capability on triggers and actions to hide such content from run history and prevent sending this data to Azure Monitor, specifically Log Analytics and Application Insights. The following image shows an example result from enabling secure inputs and secure outputs in run history.
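
Outside a workflow, for example in a companion script or function, the same secrets can be read with the Azure Key Vault Python SDK instead of the connector. A minimal sketch, assuming a hypothetical vault URL and secret name:

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# Placeholders: vault URL and secret name are illustrative.
client = SecretClient(
    vault_url="https://<your-vault>.vault.azure.net",
    credential=DefaultAzureCredential(),
)
secret = client.get_secret("ServiceBusConnectionString")
print(secret.name)  # avoid logging secret.value
```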
BizTalk includes [Enterprise Single Sign-On (SSO)](/biztalk/core/enterprise-sing
- Managed identities
- Some connectors support using a [managed identity](/azure/active-directory/managed-identities-azure-resources/overview) for authenticating access to resources protected by Azure Active Directory (Azure AD). When you use a managed identity to authenticate your connection, you don't have to provide credentials, secrets, or Azure AD tokens.
+ Some connectors support using a [managed identity](../active-directory/managed-identities-azure-resources/overview.md) for authenticating access to resources protected by Azure Active Directory (Azure AD). When you use a managed identity to authenticate your connection, you don't have to provide credentials, secrets, or Azure AD tokens.
### Application management and access management
Administrators use the [BizTalk Server Administrator Console](/biztalk/core/usin
#### Azure Integration Services
-The [Azure portal](/azure/azure-portal/azure-portal-overview) is a common tool that administrators and support personnel use to view and monitor the health of interfaces. For Azure Logic Apps, this experience includes rich transaction traces that are available through run history.
+The [Azure portal](../azure-portal/azure-portal-overview.md) is a common tool that administrators and support personnel use to view and monitor the health of interfaces. For Azure Logic Apps, this experience includes rich transaction traces that are available through run history.
-Granular [role-based access controls (RBAC)](/azure/role-based-access-control/overview) are also available so you can manage and restrict access to Azure resources at various levels.
+Granular [role-based access controls (RBAC)](../role-based-access-control/overview.md) are also available so you can manage and restrict access to Azure resources at various levels.
### Storage
As you're responsible for provisioning and managing your SQL databases, high ava
#### Azure Integration Services
-[Azure Logic Apps](/azure/logic-apps/logic-apps-overview) relies on [Azure Storage](/azure/storage/common/storage-introduction) to store and automatically [encrypt data at rest](/azure/logic-apps/logic-apps-securing-a-logic-app). This encryption protects your data and helps you meet your organizational security and compliance commitments. By default, Azure Storage uses Microsoft-managed keys to encrypt your data. For more information, see [Azure Storage encryption for data at rest](/azure/storage/common/storage-service-encryption).
+[Azure Logic Apps](./logic-apps-overview.md) relies on [Azure Storage](../storage/common/storage-introduction.md) to store and automatically [encrypt data at rest](./logic-apps-securing-a-logic-app.md). This encryption protects your data and helps you meet your organizational security and compliance commitments. By default, Azure Storage uses Microsoft-managed keys to encrypt your data. For more information, see [Azure Storage encryption for data at rest](../storage/common/storage-service-encryption.md).
-When you work with Azure Storage through the Azure portal, [all transactions take place over HTTPS](/azure/security/fundamentals/encryption-overview#azure-storage-transactions). You can also work with Azure Storage by using the Storage REST API over HTTPS. To enforce using HTTPS when you call the REST APIs to access objects in storage accounts, enable the secure transfer that's required for the storage account.
+When you work with Azure Storage through the Azure portal, [all transactions take place over HTTPS](../security/fundamentals/encryption-overview.md#azure-storage-transactions). You can also work with Azure Storage by using the Storage REST API over HTTPS. To enforce using HTTPS when you call the REST APIs to access objects in storage accounts, enable the secure transfer that's required for the storage account.
### Data configuration
The separation between configuration and code becomes important when you want to
- Azure Key Vault
- This service stores and protects cryptographic keys and other secrets used by applications and cloud services. Because secure key management is essential to protect data in the cloud, use [Azure Key Vault](/azure/key-vault/general/overview) to encrypt and store keys and secrets, such as passwords.
+ This service stores and protects cryptographic keys and other secrets used by applications and cloud services. Because secure key management is essential to protect data in the cloud, use [Azure Key Vault](../key-vault/general/overview.md) to encrypt and store keys and secrets, such as passwords.
- Azure App Configuration
- This service centrally manages application settings and feature flags. You can store configurations for all your Azure apps in a universal, hosted location. Manage configurations effectively and reliably in real time and without affecting customers by avoiding time-consuming redeployments. [Azure App Configuration](/azure/azure-app-configuration/overview) is built for speed, scalability, and security.
+ This service centrally manages application settings and feature flags. You can store configurations for all your Azure apps in a universal, hosted location. Manage configurations effectively and reliably in real time and without affecting customers by avoiding time-consuming redeployments. [Azure App Configuration](../azure-app-configuration/overview.md) is built for speed, scalability, and security.
- Azure Cosmos DB
- This service is a fully managed NoSQL database for modern app development with single-digit millisecond response times plus automatic and instant scalability that guarantee speed at any scale. You can load configuration data into [Azure Cosmos DB](/azure/cosmos-db/introduction) and then access that data using the [Azure Cosmos DB connector](/connectors/documentdb/) in Azure Logic Apps.
+ This service is a fully managed NoSQL database for modern app development with single-digit millisecond response times plus automatic and instant scalability that guarantee speed at any scale. You can load configuration data into [Azure Cosmos DB](../cosmos-db/introduction.md) and then access that data using the [Azure Cosmos DB connector](/connectors/documentdb/) in Azure Logic Apps.
- Azure Table Storage
- This service provides another storage facility to keep configuration data at a low cost. You can easily access this data using the [Azure Table Storage connector](/connectors/azuretables/) in Azure Logic Apps. For more information, see [Azure Table Storage](/azure/storage/tables/table-storage-overview).
+ This service provides another storage facility to keep configuration data at a low cost. You can easily access this data using the [Azure Table Storage connector](/connectors/azuretables/) in Azure Logic Apps. For more information, see [Azure Table Storage](../storage/tables/table-storage-overview.md).
- Custom caching
- You can also implement custom caching solutions with Azure Integration Services. Popular approaches include using [caching policies](/azure/api-management/api-management-caching-policies#CachingPolicies) in [Azure API Management](/azure/api-management/api-management-key-concepts) and [Azure Cache for Redis](/azure/azure-cache-for-redis/cache-overview).
+ You can also implement custom caching solutions with Azure Integration Services. Popular approaches include using [caching policies](/azure/api-management/api-management-caching-policies#CachingPolicies) in [Azure API Management](../api-management/api-management-key-concepts.md) and [Azure Cache for Redis](../azure-cache-for-redis/cache-overview.md).
- Custom database
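To illustrate the Azure Key Vault option above, here's a minimal sketch of reading a secret with the Azure SDK for Python (the vault URL and secret name are hypothetical):

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# Authenticate with whatever identity is available (managed identity, CLI login, and so on).
credential = DefaultAzureCredential()
client = SecretClient(vault_url="https://<vault-name>.vault.azure.net", credential=credential)

# Retrieve the secret instead of hard-coding it in application settings.
secret = client.get_secret("service-api-password")
print(secret.name)  # avoid printing secret.value outside of local debugging
```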
Some foundational differences exist between processing large files with an on-pr
##### File size limits
-In Azure, file size limits exist to ensure consistent and reliable experiences. To validate your scenario, make sure to review the [service limits documentation for Azure Logic Apps](/azure/logic-apps/logic-apps-limits-and-config?#messages). Some connectors support [message chunking](/azure/logic-apps/logic-apps-handle-large-messages) for messages that exceed the default message size limit, which varies based on the connector. Message chunking works by splitting a large message into smaller messages.
+In Azure, file size limits exist to ensure consistent and reliable experiences. To validate your scenario, make sure to review the [service limits documentation for Azure Logic Apps](./logic-apps-limits-and-config.md#messages). Some connectors support [message chunking](./logic-apps-handle-large-messages.md) for messages that exceed the default message size limit, which varies based on the connector. Message chunking works by splitting a large message into smaller messages.
-Azure Logic Apps isn't the only service that has message size limits. For example, Azure Service Bus also has [such limits](/azure/service-bus-messaging/service-bus-premium-messaging). For more information about handling large messages in Azure Service Bus, see [Large messages support](/azure/service-bus-messaging/service-bus-premium-messaging#large-messages-support).
+Azure Logic Apps isn't the only service that has message size limits. For example, Azure Service Bus also has [such limits](../service-bus-messaging/service-bus-premium-messaging.md). For more information about handling large messages in Azure Service Bus, see [Large messages support](../service-bus-messaging/service-bus-premium-messaging.md#large-messages-support).
##### Claim-check pattern
To avoid file size limitations, you can implement the [claim-check pattern](/azu
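As a sketch of the claim-check pattern under assumed connection strings and names, the large payload goes to Blob Storage while only a lightweight reference travels through Service Bus:

```python
from azure.storage.blob import BlobServiceClient
from azure.servicebus import ServiceBusClient, ServiceBusMessage

# 1. Store the large payload in Blob Storage; the blob is the "claim".
blob_service = BlobServiceClient.from_connection_string("<storage-connection-string>")
blob_client = blob_service.get_blob_client(container="payloads", blob="order-12345.json")
with open("order-12345.json", "rb") as data:
    blob_client.upload_blob(data, overwrite=True)

# 2. Send only a lightweight reference through Service Bus.
with ServiceBusClient.from_connection_string("<servicebus-connection-string>") as sb_client:
    with sb_client.get_queue_sender(queue_name="orders") as sender:
        sender.send_messages(ServiceBusMessage(blob_client.url))

# The receiver reads the small message, then downloads the payload from the blob URL.
```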
##### Azure Data Factory
-[Azure Data Factory](/azure/data-factory/introduction) provides another option for handling large files. This service is Azure's [ELT offering](/azure/data-factory/introduction) for scalable serverless data integration and data transformation with a code-free visual experience for intuitive authoring and single-pane-of-glass monitoring and management. You can also lift and shift existing [SQL Server Integration Services (SSIS)](/sql/integration-services/sql-server-integration-services) packages to Azure and run them with full compatibility in Azure Data Factory. The SSIS Integration Runtime offers a fully managed service, so you don't have to worry about infrastructure management. For more information, see [Lift and shift SQL Server Integration Services workloads to the cloud](/sql/integration-services/lift-shift/ssis-azure-lift-shift-ssis-packages-overview).
+[Azure Data Factory](../data-factory/introduction.md) provides another option for handling large files. This service is Azure's [ELT offering](../data-factory/introduction.md) for scalable serverless data integration and data transformation with a code-free visual experience for intuitive authoring and single-pane-of-glass monitoring and management. You can also lift and shift existing [SQL Server Integration Services (SSIS)](/sql/integration-services/sql-server-integration-services) packages to Azure and run them with full compatibility in Azure Data Factory. The SSIS Integration Runtime offers a fully managed service, so you don't have to worry about infrastructure management. For more information, see [Lift and shift SQL Server Integration Services workloads to the cloud](/sql/integration-services/lift-shift/ssis-azure-lift-shift-ssis-packages-overview).
In on-premises architectures, SSIS was a popular option for managing the loading of large files into databases. As the cloud equivalent for that architecture, Azure Data Factory can address the transformation and movement of large datasets across various data sources, such as file systems, databases, SAP, Azure Blob Storage, Azure Data Explorer, Oracle, DB2, Amazon RDS, and more. When you have large data processing requirements, consider using Azure Data Factory as a better option over Azure Logic Apps and Azure Service Bus.
In on-premises architectures, SSIS was a popular option for managing the loading
#### Azure Integration Services -- [Azure Monitor](/azure/azure-monitor/overview)
+- [Azure Monitor](../azure-monitor/overview.md)
- To monitor Azure resources, you can use this service and the [Log Analytics](/azure/azure-monitor/logs/log-analytics-workspace-overview) capability as a comprehensive solution for collecting, analyzing, and acting on telemetry data from your cloud and on-premises environments.
+ To monitor Azure resources, you can use this service and the [Log Analytics](../azure-monitor/logs/log-analytics-workspace-overview.md) capability as a comprehensive solution for collecting, analyzing, and acting on telemetry data from your cloud and on-premises environments.
-- In [Azure Logic Apps](/azure/logic-apps/logic-apps-overview), the following options are available:
+- In [Azure Logic Apps](./logic-apps-overview.md), the following options are available:
- - For Consumption logic app workflows, you can install the Logic Apps Management Solution (Preview) in the Azure portal and set up Azure Monitor logs to collect diagnostic data. After you set up your logic app to send that data to an Azure Log Analytics workspace, telemetry flows to where the Logic Apps Management Solution can provide health visualizations. For more information, see [Set up Azure Monitor logs and collect diagnostics data for Azure Logic Apps](/azure/logic-apps/monitor-logic-apps-log-analytics). With diagnostics enabled, you can also use Azure Monitor to send alerts based on different signal types such as when a trigger or a run fails. For more information, see [Monitor run status, review trigger history, and set up alerts for Azure Logic Apps](/azure/logic-apps/monitor-logic-apps?tabs=consumption#set-up-monitoring-alerts).
+ - For Consumption logic app workflows, you can install the Logic Apps Management Solution (Preview) in the Azure portal and set up Azure Monitor logs to collect diagnostic data. After you set up your logic app to send that data to an Azure Log Analytics workspace, telemetry flows to where the Logic Apps Management Solution can provide health visualizations. For more information, see [Set up Azure Monitor logs and collect diagnostics data for Azure Logic Apps](./monitor-logic-apps-log-analytics.md). With diagnostics enabled, you can also use Azure Monitor to send alerts based on different signal types such as when a trigger or a run fails. For more information, see [Monitor run status, review trigger history, and set up alerts for Azure Logic Apps](./monitor-logic-apps.md?tabs=consumption#set-up-monitoring-alerts).
- - For Standard logic app workflows, you can enable Application Insights at logic app resource creation to send diagnostic logging and traces from your logic app's workflows. In Application Insights, you can view an [application map](/azure/azure-monitor/app/app-map) to better understand the performance and health characteristics of your interfaces. Application Insights also includes [availability capabilities](/azure/azure-monitor/app/availability-overview) for you to configure synthetic tests that proactively call endpoints and then evaluate the response for specific HTTP status codes or payload. Based upon your configured criteria, you can send notifications to stakeholders or call a webhook for additional orchestration capabilities.
+ - For Standard logic app workflows, you can enable Application Insights at logic app resource creation to send diagnostic logging and traces from your logic app's workflows. In Application Insights, you can view an [application map](../azure-monitor/app/app-map.md) to better understand the performance and health characteristics of your interfaces. Application Insights also includes [availability capabilities](../azure-monitor/app/availability-overview.md) for you to configure synthetic tests that proactively call endpoints and then evaluate the response for specific HTTP status codes or payload. Based upon your configured criteria, you can send notifications to stakeholders or call a webhook for additional orchestration capabilities.
- [Serverless 360](https://www.serverless360.com/) is an external solution from [Kovai](https://www.kovai.co/) that provides monitoring and management through mapping Azure services, such as Azure Logic Apps, Azure Service Bus, Azure API Management, and Azure Functions. You can reprocess messages by using dead letter queues in Azure Service Bus, enable self-healing to address intermittent service disruptions, and set up proactive monitoring through synthetic transactions.
The following section describes options to track artifacts for performance monit
#### Azure Integration Services
-Azure Logic Apps provides rich run history so that developers and support analysts can review action by action telemetry, including all processed inputs and outputs. To help protect any sensitive data, you can [enable secure inputs and outputs](/azure/logic-apps/logic-apps-securing-a-logic-app?tabs=azure-portal#obfuscate) on individual actions in workflows. This capability obfuscates or hides the data in logs and workflow run histories to avoid leaks.
+Azure Logic Apps provides rich run history so that developers and support analysts can review action by action telemetry, including all processed inputs and outputs. To help protect any sensitive data, you can [enable secure inputs and outputs](./logic-apps-securing-a-logic-app.md?tabs=azure-portal#obfuscate) on individual actions in workflows. This capability obfuscates or hides the data in logs and workflow run histories to avoid leaks.
-Beyond data obfuscation, you can use [Azure RBAC](/azure/role-based-access-control/overview) rules to protect data access. Azure RBAC includes two built-in roles specifically for Azure Logic Apps, which are [Logic App Contributor and Logic App Operator](/azure/logic-apps/logic-apps-securing-a-logic-app#secure-operations).
+Beyond data obfuscation, you can use [Azure RBAC](../role-based-access-control/overview.md) rules to protect data access. Azure RBAC includes two built-in roles specifically for Azure Logic Apps, which are [Logic App Contributor and Logic App Operator](./logic-apps-securing-a-logic-app.md#secure-operations).
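For example, granting the read-only operator role on a single Consumption logic app with the Azure CLI might look like this sketch (the principal ID and scope are placeholders):

```azurecli
az role assignment create --assignee <user-or-group-object-id> --role "Logic App Operator" --scope /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Logic/workflows/<logic-app-name>
```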
-Beyond Azure RBAC, you can also [restrict access to run history in Azure Logic Apps by IP address range](/azure/logic-apps/logic-apps-securing-a-logic-app#restrict-ip).
+Beyond Azure RBAC, you can also [restrict access to run history in Azure Logic Apps by IP address range](./logic-apps-securing-a-logic-app.md#restrict-ip).
### Hosting
You can install and run BizTalk Server on your own hardware, on-premises virtual
| WS2 | 2 | 7 | | WS3 | 4 | 14 |
- For the latest information, see [Pricing tiers in the Standard model](/azure/logic-apps/logic-apps-pricing#standard-pricing-tiers).
+ For the latest information, see [Pricing tiers in the Standard model](./logic-apps-pricing.md#standard-pricing-tiers).
- Availability and redundancy
- In Azure, [availability zones](/azure/reliability/availability-zones-overview#availability-zones) provide resiliency, distributed availability, and active-active-active zone scalability. To increase availability for your logic app workloads, you can [enable availability zone support](/azure/logic-apps/set-up-zone-redundancy-availability-zone), but only when you create your logic app. You'll need at least three separate availability zones in any Azure region that supports and enables zone redundancy. The Azure Logic Apps platform distributes these zones and logic app workloads across these zones. This capability is a key requirement for enabling resilient architectures and providing high availability if datacenter failures happen in a region. For more information, see [Build solutions for high availability using availability zones](/azure/architecture/high-availability/building-solutions-for-high-availability).
+ In Azure, [availability zones](../reliability/availability-zones-overview.md#availability-zones) provide resiliency, distributed availability, and active-active-active zone scalability. To increase availability for your logic app workloads, you can [enable availability zone support](/azure/logic-apps/set-up-zone-redundancy-availability-zone), but only when you create your logic app. You'll need at least three separate availability zones in any Azure region that supports and enables zone redundancy. The Azure Logic Apps platform distributes logic app workloads across these zones. This capability is a key requirement for enabling resilient architectures and providing high availability if datacenter failures happen in a region. For more information, see [Build solutions for high availability using availability zones](/azure/architecture/high-availability/building-solutions-for-high-availability).
- Isolated and dedicated environment
- For Standard logic apps, you have the option to select an App Service Environment (ASE) v3 for your deployment environment. With an ASE v3, you get a fully isolated and dedicated environment to run applications at high scale with predictable pricing. You pay only for the [ASE App Service plan](/azure/logic-apps/single-tenant-overview-compare), no matter how many logic apps that you create and run.
+ For Standard logic apps, you have the option to select an App Service Environment (ASE) v3 for your deployment environment. With an ASE v3, you get a fully isolated and dedicated environment to run applications at high scale with predictable pricing. You pay only for the [ASE App Service plan](./single-tenant-overview-compare.md), no matter how many logic apps that you create and run.
##### Azure Service Bus
Azure Service Bus offers various pricing tiers so that you can choose the best t
| Predictable performance | Variable latency | | Fixed pricing | Pay as you go variable pricing | | Ability to scale workload up and down | Not available |
-| Message size up to 100 MB. See [Large message support](/azure/service-bus-messaging/service-bus-premium-messaging#large-messages-support). | Message size up to 256 KB |
+| Message size up to 100 MB. See [Large message support](../service-bus-messaging/service-bus-premium-messaging.md#large-messages-support). | Message size up to 256 KB |
-For the latest information, see [Service Bus Premium and Standard messaging tiers](/azure/service-bus-messaging/service-bus-premium-messaging).
+For the latest information, see [Service Bus Premium and Standard messaging tiers](../service-bus-messaging/service-bus-premium-messaging.md).
##### Azure API Management Azure API Management offers various pricing tiers so that you can choose the best tier that meets your needs. Each tier has its own capabilities; the tiers are named Consumption, Developer, Basic, Standard, and Premium.
-The capabilities in these tiers range from Azure AD integration, Azure virtual network support, built-in cache, self-hosted gateways, and more. For more information about these tiers and their capabilities, see [Feature-based comparison of the Azure API Management tiers](/azure/api-management/api-management-features).
+The capabilities in these tiers include Azure AD integration, Azure virtual network support, built-in cache, self-hosted gateways, and more. For more information about these tiers and their capabilities, see [Feature-based comparison of the Azure API Management tiers](../api-management/api-management-features.md).
##### Azure Data Factory
-Azure Data Factory offers various pricing models so that you can choose the best model that meets your needs. The options vary based upon the [runtime type](/azure/data-factory/concepts-integration-runtime#integration-runtime-types), which includes the Azure Integration Runtime, Azure Managed VNET Integration Runtime, and the Self-Hosted Integration Runtime. Within each runtime offering, consider the support for orchestrations, data movement activity, pipeline activity, and external pipeline activity. For more information about cost planning and pricing, see [Plan to manage costs for Azure Data Factory](/azure/data-factory/plan-manage-costs) and [Understanding Data Factory pricing through examples](/azure/data-factory/pricing-concepts)
+Azure Data Factory offers various pricing models so that you can choose the best model that meets your needs. The options vary based upon the [runtime type](../data-factory/concepts-integration-runtime.md#integration-runtime-types), which includes the Azure Integration Runtime, Azure Managed VNET Integration Runtime, and the Self-Hosted Integration Runtime. Within each runtime offering, consider the support for orchestrations, data movement activity, pipeline activity, and external pipeline activity. For more information about cost planning and pricing, see [Plan to manage costs for Azure Data Factory](../data-factory/plan-manage-costs.md) and [Understanding Data Factory pricing through examples](../data-factory/pricing-concepts.md).
### Deployment
logic-apps Logic Apps Enterprise Integration Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-maps.md
This article shows how to add a map to your integration account. If you're worki
* Standard workflows
- * Only XSLT 1.0 is supported.
-
- * References to external assemblies from maps aren't supported.
+ * References to external assemblies from maps are currently in preview. To configure support for external assemblies, see [.NET Framework assembly support for XSLT transformations added to Azure Logic Apps (Standard)](https://techcommunity.microsoft.com/t5/integrations-on-azure-blog/net-framework-assembly-support-added-to-azure-logic-apps/ba-p/3669120).
* No limits apply to map file sizes.
logic-apps Workflow Definition Language Functions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/workflow-definition-language-functions-reference.md
addProperty(<object>, '<property>', <value>)
To add a parent property to an existing property, use the `setProperty()` function, not the `addProperty()` function. Otherwise, the function returns only the child object as output. ```
-setProperty(<object>['<parent-property>'], '<parent-property>', addProperty(<object>['<parent-property>'], '<child-property>', <value>)
+setProperty(<object>, '<parent-property>', addProperty(<object>['<parent-property>'], '<child-property>', <value>))
``` | Parameter | Required | Type | Description |
setProperty(<object>, '<property>', <value>)
To set the child property in a child object, use a nested `setProperty()` call instead. Otherwise, the function returns only the child object as output. ```
-setProperty(<object>['<parent-property>'], '<parent-property>', setProperty(<object>['parentProperty'], '<child-property>', <value>))
+setProperty(<object>, '<parent-property>', setProperty(<object>['<parent-property>'], '<child-property>', <value>))
``` | Parameter | Required | Type | Description |
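For instance, assuming a hypothetical `customer` variable that contains an `address` object, setting a nested `country` property looks like this:

```
setProperty(variables('customer'), 'address', setProperty(variables('customer')['address'], 'country', 'USA'))
```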
machine-learning Concept Component https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-component.md
Title: "What is a component (preview)"
+ Title: "What is a component"
description: Use Azure Machine Learning components to build machine learning pipelines.
machine-learning Concept Soft Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-soft-delete.md
Azure Machine Learning recovers Azure RBAC role assignments for the workspace id
Recovery of a workspace may not always be possible. Azure Machine Learning stores workspace metadata on [other Azure resources associated with the workspace](concept-workspace.md#associated-resources). If these dependent Azure resources were deleted, the workspace may not be recovered or correctly restored. Dependencies of the Azure Machine Learning workspace must be recovered first, before recovering a deleted workspace. Azure Container Registry isn't a hard requirement for recovery.
-Enable [data protection capabilities on Azure Storage](/azure/storage/blobs/soft-delete-blob-overview) to improve chances of successful recovery.
+Enable [data protection capabilities on Azure Storage](../storage/blobs/soft-delete-blob-overview.md) to improve chances of successful recovery.
## Permanently delete a soft-deleted workspace
When you select *Permanently delete* on a soft-deleted workspace, it triggers ha
During the time of preview, workspace soft delete is enabled on an opt-in basis per Azure subscription. When soft delete is enabled for a subscription, it's enabled for all Azure Machine Learning workspaces in that subscription.
-To enable workspace soft delete on your Azure subscription, [register the preview feature](/azure/azure-resource-manager/management/preview-features?tabs=azure-portal#register-preview-feature) in the Azure portal. Select `Workspace soft delete` under the `Microsoft.MachineLearningServices` resource provider. It may take 15 minutes for the UX to appear in the Azure portal after registering your subscription.
+To enable workspace soft delete on your Azure subscription, [register the preview feature](../azure-resource-manager/management/preview-features.md?tabs=azure-portal#register-preview-feature) in the Azure portal. Select `Workspace soft delete` under the `Microsoft.MachineLearningServices` resource provider. It may take 15 minutes for the UX to appear in the Azure portal after registering your subscription.
Before disabling workspace soft delete on an Azure subscription, purge or recover soft-deleted workspaces. After you disable soft delete on a subscription, workspaces that remain in soft deleted state are automatically purged when the retention period elapses.
For more information, see the [Export or delete workspace data](how-to-export-de
## Next steps + [Create and manage a workspace](how-to-manage-workspace.md)
-+ [Export or delete workspace data](how-to-export-delete-data.md)
++ [Export or delete workspace data](how-to-export-delete-data.md)
machine-learning How To Access Data Batch Endpoints Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-data-batch-endpoints-jobs.md
Batch endpoints can be used to perform batch scoring on large amounts of data. S
Batch endpoints support reading files located in the following storage options:
-* Azure Machine Learning Data Stores. The following stores are supported:
- * Azure Blob Storage
- * Azure Data Lake Storage Gen1
- * Azure Data Lake Storage Gen2
-* Azure Machine Learning Data Assets. The following types are supported:
+* [Azure Machine Learning Data Assets](#input-data-from-a-data-asset). The following types are supported:
* Data assets of type Folder (`uri_folder`). * Data assets of type File (`uri_file`). * Datasets of type `FileDataset` (Deprecated).
-* Azure Storage Accounts. The following storage containers are supported:
+* [Azure Machine Learning Data Stores](#input-data-from-data-stores). The following stores are supported:
+ * Azure Blob Storage
+ * Azure Data Lake Storage Gen1
+ * Azure Data Lake Storage Gen2
+* [Azure Storage Accounts](#input-data-from-azure-storage-accounts). The following storage containers are supported:
* Azure Data Lake Storage Gen1 * Azure Data Lake Storage Gen2 * Azure Blob Storage
Batch endpoints support reading files located in the following storage options:
> __Deprecation notice__: Datasets of type `FileDataset` (V1) are deprecated and will be retired in the future. Existing batch endpoints relying on this functionality will continue to work but batch endpoints created with GA CLIv2 (2.4.0 and newer) or GA REST API (2022-05-01 and newer) will not support V1 dataset.
-## Reading data from data stores
+## Input data from a data asset
-Data from Azure Machine Learning registered data stores can be directly referenced by batch deployments jobs. In this example, we're going to first upload some data to the default data store in the Azure Machine Learning workspace and then run a batch deployment on it. Follow these steps to run a batch endpoint job using data stored in a data store:
+Azure Machine Learning data assets (formerly known as datasets) are supported as inputs for jobs. Follow these steps to run a batch endpoint job using data stored in a registered data asset in Azure Machine Learning:
-1. Let's get access to the default data store in the Azure Machine Learning workspace. If your data is in a different store, you can use that store instead. There's no requirement of using the default data store.
+> [!WARNING]
+> Data assets of type Table (`MLTable`) aren't currently supported.
- # [Azure CLI](#tab/cli)
+1. Let's create the data asset first. This data asset consists of a folder with multiple CSV files that we want to process in parallel using batch endpoints. You can skip this step if your data is already registered as a data asset.
- ```azurecli
- DATASTORE_ID=$(az ml datastore show -n workspaceblobstore | jq -r '.id')
+ # [Azure CLI](#tab/cli)
+
+ Create a data asset definition in `YAML`:
+
+ __heart-dataset-unlabeled.yml__
+ ```yaml
+ $schema: https://azuremlschemas.azureedge.net/latest/data.schema.json
+ name: heart-dataset-unlabeled
+ description: An unlabeled dataset for heart classification.
+ type: uri_folder
+ path: heart-classifier-mlflow/data
```
-
- > [!NOTE]
- > Data stores ID would look like `/subscriptions/<subscription>/resourceGroups/<resource-group>/providers/Microsoft.MachineLearningServices/workspaces/<workspace>/datastores/<data-store>`.
-
+
+ Then, create the data asset:
+
+ ```bash
+ az ml data create -f heart-dataset-unlabeled.yml
+ ```
+
# [Python](#tab/sdk)-
+
```python
- default_ds = ml_client.datastores.get_default()
+ data_path = "heart-classifier-mlflow/data"
+ dataset_name = "heart-dataset-unlabeled"
+
+ heart_dataset_unlabeled = Data(
+ path=data_path,
+ type=AssetTypes.URI_FOLDER,
+ description="An unlabeled dataset for heart classification",
+ name=dataset_name,
+ )
+ ```
+
+ Then, create the data asset:
+
+ ```python
+ ml_client.data.create_or_update(heart_dataset_unlabeled)
+ ```
+
+ To get the newly created data asset, use:
+
+ ```python
+ heart_dataset_unlabeled = ml_client.data.get(name=dataset_name, label="latest")
``` # [REST](#tab/rest)
- Use the Azure ML CLI, Azure ML SDK for Python, or Studio to get the data store information.
-
-
-
- > [!TIP]
- > The default blob data store in a workspace is called __workspaceblobstore__. You can skip this step if you already know the resource ID of the default data store in your workspace.
+ Use the Azure ML CLI, Azure ML SDK for Python, or Studio to get the location (region), workspace, and data asset name and version. You will need them later.
-1. We'll need to upload some sample data to it. This example assumes you've uploaded the sample data included in the repo in the folder `sdk/python/endpoints/batch/heart-classifier/data` in the folder `heart-classifier/data` in the blob storage account. Ensure you have done that before moving forward.
1. Create a data input: # [Azure CLI](#tab/cli)
- Let's place the file path in the following variable:
- ```azurecli
- DATA_PATH="heart-disease-uci-unlabeled"
- INPUT_PATH="$DATASTORE_ID/paths/$DATA_PATH"
+ DATASET_ID=$(az ml data show -n heart-dataset-unlabeled --label latest --query id)
``` # [Python](#tab/sdk) ```python
- data_path = "heart-classifier/data"
- input = Input(type=AssetTypes.URI_FOLDER, path=f"{default_ds.id}/paths/{data_path})
+ input = Input(type=AssetTypes.URI_FOLDER, path=heart_dataset_unlabeled.id)
``` # [REST](#tab/rest)
Data from Azure Machine Learning registered data stores can be directly referenc
"InputData": { "mnistinput": { "JobInputType" : "UriFolder",
- "Uri": "azureml:/subscriptions/<subscription>/resourceGroups/<resource-group/providers/Microsoft.MachineLearningServices/workspaces/<workspace>/datastores/<data-store>/paths/<data-path>"
+ "Uri": "azureml://locations/<location>/workspaces/<workspace>/data/<dataset_name>/versions/labels/latest"
} } } } ```
-
+ > [!NOTE]
- > See how the path `paths` is appended to the resource id of the data store to indicate that what follows is a path inside of it.
+ > A data asset ID looks like `/subscriptions/<subscription>/resourcegroups/<resource-group>/providers/Microsoft.MachineLearningServices/workspaces/<workspace>/data/<data-asset>/versions/<version>`.
- > [!TIP]
- > You can also use `azureml://datastores/<data-store>/paths/<data-path>` as a way to indicate the input.
1. Run the deployment: # [Azure CLI](#tab/cli) ```bash
- INVOKE_RESPONSE = $(az ml batch-endpoint invoke --name $ENDPOINT_NAME --input $INPUT_PATH)
+ INVOKE_RESPONSE=$(az ml batch-endpoint invoke --name $ENDPOINT_NAME --input $DATASET_ID)
```
-
+
+ > [!TIP]
+ > You can also use `--input azureml:/<dataasset_name>@latest` as a way to indicate the input.
+ # [Python](#tab/sdk) ```python
Data from Azure Machine Learning registered data stores can be directly referenc
) ```
- # [REST](#tab/rest)
+ # [REST](#tab/rest)
- __Request__
+ __Request__
```http POST jobs HTTP/1.1
Data from Azure Machine Learning registered data stores can be directly referenc
Content-Type: application/json ```
-## Reading data from a data asset
+## Input data from data stores
-Azure Machine Learning data assets (formerly known as datasets) are supported as inputs for jobs. Follow these steps to run a batch endpoint job using data stored in a registered data asset in Azure Machine Learning:
-
-> [!WARNING]
-> Data assets of type Table (`MLTable`) aren't currently supported.
+Data from Azure Machine Learning registered data stores can be directly referenced by batch deployment jobs. In this example, we're going to first upload some data to the default data store in the Azure Machine Learning workspace and then run a batch deployment on it. Follow these steps to run a batch endpoint job using data stored in a data store:
-1. Let's create the data asset first. This data asset consists of a folder with multiple CSV files that we want to process in parallel using batch endpoints. You can skip this step is your data is already registered as a data asset.
+1. Let's get access to the default data store in the Azure Machine Learning workspace. If your data is in a different store, you can use that store instead. There's no requirement of using the default data store.
# [Azure CLI](#tab/cli)
-
- Create a data asset definition in `YAML`:
-
- __heart-dataset-unlabeled.yml__
- ```yaml
- $schema: https://azuremlschemas.azureedge.net/latest/data.schema.json
- name: heart-dataset-unlabeled
- description: An unlabeled dataset for heart classification.
- type: uri_folder
- path: heart-classifier-mlflow/data
- ```
-
- Then, create the data asset:
-
- ```bash
- az ml data create -f heart-dataset-unlabeled.yml
- ```
-
- # [Python](#tab/sdk)
-
- ```python
- data_path = "heart-classifier-mlflow/data"
- dataset_name = "heart-dataset-unlabeled"
-
- heart_dataset_unlabeled = Data(
- path=data_path,
- type=AssetTypes.URI_FOLDER,
- description="An unlabeled dataset for heart classification",
- name=dataset_name,
- )
- ```
-
- Then, create the data asset:
-
- ```python
- ml_client.data.create_or_update(heart_dataset_unlabeled)
+
+ ```azurecli
+ DATASTORE_ID=$(az ml datastore show -n workspaceblobstore | jq -r '.id')
```
- To get the newly created data asset, use:
-
+ > [!NOTE]
+ > A data store ID looks like `/subscriptions/<subscription>/resourceGroups/<resource-group>/providers/Microsoft.MachineLearningServices/workspaces/<workspace>/datastores/<data-store>`.
+
+ # [Python](#tab/sdk)
+ ```python
- heart_dataset_unlabeled = ml_client.data.get(name=dataset_name, label="latest")
+ default_ds = ml_client.datastores.get_default()
``` # [REST](#tab/rest)
- Use the Azure ML CLI, Azure ML SDK for Python, or Studio to get the location (region), workspace, and data asset name and version. You will need them later.
+ Use the Azure ML CLI, Azure ML SDK for Python, or Studio to get the data store information.
+
+
+
+ > [!TIP]
+ > The default blob data store in a workspace is called __workspaceblobstore__. You can skip this step if you already know the resource ID of the default data store in your workspace.
+1. We'll need to upload some sample data to it. This example assumes you've uploaded the sample data included in the repo folder `sdk/python/endpoints/batch/heart-classifier/data` to the folder `heart-classifier/data` in the blob storage account. Ensure you have done that before moving forward.
1. Create a data input: # [Azure CLI](#tab/cli)
+ Let's place the file path in the following variable:
+ ```azurecli
- DATASET_ID=$(az ml data show -n heart-dataset-unlabeled --label latest --query id)
+ DATA_PATH="heart-disease-uci-unlabeled"
+ INPUT_PATH="$DATASTORE_ID/paths/$DATA_PATH"
``` # [Python](#tab/sdk) ```python
- input = Input(type=AssetTypes.URI_FOLDER, path=heart_dataset_unlabeled.id)
+ data_path = "heart-classifier/data"
+ input = Input(type=AssetTypes.URI_FOLDER, path=f"{default_ds.id}/paths/{data_path}")
``` # [REST](#tab/rest)
Azure Machine Learning data assets (formerly known as datasets) are supported as
"InputData": { "mnistinput": { "JobInputType" : "UriFolder",
- "Uri": "azureml://locations/<location>/workspaces/<workspace>/data/<dataset_name>/versions/labels/latest"
+ "Uri": "azureml:/subscriptions/<subscription>/resourceGroups/<resource-group/providers/Microsoft.MachineLearningServices/workspaces/<workspace>/datastores/<data-store>/paths/<data-path>"
} } } } ``` -
+
> [!NOTE]
- > Data assets ID would look like `/subscriptions/<subscription>/resourcegroups/<resource-group>/providers/Microsoft.MachineLearningServices/workspaces/<workspace>/data/<data-asset>/versions/<version>`.
+ > See how the path `paths` is appended to the resource id of the data store to indicate that what follows is a path inside of it.
+ > [!TIP]
+ > You can also use `azureml://datastores/<data-store>/paths/<data-path>` as a way to indicate the input.
1. Run the deployment: # [Azure CLI](#tab/cli) ```bash
- INVOKE_RESPONSE = $(az ml batch-endpoint invoke --name $ENDPOINT_NAME --input $DATASET_ID)
+ INVOKE_RESPONSE=$(az ml batch-endpoint invoke --name $ENDPOINT_NAME --input $INPUT_PATH)
```-
- > [!TIP]
- > You can also use `--input azureml:/<dataasset_name>@latest` as a way to indicate the input.
-
+
# [Python](#tab/sdk) ```python
Azure Machine Learning data assets (formerly known as datasets) are supported as
) ```
- # [REST](#tab/rest)
+ # [REST](#tab/rest)
- __Request__
+ __Request__
```http POST jobs HTTP/1.1
Azure Machine Learning data assets (formerly known as datasets) are supported as
Content-Type: application/json ```
-## Reading data from Azure Storage Accounts
+## Input data from Azure Storage Accounts
Azure Machine Learning batch endpoints can read data from cloud locations in Azure Storage Accounts, both public and private. Use the following steps to run a batch endpoint job using data stored in a storage account:
machine-learning How To Attach Kubernetes To Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-attach-kubernetes-to-workspace.md
If you are using the Azure portal to assign roles and have a **system-assigned m
If you have a user-assigned managed identity, select **Managed identity** to find the target identity.
-You can use Managed Identity to pull images from Azure Container Registry. Grant the __AcrPull__ role to the compute Managed Identity. For more information, see [Azure Container Registry roles and permissions](/azure/container-registry/container-registry-roles).
+You can use Managed Identity to pull images from Azure Container Registry. Grant the __AcrPull__ role to the compute Managed Identity. For more information, see [Azure Container Registry roles and permissions](../container-registry/container-registry-roles.md).
You can use a managed identity to access Azure Blob: - For read-only purpose, __Storage Blob Data Reader__ role should be granted to the compute managed identity.
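For example, granting __AcrPull__ with the Azure CLI might look like this (the principal ID and registry scope are placeholders):

```azurecli
az role assignment create --assignee <managed-identity-principal-id> --role AcrPull --scope /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ContainerRegistry/registries/<registry-name>
```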
machine-learning How To Batch Scoring Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-batch-scoring-script.md
def run(mini_batch):
The method receives a list of file paths as a parameter (`mini_batch`). You can use this list to either iterate over each file and process it one by one, or to read the entire batch and process it at once. The best option will depend on your compute memory and the throughput you need to achieve. For an example of how to read entire batches of data at once see [High throughput deployments](how-to-image-processing-batch.md#high-throughput-deployments). > [!NOTE]
-> __How is work distributed?__:
+> __How is work distributed?__
> > Batch deployments distribute work at the file level, which means that a folder containing 100 files with mini-batches of 10 files will generate 10 batches of 10 files each. Notice that this will happen regardless of the size of the files involved. If your files are too big to be processed in large mini-batches we suggest to either split the files in smaller files to achieve a higher level of parallelism or to decrease the number of files per mini-batch. At this moment, batch deployment can't account for skews in the file's size distribution.
-The `run()` method should return a pandas DataFrame or an array/list. Each returned output element indicates one successful run of an input element in the input `mini_batch`. For file datasets, each row/element will represent a single file processed. For a tabular dataset, each row/element will represent a row in a processed file.
+The `run()` method should return a Pandas `DataFrame` or an array/list. Each returned output element indicates one successful run of an input element in the input `mini_batch`. For file datasets, each row/element will represent a single file processed. For a tabular dataset, each row/element will represent a row in a processed file.
> [!IMPORTANT]
-> __How to write predictions?__:
+> __How to write predictions?__
> Use __arrays__ when you need to output a single prediction. Use __pandas DataFrames__ when you need to return multiple pieces of information. For instance, for tabular data, you may want to append your predictions to the original record. Use a pandas DataFrame for this case. For file datasets, __we still recommend outputting a pandas DataFrame__ as it provides a more robust way to read the results. >
The `run()` method should return a pandas DataFrame or an array/list. Each retur
The resulting DataFrame or array is appended to the output file indicated. There's no requirement on the cardinality of the results (1 file can generate 1 or many rows/elements in the output). All elements in the result DataFrame or array will be written to the output file as-is (considering the `output_action` isn't `summary_only`).
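Putting these pieces together, a minimal scoring script might look like the following sketch (the MLflow model layout, the CSV input schema, and the binary labels are all assumptions):

```python
import os
import pandas as pd
import mlflow

def init():
    global model
    # AZUREML_MODEL_DIR points to the folder where the registered model was downloaded;
    # the "model" subfolder is an assumption about how the model was packaged.
    model_path = os.path.join(os.environ["AZUREML_MODEL_DIR"], "model")
    model = mlflow.pyfunc.load_model(model_path)

def run(mini_batch):
    results = []
    for file_path in mini_batch:  # each element is a path to one input file
        data = pd.read_csv(file_path)
        predictions = model.predict(data)
        # One output row per processed file, as expected for file datasets.
        results.append([os.path.basename(file_path), int((predictions == 1).sum())])
    return pd.DataFrame(results, columns=["file", "positive_predictions"])
```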
+#### Python packages for scoring
+
+Any library that your scoring script requires to run needs to be indicated in the environment where your batch deployment runs. As with scoring scripts, environments are indicated per deployment. Usually, you indicate your requirements using a `conda.yml` dependencies file, which may look as follows:
+
+__mnist/environment/conda.yml__
+
+
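For illustration only, such a file might look like the following sketch (the package choices and versions are assumptions, not the repo's actual file):

```yaml
name: mnist-env
channels:
  - conda-forge
dependencies:
  - python=3.8
  - pip
  - pip:
      - pandas
      - scikit-learn
```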
+Refer to [Create a batch deployment](how-to-use-batch-endpoint.md#create-a-batch-deployment) for more details about how to indicate the environment for your model.
+ ## Writing predictions in a different way By default, the batch deployment will write the model's predictions in a single file as indicated in the deployment. However, there are some cases where you need to write the predictions in multiple files. For instance, if the input data is partitioned, you typically would want to generate your output partitioned too. In those cases, you can [Customize outputs in batch deployments](how-to-deploy-model-custom-output.md) to indicate:
machine-learning How To Convert Custom Model To Mlflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-convert-custom-model-to-mlflow.md
The following code demonstrates how to create a Python wrapper for an `sklearn`
# Load training and test datasets from sys import version_info import sklearn
-from sklearn import datasets
-from sklearn.model_selection import train_test_split
- import mlflow.pyfunc
-from sklearn.metrics import mean_squared_error
-from sklearn.model_selection import train_test_split
+ PYTHON_VERSION = "{major}.{minor}.{micro}".format(major=version_info.major, minor=version_info.minor,
conda_env = {
Once your environment is ready, you can pass the `SKLearnWrapper`, the Conda environment, and your newly created artifacts dictionary to the `mlflow.pyfunc.save_model()` method. Doing so saves the model to your disk. ```python
-mlflow_pyfunc_model_path = "sklearn_mlflow_pyfunc7"
+mlflow_pyfunc_model_path = "sklearn_mlflow_pyfunc_custom"
mlflow.pyfunc.save_model(path=mlflow_pyfunc_model_path, python_model=SKLearnWrapper(), conda_env=conda_env, artifacts=artifacts) ```
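To sanity-check the saved model, you can load it back through the pyfunc flavor and score with it (`X_test` here stands in for whatever held-out data you have):

```python
import mlflow.pyfunc

# Load the wrapped model from disk and run predictions through the wrapper.
loaded_model = mlflow.pyfunc.load_model(mlflow_pyfunc_model_path)
predictions = loaded_model.predict(X_test)
```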
machine-learning How To Export Delete Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-export-delete-data.md
These resources can be deleted by selecting them from the list and choosing **De
> [!IMPORTANT] > If the resource is configured for soft delete, the data won't be deleted unless you optionally select to delete the resource permanently. For more information, see the following articles: > * [Workspace soft-deletion](concept-soft-delete.md).
-> * [Soft delete for blobs](/azure/storage/blobs/soft-delete-blob-overview).
-> * [Soft delete in Azure Container Registry](/azure/container-registry/container-registry-soft-delete-policy).
-> * [Azure log analytics workspace](/azure/azure-monitor/logs/delete-workspace).
-> * [Azure Key Vault soft-delete](/azure/key-vault/general/soft-delete-overview).
+> * [Soft delete for blobs](../storage/blobs/soft-delete-blob-overview.md).
+> * [Soft delete in Azure Container Registry](../container-registry/container-registry-soft-delete-policy.md).
+> * [Azure log analytics workspace](../azure-monitor/logs/delete-workspace.md).
+> * [Azure Key Vault soft-delete](../key-vault/general/soft-delete-overview.md).
:::image type="content" source="media/how-to-export-delete-data/delete-resource-group-resources.png" alt-text="Screenshot of portal, with delete icon highlighted.":::
You can download a registered model by navigating to the **Model** and choosing
## Next steps
-Learn more about [Managing a workspace](how-to-manage-workspace.md).
+Learn more about [Managing a workspace](how-to-manage-workspace.md).
machine-learning How To Interactive Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-interactive-jobs.md
It might take a few minutes to start the job and the training applications speci
- To connect via SSH to the container where the job is running, run the command `az ml job connect-ssh --name <job-name> --node-index <compute node index> --private-key-file-path <path to private key>`. To set up the Azure Machine Learning CLIv2, follow this [guide](./how-to-configure-cli.md).
-You can find the reference documentation for the SDKv2 [here](/azure/machine-learning/).
+You can find the reference documentation for the SDKv2 [here](./index.yml).
You can access the applications only when they are in **Running** status and only the **job owner** is authorized to access the applications. If you're training on multiple nodes, you can pick the specific node you would like to interact with by passing in the node index.
To submit a job with a debugger attached and the execution paused, you can use d
## Next steps
-+ Learn more about [how and where to deploy a model](./how-to-deploy-online-endpoints.md).
++ Learn more about [how and where to deploy a model](./how-to-deploy-online-endpoints.md).
machine-learning How To Manage Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-models.md
Format:
`runs:/<run-id>/<path-to-model-relative-to-the-root-of-the-artifact-location>` Example:
-`runs:/$RUN_ID/model/`
+`runs:/<run-id>/model/`
```cli
-az ml model create --name my-model --version 1 --path runs:/$RUN_ID/model/ --type mlflow_model
+az ml model create --name my-model --version 1 --path runs:/<run-id>/model/ --type mlflow_model
``` ### azureml job
Format:
`azureml://jobs/<job-name>/outputs/<output-name>/paths/<path-to-model-relative-to-the-named-output-location>` Examples:-- Default artifact location: `azureml://jobs/$RUN_ID/outputs/artifacts/paths/model/`
- * This is equivalent to `runs:/$RUN_ID/model/`.
+- Default artifact location: `azureml://jobs/<run-id>/outputs/artifacts/paths/model/`
+ * This is equivalent to `runs:/<run-id>/model/`.
* *artifacts* is the reserved keyword to refer to the output that represents the default artifact location.-- From a named output directory: `azureml://jobs/$RUN_ID/outputs/trained-model`
+- From a named output directory: `azureml://jobs/<run-id>/outputs/trained-model`
- From a specific file or folder path within the named output directory:
- * `azureml://jobs/$RUN_ID/outputs/trained-model/paths/cifar.pt`
- * `azureml://jobs/$RUN_ID/outputs/checkpoints/paths/model/`
+ * `azureml://jobs/<run-id>/outputs/trained-model/paths/cifar.pt`
+ * `azureml://jobs/<run-id>/outputs/checkpoints/paths/model/`
Saving model from a named output: ```cli
-az ml model create --name my-model --version 1 --path azureml://jobs/$RUN_ID/outputs/trained-model
+az ml model create --name my-model --version 1 --path azureml://jobs/<run-id>/outputs/trained-model
``` For a complete example, see the [CLI reference](/cli/azure/ml/model).
Format:
`runs:/<run-id>/<path-to-model-relative-to-the-root-of-the-artifact-location>` Example:
-`runs:/$RUN_ID/model/`
+`runs:/<run-id>/model/`
```python from azure.ai.ml.entities import Model from azure.ai.ml.constants import ModelType run_model = Model(
- path="runs:/$RUN_ID/model/"
+ path="runs:/<run-id>/model/"
name="run-model-example", description="Model created from run.", type=ModelType.MLFLOW
Format:
`azureml://jobs/<job-name>/outputs/<output-name>/paths/<path-to-model-relative-to-the-named-output-location>` Examples:-- Default artifact location: `azureml://jobs/$RUN_ID/outputs/artifacts/paths/model/`
- * This is equivalent to `runs:/$RUN_ID/model/`.
+- Default artifact location: `azureml://jobs/<run-id>/outputs/artifacts/paths/model/`
+ * This is equivalent to `runs:/<run-id>/model/`.
* *artifacts* is the reserved keyword to refer to the output that represents the default artifact location.-- From a named output directory: `azureml://jobs/$RUN_ID/outputs/trained-model`
+- From a named output directory: `azureml://jobs/<run-id>/outputs/trained-model`
- From a specific file or folder path within the named output directory:
- * `azureml://jobs/$RUN_ID/outputs/trained-model/paths/cifar.pt`
- * `azureml://jobs/$RUN_ID/outputs/checkpoints/paths/model/`
+ * `azureml://jobs/<run-id>/outputs/trained-model/paths/cifar.pt`
+ * `azureml://jobs/<run-id>/outputs/checkpoints/paths/model/`
Saving model from a named output:
machine-learning How To Mltable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-mltable.md
If you author files locally (or in a DSVM), you can use `azcopy` or the [Azure S
||| |`read_delimited` | `infer_column_types`: Boolean to infer column data types. Defaults to True. Type inference requires that the data source is accessible from the current compute. Currently type inference will only pull the first 200 rows.<br><br>`encoding`: Specify the file encoding. Supported encodings: `utf8`, `iso88591`, `latin1`, `ascii`, `utf16`, `utf32`, `utf8bom` and `windows1252`. Default encoding: `utf8`.<br><br>`header`: user can choose one of the following options: `no_header`, `from_first_file`, `all_files_different_headers`, `all_files_same_headers`. Defaults to `all_files_same_headers`.<br><br>`delimiter`: The separator used to split columns.<br><br>`empty_as_string`: Specify if empty field values should be loaded as empty strings. The default (False) will read empty field values as nulls. Passing this setting as *True* will read empty field values as empty strings. If the values are converted to numeric or datetime, then this setting has no effect, as empty values will be converted to nulls.<br><Br>`include_path_column`: Boolean to keep path information as column in the table. Defaults to False. This setting is useful when reading multiple files, and you want to know which file a particular record originated from. Also, you can keep useful information in file path.<br><br>`support_multi_line`: By default (`support_multi_line=False`), all line breaks, including line breaks in quoted field values, will be interpreted as a record break. Reading data this way is faster and more optimized for parallel execution on multiple CPU cores. However, it may result in silent production of more records with misaligned field values. This setting should be set to True when the delimited files are known to contain quoted line breaks. | | `read_parquet` | `include_path_column`: Boolean to keep path information as column in the table. Defaults to False. This setting is useful when you're reading multiple files, and you want to know which file a particular record originated from. Also, you can keep useful information in file path. |
-| `read_delta_lake` | `timestamp_as_of`: Timestamp to be specified for time-travel on the specific Delta Lake data.<br><br>`version_as_of`: Version to be specified for time-travel on the specific Delta Lake data.
+| `read_delta_lake` (preview) | `timestamp_as_of`: Timestamp to be specified for time-travel on the specific Delta Lake data.<br><br>`version_as_of`: Version to be specified for time-travel on the specific Delta Lake data.
| `read_json_lines` | `include_path_column`: Boolean to keep path information as column in the MLTable. Defaults to False. This setting is useful when you're reading multiple files, and you want to know which file a particular record originated from. Also, you can keep useful information in file path.<br><br>`invalid_lines`: How to handle lines that are invalid JSON. Supported values are `error` and `drop`. Defaults to `error`.<br><br>`encoding`: Specify the file encoding. Supported encodings are `utf8`, `iso88591`, `latin1`, `ascii`, `utf16`, `utf32`, `utf8bom` and `windows1252`. Default is `utf8`. ##### Other transformations
tbl = mltable.load(uri)
df = tbl.to_pandas_dataframe() ```
-### Delta Lake
+### Delta Lake (preview)
In this example, we assume you have data in Delta format on Azure Data Lake:
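An `MLTable` file for such a read might look like this sketch (the path and timestamp are placeholders, and `read_delta_lake` is in preview as noted above):

```yaml
paths:
  - folder: abfss://<container>@<account>.dfs.core.windows.net/delta/my-table
transformations:
  - read_delta_lake:
      timestamp_as_of: '2022-08-26T00:00:00Z'
```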
df.head()
- [Read data in a job](how-to-read-write-data-v2.md#read-data-in-a-job) - [Create data assets](how-to-create-data-assets.md#create-data-assets)-- [Data administration](how-to-administrate-data-authentication.md#data-administration)
+- [Data administration](how-to-administrate-data-authentication.md#data-administration)
machine-learning How To Search Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-search-assets.md
Title: Search for assets (preview)
+ Title: Search for assets
description: Find your Azure Machine Learning assets with search
Previously updated : 07/14/2022 Last updated : 1/12/2023
-# Search for Azure Machine Learning assets (preview)
+# Search for Azure Machine Learning assets
Use the search bar to find machine learning assets across all workspaces, resource groups, and subscriptions in your organization. Your search text will be used to find assets such as:
Use the search bar to find machine learning assets across all workspaces, resour
* Environments * Data
-> [!IMPORTANT]
-> The search functionality is currently in public preview.
-> The preview version is provided without a service level agreement.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- ## Free text search 1. Sign in to [Azure Machine Learning studio](https://ml.azure.com).
machine-learning How To Secure Online Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-online-endpoint.md
The following diagram shows how communications flow through private endpoints to
* The Azure Container Registry and Azure Storage Account must be in the same Azure Resource Group as the workspace.
-* If you want to use a [user-assigned managed identity](/azure/active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities?pivots=identity-mi-methods-azp) to create and manage online endpoints and online deployments, the identity should have the proper permissions. For details about the required permissions, see [Set up service authentication](/azure/machine-learning/how-to-identity-based-service-authentication#workspace). For example, you need to assign the proper RBAC permission for Azure Key Vault on the identity.
+* If you want to use a [user-assigned managed identity](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md?pivots=identity-mi-methods-azp) to create and manage online endpoints and online deployments, the identity should have the proper permissions. For details about the required permissions, see [Set up service authentication](./how-to-identity-based-service-authentication.md#workspace). For example, you need to assign the proper RBAC permission for Azure Key Vault on the identity.
> [!IMPORTANT] > The end-to-end example in this article comes from the files in the __azureml-examples__ GitHub repository. To clone the samples repository and switch to the repository's `cli/` directory, use the following commands:
az group delete --resource-group <resource-group-name>
- [How to autoscale managed online endpoints](how-to-autoscale-endpoints.md) - [View costs for an Azure Machine Learning managed online endpoint](how-to-view-online-endpoints-costs.md) - [Access Azure resources with a online endpoint and managed identity](how-to-access-resources-from-endpoints-managed-identities.md)-- [Troubleshoot online endpoints deployment](how-to-troubleshoot-online-endpoints.md)
+- [Troubleshoot online endpoints deployment](how-to-troubleshoot-online-endpoints.md)
machine-learning How To Troubleshoot Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-environments.md
Ensure that you're using a compatible python version
* [mpi4py installation](https://aka.ms/azureml/environment/install-mpi4py) ### Interactive auth was attempted-- Failed to create or update the conda environment because pip attempted interactive authentication -- Instead, provide authentication via [workspace connection](https://aka.ms/azureml/environment/set-connection-v1)
+<!--issueDescription-->
+This issue can happen when pip attempts interactive authentication during package installation.
+
+**Potential causes:**
+* You've listed a package that requires authentication, but you haven't provided credentials
+* During the image build, pip tried to prompt you to authenticate, which failed the build because interactive authentication isn't possible during a build
+
+**Affected areas (symptoms):**
+* Failure in building environments from the UI, SDK, and CLI.
+* Failure in running jobs, because the job implicitly builds the environment in its first step.
+<!--/issueDescription-->
+
+**Troubleshooting steps**
+
+Provide authentication via workspace connections
+
+*Applies to: Python SDK azureml V1*
+
+```python
+from azureml.core import Workspace
+ws = Workspace.from_config()
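+# Store the feed credentials as a workspace connection so that image builds can authenticate non-interactively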
+ws.set_connection("connection1", "PythonFeed", "<URL>", "Basic", "{'Username': '<username>', 'Password': '<password>'}")
+```
+
+*Applies to: Azure CLI extensions V1 & V2*
+
+Create a workspace connection from a YAML specification file
+
+```azurecli
+az ml connection create --file connection.yml --resource-group my-resource-group --workspace-name my-workspace
+```
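+
+The contents of `connection.yml` aren't shown here. As a rough sketch only (the exact schema can vary by CLI version, and the name, target URL, and credentials are placeholders), a username/password connection for a private Python feed might look like this:
+
+```yaml
+# Hypothetical example; adapt the values to your feed
+name: connection1
+type: python_feed
+target: https://<your-private-feed-url>
+credentials:
+  type: username_password
+  username: <username>
+  password: <password>
+```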
+
+**Resources**
+* [Python SDK AzureML v1 workspace connections](https://aka.ms/azureml/environment/set-connection-v1)
+* [Python SDK AzureML v2 workspace connections](/python/api/azure-ai-ml/azure.ai.ml.entities.workspaceconnection)
+* [Azure CLI workspace connections](/cli/azure/ml/connection)
### Forbidden blob-- Failed to create or update the conda environment because a blob contained in the associated storage account was inaccessible-- Either open up permissions on the blob or add/replace the SAS token in the URL
+<!--issueDescription-->
+This issue can happen when an attempt to access a blob in a storage account is rejected.
+
+**Potential causes:**
+* The authorization method you're using to access the storage account is invalid
+* You're attempting to authorize via shared access signature (SAS), but the SAS token is expired or invalid
+
+**Affected areas (symptoms):**
+* Failure in building environments from the UI, SDK, and CLI.
+* Failure in running jobs, because the job implicitly builds the environment in its first step.
+<!--/issueDescription-->
+
+**Troubleshooting steps**
+
+Read the following to understand [how to authorize access to blob data in the Azure portal](../storage/blobs/authorize-data-operations-portal.md)
+
+Read the following to understand [how to authorize access to data in Azure storage](../storage/common/authorize-data-access.md)
+
+Read the following if you're interested in [using SAS to access Azure storage resources](../storage/common/storage-sas-overview.md)
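+
+For example, if you're authorizing via SAS, one option is to mint a fresh, read-only user delegation SAS for the container that holds the blob and add it to the URL. This is a sketch; the account name, container name, and expiry are placeholders:
+
+```azurecli
+# Requires that your signed-in identity is allowed to create user delegation SAS tokens
+az storage container generate-sas \
+    --account-name <storage-account> \
+    --name <container> \
+    --permissions r \
+    --expiry 2023-02-01T00:00Z \
+    --auth-mode login \
+    --as-user
+```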
### Horovod build-- Failed to create or update the conda environment because horovod failed to build-- See [horovod installation](https://aka.ms/azureml/environment/install-horovod)
+<!--issueDescription-->
+This issue can happen when the conda environment fails to be created or updated because horovod failed to build.
+
+**Potential causes:**
+* Horovod installation requires other modules that you haven't installed
+* Horovod installation requires certain libraries that you haven't included
+
+**Affected areas (symptoms):**
+* Failure in building environments from the UI, SDK, and CLI.
+* Failure in running jobs, because the job implicitly builds the environment in its first step.
+<!--/issueDescription-->
+
+**Troubleshooting steps**
+
+Many issues could cause a horovod failure, and there's a comprehensive list of them in horovod's documentation:
+* Review the [horovod troubleshooting guide](https://horovod.readthedocs.io/en/stable/troubleshooting_include.html#)
+* Review your Build log to see if there's an error message that surfaced when horovod failed to build
+* It's possible that the problem you're encountering is detailed in the horovod troubleshooting guide, along with a solution
+
+**Resources**
+* [horovod installation](https://aka.ms/azureml/environment/install-horovod)
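+
+As one common fix (a sketch that assumes a PyTorch-based environment and a pinned horovod version), you can set horovod's build flags explicitly in your pip install step so the build fails fast if a required framework is missing:
+
+```console
+# Assumes PyTorch is already installed in the environment;
+# pin horovod to a version known to build against your framework version
+HOROVOD_WITH_PYTORCH=1 pip install --no-cache-dir horovod==0.26.1
+```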
### Conda command not found - Failed to create or update the conda environment because the conda command is missing
machine-learning How To Troubleshoot Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-online-endpoints.md
Try to delete some unused endpoints in this subscription.
#### Role assignment quota
-When you are creating a managed online endpoint, role assignment is required for the [managed identity](/azure/active-directory/managed-identities-azure-resources/overview) to access workspace resources. If you've reached the [role assignment limit](/azure/azure-resource-manager/management/azure-subscription-service-limits#azure-rbac-limits), try to delete some unused role assignments in this subscription. You can check all role assignments in the Azure portal by going to the Access Control menu.
+When you are creating a managed online endpoint, role assignment is required for the [managed identity](../active-directory/managed-identities-azure-resources/overview.md) to access workspace resources. If you've reached the [role assignment limit](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-rbac-limits), try to delete some unused role assignments in this subscription. You can check all role assignments in the Azure portal by going to the Access Control menu.
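+
+For example, you can count the subscription's role assignments and remove unused ones with the Azure CLI. The assignee, role, and scope below are placeholders:
+
+```azurecli
+# Count role assignments in the subscription (the RBAC limit applies per subscription)
+az role assignment list --all --query "length(@)"
+
+# Delete an assignment that's no longer needed
+az role assignment delete --assignee "<principal-id>" --role "<role-name>" --scope "<scope>"
+```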
#### Kubernetes quota
We recommend that you use Azure Functions, Azure Application Gateway, or any ser
- [Deploy and score a machine learning model by using an online endpoint](how-to-deploy-online-endpoints.md) - [Safe rollout for online endpoints](how-to-safely-rollout-online-endpoints.md)-- [Online endpoint YAML reference](reference-yaml-endpoint-online.md)
+- [Online endpoint YAML reference](reference-yaml-endpoint-online.md)
machine-learning How To Use Batch Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-endpoint.md
A deployment is a set of resources required for hosting the model that does the
* The environment in which the model runs. * The pre-created compute and resource settings.
-1. Create an environment where your batch deployment will run. Include in the environment any dependency your code requires for running. In this case, the dependencies have been captured in a `conda.yml`.
-
+1. Create an environment where your batch deployment will run. This environment needs to include the packages `azureml-core` and `azureml-dataset-runtime[fuse]`, which are required by batch endpoints, plus any dependencies your code requires for running. In this case, the dependencies have been captured in a `conda.yml`:
+
+ __mnist/environment/conda.yml__
+
+ :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/mnist/environment/conda.yml":::
+
+ > [!IMPORTANT]
+ > The packages `azureml-core` and `azureml-dataset-runtime[fuse]` are required by batch deployments and should be included in the environment dependencies.
+
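+   If you're authoring the conda file yourself, a minimal sketch that satisfies this requirement might look like the following (the environment name, Python version, and channel are placeholders to adapt):
+
+   ```yaml
+   # Hypothetical minimal conda.yml for a batch deployment
+   name: batch-env
+   channels:
+     - conda-forge
+   dependencies:
+     - python=3.8
+     - pip
+     - pip:
+       - azureml-core
+       - azureml-dataset-runtime[fuse]
+   ```
+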
+ Indicate the environment as follows:
+
# [Azure CLI](#tab/azure-cli) The environment definition will be included in the deployment definition itself as an anonymous environment. You'll see it in the following lines of the deployment:
A deployment is a set of resources required for hosting the model that does the
- The conda file we used looks as follows:
-
- __mnist/environment/conda.yml__
-
- :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/mnist/environment/conda.yml":::
- > [!WARNING] > Curated environments are not supported in batch deployments. You will need to indicate your own environment. You can always use the base image of a curated environment as yours to simplify the process.
- > [!IMPORTANT]
- > The packages `azureml-core` and `azureml-dataset-runtime[fuse]` are required by batch deployments and should be included in the environment dependencies.
-
- 1. Create a deployment definition # [Azure CLI](#tab/azure-cli)
machine-learning Reference Yaml Mltable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-mltable.md
If the user doesn't define options for `read_parquet` transformation, default op
- `include_path_column`: Boolean to keep path information as a column in the table. Defaults to False. This setting is useful when you're reading multiple files and want to know which file a particular record originated from, or when you want to keep useful information that's encoded in the file path.
-## MLTable transformations: read_delta_lake
+## MLTable transformations: read_delta_lake (preview)
```yaml type: mltable
transformations:
timestamp_as_of: '2022-08-26T00:00:00Z' ```
-### Delta lake transformations
+### Delta lake transformations (preview)
- `timestamp_as_of`: Datetime string in RFC-3339/ISO-8601 format to be specified for time-travel on the specific Delta Lake data. - `version_as_of`: Version to be specified for time-travel on the specific Delta Lake data.
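
Putting these together, a sketch of a complete MLTable file that time-travels a Delta Lake folder might look like the following. The storage path is a placeholder, and you'd specify only one of the two time-travel options:

```yaml
type: mltable
paths:
  - folder: abfss://<container>@<account>.dfs.core.windows.net/my-delta-table
transformations:
  - read_delta_lake:
      version_as_of: 1
```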
managed-grafana How To Data Source Plugins Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-data-source-plugins-managed-identity.md
Previously updated : 3/31/2022 Last updated : 1/12/2023 # How to configure data sources for Azure Managed Grafana
Sign in to the Azure portal at [https://portal.azure.com/](https://portal.azure.
## Supported Grafana data sources
-By design, Grafana can be configured with multiple data sources. A data source is an externalized storage backend that holds your telemetry information. Azure Managed Grafana supports many popular data sources. Azure-specific data sources are:
+By design, Grafana can be configured with multiple data sources. A data source is an externalized storage backend that holds your telemetry information. Azure Managed Grafana supports many popular data sources.
-- [Azure Data Explorer](https://github.com/grafana/azure-data-explorer-datasource?utm_source=grafana_add_ds)-- [Azure Monitor](https://grafana.com/docs/grafana/latest/datasources/azuremonitor/)
+Azure-specific data sources available for all customers:
-Other data sources include:
+- [Azure Data Explorer](https://github.com/grafana/azure-data-explorer-datasource?utm_source=grafana_add_ds)
+- [Azure Monitor](https://grafana.com/docs/grafana/latest/datasources/azuremonitor/), which is preloaded in all Grafana instances.
+
+Data sources reserved for Grafana Enterprise customers, exclusively preloaded in instances with a Grafana Enterprise subscription:
+
+- [AppDynamics](https://grafana.com/grafana/plugins/dlopes7-appdynamics-datasource)
+- [Azure DevOps](https://grafana.com/grafana/plugins/grafana-azuredevops-datasource)
+- [DataDog](https://grafana.com/grafana/plugins/grafana-datadog-datasource)
+- [Dynatrace](https://grafana.com/grafana/plugins/grafana-dynatrace-datasource)
+- [Gitlab](https://grafana.com/grafana/plugins/grafana-gitlab-datasource)
+- [Honeycomb](https://grafana.com/grafana/plugins/grafana-honeycomb-datasource)
+- [Jira](https://grafana.com/grafana/plugins/grafana-jira-datasource)
+- [MongoDB](https://grafana.com/grafana/plugins/grafana-mongodb-datasource)
+- [New Relic](https://grafana.com/grafana/plugins/grafana-newrelic-datasource)
+- [Oracle Database](https://grafana.com/grafana/plugins/grafana-oracle-datasource)
+- [Salesforce](https://grafana.com/grafana/plugins/grafana-salesforce-datasource)
+- [SAP HANA®](https://grafana.com/grafana/plugins/grafana-saphana-datasource)
+- [ServiceNow](https://grafana.com/grafana/plugins/grafana-servicenow-datasource)
+- [Snowflake](https://grafana.com/grafana/plugins/grafana-snowflake-datasource)
+- [Splunk](https://grafana.com/grafana/plugins/grafana-splunk-datasource)
+- [Splunk Infrastructure monitoring (SignalFx)](https://grafana.com/grafana/plugins/grafana-splunk-monitoring-datasource)
+- [Wavefront](https://grafana.com/grafana/plugins/grafana-wavefront-datasource)
+
+Other data sources:
- [Alertmanager](https://grafana.com/docs/grafana/latest/datasources/alertmanager/) - [CloudWatch](https://grafana.com/docs/grafana/latest/datasources/aws-cloudwatch/)
Other data sources include:
- [TestData DB](https://grafana.com/docs/grafana/latest/datasources/testdata/) - [Zipkin](https://grafana.com/docs/grafana/latest/datasources/zipkin/)
-You can find all available Grafana data sources by going to your resource and selecting **Configuration** > **Data sources** from the left menu. Search for the data source you need from the available list and select **Add data source**.
+For more information about data sources, go to [Data sources](https://grafana.com/docs/grafana/latest/datasources/) on the Grafana Labs website.
+
+## Add a datasource
+
+A number of data sources are added to your Grafana instance by default. To add more data sources, follow the steps below, using either the Azure portal or the Azure CLI.
+
+### [Portal](#tab/azure-portal)
+
+1. Open the Grafana UI of your Azure Managed Grafana instance and select **Configuration** > **Data sources** from the left menu.
+1. Select **Add data source**, search for the data source you need from the available list, and select it.
+1. Fill out the form with the data source settings and select **Save and test** to validate the connection to your data source.
:::image type="content" source="media/data-sources/add-data-source.png" alt-text="Screenshot of the Add data source page."::: > [!NOTE] > Installing Grafana plugins listed on the page **Configuration** > **Plugins** isn't currently supported.
-For more information about data sources, go to [Data sources](https://grafana.com/docs/grafana/latest/datasources/) on the Grafana Labs website.
+### [Azure CLI](#tab/azure-cli)
+
+Run the [az grafana data-source create](/cli/azure/grafana/data-source#az-grafana-data-source-create) command to add and manage Azure Managed Grafana data sources with the Azure CLI.
+
+For example, to add an Azure SQL data source, run:
+
+```azurecli-interactive
+
+az grafana data-source create --name <instance-name> --definition '{
+ "access": "proxy",
+ "database": "testdb",
+ "jsonData": {
+ "authenticationType": "SQL Server Authentication",
+ "encrypt": "false"
+ },
+ "secureJsonData": {
+ "password": "verySecretPassword"
+ },
+ "name": "Microsoft SQL Server",
+ "type": "mssql",
+ "url": "<url>",
+ "user": "<user>"
+}'
+```
+++
+## Update a data source
-## Configuration for Azure Monitor
+### Azure Monitor configuration
-The Azure Monitor data source is automatically added to all new Managed Grafana resources. To review or modify its configuration, follow these steps in your Managed Grafana endpoint:
+The Azure Monitor data source is automatically added to all new Managed Grafana resources. To review or modify its configuration, follow the steps below in the Grafana UI of your Azure Managed Grafana instance, or use the Azure CLI.
+
+### [Portal](#tab/azure-portal)
1. From the left menu, select **Configuration** > **Data sources**.
The Azure Monitor data source is automatically added to all new Managed Grafana
:::image type="content" source="media/data-sources/configure-Azure-Monitor.png" alt-text="Screenshot of the Azure Monitor page in data sources.":::
-Authentication and authorization are then made through the provided managed identity. With Managed Identity, you can assign permissions for your Managed Grafana instance to access Azure Monitor data without having to manually manage service principals in Azure Active Directory (Azure AD).
+Authentication and authorization are made through the provided managed identity. Using a managed identity lets you assign permissions for your Managed Grafana instance to access Azure Monitor data without having to manually manage service principals in Azure Active Directory (Azure AD).
+
+### [Azure CLI](#tab/azure-cli)
+
+Run the [az grafana data-source update](/cli/azure/grafana/data-source#az-grafana-data-source-update) command to update the configuration of your Azure Monitor data sources using the Azure CLI.
+
+For example:
+
+```azurecli-interactive
+
+az grafana data-source update --data-source 'Azure Monitor' --name <instance-name> --definition '{
+ "datasource": {
+ "access": "proxy",
+ "basicAuth": false,
+ "basicAuthUser": "",
+ "database": "",
+ "id": 1,
+ "isDefault": false,
+ "jsonData": {
+ "azureAuthType": "msi",
+ "subscriptionId": "<subscription-ID>"
+ },
+ "name": "Azure Monitor",
+ "orgId": 1,
+ "readOnly": false,
+ "secureJsonFields": {},
+ "type": "grafana-azure-monitor-datasource",
+ "typeLogoUrl": "",
+ "uid": "azure-monitor-oob",
+ "url": "",
+ "user": "",
+ "version": 1,
+ "withCredentials": false
+ },
+ "id": 1,
+ "message": "Datasource updated",
+ "name": "Azure Monitor"
+}'
+```
++ > [!NOTE]
-> User assigned managed identity isn't supported currently.
+> User-assigned managed identity isn't supported currently.
-## Configuration for Azure Data Explorer
+### Azure Data Explorer configuration
Azure Managed Grafana can also access data sources using a service principal set up in Azure Active Directory (Azure AD).
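+
+If you don't already have a service principal, you can create one with the Azure CLI. This is a sketch; the display name is a placeholder, and you still need to separately grant the principal viewer access on your Data Explorer cluster:
+
+```azurecli
+# Prints the appId (client ID), password (client secret), and tenant ID to use in the data source settings
+az ad sp create-for-rbac --name <app-name>
+```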
+### [Portal](#tab/azure-portal)
+ 1. From the left menu, select **Configuration** > **Data sources**. :::image type="content" source="media/data-sources/configuration.png" alt-text="Screenshot of the Add data sources page.":::
-1. **Azure Data Explorer Datasource** is listed as a built-in data source for your Managed Grafana instance. Select this data source.
+1. Add the **Azure Data Explorer Datasource** data source to your Managed Grafana instance.
1. In the **Settings** tab, fill out the form under **Connection Details**, and optionally also edit the **Query Optimizations**, **Database schema settings**, and **Tracking** sections. :::image type="content" source="media/data-sources/data-explorer-connection-settings.jpg" alt-text="Screenshot of the Connection details section for Data Explorer in data sources.":::
Azure Managed Grafana can also access data sources using a service principal set
1. Select **Save & test** to validate the connection. "Success" is displayed on screen and confirms that Azure Managed Grafana is able to fetch the data source through the provided connection details, using the service principal in Azure AD.
+### [Azure CLI](#tab/azure-cli)
+
+1. Run the [az grafana data-source create](/cli/azure/grafana/data-source#az-grafana-data-source-create) command to create the Azure Data Explorer data source.
+
+ For example:
+
+ ```azurecli-interactive
+ az grafana data-source create --name <grafana-instance-name> --definition '{
+ "access": "proxy",
+ "jsonData": {
+ "azureCloud": "azuremonitor",
+ "clientId": "<client-ID>",
+ "clusterUrl": "<cluster URL>",
+ "dataConsistency": "strongconsistency",
+ "defaultDatabase": "<database-name>",
+ "queryTimeout": "120s",
+ "tenantId": "<tenant-ID>"
+ },
+ "name": "<data-source-name>",
+    "type": "grafana-azure-data-explorer-datasource"
+ }'
+ ```
+
+1. Run the [az grafana data-source update](/cli/azure/grafana/data-source#az-grafana-data-source-update) command to update the configuration of the Azure Data Explorer data source.
+++ ## Next steps > [!div class="nextstepaction"]
managed-grafana How To Grafana Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-grafana-enterprise.md
+
+ Title: Subscribe to Grafana Enterprise
+description: Activate Grafana Enterprise (preview) to access Grafana Enterprise plugins within Azure Managed Grafana
++++ Last updated : 01/09/2023++
+# Enable Grafana Enterprise (preview)
+
+In this guide, learn how to activate the Grafana Enterprise (preview) add-on in Azure Managed Grafana, update your Grafana Enterprise plan, and access [Grafana Enterprise plugins](https://grafana.com/docs/plugins/).
+
+The Grafana Enterprise plans offered through Azure Managed Grafana enable users to access Grafana Enterprise plugins to do more with Azure Managed Grafana.
+
+Grafana Enterprise plugins, as of January 2023:
+
+- AppDynamics
+- Azure DevOps
+- Datadog
+- Databricks
+- Dynatrace
+- GitLab
+- Honeycomb
+- Jira
+- MongoDB
+- New Relic
+- Oracle Database
+- Salesforce
+- SAP HANA®
+- ServiceNow
+- Snowflake
+- Splunk
+- Splunk Infrastructure Monitoring
+- Sqlyze Datasource
+- Wavefront
+
+> [!NOTE]
+> Grafana Enterprise plugins are directly supported by Grafana Labs. For more information and an updated list, go to [Grafana Enterprise plugins](https://grafana.com/docs/plugins/).
+
+You can enable access to Grafana Enterprise plugins by selecting a Grafana Enterprise plan when creating a new workspace, or by adding a Grafana Enterprise plan to an existing Azure Managed Grafana instance.
+
+> [!NOTE]
+> The Grafana Enterprise monthly plan is a paid plan, owned and charged by Grafana Labs, through Azure Marketplace. Go to [Azure Managed Grafana pricing](https://azure.microsoft.com/pricing/details/managed-grafana/) for details.
+
+> [!IMPORTANT]
+> Grafana Enterprise is currently in preview within Azure Managed Grafana.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
+- This guide assumes that you already know the basics of [creating an Azure Managed Grafana instance](quickstart-managed-grafana-portal.md).
+
+## Create a workspace with Grafana Enterprise enabled
+
+To activate Grafana Enterprise plugins when creating an Azure Managed Grafana Workspace, in **Create a Grafana Workspace**, go to the **Basics** tab and follow the steps below:
+
+1. Under **Project Details**, select an Azure subscription and enter a resource group name, or use the suggested resource group name that's generated for you.
+1. Under **Instance Details**, select an Azure region and enter a resource name.
+1. Under **Grafana Enterprise**, check the **Grafana Enterprise** box, select **Free Trial - Azure Managed Grafana Enterprise Upgrade**, and keep the **Recurring billing** option set to **Disabled**.
+
+ :::image type="content" source="media/grafana-enterprise/create-with-enterprise-plan.png" alt-text="Screenshot of the Grafana dashboard, instance creation basic details.":::
+
+ > [!CAUTION]
+ > Each Azure subscription can benefit from one free Grafana Enterprise trial. The free trial lets you try the Grafana Enterprise plan for one month. If you select a free trial and enable recurring billing, you will start getting charged after the end of your first month. Disable recurring billing if you just want to test Grafana Enterprise.
+
+1. Select **Review + create** and review the information about your new instance, including the costs that may be associated with the Grafana Enterprise plan and other potential paid options.
+
+ :::image type="content" source="media/grafana-enterprise/creation-cost-review.png" alt-text="Screenshot of the Grafana dashboard. Workspace information and cost review.":::
+
+1. Read and check the box at the bottom of the page to state that you agree with the terms displayed, and select **Create** to finalize the creation of your new Azure Managed Grafana instance.
+
+## Activate Grafana Enterprise on an existing workspace
+
+To enable Grafana Enterprise on an existing Azure Managed Grafana instance, follow the steps below:
+
+ 1. In the Azure portal, open your Grafana instance and under **Settings**, select **Grafana Enterprise**.
+ :::image type="content" source="media/grafana-enterprise/enable-grafana-enterprise.png" alt-text="Screenshot of the Grafana dashboard showing how to enable Grafana enterprise on an existing workspace." lightbox="media/grafana-enterprise/enable-grafana-enterprise.png":::
+ 1. Select **Free Trial - Azure Managed Grafana Enterprise Upgrade** to test Grafana Enterprise for free or select the monthly plan. Review the associated costs to make sure that you selected a plan that suits you. Recurring billing is disabled by default.
+ > [!CAUTION]
+ > Each Azure subscription can benefit from one free Grafana Enterprise trial. The free trial lets you try the Grafana Enterprise plan for one month. If you select a free trial and enable recurring billing, you will start getting charged after the end of your first month. Disable recurring billing if you just want to test Grafana Enterprise.
+
+ 1. Read and check the box at the bottom of the page to state that you agree with the terms displayed, and select **Update** to enable Grafana Enterprise on your instance.
+
+## Update a Grafana Enterprise plan
+
+To update the Grafana Enterprise plan of an existing Azure Managed Grafana instance, follow the steps below as needed:
+
+ 1. In the Azure portal, open your Grafana instance and under **Settings**, select **Grafana Enterprise**. Review the available information about your current plan, pricing, and billing.
+ :::image type="content" source="media/grafana-enterprise/update-grafana-enterprise.png" alt-text="Screenshot of the Grafana dashboard showing how to update a Grafana Enterprise plan." lightbox="media/grafana-enterprise/update-grafana-enterprise.png":::
+ 1. Select **Change plan** to review available Grafana Enterprise plans and select another plan. Then select **Update** at the bottom of the page to switch to the selected plan.
+ 1. Select **Edit recurring billing** and select **Enabled** to activate recurring billing and agree to be billed on your renewal date, or select **Disabled** to disable the renewal of your Grafana Enterprise plan. The subscription will expire on the date displayed on screen. To confirm, select **Update**.
+ 1. Select the **Cancel enterprise** option to cancel the Grafana Enterprise subscription. Enter the name of the plan and select **Cancel enterprise** again to confirm.
+
+ > [!NOTE]
+ > If you configure Grafana Enterprise data sources and later cancel your subscription, you will no longer have access to them. Your Grafana Enterprise data sources and associated dashboards will remain in your Grafana instance but you will need to subscribe again to Grafana Enterprise to regain access to your data.
+
+The Azure platform displays some useful links at the bottom of the page.
+
+## Start using Grafana Enterprise plugins
+
+Grafana Enterprise gives you access to preinstalled plugins reserved for Grafana Enterprise customers. Once you've activated a Grafana Enterprise plan, go to the Grafana platform, and then select **Configuration > Data sources** from the left menu to set up a data source.
++
+## Next steps
+
+In this how-to guide, you learned how to enable Grafana Enterprise plugins. To learn how to configure data sources, go to:
+
+> [!div class="nextstepaction"]
+> [Configure data sources](how-to-data-source-plugins-managed-identity.md)
marketplace Azure App Managed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-app-managed.md
To set custom prices in an individual market, export, modify, and then import th
1. Select the exportedPrice.xlsx file you updated, and then click **Open**. > [!NOTE]
-> Offers will be billed to customers in the customersΓÇÖ agreement currency, using the local market price that was published at the time the offer was created. The amount that customers pay, and that ISVs are paid, depends on the Foreign Exchange rates at the time the customer transacts the offer. Learn more on ["How we convert currency?"](/azure/marketplace/marketplace-geo-availability-currencies).
+> Offers will be billed to customers in the customers' agreement currency, using the local market price that was published at the time the offer was created. The amount that customers pay, and that ISVs are paid, depends on the Foreign Exchange rates at the time the customer transacts the offer. Learn more on ["How we convert currency?"](./marketplace-geo-availability-currencies.md).
## Choose who can see your plan
The actions that are available in the **Action** column of the **Plan overview**
## Next steps - [Test and publish Azure application offer](azure-app-test-publish.md).-- [Sell an Azure application offer](azure-app-marketing.md) through the **Co-sell with Microsoft** and/or **Resell through CSPs** programs.
+- [Sell an Azure application offer](azure-app-marketing.md) through the **Co-sell with Microsoft** and/or **Resell through CSPs** programs.
marketplace Azure App Metered Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-app-metered-billing.md
When it comes to defining the offer along with its pricing models, it is importa
> You must keep track of the usage in your code and only send usage events to Microsoft for the usage that is above the base fee. > [!Note]
-> Offers will be billed to customers in the customersΓÇÖ agreement currency, using the local market price that was published at the time the offer was created. The amount that customers pay, and that ISVs are paid, depends on the Foreign Exchange rates at the time the customer transacts the offer. Learn more on ["How we convert currency?"](/azure/marketplace/marketplace-geo-availability-currencies).
+> Offers will be billed to customers in the customers' agreement currency, using the local market price that was published at the time the offer was created. The amount that customers pay, and that ISVs are paid, depends on the Foreign Exchange rates at the time the customer transacts the offer. Learn more on ["How we convert currency?"](./marketplace-geo-availability-currencies.md).
## Sample offer As an example, Contoso is a publisher with a managed application service called Contoso Analytics (CoA). CoA allows customers to analyze large amounts of data for reporting and data warehousing. Contoso is registered as a publisher in Partner Center for the commercial marketplace program to publish offers to Azure customers. There are two plans associated with CoA, outlined below:
Follow the instruction in [Support for the commercial marketplace program in Par
**Video tutorial** -- [Metered Billing for Azure Managed Applications Overview](https://go.microsoft.com/fwlink/?linkid=2196310)--
+- [Metered Billing for Azure Managed Applications Overview](https://go.microsoft.com/fwlink/?linkid=2196310)
marketplace Azure Container Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-container-troubleshoot.md
You may also encounter errors due to vulnerabilities in your images. For more in
<!-- LINKS --> [container-certification-troubleshooting]: ./azure-container-certification-faq.yml
-[cluster-extension]: /azure/aks/integrations#extensions/
+[cluster-extension]: ../aks/integrations.md#extensions
[grant-access]: ./azure-container-technical-assets-kubernetes.md#grant-access-to-your-azure-container-registry
marketplace Azure Vm Plan Pricing And Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-vm-plan-pricing-and-availability.md
When you remove a market, customers from that market who are using active deploy
Select **Save** to continue. > [!NOTE]
-> Offers will be billed to customers in the customersΓÇÖ agreement currency, using the local market price that was published at the time the offer was created. The amount that customers pay, and that ISVs are paid, depends on the Foreign Exchange rates at the time the customer transacts the offer. Learn more on ["How we convert currency?"](/azure/marketplace/marketplace-geo-availability-currencies).
+> Offers will be billed to customers in the customers' agreement currency, using the local market price that was published at the time the offer was created. The amount that customers pay, and that ISVs are paid, depends on the Foreign Exchange rates at the time the customer transacts the offer. Learn more on ["How we convert currency?"](./marketplace-geo-availability-currencies.md).
## Pricing
marketplace Create Consulting Service Pricing Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/create-consulting-service-pricing-availability.md
To validate the conversion or to set custom prices in an individual market, you
1. In the spreadsheet, you can adjust prices and currencies for each market. See [Geographic availability and currency support for the commercial marketplace](./marketplace-geo-availability-currencies.md) for the list of supported currencies. When you're done, save the file. 1. In Partner Center, under **Pricing**, select the **Import pricing data** link. Importing the file will overwrite previous pricing information. > [!Note]
-> Offers will be billed to customers in the customersΓÇÖ agreement currency, using the local market price that was published at the time the offer was created. The amount that customers pay, and that ISVs are paid, depends on the Foreign Exchange rates at the time the customer transacts the offer. Learn more on ["How we convert currency?"](/azure/marketplace/marketplace-geo-availability-currencies).
+> Offers will be billed to customers in the customers' agreement currency, using the local market price that was published at the time the offer was created. The amount that customers pay, and that ISVs are paid, depends on the Foreign Exchange rates at the time the customer transacts the offer. Learn more on ["How we convert currency?"](./marketplace-geo-availability-currencies.md).
> [!IMPORTANT] > The prices you define in Partner Center are static and don't follow variations in the exchange rates. To change the price in one or more markets after publication, update and resubmit your offer in Partner Center.
Select **Save draft** before continuing.
## Next steps
-* [Review and publish](review-publish-offer.md)
--
+* [Review and publish](review-publish-offer.md)
marketplace Create New Saas Offer Plans https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/create-new-saas-offer-plans.md
Every plan must be available in at least one market. On the **Pricing and availa
1. Select **Save**, to close the dialog box. > [!Note]
-> Offers will be billed to customers in the customersΓÇÖ agreement currency, using the local market price that was published at the time the offer was created. The amount that customers pay, and that ISVs are paid, depends on the Foreign Exchange rates at the time the customer transacts the offer. Learn more on ["How we convert currency?"](/azure/marketplace/marketplace-geo-availability-currencies).
+> Offers will be billed to customers in the customers' agreement currency, using the local market price that was published at the time the offer was created. The amount that customers pay, and that ISVs are paid, depends on the Foreign Exchange rates at the time the customer transacts the offer. Learn more on ["How we convert currency?"](./marketplace-geo-availability-currencies.md).
## Define a pricing model You must associate a pricing model with each plan: either _flat rate_ or _per user_. All plans in the same offer must use the same pricing model. For example, an offer cannot have one plan that's flat rate and another plan that's per user. For more information, see [SaaS pricing models](plan-saas-offer.md#saas-pricing-models).
If you haven't already done so, create a development and test (DEV) offer to tes
- [Publishing a Private SaaS plan](https://go.microsoft.com/fwlink/?linkid=2196256) - [Configuring SaaS Pricing in Partner Center: Publisher Overview](https://go.microsoft.com/fwlink/?linkid=2201523)-- [Configuring SaaS Pricing in Partner Center: Publisher Demo](https://go.microsoft.com/fwlink/?linkid=2201524)--
+- [Configuring SaaS Pricing in Partner Center: Publisher Demo](https://go.microsoft.com/fwlink/?linkid=2201524)
marketplace Marketplace Geo Availability Currencies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/marketplace-geo-availability-currencies.md
Once a plan is created and saved, the prices in all local currencies are static
As an ISV, you have several options available to minimize impact of foreign exchange fluctuations: -- [Stop selling in a specific market or markets](/azure/marketplace/update-existing-offer)-- [Update the prices of a published offer, to set specific local currency prices, using 1 of 2 options](/azure/marketplace/price-changes-faq):
+- [Stop selling in a specific market or markets](./update-existing-offer.md)
+- [Update the prices of a published offer, to set specific local currency prices, using 1 of 2 options](./price-changes-faq.yml):
- You can review the local market prices, using the Export capability in Pricing & Availability, and then update any local market prices (using Import), and then republish the plan – don't forget to update all the plans in an offer.
As an ISV, you have several options available to minimize impact of foreign exch
- If possible, set up the Private Offer as an upfront one-time payment, so that the exchange rate variations are as small as possible - If possible, have the customer billing profile to be set in USD
- - For multi-year deals, plan them as several one-year private offers, each with an upfront one-time payment
-----
+ - For multi-year deals, plan them as several one-year private offers, each with an upfront one-time payment
marketplace Marketplace Rewards https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/marketplace-rewards.md
As you grow through the Microsoft commercial marketplace, you unlock new benefit
The program creates a positive feedback loop: the benefits at each stage of growth help you progress to the next stage, helping you to grow your business to Microsoft customers, with Microsoft's field, and through Microsoft's channel by leveraging the commercial marketplace as your platform.
-Your benefits are differentiated based on whether your offer is [List, Trial, Consulting or Transact](/azure/marketplace/determine-your-listing-type).
+Your benefits are differentiated based on whether your offer is [List, Trial, Consulting or Transact](./determine-your-listing-type.md).
Based on your eligibility, you'll be contacted by a member of the Rewards team when your offer goes live.  
Your steps to get started are easy:
> [!NOTE] > If your offer has been live for more than three weeks and you have not received a message, check in Partner Center to find who in your organization owns the offer. They should have the communication and next steps. If you cannot determine the owner, or if the owner has left your company, open a [support ticket](https://go.microsoft.com/fwlink/?linkid=2165533).
-The scope of the activities available to you expands as you grow your offerings in the marketplace. All listings receive a base level of optimization recommendations and promotion as part of a self-serve email of resources and best practices.
+The scope of the activities available to you expands as you grow your offerings in the marketplace. All listings receive a base level of optimization recommendations and promotion as part of a self-serve email of resources and best practices.
marketplace Saas Metered Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/partner-center-portal/saas-metered-billing.md
Understanding the offer hierarchy is important when it comes to defining the off
> [!IMPORTANT] > You must keep track of the usage in your code and only send usage events to Microsoft for the usage that is above the base fee. > [!Note]
-> Offers will be billed to customers in the customersΓÇÖ agreement currency, using the local market price that was published at the time the offer was created. The amount that customers pay, and that ISVs are paid, depends on the Foreign Exchange rates at the time the customer transacts the offer. Learn more on ["How we convert currency?"](/azure/marketplace/marketplace-geo-availability-currencies).
+> Offers will be billed to customers in the customers' agreement currency, using the local market price that was published at the time the offer was created. The amount that customers pay, and that ISVs are paid, depends on the Foreign Exchange rates at the time the customer transacts the offer. Learn more on ["How we convert currency?"](../marketplace-geo-availability-currencies.md).
## Sample offer As an example, Contoso is a publisher with a SaaS service called Contoso Notification Services (CNS). CNS lets its customers send notifications either via email or text. Contoso is registered as a publisher in Partner Center for the commercial marketplace program to publish SaaS offers to Azure customers. There are three plans associated with CNS, outlined below:
To understand publisher support options and open a support ticket with Microsoft
**Video tutorials** - [SaaS Metered Billing Overview](https://go.microsoft.com/fwlink/?linkid=2196314)-- [The SaaS Metered Billing API with REST](https://go.microsoft.com/fwlink/?linkid=2196418)--
+- [The SaaS Metered Billing API with REST](https://go.microsoft.com/fwlink/?linkid=2196418)
marketplace Price Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/price-changes.md
For a price decrease to a Software as a service offer to take effect on the firs
For a price increase to a Software as a service offer to take effect on the first of a future month, 90 days out, publish the price change at least four days before the end of the current month. > [!Note]
-> Offers will be billed to customers in the customersΓÇÖ agreement currency, using the local market price that was published at the time the offer was created. The amount that customers pay, and that ISVs are paid, depends on the Foreign Exchange rates at the time the customer transacts the offer. Learn more on ["How we convert currency?"](/azure/marketplace/marketplace-geo-availability-currencies#how-we-convert-currency).
+> Offers will be billed to customers in the customers' agreement currency, using the local market price that was published at the time the offer was created. The amount that customers pay, and that ISVs are paid, depends on the Foreign Exchange rates at the time the customer transacts the offer. Learn more on ["How we convert currency?"](./marketplace-geo-availability-currencies.md#how-we-convert-currency).
## Changing the flat fee of a SaaS or Azure app offer To update the monthly or yearly price of a SaaS or Azure app offer:
If the price change was an increase and the cancelation was after the 2-day peri
After the price change is canceled, follow the steps in the appropriate part of this article to schedule a new price change with the needed modifications.  ## Next steps -- Sign in to [Partner Center](https://go.microsoft.com/fwlink/?linkid=2166002).
+- Sign in to [Partner Center](https://go.microsoft.com/fwlink/?linkid=2166002).
migrate How To Discover Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-discover-applications.md
The software inventory is exported and downloaded in Excel format. The **Softwar
> [!NOTE] > The appliance can connect to only those SQL Server instances to which it has network line of sight, whereas software inventory by itself may not need network line of sight. + Once connected, the appliance gathers configuration and performance data of SQL Server instances and databases. The SQL Server configuration data is updated once every 24 hours, and the performance data is captured every 30 seconds. Hence, any change to the properties of the SQL Server instance and databases, such as database status or compatibility level, can take up to 24 hours to update on the portal. ## Discover ASP.NET web apps
openshift Howto Use Key Vault Secrets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-use-key-vault-secrets.md
keywords: azure, openshift, red hat, key vault
# Use Azure Key Vault Provider for Secrets Store CSI Driver on Azure Red Hat OpenShift
-Azure Key Vault Provider for Secrets Store CSI Driver allows you to get secret contents stored in an [Azure Key Vault instance](/azure/key-vault/general/basic-concepts) and use the [Secrets Store CSI Driver](https://secrets-store-csi-driver.sigs.k8s.io/introduction.html) to mount them into Kubernetes pods. This article explains how to use Azure Key Vault Provider for Secrets Store CSI Driver on Azure Red Hat OpenShift.
+Azure Key Vault Provider for Secrets Store CSI Driver allows you to get secret contents stored in an [Azure Key Vault instance](../key-vault/general/basic-concepts.md) and use the [Secrets Store CSI Driver](https://secrets-store-csi-driver.sigs.k8s.io/introduction.html) to mount them into Kubernetes pods. This article explains how to use Azure Key Vault Provider for Secrets Store CSI Driver on Azure Red Hat OpenShift.
> [!NOTE] > Azure Key Vault Provider for Secrets Store CSI Driver is an Open Source project that works with Azure Red Hat OpenShift. While the instructions presented in this article show an example of how the Secrets Store CSI driver can be implemented, they are intended as a general guide to using the driver with ARO. Support for this implementation of an Open Source project would be provided by the project.
Uninstall the Key Vault Provider and the CSI Driver.
``` oc adm policy remove-scc-from-user privileged \ system:serviceaccount:k8s-secrets-store-csi:secrets-store-csi-driver
- ```
-
+ ```
orbital Delete Contact https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/delete-contact.md
Title: Cancel a scheduled contact on Azure Orbital Ground Station service description: Learn how to cancel a scheduled contact.-+ Last updated 07/13/2022-+ # Customer intent: As a satellite operator, I want to ingest data from my satellite into Azure.
orbital Downlink Aqua https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/downlink-aqua.md
Title: Schedule a contact with NASA's AQUA public satellite using Azure Orbital Ground Station service
-description: How to schedule a contact with NASA's AQUA public satellite using Azure Orbital Ground Station service
-
+ Title: Downlink data from NASA's Aqua satellite by using Azure Orbital Ground Station
+description: Learn how to schedule a contact with NASA's Aqua public satellite by using the Azure Orbital Ground Station service.
+ Last updated 07/12/2022-
-# Customer intent: As a satellite operator, I want to ingest data from NASA's AQUA public satellite into Azure.
+
+# Customer intent: As a satellite operator, I want to ingest data from NASA's Aqua public satellite into Azure.
-# Tutorial: Downlink data from NASA's AQUA public satellite
+# Tutorial: Downlink data from NASA's Aqua public satellite
-You can communicate with satellites directly from Azure using Azure Orbital's Ground Station (AOGS) service. Once downlinked, this data can be processed and analyzed in Azure. In this guide you'll learn how to:
+You can communicate with satellites directly from Azure by using the Azure Orbital Ground Station service. After you downlink data, you can process and analyze it in Azure. In this guide, you'll learn how to:
> [!div class="checklist"]
-> * Create & authorize a spacecraft for AQUA
-> * Prepare a virtual machine (VM) to receive the downlinked AQUA data
-> * Configure a contact profile for an AQUA downlink mission
-> * Schedule a contact with AQUA using Azure Orbital and save the downlinked data
-
+> * Create and authorize a spacecraft for the Aqua public satellite.
+> * Prepare a virtual machine (VM) to receive downlinked Aqua data.
+> * Configure a contact profile for an Aqua downlink mission.
+> * Schedule a contact with Aqua by using Azure Orbital and save the downlinked data.
## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- Must be a Contributer at the subscription level.
+- Contributor permissions at the subscription level.
## Sign in to Azure
-Sign in to the [Azure portal - Orbital Preview](https://aka.ms/orbital/portal).
+Sign in to the [Azure portal - Azure Orbital Preview](https://aka.ms/orbital/portal).
> [!NOTE]
-> These steps must be followed as is or you won't be able to find the resources. Please use the specific link above to sign in directly to the Azure Orbital Preview page.
+> For all the procedures in this tutorial, follow the steps exactly as shown, or you won't be able to find the resources. Use the preceding link to sign in directly to the Azure Orbital Preview page.
+
+## Create a spacecraft resource for Aqua
-## Create & authorize a spacecraft for AQUA
-### Create a new spacecraft resource for AQUA
-1. In the Azure portal search box, enter **Spacecraft*. Select **Spacecraft** in the search results.
-2. In the **Spacecraft** page, select Create.
-3. Learn an up-to-date Two-Line Element (TLE) for AQUA by checking celestrak at https://celestrak.com/NORAD/elements/active.txt
+1. In the Azure portal search box, enter **Spacecraft**. Select **Spacecraft** in the search results.
+2. On the **Spacecraft** page, select **Create**.
+3. Get an up-to-date Two-Line Element (TLE) for Aqua by checking [CelesTrak](https://celestrak.com/NORAD/elements/active.txt).
+
> [!NOTE]
- > You will want to periodically update this TLE value to ensure that it is up-to-date prior to scheduling a contact. A TLE that is more than one or two weeks old may result in an unsuccessful downlink.
-4. In **Create spacecraft resource**, enter or select this information in the Basics tab:
+ > Be sure to update this TLE value before you schedule a contact. A TLE that's more than two weeks old might result in an unsuccessful downlink.
+
+4. In **Create spacecraft resource**, on the **Basics** tab, enter or select this information:
| **Field** | **Value** | | | |
- | Subscription | Select your subscription |
- | Resource Group | Select your resource group |
- | Name | **AQUA** |
- | Region | Select **West US 2** |
- | NORAD ID | **27424** |
- | TLE title line | **AQUA** |
- | TLE line 1 | Enter TLE line 1 from Celestrak |
- | TLE line 2 | Enter TLE line 2 from Celestrak |
-
-5. Select the **Links** tab, or select the **Next: Links** button at the bottom of the page.
-6. In the **Links** page, enter or select this information:
+ | **Subscription** | Select your subscription. |
+ | **Resource Group** | Select your resource group. |
+ | **Name** | Enter **AQUA**. |
+ | **Region** | Select **West US 2**. |
+ | **NORAD ID** | Enter **27424**. |
+ | **TLE title line** | Enter **AQUA**. |
+ | **TLE line 1** | Enter TLE line 1 from CelesTrak. |
+ | **TLE line 2** | Enter TLE line 2 from CelesTrak. |
+
+5. Select the **Links** tab, or select the **Next: Links** button at the bottom of the page. Then, enter or select this information:
| **Field** | **Value** | | | |
- | Direction | Select **Downlink** |
- | Center Frequency | Enter **8160** |
- | Bandwidth | Enter **15** |
- | Polarization | Select **RHCP** |
+ | **Direction** | Select **Downlink**. |
+ | **Center Frequency** | Enter **8160**. |
+ | **Bandwidth** | Enter **15**. |
+ | **Polarization** | Select **RHCP**. |
-7. Select the **Review + create** tab, or select the **Review + create** button.
-8. Select **Create**
+7. Select the **Review + create** tab, or select the **Next: Review + create** button.
+8. Select **Create**.
-### Request authorization of the new AQUA spacecraft resource
-1. Navigate to the newly created spacecraft resource's overview page.
-2. Select **New support request** in the Support + troubleshooting section of the left-hand blade.
+## Request authorization of the new Aqua spacecraft resource
+
+1. Go to the overview page for the newly created spacecraft resource.
+2. On the left pane, in the **Support + troubleshooting** section, select **New support request**.
+
> [!NOTE]
- > A [Basic Support Plan](https://azure.microsoft.com/support/plans/) or higher is required for a spacecraft authorization request.
+ > A [Basic support plan](https://azure.microsoft.com/support/plans/) or higher is required for a spacecraft authorization request.
-3. In the **New support request** page, enter or select this information in the Basics tab:
+3. On the **New support request** page, on the **Basics** tab, enter or select this information:
-| **Field** | **Value** |
-| | |
-| Summary | Request Authorization for [**AQUA**] |
-| Issue type | Select **Technical** |
-| Subscription | Select the subscription in which the spacecraft resource was created |
-| Service | Select **My services** |
-| Service type | Search for and select **Azure Orbital** |
-| Problem type | Select **Spacecraft Management and Setup** |
-| Problem subtype | Select **Spacecraft Registration** |
+ | **Field** | **Value** |
+ | | |
+ | **Summary** | Enter **Request authorization for AQUA**. |
+ | **Issue type** | Select **Technical**. |
+ | **Subscription** | Select the subscription in which you created the spacecraft resource. |
+ | **Service** | Select **My services**. |
+ | **Service type** | Search for and select **Azure Orbital**. |
+ | **Problem type** | Select **Spacecraft Management and Setup**. |
+ | **Problem subtype** | Select **Spacecraft Registration**. |
-4. Select the Details tab at the top of the page
-5. In the Details tab, enter this information in the Problem details section:
+4. Select the **Details** tab at the top of the page. In the **Problem details** section, enter this information:
-| **Field** | **Value** |
-| | |
-| When did the problem start? | Select the current date & time |
-| Description | List AQUA's center frequency (**8160**) and the desired ground stations |
-| File upload | Upload any pertinent licensing material, if applicable |
+ | **Field** | **Value** |
+ | | |
+ | **When did the problem start?** | Select the current date and time. |
+ | **Description** | List Aqua's center frequency (**8160**) and the desired ground stations. |
+ | **File upload** | Upload any pertinent licensing material, if applicable. |
6. Complete the **Advanced diagnostic information** and **Support method** sections of the **Details** tab.
-7. Select the **Review + create** tab, or select the **Review + create** button.
+7. Select the **Review + create** tab, or select the **Next: Review + create** button.
8. Select **Create**. > [!NOTE]
- > You can confirm that your spacecraft resource for AQUA is authorized by checking that the **Authorization status** shows **Allowed** in the spacecraft's overiew page.
+ > You can confirm that your spacecraft resource for Aqua is authorized by checking that the **Authorization status** shows **Allowed** on the spacecraft's overview page.
+## Prepare your virtual machine and network to receive Aqua data
-## Prepare your virtual machine (VM) and network to receive AQUA data
+1. [Create a virtual network](../virtual-network/quick-create-portal.md) to host your data endpoint VM.
+2. [Create a virtual machine](../virtual-network/quick-create-portal.md#create-virtual-machines) within the virtual network that you created. Ensure that this VM has the following specifications:
+ - The operating system is Linux (Ubuntu 18.04 or later).
+ - The size is at least 32 GiB of RAM.
+ - The VM has internet access for downloading tools by having one standard public IP address.
-1. [Create a virtual network](../virtual-network/quick-create-portal.md) to host your data endpoint virtual machine (VM)
-2. [Create a virtual machine (VM)](../virtual-network/quick-create-portal.md#create-virtual-machines) within the virtual network above. Ensure that this VM has the following specifications:
-- Operation System: Linux (Ubuntu 18.04 or higher)-- Size: at least 32 GiB of RAM-- Ensure that the VM has internet access for downloading tools by having one standard public IP address
+ > [!TIP]
+ > The public IP address here is only for internet connectivity, not contact data. For more information, see [Default outbound access in Azure](../virtual-network/ip-services/default-outbound-access.md).
-> [!TIP]
-> The Public IP Address here is only for internet connectivity not Contact Data. For more information, see [Default outbound access in Azure](../virtual-network/ip-services/default-outbound-access.md).
+3. Enter the following commands to create a temporary file system (*tmpfs*) on the virtual machine. The downlinked data will be written to this file system to avoid slow writes to disk.
-3. Create a tmpfs on the virtual machine. This virtual machine is where the data will be written to in order to avoid slow writes to disk:
-```console
-sudo mkdir /media/aqua
-sudo mount -t tmpfs -o size=28G tmpfs /media/aqua
-```
-4. Ensure that SOCAT is installed on the machine:
-```console
-sudo apt install socat
-```
+ ```console
+ sudo mkdir /media/aqua
+ sudo mount -t tmpfs -o size=28G tmpfs /media/aqua
+ ```
+4. Enter the following command to ensure that the Socat tool is installed on the machine:
+
+ ```console
+ sudo apt install socat
+ ```
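+
+   Once a contact is scheduled (see the final section of this tutorial), you can listen for the incoming data and write it to the tmpfs. The following is a sketch that assumes the TCP port **56001** configured in the contact profile later in this tutorial; the output file name is a placeholder:
+
+   ```console
+   socat -u TCP-LISTEN:56001,fork OPEN:/media/aqua/aqua-contact.bin,creat,append
+   ```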
5. [Prepare the network for Azure Orbital Ground Station integration](prepare-network.md) to configure your network.
-## Configure a contact profile for an AQUA downlink mission
-1. In the Azure portal search box, enter **Contact profile**. Select **Contact profile** in the search results.
-2. In the **Contact profile** page, select **Create**.
-3. In **Create contact profile resource**, enter or select this information in the **Basics** tab:
+## Configure a contact profile for an Aqua downlink mission
+
+1. In the Azure portal's search box, enter **Contact profile**. Select **Contact profile** in the search results.
+2. On the **Contact profile** page, select **Create**.
+3. In **Create contact profile resource**, on the **Basics** tab, enter or select this information:
| **Field** | **Value** |
| --- | --- |
- | Subscription | Select your subscription |
- | Resource group | Select your resource group |
- | Name | Enter **AQUA_Downlink** |
- | Region | Select **West US 2** |
- | Minimum viable contact duration | **PT1M** |
- | Minimum elevation | **5.0** |
- | Auto track configuration | **Disabled** |
- | Event Hubs Namespace | Select an Event Hubs Namespace to which you'll send telemetry data of your contacts. Select a Subscription before you can select an Event Hubs Namespace. |
- | Event Hubs Instance | Select an Event Hubs Instance that belongs to the previously selected Namespace. *This field will only appear if an Event Hubs Namespace is selected first*. |
--
-4. Select the **Links** tab, or select the **Next: Links** button at the bottom of the page.
-5. In the **Links** page, select **Add new Link**
-6. In the **Add Link** page, enter, or select this information:
+ | **Subscription** | Select your subscription. |
+ | **Resource group** | Select your resource group. |
+ | **Name** | Enter **AQUA_Downlink**. |
+ | **Region** | Select **West US 2**. |
+ | **Minimum viable contact duration** | Enter **PT1M**. |
+ | **Minimum elevation** | Enter **5.0**. |
+ | **Auto track configuration** | Select **Disabled**. |
+ | **Event Hubs Namespace** | Select an Azure Event Hubs namespace to which you'll send telemetry data for your contacts. You must select a subscription before you can select an Event Hubs namespace. |
+ | **Event Hubs Instance** | Select an Event Hubs instance that belongs to the previously selected namespace. This field appears only if you select an Event Hubs namespace first. |
+
+4. Select the **Links** tab, or select the **Next: Links** button at the bottom of the page. Then, select **Add new Link**.
+6. On the **Add Link** pane, enter or select this information:
| **Field** | **Value** |
| --- | --- |
- | Direction | **Downlink** |
- | Gain/Temperature in db/K | **0** |
- | Center Frequency | **8160.0** |
- | Bandwidth MHz | **15.0** |
- | Polarization | **RHCP** |
- | Endpoint name | Enter the name of the virtual machine (VM) you created above |
- | IP Address | Enter the Private IP address of the virtual machine you created above (VM) |
- | Port | **56001** |
- | Protocol | **TCP** |
- | Demodulation Configuration | Select the 'Preset Named Modem Configuration' option and choose **Aqua Direct Broadcast**|
- | Decoding Configuration | Leave this field **blank** |
--
-7. Select the **Submit** button
-8. Select the **Review + create** tab or select the **Review + create** button
-9. Select the **Create** button
-
-## Schedule a contact with AQUA using Azure Orbital and save the downlinked data
-
-1. In the Azure portal search box, enter **Spacecraft**. Select **Spacecraft** in the search results.
-2. In the **Spacecraft** page, select **AQUA**.
-3. Select **Schedule contact** on the top bar of the spacecraft's overview.
-4. In the **Schedule contact** page, specify this information from the top of the page:
+ | **Direction** | Enter **Downlink**. |
+ | **Gain/Temperature in db/K** | Enter **0**. |
+ | **Center Frequency** | Enter **8160.0**. |
+ | **Bandwidth MHz** | Enter **15.0**. |
+ | **Polarization** | Enter **RHCP**. |
+ | **Endpoint name** | Enter the name of the virtual machine that you created earlier. |
+ | **IP Address** | Enter the private IP address of the virtual machine that you created earlier. |
+ | **Port** | Enter **56001**. |
+ | **Protocol** | Enter **TCP**. |
+ | **Demodulation Configuration** | Select the **Preset Named Modem Configuration** option, and then select **Aqua Direct Broadcast**.|
+ | **Decoding Configuration** | Leave this field blank. |
+
+7. Select the **Submit** button.
+8. Select the **Review + create** tab, or select the **Next: Review + create** button.
+9. Select **Create**.
+
+## Schedule a contact with Aqua and save the downlinked data
+
+1. In the Azure portal's search box, enter **Spacecraft**. Select **Spacecraft** in the search results.
+2. On the **Spacecraft** page, select **AQUA**.
+3. Select **Schedule contact** on the top bar of the spacecraft's overview.
+4. On the **Schedule contact** page, specify this information:
| **Field** | **Value** |
| --- | --- |
- | Contact profile | Select **AQUA_Downlink** |
- | Ground station | Select **Quincy** |
- | Start time | Identify a start time for the contact availability window |
- | End time | Identify an end time for the contact availability window |
+ | **Contact profile** | Select **AQUA_Downlink**. |
+ | **Ground station** | Select **Quincy**. |
+ | **Start time** | Identify a start time for the contact availability window. |
+ | **End time** | Identify an end time for the contact availability window. |
5. Select **Search** to view available contact times.
-6. Select one or more contact windows and select **Schedule**.
-7. View the scheduled contact by selecting the **AQUA** spacecraft and navigating to **Contacts**.
-8. Shortly before the contact begins executing, start listening on port 56001, and output the data received into the file:
-```console
-socat -u tcp-listen:56001,fork create:/media/aqua/out.bin
-```
-9. Once your contact has executed, copy the output file from the tmpfs into your home directory to avoid being overwritten when another contact is executed.
-```console
-mkdir ~/aquadata
-cp /media/aqua/out.bin ~/aquadata/raw-$(date +"%FT%H%M%z").bin
-```
-
- > [!NOTE]
- > For a 10 minute long contact with AQUA while it is transmitting with 15MHz of bandwidth, you should expect to receive somewhere in the order of 450MB of data.
+6. Select one or more contact windows, and then select **Schedule**.
+7. View the scheduled contact by selecting the **AQUA** spacecraft and going to **Contacts**.
+8. Shortly before the contact begins, start listening on port 56001 and write the received data to a file (an optional check that the listener is up appears after the note below):
+
+ ```console
+ socat -u tcp-listen:56001,fork create:/media/aqua/out.bin
+ ```
+9. After you run your contact, copy the output file from *tmpfs* into your home directory, to avoid overwriting the file when you run another contact:
+
+ ```console
+ mkdir ~/aquadata
+ cp /media/aqua/out.bin ~/aquadata/raw-$(date +"%FT%H%M%z").bin
+ ```
+
+> [!NOTE]
+> For a 10-minute contact with Aqua while it's transmitting with 15 MHz of bandwidth, you should expect to receive around 450 MB of data.
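Separately, before a scheduled pass you can confirm that socat is listening on the expected port, as referenced in step 8. This is an optional sanity check, not a step from the tutorial:

```console
# Show listening TCP sockets and filter for the contact profile's port
ss -ltn | grep 56001
```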
## Next steps

-- [Collect and process Aqua satellite payload](satellite-imagery-with-orbital-ground-station.md)
+- [Collect and process an Aqua satellite payload](satellite-imagery-with-orbital-ground-station.md)
orbital License Spacecraft https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/license-spacecraft.md
Once the licenses are in place, the spacecraft object will be updated by Azure O
## FAQ

Q. Are third party ground stations such as KSAT included in this process?
-A. No, the process on this page applies to Microsoft sites only. For more information, see (to add link to third party page).
+A. No, the process on this page applies to Microsoft sites only. For more information, see [Integrate partner network ground stations](./partner-network-integration.md).
## Next steps

- [Integrate partner network ground stations](./partner-network-integration.md)
orbital Prepare Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/prepare-network.md
Ensure the objects comply with the recommendations in this article. Note, these
## Prepare subnet for VNET injection

Prerequisites:

-- An entire subnet with no existing IPs allocated or in use that can be dedicated to Orbital GSaaS in your virtual network in your resource group.
+- An entire subnet with no existing IPs allocated or in use that can be dedicated to the Azure Orbital Ground Station service in your virtual network in your resource group.
Steps:

1. Delegate a subnet to the service named `Microsoft.Orbital/orbitalGateways`. Follow the instructions in [Add or remove a subnet delegation in an Azure virtual network](../virtual-network/manage-subnet-delegation.md).
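As an illustrative sketch, the same delegation can be applied with the Azure CLI. All resource names here are placeholders, not values from the article:

```console
az network vnet subnet update \
  --resource-group <resource-group> \
  --vnet-name <vnet-name> \
  --name <subnet-name> \
  --delegations Microsoft.Orbital/orbitalGateways
```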
orbital Satellite Imagery With Orbital Ground Station https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/satellite-imagery-with-orbital-ground-station.md
Execute steps listed in [Tutorial: Downlink data from NASA's AQUA public satelli
The above tutorial provides a walkthrough for scheduling a contact with Aqua and collecting the direct broadcast data on an Azure VM.

> [!NOTE]
-> In the section [Prepare a virtual machine (VM) to receive the downlinked AQUA data](downlink-aqua.md#prepare-your-virtual-machine-vm-and-network-to-receive-aqua-data), use the following values:
+> In the section [Prepare a virtual machine (VM) to receive the downlinked AQUA data](downlink-aqua.md#prepare-your-virtual-machine-and-network-to-receive-aqua-data), use the following values:
>
> - **Name:** receiver-vm
> - **Operating System:** Linux (CentOS Linux 7 or higher)
partner-solutions Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/datadog/troubleshoot.md
This document contains information about troubleshooting your solutions that use
* Purchase fails because a valid credit card isn't connected to the Azure subscription or a payment method isn't associated with the subscription.
- Use a different Azure subscription. Or, add or update the credit card or payment method for the subscription. For more information, see [updating the credit and payment method](/azure/cost-management-billing/manage/change-credit-card).
+ Use a different Azure subscription. Or, add or update the credit card or payment method for the subscription. For more information, see [updating the credit and payment method](../../cost-management-billing/manage/change-credit-card.md).
* The EA subscription doesn't allow Marketplace purchases.
- Use a different subscription. Or, check if your EA subscription is enabled for Marketplace purchase. For more information, see [Enable Marketplace purchases](/azure/cost-management-billing/manage/ea-azure-marketplace.md#enabling-azure-marketplace-purchases). If those options don't solve the problem, contact [Datadog support](https://www.datadoghq.com/support).
+ Use a different subscription. Or, check if your EA subscription is enabled for Marketplace purchase. For more information, see [Enable Marketplace purchases](../../cost-management-billing/manage/ea-azure-marketplace.md#enabling-azure-marketplace-purchases). If those options don't solve the problem, contact [Datadog support](https://www.datadoghq.com/support).
## Unable to create Datadog - An Azure Native ISV Service resource
To set up the Azure Datadog integration, you must have **Owner** access on the A
:::image type="content" source="media/troubleshoot/diagnostic-setting.png" alt-text="Datadog diagnostic setting on the Azure resource" border="true"::: -- Resource doesn't support sending logs. Only resource types with monitoring log categories can be configured to send logs. For more information, see [supported categories](/azure/azure-monitor/essentials/resource-logs-categories).
+- Resource doesn't support sending logs. Only resource types with monitoring log categories can be configured to send logs. For more information, see [supported categories](../../azure-monitor/essentials/resource-logs-categories.md).
-- Limit of five diagnostic settings reached. Each Azure resource can have a maximum of five diagnostic settings. For more information, see [diagnostic settings](/azure/azure-monitor/essentials/diagnostic-settings?tabs=portal).
+- Limit of five diagnostic settings reached. Each Azure resource can have a maximum of five diagnostic settings. For more information, see [diagnostic settings](../../azure-monitor/essentials/diagnostic-settings.md?tabs=portal).
- Export of Metrics data isn't supported currently by the partner solutions under Azure Monitor diagnostic settings.
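To check how many diagnostic settings a resource already has, one option is the Azure CLI. A minimal sketch, assuming `<resource-id>` is the full Azure resource ID:

```console
# List existing diagnostic settings for a resource (maximum of five per resource)
az monitor diagnostic-settings list --resource <resource-id>
```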
If the Datadog agent has been configured with an incorrect key, navigate to the
## Next steps

-- Learn about [managing your instance](manage.md) of Datadog.
+- Learn about [managing your instance](manage.md) of Datadog.
partner-solutions Dynatrace Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-troubleshoot.md
This document contains information about troubleshooting your solutions that use
### Logs not being emitted

-- Resource doesn't support sending logs. Only resource types with monitoring log categories can be configured to send logs. For more information, see [supported categories](/azure/azure-monitor/essentials/resource-logs-categories).
+- Resource doesn't support sending logs. Only resource types with monitoring log categories can be configured to send logs. For more information, see [supported categories](../../azure-monitor/essentials/resource-logs-categories.md).
-- Limit of five diagnostic settings reached. Each Azure resource can have a maximum of five diagnostic settings. For more information, see [diagnostic settings](/azure/azure-monitor/essentials/diagnostic-settings?tabs=portal)
+- Limit of five diagnostic settings reached. Each Azure resource can have a maximum of five diagnostic settings. For more information, see [diagnostic settings](../../azure-monitor/essentials/diagnostic-settings.md?tabs=portal)
- Export of Metrics data isn't supported currently by the partner solutions under Azure Monitor diagnostic settings.
This document contains information about troubleshooting your solutions that use
## Next steps

-- Learn about [managing your instance](dynatrace-how-to-manage.md) of Dynatrace.
+- Learn about [managing your instance](dynatrace-how-to-manage.md) of Dynatrace.
partner-solutions Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/elastic/troubleshoot.md
This document contains information about troubleshooting your solutions that use
## Unable to create an Elastic resource
-Elastic integration with Azure can only be set up by users who have *Owner* or *Contributor* access on the Azure subscription. [Confirm that you have the appropriate access](/azure/role-based-access-control/check-access).
+Elastic integration with Azure can only be set up by users who have *Owner* or *Contributor* access on the Azure subscription. [Confirm that you have the appropriate access](../../role-based-access-control/check-access.md).
## Logs not being emitted to Elastic
Elastic integration with Azure can only be set up by users who have *Owner* or *
:::image type="content" source="media/troubleshoot/check-diagnostic-setting.png" alt-text="Verify diagnostic setting"::: -- Resource doesn't support sending logs. Only resource types with monitoring log categories can be configured to send logs. For more information, see [supported categories](/azure/azure-monitor/essentials/resource-logs-categories).
+- Resource doesn't support sending logs. Only resource types with monitoring log categories can be configured to send logs. For more information, see [supported categories](../../azure-monitor/essentials/resource-logs-categories.md).
-- Limit of five diagnostic settings reached. Each Azure resource can have a maximum of five diagnostic settings. For more information, see [diagnostic settings](/azure/azure-monitor/essentials/diagnostic-settings?tabs=portal)
+- Limit of five diagnostic settings reached. Each Azure resource can have a maximum of five diagnostic settings. For more information, see [diagnostic settings](../../azure-monitor/essentials/diagnostic-settings.md?tabs=portal)
- Export of Metrics data is not supported currently by the partner solutions under Azure Monitor diagnostic settings.
Elastic integration with Azure can only be set up by users who have *Owner* or *
* Purchase fails because a valid credit card isn't connected to the Azure subscription or a payment method isn't associated with the subscription.
- Use a different Azure subscription. Or, add or update the credit card or payment method for the subscription. For more information, see [updating the credit and payment method](/azure/cost-management-billing/manage/change-credit-card).
+ Use a different Azure subscription. Or, add or update the credit card or payment method for the subscription. For more information, see [updating the credit and payment method](../../cost-management-billing/manage/change-credit-card.md).
* The EA subscription doesn't allow Marketplace purchases.
- Use a different subscription. Or, check if your EA subscription is enabled for Marketplace purchase. For more information, see [Enable Marketplace purchases](/azure/cost-management-billing/manage/ea-azure-marketplace#enabling-azure-marketplace-purchases).
+ Use a different subscription. Or, check if your EA subscription is enabled for Marketplace purchase. For more information, see [Enable Marketplace purchases](../../cost-management-billing/manage/ea-azure-marketplace.md#enabling-azure-marketplace-purchases).
## Get support
In the Elastic site, open a support request.
## Next steps
-Learn about [managing your instance](manage.md) of Elastic.
+Learn about [managing your instance](manage.md) of Elastic.
partner-solutions Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/logzio/troubleshoot.md
This article describes how to troubleshoot the Logz.io integration with Azure.
## Owner role needed to create resource
-To set up Logz.io, you must be assigned the [Owner role](/azure/role-based-access-control/rbac-and-directory-admin-roles) in the Azure subscription. Before you begin this integration, [check your access](/azure/role-based-access-control/check-access).
+To set up Logz.io, you must be assigned the [Owner role](../../role-based-access-control/rbac-and-directory-admin-roles.md) in the Azure subscription. Before you begin this integration, [check your access](../../role-based-access-control/check-access.md).
## Single sign-on errors
Use the following patterns to add new values:
### Logs not being sent to Logz.io

-- Only resources listed in [Azure Monitor resource log categories](/azure/azure-monitor/essentials/resource-logs-categories) send logs to Logz.io. To verify whether a resource is sending logs to Logz.io:
+- Only resources listed in [Azure Monitor resource log categories](../../azure-monitor/essentials/resource-logs-categories.md) send logs to Logz.io. To verify whether a resource is sending logs to Logz.io:
- 1. Go to [Azure diagnostic setting](/azure/azure-monitor/essentials/diagnostic-settings) for the specific resource.
+ 1. Go to [Azure diagnostic setting](../../azure-monitor/essentials/diagnostic-settings.md) for the specific resource.
1. Verify that there's a Logz.io diagnostic setting. :::image type="content" source="media/troubleshoot/diagnostics.png" alt-text="Screenshot of the Azure monitoring diagnostic settings for Logz.io."::: -- Limit of five diagnostic settings reached. Each Azure resource can have a maximum of five diagnostic settings. For more information, see [diagnostic settings](/azure/azure-monitor/essentials/diagnostic-settings?tabs=portal).
+- Limit of five diagnostic settings reached. Each Azure resource can have a maximum of five diagnostic settings. For more information, see [diagnostic settings](../../azure-monitor/essentials/diagnostic-settings.md?tabs=portal).
- Export of Metrics data isn't supported currently by the partner solutions under Azure Monitor diagnostic settings.

## Register resource provider
-You must register `Microsoft.Logz` in the Azure subscription that contains the Logz.io resource, and any subscriptions with resources that send data to Logz.io. For more information about troubleshooting resource provider registration, see [Resolve errors for resource provider registration](/azure/azure-resource-manager/troubleshooting/error-register-resource-provider).
+You must register `Microsoft.Logz` in the Azure subscription that contains the Logz.io resource, and any subscriptions with resources that send data to Logz.io. For more information about troubleshooting resource provider registration, see [Resolve errors for resource provider registration](../../azure-resource-manager/troubleshooting/error-register-resource-provider.md).
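As an illustration, the registration can also be performed with the Azure CLI against the active subscription; this sketch isn't part of the article:

```console
# Register the resource provider, then confirm it reports "Registered"
az provider register --namespace Microsoft.Logz
az provider show --namespace Microsoft.Logz --query registrationState --output tsv
```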
## Limit reached in monitored resources
Purchase fails because a valid credit card isn't connected to the Azure subscrip
To resolve a purchase error:

- Use a different Azure subscription.
-- Add or update the subscription's credit card or payment method. For more information, see [Add or update a credit card for Azure](/azure/cost-management-billing/manage/change-credit-card).
+- Add or update the subscription's credit card or payment method. For more information, see [Add or update a credit card for Azure](../../cost-management-billing/manage/change-credit-card.md).
You can view the error's output from the resource's deployment page, by selecting **Operation Details**.
You can view the error's output from the resource's deployment page, by selectin
## Next steps

- Learn how to [manage](manage.md) your Logz.io integration.
-- To learn more about SSO, see [Set up Logz.io single sign-on](setup-sso.md).
+- To learn more about SSO, see [Set up Logz.io single sign-on](setup-sso.md).
partner-solutions Nginx Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/nginx/nginx-create.md
Title: Create an NGINX for Azure deployment
-description: This article describes how to use the Azure portal to create an instance of NGINX.
-
+ Title: Create an NGINXaaS deployment
+description: This article describes how to use the Azure portal to create an instance of NGINXaaS.
++ Previously updated : 05/12/2022 Last updated : 01/11/2023+
-# QuickStart: Get started with NGINX
+# QuickStart: Get started with NGINXaaS
-In this quickstart, you'll use the Azure Marketplace to find and create an instance of **NGINX for Azure**.
+In this quickstart, you'll use the Azure Marketplace to find and create an instance of **NGINXaaS**.
-## Create new NGINX deployment
+## Create a new NGINXaaS resource
### Basics
-1. To create an NGINX deployment using the Marketplace, subscribe to **NGINX for Azure** in the Azure portal.
+1. To create an NGINXaaS deployment using the Marketplace, subscribe to **NGINXaaS** in the Azure portal.
-1. Set the following values in the **Create NGINX Deployment** pane.
+1. Set the following values in the **Create NGINXaaS** pane.
- :::image type="content" source="media/nginx-create/nginx-create.png" alt-text="Screenshot of basics pane of the NGINX create experience.":::
+ :::image type="content" source="media/nginx-create/nginx-create.png" alt-text="Screenshot of basics pane of the NGINXaaS create experience.":::
| Property | Description |
|---|---|
- | Subscription | From the drop-down, select your Azure subscription where you have owner access |
- | Resource group | Specify whether you want to create a new resource group or use an existing one. A resource group is a container that holds related resources for an Azure solution. For more information, see Azure Resource Group overview. |
- | NGINX account name | Put the name for the NGINX account you want to create |
- | Location | Select West Central US. West Central US is the only Azure region supported by NGINX during preview |
- | Plan | Specified based on the selected NGINX plan |
- | Price | Pay As You Go |
+ | **Subscription** | From the drop-down, select your Azure subscription where you have owner access. |
+ | **Resource group** | Specify whether you want to create a new resource group or use an existing one. A resource group is a container that holds related resources for an Azure solution. For more information, see Azure Resource Group overview. |
+ | **Name** | Put the name for the NGINXaaS account you want to create. |
+ | **Region** | Select West Central US. West Central US is the only Azure region supported by NGINXaaS during preview. |
+ | **Pricing Plan** | Specified based on the selected NGINXaaS plan. |
+ > [!NOTE]
-> West Central US is the only Azure region supported by NGINX during preview.
+> West Central US is the only Azure region supported by NGINXaaS during preview.
+<!-- Is this still true at GA -->
### Networking
-1. After filling in the proper values, select the **Next: Networking** to see the **Networking** screen. Specify the VNet and Subnet that is associated with the NGINX deployment.
+1. After filling in the proper values, select **Next: Networking** to see the **Networking** screen. Specify the VNet and subnet that are associated with the NGINXaaS deployment.
- :::image type="content" source="media/nginx-create/nginx-networking.png" alt-text="Screenshot of the networking pane in the NGINX create experience.":::
+ :::image type="content" source="media/nginx-create/nginx-networking.png" alt-text="Screenshot of the networking pane in the NGINXaaS create experience.":::
-1. Select the checkbox **I allow NGINX service provider to access the above virtual network for deployment** to indicate that you acknowledge access to your Tenant to ensure VNet and NIC association.
+1. Select the checkbox **I allow NGINXaaS service provider to access the above virtual network for deployment** to indicate that you acknowledge access to your Tenant to ensure VNet and NIC association.
1. Select either public or private endpoints for the IP address selection.

### Tags
-You can specify custom tags for the new NGINX resource in Azure by adding custom key-value pairs.
+You can specify custom tags for the new NGINXaaS resource in Azure by adding custom key-value pairs.
1. Select Tags.
- :::image type="content" source="media/nginx-create/nginx-custom-tags.png" alt-text="Screenshot showing the tags pane in the NGINX create experience.":::
+ :::image type="content" source="media/nginx-create/nginx-custom-tags.png" alt-text="Screenshot showing the tags pane in the NGINXaaS create experience.":::
| Property | Description |
|---|---|
- |Name | Name of the tag corresponding to the Azure NGINX resource. |
- | Value | Value of the tag corresponding to the Azure NGINX resource. |
+ |**Name** | Name of the tag corresponding to the Azure NGINXaaS resource. |
+ | **Value** | Value of the tag corresponding to the Azure NGINXaaS resource. |
### Review and create
-1. Select the **Next: Review + Create** to navigate to the final step for resource creation. When you get to the **Review + Create** page, all validations are run. At this point, review all the selections made in the Basics, Networking, and optionally Tags panes. You can also review the NGINX and Azure Marketplace terms and conditions.
+1. Select **Next: Review + Create** to navigate to the final step for resource creation. When you get to the **Review + Create** page, all validations are run. At this point, review all the selections made in the Basics, Networking, and optionally Tags panes. You can also review the NGINXaaS and Azure Marketplace terms and conditions.
:::image type="content" source="media/nginx-create/nginx-review-and-create.png" alt-text="screenshot of review and create nginx resource":::
-1. Once you've reviewed all the information select **Create**. Azure now deploys the NGINX for Azure resource.
+1. Once you've reviewed all the information, select **Create**. Azure now deploys the NGINXaaS resource.
- :::image type="content" source="media/nginx-create/nginx-deploy.png" alt-text="Screenshot showing NGINX deployment in process.":::
+ :::image type="content" source="media/nginx-create/nginx-deploy.png" alt-text="Screenshot showing NGINXaaS deployment in process.":::
## Deployment completed
-1. Once the create process is completed, select **Go to Resource** to navigate to the specific NGINX resource.
+1. Once the create process is completed, select **Go to Resource** to navigate to the specific NGINXaaS resource.
- :::image type="content" source="media/nginx-create/nginx-overview-pane.png" alt-text="Screenshot of a completed NGINX deployment.":::
+ :::image type="content" source="media/nginx-create/nginx-overview-pane.png" alt-text="Screenshot of a completed NGINXaaS deployment.":::
1. Select **Overview** in the Resource menu to see information on the deployed resources.
- :::image type="content" source="media/nginx-create/nginx-overview-pane.png" alt-text="Screenshot of information on the NGINX resource overview.":::
+ :::image type="content" source="media/nginx-create/nginx-overview-pane.png" alt-text="Screenshot of information on the NGINXaaS resource overview.":::
## Next steps

-- [Manage the NGINX resource](nginx-manage.md)
+- [Manage the NGINXaaS resource](nginx-manage.md)
partner-solutions Nginx Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/nginx/nginx-manage.md
Title: Manage an NGINX resource through the Azure portal
-description: This article describes management functions for NGINX on the Azure portal.
--
+ Title: Manage an NGINXaaS resource through the Azure portal
+description: This article describes management functions for NGINXaaS on the Azure portal.
+ Previously updated : 05/12/2022++ Last updated : 01/11/2023+
-# Manage your NGINX for Azure (preview) integration through the portal
+# Manage your NGINXaaS integration through the portal
-Once your NGINX resource is created in the Azure portal, you might need to get information about it or change it. Here's list of ways to manage your NGINX resource.
+Once your NGINXaaS resource is created in the Azure portal, you might need to get information about it or change it. Here's a list of ways to manage your NGINXaaS resource.
- [Configure managed identity](#configure-managed-identity)
- [Changing the configuration](#changing-the-configuration)
- [Adding certificates](#adding-certificates)
- [Send metrics to monitoring](#send-metrics-to-monitoring)
-- [Delete an NGINX deployment](#delete-an-nginx-deployment)
+- [Delete an NGINXaaS deployment](#delete-an-nginxaas-deployment)
- [GitHub integration](#github-integration)

## Configure managed identity

Add a new User Assigned Managed Identity.
-1. From the Resource menu, select your NGINX deployment.
+1. From the Resource menu, select your NGINXaaS deployment.
-1. From **Settings** on the left, select **Identity**.
+1. From **Settings** in the Resource menu, select **Identity**.
- :::image type="content" source="media/nginx-manage/nginx-identity.png" alt-text="Screenshot showing how to add a managed identity to NGINX resource.":::
+ :::image type="content" source="media/nginx-manage/nginx-identity.png" alt-text="Screenshot showing how to add a managed identity to NGINXaaS resource.":::
1. To add a User Assigned identity, select **Add** in the working pane. You see a new pane for adding **User assigned managed identities** on the right that are part of the subscription. Select an identity and select **Add**.
Add a new User Assigned Managed Identity.
## Changing the configuration
-1. From the Resource menu, select your NGINX deployment.
+1. From the Resource menu, select your NGINXaaS deployment.
-1. Select **NGINX configuration** on the left.
+1. Select **NGINXaaS configuration** in the Resource menu.
- :::image type="content" source="media/nginx-manage/nginx-configuration.png" alt-text="Screenshot resources for NGINX configuration settings.":::
+ :::image type="content" source="media/nginx-manage/nginx-configuration.png" alt-text="Screenshot resources for NGINXaaS configuration settings.":::
-1. To upload an existing **NGINX config package**, type the appropriate `.conf file` in **File path** in the working paned and select the **+** button and for config package.
+1. To upload an existing **NGINXaaS config package**, type the appropriate `.conf` file in **File path** in the working pane, and then select the **+** button for the config package.
:::image type="content" source="media/nginx-manage/nginx-config-path.png" alt-text="Screenshot of config (. C O N F) file for uploading.":::
Add a new User Assigned Managed Identity.
1. To edit the config file within the Editor, select the pencil icon. When you're done editing, select **Submit**.
- :::image type="content" source="media/nginx-manage/nginx-config-editor.png" alt-text="Screenshot of editor for config file with Intelisense displayed.":::
+ :::image type="content" source="media/nginx-manage/nginx-config-editor.png" alt-text="Screenshot of editor for config file with Intellisense displayed.":::
## Adding certificates

You can add a certificate by uploading it to Azure Key vault, and then associating the certificate with your deployment.
-1. From the Resource menu, select your NGINX deployment.
+1. From the Resource menu, select your NGINXaaS deployment.
-1. Select **NGINX certificates** in **Settings** on the left.
+1. Select **NGINXaaS certificates** in **Settings** in the Resource menu.
- :::image type="content" source="media/nginx-manage/nginx-certificates.png" alt-text="Screenshot of NGINX certificate uploading.":::
+ :::image type="content" source="media/nginx-manage/nginx-certificates.png" alt-text="Screenshot of NGINXaaS certificate uploading.":::
-1. Select **Add certificate**. You see an **Add certificate** pane on the right. Add the appropriate information
+1. Select **Add certificate**. An **Add certificate** pane appears in the working pane. Add the appropriate information.
:::image type="content" source="media/nginx-manage/nginx-add-certificate.png" alt-text="Screenshot of the add certificate pane.":::
You can add a certificate by uploading it to Azure Key vault, and then associati
## Send metrics to monitoring
-1. From the Resource menu, select your NGINX deployment.
+1. From the Resource menu, select your NGINXaaS deployment.
-1. Select **NGINX Monitoring** under the **Settings** on the left.
+1. Select **NGINXaaS Monitoring** under the **Settings** in the Resource menu.
- :::image type="content" source="media/nginx-manage/nginx-monitoring.png" alt-text="Screenshot of NGINX monitoring in Azure metrics.":::
+ :::image type="content" source="media/nginx-manage/nginx-monitoring.png" alt-text="Screenshot of NGINXaaS monitoring in Azure metrics.":::
1. Select **Send metrics to Azure Monitor** to enable metrics and select **Save**.

 :::image type="content" source="media/nginx-manage/nginx-send-to-monitor.png" alt-text="screenshot of nginx sent to monitoring":::
-## Delete an NGINX deployment
+## Delete an NGINXaaS deployment
-To delete a deployment of NGINX for Azure (preview):
+To delete a deployment of NGINXaaS:
-1. From the Resource menu, select your NGINX deployment.
+1. From the Resource menu, select your NGINXaaS deployment.
-1. Select **Overview** on the left.
+1. Select **Overview** in the Resource menu.
1. Select **Delete**.
- :::image type="content" source="media/nginx-manage/nginx-delete-deployment.png" alt-text="Screenshot showing how delete an NGINX resource.":::
+ :::image type="content" source="media/nginx-manage/nginx-delete-deployment.png" alt-text="Screenshot showing how to delete an NGINXaaS resource.":::
-1. Confirm that you want to delete the NGINX resource.
+1. Confirm that you want to delete the NGINXaaS resource.
- :::image type="content" source="media/nginx-manage/nginx-confirm-delete.png" alt-text="Screenshot showing the final confirmation of delete for NGINX resource.":::
+ :::image type="content" source="media/nginx-manage/nginx-confirm-delete.png" alt-text="Screenshot showing the final confirmation of delete for NGINXaaS resource.":::
1. Select **Delete**.
-After the account is deleted, logs are no longer sent to NGINX, and all billing stops for NGINX through Azure Marketplace.
+After the account is deleted, logs are no longer sent to NGINXaaS, and all billing stops for NGINXaaS through Azure Marketplace.
> [!NOTE]
> The delete button on the main account is only activated if all the sub-accounts mapped to the main account are already deleted. Refer to the section for deleting sub-accounts here.
After the account is deleted, logs are no longer sent to NGINX, and all billing
Enable CI/CD deployments via GitHub Actions integrations.
-<!-- <<Add screenshot for GitHub integration>> -->
-
## Next steps
-For help with troubleshooting, see [Troubleshooting NGINX integration with Azure](nginx-troubleshoot.md).
+For help with troubleshooting, see [Troubleshooting NGINXaaS integration with Azure](nginx-troubleshoot.md).
partner-solutions Nginx Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/nginx/nginx-overview.md
Title: What is NGINX for Azure
-description: Learn about using the NGINX Cloud-Native Observability Platform in the Azure Marketplace.
+ Title: What is NGINXaaS
+description: Learn about using the NGINXaaS Cloud-Native Observability Platform in the Azure Marketplace.
++ - Previously updated : 05/12/2022 Last updated : 01/11/2023+
-# What is NGINX for Azure (preview)?
+# What is NGINXaaS?
-In this article you learn how to enable deeper integration of the **NGINX** SaaS service with Azure.
+In this article, you learn how to enable deeper integration of the NGINXaaS service with Azure.
-NGINX for Azure (preview) delivers secure and high performance applications using familiar and trusted load balancing solutions. Use NGINX for Azure (preview) as a reverse proxy within your Azure environment.
+NGINXaaS delivers secure and high performance applications using familiar and trusted load balancing solutions. Use NGINXaaS as a reverse proxy within your Azure environment.
-The NGINX for Azure (preview) offering in the Azure Marketplace allows you to manage NGINX in the Azure portal as an integrated service. You can implement NGINX as a monitoring solution for your cloud workloads through a streamlined workflow.
+The NGINXaaS offering in the Azure Marketplace allows you to manage NGINX in the Azure portal as an integrated service. You can implement NGINXaaS as a monitoring solution for your cloud workloads through a streamlined workflow.
-You can set up the NGINX resources through a resource provider named Nginx.NginxPlus. You can create and manage NGINX resources through the Azure portal. NGINX owns and runs the software as a service (SaaS) application including the NGINX accounts created.
+You can set up the NGINXaaS resources through a resource provider named Nginx.NginxPlus. You can create and manage NGINXaaS resources through the Azure portal. NGINX owns and runs the software as a service (SaaS) application including the NGINX accounts created.
-Here are the key capabilities provided by the NGINX for Azure (preview) integration:
+Here are the key capabilities provided by the NGINXaaS integration:
-- **Seamless onboarding** of NGINX SaaS software as an integrated service on Azure-- **Unified billing** of NGINX SaaS through Azure Monthly bill -- **Single-Sign on to NGINX.** - No separate sign-up needed from NGINX portal-- **Lift and Shift config files** - Ability to use existing Configuration (.conf) files for SaaS deployment
+- **Seamless onboarding** of NGINXaaS software as an integrated service on Azure.
+- **Unified billing** of NGINXaaS through Azure monthly billing.
+- **Single-Sign on to NGINXaaS** - No separate sign-up needed from NGINX portal.
+- **Lift and Shift config files** - Ability to use existing Configuration (.conf) files for NGINXaaS deployment.
-## Pre-requisites
+## Pre-requisites for NGINXaaS
### Subscription owner
-The NGINX for Azure (preview) integration can only be set up by users who have Owner access on the Azure subscription. Ensure you have the appropriate Owner access before starting to set up this integration.
+The NGINXaaS integration can only be set up by users who have Owner access on the Azure subscription. Ensure you have the appropriate Owner access before starting to set up this integration.
-## Find NGINX for Azure (preview) in the Azure Marketplace
+## Find NGINXaaS in the Azure Marketplace
1. Navigate to the Azure Marketplace page.
-1. Search for _NGINX for Azure_ listed.
+1. Search for _NGINXaaS_.
-1. In the plan overview pane, select the **Setup and Subscribe**. The **Create new NGINX account** window opens.
+1. In the plan overview pane, select **Subscribe**. The **Create NGINXaaS** form opens in the working pane.
## Next steps
-To create an instance of NGINX, see [QuickStart: Get started with NGINX](nginx-create.md).
+To create an instance of NGINXaaS, see [QuickStart: Get started with NGINXaaS](nginx-create.md).
partner-solutions Nginx Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/nginx/nginx-troubleshoot.md
Title: Troubleshooting your NGINX for Azure deployment
-description: This article provides information about getting support and troubleshooting an NGINX for Azure integration.
--
+ Title: Troubleshooting your NGINXaaS deployment
+description: This article provides information about getting support and troubleshooting an NGINXaaS integration.
+ Previously updated : 05/12/2022+ Last updated : 01/11/2023+++
-# Troubleshooting NGINX integration with Azure
+# Troubleshooting NGINXaaS integration with Azure
-You can get support for your NGINX deployment through a **New Support request**. The procedure for creating the request is here. In addition, we have included other troubleshooting for problems you might experience in creating and using an NGINX deployment.
+You can get support for your NGINXaaS deployment through a **New Support request**. The procedure for creating the request is described here. In addition, we've included troubleshooting guidance for problems you might experience in creating and using an NGINXaaS deployment.
## Getting support
-1. To contact support about an Azure NGINX integration, open your NGINX Deployment in the portal.
+1. To contact support about an NGINXaaS resource, select the resource in the Resource menu.
1. Select **New Support request** in the Resource menu.
1. Select **Raise a support ticket** and fill out the details.
- :::image type="content" source="media/nginx-troubleshoot/nginx-support-request.png" alt-text="Screenshot of an new NGINX support ticket.":::
+ :::image type="content" source="media/nginx-troubleshoot/nginx-support-request.png" alt-text="Screenshot of a new NGINXaaS support ticket.":::
## Troubleshooting
-### Unable to create an NGINX resource as not a subscription owner
+### Unable to create an NGINXaaS resource as not a subscription owner
-The NGINX for Azure integration can only be set up by users who have Owner access on the Azure subscription. Ensure you have the appropriate Owner access before starting to set up this integration.
+The NGINXaaS integration can only be set up by users who have Owner access on the Azure subscription. Ensure you have the appropriate Owner access before starting to set up this integration.
## Next steps
-Learn about [managing your instance](nginx-manage.md) of NGINX.
+Learn about [managing your instance](nginx-manage.md) of NGINXaaS.
partner-solutions Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/partners.md
Title: Partner services
description: Learn about services offered by partners on Azure. - Previously updated : 09/24/2022 + Last updated : 01/11/2023+
Azure Native ISV Services are available through the Marketplace.
|[Datadog](datadog/overview.md) | Monitoring and analytics platform for large scale applications. |
|[Elastic](elastic/overview.md) | Build modern search experiences and maximize visibility into health, performance, and security of your infrastructure, applications, and data. |
|[Logz.io](logzio/overview.md) | Observability platform that centralizes log, metric, and tracing analytics. |
-|[Dynatrace for Azure](dynatrace/dynatrace-overview.md) | Provides deep cloud observability, advanced AIOps, and continuous runtime application security. |
+|[Azure Native Dynatrace Service](dynatrace/dynatrace-overview.md) | Provides deep cloud observability, advanced AIOps, and continuous runtime application security. |
## Data and storage
Azure Native ISV Services are available through the Marketplace.
|Partner |Description |
|---|---|
-|[NGINX for Azure (preview)](nginx/nginx-overview.md) | Use NGINX for Azure (preview) as a reverse proxy within your Azure environment. |
+|[NGINXaaS](nginx/nginx-overview.md) | Use NGINXaaS as a reverse proxy within your Azure environment. |
peering-service Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/peering-service/azure-portal.md
Title: Create Azure Peering Service Connection - Azure portal
-description: Learn how to create Azure Peering Service by using the Azure portal
+ Title: Create Azure Peering Service connection - Azure portal
+description: Learn how to create, configure, and delete an Azure Peering Service connection using the Azure portal
Previously updated : 04/07/2021 Last updated : 01/12/2023 +
-# Create Peering Service Connection using the Azure portal
+# Create Peering Service connection using the Azure portal
+
+> [!div class="op_single_selector"]
+> * [Portal](azure-portal.md)
+> * [PowerShell](powershell.md)
+> * [Azure CLI](cli.md)
Azure Peering Service is a networking service that enhances customer connectivity to Microsoft public cloud services such as Microsoft 365, Dynamics 365, software as a service (SaaS) services, Azure, or any Microsoft services accessible via the public internet. In this article, you'll learn how to create a Peering Service connection by using the Azure portal.
-If you don't have an Azure subscription, create an [account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) now.
-
->
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
## Prerequisites
-You must have the following:
-
-### Azure account
-
-You must have a valid and active Microsoft Azure account. This account is required to set up the Peering Service connection. Peering Service connection is a resource within Azure subscriptions.
-
-### Connectivity provider
+- An Azure subscription
-You can work with any [Azure peering service provider](./location-partners.md) to obtain Peering Service to connect your network optimally with the Microsoft network.
+- A connectivity provider. For more information, see [Azure peering service partners](./location-partners.md).
+## Sign in to Azure
--
-## Sign in to the Azure portal
-
-From a browser, go to the Azure portal and sign in with your Azure account.
+Sign in to the [Azure portal](https://portal.azure.com).
## Create a Peering Service connection
-1. To create a Peering Service connection, selectΓÇ»**Create a resource** >ΓÇ»**Peering Service**.
+1. In the search box at the top of the portal, enter *Peering Service*. Select **Peering Services** in the search results.
- ![Create Peering Service](./media/peering-service-portal/peering-servicecreate.png)
+1. Select **+ Create**.
-1. Enter the following details on the **Basics** tab on the **Create a peering service connection** page.
+1. In **Create a peering service connection**, enter or select the following information in the **Basics** tab:
-
-1. Select the subscription and the resource group associated with the subscription.
+ | Setting | Value |
+ | - | -- |
+ | **Project Details** | |
+ | Subscription | Select your Azure subscription. |
+ | Resource Group | Select **Create new**. </br> Enter *myResourceGroup*. </br> Select **OK**. |
+ | **Instance details** | |
+ | Name | Enter *myPeeringService*. |
- ![Create Peering basic tab](./media/peering-service-portal/peering-servicebasics.png)
+ :::image type="content" source="./media/azure-portal/peering-service-basics.png" alt-text="Screenshot of the Basics tab of Create a peering service connection in Azure portal.":::
-1. Enter a **Name** to which the Peering Service instance should be registered.
+ > [!NOTE]
+ > Once a Peering Service resource is created under a certain subscription and resource group, it cannot be moved to another resource group or subscription.
-1. Now, select the **Next: Configuration** button at the bottom of the page. The **Configuration** page appears.
+1. Select **Next: Configuration**.
## Configure the Peering Service connection
-1. On the **Configuration** page, select the customer network location to which the Peering Service must be enabled by selecting the same from the **Peering service location** drop-down list.
+1. On the **Configuration** page, select your **Country** and **State/Province** where the Peering Service must be enabled.
-1. Select the service provider from whom the Peering Service must be obtained by selecting a provider name from the **Peering service provider** drop-down list.
+1. Select the **Provider** that you're using to enable the Peering Service.
-1. Select the **provider's primary peering location** closest to the customer network location. This is the peering service location between Microsoft and Partner.
+1. Select the **provider primary peering location** closest to your network location. This is the peering service location between Microsoft and the Partner.
-1. Select the **provider's backup peering location** as the next closest to the customer network location. peering services will be active via backup location only in the event of failure of primary peering service location for disaster recovery. If "None" is selected, internet will be the default failover route in the event of primary peering service location failure.
+1. Select the **provider backup peering location** as the next closest to your network location. A peering service will be active via the backup peering location only in the event of a failure of the primary peering service location, for disaster recovery. If **None** is selected, the internet will be the default failover route in the event of a primary peering service location failure.
-
-1. Select **Create new prefix** at the bottom of the **Prefixes** section, and text boxes appear. Now, enter the name of the prefix resource and the prefixes that are associated with the service provider.
+1. Under the **Prefixes** section, select **Create new prefix**. In **Name**, enter a name for the prefix resource. Enter the prefixes that are associated with the service provider in **Prefix**. In **Prefix key**, enter the prefix key that was given to you by your provider (ISP or IXP). This key allows Microsoft to validate the prefix and provider who have allocated your IP prefix.
-1. Select **Prefix Key** and add the Prefix Key that has been given to you by your provider (ISP or IXP). This key allows MS to validate the prefix and provider who have allocated your IP prefix.
- > ![Screenshot shows the Configuration tab of the Create a peering service connection page where you can enter the Prefix key.](./media/peering-service-portal/peering-serviceconfiguration.png)
+ :::image type="content" source="./media/azure-portal/peering-service-configuration.png" alt-text="Screenshot of the Configuration tab of Create a peering service connection in Azure portal.":::
-1. Select the **Review + create** button at the lower left of the page. The **Review + create** page appears, and Azure validates your configuration.
+1. Select **Review + create**.
-1. When you see the **Validation passed** message as shown, select **Create**.
-
- > ![Screenshot shows the Review + create tab of the Create a peering service connection page.](./media/peering-service-portal/peering-service-prefix.png)
+1. Review the settings, and then select **Create**.
+ :::image type="content" source="./media/azure-portal/peering-service-create.png" alt-text="Screenshot of the Review + create tab of Create a peering service connection in Azure portal.":::
-1. After you create a Peering Service connection, additional validation is performed on the included prefixes. You can review the validation status under the **Prefixes** section of the resource name. If the validation fails, one of the following error messages is displayed:
+1. After you create a Peering Service connection, additional validation is performed on the included prefixes. You can review the validation status under the **Prefixes** section of your Peering Service. If the validation fails, one of the following error messages is displayed:
- Invalid Peering Service prefix, the prefix should be valid format, only IPv4 prefix is supported currently.
- - Prefix was not received from Peering Service provider, contact Peering Service provider.
- - Prefix announcement does not have a valid BGP community, contact Peering Service provider.
+ - Prefix wasn't received from Peering Service provider, contact Peering Service provider.
+ - Prefix announcement doesn't have a valid BGP community, contact Peering Service provider.
- Prefix received with longer AS path(>3), contact Peering Service provider.
- Prefix received with private AS in the path, contact Peering Service provider.

### Add or remove a prefix
-Select **Add prefixes** on the **Prefixes** page to add prefixes.
+1. In the search box at the top of the portal, enter *Peering Service*. Select **Peering Services** in the search results.
-Select the ellipsis (...) next to the listed prefix, and select the **Delete** option.
+1. Select your Peering Service that you want to add or remove a prefix to or from it.
-### Delete a Peering Service connection
+1. Select **Prefixes**, and then select **Add prefix** to add prefixes.
-On the **All Resources** page, select the check box on the Peering Service and select the **Delete** option at the top of the page.
+1. Select the ellipsis (**...**) next to the listed prefix, and select **Delete**.
> [!NOTE]
> You can't modify an existing prefix.
->
+
+### Delete a Peering Service connection
+
+1. In the search box at the top of the portal, enter *Peering Service*. Select **Peering Services** in the search results.
+
+1. Select the checkbox next to the Peering Service that you want to delete, and then select **Delete** at the top of the page.
+
+1. Enter *yes* in **Confirm delete**, and then select **Delete**.
+
+ :::image type="content" source="./media/azure-portal/peering-service-delete.png" alt-text="Screenshot of deleting a Peering Service in Azure portal.":::
## Next steps

-- To learn about Peering Service connection, see [Peering Service connection](connection.md).
-- To learn about Peering Service connection telemetry, see [Peering Service connection telemetry](connection-telemetry.md).
-- To measure telemetry, see [Measure connection telemetry](measure-connection-telemetry.md).
-- To create the peering service connection by using Azure PowerShell, see [Create a Peering Service connection - Azure PowerShell](powershell.md).
-- To create the connection by using the Azure CLI, see [Create a Peering Service connection - Azure CLI](cli.md).
+- To learn more about Peering Service connection, see [Peering Service connection](connection.md).
+- To learn more about Peering Service connection telemetry, see [Peering Service connection telemetry](connection-telemetry.md).
+- To measure Peering Service connection telemetry, see [Measure connection telemetry](measure-connection-telemetry.md).
postgresql Connect Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/connect-java.md
az postgres flexible-server db create \
Next, create a non-admin user and grant all permissions to the database.

> [!NOTE]
-> You can read more detailed information about creating PostgreSQL users in [Create users in Azure Database for PostgreSQL](/azure/PostgreSQL/flexible-server/how-to-create-users).
+> You can read more detailed information about creating PostgreSQL users in [Create users in Azure Database for PostgreSQL](./how-to-create-users.md).
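As a rough illustration of that step, creating such a user with psql might look like the following sketch. The server, database, user name, and password are all placeholders; the article's own tabs below show the supported approaches:

```console
# You'll be prompted for the admin password; replace every <placeholder>
psql "host=<server-name>.postgres.database.azure.com port=5432 dbname=<database-name> user=<admin-user> sslmode=require" <<EOF
CREATE USER <app_user> WITH PASSWORD '<strong-password>';
GRANT ALL PRIVILEGES ON DATABASE <database-name> TO <app_user>;
EOF
```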
#### [Passwordless (Recommended)](#tab/passwordless)
az group delete \
## Next steps

> [!div class="nextstepaction"]
-> [Migrate your database using Export and Import](../howto-migrate-using-export-and-import.md)
+> [Migrate your database using Export and Import](../howto-migrate-using-export-and-import.md)
postgresql How To Read Replicas Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-read-replicas-portal.md
In this article, you learn how to create and manage read replicas in Azure Datab
## Prerequisites
-An [Azure Database for PostgreSQL server](/azure/postgresql/flexible-server/quickstart-create-server-portal) to be the primary server.
+An [Azure Database for PostgreSQL server](./quickstart-create-server-portal.md) to be the primary server.
> [!NOTE]
> When deploying read replicas for persistently heavy, write-intensive primary workloads, the replication lag could continue to grow and may never catch up with the primary. This may also increase storage usage at the primary, as the WAL files aren't deleted until they're received at the replica.
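Although this article focuses on the Azure portal, a read replica can also be created with a single Azure CLI call. A minimal sketch, using a hypothetical primary server named `mydemoserver`:

```azurecli
# Create a read replica from an existing flexible server.
az postgres flexible-server replica create \
    --resource-group MyResourceGroup \
    --replica-name mydemoserver-replica \
    --source-server mydemoserver

# List all replicas of the primary server.
az postgres flexible-server replica list \
    --resource-group MyResourceGroup \
    --name mydemoserver
```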
The **Read Replica Lag** metric shows the time since the last replayed transacti
* Learn more about [read replicas in Azure Database for PostgreSQL](concepts-read-replicas.md).
-[//]: # (* Learn how to [create and manage read replicas in the Azure CLI and REST API]&#40;how-to-read-replicas-cli.md&#41;.)
+[//]: # (* Learn how to [create and manage read replicas in the Azure CLI and REST API]&#40;how-to-read-replicas-cli.md&#41;.)
postgresql Concepts Certificate Rotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-certificate-rotation.md
openssl s_client -showcerts -connect <your-postgresql-server-name>:443
```

### 14. What if I have further questions?
-If you have questions, get answers from community experts in [Microsoft Q&A](mailto:AzureDatabaseforPostgreSQL@service.microsoft.com). If you have a support plan and you need technical help please create a [support request](/azure/azure-portal/supportability/how-to-create-azure-support-request):
+If you have questions, get answers from community experts in [Microsoft Q&A](mailto:AzureDatabaseforPostgreSQL@service.microsoft.com). If you have a support plan and you need technical help please create a [support request](../../azure-portal/supportability/how-to-create-azure-support-request.md):
* For *Issue type*, select *Technical*.
* For *Subscription*, select your *subscription*.
* For *Service*, select *My Services*, then select *Azure Database for PostgreSQL – Single Server*.
* For *Problem type*, select *Security*.
-* For *Problem subtype*, select *Azure Encryption and Infrastructure Double Encryption*
-
+* For *Problem subtype*, select *Azure Encryption and Infrastructure Double Encryption*
private-5g-core Azure Private 5G Core Release Notes 2209 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/azure-private-5g-core-release-notes-2209.md
This article applies to the Azure Private 5G Core 2209 release (PMN-4-17-2). Thi
## What's new

-- **Updated template for Log Analytics** - There is a new version of the Log Analytics Dashboard Quickstart template. This is required to view metrics on Packet Core versions 4.17 and above. To continue using your Log Analytics Dashboard, you must redeploy it with the new template. See [Create an overview Log Analytics dashboard using an ARM template](/azure/private-5g-core/create-overview-dashboard).
+- **Updated template for Log Analytics** - There is a new version of the Log Analytics Dashboard Quickstart template. This is required to view metrics on Packet Core versions 4.17 and above. To continue using your Log Analytics Dashboard, you must redeploy it with the new template. See [Create an overview Log Analytics dashboard using an ARM template](./create-overview-dashboard.md).
## Issues fixed in the 2209 release
The following table provides a summary of known issues carried over from the pre
## Next steps

- [Upgrade the packet core instance in a site - Azure portal](upgrade-packet-core-azure-portal.md)
-- [Upgrade the packet core instance in a site - ARM template](upgrade-packet-core-arm-template.md)
+- [Upgrade the packet core instance in a site - ARM template](upgrade-packet-core-arm-template.md)
private-5g-core Azure Stack Edge Packet Core Compatibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/azure-stack-edge-packet-core-compatibility.md
The following table provides information on which versions of the ASE device are
## Next steps

-- Refer to [Update your Azure Stack Edge Pro GPU](/azure/databox-online/azure-stack-edge-gpu-install-update) for the latest available version of ASE and for more information on how to upgrade your device.
+- Refer to [Update your Azure Stack Edge Pro GPU](../databox-online/azure-stack-edge-gpu-install-update.md) for the latest available version of ASE and for more information on how to upgrade your device.
- Refer to the packet core release notes for more information on the packet core version you're using or plan to use.
private-5g-core Create A Site https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/create-a-site.md
In this step, you'll create the mobile network site resource representing the ph
1. In the **Packet core** section, set the fields as follows:
- >[!IMPORTANT]
- > If you're configuring your packet core to have more than one attached data network, leave the **Custom location** field blank. You'll configure this field at the end of this procedure.
- - Use the information you collected in [Collect packet core configuration values](collect-required-information-for-a-site.md#collect-packet-core-configuration-values) to fill out the **Technology type**, **Azure Stack Edge device**, and **Custom location** fields. - Select the recommended packet core version in the **Version** field.
In this step, you'll create the mobile network site resource representing the ph
:::image type="content" source="media/create-a-site/site-related-resources.png" alt-text="Screenshot of the Azure portal showing a resource group containing a site and its related resources." lightbox="media/create-a-site/site-related-resources.png":::
-15. If you have not yet set the **Custom location** field, set it now. From the resource group overview, select the **Packet Core Control Plane** resource and select **Configure a custom location**. Use the information you collected in [Collect packet core configuration values](collect-required-information-for-a-site.md#collect-packet-core-configuration-values) to fill out the **Custom location** field and select **Modify**.
-
## Next steps

If you haven't already done so, you should now design the policy control configuration for your private mobile network. This allows you to customize how your packet core instances apply quality of service (QoS) characteristics to traffic. You can also block or limit certain flows.
private-5g-core Modify Packet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/modify-packet-core.md
Each Azure Private 5G Core Preview site contains a packet core instance, which is a cloud-native implementation of the 3GPP standards-defined 5G Next Generation Core (5G NGC or 5GC). In this how-to guide, you'll learn how to modify a packet core instance using the Azure portal; this includes modifying the packet core's custom location, connected Azure Stack Edge device, and access network configuration. You'll also learn how to add and modify the data networks attached to the packet core instance.
->[!IMPORTANT]
-> If you have configured or will configure your packet core to have more than one attached data network, you'll need to modify it without a custom location to avoid issues with your data networks.
->
-> 1. Follow this how-to guide to modify your packet core instance with the following changes:
->
-> 1. In [Modify the packet core configuration](#modify-the-packet-core-configuration), make a note of the custom location value in the **Custom ARC location** field.
-> 1. Set the **Custom ARC location** field to **None**.
-> 1. In [Submit and verify changes](#submit-and-verify-changes), the packet core will be redeployed at an uninstalled state with the new configuration.
->
-> 2. Follow this how-to guide again to set the **Custom ARC location** field to the custom location value you noted down.
-
If you want to modify a packet core instance's local access configuration, follow [Modify the local access configuration in a site](modify-local-access-configuration.md).

## Prerequisites
If you want to modify a packet core instance's local access configuration, follo
- If you want to make changes to the attached data networks, refer to [Collect data network values](collect-required-information-for-a-site.md#collect-data-network-values) to collect the new values and make sure they're in the correct format. - Ensure you can sign in to the Azure portal using an account with access to the active subscription you used to create your private mobile network. This account must have the built-in Contributor or Owner role at the subscription scope.
+## Plan a maintenance window
+
+The following modifications will trigger a packet core reinstall, during which your service will be unavailable:
+
+- Attaching a new or existing data network to the packet core instance.
+- Detaching a data network from the packet core instance.
+- Changing the packet core instance's custom location.
+
+If you're making any of these changes, we recommend modifying your packet core instance during a maintenance window to minimize the impact on your service. The packet core reinstall will take approximately 45 minutes, but this time may vary between systems. You should allow up to two hours for the process to complete.
+
+If you're making a change that doesn't trigger a reinstall, you can skip the next step and move to [Select the packet core instance to modify](#select-the-packet-core-instance-to-modify).
+
+## Back up deployment information
+
+The following list contains the data that will be lost over a packet core reinstall. If you're making a change that triggers a reinstall, back up any information you'd like to preserve; after the reinstall, you can use this information to reconfigure your packet core instance.
+
+1. If you want to keep using the same credentials when signing in to [distributed tracing](distributed-tracing.md), save a copy of the current password to a secure location.
+1. If you want to keep using the same credentials when signing in to the [packet core dashboards](packet-core-dashboards.md), save a copy of the current password to a secure location.
+1. Any customizations made to the packet core dashboards won't be preserved through the reinstall. Refer to [Exporting a dashboard](https://grafana.com/docs/grafana/v6.1/reference/export_import/#exporting-a-dashboard) in the Grafana documentation to save a backed-up copy of your dashboards.
+1. Most UEs will automatically re-register and recreate any sessions after the reinstall completes. If you have any special devices that require manual operations to recover from a packet core outage, gather a list of these UEs and their recovery steps.
+
## Select the packet core instance to modify

In this step, you'll navigate to the **Packet Core Control Plane** resource representing your packet core instance.
In this step, you'll navigate to the **Packet Core Control Plane** resource repr
:::image type="content" source="media/modify-packet-core/modify-packet-core-configuration.png" alt-text="Screenshot of the Azure portal showing the Modify packet core option."::: 7. Choose the next step:
- - If you want to make changes to the packet core configuration, access network values, or remove the custom location go to [Modify the packet core configuration](#modify-the-packet-core-configuration).
+ - If you want to make changes to the packet core configuration or access network values, go to [Modify the packet core configuration](#modify-the-packet-core-configuration).
- If you want to configure a new or existing data network and attach it to the packet core instance, go to [Attach a data network](#attach-a-data-network). - If you want to make changes to a data network that's already attached to the packet core instance, go to [Modify attached data network configuration](#modify-attached-data-network-configuration).
To make changes to a data network attached to your packet core instance:
- If you made changes to the packet core configuration, check that the fields under **Connected ASE device**, **Custom ARC location** and **Access network** contain the updated information. - If you made changes to the attached data networks, check that the fields under **Data networks** contain the updated information.
+## Restore backed up deployment information
+
+If you made changes that triggered a packet core reinstall, reconfigure your deployment using the information you gathered in [Back up deployment information](#back-up-deployment-information).
+
+1. Follow [Access the distributed tracing web GUI](distributed-tracing.md#access-the-distributed-tracing-web-gui) to restore access to distributed tracing.
+1. Follow [Access the packet core dashboards](packet-core-dashboards.md#access-the-packet-core-dashboards) to restore access to your packet core dashboards.
+1. If you backed up any packet core dashboards, follow [Importing a dashboard](https://grafana.com/docs/grafana/v6.1/reference/export_import/#importing-a-dashboard) in the Grafana documentation to restore them.
+1. If you have UEs that require manual operations to recover from a packet core outage, follow their recovery steps.
+
## Next steps

Use Log Analytics or the packet core dashboards to confirm your packet core instance is operating normally after you modify it.
private-5g-core Reinstall Packet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/reinstall-packet-core.md
If you're experiencing issues with your deployment, reinstalling the packet core
## Prerequisites

- Ensure you can sign in to the Azure portal using an account with access to the active subscription you used to create your private mobile network. This account must have the built-in Contributor or Owner role at the subscription scope.
-- If your packet core instance is still handling requests from your UEs, we recommend performing the reinstall during a maintenance window to minimize the impact on your service.
+- If your packet core instance is still handling requests from your UEs, we recommend performing the reinstall during a maintenance window to minimize the impact on your service. The packet core reinstall will take approximately 45 minutes, but this time may vary between systems. You should allow up to two hours for the process to complete.
## View the packet core instance's installation status
Before reinstalling, follow this step to check the packet core instance's instal
## Back up deployment information
-The following list contains data that will get lost over a packet core reinstall. Back up any information you'd like to preserve; after the reinstall, you can use this information to reconfigure your packet core instance.
+The following list contains the data that will be lost over a packet core reinstall. Back up any information you'd like to preserve; after the reinstall, you can use this information to reconfigure your packet core instance.
1. If you want to keep using the same credentials when signing in to [distributed tracing](distributed-tracing.md), save a copy of the current password to a secure location. 1. If you want to keep using the same credentials when signing in to the [packet core dashboards](packet-core-dashboards.md), save a copy of the current password to a secure location.
private-5g-core Upgrade Packet Core Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/upgrade-packet-core-arm-template.md
The template modifies the version of an existing [**Microsoft.MobileNetwork/pack
We recommend upgrading your packet core instance during a maintenance window to minimize the impact of the upgrade on your service.
-When planning for your upgrade, make sure you're allowing sufficient time for an upgrade and a possible rollback in the event of any issues. In addition, consider the following points for pre- and post-upgrade steps you may need to plan for when scheduling your maintenance window:
+When planning for your upgrade, make sure you're allowing sufficient time for an upgrade and a possible rollback in the event of any issues. An upgrade and rollback of packet core can each take up to two hours to complete.
+
+In addition, consider the following points for pre- and post-upgrade steps you may need to plan for when scheduling your maintenance window:
- Refer to the packet core release notes for the version of packet core you're upgrading to and whether it's supported by the version your Azure Stack Edge (ASE) is currently running. - If your ASE version is incompatible with the packet core version you're upgrading to, you'll need to upgrade ASE first. Refer to [Update your Azure Stack Edge Pro GPU](../databox-online/azure-stack-edge-gpu-install-update.md) for the latest available version of ASE.
When planning for your upgrade, make sure you're allowing sufficient time for an
### Back up deployment information
-The following list contains data that will get lost over a packet core upgrade. Back up any information you'd like to preserve; after the upgrade, you can use this information to reconfigure your packet core instance.
+The following list contains the data that will be lost over a packet core upgrade. Back up any information you'd like to preserve; after the upgrade, you can use this information to reconfigure your packet core instance.
1. If you want to keep using the same credentials when signing in to [distributed tracing](distributed-tracing.md), save a copy of the current password to a secure location. 1. If you want to keep using the same credentials when signing in to the [packet core dashboards](packet-core-dashboards.md), save a copy of the current password to a secure location.
private-5g-core Upgrade Packet Core Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/upgrade-packet-core-azure-portal.md
To check which version your packet core instance is currently running, and wheth
## Plan for your upgrade
-We recommend upgrading your packet core instance during a maintenance window to minimize the impact of the upgrade on your service.
+The service will be unavailable during the upgrade period. We recommend upgrading your packet core instance during a maintenance window to minimize the impact of the upgrade on your service.
-When planning for your upgrade, make sure you're allowing sufficient time for an upgrade and a possible rollback in the event of any issues. In addition, consider the following points for pre- and post-upgrade steps you may need to plan for when scheduling your maintenance window:
+When planning for your upgrade, make sure you're allowing sufficient time for an upgrade and a possible rollback in the event of any issues. An upgrade and rollback of packet core can each take up to two hours to complete.
+
+In addition, consider the following points for pre- and post-upgrade steps you may need to plan for when scheduling your maintenance window:
- Refer to the packet core release notes for the version of packet core you're upgrading to and whether it's supported by the version your Azure Stack Edge (ASE) is currently running. - If your ASE version is incompatible with the packet core version you're upgrading to, you'll need to upgrade ASE first. Refer to [Update your Azure Stack Edge Pro GPU](../databox-online/azure-stack-edge-gpu-install-update.md) for the latest available version of ASE.
When planning for your upgrade, make sure you're allowing sufficient time for an
### Back up deployment information
-The following list contains data that will get lost over a packet core upgrade. Back up any information you'd like to preserve; after the upgrade, you can use this information to reconfigure your packet core instance.
+The following list contains the data that will be lost over a packet core upgrade. Back up any information you'd like to preserve; after the upgrade, you can use this information to reconfigure your packet core instance.
1. If you want to keep using the same credentials when signing in to [distributed tracing](distributed-tracing.md), save a copy of the current password to a secure location. 1. If you want to keep using the same credentials when signing in to the [packet core dashboards](packet-core-dashboards.md), save a copy of the current password to a secure location.
public-multi-access-edge-compute-mec Partner Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/public-multi-access-edge-compute-mec/partner-solutions.md
## List of Partner solutions that can be deployed in Azure public MEC
-The table in this article provides information on Partner solutions that can be deployed in Public MEC using ARM templates.
+The table in this article provides information on Partner solutions that can be deployed in Public MEC.
>
The table in this article provides information on Partner solutions that can be
| **VMware** | [SDWAN Edge](https://sase.vmware.com/products/component-network-edge)| [VMware SD-WAN - Virtual Edge](https://azuremarketplace.microsoft.com/marketplace/apps/vmware-inc.sol-42222-bbj?tab=Overview) | | | | |
-For details on how to use ARM templates to deploy a solution, and which variables in the ARM template to modify for each solution, please open a CSS ticket for Azure Public MEC product.
Currently, the solutions can be deployed at the following locations:
purview How To Workflow Asset Curation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-workflow-asset-curation.md
This guide will take you through the creation and management of approval workflo
1. Sign in to the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/) and select the Management center. You'll see three new icons in the table of contents.
- :::image type="content" source="./media/how-to-workflow-business-terms-approval/workflow-section.png" alt-text="Screenshot showing the management center left menu with the new workflow section highlighted.":::
+ :::image type="content" source="./media/how-to-workflow-asset-curation/workflow-section.png" alt-text="Screenshot showing the management center left menu with the new workflow section highlighted.":::
1. To create new workflows, select **Authoring** in the workflow section. This will take you to the workflow authoring experiences.
- :::image type="content" source="./media/how-to-workflow-business-terms-approval/workflow-authoring-experience.png" alt-text="Screenshot showing the authoring workflows page, showing a list of all workflows.":::
+ :::image type="content" source="./media/how-to-workflow-asset-curation/workflow-authoring-experience.png" alt-text="Screenshot showing the authoring workflows page, showing a list of all workflows.":::
>[!NOTE]
>If the authoring tab is greyed out, you don't have permission to author workflows. You'll need the [workflow admin role](catalog-permissions.md).
-1. To create a new workflow, select **+New** button.
+1. To create a new workflow, select the **+New** button.
- :::image type="content" source="./media/how-to-workflow-business-terms-approval/workflow-authoring-select-new.png" alt-text="Screenshot showing the authoring workflows page, with the plus sign New button highlighted.":::
+ :::image type="content" source="./media/how-to-workflow-asset-curation/workflow-authoring-select-new.png" alt-text="Screenshot showing the authoring workflows page, with the plus sign New button highlighted.":::
1. To create **Approval workflows for asset curation**, select **Data Catalog** and then select **Continue**.
- :::image type="content" source="./media/how-to-workflow-business-terms-approval/select-data-catalog.png" alt-text="Screenshot showing the new workflows menu, with Data Catalog selected.":::
+ :::image type="content" source="./media/how-to-workflow-asset-curation/select-data-catalog.png" alt-text="Screenshot showing the new workflows menu, with Data Catalog selected.":::
1. In the next screen, you'll see all the templates provided by Microsoft Purview to create a workflow. Select the template you want to use to start your authoring experience, and then select **Continue**. Each of these templates specifies the kind of action that will trigger the workflow. In the screenshot below, we've selected **Update asset attributes** to create an approval workflow for asset updates.
- :::image type="content" source="./media/how-to-workflow-asset-curation/update-asset-attributes-continue.png" alt-text="Screenshot showing the new data catalog workflow menu, showing template options, with the Continue button selected.":::
+ :::image type="content" source="./media/how-to-workflow-asset-curation/update-asset-attributes-continue.png" alt-text="Screenshot showing the new data catalog workflow menu, showing template options, with the Continue button selected." lightbox="./media/how-to-workflow-asset-curation/update-asset-attributes-continue.png":::
1. Next, enter a workflow name and optionally add a description. Then select **Continue**.
- :::image type="content" source="./media/how-to-workflow-business-terms-approval/name-and-continue.png" alt-text="Screenshot showing the new data catalog workflow menu with a name entered into the name textbox.":::
+ :::image type="content" source="./media/how-to-workflow-asset-curation/name-and-continue.png" alt-text="Screenshot showing the new data catalog workflow menu with a name entered into the name textbox.":::
1. You'll now be presented with a canvas where the selected template is loaded by default.
This guide will take you through the creation and management of approval workflo
1. Once you're done defining a workflow, you need to bind the workflow to a collection hierarchy path. The binding implies that this workflow is triggered only for update operations on data assets in that collection. A workflow can be bound to only one hierarchy path. To bind a workflow, or to apply a scope to a workflow, select **Apply workflow**. Select the scopes you want this workflow to be associated with, and select **OK**.
- :::image type="content" source="./media/how-to-workflow-asset-curation/select-apply-workflow.png" alt-text="Screenshot showing the new data catalog workflow menu with the Apply Workflow button highlighted at the top of the workspace.":::
+ :::image type="content" source="./media/how-to-workflow-asset-curation/select-apply-workflow.png" alt-text="Screenshot showing the new data catalog workflow menu with the Apply Workflow button highlighted at the top of the workspace." lightbox="./media/how-to-workflow-asset-curation/select-apply-workflow.png":::
- :::image type="content" source="./media/how-to-workflow-asset-curation/select-okay.png" alt-text="Screenshot showing the apply workflow window, showing a list of items that the workflow can be applied to. At the bottom of the window, the O K button is selected.":::
+ :::image type="content" source="./media/how-to-workflow-asset-curation/select-okay.png" alt-text="Screenshot showing the apply workflow window, showing a list of items that the workflow can be applied to. At the bottom of the window, the O K button is selected." lightbox="./media/how-to-workflow-asset-curation/select-okay.png":::
>[!NOTE]
- > - The Microsoft Purview workflow engine will always resolve to the closest workflow that the collection hierarchy path is associated with. In case a direct binding is not found, it will traverse up in the tree to find the workflow associated with the closest parent in the collection tree.
-
+ > The Microsoft Purview workflow engine will always resolve to the closest workflow that the collection hierarchy path is associated with. In case a direct binding is not found, it will traverse up in the tree to find the workflow associated with the closest parent in the collection tree.
1. By default, the workflow will be enabled. To disable it, toggle the **Enable** button in the top menu. 1. Finally, select **Save and close** to create the workflow.
- :::image type="content" source="./media/how-to-workflow-business-terms-approval/workflow-enabled.png" alt-text="Screenshot showing the workflow authoring page, showing the newly created workflow listed among all other workflows.":::
+ :::image type="content" source="./media/how-to-workflow-asset-curation/workflow-enabled.png" alt-text="Screenshot showing the workflow authoring page, showing the newly created workflow listed among all other workflows." lightbox="./media/how-to-workflow-asset-curation/workflow-enabled.png":::
## Edit an existing workflow

To modify an existing workflow, select the workflow and then select **Edit** in the top menu. You'll then be presented with the canvas containing the workflow definition. Modify the workflow and select **Save** to commit changes.

## Disable a workflow

To disable a workflow, select the workflow and then select **Disable** in the top menu. You can also disable the workflow by selecting **Edit** and changing the enable toggle in the workflow canvas.

## Delete a workflow

To delete a workflow, select the workflow and then select **Delete** in the top menu.

## Limitations for asset curation with approval workflow enabled
-* Lineage updates are directly stored in Purview data catalog without any approvals.
+- Lineage updates are stored directly in the Purview data catalog without any approvals.
## Next steps
reliability Reliability Energy Data Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-energy-data-services.md
+
+ Title: Resiliency in Microsoft Energy Data Services
+description: Find out about reliability in Microsoft Energy Data Services
+Last updated : 12/05/2022
+<!--#Customer intent: As a customer, I want to understand reliability support for Microsoft Energy Data Services so that I can respond to and/or avoid failures in order to minimize downtime and data loss. -->
+
+# What is reliability in Microsoft Energy Data Services?
+
+
+This article describes reliability support in Microsoft Energy Data Services, and covers intra-regional resiliency with [availability zones](#availability-zone-support). For a more detailed overview of reliability in Azure, see [Azure reliability](../reliability/overview.md).
+
+## Availability zone support
Azure availability zones are at least three physically separate groups of datacenters within each Azure region. Datacenters within each zone are equipped with independent power, cooling, and networking infrastructure. Availability zones are designed so that if one zone is affected, regional services, capacity, and high availability are supported by the remaining two zones. Failures can range from software and hardware failures to events such as earthquakes, floods, and fires. Tolerance to failures is achieved with redundancy and logical isolation of Azure services. For more detailed information on availability zones in Azure, see [Regions and availability zones](/azure/availability-zones/az-overview).
+
+Microsoft Energy Data Services Preview supports zone-redundant instances by default, and no setup is required by the customer.
+
+### Prerequisites
+
+The Microsoft Energy Data Services Preview supports availability zones in the following regions:
+
+| Americas | Europe | Middle East | Africa | Asia Pacific |
+| --- | --- | --- | --- | --- |
+| South Central US | North Europe | | | |
+| East US | West Europe | | | |
+
+### Zone down experience
During a zone-wide outage, no action is required during zone recovery. You may, however, experience brief degradation of performance until the service self-heals and rebalances underlying capacity to adjust to healthy zones. Requests to Microsoft Energy Data Services APIs that fail with 5XX errors during the outage may need to be retried.
+
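A simple client-side mitigation is to retry requests that fail with transient 5XX responses. The following is a minimal sketch using curl's built-in retry support; the endpoint path and `$TOKEN` variable are hypothetical placeholders, not service specifics:

```bash
# curl treats 408, 429, 500, 502, 503, and 504 as transient errors and
# retries them, waiting two seconds between attempts.
curl --retry 5 --retry-delay 2 \
  -H "Authorization: Bearer $TOKEN" \
  "https://<your-instance>.energy.azure.com/api/<path>"
```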
+## Next steps
+> [!div class="nextstepaction"]
+> [Resiliency in Azure](availability-zones-overview.md)
search Search Howto Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-aad.md
Last updated 1/05/2022
-# Authorize access to a search apps using Azure Active Directory
+# Authorize access to a search app using Azure Active Directory
> [!IMPORTANT] > Role-based access control for data plane operations, such as creating or querying an index, is currently in public preview and available under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). This functionality is only available in public cloud regions and may impact the latency of your operations while the functionality is in preview. For more information on preview limitations, see [RBAC preview limitations](search-security-rbac.md#preview-limitations).
search Search Manage Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-manage-rest.md
Previously updated : 06/08/2022
Last updated : 01/11/2023

# Manage your Azure Cognitive Search service with REST APIs
Last updated 06/08/2022
> * [.NET SDK](/dotnet/api/microsoft.azure.management.search) > * [Python](https://pypi.python.org/pypi/azure-mgmt-search/0.1.0)
-In this article, learn how to create and configure an Azure Cognitive Search service using the [Management REST APIs](/rest/api/searchmanagement/). Only the Management REST APIs are guaranteed to provide early access to [preview features](/rest/api/searchmanagement/management-api-versions#2021-04-01-preview). Set a preview API version to access preview features.
+In this article, learn how to create and configure an Azure Cognitive Search service using the [Management REST APIs](/rest/api/searchmanagement/). Only the Management REST APIs are guaranteed to provide early access to [preview features](/rest/api/searchmanagement/management-api-versions#2021-04-01-preview).
+
+The Management REST API is available in stable and preview versions. Be sure to set a preview API version if you're accessing preview features.
> [!div class="checklist"] > * [List search services](#list-search-services)
All of the Management REST APIs have examples. If a task isn't covered in this a
* [Postman](https://www.postman.com/downloads/) or another REST client that sends HTTP requests
-* Azure Active Directory (Azure AD) to obtain a bearer token for request authentication
+* [Azure CLI](/cli/azure/install-azure-cli) used to set up a security principal for the client
## Create a security principal
-Management REST API calls are authenticated through Azure Active Directory (Azure AD). You'll need a security principal for your client, along with permissions to create and configure a resource. This section explains how to create a security principal and assign a role.
+Management REST API calls are authenticated through Azure Active Directory (Azure AD). You'll need a security principal for your REST client, along with permissions to create and configure a resource. This section explains how to create a security principal and assign a role.
-The following steps are from ["How to call REST APIs with Postman"](/rest/api/azure/#how-to-call-azure-rest-apis-with-postman).
+> [!NOTE]
+> The following steps are borrowed from the [Azure REST APIs with Postman](https://blog.jongallant.com/2021/02/azure-rest-apis-postman-2021/) blog post.
-An easy way to generate the required client ID and password is using the **Try It** feature in the [Create a service principal](/cli/azure/create-an-azure-service-principal-azure-cli#1-create-a-service-principal) article.
+1. Open a command shell for Azure CLI. If you don't have Azure CLI installed, you can open [Create a service principal](/cli/azure/create-an-azure-service-principal-azure-cli#1-create-a-service-principal) and select **Try It**.
-1. In [Create a service principal](/cli/azure/create-an-azure-service-principal-azure-cli#1-create-a-service-principal), select **Try It**. Sign in to your Azure subscription.
+1. Sign in to your Azure subscription.
+
+ ```azurecli
+ az login
+ ```
1. First, get your subscription ID. In the console, enter the following command:
An easy way to generate the required client ID and password is using the **Try I
az account show --query id -o tsv
```
-1. Create a resource group for your security principal:
+1. Create a resource group for your security principal, specifying a location and name. This example uses the West US region.
```azurecli
- az group create -l 'westus2' -n 'MyResourceGroup'
+ az group create -l westus -n MyResourceGroup
```
-1. Paste in the following command. Replace the placeholder values with valid values: a descriptive security principal name, subscription ID, resource group name. Press Enter to run the command. Notice that the security principal has "owner" permissions, necessary for creating or updating an Azure resource.
+1. Create the service principal, replacing the placeholder values with valid values. You'll need a descriptive security principal name, your subscription ID, and the resource group name.
+
+ Notice that the security principal has "owner" permissions, necessary for creating or updating an Azure resource. If you're managing an existing search service, use "contributor" or "Search Service Contributor" instead.
```azurecli az ad sp create-for-rbac --name mySecurityPrincipalName \
An easy way to generate the required client ID and password is using the **Try I
--scopes /subscriptions/mySubscriptionID/resourceGroups/myResourceGroupName ```
- You'll use "appId", "password", and "tenantId" for the variables "clientId", "clientSecret", and "tenantId" in the next section.
+ A successful response includes "appId", "password", and "tenant". You'll use these values for the variables "clientId", "clientSecret", and "tenant" in the next section.
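If you'd rather not mint the token in Postman, you can also obtain a bearer token directly from the Azure CLI and paste it into the request. A minimal sketch:

```azurecli
# Request a bearer token for the Azure Resource Manager endpoint.
az account get-access-token \
    --resource https://management.azure.com/ \
    --query accessToken -o tsv
```

Tokens issued this way typically expire after about an hour, so re-run the command when requests start failing with 401 responses.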
## Set up Postman
The following steps are from [this blog post](https://blog.jongallant.com/2021/0
1. In the Authorization tab, select **Bearer Token** as the type.
-1. In the **Token** field, specify the variable placeholder `{{{{bearerToken}}}}`.
+1. In the **Token** field, specify the variable placeholder `{{bearerToken}}`.
1. In the Pre-request Script tab, paste in the following script:
The following steps are from [this blog post](https://blog.jongallant.com/2021/0
1. Save the collection.
-Now that Postman is set up, you can send REST calls similar to the ones described in this article. You'll update the endpoint, and request body where applicable.
+Now that Postman is set up, you can send REST calls similar to the ones described in this article. You'll update the endpoint and request body where applicable.
## List search services

Returns all search services under the current subscription, including detailed service information:

```rest
-GET https://management.azure.com/subscriptions/{{subscriptionId}}/providers/Microsoft.Search/searchServices?api-version=2021-04-01-preview
+GET https://management.azure.com/subscriptions/{{subscriptionId}}/providers/Microsoft.Search/searchServices?api-version=2020-08-01
```

## Create or update a service
-Creates or updates a search service under the current subscription:
+Creates or updates a search service under the current subscription. This example uses variables for the search service name and region, which haven't been defined yet. Either provide the names directly, or add new variables to the collection.
```rest
-PUT https://management.azure.com/subscriptions/{{subscriptionId}}/resourceGroups/{{resource-group}}/providers/Microsoft.Search/searchServices/{{search-service-name}}?api-version=2021-04-01-preview
+PUT https://management.azure.com/subscriptions/{{subscriptionId}}/resourceGroups/{{resource-group}}/providers/Microsoft.Search/searchServices/{{search-service-name}}?api-version=2020-08-01
{ "location": "{{region}}", "sku": {
PUT https://management.azure.com/subscriptions/{{subscriptionId}}/resourceGroups
To create an [S3HD](search-sku-tier.md#tier-descriptions) service, use a combination of `-Sku` and `-HostingMode` properties. Set "sku" to `Standard3` and "hostingMode" to `HighDensity`. ```rest
-PUT https://management.azure.com/subscriptions/{{subscriptionId}}/resourceGroups/{{resource-group}}/providers/Microsoft.Search/searchServices/{{search-service-name}}?api-version=2021-04-01-preview
+PUT https://management.azure.com/subscriptions/{{subscriptionId}}/resourceGroups/{{resource-group}}/providers/Microsoft.Search/searchServices/{{search-service-name}}?api-version=2020-08-01
{ "location": "{{region}}", "sku": {
PUT https://management.azure.com/subscriptions/{{subscriptionId}}/resourcegroups
If you're using [customer-managed encryption](search-security-manage-encryption-keys.md), you can enable "encryptionWithCMK" with "enforcement" set to "Enabled" if you want the search service to report its compliance status.
-When you enable this policy, calls that create objects with sensitive data, such as the connection string within a data source, will fail if an encryption key isn't provided: `"Error creating Data Source: "CannotCreateNonEncryptedResource: The creation of non-encrypted DataSources is not allowed when encryption policy is enforced."`
+When you enable this policy, any REST calls that create objects containing sensitive data, such as the connection string within a data source, will fail if an encryption key isn't provided: `"Error creating Data Source: "CannotCreateNonEncryptedResource: The creation of non-encrypted DataSources is not allowed when encryption policy is enforced."`
```rest PUT https://management.azure.com/subscriptions/{{subscriptionId}}/resourcegroups/{{resource-group}}/providers/Microsoft.Search/searchServices/{{search-service-name}}?api-version=2021-04-01-preview
PUT https://management.azure.com/subscriptions/{{subscriptionId}}/resourcegroups
## (preview) Disable semantic search
-Although [semantic search is not enabled](semantic-search-overview.md#enable-semantic-search) by default, you could lock down the feature at the service level.
+Although [semantic search isn't enabled](semantic-search-overview.md#enable-semantic-search) by default, you could lock down the feature at the service level.
```rest PUT https://management.azure.com/subscriptions/{{subscriptionId}}/resourcegroups/{{resource-group}}/providers/Microsoft.Search/searchServices/{{search-service-name}}?api-version=2021-04-01-Preview
search Search Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-manage.md
tags: azure-portal
Previously updated : 12/21/2022
Last updated : 01/12/2023

# Service administration for Azure Cognitive Search in the Azure portal
Several aspects of a search service are determined when the service is provision
Service administration includes the following tasks: * [Adjust capacity](search-capacity-planning.md) by adding or removing replicas and partitions
-* [Rotate API keys](search-security-api-keys.md) used for admin and query operations
-* [Control access to admin operations](search-security-rbac.md) through role-based security
+* [Manage API keys](search-security-api-keys.md) used for content access
+* [Manage Azure roles](search-security-rbac.md) used for content and service access
* [Configure IP firewall rules](service-configure-firewall.md) to restrict access by IP address * [Configure a private endpoint](service-create-private-endpoint.md) using Azure Private Link and a private virtual network * [Monitor service health and operations](monitor-azure-cognitive-search.md): storage, query volumes, and latency
Internally, Microsoft collects telemetry data about your service and the platfor
| Telemetry | One and a half years | > [!NOTE]
-> This section is about monitoring data. For questions about customer data and privacy, see the ["Data residency"](search-security-overview.md#data-residency) section of the security overview article.
+> See the ["Data residency"](search-security-overview.md#data-residency) section of the security overview article for more information about data location and privacy.
## Administrator permissions

When you open the search service overview page, the Azure role assigned to your account determines what portal content is available to you. The overview page at the beginning of the article shows the portal content available to an Owner or Contributor.
-Control plane roles include the following items:
+Azure roles used for service administration include:
* Owner * Contributor (same as Owner, minus the ability to assign roles)
-* Reader (access to service information and the Monitoring tab)
+* Reader (provides access to service information in the Essentials section and in the Monitoring tab)
-If you want a combination of control plane and data plane permissions, consider Search Service Contributor. For more information, see [Built-in roles](search-security-rbac.md#built-in-roles-used-in-search).
+By default, all search services start with at least one Owner. Owners, service administrators, and co-administrators have permission to create other administrators and other role assignments.
+
+Also by default, search services start with API keys for content-related tasks that an Owner or Contributor might perform in the portal. However, it's possible to turn off [API key authentication](search-security-api-keys.md) and use [Azure role-based access control](search-security-rbac.md#built-in-roles-used-in-search) exclusively. If you turn off API keys, be sure to set up data access role assignments so that all features in the portal remain operational.
> [!TIP]
-> By default, any Owner or Co-owner can create or delete services. To prevent accidental deletions, you can [lock resources](../azure-resource-manager/management/lock-resources.md).
+> By default, any owner or administrator can create or delete services. To prevent accidental deletions, you can [lock resources](../azure-resource-manager/management/lock-resources.md).
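For instance, if you turn off API keys, a data plane role assignment like the following keeps query access working. This is a minimal sketch; the principal and resource names are hypothetical placeholders:

```azurecli
# Grant a user read-only query access to all indexes on a search service.
az role assignment create \
    --role "Search Index Data Reader" \
    --assignee "user@contoso.com" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Search/searchServices/<service-name>"
```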
## Next steps
search Search Security Api Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-security-api-keys.md
Title: Connect with API keys
+ Title: Connect using API keys
description: Learn how to use an admin or query API key for inbound access to an Azure Cognitive Search service endpoint.
Last updated 01/10/2023
Cognitive Search offers key-based authentication that you can use on connections to your search service. An API key is a unique string composed of 52 randomly generated numbers and letters. A request made to a search service endpoint will be accepted if both the request and the API key are valid.
-API keys are frequently used when making REST API calls to a search service. You can also use them in search solutions if Azure Active Directory isn't an option.
+API keys are used for content-related requests, such as creating or querying an index. Upon service creation, it's the only authentication mechanism for data plane (content) operations, but you can replace or supplement key authentication with [Azure roles](search-security-rbac.md) if you can't use hard-coded keys in your code.
> [!NOTE]
-> A quick note about "key" terminology in Cognitive Search. An "API key", which is described in this article, refers to a GUID used for authenticating a request. A "document key" refers to a unique string in your indexed content that's used to uniquely identify documents in a search index. API keys and document keys are unrelated.
+> A quick note about how "key" terminology is used in Cognitive Search. An "API key", which is described in this article, refers to a GUID used for authenticating a request. A separate term, "document key", refers to a unique string in your indexed content that's used to uniquely identify documents in a search index.
## Types of API keys
search Search Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-security-overview.md
Currently, the only external resource that a search service writes customer data
### Exceptions to data residency commitments
-Although customer data isn't stored outside of your region, object names (considered as customer data), will appear in the telemetry logs used by Microsoft Support to troubleshoot your service issues. Telemetry logs include names of indexes, indexers, data sources, skillsets, containers, and key vault store.
+Object names will be stored and processed outside of your selected region or location. Customers shouldn't place any sensitive data in name fields or create applications designed to store sensitive data in these fields. This data will appear in the telemetry logs used by Microsoft to provide support for the service. Object names include names of indexes, indexers, data sources, skillsets, containers, and key vault store.
->[!IMPORTANT]
->Object names aren't obfuscated in the telemetry logs. If possible, please avoid using names that convey sensitive information.
-
-Telemetry logs are retained for one and a half years. During that period, support engineers might access and reference object names under the following conditions:
+Telemetry logs are retained for one and a half years. During that period, Microsoft might access and reference object names under the following conditions:
+ Diagnose an issue, improve a feature, or fix a bug. In this scenario, data access is internal only, with no third-party access.
-+ Proactively suggest to the original customer a workaround or alternative. For example, "Based on your usage of the product, consider using `<feature name>` since it would perform better." In this scenario, Microsoft might expose an object name through dashboards visible to the customer.
++ During support, this information may be used to make suggestions to the customer. For example, "Based on your usage of the product, consider using `<feature name>` since it would perform better."
+
++ Microsoft might expose an object name in dashboards visible to the customer.

Upon request, Microsoft can shorten the retention interval or remove references to specific objects in the telemetry logs. Remember that if you request data removal, Microsoft won't have a full history of your service, which could impede troubleshooting of the object in question.
search Search Security Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-security-rbac.md
Title: Use Azure RBAC roles
+ Title: Use Azure role-based access control
-description: Use Azure role-based access control (Azure RBAC) for granular permissions on service administration and content tasks.
+description: Use Azure role-based access control for granular permissions on service administration and content tasks.
Previously updated : 05/24/2022
Last updated : 01/12/2023

# Use Azure role-based access controls (Azure RBAC) in Azure Cognitive Search
-Azure provides a global [role-based access control (RBAC) authorization system](../role-based-access-control/role-assignments-portal.md) for all services running on the platform. In Cognitive Search, you can:
+Azure provides a global [role-based access control authorization system](../role-based-access-control/role-assignments-portal.md) for all services running on the platform. In Cognitive Search, you can:
+ Use generally available roles for service administration.
Built-in roles include generally available and preview roles. If these roles are
| [Owner](../role-based-access-control/built-in-roles.md#owner) | (Generally available) Full access to the search resource, including the ability to assign Azure roles. Subscription administrators are members by default.</br></br> (Preview) This role has the same access as the Search Service Contributor role on the data plane. It includes access to all data plane actions except the ability to query the search index or index documents. | | [Contributor](../role-based-access-control/built-in-roles.md#contributor) | (Generally available) Same level of access as Owner, minus the ability to assign roles or change authorization options. </br></br> (Preview) This role has the same access as the Search Service Contributor role on the data plane. It includes access to all data plane actions except the ability to query the search index or index documents. | | [Reader](../role-based-access-control/built-in-roles.md#reader) | (Generally available) Limited access to partial service information. In the portal, the Reader role can access information in the service Overview page, in the Essentials section and under the Monitoring tab. All other tabs and pages are off limits. </br></br>This role has access to service information: service name, resource group, service status, location, subscription name and ID, tags, URL, pricing tier, replicas, partitions, and search units. This role also has access to service metrics: search latency, percentage of throttled requests, average queries per second. </br></br>This role doesn't allow access to API keys, role assignments, content (indexes or synonym maps), or content metrics (storage consumed, number of objects). </br></br> (Preview) When you enable the RBAC preview for the data plane, the Reader role has read access across the entire service. This allows you to read search metrics, content metrics (storage consumed, number of objects), and the definitions of data plane resources (indexes, indexers, etc.). The Reader role still won't have access to read API keys or read content within indexes. |
-| [Search Service Contributor](../role-based-access-control/built-in-roles.md#search-service-contributor) | (Generally available) This role is identical to the Contributor role and applies to control plane operations. </br></br>(Preview) When you enable the RBAC preview for the data plane, this role also provides full access to all data plane actions on indexes, synonym maps, indexers, data sources, and skillsets as defined by [`Microsoft.Search/searchServices/*`](../role-based-access-control/resource-provider-operations.md#microsoftsearch). This role does not give you access to query search indexes or index documents. This role is for search service administrators who need to manage the search service and its objects, but without the ability to view or access object data. </br></br>Like Contributor, members of this role can't make or manage role assignments or change authorization options. To use the preview capabilities of this role, your service must have the preview feature enabled, as described in this article. |
+| [Search Service Contributor](../role-based-access-control/built-in-roles.md#search-service-contributor) | (Generally available) This role is identical to the Contributor role and applies to control plane operations. </br></br>(Preview) When you enable the RBAC preview for the data plane, this role also provides full access to all data plane actions on indexes, synonym maps, indexers, data sources, and skillsets as defined by [`Microsoft.Search/searchServices/*`](../role-based-access-control/resource-provider-operations.md#microsoftsearch). This role doesn't give you access to query search indexes or index documents. This role is for search service administrators who need to manage the search service and its objects, but without the ability to view or access object data. </br></br>Like Contributor, members of this role can't make or manage role assignments or change authorization options. To use the preview capabilities of this role, your service must have the preview feature enabled, as described in this article. |
| [Search Index Data Contributor](../role-based-access-control/built-in-roles.md#search-index-data-contributor) | (Preview) Provides full data plane access to content in all indexes on the search service. This role is for developers or index owners who need to import, refresh, or query the documents collection of an index. | | [Search Index Data Reader](../role-based-access-control/built-in-roles.md#search-index-data-reader) | (Preview) Provides read-only data plane access to search indexes on the search service. This role is for apps and users who run queries. |
Built-in roles include generally available and preview roles. If these roles are
+ Role-based access control for data plane operations, such as creating an index or querying an index, is currently in public preview and available under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-+ There are no regional, tier, or pricing restrictions for using Azure RBAC preview , but your search service must be in the Azure public cloud. The preview isn't available in Azure Government, Azure Germany, or Azure China 21Vianet.
++ There are no regional, tier, or pricing restrictions for using Azure RBAC preview, but your search service must be in the Azure public cloud. The preview isn't available in Azure Government, Azure Germany, or Azure China 21Vianet.
-+ If you migrate your Azure subscription to a new tenant, the RBAC preview will need to be re-enabled.
++ If you migrate your Azure subscription to a new tenant, the Azure RBAC preview will need to be re-enabled.
-+ Adoption of Azure RBAC might increase the latency of some requests. Each unique combination of service resource (index, indexer, etc.) and service principal used on a request will trigger an authorization check. These authorization checks can add up to 200 milliseconds of latency to a request.
++ Adoption of role-based access control might increase the latency of some requests. Each unique combination of service resource (index, indexer, etc.) and service principal used on a request will trigger an authorization check. These authorization checks can add up to 200 milliseconds of latency to a request. + In rare cases where requests originate from a high number of different service principals, all targeting different service resources (indexes, indexers, etc.), it's possible for the authorization checks to result in throttling. Throttling would only happen if hundreds of unique combinations of search service resource and service principal were used within a second.
New built-in preview roles grant permissions over content on the search service.
1. In the blue banner that mentions the preview, select **Register** to add the feature to your subscription.
- :::image type="content" source="media/search-howto-aad/rbac-signup-portal.png" alt-text="screenshot of how to sign up for the rbac preview in the portal" border="true" :::
+ :::image type="content" source="media/search-howto-aad/rbac-signup-portal.png" alt-text="screenshot of how to sign up for the preview in the portal" border="true" :::
You can also sign up for the preview using Azure Feature Exposure Control (AFEC) and searching for *Role Based Access Control for Search Service (Preview)*. For more information on adding preview features, see [Set up preview features in Azure subscription](../azure-resource-manager/management/preview-features.md?tabs=azure-portal). > [!NOTE]
-> Once you add the preview to your subscription, all services in the subscription will be permanently enrolled in the preview. If you don't want RBAC on a given service, you can disable RBAC for data plane operations as described in a later section.
+> Once you add the preview to your subscription, all services in the subscription will be permanently enrolled in the preview. If you don't want role-based access control on a given service, you can disable it for data plane operations as described in a later section.
<a name="step-2-preview-configuration"></a>
-## Enable RBAC preview for data plane operations
+## Enable role-based access control preview for data plane operations
**Applies to:** Search Index Data Contributor, Search Index Data Reader, Search Service Contributor
In this step, configure your search service to recognize an **authorization** he
| Role-based access control | Preview | Requires membership in a role assignment to complete the task, described in the next step. It also requires an authorization header. Choosing this option limits you to clients that support the 2021-04-30-preview REST API. | | Both | Preview | Requests are valid using either an API key or an authorization token. |
-If you can't save your selection, or if you get "API access control failed to update for search service `<name>`. DisableLocalAuth is preview and not enabled for this subscription", your subscription enrollment hasn't been initiated or it hasn't been processed.
+All network calls for search service operations and content will respect the option you select: API keys for **API Keys**, an Azure RBAC token for **Role-based access control**, or API keys and Azure RBAC tokens equally for **Both**. This applies to both portal features and clients that access a search service programmatically.
### [**REST API**](#tab/config-svc-rest) Use the Management REST API version 2021-04-01-Preview, [Create or Update Service](/rest/api/searchmanagement/2021-04-01-preview/services/create-or-update), to configure your service.
-If you're using Postman or another web testing tool, see the Tip below for help on setting up the request.
+If you're using Postman or another REST client, see [Manage Azure Cognitive Search using REST](search-manage-rest.md) for help with setting up the client.
1. Under "properties", set ["AuthOptions"](/rest/api/searchmanagement/2021-04-01-preview/services/create-or-update#dataplaneauthoptions) to "aadOrApiKey".
- Optionally, set ["AadAuthFailureMode"](/rest/api/searchmanagement/2021-04-01-preview/services/create-or-update#aadauthfailuremode) to specify whether 401 is returned instead of 403 when authentication fails. The default of "disableLocalAuth" is false so you don't need to set it, but it's listed below to emphasize that it must be false whenever authOptions are set.
+ Optionally, set ["AadAuthFailureMode"](/rest/api/searchmanagement/2021-04-01-preview/services/create-or-update#aadauthfailuremode) to specify whether 401 is returned instead of 403 when authentication fails. The default of "disableLocalAuth" is false, so you don't need to set it, but it's included in the properties list to emphasize that it must be false whenever "authOptions" is set.
```http PUT https://management.azure.com/subscriptions/{{subscriptionId}}/resourcegroups/{{resource-group}}/providers/Microsoft.Search/searchServices/{{search-service-name}}?api-version=2021-04-01-Preview
If you're using Postman or another web testing tool, see the Tip below for help
1. [Assign roles](#step-3-assign-roles) on the service and verify they're working correctly against the data plane.
-> [!TIP]
-> Management REST API calls are authenticated through Azure Active Directory. For guidance on setting up a security principal and a request, see this blog post [Azure REST APIs with Postman (2021)](https://blog.jongallant.com/2021/02/azure-rest-apis-postman-2021/). The previous example was tested using the instructions and Postman collection provided in the blog post.
- <a name="step-3-assign-roles"></a>
Role assignments in the portal are service-wide. If you want to [grant permissio
When [using PowerShell to assign roles](../role-based-access-control/role-assignments-powershell.md), call [New-AzRoleAssignment](/powershell/module/az.resources/new-azroleassignment), providing the Azure user or group name, and the scope of the assignment.
-Before you start, make sure you load the Azure and AzureAD modules and connect to Azure:
+Before you start, make sure you load the Az and AzureAD modules and connect to Azure:
```powershell Import-Module -Name Az
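For example, a minimal sketch of a role assignment scoped to a single search service, using placeholder names:

```powershell
# Sketch: assign the "Search Index Data Reader" role to a user for one search service.
# The sign-in name, subscription ID, resource group, and service name are placeholders.
New-AzRoleAssignment -SignInName "user@contoso.com" `
    -RoleDefinitionName "Search Index Data Reader" `
    -Scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Search/searchServices/<service-name>"
```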
Recall that you can only scope access to top-level resources, such as indexes, s
## Test role assignments
+When testing roles, remember that roles are cumulative, and that inherited roles scoped to the subscription or resource group can't be deleted or denied at the resource (search service) level.
+ ### [**Azure portal**](#tab/test-portal) 1. Open the [Azure portal](https://portal.azure.com).
Recall that you can only scope access to top-level resources, such as indexes, s
1. On the Overview page, select the **Indexes** tab:
+ + Members of the Contributor role can view and create any object, but can't query an index using Search Explorer.
+ + Members of Search Index Data Reader can use Search Explorer to query the index. You can use any API version to check for access. You should be able to issue queries and view results, but you shouldn't be able to view the index definition. + Members of Search Index Data Contributor can select **New Index** to create a new index. Saving a new index will verify write access on the service. ### [**REST API**](#tab/test-rest)
-+ Register your application with Azure Active Directory.
+This approach assumes Postman as the REST client and uses a Postman collection and variables to provide the bearer token. You'll need Azure CLI or another tool to create a security principal for the REST client.
+
+1. Open a command shell for Azure CLI and sign in to your Azure subscription.
+
+ ```azurecli
+ az login
+ ```
+
+1. Get your subscription ID. You'll provide this value as a variable in a future step.
+
+ ```azurecli
+ az account show --query id -o tsv
+ ```
+
+1. Create a resource group for your security principal, specifying a location and name. This example uses the West US region. You'll use the resource group name in a future step.
+
+ ```azurecli
+ az group create -l westus -n MyResourceGroup
+ ```
+
+1. Create the service principal, replacing the placeholder values with valid values. You'll need a descriptive security principal name, subscription ID, and resource group name. This example uses the "Search Index Data Reader" role (enclosed in quotes because the role name contains spaces).
+
+ ```azurecli
+ az ad sp create-for-rbac --name mySecurityPrincipalName --role "Search Index Data Reader" --scopes /subscriptions/mySubscriptionID/resourceGroups/myResourceGroupName
+ ```
+
+ A successful response includes "appId", "password", and "tenant". You'll use these values for the variables "clientId", "clientSecret", and "tenantId".
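+ The output resembles the following sketch (the GUIDs and secret here are placeholders):
+
+ ```json
+ {
+   "appId": "00000000-0000-0000-0000-000000000000",
+   "displayName": "mySecurityPrincipalName",
+   "password": "<client-secret>",
+   "tenant": "00000000-0000-0000-0000-000000000000"
+ }
+ ```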
+
+1. Start a new Postman collection and edit its properties. In the Variables tab, create the following variables:
+
+ | Variable | Description |
+ |-|-|
+ | clientId | Provide the previously generated "appId" that you created in Azure AD. |
+ | clientSecret | Provide the "password" that was created for your client. |
+ | tenantId | Provide the "tenant" that was returned in the previous step. |
+ | subscriptionId | Provide the subscription ID for your subscription. |
+ | resource | Enter `https://search.azure.com`. |
+ | bearerToken | (leave blank; the token is generated programmatically) |
+
+1. In the Authorization tab, select **Bearer Token** as the type.
+
+1. In the **Token** field, specify the variable placeholder `{{bearerToken}}`.
+
+1. In the Pre-request Script tab, paste in the following script:
+
+ ```javascript
+ pm.test("Check for collectionVariables", function () {
+ let vars = ['clientId', 'clientSecret', 'tenantId', 'subscriptionId'];
+ vars.forEach(function (item, index, array) {
+ console.log(item, index);
+ pm.expect(pm.collectionVariables.get(item), item + " variable not set").to.not.be.undefined;
+ pm.expect(pm.collectionVariables.get(item), item + " variable not set").to.not.be.empty;
+ });
+
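+ // Acquire a new bearer token (client credentials flow) only when none is cached or the cached token has expired.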
+ if (!pm.collectionVariables.get("bearerToken") || Date.now() > new Date(pm.collectionVariables.get("bearerTokenExpiresOn") * 1000)) {
+ pm.sendRequest({
+ url: 'https://login.microsoftonline.com/' + pm.collectionVariables.get("tenantId") + '/oauth2/token',
+ method: 'POST',
+ header: 'Content-Type: application/x-www-form-urlencoded',
+ body: {
+ mode: 'urlencoded',
+ urlencoded: [
+ { key: "grant_type", value: "client_credentials", disabled: false },
+ { key: "client_id", value: pm.collectionVariables.get("clientId"), disabled: false },
+ { key: "client_secret", value: pm.collectionVariables.get("clientSecret"), disabled: false },
+ { key: "resource", value: pm.collectionVariables.get("resource") || "https://search.azure.com", disabled: false }
+ ]
+ }
+ }, function (err, res) {
+ if (err) {
+ console.log(err);
+ } else {
+ let resJson = res.json();
+ pm.collectionVariables.set("bearerTokenExpiresOn", resJson.expires_on);
+ pm.collectionVariables.set("bearerToken", resJson.access_token);
+ }
+ });
+ }
+ });
+ ```
-+ Revise your code to use a [Search REST API](/rest/api/searchservice/) (any supported version) and set the **Authorization** header on requests, replacing the **api-key** header.
+1. Save the collection.
- :::image type="content" source="media/search-security-rbac/rest-authorization-header.png" alt-text="Screenshot of an HTTP request with an Authorization header" border="true":::
+1. Send a request that uses the variables you've specified. For the "Search Index Data Reader" role, you can query an index (remember to provide a valid search service name on the URI):
+
+ ```http
+ POST https://<service-name>.search.windows.net/indexes/hotels-quickstart/docs/search?api-version=2020-06-30
+ {
+ "queryType": "simple",
+ "search": "motel",
+ "filter": "",
+ "select": "HotelName,Description,Category,Tags",
+ "count": true
+ }
+ ```
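+Any client that can set an HTTP **Authorization** header can send the same query. The following is a rough JavaScript sketch, assuming `bearerToken` holds a valid Azure AD access token for the `https://search.azure.com` resource, and the service name is a placeholder:
+
+```javascript
+// Sketch: query the index with a bearer token in place of an api-key header.
+async function queryIndex(bearerToken) {
+  const response = await fetch(
+    "https://<service-name>.search.windows.net/indexes/hotels-quickstart/docs/search?api-version=2020-06-30",
+    {
+      method: "POST",
+      headers: {
+        "Content-Type": "application/json",
+        "Authorization": `Bearer ${bearerToken}`
+      },
+      body: JSON.stringify({ search: "motel", count: true })
+    }
+  );
+  return response.json();
+}
+```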
For more information on how to acquire a token for a specific environment, see [Microsoft identity platform authentication libraries](../active-directory/develop/reference-v2-libraries.md). ### [**.NET SDK**](#tab/test-csharp)
-The Azure SDK for .NET supports an authorization header in the [NuGet Gallery | Azure.Search.Documents 11.4.0-beta.2](https://www.nuget.org/packages/Azure.Search.Documents/11.4.0-beta.2) package.
+See [Authorize access to a search app using Azure Active Directory](search-howto-aad.md) for instructions that create an identity for your client app, assign a role, and call [DefaultAzureCredential()](/dotnet/api/azure.identity.defaultazurecredential).
-Configuration is required to register an application with Azure Active Directory, and to obtain and pass authorization tokens:
+The Azure SDK for .NET supports an authorization header in the [NuGet Gallery | Azure.Search.Documents 11.4.0-beta.2](https://www.nuget.org/packages/Azure.Search.Documents/11.4.0-beta.2) package. Configuration is required to register an application with Azure Active Directory, and to obtain and pass authorization tokens:
+ When obtaining the OAuth token, the scope is "https://search.azure.com/.default". The SDK requires the audience to be "https://search.azure.com". The ".default" is an Azure AD convention.
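As a minimal sketch, assuming the beta package noted above plus the Azure.Identity package, and placeholder service and index names:

```csharp
// Sketch: authenticate to a search index with Azure AD instead of an api-key.
using Azure.Identity;
using Azure.Search.Documents;
using Azure.Search.Documents.Models;

var client = new SearchClient(
    new Uri("https://<service-name>.search.windows.net"),
    "hotels-quickstart",
    new DefaultAzureCredential()); // resolves a token with the https://search.azure.com audience

SearchResults<SearchDocument> results = client.Search<SearchDocument>("motel");
```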
These steps create a custom role that augments search query rights to include li
1. Right-click **Search Index Data Reader** (or another role) and select **Clone** to open the **Create a custom role** wizard.
-1. On the Basics tab, provide a name for the custom role, such as "Search Index Data Explorer", and then click **Next**.
+1. On the Basics tab, provide a name for the custom role, such as "Search Index Data Explorer", and then select **Next**.
1. On the Permissions tab, select **Add permission**.
The PowerShell example shows the JSON syntax for creating a custom role that's a
## Disable API key authentication
-API keys can't be deleted, but they can be disabled on your service. If you're using the Search Service Contributor, Search Index Data Contributor, and Search Index Data Reader preview roles and Azure AD authentication, you can disable API keys, causing the search service to refuse all data-related requests that pass an API key in the header for content-related requests.
+API keys can't be deleted, but they can be disabled on your service if you're using the Search Service Contributor, Search Index Data Contributor, and Search Index Data Reader roles and Azure AD authentication. Disabling API keys causes the search service to refuse all data-related requests that pass an API key in the header.
-To disable [key-based authentication](search-security-api-keys.md), use the Management REST API version 2021-04-01-Preview and send two consecutive requests for [Update Service](/rest/api/searchmanagement/2021-04-01-preview/services/create-or-update).
+Owner or Contributor permissions are required to disable features.
-Owner or Contributor permissions are required to disable features. Use Postman or another web testing tool to complete the following steps (see Tip below):
+To disable [key-based authentication](search-security-api-keys.md), use the Azure portal or the Management REST API.
-1. On the first request, set ["AuthOptions"](/rest/api/searchmanagement/2021-04-01-preview/services/create-or-update#dataplaneauthoptions) to "aadOrApiKey" to enable Azure AD authentication. Notice that the option indicates availability of either approach: Azure AD or the native API keys.
+### [**Portal**](#tab/disable-keys-portal)
+
+1. In the Azure portal, navigate to your search service.
+
+1. In the left-navigation pane, select **Keys**.
+
+1. Select **Role-based access control**.
+
+The change is effective immediately. Assuming you have permission to assign roles as a member of Owner, service administrator, or co-administrator, you can use portal features to test role-based access.
+
+### [**REST API**](#tab/disable-keys-rest)
+
+Use Postman or another REST client to send two consecutive requests for [Update Service](/rest/api/searchmanagement/2021-04-01-preview/services/create-or-update). See [Manage a search service using REST APIs](search-manage-rest.md) for instructions on setting up the client.
+
+1. On the first request, set ["AuthOptions"](/rest/api/searchmanagement/2021-04-01-preview/services/create-or-update#dataplaneauthoptions) to "aadOrApiKey" to enable Azure AD authentication. Activating Azure AD authentication is a prerequisite to setting "disableLocalAuth".
```http PUT https://management.azure.com/subscriptions/{{subscriptionId}}/resourcegroups/{{resource-group}}/providers/Microsoft.Search/searchServices/{{search-service-name}}?api-version=2021-04-01-Preview
You can't combine steps one and two. In step one, "disableLocalAuth" must be fal
To re-enable key authentication, rerun the last request, setting "disableLocalAuth" to false. The search service will automatically resume accepting API keys on requests (assuming they're specified).
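In both directions, the toggle is the "disableLocalAuth" flag in the request body. A minimal sketch of just that fragment (the rest of your service definition is unchanged): setting the flag to true disables API keys, and setting it back to false re-enables them.

```json
{
  "properties": {
    "disableLocalAuth": true
  }
}
```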
-> [!TIP]
-> Management REST API calls are authenticated through Azure Active Directory. For guidance on setting up a security principal and a request, see this blog post [Azure REST APIs with Postman (2021)](https://blog.jongallant.com/2021/02/azure-rest-apis-postman-2021/). The previous example was tested using the instructions and Postman collection provided in the blog post.
+ ## Conditional Access
security Security Code Analysis Customize https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/security-code-analysis-customize.md
Title: Customize Microsoft Security Code Analysis tasks description: This article describes customizing the tasks in the Microsoft Security Code Analysis extension-+ Previously updated : 04/18/2022 Last updated : 01/09/2023
# Configure and customize the build tasks > [!Note]
-> Effective December 31, 2022, the Microsoft Security Code Analysis (MSCA) extension will be retired. Existing MSCA customers will retain their access to MSCA through December 31, 2022. Please refer to the [OWASP Source Code Analysis Tools](https://owasp.org/www-community/Source_Code_Analysis_Tools) for alternative options in Azure DevOps. For customers planning to migrate to GitHub, you can check out [GitHub Advanced Security](https://docs.github.com/github/getting-started-with-github/about-github-advanced-security).
+> Effective December 31, 2022, the Microsoft Security Code Analysis (MSCA) extension is retired. MSCA is replaced by the [Microsoft Security DevOps Azure DevOps extension](/azure/defender-for-cloud/azure-devops-extension). Follow the instructions in [Configure](/azure/defender-for-cloud/azure-devops-extension) to install and configure the extension.
This article describes in detail the configuration options available in each of the build tasks. The article starts with the tasks for security code analysis tools. It ends with the post-processing tasks.
security Security Code Analysis Onboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/security-code-analysis-onboard.md
Title: Microsoft Security Code Analysis onboarding guide description: Learn how to onboard and install the Microsoft Security Code Analysis extension. See prerequisites and view additional resources.-+ Previously updated : 04/18/2022 Last updated : 01/09/2023
# Onboarding and installing > [!Note]
-> Effective December 31, 2022, the Microsoft Security Code Analysis (MSCA) extension will be retired. Existing MSCA customers will retain their access to MSCA through December 31, 2022. Please refer to the [OWASP Source Code Analysis Tools](https://owasp.org/www-community/Source_Code_Analysis_Tools) for alternative options in Azure DevOps. For customers planning to migrate to GitHub, you can check out [GitHub Advanced Security](https://docs.github.com/github/getting-started-with-github/about-github-advanced-security).
+> Effective December 31, 2022, the Microsoft Security Code Analysis (MSCA) extension is retired. MSCA is replaced by the [Microsoft Security DevOps Azure DevOps extension](/azure/defender-for-cloud/azure-devops-extension). Follow the instructions in [Configure](/azure/defender-for-cloud/azure-devops-extension) to install and configure the extension.
Prerequisites to getting started with Microsoft Security Code Analysis:
security Security Code Analysis Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/security-code-analysis-overview.md
Title: Microsoft Security Code Analysis documentation overview description: Learn about the Microsoft Security Code Analysis extension. With this extension, you can add security code analysis to Azure DevOps CI/ID pipelines.-+ Previously updated : 04/18/2022 Last updated : 01/09/2023
# About Microsoft Security Code Analysis > [!Note]
-> Effective December 31, 2022, the Microsoft Security Code Analysis (MSCA) extension will be retired. Existing MSCA customers will retain their access to MSCA through December 31, 2022. Please refer to the [OWASP Source Code Analysis Tools](https://owasp.org/www-community/Source_Code_Analysis_Tools) for alternative options in Azure DevOps. For customers planning to migrate to GitHub, you can check out [GitHub Advanced Security](https://docs.github.com/github/getting-started-with-github/about-github-advanced-security).
+> Effective December 31, 2022, the Microsoft Security Code Analysis (MSCA) extension is retired. MSCA is replaced by the [Microsoft Security DevOps Azure DevOps extension](/azure/defender-for-cloud/azure-devops-extension). Follow the instructions in [Configure](/azure/defender-for-cloud/azure-devops-extension) to install and configure the extension.
With the Microsoft Security Code Analysis extension, teams can add security code analysis to their Azure DevOps continuous integration and delivery (CI/CD) pipelines. This analysis is recommended by the [Secure Development Lifecycle (SDL)](https://www.microsoft.com/securityengineering/sdl/practices) experts at Microsoft.
security Security Code Analysis Releases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/security-code-analysis-releases.md
Title: Microsoft Security Code Analysis releases description: This article describes upcoming releases for the Microsoft Security Code Analysis extension-+ Previously updated : 04/18/2022 Last updated : 01/09/2023
# Microsoft Security Code Analysis releases and roadmap > [!Note]
-> Effective December 31, 2022, the Microsoft Security Code Analysis (MSCA) extension will be retired. Existing MSCA customers will retain their access to MSCA through December 31, 2022. Please refer to the [OWASP Source Code Analysis Tools](https://owasp.org/www-community/Source_Code_Analysis_Tools) for alternative options in Azure DevOps. For customers planning to migrate to GitHub, you can check out [GitHub Advanced Security](https://docs.github.com/github/getting-started-with-github/about-github-advanced-security).
+> Effective December 31, 2022, the Microsoft Security Code Analysis (MSCA) extension is retired. MSCA is replaced by the [Microsoft Security DevOps Azure DevOps extension](/azure/defender-for-cloud/azure-devops-extension). Follow the instructions in [Configure](/azure/defender-for-cloud/azure-devops-extension) to install and configure the extension.
Microsoft Security Code Analysis team in partnership with Developer Support is proud to announce recent and upcoming enhancements to our MSCA extension.
security Yaml Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/yaml-configuration.md
Title: Microsoft Azure Security Code Analysis task customization guide description: This article describes lists YAML configuration options for customizing all tasks in the Microsoft Security Code Analysis extension-+ Previously updated : 04/18/2022 Last updated : 01/09/2023
# YAML configuration options to customize the build tasks > [!Note]
-> Effective December 31, 2022, the Microsoft Security Code Analysis (MSCA) extension will be retired. Existing MSCA customers will retain their access to MSCA through December 31, 2022. Please refer to the [OWASP Source Code Analysis Tools](https://owasp.org/www-community/Source_Code_Analysis_Tools) for alternative options in Azure DevOps. For customers planning to migrate to GitHub, you can check out [GitHub Advanced Security](https://docs.github.com/github/getting-started-with-github/about-github-advanced-security).
+> Effective December 31, 2022, the Microsoft Security Code Analysis (MSCA) extension is retired. MSCA is replaced by the [Microsoft Security DevOps Azure DevOps extension](/azure/defender-for-cloud/azure-devops-extension). Follow the instructions in [Configure](/azure/defender-for-cloud/azure-devops-extension) to install and configure the extension.
This article lists all YAML configuration options available in each of the build tasks. The article starts with the tasks for security code analysis tools. It ends with the post-processing tasks.
This article lists all YAML configuration options available in each of the build
|||--|-||-|| | RuleLibrary | pickList | always | True | tslint | custom, microsoft, tslint | All results include the rules shipped with the selected version of TSLint (**Base Only**).<br/><br/>**Base Only -** Only the rules shipped with TSLint.<br/><br/>**Include Microsoft Rules -** Downloads [tslint-microsoft-contrib](https://github.com/Microsoft/tslint-microsoft-contrib) and includes its rules to be available for use in the TSLint run. Choosing this option hides the `Type Checking` checkbox, as it is required by Microsoft's rules and will automatically be used. It also unhides the `Microsoft Contribution Version` field, allowing a version of the `tslint-microsoft-contrib` from [npm](https://www.npmjs.com/package/tslint-microsoft-contrib) to be selected.<br/><br/>**Include Custom Rules -** Unhides the `Rules Directory` field, which accepts an accessible path to a directory of TSLint rules to be available for use in the TSLint run.<br/><br/>**Note:** The default value has changed to tslint, as many users have experienced issues configuring the Microsoft ruleset. For specific version configuration, please see [tslint-microsoft-contrib on GitHub](https://github.com/microsoft/tslint-microsoft-contrib). | RulesDirectory | string | RuleLibrary == custom | True | | | An accessible directory containing additional TSLint rules to be available for use in the TSLint run.
-| Ruleset | pickList | RuleLibrary != microsoft | True | tsrecommended | custom, tslatest, tsrecommended | Defines the rules to run against TypeScript files.<br/><br/>**[tslint:latest](https://github.com/palantir/tslint/blob/master/src/configs/latest.ts) -** Extends `tslint:recommended` and is continuously updated to include configuration for the latest rules in every TSLint release. Using this config may introduce breaking changes across minor releases as a new rules are enabled which cause lint failures in your code. When TSLint reaches a major version bump, `tslint:recommended` will be updated to be identical to `tslint:latest`.<br/><br/>**[tslint:recommended](https://github.com/palantir/tslint/blob/master/src/configs/recommended.ts) -** A stable, somewhat opinionated set of rules which TSLint encourages for general TypeScript programming. This configuration follows `semver`, so it will *not* have breaking changes across minor or patch releases.
+| Ruleset | pickList | RuleLibrary != microsoft | True | tsrecommended | custom, tslatest, tsrecommended | Defines the rules to run against TypeScript files.<br/><br/>**[tslint:latest](https://github.com/palantir/tslint/blob/master/src/configs/latest.ts) -** Extends `tslint:recommended` and is continuously updated to include configuration for the latest rules in every TSLint release. Using this config may introduce breaking changes across minor releases as new rules are enabled which cause lint failures in your code. When TSLint reaches a major version bump, `tslint:recommended` will be updated to be identical to `tslint:latest`.<br/><br/>**[tslint:recommended](https://github.com/palantir/tslint/blob/master/src/configs/recommended.ts) -** A stable, somewhat opinionated set of rules which TSLint encourages for general TypeScript programming. This configuration follows `semver`, so it will *not* have breaking changes across minor or patch releases.
| RulesetMicrosoft | pickList | RuleLibrary == microsoft | True | mssdlrequired | custom, msrecommended, mssdlrecommended, mssdlrequired, tslatest, tsrecommended | Defines the rules to run against TypeScript files.<br/><br/>**[microsoft:sdl-required](https://github.com/Microsoft/tslint-microsoft-contrib/wiki/TSLint-and-the-Microsoft-Security-Development-Lifecycle) -** Run all of the available checks provided by tslint and the tslint-microsoft-contrib rules that satisfy the *required* [Security Development Lifecycle (SDL)](https://www.microsoft.com/sdl/) policies.<br/><br/>**[microsoft:sdl-recommended](https://github.com/Microsoft/tslint-microsoft-contrib/wiki/TSLint-and-the-Microsoft-Security-Development-Lifecycle) -** Run all of the available checks provided by tslint and the tslint-microsoft-contrib rules that satisfy the *required and recommended* [Security Development Lifecycle (SDL)](https://www.microsoft.com/sdl/) policies.<br/><br/>**microsoft:recommended** All checks that are recommended by the creators of the tslint-microsoft-contrib rules. This includes security and non-security checks.<br/><br/>**[tslint:latest](https://github.com/palantir/tslint/blob/master/src/configs/latest.ts) -** Extends `tslint:recommended` and is continuously updated to include configuration for the latest rules in every TSLint release. Using this config may introduce breaking changes across minor releases as new rules are enabled which cause lint failures in your code. When TSLint reaches a major version bump, `tslint:recommended` will be updated to be identical to `tslint:latest`.<br/><br/>**[tslint:recommended](https://github.com/palantir/tslint/blob/master/src/configs/recommended.ts) -** A stable, somewhat opinionated set of rules which TSLint encourages for general TypeScript programming. This configuration follows `semver`, so it will *not* have breaking changes across minor or patch releases. | RulesetFile | string | Ruleset == custom OR RulesetMicrosoft == custom | True | | | A [configuration file](https://palantir.github.io/tslint/usage/cli/) specifying which rules to run.<br/><br/>The path to the config will be added as the path for [custom rules](https://palantir.github.io/tslint/develop/custom-rules/). | FileSelectionType | pickList | always | True | fileGlob | fileGlob, projectFile |
This article lists all YAML configuration options available in each of the build
| OutputFormat | pickList | always | True | json | checkstyle, codeFrame, filesList, json, msbuild, pmd, prose, stylish, verbose, vso | The [formatter](https://palantir.github.io/tslint/formatters/) to use to generate output. Note that the JSON format is compatible with Post Analysis. | NodeMemory | string | always | False | | | An explicit amount of memory in MBs to allocate to node for running TSLint. Example: 8000<br/><br/>Maps to the `--max_old_space=<value>` CLI option for node, which is a `v8 option`. | ToolVersion | pickList | RuleLibrary != microsoft | True | latest | 4.0.0, 4.0.1, 4.0.2, 4.1.0, 4.1.1, 4.2.0, 4.3.0, 4.3.1, 4.4.0, 4.4.1, 4.4.2, 4.5.0, 4.5.1, 5.0.0, 5.1.0, 5.2.0, 5.3.0, 5.3.2, 5.4.0, 5.4.1, 5.4.2, 5.4.3, 5.5.0, latest | The [version](https://github.com/palantir/tslint/releases) of TSLint to download and run.
-| TypeScriptVersion | pickList | always | True | latest | 0.8.0, 0.8.1, 0.8.2, 0.8.3, 0.9.0, 0.9.1, 0.9.5, 0.9.7, 1.0.0, 1.0.1, 1.3.0, 1.4.1, 1.5.3, 1.6.2, 1.7.3, 1.7.5, 1.8.0, 1.8.10, 1.8.2, 1.8.5, 1.8.6, 1.8.7, 1.8.9, 1.9.0, 2.0.0, 2.0.10, 2.0.2, 2.0.3, 2.0.6, 2.0.7, 2.0.8, 2.0.9, 2.1.1, 2.1.4, 2.1.5, 2.1.6, 2.2.0, 2.2.1, custom, latest | The version of [typescript](https://www.npmjs.com/package/typescript) to download and use.<br/>**Note:** This needs to be the same version of TypeScript as is used to compile your code.
-| TypeScriptVersionCustom | string | TypeScriptVersion == custom | True | latest | | The version of [typescript](https://www.npmjs.com/package/typescript) to download and use.<br/>**Note:** This needs to be the same version of TypeScript as is used to compile your code.
+| TypeScriptVersion | pickList | always | True | latest | 0.8.0, 0.8.1, 0.8.2, 0.8.3, 0.9.0, 0.9.1, 0.9.5, 0.9.7, 1.0.0, 1.0.1, 1.3.0, 1.4.1, 1.5.3, 1.6.2, 1.7.3, 1.7.5, 1.8.0, 1.8.10, 1.8.2, 1.8.5, 1.8.6, 1.8.7, 1.8.9, 1.9.0, 2.0.0, 2.0.10, 2.0.2, 2.0.3, 2.0.6, 2.0.7, 2.0.8, 2.0.9, 2.1.1, 2.1.4, 2.1.5, 2.1.6, 2.2.0, 2.2.1, custom, latest | The version of [TypeScript](https://www.npmjs.com/package/typescript) to download and use.<br/>**Note:** This needs to be the same version of TypeScript as is used to compile your code.
+| TypeScriptVersionCustom | string | TypeScriptVersion == custom | True | latest | | The version of [TypeScript](https://www.npmjs.com/package/typescript) to download and use.<br/>**Note:** This needs to be the same version of TypeScript as is used to compile your code.
| MicrosoftContribVersion | pickList | RuleLibrary == microsoft | | latest | 4.0.0, 4.0.1, 5.0.0, 5.0.1, latest | The version of [tslint-microsoft-contrib](https://www.npmjs.com/package/tslint-microsoft-contrib) (SDL Rules) to download and use.</br>**Note:** The version of [tslint](https://www.npmjs.com/package/tslint) will be chosen that is compatible with the version chosen for tslint-microsoft-contrib. Updates to tslint-microsoft-contrib will be gated by this build task, until a period of testing can occur. ## Publish Security Analysis Logs task
This article lists all YAML configuration options available in each of the build
## Next steps
-If you have further questions about the Security Code Analysis extension and the tools offered, check out [our FAQ page](security-code-analysis-faq.yml).
+If you have further questions about the Security Code Analysis extension and the tools offered, check out [our FAQ page](security-code-analysis-faq.yml).
sentinel Ci Cd Custom Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/ci-cd-custom-deploy.md
description: This article describes how to customize repository deployments for the repositories feature in Microsoft Sentinel. Previously updated : 9/15/2022 Last updated : 1/11/2023 #Customer intent: As a SOC collaborator or MSSP analyst, I want to know how to optimize my source control repositories for continuous integration and continuous delivery (CI/CD). Specifically as an MSSP content manager, I want to know how to deploy one solution to many customer workspaces and still be able to tailor custom content for their environments.
For more information, see [Validate your content](ci-cd-custom-content.md#valida
## Customize the workflow or pipeline
-The default workflow only deploys content that has been modified since the last deployment based on commits to the repository. But you may want to turn off smart deployments or perform other customizations. For example, you can configure different deployment triggers, or deploy content exclusively from a specific root folder.
+The default workflow only deploys content that has been modified since the last deployment, based on commits to the repository. But you may need other customizations, such as configuring different deployment triggers or deploying content exclusively from a specific root folder.
Select one of the following tabs depending on your connection type:
For more information, see the [Azure DevOps documentation](/azure/devops/pipelin
## Scale your deployments with parameter files
-Rather than passing parameters as inline values in your content files, you can [use a JSON file that contains the parameter values](../azure-resource-manager/templates/parameter-files.md). You can then map those parameter JSON files to their associated Sentinel content files to better scale your deployments across different workspaces. There are a number of ways to map parameter files to Sentinel files, and the repositories deployment pipeline considers them in the following order:
+Rather than passing parameters as inline values in your content files, consider [using a JSON file that contains the parameter values](../azure-resource-manager/templates/parameter-files.md). Then map those parameter JSON files to their associated Sentinel content files to better scale your deployments across different workspaces. There are a number of ways to map parameter files to Sentinel files, and the repositories deployment pipeline considers them in the following order:
:::image type="content" source="media/ci-cd-custom-deploy/deploy-parameter-file-precedence.svg" alt-text="A diagram showing the precedence of parameter file mappings.":::
-1. Is there a mapping in the sentinel-deployment.config? [Customize your connection configuration](ci-cd-custom-deploy.md#customize-your-connection-configuration) to learn more.
-1. Is there a workspace-mapped parameter file? This would be a parameter file in the same directory as the content files that ends with .parameters-<WorkspaceID>.json
-1. Is there a default parameter file? This would be any parameter file in the same directory as the content files that ends with .parameters.json
+1. Is there a mapping in the *sentinel-deployment.config*? [Customize your connection configuration](ci-cd-custom-deploy.md#customize-your-connection-configuration) to learn more.
+1. Is there a workspace-mapped parameter file? This would be a parameter file in the same directory as the content files that ends with *.parameters-\<WorkspaceID>.json*
+1. Is there a default parameter file? This would be any parameter file in the same directory as the content files that ends with *.parameters.json*
It is encouraged to map your parameter files through the configuration file or by specifying the workspace ID in the file name to avoid clashes in scenarios with multiple deployments.
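For example, a workspace-mapped parameter file such as *myRule.parameters-\<WorkspaceID>.json* follows the standard ARM template parameter file schema. A minimal sketch, with a hypothetical "workspace" parameter:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "workspace": {
      "value": "<workspace-name>"
    }
  }
}
```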
It is encouraged to map your parameter files through the configuration f
> Once a parameter file match is determined based on the above mapping precedence, the pipeline will ignore any remaining mappings. >
-Modifying the mapped parameter file listed in the sentinel-deployment.config will trigger the deployment of its paired content file. Adding or modifying a *.parameters-\<workspaceID\>.json* file or *.parameters.json* file will also trigger a deployment of the paired content file(s) along with the newly modified parameters, unless a higher precedence parameter mappings is in place. Other content files won't be deployed as long as the smart deployments feature is still enabled in the workflow/pipeline definition file.
+Modifying the mapped parameter file listed in the *sentinel-deployment.config* will trigger the deployment of its paired content file. Adding or modifying a *.parameters-\<WorkspaceID\>.json* file or *.parameters.json* file will also trigger a deployment of the paired content file(s) along with the newly modified parameters, unless a higher-precedence parameter mapping is in place. Other content files won't be deployed as long as the smart deployments feature is still enabled in the workflow/pipeline definition file.
## Customize your connection configuration
service-bus-messaging Service Bus Nodejs How To Use Queues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-nodejs-how-to-use-queues.md
In this tutorial, you complete the following steps:
2. Create a Service Bus queue, using the Azure portal. 3. Write a JavaScript application to use the [@azure/service-bus](https://www.npmjs.com/package/@azure/service-bus) package to: 1. Send a set of messages to the queue.
- 1. Write a .NET console application to receive those messages from the queue.
+ 1. Receive those messages from the queue.
> [!NOTE] > This quick start provides step-by-step instructions for a simple scenario of sending messages to a Service Bus queue and receiving them. You can find pre-built JavaScript and TypeScript samples for Azure Service Bus in the [Azure SDK for JavaScript repository on GitHub](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/servicebus/service-bus/samples/v7).
If you're new to the service, see [Service Bus overview](service-bus-messaging-o
- An Azure subscription. To complete this tutorial, you need an Azure account. You can activate your [MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/credit-for-visual-studio-subscribers/?WT.mc_id=A85619ABF) or sign up for a [free account](https://azure.microsoft.com/free/?WT.mc_id=A85619ABF). - [Node.js LTS](https://nodejs.org/en/download/)-- If you don't have a queue to work with, follow steps in the [Use Azure portal to create a Service Bus queue](service-bus-quickstart-portal.md) article to create a queue. ### [Passwordless](#tab/passwordless)
Note down the following, which you'll use in the code below:
> [!NOTE]
-> - This tutorial works with samples that you can copy and run using [Nodejs](https://nodejs.org/). For instructions on how to create a Node.js application, see [Create and deploy a Node.js application to an Azure Website](../app-service/quickstart-nodejs.md), or [Node.js cloud service using Windows PowerShell](../cloud-services/cloud-services-nodejs-develop-deploy-app.md).
+> This tutorial works with samples that you can copy and run using [Node.js](https://nodejs.org/). For instructions on how to create a Node.js application, see [Create and deploy a Node.js application to an Azure Website](../app-service/quickstart-nodejs.md), or [Node.js cloud service using Windows PowerShell](../cloud-services/cloud-services-nodejs-develop-deploy-app.md).
[!INCLUDE [service-bus-create-namespace-portal](./includes/service-bus-create-namespace-portal.md)]
service-fabric Service Fabric Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-versions.md
If you want to find a list of all the available Service Fabric runtime versions
| 9.0 CU2<br>9.0.1048.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 6.0 | .NET 6.0 (GA), >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | November 1, 2023 | | 9.0 CU1<br>9.0.1028.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 6.0 | .NET 6.0 (GA), >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | November 1, 2023 | | 9.0 RTO<br>9.0.1017.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 6.0 | .NET 6.0 (GA), >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | November 1, 2023 |--
-| Service Fabric runtime |Can upgrade directly from|Can downgrade to*|Compatible SDK or NuGet package version|Supported .NET runtimes** |OS Version |End of support |
-| | | | | | | |
| 8.2 CU7<br>8.2.1692.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 6.0 | .NET 6.0 (GA), >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | March 31, 2023 | | 8.2 CU6<br>8.2.1686.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 6.0 | .NET 6.0 (GA), >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | March 31, 2023 | | 8.2 CU4<br>8.2.1659.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 5.2 | .NET 5.0, >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | March 31, 2023 |
If you want to find a list of all the available Service Fabric runtime versions
| 8.2 CU2<br>8.2.1486.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 5.2 | .NET 6.0 (Preview), .NET 5.0, >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | March 31, 2023 | | 8.2 CU1<br>8.2.1363.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 5.2 | .NET 5.0, >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | March 31, 2023 | | 8.2 RTO<br>8.2.1235.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 5.2 | .NET 5.0, >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | March 31, 2023 |+
+| Service Fabric runtime |Can upgrade directly from|Can downgrade to*|Compatible SDK or NuGet package version|Supported .NET runtimes** |OS Version |End of support |
+| | | | | | | |
| 8.1 CU4<br>8.1.388.9590 | 7.2 CU7<br>7.2.477.9590 | 8.0 | Less than or equal to version 5.1 | .NET 5.0, >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | June 30, 2022 | | 8.1 CU3.1<br>8.1.337.9590 | 7.2 CU7<br>7.2.477.9590 | 8.0 | Less than or equal to version 5.1 | .NET 5.0, >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | June 30, 2022 | | 8.1 CU3<br>8.1.335.9590 | 7.2 CU7<br>7.2.477.9590 | 8.0 | Less than or equal to version 5.1 | .NET 5.0, >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | June 30, 2022 |
Support for Service Fabric on a specific OS ends when support for the OS version
| 9.0 CU4<br>9.0.1114.1 | 8.0 CU3<br>8.0.527.1 | 8.2 CU 5.1<br>8.2.1483.1 | Less than or equal to version 6.0 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | November 1, 2023 | | 9.0 CU3<br>9.0.1103.1 | 8.0 CU3<br>8.0.527.1 | 8.2 CU 5.1<br>8.2.1483.1 | Less than or equal to version 6.0 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | November 1, 2023 | | 9.0 CU2.1<br>9.0.1086.1 | 8.0 CU3<br>8.0.527.1 | 8.2 CU 5.1<br>8.2.1483.1 | Less than or equal to version 6.0 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | November 1, 2023 |-
-| Service Fabric runtime | Can upgrade directly from |Can downgrade to*|Compatible SDK or NuGet package version | Supported .NET runtimes** | OS version | End of support |
-| | | | | | | |
-| 9.0 CU2<br>9.0.1056.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 6.0 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | August 19, 2022 |
-| 9.0 CU1<br>9.0.1035.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 6.0 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | August 19, 2022 |
-| 9.0 RTO<br>9.0.1018.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 6.0 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | August 19, 2022 |
| 8.2 CU6<br>8.2.1485.1 | 8.0 CU3<br>8.0.527.1 | N/A | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | March 31, 2023 | | 8.2 CU5.1<br>8.2.1483.1 | 8.0 CU3<br>8.0.527.1 | N/A | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | March 31, 2023 | | 8.2 CU4<br>8.2.1458.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | March 31, 2023 |
Support for Service Fabric on a specific OS ends when support for the OS version
| 8.2 CU2<br>8.2.1285.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | March 31, 2023 | | 8.2 CU1<br>8.2.1204.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | March 31, 2023 | | 8.2 RTO<br>8.2.1124.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | March 31, 2023 |+
+| Service Fabric runtime | Can upgrade directly from |Can downgrade to*|Compatible SDK or NuGet package version | Supported .NET runtimes** | OS version | End of support |
+| | | | | | | |
+| 9.0 CU2<br>9.0.1056.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 6.0 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | August 19, 2022 |
+| 9.0 CU1<br>9.0.1035.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 6.0 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | August 19, 2022 |
+| 9.0 RTO<br>9.0.1018.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 6.0 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | August 19, 2022 |
| 8.1 CU4<br>8.1.360.1 | 7.2 CU7<br>7.2.476.1 | 8.0 | Less than or equal to version 5.1 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | June 30, 2022 | | 8.1 CU3.1<br>8.1.340.1 | 7.2 CU7<br>7.2.476.1 | 8.0 | Less than or equal to version 5.1 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | June 30, 2022 | | 8.1 CU3<br>8.1.334.1 | 7.2 CU7<br>7.2.476.1 | 8.0 | Less than or equal to version 5.1 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | June 30, 2022 |
service-health Impacted Resources Outage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-health/impacted-resources-outage.md
The list of impacted resources can be exported to an excel file by clicking on t
## Accessing Impacted Resources programmatically via an API
-Outage impacted resource information can be retrieved programmatically using the Events API. The API documentation [here](https://learn.microsoft.com/rest/api/resourcehealth/impacted-resources/list-by-subscription-id-and-event-id?tabs=HTTP) provides the details around how customers can access this data.
+Outage impacted resource information can be retrieved programmatically using the Events API. For details on how to access this data, see the [Impacted Resources - List By Subscription Id And Event Id](https://learn.microsoft.com/rest/api/resourcehealth/2022-05-01/impacted-resources/list-by-subscription-id-and-event-id?tabs=HTTP) API documentation.
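+For example, a sketch of the GET request for that operation (the event tracking ID is a placeholder):
+
+```http
+GET https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.ResourceHealth/events/{eventTrackingId}/impactedResources?api-version=2022-05-01
+```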
## Next Steps - See [Introduction to Azure Service Health dashboard](service-health-overview.md) and [Introduction to Azure Resource Health](resource-health-overview.md) to understand more about them.
service-health Service Health Notifications Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-health/service-health-notifications-properties.md
subStatus | Usually the HTTP status code of the corresponding REST call, but can
eventTimestamp | Timestamp when the event was generated by the Azure service processing the request corresponding to the event. submissionTimestamp | Timestamp when the event became available for querying. subscriptionId | The Azure subscription in which this event was logged.
-status | String describing the status of the operation. Some common values are: **Started**, **In Progress**, **Succeeded**, **Failed**, **Active**, and **Resolved**.
+status | String describing the status of the operation. Values are **Active** and **Resolved**.
operationName | The name of the operation. category | This property is always **ServiceHealth**. resourceId | The Resource ID of the impacted resource.
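Pieced together from the properties above, a service health entry in the Activity Log resembles the following sketch (all values are illustrative placeholders):

```json
{
  "category": "ServiceHealth",
  "status": "Active",
  "operationName": "Microsoft.ServiceHealth/incident/action",
  "subscriptionId": "<subscription-id>",
  "resourceId": "/subscriptions/<subscription-id>",
  "eventTimestamp": "2023-01-10T15:30:00Z",
  "submissionTimestamp": "2023-01-10T15:32:00Z"
}
```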
spring-apps How To Configure Enterprise Spring Cloud Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-configure-enterprise-spring-cloud-gateway.md
You can also view or edit those properties in the Azure portal, as shown in the
> > After configuring SSO, remember to set `ssoEnabled: true` for the Spring Cloud Gateway routes.
+## Configure single sign-on (SSO) logout
+
+VMware Spring Cloud Gateway service instances provide a default API endpoint to log out of the current SSO session. The path to this endpoint is `/scg-logout`. You can accomplish one of the following two outcomes depending on how you call the logout endpoint:
+
+- Log out of the session and redirect to the IdP logout.
+- Log out of just the service instance session.
+
+### Log out of the IdP and SSO session
+
+If you send a GET request to the `/scg-logout` endpoint, the endpoint sends a 302 redirect response to the IdP logout URL. To have the endpoint return the user to a path on the gateway service instance, add a `redirect` parameter to the GET `/scg-logout` request. For example, `${serverUrl}/scg-logout?redirect=/home`.
+
+The following steps describe an example of how to implement the function in your microservices.
+
+1. You need [a route config](https://github.com/Azure-Samples/animal-rescue/blob/0e343a27f44cc4a4bfbf699280476b0517854d7b/frontend/azure/api-route-config.json#L32) to route the logout request to your application.
+
+1. In that application, you can add whatever logout logic you need. At the end, you need to [send a GET request](https://github.com/Azure-Samples/animal-rescue/blob/0e343a27f44cc4a4bfbf699280476b0517854d7b/frontend/src/App.js#L84) to the gateway's `/scg-logout` endpoint.
+
+> [!NOTE]
+> The value of the `redirect` parameter must be a valid path on the gateway service instance. You can't redirect to an external URL.
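+For example, a browser client can trigger the full logout flow by navigating rather than calling XHR, so the 302 redirect to the IdP is followed. A minimal sketch, assuming `/home` is a valid path behind the gateway:
+
+```javascript
+// Full logout: plain navigation lets the browser follow the 302 to the IdP logout page.
+window.location.href = "/scg-logout?redirect=/home";
+```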
+
+### Log out just the SSO session
+
+If you send the GET request to the `/scg-logout` endpoint using an `XMLHttpRequest` (XHR), the 302 redirect could be swallowed and not handled in the response handler. In this case, the user would only be logged out of the SSO session on the gateway service instance and would still have a valid IdP session. The behavior typically seen in this case is that if the user attempts to log in again, they're automatically sent back to the gateway as authenticated by the IdP.
+
+As with the full logout flow, you need a route configuration that routes the logout request to your application. The following XHR example produces a gateway-only logout of the SSO session.
+
+```javascript
+const req = new XMLHttpRequest();
+req.open("GET", "/scg-logout");
+req.send();
+```
+ ## Configure cross-origin resource sharing (CORS) Cross-origin resource sharing (CORS) allows restricted resources on a web page to be requested from another domain outside the domain from which the first resource was served. The available CORS configuration options are described in the following table.
static-web-apps Apis Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/apis-overview.md
The following constraints apply to all API backends:
- Each static web app environment can only be configured with one type of backend API at a time.
- The API route prefix must be `/api`.
-- Route rules for APIs only support [redirects](configuration.md#defining-routes) and [securing routes with roles](configuration.md#securing-routes-with-roles).
+- Route rules for APIs only support [redirects](configuration.md#define-routes) and [securing routes with roles](configuration.md#securing-routes-with-roles).
- Only HTTP requests are supported for APIs. WebSocket, for example, is not supported.
- The maximum duration of each API request is 45 seconds.
- Network isolated backends are not supported.
static-web-apps Application Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/application-settings.md
Title: Configure application settings for Azure Static Web Apps
-description: Learn to configure application settings for Azure Static Web Apps
+description: Learn how to configure application settings for Azure Static Web Apps.
Previously updated : 12/21/2021 Last updated : 01/10/2023 -+ # Configure application settings for Azure Static Web Apps
-Application settings hold configuration values that may change, such as database connection strings. Adding application settings allows you to modify the configuration input to your app, without having to change application code.
+When you configure application settings and environment variables, such as database connection strings, you modify the configuration input to your app without the need to change application code. You can also store secrets used in [authentication configuration](key-vault-secrets.md).
-Application settings:
-
-- Are available as environment variables to the backend API of a static web app
-- Can be used to store secrets used in [authentication configuration](key-vault-secrets.md)
-- Are encrypted at rest
-- Are copied to [staging](review-publish-pull-requests.md) and production environments
-- May only be alphanumeric characters, `.`, and `_`
+Application settings are encrypted at rest, copied to [staging](review-publish-pull-requests.md) and production environments, used by backend APIs, and may only be alphanumeric characters, plus `.` and `_`.
> [!IMPORTANT] > The application settings described in this article only apply to the backend API of an Azure Static Web App.
Application settings:
## Prerequisites

- An Azure Static Web Apps application
-- [Azure CLI](/cli/azure/install-azure-cli) — required if you are using the command line
+- [Azure CLI](/cli/azure/install-azure-cli), required if you are using the command line
## Configure API application settings for local development
-APIs in Azure Static Web Apps are powered by Azure Functions, which allows you to define application settings in the _local.settings.json_ file when you're running the application locally. This file defines application settings in the `Values` property of the configuration.
+APIs in Azure Static Web Apps are powered by Azure Functions, which allows you to define application settings in the _local.settings.json_ file when you run the application locally. This file defines application settings in the `Values` property of the configuration.
> [!NOTE] > The _local.settings.json_ file is only used for local development. Use the [Azure portal](#configure-application-settings) to configure application settings for production.
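For reference, a minimal _local.settings.json_ sketch might look like the following; the setting name matches the example that follows, and the value is a placeholder:

```json
{
  "IsEncrypted": false,
  "Values": {
    "DATABASE_CONNECTION_STRING": "<your-database-connection-string>"
  }
}
```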
Settings defined in the `Values` property can be referenced from code as environ
const connectionString = process.env.DATABASE_CONNECTION_STRING; ```
-The `local.settings.json` file is not tracked by the GitHub repository because sensitive information, like database connection strings, are often included in the file. Since the local settings remain on your machine, you need to manually configure your settings in Azure.
+The `local.settings.json` file isn't tracked by the GitHub repository because sensitive information, like database connection strings, is often included in the file. Since the local settings remain on your machine, you need to manually configure your settings in Azure.
-Generally, configuring your settings is done infrequently, and isn't required with every build.
+Generally, you configure your settings infrequently, so this step isn't required with every build.
-## <a name="configure-application-settings"></a>Configure API application settings in Azure
+## Configure application settings
-You can configure application settings via the Azure portal or with the Azure CLI.
+You can configure application settings via the [Azure portal](https://portal.azure.com) or with the [Azure CLI](#use-the-azure-cli).
### Use the Azure portal

The Azure portal provides an interface for creating, updating and deleting application settings.

1. Go to the [Azure portal](https://portal.azure.com).
-
1. Open your static web app.
-
1. Select **Configuration** in the sidebar.
-
-1. Select the environment that you want to apply the application settings to. Staging environments are automatically created when a pull request is generated, and are promoted into production once the pull request is merged. You can set application settings per environment.
-
+1. Select the environment to which you want to apply the application settings. You can configure application settings per environment. When you create a pull request, staging environments are automatically created, and then promoted into production when you merge the pull request.
1. Select **+ Add** to add a new app setting.
-
- :::image type="content" source="media/application-settings/configuration.png" alt-text="Azure Static Web Apps configuration view":::
-
+ :::image type="content" source="media/application-settings/configuration.png" alt-text="Screenshot of Azure Static Web Apps configuration view":::
1. Enter a **Name** and **Value**.
-
1. Select **OK**.
-
1. Select **Save**.

### Use the Azure CLI
-You can use the `az staticwebapp appsettings` command to update your settings in Azure.
+Use the `az staticwebapp appsettings` command to update your settings in Azure.
-- In a terminal or command line, execute the following command to add or update a setting named `message` with a value of `Hello world`. Make sure to replace the placeholder `<YOUR_APP_ID>` with your value.
+In a terminal or command line, execute the following command to add or update a setting named `message` with a value of `Hello world`. Make sure to replace the placeholder `<YOUR_APP_ID>` with your value.
```azurecli
az staticwebapp appsettings set --name <YOUR_APP_ID> --setting-names "message=Hello world"
```
You can use the `az staticwebapp appsettings` command to update your settings in
> [!TIP] > You can add or update multiple settings by passing multiple name-value pairs to `--setting-names`.
-### View application settings with the Azure CLI
-
-Application settings are available to view through the Azure CLI.
+#### View application settings with the Azure CLI
-- In a terminal or command line, execute the following command. Make sure to replace the placeholder `<YOUR_APP_ID>` with your value.
+In a terminal or command line, execute the following command. Make sure to replace the placeholder `<YOUR_APP_ID>` with your value.
```azurecli
az staticwebapp appsettings list --name <YOUR_APP_ID>
```
-### Delete application settings with the Azure CLI
+#### Delete application settings with the Azure CLI
-Application settings can be deleted through the Azure CLI.
-
-- In a terminal or command line, execute the following command to delete a setting named `message`. Make sure to replace the placeholder `<YOUR_APP_ID>` with your value.
+In a terminal or command line, execute the following command to delete a setting named `message`. Make sure to replace the placeholder `<YOUR_APP_ID>` with your value.
```azurecli
az staticwebapp appsettings delete --name <YOUR_APP_ID> --setting-names "message"
```

> [!TIP]
- > You can delete multiple settings by passing multiple setting names to `--setting-names`.
+ > Delete multiple settings by passing multiple setting names to `--setting-names`.
## Next steps > [!div class="nextstepaction"]
-> [Configure front-end frameworks](front-end-frameworks.md)
+> [Define configuration for Azure Static Web Apps in the _staticwebapp.config.json_ file](configuration.md)
+
+## Related articles
+
+- [Override defaults with custom registration](authentication-custom.md)
+- [Define settings that control the build process](./build-configuration.md)
+- [API overview](apis-overview.md)
static-web-apps Build Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/build-configuration.md
If you want to skip building the API, you can bypass the automatic build and dep
Steps to skip building the API: -- In the *staticwebapp.config.json* file, set `apiRuntime` to the correct runtime and version. Refer to [Configure Azure Static Web Apps](configuration.md#selecting-the-api-language-runtime-version) for the list of supported runtimes and versions.
+- In the *staticwebapp.config.json* file, set `apiRuntime` to the correct runtime and version. Refer to [Configure Azure Static Web Apps](configuration.md#select-the-api-language-runtime-version) for the list of supported runtimes and versions.
```json { "platform": {
static-web-apps Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/configuration.md
Title: Configure Azure Static Web Apps
-description: Learn to configure routes, enforce security rules, and global settings for Azure Static Web Apps.
+description: Learn how to configure routes and enforce security rules and global settings for Azure Static Web Apps.
+ Previously updated : 02/03/2022 Last updated : 01/10/2023 # Configure Azure Static Web Apps
-Configuration for Azure Static Web Apps is defined in the _staticwebapp.config.json_ file, which controls the following settings:
+You can define configuration for Azure Static Web Apps in the _staticwebapp.config.json_ file, which controls the following settings:
-- Routing
-- Authentication
-- Authorization
-- Fallback rules
-- HTTP response overrides
-- Global HTTP header definitions
-- Custom MIME types
-- Networking
+- [Routing](#routes)
+- [Authentication](#authentication)
+- [Authorization](#routes)
+- [Fallback rules](#fallback-routes)
+- [HTTP response overrides](#response-overrides)
+- [Global HTTP header definitions](#global-headers)
+- [Custom MIME types](#example-configuration-file)
+- [Networking](#networking)
> [!NOTE] > [_routes.json_](https://github.com/Azure/static-web-apps/wiki/routes.json-reference-(deprecated)) that was previously used to configure routing is deprecated. Use _staticwebapp.config.json_ as described in this article to configure routing and other settings for your static web app.
You can define rules for one or more routes in your static web app. Route rules
The routing concerns significantly overlap with authentication (identifying the user) and authorization (assigning abilities to the user) concepts. Make sure to read the [authentication and authorization](authentication-authorization.md) guide along with this article.
-### Defining routes
+### Define routes
Each rule is composed of a route pattern, along with one or more of the optional rule properties. Route rules are defined in the `routes` array. See the [example configuration file](#example-configuration-file) for usage examples.
Each rule is composed of a route pattern, along with one or more of the optional
| Rule property | Required | Default value | Comment | |--|--|--|--| | `route` | Yes | n/a | The route pattern requested by the caller.<ul><li>[Wildcards](#wildcards) are supported at the end of route paths.<ul><li>For instance, the route _/admin\*_ matches any route beginning with _/admin_.</ul></ul> |
-| `methods` | No | All methods | Defines an array of request methods which match a route. Available methods include: `GET`, `HEAD`, `POST`, `PUT`, `DELETE`, `CONNECT`, `OPTIONS`, `TRACE`, and `PATCH`. |
+| `methods` | No | All methods | Defines an array of request methods that match a route. Available methods include: `GET`, `HEAD`, `POST`, `PUT`, `DELETE`, `CONNECT`, `OPTIONS`, `TRACE`, and `PATCH`. |
| `rewrite` | No | n/a | Defines the file or path returned from the request.<ul><li>Is mutually exclusive to a `redirect` rule.<li>Rewrite rules don't change the browser's location.<li>Values must be relative to the root of the app.</ul> |
| `redirect` | No | n/a | Defines the file or path redirect destination for a request.<ul><li>Is mutually exclusive to a `rewrite` rule.<li>Redirect rules change the browser's location.<li>Default response code is a [`302`](https://developer.mozilla.org/docs/Web/HTTP/Status/302) (temporary redirect), but you can override with a [`301`](https://developer.mozilla.org/docs/Web/HTTP/Status/301) (permanent redirect).</ul> |
| `statusCode` | No | `301` or `302` for redirects | The [HTTP status code](https://developer.mozilla.org/docs/Web/HTTP/Status) of the response. |
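To make these rule properties concrete, a hypothetical redirect rule might look like the following sketch (the paths are placeholders):

```json
{
  "routes": [
    {
      "route": "/old-page",
      "redirect": "/new-page",
      "statusCode": 301
    }
  ]
}
```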
Each property has a specific purpose in the request/response pipeline.
| Process after a rule is matched and authorized | `rewrite` (modifies request) <br><br>`redirect`, `headers`, `statusCode` (modifies response) |
| Authorize after a route is matched | `allowedRoles` |
-### Specifying route patterns
+### Specify route patterns
The `route` property can be an exact route or a wildcard pattern.
Common uses cases for wildcard routes include:
- Enforcing authentication and authorization rules - Implementing specialized caching rules
-### <a name="securing-routes-with-roles"></a>Securing routes with roles
+### <a name="securing-routes-with-roles"></a>Secure routes with roles
Routes are secured by adding one or more role names into a rule's `allowedRoles` array. See the [example configuration file](#example-configuration-file) for usage examples.
You can create new roles as needed in the `allowedRoles` array. To restrict a ro
> [!IMPORTANT] > When securing content, specify exact files when possible. If you have many files to secure, use wildcards after a shared prefix. For example: `/profile*` secures all possible routes that start with _/profile_, including _/profile_.
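Continuing that example, a minimal sketch that secures all _/profile*_ routes with the built-in `authenticated` role could look like this:

```json
{
  "routes": [
    {
      "route": "/profile*",
      "allowedRoles": ["authenticated"]
    }
  ]
}
```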
-#### Restricting access to entire application
+#### Restrict access to entire application
It's common to require authentication for every route in an application. To enable this, add a rule that matches all routes and include the built-in `authenticated` role in the `allowedRoles` array.
-The following example configuration blocks anonymous access and redirects all unauthenticated users to the Azure Active Directory login page.
+The following example configuration blocks anonymous access and redirects all unauthenticated users to the Azure Active Directory sign-in page.
```json {
The following example configuration blocks anonymous access and redirects all un
## Fallback routes
-Single Page Applications often rely on client-side routing. These client-side routing rules update the browser's window location without making requests back to the server. If you refresh the page, or navigate directly to URLs generated by client-side routing rules, a server-side fallback route is required to serve the appropriate HTML page (which is generally the _https://docsupdatetracker.net/index.html_ for your client-side app).
+Single Page Applications often rely on client-side routing. These client-side routing rules update the browser's window location without making requests back to the server. If you refresh the page, or go directly to URLs generated by client-side routing rules, a server-side fallback route is required to serve the appropriate HTML page, which is generally the _https://docsupdatetracker.net/index.html_ for your client-side app.
-You can define a fallback rule by adding a `navigationFallback` section. The following example returns _/https://docsupdatetracker.net/index.html_ for all static file requests that do not match a deployed file.
+You can define a fallback rule by adding a `navigationFallback` section. The following example returns _/https://docsupdatetracker.net/index.html_ for all static file requests that don't match a deployed file.
```json {
You can control which requests return the fallback file by defining a filter. In
} ```
-The example file structure below, the following outcomes are possible with this rule.
+For example, given the following directory structure, the above navigation fallback rule results in the outcomes detailed in the table that follows.
```files
├── images
The example file structure below, the following outcomes are possible with this
|--|--|--| | _/about/_ | The _/https://docsupdatetracker.net/index.html_ file | `200` | | _/images/logo.png_ | The image file | `200` |
-| _/images/icon.svg_ | The _/https://docsupdatetracker.net/index.html_ file - since the _svg_ file extension is not listed in the `/images/*.{png,jpg,gif}` filter | `200` |
+| _/images/icon.svg_ | The _/https://docsupdatetracker.net/index.html_ file - since the _svg_ file extension isn't listed in the `/images/*.{png,jpg,gif}` filter | `200` |
| _/images/unknown.png_ | File not found error | `404` | | _/css/unknown.css_ | File not found error | `404` | | _/css/global.css_ | The stylesheet file | `200` |
The following HTTP codes are available to override:
|--|--|--| | [400](https://developer.mozilla.org/docs/Web/HTTP/Status/400) | Bad request | Invalid invitation link | | [401](https://developer.mozilla.org/docs/Web/HTTP/Status/401) | Unauthorized | Request to restricted pages while unauthenticated |
-| [403](https://developer.mozilla.org/docs/Web/HTTP/Status/403) | Forbidden | <ul><li>User is logged in but doesn't have the roles required to view the page.<li>User is logged in but the runtime cannot get the user details from their identity claims.<li>There are too many users logged in to the site with custom roles, therefore the runtime can't log in the user.</ul> |
+| [403](https://developer.mozilla.org/docs/Web/HTTP/Status/403) | Forbidden | <ul><li>User is logged in but doesn't have the roles required to view the page.<li>User is logged in but the runtime can't get the user details from their identity claims.<li>There are too many users logged in to the site with custom roles, therefore the runtime can't sign in the user.</ul> |
| [404](https://developer.mozilla.org/docs/Web/HTTP/Status/404) | Not found | File not found | The following example configuration demonstrates how to override an error code.
The following example configuration demonstrates how to override an error code.
The `platform` section controls platform specific settings, such as the API language runtime version.
-### Selecting the API language runtime version
+### Select the API language runtime version
[!INCLUDE [Languages and runtimes](../../includes/static-web-apps-languages-runtimes.md)]
Define each IPv4 address block in Classless Inter-Domain Routing (CIDR) notation
} ```
-When one or more IP address blocks are specified, requests originating from IP addresses that do not match a value in `allowedIpRanges` are denied access.
+When one or more IP address blocks are specified, requests originating from IP addresses that don't match a value in `allowedIpRanges` are denied access.
In addition to IP address blocks, you can also specify [service tags](../virtual-network/service-tags-overview.md) in the `allowedIpRanges` array to restrict traffic to certain Azure services.
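For example, a sketch that admits a placeholder CIDR block alongside the `AzureFrontDoor.Backend` service tag (used here as an assumed illustration) might look like this:

```json
{
  "networking": {
    "allowedIpRanges": [
      "10.0.0.0/24",
      "AzureFrontDoor.Backend"
    ]
  }
}
```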
In addition to IP address blocks, you can also specify [service tags](../virtual
For details on how to restrict routes to authenticated users, see [Securing routes with roles](#securing-routes-with-roles).
-### Disabling cache for authenticated paths
+### Disable cache for authenticated paths
-If you set up [manual integration with Azure Front Door](front-door-manual.md), you may want to disable caching for your secured routes. If you have enabled [enterprise-grade edge](enterprise-edge.md) this is already configured for you.
+If you set up [manual integration with Azure Front Door](front-door-manual.md), you may want to disable caching for your secured routes. With [enterprise-grade edge](enterprise-edge.md) enabled, this is already configured for you.
To disable Azure Front Door caching for secured routes, add `"Cache-Control": "no-store"` to the route header definition.
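A sketch of such a rule, using a hypothetical secured path, might look like the following:

```json
{
  "routes": [
    {
      "route": "/members*",
      "allowedRoles": ["authenticated"],
      "headers": {
        "Cache-Control": "no-store"
      }
    }
  ]
}
```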
The `forwardingGateway` section configures how a static web app is accessed from
### Allowed Forwarded Hosts
-The `allowedForwardedHosts` list specifies which hostnames to accept in the [X-Forwarded-Host](https://developer.mozilla.org/docs/Web/HTTP/Headers/X-Forwarded-Host) header. If a matching domain is in the list, Static Web Apps uses the `X-Forwarded-Host` value when constructing redirect URLs, such as after a successful login.
+The `allowedForwardedHosts` list specifies which hostnames to accept in the [X-Forwarded-Host](https://developer.mozilla.org/docs/Web/HTTP/Headers/X-Forwarded-Host) header. If a matching domain is in the list, Static Web Apps uses the `X-Forwarded-Host` value when constructing redirect URLs, such as after a successful sign-in.
For Static Web Apps to function correctly behind a forwarding gateway, the request from the gateway must include the correct hostname in the `X-Forwarded-Host` header and the same hostname must be listed in `allowedForwardedHosts`.
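For instance, a sketch that allows a single placeholder hostname would look like the following:

```json
{
  "forwardingGateway": {
    "allowedForwardedHosts": [
      "www.example.com"
    ]
  }
}
```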
For example, the following configuration shows how you can add a unique identifi
A trailing slash is the `/` at the end of a URL. Conventionally, trailing slash URL refers to a directory on the web server, while a non-trailing slash indicates a file.
-Search engines treat the two URLs separately, regardless of whether it's a file or a directory. When the same content is rendered at both of these URLs, your website serves duplicate content which can negatively impact search engine optimization (SEO). When explicitly configured, Static Web Apps applies a set of URL normalization and redirect rules that help improve your website's performance and SEO.
+Search engines treat the two URLs separately, regardless of whether it's a file or a directory. When the same content is rendered at both of these URLs, your website serves duplicate content, which can negatively affect search engine optimization (SEO). When explicitly configured, Static Web Apps applies a set of URL normalization and redirect rules that help improve your website's performance and SEO.
The following normalization and redirect rules apply for each of the available configurations:

### Always
-When setting `trailingSlash` to `always`, all requests that don't include a trailing slash are redirected to a trailing slash URL. For example, `/contact` is redirected to `/contact/`.
+When you set `trailingSlash` to `always`, all requests that don't include a trailing slash are redirected to a trailing slash URL. For example, `/contact` is redirected to `/contact/`.
```json "trailingSlash": "always"
When setting `trailingSlash` to `never`, all requests ending in a trailing slash
### Auto
-When setting `trailingSlash` to `auto`, all requests to folders are redirected to a URL with a trailing slash. All requests to files are redirected to a non-trailing slash URL.
+When you set `trailingSlash` to `auto`, all requests to folders are redirected to a URL with a trailing slash. All requests to files are redirected to a non-trailing slash URL.
```json "trailingSlash": "auto"
See the [Quotas article](quotas.md) for general restrictions and limitations.
## Next steps > [!div class="nextstepaction"]
-> [Setup authentication and authorization](authentication-authorization.md)
+> [Set up authentication and authorization](authentication-authorization.md)
+
+## Related articles
+
+- [Set application-level settings and environment variables that can be used by backend APIs](application-settings.md)
+- [Define settings that control the build process](./build-configuration.md)
storage Scalability Targets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/scalability-targets.md
Previously updated : 04/01/2021 Last updated : 01/11/2023
storage File Sync Storsimple Cost Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-storsimple-cost-comparison.md
description: Learn how you can save money and modernize your storage infrastruct
Previously updated : 4/18/2022 Last updated : 01/12/2023 # Comparing the costs of StorSimple to Azure File Sync
-StorSimple is a discontinued physical and virtual appliance product offered by Microsoft to help customers manage their on-premises storage footprint by tiering data to Azure. The [StorSimple 8000 series appliance](/lifecycle/products/azure-storsimple-8000-series) and the [StorSimple 1200 series appliance](/lifecycle/products/azure-storsimple-1200-series) will reach their end of life on December 31, 2022. It is imperative that you begin planning and executing your migration from StorSimple now.
+StorSimple is a discontinued physical and virtual appliance product offered by Microsoft to help customers manage their on-premises storage footprint by tiering data to Azure.
+
+> [!NOTE]
+> The StorSimple Service (including the StorSimple Device Manager for 8000 and 1200 series and StorSimple Data Manager) has reached the end of support. The end of support for StorSimple was published in 2019 on the [Microsoft LifeCycle Policy](/lifecycle/products/?terms=storsimple) and [Azure Communications](https://azure.microsoft.com/updates/storsimpleeol/) pages. Additional notifications were sent via email and posted on the Azure portal and in the [StorSimple overview](../../storsimple/storsimple-overview.md). Contact [Microsoft Support](https://azure.microsoft.com/support/create-ticket/) for additional details.
For most use cases of StorSimple, Azure File Sync is the recommended migration target for file shares being used with StorSimple. Azure File Sync supports similar capabilities to StorSimple, such as the ability to tier to the cloud. However, it provides additional features that StorSimple does not have, such as:
storage Storage Files Migration Storsimple 1200 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-migration-storsimple-1200.md
description: Learn how to migrate a StorSimple 1200 series virtual appliance to
Previously updated : 03/09/2020 Last updated : 01/12/2023 # StorSimple 1200 migration to Azure File Sync
-StorSimple 1200 series is a virtual appliance that is run in an on-premises data center. It is possible to migrate the data from this appliance to an Azure File Sync environment. Azure File Sync is the default and strategic long-term Azure service that StorSimple appliances can be migrated to.
+StorSimple 1200 series is a virtual appliance that runs in an on-premises data center. It's possible to migrate the data from this appliance to an Azure File Sync environment. Azure File Sync is the default and strategic long-term Azure service that StorSimple appliances can be migrated to. This article provides the necessary background knowledge and migration steps for a successful migration to Azure File Sync.
-StorSimple 1200 series will reach its [end-of-life](https://support.microsoft.com/en-us/lifecycle/search?alpha=StorSimple%201200%20Series) in December 2022. It is important to begin planning your migration as soon as possible. This article provides the necessary background knowledge and migrations steps for a successful migration to Azure File Sync.
+> [!NOTE]
+> The StorSimple Service (including the StorSimple Device Manager for 8000 and 1200 series and StorSimple Data Manager) has reached the end of support. The end of support for StorSimple was published in 2019 on the [Microsoft LifeCycle Policy](/lifecycle/products/?terms=storsimple) and [Azure Communications](https://azure.microsoft.com/updates/storsimpleeol/) pages. Additional notifications were sent via email and posted on the Azure portal and in the [StorSimple overview](../../storsimple/storsimple-overview.md). Contact [Microsoft Support](https://azure.microsoft.com/support/create-ticket/) for additional details.
## Applies to | File share type | SMB | NFS |
storage Storage Files Migration Storsimple 8000 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-migration-storsimple-8000.md
description: Learn how to migrate a StorSimple 8100 or 8600 appliance to Azure F
Previously updated : 11/28/2022 Last updated : 01/12/2023
# StorSimple 8100 and 8600 migration to Azure File Sync
-The StorSimple 8000 series is represented by either the 8100 or the 8600 physical, on-premises appliances and their cloud service components. StorSimple 8010 and 8020 virtual appliances are also covered in this migration guide. It's possible to migrate the data from either of these appliances to Azure file shares with optional Azure File Sync. Azure File Sync is the default and strategic long-term Azure service that replaces the StorSimple on-premises functionality.
+The StorSimple 8000 series is represented by either the 8100 or the 8600 physical, on-premises appliances and their cloud service components. StorSimple 8010 and 8020 virtual appliances are also covered in this migration guide. It's possible to migrate the data from either of these appliances to Azure file shares with optional Azure File Sync. Azure File Sync is the default and strategic long-term Azure service that replaces the StorSimple on-premises functionality. This article provides the necessary background knowledge and migration steps for a successful migration to Azure File Sync.
-The StorSimple 8000 series will reach its [end of life](/lifecycle/products/azure-storsimple-8000-series) in December 2022. It's important to begin planning your migration as soon as possible. This article provides the necessary background knowledge and migration steps for a successful migration to Azure File Sync.
+> [!NOTE]
+> The StorSimple Service (including the StorSimple Device Manager for 8000 and 1200 series and StorSimple Data Manager) has reached the end of support. The end of support for StorSimple was published in 2019 on the [Microsoft LifeCycle Policy](/lifecycle/products/?terms=storsimple) and [Azure Communications](https://azure.microsoft.com/updates/storsimpleeol/) pages. Additional notifications were sent via email and posted on the Azure portal and in the [StorSimple overview](../../storsimple/storsimple-overview.md). Contact [Microsoft Support](https://azure.microsoft.com/support/create-ticket/) for additional details.
:::row::: :::column:::
synapse-analytics Apache Spark Advisor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/monitoring/apache-spark-advisor.md
Title: Spark Advisor
+ Title: Apache Spark Advisor in Azure Synapse Analytics
description: Spark Advisor is a system to automatically analyze commands/queries, and show the appropriate advice when a customer executes code or query.
Last updated 06/23/2022
-# Spark Advisor
+# Apache Spark Advisor in Azure Synapse Analytics
-Spark Advisor is a system to automatically analyze commands/queries, and show the appropriate advice when customer executes code or query. After applying the advice, you would have chance to improve your execution performance, decrease cost and fix the execution failures.
+The Apache Spark advisor analyzes commands and code run by Spark and displays real-time advice for Notebook runs. The Spark advisor has built-in patterns to help users avoid common mistakes, offer recommendations for code optimization, perform error analysis, and locate the root cause of failures.
+## Built-in advice
-
-## May return inconsistent results when using 'randomSplit'
+### May return inconsistent results when using 'randomSplit'
Inconsistent or inaccurate results may be returned when working with the results of the 'randomSplit' method. Use Apache Spark (RDD) caching before using the 'randomSplit' method. Method randomSplit() is equivalent to performing sample() on your data frame multiple times, with each sample refetching, partitioning, and sorting your data frame within partitions. The data distribution across partitions and sorting order is important for both randomSplit() and sample(). If either changes upon data refetch, there may be duplicates, or missing values across splits and the same sample using the same seed may produce different results. These inconsistencies may not happen on every run, but to eliminate them completely, cache your data frame, repartition on a column(s), or apply aggregate functions such as groupBy.
-## Table/view name is already in use
+### Table/view name is already in use
A view already exists with the same name as the created table, or a table already exists with the same name as the created view. When this name is used in queries or applications, only the view will be returned no matter which one created first. To avoid conflicts, rename either the table or the view.
-## Hints related advise
### Unable to recognize a hint

The selected query contains a hint that isn't recognized. Verify that the hint is spelled correctly.
### Unable to recognize a hint The selected query contains a hint that isn't recognized. Verify that the hint is spelled correctly.
The selected query contains a hint that prevents another hint from being applied
spark.sql("SELECT /*+ BROADCAST(t1), MERGE(t1, t2) */ * FROM t1 INNER JOIN t2 ON t1.str = t2.str") ```
-## Enable 'spark.advise.divisionExprConvertRule.enable' to reduce rounding error propagation
+### Enable 'spark.advise.divisionExprConvertRule.enable' to reduce rounding error propagation
This query contains an expression with the Double type. We recommend that you enable the configuration 'spark.advise.divisionExprConvertRule.enable', which can help reduce division expressions and rounding error propagation.

```text
"t.a/t.b/t.c" converts into "t.a/(t.b * t.c)"
```
-## Enable 'spark.advise.nonEqJoinConvertRule.enable' to improve query performance
+### Enable 'spark.advise.nonEqJoinConvertRule.enable' to improve query performance
This query contains a time-consuming join due to an "Or" condition within the query. We recommend that you enable the configuration 'spark.advise.nonEqJoinConvertRule.enable', which can help convert the join triggered by the "Or" condition to SMJ (sort-merge join) or BHJ (broadcast hash join) to accelerate this query.
-## Optimize delta table with small files compaction
+### Optimize delta table with small files compaction
This query is on a delta table with many small files. To improve the performance of queries, run the OPTIMIZE command on the delta table. More details can be found in this [article](https://aka.ms/small-file-advise-delta).
-## Optimize Delta table with ZOrder
+### Optimize Delta table with ZOrder
This query is on a Delta table and contains a highly selective filter. To improve the performance of queries, run the OPTIMIZE ZORDER BY command on the Delta table. More details can be found in this [article](https://aka.ms/small-file-advise-delta).
+
+## User experience
+
+The Apache Spark advisor displays advice, including information, warnings, and errors, in the notebook cell output in real time.
+
+* Info
+
+ ![Screenshot showing advice information.](./media/apache-spark-advisor/info.png)
+
+* Warning
+
+ ![Screenshot showing an advice warning.](./media/apache-spark-advisor/warning.png)
+
+* Errors
+
+ ![Screenshot showing an advice error.](./media/apache-spark-advisor/error.png)
+ ## Next steps For more information on monitoring pipeline runs, see the [Monitor pipeline runs using Synapse Studio](how-to-monitor-pipeline-runs.md) article.
synapse-analytics System Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/partner/system-integration.md
This article highlights Microsoft system integration partner companies building
| ![Blue Granite](./media/system-integration/blue-granite-logo.png) |**Blue Granite**<br>The BlueGranite Catalyst for Analytics is an engagement approach that features their ΓÇ£think big, but start smallΓÇ¥ philosophy. Starting with collaborative envisioning and strategy sessions, Blue Granite work with clients to discover, create, and realize the value of new modern data and analytics solutions, using the latest technologies on the Microsoft platform.|[Partner page](https://www.blue-granite.com/)<br>| | ![Capax Global](./media/system-integration/capax-global-logo.png) |**Capax Global**<br>We improve your business by making better use of information you already have. Building custom solutions that align to your business goals, and setting you up for long-term success. We combine well-established patterns and practices with technology while using our team's wide range of industry and commercial software development experience. We share a passion for technology, innovation, and client satisfaction. Our pride for what we do drives the success of our projects and is fundamental to why people partner with us.|[Partner page](https://www.capaxglobal.com/)<br>| | ![Coeo](./media/system-integration/coeo-logo.png) |**Coeo**<br>CoeoΓÇÖs team includes cloud consultants with deep expertise in Azure databases, and BI consultants dedicated to providing flexible and scalable analytic solutions. Coeo can help you move to a hybrid or full Azure solution.|[Partner page](https://www.coeo.com/solution/technology/microsoft-azure/)<br>|
-| ![Cognizant](./media/system-integration/cognizant-logo.png) |**Cognizant**<br>As a Microsoft strategic partner, Cognizant has the consulting skills and experience to help customers make the journey to the cloud. For each client project, Cognizant uses its strong partnership with Microsoft to maximize customer benefits from the Azure architecture.|[Partner page](https://www.cognizant.com/partners/microsoftazure)<br>|
+| ![Cognizant](./media/system-integration/cognizant-logo.png) |**Cognizant**<br>As a Microsoft strategic partner, Cognizant has the consulting skills and experience to help customers make the journey to the cloud. For each client project, Cognizant uses its strong partnership with Microsoft to maximize customer benefits from the Azure architecture.|[Partner page](https://mbg.cognizant.com/technologies-capabilities/microsoft-azure/)<br>|
| ![Neal Analytics](./media/system-integration/neal-analytics-logo.png) |**Neal Analytics**<br>Neal Analytics helps companies navigate their digital transformation journey in converting data into valuable assets and a competitive advantage. With our machine learning and data engineering expertise, we use data to drive margin increases and profitable analytics projects. Comprised of consultants specializing in Data Science, Business Intelligence, Cognitive Services, practical AI, Data Management, and IoT, Neal Analytics is trusted to solve unique business problems and optimize operations across industries.|[Partner page](https://nealanalytics.com/)<br>| | ![Pragmatic Works](./media/system-integration/pragmatic-works-logo.png) |**Pragmatic Works**<br>Pragmatic Works can help you capitalize on the value of your data by empowering more users and applications on the same dataset. We kickstart, accelerate, and maintain your cloud environment with a range of solutions that fit your business needs.|[Partner page](https://www.pragmaticworks.com/)<br>|
synapse-analytics Pause And Resume Compute Workspace Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/pause-and-resume-compute-workspace-powershell.md
You can use Azure PowerShell to pause and resume dedicated SQL pool in a Synapse
If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin. > [!NOTE]
-> This article applies to dedicated SQL pools created in Azure Synapse Workspaces and not dedicated SQL pools (formerly SQL DW). There are different PowerShell cmdlets to use for each, for example, use `Suspend-AzSynapsePool` for a dedicated SQL pool (formerly SQL DW), but `Suspend-AzSynapseSqlPool` for a dedicated SQL pool in an Azure Synapse Workspace. For instructions to pause and resume a dedicated SQL pool (formerly SQL DW), see [Quickstart: Pause and resume compute in dedicated SQL pool (formerly SQL DW) with Azure PowerShell](pause-and-resume-compute-powershell.md).
+> This article applies to dedicated SQL pools created in Azure Synapse Workspaces and not dedicated SQL pools (formerly SQL DW). There are different PowerShell cmdlets to use for each, for example, use `Suspend-AzSqlDatabase` for a dedicated SQL pool (formerly SQL DW), but `Suspend-AzSynapseSqlPool` for a dedicated SQL pool in an Azure Synapse Workspace. For instructions to pause and resume a dedicated SQL pool (formerly SQL DW), see [Quickstart: Pause and resume compute in dedicated SQL pool (formerly SQL DW) with Azure PowerShell](pause-and-resume-compute-powershell.md).
> For more on the differences between dedicated SQL pool (formerly SQL DW) and dedicated SQL pools in Azure Synapse Workspaces, read [What's the difference between Azure Synapse (formerly SQL DW) and Azure Synapse Analytics Workspace](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/what-s-the-difference-between-azure-synapse-formerly-sql-dw-and/ba-p/3597772). ## Before you begin
To save costs, you can pause and resume compute resources on-demand. For example
> [!NOTE] > There is no charge for compute resources while the pool is paused. However, you continue to be charged for storage.
-To pause a pool, use the [Suspend-AzSynapseSqlPool](/powershell/module/az.synapse/suspend-azsynapsepool?toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json) cmdlet. The following example pauses a SQL pool named `mySampleDataWarehouse` hosted in workspace named `synapseworkspacename`. The server is in an Azure resource group named **myResourceGroup**.
+To pause a pool, use the [Suspend-AzSynapseSqlPool](/powershell/module/az.synapse/suspend-azsynapsesqlpool?toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json) cmdlet. The following example pauses a SQL pool named `mySampleDataWarehouse` hosted in workspace named `synapseworkspacename`. The server is in an Azure resource group named **myResourceGroup**.
```powershell
Suspend-AzSynapseSqlPool -ResourceGroupName "myResourceGroup" `
  -WorkspaceName "synapseworkspacename" -Name "mySampleDataWarehouse"
```
-The following example retrieves the pool into the `$pool` object. It then pipes the object to [Suspend-AzSynapseSqlPool](/powershell/module/az.synapse/suspend-azsynapsepool?toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json). The results are stored in the object `$resultPool`. The final command shows the results.
+The following example retrieves the pool into the `$pool` object. It then pipes the object to [Suspend-AzSynapseSqlPool](/powershell/module/az.synapse/suspend-azsynapsesqlpool?toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json). The results are stored in the object `$resultPool`. The final command shows the results.
```powershell
$pool = Get-AzSynapseSqlPool -ResourceGroupName "myResourceGroup" `
The **Status** output of the resulting `$resultPool` object contains the new sta
## Resume compute
-To start a pool, use the [Resume-AzSynapsePool](/powershell/module/az.synapse/resume-AzSynapsePool?toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json) cmdlet. The following example starts a pool named `mySampleDataWarehouse` hosted on a workspace named `sqlpoolservername`. The server is in an Azure resource group named **myResourceGroup**.
+To start a pool, use the [Resume-AzSynapseSqlPool](/powershell/module/az.synapse/resume-AzSynapseSqlPool?toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json) cmdlet. The following example starts a pool named `mySampleDataWarehouse` hosted on a workspace named `sqlpoolservername`. The server is in an Azure resource group named **myResourceGroup**.
```powershell
-Resume-AzSynapsePool ΓÇôResourceGroupName "myResourceGroup" `
+Resume-AzSynapseSqlPool ΓÇôResourceGroupName "myResourceGroup" `
-WorkspaceName "synapseworkspacename" -Name "mySampleDataWarehouse" ```
-The next example retrieves the pool into the `$pool` object. It then pipes the object to [Resume-AzSynapsePool](/powershell/module/az.synapse/resume-AzSynapsePool?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json) and stores the results in `$resultpool`. The final command shows the results.
+The next example retrieves the pool into the `$pool` object. It then pipes the object to [Resume-AzSynapseSqlPool](/powershell/module/az.synapse/resume-AzSynapseSqlPool?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json) and stores the results in `$resultpool`. The final command shows the results.
```powershell
$pool = Get-AzSynapseSqlPool -ResourceGroupName "myResourceGroup" `
  -WorkspaceName "synapseworkspacename" -Name "mySampleDataWarehouse"
-$resultPool = $pool | Resume-AzSynapsePool
+$resultPool = $pool | Resume-AzSynapseSqlPool
$resultPool ```
synapse-analytics Troubleshoot Sql Link Creation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/troubleshoot/troubleshoot-sql-link-creation.md
During Azure Synapse Link connection creation, the link creation process may han
3. An incorrect managed identity was provided in the Synapse Link creation, for example, by manually providing an incorrect principal ID or Azure Key vault information.
-To confirm these potential causes, query the [changefeed.change_feed_errors](/sql/relational-databases/system-tables/changefeed-change-feed-errors-transact-sql) dynamic management view and look for error number 22739.
+To confirm these potential causes, query the [sys.dm_change_feed_errors](/sql/relational-databases/system-dynamic-management-views/sys-dm-change-feed-errors) dynamic management view and look for error number 22739.
```sql
SELECT session_id, error_number, error_message, source_task, entry_time
FROM sys.dm_change_feed_errors
If the SAMI is not enabled, enable the SAMI. Regardless, refresh the Synapse Lin
2. Provide the desired subscription of the source database. Select **Next**. 3. For **Service type**, select **SQL Database**. 4. For **Resource**, select the source database where the initial snapshot is failing.
- 5. For **Summary**, provide any error numbers from `changefeed.change_feed_errors`.
+ 5. For **Summary**, provide any error numbers from `sys.dm_change_feed_errors`.
6. For **Problem type**, select **Data Sync, Replication, CDC and Change Tracking**. 7. For **Problem subtype**, select **Transactional Replication**.
Disable and re-enable the SAMI for the Azure SQL logical server.
- [Managed identities in Azure AD for Azure SQL](/azure/azure-sql/database/authentication-azure-ad-user-assigned-managed-identity) - [Azure Synapse Link for SQL FAQ](../faq.yml) - [Known limitations and issues with Azure Synapse Link for SQL](../synapse-link-for-sql-known-issues.md)
+ - [sys.dm_change_feed_errors (Transact-SQL)](/sql/relational-databases/system-dynamic-management-views/sys-dm-change-feed-errors)
virtual-desktop App Attach Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/app-attach-azure-portal.md
Title: Azure Virtual Desktop MSIX app attach portal - Azure
description: How to set up MSIX app attach for Azure Virtual Desktop using the Azure portal. Previously updated : 04/13/2021 Last updated : 01/12/2023
Here's what you need to configure MSIX app attach:
- The MSIX packaging tool. - An MSIX-packaged application expanded into an MSIX image that's uploaded into a file share. - A file share in your Azure Virtual Desktop deployment where the MSIX package will be stored.-- The file share where you uploaded the MSIX image must also be accessible to all virtual machines (VMs) in the host pool. Users will need read-only permissions to access the image.
+- [The file share where you uploaded the MSIX image](app-attach-file-share.md) must also be accessible to all virtual machines (VMs) in the host pool. Users will need read-only permissions to access the image.
- If the certificate isn't publicly trusted, follow the instructions in [Install certificates](app-attach.md#install-certificates). ## Turn off automatic updates for MSIX app attach applications
virtual-desktop Private Link Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/private-link-setup.md
Title: Set up Private Link for Azure Virtual Desktop preview - Azure
description: How to set up Private Link for Azure Virtual Desktop (preview). Previously updated : 12/06/2022 Last updated : 01/12/2023
To configure Private Link in the Azure portal:
1. In the **Virtual Network** tab, make sure the values in the **Virtual Network** and **subnet** fields are correct.
-1. In the **Private IP configuration** field, choose whether you want to dynamically or statically allocate IP addresses from the subnet you selected in the previous step. <!--What's the difference between these two and why should I choose each?-->
+1. In the **Private IP configuration** field, choose whether you want to dynamically or statically allocate IP addresses from the subnet you selected in the previous step.
- If you choose to statically allocate IP addresses, you'll need to fill in the **Name** and **Private IP** for each listed member.
Follow the directions in [Tutorial: Filter network traffic with a network securi
When you set up your NSG, you must configure it to allow both the URLs in the [required URL list](safe-url-list.md) and your private endpoints. Make sure to include the URLs for Azure Monitor.
->[!NOTE]
->If you intend to restrict network ports from either the user client devices or your session host VMs to the private endpoints, you will need to allow traffic across the entire TCP dynamic port range of 1 - 65535 to the private endpoint for the host pool resource using the *connection* sub-resource. If you restrict ports to the endpoint, your users may not be able to connect successfully to Azure Virtual Desktop.
+> [!NOTE]
+> If you intend to restrict network ports from either the user client devices or your session host VMs to the private endpoints, you will need to allow traffic across the entire TCP dynamic port range of 1 - 65535 to the private endpoint for the host pool resource using the *connection* sub-resource. The entire TCP dynamic port range is needed because port mapping is used to reach all global gateways through the single private endpoint IP address corresponding to the *connection* sub-resource.
+>
+> If you restrict ports to the private endpoint, your users may not be able to connect successfully to Azure Virtual Desktop.
## Validate your Private Link deployment
virtual-machines Automatic Vm Guest Patching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/automatic-vm-guest-patching.md
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets
-Enabling automatic VM guest patching for your Azure VMs helps ease update management by safely and automatically patching virtual machines to maintain security compliance.
+Enabling automatic VM guest patching for your Azure VMs helps ease update management by safely and automatically patching virtual machines to maintain security compliance, while limiting the blast radius of any given patch across your VMs.
Automatic VM guest patching has the following characteristics: - Patches classified as *Critical* or *Security* are automatically downloaded and applied on the VM. - Patches are applied during off-peak hours in the VM's time zone. - Patch orchestration is managed by Azure and patches are applied following [availability-first principles](#availability-first-updates). - Virtual machine health, as determined through platform health signals, is monitored to detect patching failures.
+- Application health can be monitored through the [Application Health extension](../virtual-machine-scale-sets/virtual-machine-scale-sets-health-extension.md).
- Works for all VM sizes. ## How does automatic VM guest patching work?
Patches are installed within 30 days of the monthly patch releases, following av
Definition updates and other patches not classified as *Critical* or *Security* will not be installed through automatic VM guest patching. To install patches with other patch classifications or schedule patch installation within your own custom maintenance window, you can use [Update Management](./windows/tutorial-config-management.md#manage-windows-updates).
+For IaaS VMs, customers can choose to configure VMs to enable automatic VM guest patching. Doing so limits the blast radius of VMs that receive the updated patch and performs an orchestrated update of the VMs. The service also provides [health monitoring](../virtual-machine-scale-sets/virtual-machine-scale-sets-health-extension.md) to detect any issues with the update.
+ ### Availability-first Updates
-The patch installation process is orchestrated globally by Azure for all VMs that have automatic VM guest patching enabled. This orchestration follows availability-first principles across different levels of availability provided by Azure.
+The patch installation process is orchestrated globally by Azure for all VMs that have automatic VM guest patching enabled. This orchestration follows availability-first principles across different levels of availability provided by Azure.
For a group of virtual machines undergoing an update, the Azure platform will orchestrate updates:
For a group of virtual machines undergoing an update, the Azure platform will or
- All VMs in a common availability set are not updated concurrently. - VMs in a common availability set are updated within Update Domain boundaries and VMs across multiple Update Domains are not updated concurrently.
+Narrowing the scope of VMs that are patched across regions, within a region, or within an availability set limits the blast radius of the patch. With health monitoring, any potential issues are flagged without impacting the entire fleet.
+ The patch installation date for a given VM may vary month-to-month, as a specific VM may be picked up in a different batch between monthly patching cycles. ### Which patches are installed?
As a new rollout is triggered every month, a VM will receive at least one patch
|-||--| | Canonical | UbuntuServer | 16.04-LTS | | Canonical | UbuntuServer | 18.04-LTS |
-| Canonical | UbuntuServer | 18_04-LTS-Gen2 |
+| Canonical | UbuntuServer | 18.04-LTS-Gen2 |
| Canonical | 0001-com-ubuntu-pro-bionic | pro-18_04-lts | | Canonical | 0001-com-ubuntu-server-focal | 20_04-lts | | Canonical | 0001-com-ubuntu-server-focal | 20_04-lts-gen2 |
VMs on Azure now support the following patch orchestration modes:
- Custom images are not currently supported. ## Enable automatic VM guest patching
-Automatic VM guest patching can be enabled on any Windows or Linux VM that is created from a supported platform image. To enable automatic VM guest patching on a Windows VM, ensure that the property *osProfile.windowsConfiguration.enableAutomaticUpdates* is set to *true* in the VM template definition. This property can only be set when creating the VM. This additional property is not applicable for Linux VMs.
+Automatic VM guest patching can be enabled on any Windows or Linux VM that is created from a supported platform image.
+ ### REST API for Linux VMs The following example describes how to enable automatic VM guest patching:
PUT on `/subscriptions/subscription_id/resourceGroups/myResourceGroup/providers/
} ```
-### Azure PowerShell for Windows VMs
-Use the [Set-AzVMOperatingSystem](/powershell/module/az.compute/set-azvmoperatingsystem) cmdlet to enable automatic VM guest patching when creating or updating a VM.
+### Azure PowerShell when creating a Windows VM
+Use the [Set-AzVMOperatingSystem](/powershell/module/az.compute/set-azvmoperatingsystem) cmdlet to enable automatic VM guest patching when creating a VM.
```azurepowershell-interactive Set-AzVMOperatingSystem -VM $VirtualMachine -Windows -ComputerName $ComputerName -Credential $Credential -ProvisionVMAgent -EnableAutoUpdate -PatchMode "AutomaticByPlatform" ```
+### Azure PowerShell when updating a Windows VM
+Use the [Set-AzVMOperatingSystem](/powershell/module/az.compute/set-azvmoperatingsystem) and [Update-AzVM](/powershell/module/az.compute/update-azvm) cmdlets to enable automatic VM guest patching on an existing VM.
+
+```azurepowershell-interactive
+# Retrieve the existing VM into an object, set the patch mode, then apply the update
+$VirtualMachine = Get-AzVM -ResourceGroupName "myResourceGroup" -Name "myVM"
+Set-AzVMOperatingSystem -VM $VirtualMachine -PatchMode "AutomaticByPlatform"
+Update-AzVM -VM $VirtualMachine -ResourceGroupName "myResourceGroup"
+```
### Azure CLI for Windows VMs

Use [az vm create](/cli/azure/vm#az-vm-create) to enable automatic VM guest patching when creating a new VM. The following example configures automatic VM guest patching for a VM named *myVM* in the resource group named *myResourceGroup*:
virtual-machines Azure Compute Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/azure-compute-gallery.md
There are limits, per subscription, for deploying resources using Azure Compute
- 100 galleries, per subscription, per region
- 1,000 image definitions, per subscription, per region
- 10,000 image versions, per subscription, per region
-- 100 image version replicas, per subscription, per region however 50 replicas should be sufficient for most use cases
+- 100 replicas per image version; however, 50 replicas should be sufficient for most use cases
- Any disk attached to the image must be less than or equal to 1 TB in size

For more information, see [Check resource usage against limits](../networking/check-usage-against-limits.md) for examples on how to check your current usage. A CLI sketch for counting gallery resources follows.
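As a hedged illustration of checking usage with the Azure CLI (the resource group, gallery, and image names below are placeholders), you can count existing gallery resources:

```azurecli-interactive
# Number of galleries visible in the subscription
az sig list --query "length(@)"

# Number of image definitions in one gallery (placeholder names)
az sig image-definition list --resource-group myResourceGroup --gallery-name myGallery --query "length(@)"

# Number of image versions under one image definition (placeholder names)
az sig image-version list --resource-group myResourceGroup --gallery-name myGallery --gallery-image-definition myImage --query "length(@)"
```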
virtual-machines Hc Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/hc-series.md
HC-series VMs feature 100 Gb/sec Mellanox EDR InfiniBand. These VMs are connecte
| Size | vCPU | Processor | Memory (GiB) | Memory bandwidth GB/s | Base CPU frequency (GHz) | All-cores frequency (GHz, peak) | Single-core frequency (GHz, peak) | RDMA performance (Gb/s) | MPI support | Temp storage (GiB) | Max data disks | Max Ethernet vNICs |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Standard_HC44rs | 44 | Intel Xeon Platinum 8168 | 352 | 191 | 2.7 | 3.4 | 3.7 | 100 | All | 700 | 4 | 8 |
+| Standard_HC44-16rs | 16 | Intel Xeon Platinum 8168 | 352 | 191 | 2.7 | 3.4 | 3.7 | 100 | All | 700 | 4 | 8 |
+| Standard_HC44-32rs | 32 | Intel Xeon Platinum 8168 | 352 | 191 | 2.7 | 3.4 | 3.7 | 100 | All | 700 | 4 | 8 |
+
Learn more about the:

- [Architecture and VM topology](./workloads/hpc/hc-series-overview.md)
virtual-machines Image Version Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/image-version-encryption.md
Previously updated : 1/9/2023 Last updated : 1/11/2023 ms.devlang: azurecli
virtual-machines Manage Restore Points https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/manage-restore-points.md
Call the [Restore Point Collections - Get](/rest/api/compute/restore-point-colle
### Step 2: Create a disk
-After you have the list of disk restore point IDs, you can use the [Disks - Create Or Update](/rest/api/compute/disks/create-or-update) API to create a disk from the disk restore points.
+After you have the list of disk restore point IDs, you can use the [Disks - Create Or Update](/rest/api/compute/disks/create-or-update) API to create a disk from the disk restore points. You can choose a zone when creating the disk; the zone can be different from the zone in which the disk restore point exists.
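A hedged CLI sketch of this step, assuming a recent Azure CLI version in which `az disk create --source` accepts a disk restore point ID (all names and the zone are placeholders):

```azurecli-interactive
# Create a managed disk from a disk restore point, optionally in a different zone.
# The --source value is a placeholder - use a disk restore point ID from step 1.
az disk create \
  --resource-group myResourceGroup \
  --name myRestoredDisk \
  --zone 2 \
  --source "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.Compute/restorePointCollections/<rpc>/restorePoints/<rp>/diskRestorePoints/<drp>"
```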
## Restore a VM with a restore point
virtual-machines Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/hpc/configure.md
It is also recommended to create [custom VM images](../../linux/tutorial-custom-
### VM sizes supported by the HPC VM images

#### InfiniBand OFED support
-The latest Azure HPC marketplace images come with Mellanox OFED 5.1 and above, which do not support ConnectX3-Pro InfiniBand cards. These VM images only support ConnextX-5 and newer InfiniBand cards. This implies the following VM size support matrix for the InfiniBand OFED in these HPC VM images:
-- [H-series](../../sizes-hpc.md): HB, HC, HBv2, HBv3
+The latest Azure HPC marketplace images come with Mellanox OFED 5.1 and above, which do not support ConnectX3-Pro InfiniBand cards. ConnectX-3 Pro InfiniBand cards require the MOFED 4.9 LTS version. These VM images only support ConnectX-5 and newer InfiniBand cards. This implies the following VM size support matrix for the InfiniBand OFED in these HPC VM images:
+- [H-series](../../sizes-hpc.md): HB, HC, HBv2, HBv3, HBv4
- [N-series](../../sizes-gpu.md): NDv2, NDv4

#### GPU driver support
All of the VM sizes in the N-series support [Gen 2 VMs](../../generation-2.md),
### CentOS-HPC VM images

#### SR-IOV enabled VMs
-For SR-IOV enabled [RDMA capable VMs](../../sizes-hpc.md#rdma-capable-instances), CentOS-HPC VM images version 7.6 and later are suitable. These VM images come optimized and pre-loaded with the Mellanox OFED drivers for RDMA and various commonly used MPI libraries and scientific computing packages. Refer to the [VM size support matrix](#vm-sizes-supported-by-the-hpc-vm-images) above.
+For SR-IOV enabled [RDMA capable VMs](../../sizes-hpc.md#rdma-capable-instances), [Ubuntu-HPC VM images](#ubuntu-hpc-vm-images) and CentOS-HPC VM images version 7.6 and later are suitable. These VM images come optimized and pre-loaded with the Mellanox OFED drivers for RDMA and various commonly used MPI libraries and scientific computing packages. Refer to the [VM size support matrix](#vm-sizes-supported-by-the-hpc-vm-images) above.
- The available or latest versions of the VM images can be listed with the following information using [CLI](/cli/azure/vm/image#az-vm-image-list) or [Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/openlogic.centos-hpc?tab=Overview). A CLI sketch follows this list.

  ```bash
  "publisher": "OpenLogic",
  "offer": "CentOS-HPC",
  ```
-- Scripts used in the creation of the CentOS-HPC version 7.6 and later VM images from a base CentOS Marketplace image are on the [azhpc-images repo](https://github.com/Azure/azhpc-images/tree/master/centos).
-- Additionally, details on what's included in the CentOS-HPC version 7.6 and later VM images, and how to deploy them are in a [TechCommunity article](https://techcommunity.microsoft.com/t5/azure-compute/azure-hpc-vm-images/ba-p/977094).
+- Scripts used in the creation of the [Ubuntu-HPC VM images](#ubuntu-hpc-vm-images) and CentOS-HPC version 7.6 and later VM images from a base CentOS Marketplace image are on the [azhpc-images repo](https://github.com/Azure/azhpc-images/tree/master/centos).
+- Additionally, details on what's included in the [Ubuntu-HPC VM images](#ubuntu-hpc-vm-images) and CentOS-HPC version 7.6 and later VM images, and how to deploy them are in a [TechCommunity article](https://techcommunity.microsoft.com/t5/azure-compute/azure-hpc-vm-images/ba-p/977094).
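For example, a CLI query along these lines lists the published CentOS-HPC images (the output format is a choice, not a requirement):

```azurecli-interactive
# List all published CentOS-HPC image versions
az vm image list --publisher OpenLogic --offer CentOS-HPC --all --output table
```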
> [!NOTE]
> Among the CentOS-HPC VM images, currently only the version 7.9 VM image additionally comes pre-configured with the Nvidia GPU drivers and GPU compute software stack (CUDA, NCCL).
For SR-IOV enabled [RDMA capable VMs](../../sizes-hpc.md#rdma-capable-instances)
>- OpenLogic:CentOS-HPC:8_1:8.1.2020062400
>- OpenLogic:CentOS-HPC:8_1-gen2:8.1.2020062401
-#### Non SR-IOV enabled VMs
-For non-SR-IOV enabled [RDMA capable VMs](../../sizes-hpc.md#rdma-capable-instances), CentOS-HPC version 6.5 or a later version, up to 7.4 in the Marketplace are suitable. As an example, for [H16-series VMs](../../h-series.md), versions 7.1 to 7.4 are recommended. These VM images come pre-loaded with the Network Direct drivers for RDMA and Intel MPI version 5.1.
-
-> [!NOTE]
-> On these CentOS-based HPC images for non-SR-IOV enabled VMs, kernel updates are disabled in the **yum** configuration file. This is because the NetworkDirect Linux RDMA drivers are distributed as an RPM package, and driver updates might not work if the kernel is updated.
-
### Ubuntu-HPC VM images

For SR-IOV enabled [RDMA capable VMs](../../sizes-hpc.md#rdma-capable-instances), Ubuntu-HPC VM images versions 18.04 and 20.04 are suitable. These VM images come optimized and pre-loaded with the Mellanox OFED drivers for RDMA, Nvidia GPU drivers, GPU compute software stack (CUDA, NCCL), and various commonly used MPI libraries and scientific computing packages. Refer to the [VM size support matrix](#vm-sizes-supported-by-the-hpc-vm-images) above.

- The available or latest versions of the VM images can be listed with the following information using [CLI](/cli/azure/vm/image#az-vm-image-list) or [Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.ubuntu-hpc?tab=overview). See the sketch that follows.
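A hedged CLI example, assuming the publisher and offer names implied by the Marketplace URL above:

```azurecli-interactive
# List all published Ubuntu-HPC image versions
az vm image list --publisher Microsoft-DSVM --offer Ubuntu-HPC --all --output table
```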
vpn-gateway Ipsec Ike Policy Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/ipsec-ike-policy-howto.md
Previously updated : 04/28/2021 Last updated : 01/12/2023
-# Configure IPsec/IKE policy for S2S VPN or VNet-to-VNet connections: Azure portal
+# Configure IPsec/IKE policy for S2S VPN and VNet-to-VNet connections: Azure portal
This article walks you through the steps to configure IPsec/IKE policy for VPN Gateway Site-to-Site VPN or VNet-to-VNet connections using the Azure portal. The following sections help you create and configure an IPsec/IKE policy, and apply the policy to a new or existing connection.
This article walks you through the steps to configure IPsec/IKE policy for VPN G
The IPsec and IKE protocol standard supports a wide range of cryptographic algorithms in various combinations. Refer to [About cryptographic requirements and Azure VPN gateways](vpn-gateway-about-compliance-crypto.md) to see how this can help ensure cross-premises and VNet-to-VNet connectivity to satisfy your compliance or security requirements.
-This article provides instructions to create and configure an IPsec/IKE policy, and apply it to a new or existing VPN Gateway connection.
+This article provides instructions to create and configure an IPsec/IKE policy, and apply it to a new or existing VPN gateway connection.
### Considerations
This article provides instructions to create and configure an IPsec/IKE policy,
* ***VpnGw1~5 and VpnGw1AZ~5AZ***
* ***Standard*** and ***HighPerformance***
* You can only specify ***one*** policy combination for a given connection.
-* You must specify all algorithms and parameters for both IKE (Main Mode) and IPsec (Quick Mode). Partial policy specification is not allowed.
-* Consult with your VPN device vendor specifications to ensure the policy is supported on your on-premises VPN devices. S2S or VNet-to-VNet connections cannot establish if the policies are incompatible.
+* You must specify all algorithms and parameters for both IKE (Main Mode) and IPsec (Quick Mode). Partial policy specification isn't allowed.
+* Consult with your VPN device vendor specifications to ensure the policy is supported on your on-premises VPN devices. S2S or VNet-to-VNet connections can't establish if the policies are incompatible.
## <a name ="workflow"></a>Workflow This section outlines the workflow to create and update IPsec/IKE policy on a S2S VPN or VNet-to-VNet connection: 1. Create a virtual network and a VPN gateway.
-2. Create a local network gateway for cross premises connection, or another virtual network and gateway for VNet-to-VNet connection.
-3. Create a connection (IPsec or VNet2VNet).
-4. Configure/update/remove the IPsec/IKE policy on the connection resources.
+1. Create a local network gateway for cross premises connection, or another virtual network and gateway for VNet-to-VNet connection.
+1. Create a connection (IPsec or VNet2VNet).
+1. Configure/update/remove the IPsec/IKE policy on the connection resources.
The instructions in this article help you set up and configure IPsec/IKE policies as shown in the diagram:
-## <a name ="params"></a>Supported cryptographic algorithms & key strengths
+## Supported cryptographic algorithms & key strengths
-### <a name ="table1"></a>Algorithms and keys
+### Algorithms and keys
-The following table lists the supported cryptographic algorithms and key strengths configurable by the customers:
+The following table lists the supported configurable cryptographic algorithms and key strengths.
-| **IPsec/IKE** | **Options** |
-| | |
-| IKE Encryption | AES256, AES192, AES128, DES3, DES |
-| IKE Integrity | SHA384, SHA256, SHA1, MD5 |
-| DH Group | DHGroup24, ECP384, ECP256, DHGroup14, DHGroup2048, DHGroup2, DHGroup1, None |
-| IPsec Encryption | GCMAES256, GCMAES192, GCMAES128, AES256, AES192, AES128, DES3, DES, None |
-| IPsec Integrity | GCMASE256, GCMAES192, GCMAES128, SHA256, SHA1, MD5 |
-| PFS Group | PFS24, ECP384, ECP256, PFS2048, PFS2, PFS1, None |
-| QM SA Lifetime | (**Optional**: default values are used if not specified)<br>Seconds (integer; **min. 300**/default 27000 seconds)<br>KBytes (integer; **min. 1024**/default 102400000 KBytes) |
-| Traffic Selector | UsePolicyBasedTrafficSelectors** ($True/$False; **Optional**, default $False if not specified) |
-| DPD timeout | Seconds (integer: min. 9/max. 3600; default 45 seconds) |
-| | |
#### Important requirements
-* Your on-premises VPN device configuration must match or contain the following algorithms and parameters that you specify on the Azure IPsec/IKE policy:
- * IKE encryption algorithm (Main Mode / Phase 1)
- * IKE integrity algorithm (Main Mode / Phase 1)
- * DH Group (Main Mode / Phase 1)
- * IPsec encryption algorithm (Quick Mode / Phase 2)
- * IPsec integrity algorithm (Quick Mode / Phase 2)
- * PFS Group (Quick Mode / Phase 2)> * Traffic Selector (if UsePolicyBasedTrafficSelectors is used)
- * The SA lifetimes are local specifications only, do not need to match.
-
-* If GCMAES is used as for IPsec Encryption algorithm, you must select the same GCMAES algorithm and key length for IPsec Integrity; for example, using GCMAES128 for both.
-
-* In the [algorithms and keys table](#table1) above:
- * IKE corresponds to Main Mode or Phase 1
- * IPsec corresponds to Quick Mode or Phase 2
- * DH Group specifies the Diffie-Hellmen Group used in Main Mode or Phase 1
- * PFS Group specified the Diffie-Hellmen Group used in Quick Mode or Phase 2
-
-* IKE Main Mode SA lifetime is fixed at 28,800 seconds on the Azure VPN gateways.
-
-* If you set **UsePolicyBasedTrafficSelectors** to $True on a connection, it will configure the Azure VPN gateway to connect to policy-based VPN firewall on premises. If you enable PolicyBasedTrafficSelectors, you need to ensure your VPN device has the matching traffic selectors defined with all combinations of your on-premises network (local network gateway) prefixes to/from the Azure virtual network prefixes, instead of any-to-any. For example, if your on-premises network prefixes are 10.1.0.0/16 and 10.2.0.0/16, and your virtual network prefixes are 192.168.0.0/16 and 172.16.0.0/16, you need to specify the following traffic selectors:
- * 10.1.0.0/16 <====> 192.168.0.0/16
- * 10.1.0.0/16 <====> 172.16.0.0/16
- * 10.2.0.0/16 <====> 192.168.0.0/16
- * 10.2.0.0/16 <====> 172.16.0.0/16
-
- For more information regarding policy-based traffic selectors, see [Connect multiple on-premises policy-based VPN devices](vpn-gateway-connect-multiple-policybased-rm-ps.md).
-
-* DPD timeout - The default value is 45 seconds on Azure VPN gateways. Setting the timeout to shorter periods will cause IKE to rekey more aggressively, causing the connection to appear to be disconnected in some instances. This may not be desirable if your on-premises locations are farther away from the Azure region where the VPN gateway resides, or the physical link condition could incur packet loss. The general recommendation is to set the timeout between **30 to 45** seconds.
### Diffie-Hellman Groups

The following table lists the corresponding Diffie-Hellman Groups supported by the custom policy:
-| **Diffie-Hellman Group** | **DHGroup** | **PFSGroup** | **Key length** |
-| | | | |
-| 1 | DHGroup1 | PFS1 | 768-bit MODP |
-| 2 | DHGroup2 | PFS2 | 1024-bit MODP |
-| 14 | DHGroup14<br>DHGroup2048 | PFS2048 | 2048-bit MODP |
-| 19 | ECP256 | ECP256 | 256-bit ECP |
-| 20 | ECP384 | ECP384 | 384-bit ECP |
-| 24 | DHGroup24 | PFS24 | 2048-bit MODP |
Refer to [RFC3526](https://tools.ietf.org/html/rfc3526) and [RFC5114](https://tools.ietf.org/html/rfc5114) for more details.
-## <a name ="S2S"></a>S2S VPN with IPsec/IKE policy
+## <a name="crossprem"></a>Create S2S VPN connection with custom policy
This section walks you through the steps to create a Site-to-Site VPN connection with an IPsec/IKE policy. The following steps create the connection as shown in the following diagram:
-### <a name="createvnet1"></a>Step 1 - Create the virtual network, VPN gateway, and local network gateway
+### Step 1 - Create the virtual network, VPN gateway, and local network gateway for TestVNet1
-Create the following resources, as shown in the screenshots below. For steps, see [Create a Site-to-Site VPN connection](./tutorial-site-to-site-portal.md).
+Create the following resources using the following values. For steps, see [Create a Site-to-Site VPN connection](./tutorial-site-to-site-portal.md).
-* **Virtual network:** TestVNet1
+**Virtual network:** TestVNet1
- :::image type="content" source="./media/ipsec-ike-policy-howto/testvnet-1.png" alt-text="VNet":::
+* **Resource group:** TestRG1
+* **Name:** TestVNet1
+* **Region:** (US) East US
+* **IPv4 address space:** 10.1.0.0/16
+* **Subnet 1 name:** FrontEnd
+* **Subnet 1 address range:** 10.1.0.0/24
+* **Subnet 2 name:** BackEnd
+* **Subnet 2 address range:** 10.1.1.0/24
-* **VPN gateway:** VNet1GW
+**VPN gateway:** VNet1GW
- :::image type="content" source="./media/ipsec-ike-policy-howto/vnet-1-gateway.png" alt-text="Gateway":::
+* **Name:** VNet1GW
+* **Region:** East US
+* **Gateway type:** VPN
+* **VPN type:** Route-based
+* **SKU:** VpnGw2
+* **Generation:** Generation 2
+* **Virtual network:** TestVNet1
+* **Gateway subnet address range:** 10.1.255.0/27
+* **Public IP address type:** Basic or Standard
+* **Public IP address:** Create new
+* **Public IP address name:** VNet1GWpip
+* **Enable active-active mode:** Disabled
+* **Configure BGP:** Disabled
-* **Local network gateway:** Site6
+### Step 2 - Configure the local network gateway and connection resources
- :::image type="content" source="./media/ipsec-ike-policy-howto/lng-site-6.png" alt-text="Site":::
+Create the local network gateway resource.
-* **Connection:** VNet1 to Site6
+**Local network gateway:** Site6
- :::image type="content" source="./media/ipsec-ike-policy-howto/connection-site-6.png" alt-text="Connection":::
+* **Name:** Site6
+* **Resource Group:** TestRG1
+* **Location:** East US
+* **Local gateway IP address:** 5.4.3.2 (example value only - use the IP address of your on-premises device)
+* **Address space(s):** 10.61.0.0/16, 10.62.0.0/16 (example values only)
-### <a name="s2sconnection"></a>Step 2 - Configure IPsec/IKE policy on the S2S VPN connection
+**Connection:** VNet1 to Site6
-In this section, configure an IPsec/IKE policy with the following algorithms and parameters:
+From the virtual network gateway, add a connection to the local network gateway.
-* IKE: AES256, SHA384, DHGroup24, DPD timeout 45 seconds
-* IPsec: AES256, SHA256, PFS None, SA Lifetime 30000 seconds and 102400000KB
+* **Connection name:** VNet1toSite6
+* **Connection type:** IPsec
+* **Local network gateway:** Site6
+* **Shared key:** abc123 (example value - must match the on-premises device key used)
+* **IKE protocol:** IKEv2
+
+### Step 3 - Configure a custom IPsec/IKE policy on the S2S VPN connection
+
+In this section, configure a custom IPsec/IKE policy with the following algorithms and parameters (a CLI equivalent is sketched after this list):
-1. Navigate to the connection resource, **VNet1toSite6**, in the Azure portal. Select **Configuration** page and select **Custom** IPsec/IKE policy to show all configuration options. The screenshot below shows the configuration according to the list:
+* IKE Phase 1: AES256, SHA384, DHGroup24
+* IKE Phase 2 (IPsec): AES256, SHA256, PFS None
+* IPsec SA lifetime in KB: 102400000
+* IPsec SA lifetime in seconds: 30000
+* DPD timeout: 45 seconds
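For reference only (this article uses the portal), the same policy could also be applied with the Azure CLI. This sketch assumes the example connection and resource group names from this article:

```azurecli-interactive
# Apply the custom IPsec/IKE policy to the VNet1toSite6 connection.
az network vpn-connection ipsec-policy add \
  --resource-group TestRG1 \
  --connection-name VNet1toSite6 \
  --ike-encryption AES256 \
  --ike-integrity SHA384 \
  --dh-group DHGroup24 \
  --ipsec-encryption AES256 \
  --ipsec-integrity SHA256 \
  --pfs-group None \
  --sa-lifetime 30000 \
  --sa-max-size 102400000
```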
- :::image type="content" source="./media/ipsec-ike-policy-howto/policy-site-6.png" alt-text="Site 6":::
+1. Go to the **Connection** resource you created, **VNet1toSite6**. Open the **Configuration** page. Select **Custom** IPsec/IKE policy to show all configuration options. The following screenshot shows the configuration according to the list:
-1. If you use GCMAES for IPsec, you must use the same GCMAES algorithm and key length for both IPsec encryption and integrity. For example, the screenshot below specifies GCMAES128 for both IPsec encryption and IPsec integrity:
+ :::image type="content" source="./media/ipsec-ike-policy-howto/configuration-connection.png" alt-text="Screenshot shows the Site 6 connection configuration." lightbox="./media/ipsec-ike-policy-howto/configuration-connection.png":::
- :::image type="content" source="./media/ipsec-ike-policy-howto/gcmaes.png" alt-text="GCMAES for IPsec":::
+ If you use GCMAES for IPsec, you must use the same GCMAES algorithm and key length for both IPsec encryption and integrity. For example, the following screenshot specifies GCMAES128 for both IPsec encryption and IPsec integrity:
-1. You can optionally select **Enable** for the **Use policy based traffic selectors** option to enable Azure VPN gateway to connect to policy-based VPN devices on premises, as described above.
+ :::image type="content" source="./media/ipsec-ike-policy-howto/gcmaes.png" alt-text="Screenshot shows GCMAES for IPsec." lightbox="./media/ipsec-ike-policy-howto/gcmaes.png":::
- :::image type="content" source="./media/ipsec-ike-policy-howto/policy-based-selector.png" alt-text="Policy based traffic selector":::
+1. If you want to enable Azure VPN gateway to connect to policy-based on-premises VPN devices, you can select **Enable** for the **Use policy based traffic selectors** option.
1. Once all the options are selected, select **Save** to commit the changes to the connection resource. The policy will be enforced in about a minute.
In this section, configure an IPsec/IKE policy with the following algorithms and
> > * Once an IPsec/IKE policy is specified on a connection, the Azure VPN gateway will only send or accept the IPsec/IKE proposal with specified cryptographic algorithms and key strengths on that particular connection. Make sure your on-premises VPN device for the connection uses or accepts the exact policy combination, otherwise the S2S VPN tunnel will not establish. >
-> * **Policy-based traffic selector** and **DPD timeout** options can be specified with **Default** policy, without the custom IPsec/IKE policy as shown in the screenshot above.
+> * **Policy-based traffic selector** and **DPD timeout** options can be specified with **Default** policy, without the custom IPsec/IKE policy.
>
-## <a name ="vnet2vnet"></a>VNet-to-VNet with IPsec/IKE policy
+## Create VNet-to-VNet connection with custom policy
+
+The steps to create a VNet-to-VNet connection with an IPsec/IKE policy are similar to those for an S2S VPN connection. You must complete the previous sections in [Create an S2S VPN connection](#crossprem) to create and configure TestVNet1 and the VPN gateway.
+
+### Step 1 - Create the virtual network, VPN gateway, and local network gateway for TestVNet2
-The steps to create a VNet-to-VNet connection with an IPsec/IKE policy are similar to that of an S2S VPN connection.
+Use the steps in the [Create a VNet-to-VNet connection](vpn-gateway-howto-vnet-vnet-resource-manager-portal.md) article to create TestVNet2 and create a VNet-to-VNet connection to TestVNet1.
+Example values:
-1. Use the steps in the [Create a VNet-to-VNet connection](vpn-gateway-vnet-vnet-rm-ps.md) article to create your VNet-to-VNet connection.
+**Virtual network:** TestVNet2
-2. After completing the steps, you will see two VNet-to-VNet connections as shown in the screenshot below from the VNet2GW resource:
+* **Resource group:** TestRG2
+* **Name:** TestVNet2
+* **Region:** (US) West US
+* **IPv4 address space:** 10.2.0.0/16
+* **Subnet 1 name:** FrontEnd
+* **Subnet 1 address range:** 10.2.0.0/24
+* **Subnet 2 name:** BackEnd
+* **Subnet 2 address range:** 10.2.1.0/24
- :::image type="content" source="./media/ipsec-ike-policy-howto/vnet-vnet-connections.png" alt-text="VNet-to-VNet connections":::
+**VPN gateway:** VNet2GW
-3. Navigate to the connection resource, and go to the **Configuration** page on the portal. Select **Custom** on the **IPsec/IKE policy** to show the custom policy options. Select the cryptographic algorithms with the corresponding key lengths.
+* **Name:** VNet2GW
+* **Region:** West US
+* **Gateway type:** VPN
+* **VPN type:** Route-based
+* **SKU:** VpnGw2
+* **Generation:** Generation 2
+* **Virtual network:** TestVNet2
+* **Gateway subnet address range:** 10.2.255.0/27
+* **Public IP address type:** Basic or Standard
+* **Public IP address:** Create new
+* **Public IP address name:** VNet2GWpip
+* **Enable active-active mode:** Disabled
+* **Configure BGP:** Disabled
- The screenshot shows a different IPsec/IKE policy with the following algorithms and parameters:
- * IKE: AES128, SHA1, DHGroup14, DPD timeout 45 seconds
- * IPsec: GCMAES128, GCMAES128, PFS14, SA Lifetime 14400 seconds & 102400000KB
+### Step 2 - Configure the VNet-to-VNet connection
- :::image type="content" source="./media/ipsec-ike-policy-howto/vnet-vnet-policy.png" alt-text="Connection policy":::
+1. From the VNet1GW gateway, add a VNet-to-VNet connection to VNet2GW, **VNet1toVNet2**.
-4. Select **Save** to apply the policy changes on the connection resource.
+1. Next, from the VNet2GW, add a VNet-to-VNet connection to VNet1GW, **VNet2toVNet1**.
-5. Apply the same policy to the other connection resource, VNet2toVNet1. If you don't, the IPsec/IKE VPN tunnel will not connect due to policy mismatch.
+1. After you add the connections, you'll see the VNet-to-VNet connections as shown in the following screenshot from the VNet2GW resource:
+
+ :::image type="content" source="./media/ipsec-ike-policy-howto/vnet-connections.png" alt-text="Screenshot shows VNet-to-VNet connections." border="false" lightbox="./media/ipsec-ike-policy-howto/vnet-connections.png":::
+
+### Step 3 - Configure a custom IPsec/IKE policy on VNet1toVNet2
+
+1. From the **VNet1toVNet2** connection resource, go to the **Configuration** page.
+
+1. For **IPsec / IKE policy**, select **Custom** to show the custom policy options. Select the cryptographic algorithms with the corresponding key lengths. This policy doesn't need to match the previous policy you created for the VNet1toSite6 connection.
+
+ Example values:
+
+ * IKE Phase 1: AES128, SHA1, DHGroup14
+ * IKE Phase 2 (IPsec): GCMAES128, GCMAES128, PFS2048
+ * IPsec SA Lifetime in KB: 102400000
+ * IPsec SA lifetime in seconds: 14400
+ * DPD timeout: 45 seconds
+
+1. Select **Save** at the top of the page to apply the policy changes on the connection resource.
+
+### Step 4 - Configure a custom IPsec/IKE policy on VNet2toVNet1
+
+1. Apply the same policy to the VNet2toVNet1 connection. If you don't, the IPsec/IKE VPN tunnel won't connect due to policy mismatch.
> [!IMPORTANT] > Once an IPsec/IKE policy is specified on a connection, the Azure VPN gateway will only send or accept
The steps to create a VNet-to-VNet connection with an IPsec/IKE policy are simil
> connection. Make sure the IPsec policies for both connections are the same, otherwise the
> VNet-to-VNet connection will not establish.
-6. After completing these steps, the connection is established in a few minutes, and you will have the following network topology:
-
- :::image type="content" source="./media/ipsec-ike-policy-howto/policy-diagram.png" alt-text="IPsec/IKE policy diagram" border="false":::
-
-## <a name ="deletepolicy"></a>To remove custom IPsec/IKE policy from a connection
-
-1. To remove a custom policy from a connection, navigate to the connection resource and go to the **Configuration** page to see the current policy.
+1. After you complete these steps, the connection is established in a few minutes, and you'll have the following network topology.
-2. Select **Default** on the **IPsec/IKE policy** option. This will remove all custom policy previously specified on the connection, and restore the Default IPsec/IKE settings on this connection:
+ :::image type="content" source="./media/ipsec-ike-policy-howto/policy-diagram.png" alt-text="Diagram shows IPsec/IKE policy." border="false" lightbox="./media/ipsec-ike-policy-howto/policy-diagram.png":::
- :::image type="content" source="./media/ipsec-ike-policy-howto/delete-policy.png" alt-text="Delete policy":::
+## To remove custom policy from a connection
-3. Select **Save** to remove the custom policy and restore the default IPsec/IKE settings on the connection.
+1. To remove a custom policy from a connection, go to the connection resource.
+1. On the **Configuration** page, change the IPsec/IKE policy from **Custom** to **Default**. This will remove all custom policy previously specified on the connection, and restore the Default IPsec/IKE settings on this connection. A CLI equivalent is sketched after these steps.
+1. Select **Save** to remove the custom policy and restore the default IPsec/IKE settings on the connection.
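The CLI equivalent mentioned above is a sketch using this article's example names:

```azurecli-interactive
# Clear the custom IPsec/IKE policy from the connection, restoring the defaults
az network vpn-connection ipsec-policy clear \
  --resource-group TestRG1 \
  --connection-name VNet1toSite6
```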
## Next steps
vpn-gateway Point To Site Vpn Client Cert Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/point-to-site-vpn-client-cert-windows.md
You can use the same VPN client configuration package on each Windows client com
## <a name="azurevpn"></a>OpenVPN: Azure VPN Client steps
-This section applies to certificate authentication configurations that use the OpenVPN tunnel type. The following steps help you download, install, and configure the Azure VPN Client to connect to your VNet. To connect to your VNet, each client must have the following items:
+This section applies to certificate authentication configurations that use the OpenVPN tunnel type. The following steps help you download, install, and configure the Azure VPN Client to connect to your VNet. Each client computer requires the following items:
-* The Azure VPN Client software is installed.
-* Azure VPN Client profile is configured using the downloaded **azurevpnconfig.xml** configuration file.
-* The client certificate is installed locally.
+* The Azure VPN Client software must be installed on each client computer that you want to connect.
+* The Azure VPN Client profile must be configured using the downloaded **azurevpnconfig.xml** configuration file.
+* The client computer must have a client certificate that's installed locally.
### <a name="view-azurevpn"></a>View configuration files
vpn-gateway Vpn Gateway Howto Point To Site Resource Manager Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-howto-point-to-site-resource-manager-portal.md
Title: 'Connect to a VNet using P2S VPN & certificate authentication: portal'
+ Title: 'Configure P2S server configuration - certificate authentication: Azure portal'
-description: Learn how to connect Windows, macOS, and Linux clients securely to a VNet using VPN Gateway point-to-site connections and self-signed or CA issued certificates.
+description: Learn how to configure VPN Gateway server settings for P2S configurations - certificate authentication.
Previously updated : 11/07/2022 Last updated : 01/11/2023
-# Configure a point-to-site VPN connection using Azure certificate authentication: Azure portal
+# Configure server settings for P2S VPN Gateway connections - certificate authentication - Azure portal
-This article helps you securely connect individual clients running Windows, Linux, or macOS to an Azure VNet. point-to-site VPN connections are useful when you want to connect to your VNet from a remote location, such as when you're telecommuting from home or a conference. You can also use P2S instead of a Site-to-Site VPN when you have only a few clients that need to connect to a VNet. point-to-site connections don't require a VPN device or a public-facing IP address. P2S creates the VPN connection over either SSTP (Secure Socket Tunneling Protocol), or IKEv2. For more information about point-to-site VPN, see [About point-to-site VPN](point-to-site-about.md).
+This article helps you configure the necessary VPN Gateway point-to-site (P2S) server settings to let you securely connect individual clients running Windows, Linux, or macOS to an Azure VNet. P2S VPN connections are useful when you want to connect to your VNet from a remote location, such as when you're telecommuting from home or a conference. You can also use P2S instead of a site-to-site (S2S) VPN when you have only a few clients that need to connect to a virtual network (VNet). P2S connections don't require a VPN device or a public-facing IP address.
-
-For more information about point-to-site VPN, see [About point-to-site VPN](point-to-site-about.md). To create this configuration using the Azure PowerShell, see [Configure a point-to-site VPN using Azure PowerShell](vpn-gateway-howto-point-to-site-rm-ps.md).
+There are various different configuration options available for P2S. For more information about point-to-site VPN, see [About point-to-site VPN](point-to-site-about.md). This article helps you create a P2S configuration that uses **certificate authentication** and the Azure portal. To create this configuration using the Azure PowerShell, see the [Configure P2S - Certificate - PowerShell](vpn-gateway-howto-point-to-site-rm-ps.md) article. For RADIUS authentication, see the [P2S RADIUS](point-to-site-how-to-radius-ps.md) article. For Azure Active Directory authentication, see the [P2S Azure AD](openvpn-azure-ad-tenant.md) article.
[!INCLUDE [P2S basic architecture](../../includes/vpn-gateway-p2s-architecture.md)]
You can use the following values to create a test environment, or refer to these
## <a name="createvnet"></a>Create a VNet
-In this section, you create a virtual network.
+In this section, you create a VNet. Refer to the [Example values](#example) section for the suggested values to use for this configuration.
[!INCLUDE [About cross-premises addresses](../../includes/vpn-gateway-cross-premises.md)]
In this section, you create a virtual network.
In this step, you create the virtual network gateway for your VNet. Creating a gateway can often take 45 minutes or more, depending on the selected gateway SKU.

>[!NOTE]
->The Basic gateway SKU does not support IKEv2 or RADIUS authentication. If you plan on having Mac clients connect to your virtual network, do not use the Basic SKU.
+>The Basic gateway SKU does not support IKEv2 or RADIUS authentication. If you plan on having Mac clients connect to your VNet, do not use the Basic SKU.
> [!INCLUDE [About gateway subnets](../../includes/vpn-gateway-about-gwsubnet-portal-include.md)]
In this step, you create the virtual network gateway for your VNet. Creating a g
[!INCLUDE [Create a vpn gateway](../../includes/vpn-gateway-add-gw-portal-include.md)]

[!INCLUDE [Configure PIP settings](../../includes/vpn-gateway-add-gw-pip-portal-include.md)]
-You can see the deployment status on the Overview page for your gateway. After the gateway is created, you can view the IP address that has been assigned to it by looking at the virtual network in the portal. The gateway appears as a connected device.
+You can see the deployment status on the **Overview** page for your gateway. After the gateway is created, you can view the IP address that has been assigned to it by looking at the VNet in the portal. The gateway appears as a connected device.
[!INCLUDE [NSG warning](../../includes/vpn-gateway-no-nsg-include.md)]

## <a name="generatecert"></a>Generate certificates
-Certificates are used by Azure to authenticate clients connecting to a VNet over a point-to-site VPN connection. Once you obtain a root certificate, you [upload](#uploadfile) the public key information to Azure. The root certificate is then considered 'trusted' by Azure for connection over P2S to the virtual network. You also generate client certificates from the trusted root certificate, and then install them on each client computer. The client certificate is used to authenticate the client when it initiates a connection to the VNet.
+Certificates are used by Azure to authenticate clients connecting to a VNet over a point-to-site VPN connection. Once you obtain a root certificate, you [upload](#uploadfile) the public key information to Azure. The root certificate is then considered 'trusted' by Azure for connection over P2S to the VNet.
+
+You also generate client certificates from the trusted root certificate, and then install them on each client computer. The client certificate is used to authenticate the client when it initiates a connection to the VNet.
+
+The root certificate must be generated and extracted prior to creating your point-to-site configuration in the next sections.
### <a name="getcer"></a>Generate a root certificate
Certificates are used by Azure to authenticate clients connecting to a VNet over
[!INCLUDE [generate-client-cert](../../includes/vpn-gateway-p2s-clientcert-include.md)]
-## <a name="addresspool"></a>Add the VPN client address pool
+## <a name="addresspool"></a>Add the address pool
+
+The **Point-to-site configuration** page contains the configuration information that's needed for the P2S VPN. Once all the P2S settings have been configured and the gateway has been updated, the Point-to-site configuration page is used to view or change P2S VPN settings.
+
+1. Go to the gateway you created in the previous section.
+1. In the left pane, select **Point-to-site configuration**.
+1. Select **Configure now** to open the configuration page.
The client address pool is a range of private IP addresses that you specify. The clients that connect over a point-to-site VPN dynamically receive an IP address from this range. Use a private IP address range that doesn't overlap with the on-premises location that you connect from, or the VNet that you want to connect to. If you configure multiple protocols and SSTP is one of the protocols, then the configured address pool is split between the configured protocols equally. A CLI equivalent for setting the address pool is sketched at the end of this section.
-1. Once the virtual network gateway has been created, navigate to the **Settings** section of the virtual network gateway page. In **Settings**, select **Point-to-site configuration**. Select **Configure now** to open the configuration page.
+ :::image type="content" source="./media/vpn-gateway-howto-point-to-site-resource-manager-portal/configuration-address-pool.png" alt-text="Screenshot of Point-to-site configuration page - address pool." lightbox="./media/vpn-gateway-howto-point-to-site-resource-manager-portal/configuration-address-pool.png":::
- :::image type="content" source="./media/vpn-gateway-howto-point-to-site-resource-manager-portal/configure-now.png" alt-text="Point-to-site configuration page." lightbox="./media/vpn-gateway-howto-point-to-site-resource-manager-portal/configure-now.png":::
1. On the **Point-to-site configuration** page, in the **Address pool** box, add the private IP address range that you want to use. VPN clients dynamically receive an IP address from the range that you specify. The minimum subnet mask is 29 bit for active/passive and 28 bit for active/active configuration.
-1. Continue to the next section to configure authentication and tunnel types.
-## <a name="type"></a>Specify tunnel type and authentication type
+1. Next, configure tunnel and authentication type.
+
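The CLI equivalent mentioned earlier is sketched here; the gateway and resource group names follow this article's example values, and the address pool is a placeholder:

```azurecli-interactive
# Set the P2S client address pool and tunnel types on an existing gateway.
az network vnet-gateway update \
  --resource-group TestRG1 \
  --name VNet1GW \
  --address-prefixes 172.16.201.0/24 \
  --client-protocol IkeV2 OpenVPN
```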
+## <a name="type"></a>Specify tunnel and authentication type
+
+>[!NOTE]
+>If you don't see tunnel type or authentication type on the **Point-to-site configuration** page, your gateway is using the Basic SKU. The Basic SKU doesn't support IKEv2 or RADIUS authentication. If you want to use these settings, you need to delete and recreate the gateway using a different gateway SKU.
+>
+
+In this section, you specify the tunnel type and the authentication type. These settings can become complex, depending on the tunnel type you require and the VPN client software that will be used to make the connection from the user's operating system. The steps in this article will walk you through basic configuration settings and choices.
+
+You can select options that contain multiple tunnel types from the dropdown - such as *IKEv2 and OpenVPN(SSL)* or *IKEv2 and SSTP (SSL)*, however, only certain combinations of tunnel types and authentication types are supported. For example, Azure Active Directory authentication can only be used when you select *OpenVPN (SSL)* from the tunnel type dropdown, and not *IKEv2 and OpenVPN(SSL)*.
+
+Additionally, the tunnel type and the authentication type you choose impact the VPN client software that can be used to connect to Azure. Some VPN client software can only connect via IKEv2, others can only connect via OpenVPN. And some client software, while it supports a certain tunnel type, may not support the authentication type you choose.
+
+As you can tell, planning the tunnel type and authentication type is important when you have a variety of VPN clients connecting from different operating systems. Consider the following criteria when you choose your tunnel type in combination with **Azure certificate** authentication. Other authentication types have different considerations.
-In this section, you specify the tunnel type and the authentication type. If you don't see tunnel type or authentication type on the Point-to-site configuration page, your gateway is using the Basic SKU. The Basic SKU doesn't support IKEv2 or RADIUS authentication. If you want to use these settings, you need to delete and recreate the gateway using a different gateway SKU.
+* **Windows**:
+
+ * Windows computers connecting via the native VPN client already installed in the operating system will try IKEv2 first and, if that doesn't connect, they fall back to SSTP (if you selected both IKEv2 and SSTP from the tunnel type dropdown).
+ * If you select the OpenVPN tunnel type, you can connect using an OpenVPN Client or the Azure VPN Client.
+ * The Azure VPN Client can support additional [optional configuration settings](azure-vpn-client-optional-configurations.md) such as custom routes and forced tunneling.
+
+* **macOS and iOS**:
+
+ * The native VPN client for iOS and macOS can only use the IKEv2 tunnel type to connect to Azure.
+ * The Azure VPN Client isn't supported for certificate authentication at this time, even if you select the OpenVPN tunnel type.
+ * If you want to use the OpenVPN tunnel type with certificate authentication, you can use an OpenVPN client.
+ * For macOS, you can use the Azure VPN Client with the OpenVPN tunnel type and Azure AD authentication (not certificate authentication).
+
+* **Android and Linux**:
+
+ * The strongSwan client on Android and Linux can use only the IKEv2 tunnel type to connect. If you want to use the OpenVPN tunnel type, use a different VPN client.
### <a name="tunneltype"></a>Tunnel type
-On the **Point-to-site configuration** page, select the **Tunnel type**. When selecting the tunnel type, note the following:
+On the **Point-to-site configuration** page, select the **Tunnel type**. For this exercise, from the dropdown, select **IKEv2 and OpenVPN(SSL)**.
-* The strongSwan client on Android and Linux and the native IKEv2 VPN client on iOS and macOS will use only the IKEv2 tunnel type to connect.
-* Windows clients will try IKEv2 first and if that doesn't connect, they fall back to SSTP.
-* You can use the OpenVPN client to connect to the OpenVPN tunnel type.
### <a name="authenticationtype"></a>Authentication type
-For **Authentication type**, select **Azure certificate**.
+For this exercise, select **Azure certificate** for the authentication type. If you're interested in other authentication types, see the articles for [Azure AD](openvpn-azure-ad-tenant.md) and [RADIUS](point-to-site-how-to-radius-ps.md).
- :::image type="content" source="./media/vpn-gateway-howto-point-to-site-resource-manager-portal/authentication-type.png" alt-text="Screenshot of authentication type with Azure certificate selected." lightbox="./media/vpn-gateway-howto-point-to-site-resource-manager-portal/authentication-type.png" :::
## <a name="uploadfile"></a>Upload root certificate public key information
In this section, you upload public root certificate data to Azure. Once the publ
* **Name** the certificate.

  :::image type="content" source="./media/vpn-gateway-howto-point-to-site-resource-manager-portal/root-certificate.png" alt-text="Screenshot of certificate data field." lightbox="./media/vpn-gateway-howto-point-to-site-resource-manager-portal/root-certificate.png":::
+1. Additional routes aren't necessary for this exercise. For more information about the custom routing feature, see [Advertise custom routes](vpn-gateway-p2s-advertise-custom-routes.md).
1. Select **Save** at the top of the page to save all of the configuration settings.

   :::image type="content" source="./media/vpn-gateway-howto-point-to-site-resource-manager-portal/save-configuration.png" alt-text="Screenshot of P2S configuration with Save selected." lightbox="./media/vpn-gateway-howto-point-to-site-resource-manager-portal/save-configuration.png" :::

## <a name="installclientcert"></a>Install exported client certificate
-If you want to create a P2S connection from a client computer other than the one you used to generate the client certificates, you need to install a client certificate. When installing a client certificate, you need the password that was created when the client certificate was exported.
+Each VPN client that wants to connect needs to have a client certificate. When you generate a client certificate, the computer you used will typically automatically install the client certificate for you. If you want to create a P2S connection from another computer, you need to install a client certificate on the computer that wants to connect. When installing a client certificate, you need the password that was created when the client certificate was exported.
Make sure the client certificate was exported as a .pfx along with the entire certificate chain (which is the default). Otherwise, the root certificate information isn't present on the client computer and the client won't be able to authenticate properly. For install steps, see [Install a client certificate](point-to-site-how-to-vpn-client-install-azure-cert.md).
-## <a name="clientconfig"></a>Configure settings for VPN clients
-
-To connect to the virtual network gateway using P2S, each computer can use the VPN client that is natively installed as a part of the operating system. For example, when you go to VPN settings on your Windows computer, you can add VPN connections without installing a separate VPN client. You configure each VPN client by using a client configuration package. The client configuration package contains settings that are specific to the VPN gateway that you created.
-
-For steps to generate and install VPN client configuration files, see [Configure point-to-site VPN clients - certificate authentication](point-to-site-vpn-client-cert-windows.md).
+## <a name="clientconfig"></a>Configure VPN clients and connect to Azure
-## <a name="connect"></a>Connect to Azure
+Each VPN client is configured using the files in a VPN client profile configuration package that you generate and download. The configuration package contains settings that are specific to the VPN gateway that you created. If you make changes to the gateway, such as changing a tunnel type, certificate, or authentication type, you'll need to generate another VPN client profile configuration package and install it on each client. Otherwise, your VPN clients may not be able to connect.
-### To connect from a Windows VPN client
+For steps to generate a VPN client profile configuration package, configure your VPN clients, and connect to Azure, see the following articles (a CLI sketch for generating the package follows this list):
--
-### To connect from a Mac VPN client
-
-From the Network dialog box, locate the client profile that you want to use, specify the settings from the [VpnSettings.xml](point-to-site-vpn-client-cert-mac.md), and then select **Connect**. For detailed instructions, see [Configure point-to-site VPN clients - certificate authentication - macOS](point-to-site-vpn-client-cert-mac.md).
-
-If you're having trouble connecting, verify that the virtual network gateway isn't using a Basic SKU. The Basic SKU isn't supported for Mac clients.
-
- :::image type="content" source="./media/point-to-site-vpn-client-cert-mac/mac/select-connect.png" alt-text="Screenshot shows connect button." lightbox="./media/point-to-site-vpn-client-cert-mac/mac/select-connect.png":::
+* [Windows](point-to-site-vpn-client-cert-windows.md)
+* [macOS-iOS](point-to-site-vpn-client-cert-mac.md)
+* [Linux](point-to-site-vpn-client-cert-linux.md)
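The CLI sketch mentioned above, assuming this article's example gateway and resource group names:

```azurecli-interactive
# Generate the VPN client profile configuration package and return its download URL
az network vnet-gateway vpn-client generate \
  --resource-group TestRG1 \
  --name VNet1GW
```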
## <a name="verify"></a>To verify your connection These instructions apply to Windows clients. 1. To verify that your VPN connection is active, open an elevated command prompt, and run *ipconfig/all*.
-2. View the results. Notice that the IP address you received is one of the addresses within the point-to-site VPN Client Address Pool that you specified in your configuration. The results are similar to this example:
+1. View the results. Notice that the IP address you received is one of the addresses within the point-to-site VPN Client Address Pool that you specified in your configuration. The results are similar to this example:
```
PPP adapter VNet1:
To remove a trusted root certificate:
## <a name="revokeclient"></a>To revoke a client certificate
-You can revoke client certificates. The certificate revocation list allows you to selectively deny point-to-site connectivity based on individual client certificates. This is different than removing a trusted root certificate. If you remove a trusted root certificate .cer from Azure, it revokes the access for all client certificates generated/signed by the revoked root certificate. Revoking a client certificate, rather than the root certificate, allows the other certificates that were generated from the root certificate to continue to be used for authentication.
+You can revoke client certificates. The certificate revocation list allows you to selectively deny P2S connectivity based on individual client certificates. This is different than removing a trusted root certificate. If you remove a trusted root certificate .cer from Azure, it revokes the access for all client certificates generated/signed by the revoked root certificate. Revoking a client certificate, rather than the root certificate, allows the other certificates that were generated from the root certificate to continue to be used for authentication.
The common practice is to use the root certificate to manage access at team or organization levels, while using revoked client certificates for fine-grained access control on individual users.
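As a hedged sketch, a thumbprint can also be added to the revocation list with the Azure CLI (gateway, resource group, and certificate names are placeholders):

```azurecli-interactive
# Add a client certificate thumbprint to the gateway's revocation list
az network vnet-gateway revoked-cert create \
  --resource-group TestRG1 \
  --gateway-name VNet1GW \
  --name MyRevokedClientCert \
  --thumbprint <certificate-thumbprint>
```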
You can revoke a client certificate by adding the thumbprint to the revocation l
For frequently asked questions, see the [FAQ](vpn-gateway-vpn-faq.md#P2S).

## Next steps
-Once your connection is complete, you can add virtual machines to your virtual networks. For more information, see [Virtual Machines](../index.yml). To understand more about networking and virtual machines, see [Azure and Linux VM network overview](../virtual-network/network-overview.md).
+
+Once your connection is complete, you can add virtual machines to your VNets. For more information, see [Virtual Machines](../index.yml). To understand more about networking and virtual machines, see [Azure and Linux VM network overview](../virtual-network/network-overview.md).
For P2S troubleshooting information, [Troubleshooting Azure point-to-site connections](vpn-gateway-troubleshoot-vpn-point-to-site-connection-problems.md).
vpn-gateway Vpn Gateway Ipsecikepolicy Rm Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-ipsecikepolicy-rm-powershell.md
The following table lists the supported cryptographic algorithms and key strengt
| **IPsec/IKEv2** | **Options** |
| --- | --- |
-| IKEv2 Encryption | AES256, AES192, AES128, DES3, DES
-| IKEv2 Integrity | SHA384, SHA256, SHA1, MD5 |
+| IKEv2 Encryption | GCMAES256, GCMAES128, AES256, AES192, AES128, DES3, DES |
+| IKEv2 Integrity | GCMAES256, GCMAES128, SHA384, SHA256, SHA1, MD5 |
| DH Group | DHGroup24, ECP384, ECP256, DHGroup14, DHGroup2048, DHGroup2, DHGroup1, None |
| IPsec Encryption | GCMAES256, GCMAES192, GCMAES128, AES256, AES192, AES128, DES3, DES, None |
| IPsec Integrity | GCMAES256, GCMAES192, GCMAES128, SHA256, SHA1, MD5 |
| PFS Group | PFS24, ECP384, ECP256, PFS2048, PFS2, PFS1, None |
| QM SA Lifetime | (**Optional**: default values are used if not specified)<br>Seconds (integer; **min. 300**/default 27000 seconds)<br>KBytes (integer; **min. 1024**/default 102400000 KBytes) |
| Traffic Selector | UsePolicyBasedTrafficSelectors ($True/$False; **Optional**, default $False if not specified) |
-| | |
+| DPD timeout | Seconds (integer: min. 9/max. 3600; default 45 seconds) |
+
> [!IMPORTANT]
> 1. **Your on-premises VPN device configuration must match or contain the following algorithms and parameters that you specify on the Azure IPsec/IKE policy:**
vpn-gateway Vpn Gateway Vpn Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-vpn-faq.md
Previously updated : 06/10/2022 Last updated : 01/10/2023
For non-zone-redundant and non-zonal gateways (gateway SKUs that do *not* have *
Zone-redundant and zonal gateways (gateway SKUs that have *AZ* in the name) both rely on a *Standard SKU* Azure public IP resource. Azure Standard SKU public IP resources must use a static allocation method.
-For non-zone-redundant and non-zonal gateways (gateway SKUs that do *not* have *AZ* in the name), only dynamic IP address assignment is supported. However, this doesn't mean that the IP address changes after it has been assigned to your VPN gateway. The only time the VPN gateway IP address changes is when the gateway is deleted and then re-created. The VPN gateway public IP address doesn't change when you resize, reset, or complete other internal maintenance and upgrades of your VPN gateway.
+For non-zone-redundant and non-zonal gateways (gateway SKUs that do *not* have *AZ* in the name), dynamic IP address assignment is supported. When you use a dynamic IP address, the IP address doesn't change after it has been assigned to your VPN gateway. The only time the VPN gateway IP address changes is when the gateway is deleted and then re-created. The VPN gateway public IP address doesn't change when you resize, reset, or complete other internal maintenance and upgrades of your VPN gateway.
### How does my VPN tunnel get authenticated?